Dataset schema: markdown (string, lengths 0-1.02M), code (string, lengths 0-832k), output (string, lengths 0-1.02M), license (string, lengths 3-36), path (string, lengths 6-265), repo_name (string, lengths 6-127).
Here we see clearly that Pclass contributes to a person's chance of survival, especially for passengers in class 1. We will create another Pclass plot below.
grid = sns.FacetGrid(train_df, col='Survived', row='Pclass', size=2.2, aspect=1.6) grid.map(plt.hist, 'Age', alpha=.5, bins=20) grid.add_legend();
/opt/conda/lib/python3.6/site-packages/seaborn/axisgrid.py:230: UserWarning: The `size` paramter has been renamed to `height`; please update your code. warnings.warn(msg, UserWarning)
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
The plot above confirms our assumption about Pclass 1, but we can also spot a high probability that a person in Pclass 3 will not survive. **5. SibSp and Parch:** SibSp and Parch make more sense as a combined feature that shows the total number of relatives a person has on the Titanic. I will create it below, along with a feature that shows whether someone is not alone.
data = [train_df, test_df]
for dataset in data:
    dataset['relatives'] = dataset['SibSp'] + dataset['Parch']
    dataset.loc[dataset['relatives'] > 0, 'not_alone'] = 0
    dataset.loc[dataset['relatives'] == 0, 'not_alone'] = 1
    dataset['not_alone'] = dataset['not_alone'].astype(int)
train_df['not_alone'].value_counts()

axes = sns.factorplot('relatives', 'Survived', data=train_df, aspect=2.5)
/opt/conda/lib/python3.6/site-packages/seaborn/categorical.py:3666: UserWarning: The `factorplot` function has been renamed to `catplot`. The original name will be removed in a future release. Please update your code. Note that the default `kind` in `factorplot` (`'point'`) has changed `'strip'` in `catplot`. warnings.warn(msg) /opt/conda/lib/python3.6/site-packages/scipy/stats/stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result. return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
Here we can see that you had a high probability of survival with 1 to 3 relatives, but a lower one if you had fewer than 1 or more than 3 (except for some cases with 6 relatives). **Data Preprocessing** First, I will drop 'PassengerId' from the train set, because it does not contribute to a person's survival probability. I will not drop it from the test set, since it is required there for the submission.
train_df = train_df.drop(['PassengerId'], axis=1)
_____no_output_____
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
Missing Data: Cabin: As a reminder, we have to deal with missing values in Cabin (687), Embarked (2) and Age (177). At first I thought we would have to delete the 'Cabin' variable, but then I found something interesting. A cabin number looks like 'C123' and the **letter refers to the deck**. Therefore we're going to extract the letter and create a new feature that contains a person's deck. Afterwards we will convert the feature into a numeric variable. The missing values will be converted to zero. In the picture below you can see the actual decks of the Titanic, ranging from A to G.![titanic decks](http://upload.wikimedia.org/wikipedia/commons/thumb/8/84/Titanic_cutaway_diagram.png/687px-Titanic_cutaway_diagram.png)
import re

deck = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5, "F": 6, "G": 7, "U": 8}
data = [train_df, test_df]
for dataset in data:
    dataset['Cabin'] = dataset['Cabin'].fillna("U0")
    dataset['Deck'] = dataset['Cabin'].map(lambda x: re.compile("([a-zA-Z]+)").search(x).group())
    dataset['Deck'] = dataset['Deck'].map(deck)
    dataset['Deck'] = dataset['Deck'].fillna(0)
    dataset['Deck'] = dataset['Deck'].astype(int)
# we can now drop the cabin feature
train_df = train_df.drop(['Cabin'], axis=1)
test_df = test_df.drop(['Cabin'], axis=1)
_____no_output_____
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
Age: Now we can tackle the issue of the Age feature's missing values. I will fill them with random numbers drawn from the interval defined by the mean age plus or minus one standard deviation, generating as many values as there are missing entries.
data = [train_df, test_df]
for dataset in data:
    # use the training set's statistics for both datasets
    mean = train_df["Age"].mean()
    std = train_df["Age"].std()
    is_null = dataset["Age"].isnull().sum()
    # compute random ages within one standard deviation of the mean
    rand_age = np.random.randint(mean - std, mean + std, size=is_null)
    # fill NaN values in the Age column with the generated random values
    age_slice = dataset["Age"].copy()
    age_slice[np.isnan(age_slice)] = rand_age
    dataset["Age"] = age_slice
    dataset["Age"] = dataset["Age"].astype(int)
train_df["Age"].isnull().sum()
_____no_output_____
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
Embarked:Since the Embarked feature has only 2 missing values, we will just fill these with the most common one.
train_df['Embarked'].describe() common_value = 'S' data = [train_df, test_df] for dataset in data: dataset['Embarked'] = dataset['Embarked'].fillna(common_value)
_____no_output_____
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
Converting Features:
train_df.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 891 entries, 0 to 890 Data columns (total 13 columns): Survived 891 non-null int64 Pclass 891 non-null int64 Name 891 non-null object Sex 891 non-null object Age 891 non-null int64 SibSp 891 non-null int64 Parch 891 non-null int64 Ticket 891 non-null object Fare 891 non-null float64 Embarked 891 non-null object relatives 891 non-null int64 not_alone 891 non-null int64 Deck 891 non-null int64 dtypes: float64(1), int64(8), object(4) memory usage: 90.6+ KB
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
Above you can see that 'Fare' is a float and that we have to deal with four categorical features: Name, Sex, Ticket and Embarked. Let's investigate and transform them one after another. Fare: Converting "Fare" from float to int64, using the "astype()" function pandas provides:
data = [train_df, test_df] for dataset in data: dataset['Fare'] = dataset['Fare'].fillna(0) dataset['Fare'] = dataset['Fare'].astype(int)
_____no_output_____
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
Name:We will use the Name feature to extract the Titles from the Name, so that we can build a new feature out of that.
data = [train_df, test_df]
titles = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Rare": 5}

for dataset in data:
    # extract titles
    dataset['Title'] = dataset.Name.str.extract(' ([A-Za-z]+)\.', expand=False)
    # replace uncommon titles with a more common title or group them as Rare
    dataset['Title'] = dataset['Title'].replace(['Lady', 'Countess', 'Capt', 'Col', 'Don', 'Dr',
                                                 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare')
    dataset['Title'] = dataset['Title'].replace('Mlle', 'Miss')
    dataset['Title'] = dataset['Title'].replace('Ms', 'Miss')
    dataset['Title'] = dataset['Title'].replace('Mme', 'Mrs')
    # convert titles into numbers
    dataset['Title'] = dataset['Title'].map(titles)
    # fill NaN with 0 to be safe
    dataset['Title'] = dataset['Title'].fillna(0)
train_df = train_df.drop(['Name'], axis=1)
test_df = test_df.drop(['Name'], axis=1)
_____no_output_____
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
Sex:Convert 'Sex' feature into numeric.
genders = {"male": 0, "female": 1} data = [train_df, test_df] for dataset in data: dataset['Sex'] = dataset['Sex'].map(genders)
_____no_output_____
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
Ticket:
train_df['Ticket'].describe()
_____no_output_____
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
Since the Ticket attribute has 681 unique tickets, it will be a bit tricky to convert them into useful categories. So we will drop it from the dataset.
train_df = train_df.drop(['Ticket'], axis=1) test_df = test_df.drop(['Ticket'], axis=1)
_____no_output_____
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
Embarked:Convert 'Embarked' feature into numeric.
ports = {"S": 0, "C": 1, "Q": 2} data = [train_df, test_df] for dataset in data: dataset['Embarked'] = dataset['Embarked'].map(ports)
_____no_output_____
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
Creating Categories: We will now create categories within the following features. Age: Now we need to convert the 'Age' feature. First we will convert it from float into integer. Then we will categorize every age into a group. Note that it is important to pay attention to how you form these groups: you don't want, for example, 80% of your data to fall into group 1 (a quick distribution check follows below).
data = [train_df, test_df]
for dataset in data:
    dataset['Age'] = dataset['Age'].astype(int)
    dataset.loc[ dataset['Age'] <= 11, 'Age'] = 0
    dataset.loc[(dataset['Age'] > 11) & (dataset['Age'] <= 18), 'Age'] = 1
    dataset.loc[(dataset['Age'] > 18) & (dataset['Age'] <= 22), 'Age'] = 2
    dataset.loc[(dataset['Age'] > 22) & (dataset['Age'] <= 27), 'Age'] = 3
    dataset.loc[(dataset['Age'] > 27) & (dataset['Age'] <= 33), 'Age'] = 4
    dataset.loc[(dataset['Age'] > 33) & (dataset['Age'] <= 40), 'Age'] = 5
    dataset.loc[(dataset['Age'] > 40) & (dataset['Age'] <= 66), 'Age'] = 6
    dataset.loc[ dataset['Age'] > 66, 'Age'] = 6  # note: ages over 66 end up in the same group 6 as 41-66

# let's see how it's distributed
train_df['Age'].value_counts()
_____no_output_____
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
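To check the point above about group balance, a quick sketch (using only objects already defined in this notebook) that inspects how the new age groups are distributed:

```python
# Share of passengers per age group; no single group should dominate.
print(train_df['Age'].value_counts(normalize=True).sort_index())

# pd.qcut could also be applied to the original ages (before binning)
# to derive groups of roughly equal size instead of hand-picked edges.
```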
Fare: For the 'Fare' feature we need to do the same as for the 'Age' feature. But it isn't quite as easy, because if we cut the range of fare values into a few equally wide categories, 80% of the values would fall into the first one. Fortunately, we can use the pandas "qcut()" function to see how we can form the categories (a short qcut sketch follows below).
train_df.head(10)

data = [train_df, test_df]
for dataset in data:
    dataset.loc[ dataset['Fare'] <= 7.91, 'Fare'] = 0
    dataset.loc[(dataset['Fare'] > 7.91) & (dataset['Fare'] <= 14.454), 'Fare'] = 1
    dataset.loc[(dataset['Fare'] > 14.454) & (dataset['Fare'] <= 31), 'Fare'] = 2
    dataset.loc[(dataset['Fare'] > 31) & (dataset['Fare'] <= 99), 'Fare'] = 3
    dataset.loc[(dataset['Fare'] > 99) & (dataset['Fare'] <= 250), 'Fare'] = 4
    dataset.loc[ dataset['Fare'] > 250, 'Fare'] = 5
    dataset['Fare'] = dataset['Fare'].astype(int)
_____no_output_____
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
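A short sketch of how cut points like the 7.91 / 14.454 / 31 used above can be obtained with pandas' `qcut` (quantile-based binning). Note it has to be run on the original fare values, i.e. before the column is overwritten with category labels:

```python
# Quartile-based bins; the reported bin edges are natural cut points
# for the manual binning above (the top bin can then be split further).
fare_bins = pd.qcut(train_df['Fare'], q=4)
print(fare_bins.value_counts().sort_index())
```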
Creating new Features: I will add two new features to the dataset, computed from existing features. 1. Age times Class
data = [train_df, test_df] for dataset in data: dataset['Age_Class']= dataset['Age']* dataset['Pclass']
_____no_output_____
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
2. Fare per Person
for dataset in data: dataset['Fare_Per_Person'] = dataset['Fare']/(dataset['relatives']+1) dataset['Fare_Per_Person'] = dataset['Fare_Per_Person'].astype(int) # Let's take a last look at the training set, before we start training the models. train_df.head(20)
_____no_output_____
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
**Building Machine Learning Models**
# classifier imports (these most likely appeared in an earlier cell of the original notebook)
from sklearn import linear_model
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression, Perceptron
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

X_train = train_df.drop("Survived", axis=1)
Y_train = train_df["Survived"]
X_test = test_df.drop("PassengerId", axis=1).copy()

# stochastic gradient descent (SGD) learning
sgd = linear_model.SGDClassifier(max_iter=5, tol=None)
sgd.fit(X_train, Y_train)
Y_pred = sgd.predict(X_test)
sgd.score(X_train, Y_train)
acc_sgd = round(sgd.score(X_train, Y_train) * 100, 2)
print(round(acc_sgd, 2), "%")

# Random Forest
random_forest = RandomForestClassifier(n_estimators=100)
random_forest.fit(X_train, Y_train)
Y_prediction = random_forest.predict(X_test)
random_forest.score(X_train, Y_train)
acc_random_forest = round(random_forest.score(X_train, Y_train) * 100, 2)
print(round(acc_random_forest, 2), "%")

# Logistic Regression
logreg = LogisticRegression()
logreg.fit(X_train, Y_train)
Y_pred = logreg.predict(X_test)
acc_log = round(logreg.score(X_train, Y_train) * 100, 2)
print(round(acc_log, 2), "%")

# KNN
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, Y_train)
Y_pred = knn.predict(X_test)
acc_knn = round(knn.score(X_train, Y_train) * 100, 2)
print(round(acc_knn, 2), "%")

# Gaussian Naive Bayes
gaussian = GaussianNB()
gaussian.fit(X_train, Y_train)
Y_pred = gaussian.predict(X_test)
acc_gaussian = round(gaussian.score(X_train, Y_train) * 100, 2)
print(round(acc_gaussian, 2), "%")

# Perceptron
perceptron = Perceptron(max_iter=5)
perceptron.fit(X_train, Y_train)
Y_pred = perceptron.predict(X_test)
acc_perceptron = round(perceptron.score(X_train, Y_train) * 100, 2)
print(round(acc_perceptron, 2), "%")

# Linear SVC
linear_svc = LinearSVC()
linear_svc.fit(X_train, Y_train)
Y_pred = linear_svc.predict(X_test)
acc_linear_svc = round(linear_svc.score(X_train, Y_train) * 100, 2)
print(round(acc_linear_svc, 2), "%")

# Decision Tree
decision_tree = DecisionTreeClassifier()
decision_tree.fit(X_train, Y_train)
Y_pred = decision_tree.predict(X_test)
acc_decision_tree = round(decision_tree.score(X_train, Y_train) * 100, 2)
print(round(acc_decision_tree, 2), "%")
92.48 %
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
Which is the best Model ?
results = pd.DataFrame({
    'Model': ['Support Vector Machines', 'KNN', 'Logistic Regression',
              'Random Forest', 'Naive Bayes', 'Perceptron',
              'Stochastic Gradient Descent', 'Decision Tree'],
    'Score': [acc_linear_svc, acc_knn, acc_log,
              acc_random_forest, acc_gaussian, acc_perceptron,
              acc_sgd, acc_decision_tree]})
result_df = results.sort_values(by='Score', ascending=False)
result_df = result_df.set_index('Score')
result_df.head(9)
_____no_output_____
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
As we can see, the Random Forest classifier takes first place. But first, let us check how random forest performs when we use cross validation. K-Fold Cross Validation: K-Fold Cross Validation randomly splits the training data into **K subsets called folds**. Let's imagine we split our data into 4 folds (K = 4). Our random forest model would be trained and evaluated 4 times, using a different fold for evaluation every time, while being trained on the remaining 3 folds. The image below shows the process, using 4 folds (K = 4). Every row represents one training + evaluation process. In the first row, the model gets trained on the first, second and third subset and evaluated on the fourth. In the second row, the model gets trained on the second, third and fourth subset and evaluated on the first. K-Fold Cross Validation repeats this process until every fold has acted once as the evaluation fold.![cross-v.](https://img3.picload.org/image/ddwrppcl/bildschirmfoto2018-02-02um10.0.png) The result of our K-Fold Cross Validation example would be an array that contains 4 different scores. We then compute the mean and the standard deviation of these scores. The code below performs K-Fold Cross Validation on our random forest model, using 10 folds (K = 10), and therefore outputs an array with 10 different scores.
from sklearn.model_selection import cross_val_score

rf = RandomForestClassifier(n_estimators=100)
scores = cross_val_score(rf, X_train, Y_train, cv=10, scoring="accuracy")
print("Scores:", scores)
print("Mean:", scores.mean())
print("Standard Deviation:", scores.std())
Scores: [0.77777778 0.83333333 0.73033708 0.84269663 0.87640449 0.82022472 0.80898876 0.7752809 0.85393258 0.88636364] Mean: 0.8205339916014074 Standard Deviation: 0.04622805870202012
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
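To make the 4-fold description above concrete, a minimal sketch (assuming scikit-learn's `KFold`) that shows which rows would be used for training and evaluation in each of the K = 4 rounds:

```python
from sklearn.model_selection import KFold

kf = KFold(n_splits=4, shuffle=True, random_state=1)
for fold, (train_idx, val_idx) in enumerate(kf.split(X_train), start=1):
    # each row of the diagram above corresponds to one (train, validation) split
    print(f"Round {fold}: train on {len(train_idx)} rows, evaluate on {len(val_idx)} rows")
```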
This looks much more realistic than before. Our model has an average accuracy of 82% with a standard deviation of 4%. The standard deviation shows us how precise the estimates are. In our case it means that the accuracy of our model can differ by roughly **+/-** 4%. I think the accuracy is still really good, and since random forest is an easy-to-use model, we will try to increase its performance even further in the following section. **Random Forest** What is Random Forest? Random Forest is a supervised learning algorithm. As you can already tell from its name, it creates a forest and makes it somehow random. The "forest" it builds is an ensemble of Decision Trees, most of the time trained with the "bagging" method. The general idea of the bagging method is that a combination of learning models increases the overall result. To put it simply: random forest builds multiple decision trees and merges them together to get a more accurate and stable prediction. One big advantage of random forest is that it can be used for both classification and regression problems, which form the majority of current machine learning systems. With a few exceptions, a random-forest classifier has all the hyperparameters of a decision-tree classifier and also all the hyperparameters of a bagging classifier, to control the ensemble itself. The random-forest algorithm brings extra randomness into the model when it is growing the trees. Instead of searching for the best feature while splitting a node, it searches for the best feature among a random subset of features. This process creates a wide diversity, which generally results in a better model. Therefore, when you are growing a tree in a random forest, only a random subset of the features is considered for splitting a node. You can make trees even more random by additionally using random thresholds for each feature rather than searching for the best possible thresholds (as a normal decision tree does). Below you can see what a random forest with two trees looks like:![picture](https://img3.picload.org/image/dagpgdpw/bildschirmfoto-2018-02-06-um-1.png) Feature Importance: Another great quality of random forest is that it makes it very easy to measure the relative importance of each feature. Sklearn measures a feature's importance by looking at how much the tree nodes that use that feature reduce impurity on average (across all trees in the forest). It computes this score automatically for each feature after training and scales the results so that the sum of all importances is equal to 1. We will access this below:
importances = pd.DataFrame({'feature': X_train.columns,
                            'importance': np.round(random_forest.feature_importances_, 3)})
importances = importances.sort_values('importance', ascending=False).set_index('feature')
importances.head(15)
importances.plot.bar()
_____no_output_____
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
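As a side note to the paragraph above, the two sources of extra randomness map directly onto scikit-learn estimators. A minimal sketch (parameter values here are illustrative, not tuned):

```python
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier

# Random feature subsets at each split: controlled by max_features.
rf_demo = RandomForestClassifier(n_estimators=100, max_features="sqrt", random_state=1)

# Random split thresholds on top of random feature subsets: Extra-Trees.
et_demo = ExtraTreesClassifier(n_estimators=100, random_state=1)

for model in (rf_demo, et_demo):
    model.fit(X_train, Y_train)
    print(type(model).__name__, round(model.score(X_train, Y_train), 3))
```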
**Conclusion:** not_alone and Parch don't play a significant role in our random forest classifier's prediction process. Because of that I will drop them from the dataset and train the classifier again. We could also remove more or fewer features, but that would require a more detailed investigation of each feature's effect on our model. For now I think it's fine to remove only not_alone and Parch.
train_df = train_df.drop("not_alone", axis=1) test_df = test_df.drop("not_alone", axis=1) train_df = train_df.drop("Parch", axis=1) test_df = test_df.drop("Parch", axis=1)
_____no_output_____
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
**Training random forest again:**
# Random Forest
# (note: for the dropped features to actually take effect, X_train and X_test
#  would need to be rebuilt from the reduced train_df/test_df before refitting)
random_forest = RandomForestClassifier(n_estimators=100, oob_score=True)
random_forest.fit(X_train, Y_train)
Y_prediction = random_forest.predict(X_test)
random_forest.score(X_train, Y_train)
acc_random_forest = round(random_forest.score(X_train, Y_train) * 100, 2)
print(round(acc_random_forest, 2), "%")
92.48 %
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
Our random forest model predicts as well as it did before. A general rule is that **the more features you have, the more likely your model will suffer from overfitting**, and vice versa. But I think our data looks fine for now and doesn't have too many features. There is also another way to evaluate a random-forest classifier, which is probably much more accurate than the score we used before. What I am talking about is using the **out-of-bag samples** to estimate the generalization accuracy. I will not go into the details of how it works here. Just note that the out-of-bag estimate is about as accurate as using a test set of the same size as the training set. Therefore, using the out-of-bag error estimate removes the need for a set-aside test set.
print("oob score:", round(random_forest.oob_score_, 4)*100, "%")
oob score: 81.82000000000001 %
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
Now we can start tuning the hyperparameters of random forest. Hyperparameter Tuning: Below you can see the code of the hyperparameter tuning for the parameters criterion, min_samples_leaf, min_samples_split and n_estimators. I put this code into a markdown cell and not into a code cell, because it takes a long time to run. Directly underneath it, I put a screenshot of the grid search's output.

    param_grid = {"criterion": ["gini", "entropy"],
                  "min_samples_leaf": [1, 5, 10, 25, 50, 70],
                  "min_samples_split": [2, 4, 10, 12, 16, 18, 25, 35],
                  "n_estimators": [100, 400, 700, 1000, 1500]}
    from sklearn.model_selection import GridSearchCV, cross_val_score
    rf = RandomForestClassifier(n_estimators=100, max_features='auto', oob_score=True, random_state=1, n_jobs=-1)
    clf = GridSearchCV(estimator=rf, param_grid=param_grid, n_jobs=-1)
    clf.fit(X_train, Y_train)
    clf.best_params_

![GridSearch Output](https://img2.picload.org/image/ddwglili/bildschirmfoto2018-02-01um15.4.png) **Test new parameters:**
# Random Forest
random_forest = RandomForestClassifier(criterion="gini",
                                       min_samples_leaf=1,
                                       min_samples_split=10,
                                       n_estimators=100,
                                       max_features='auto',
                                       oob_score=True,
                                       random_state=1,
                                       n_jobs=-1)
random_forest.fit(X_train, Y_train)
Y_prediction = random_forest.predict(X_test)
random_forest.score(X_train, Y_train)
print("oob score:", round(random_forest.oob_score_, 4)*100, "%")
oob score: 83.39 %
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
Now that we have a proper model, we can start evaluating its performance in a more accurate way. Previously we only used accuracy and the oob score, which is just another form of accuracy. The problem is that evaluating a classification model is more complicated than evaluating a regression model. We will talk about this in the following section. **Further Evaluation** Confusion Matrix:
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

predictions = cross_val_predict(random_forest, X_train, Y_train, cv=3)
confusion_matrix(Y_train, predictions)
_____no_output_____
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
The first row is about the actual non-survivors: **493 passengers were correctly classified as not survived** (true negatives) and **56 were wrongly classified as survived** (false positives). The second row is about the actual survivors: **93 passengers were wrongly classified as not survived** (false negatives) and **249 were correctly classified as survived** (true positives). A confusion matrix gives you a lot of information about how well your model does, but there's a way to get even more, like computing the classifier's precision. Precision and Recall:
from sklearn.metrics import precision_score, recall_score print("Precision:", precision_score(Y_train, predictions)) print("Recall:",recall_score(Y_train, predictions))
Precision: 0.8013029315960912 Recall: 0.7192982456140351
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
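As a small aside, scikit-learn can unpack the four cells of the matrix directly, which makes the labelling above explicit (this reuses the `predictions` computed in the cell above):

```python
# confusion_matrix rows are the actual classes, columns the predicted classes;
# ravel() returns the cells in the order tn, fp, fn, tp.
tn, fp, fn, tp = confusion_matrix(Y_train, predictions).ravel()
print(f"true negatives={tn}, false positives={fp}, false negatives={fn}, true positives={tp}")
```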
Our model's precision is about 80%: when it predicts that a passenger survived, it is correct about 80% of the time. The recall tells us that it identified about 72% of the people who actually survived. F-Score: You can combine precision and recall into one score, called the F-score. The F-score is the harmonic mean of precision and recall. Note that the harmonic mean assigns much more weight to low values. As a result, the classifier only gets a high F-score if both recall and precision are high.
from sklearn.metrics import f1_score f1_score(Y_train, predictions)
_____no_output_____
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
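The harmonic-mean definition can be checked by hand from the precision and recall printed a few cells above (values rounded):

```python
precision, recall = 0.801, 0.719                   # values printed above
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # roughly 0.76; f1_score(Y_train, predictions) computes the same quantity exactly
```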
There we have it: an F-score of roughly 76%. The score is not higher because our recall is only about 72%. Unfortunately the F-score is not perfect, because it favors classifiers that have similar precision and recall. This is a problem, because you sometimes want high precision and sometimes high recall. The thing is that increasing precision sometimes results in decreasing recall and vice versa (depending on the threshold). This is called the precision/recall tradeoff, and we will discuss it in the following section. Precision Recall Curve: For each person the Random Forest algorithm has to classify, it computes a probability, and it classifies the person as survived when the score is bigger than the threshold and as not survived when the score is smaller than the threshold. That's why the threshold plays an important part. We will plot precision and recall against the threshold using matplotlib:
from sklearn.metrics import precision_recall_curve

# getting the probabilities of our predictions
y_scores = random_forest.predict_proba(X_train)
y_scores = y_scores[:, 1]

precision, recall, threshold = precision_recall_curve(Y_train, y_scores)

def plot_precision_and_recall(precision, recall, threshold):
    plt.plot(threshold, precision[:-1], "r-", label="precision", linewidth=5)
    plt.plot(threshold, recall[:-1], "b", label="recall", linewidth=5)
    plt.xlabel("threshold", fontsize=19)
    plt.legend(loc="upper right", fontsize=19)
    plt.ylim([0, 1])

plt.figure(figsize=(14, 7))
plot_precision_and_recall(precision, recall, threshold)
plt.show()
_____no_output_____
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
Above you can clearly see that the recall falls off rapidly once precision reaches around 85%. Because of that, you may want to select the precision/recall tradeoff before that point, maybe at around 75%. You are now able to choose a threshold that gives you the best precision/recall tradeoff for your current machine learning problem. If you want, for example, a precision of 80%, you can look at the plot and see that you would need a threshold of around 0.4. You could then classify passengers by applying exactly that threshold to the predicted probabilities and obtain the desired precision. Another way is to plot precision and recall against each other:
def plot_precision_vs_recall(precision, recall):
    plt.plot(recall, precision, "g--", linewidth=2.5)
    # recall is on the x-axis, precision on the y-axis
    plt.xlabel("recall", fontsize=19)
    plt.ylabel("precision", fontsize=19)
    plt.axis([0, 1.5, 0, 1.5])

plt.figure(figsize=(14, 7))
plot_precision_vs_recall(precision, recall)
plt.show()
_____no_output_____
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
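Following the note above about choosing a threshold of around 0.4, a minimal sketch of how such a threshold could be applied to the predicted probabilities (the scores come from the already-fitted forest on the training data, so these numbers will look optimistic):

```python
threshold = 0.4                                    # read off the precision/recall-vs-threshold plot
custom_pred = (y_scores >= threshold).astype(int)  # y_scores was computed a few cells above

print("Precision:", precision_score(Y_train, custom_pred))
print("Recall:", recall_score(Y_train, custom_pred))
```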
ROC AUC CurveAnother way to evaluate and compare your binary classifier is provided by the ROC AUC Curve. This curve plots the true positive rate (also called recall) against the false positive rate (ratio of incorrectly classified negative instances), instead of plotting the precision versus the recall.
from sklearn.metrics import roc_curve

# compute true positive rate and false positive rate
false_positive_rate, true_positive_rate, thresholds = roc_curve(Y_train, y_scores)

# plotting them against each other
def plot_roc_curve(false_positive_rate, true_positive_rate, label=None):
    plt.plot(false_positive_rate, true_positive_rate, linewidth=2, label=label)
    plt.plot([0, 1], [0, 1], 'r', linewidth=4)
    plt.axis([0, 1, 0, 1])
    plt.xlabel('False Positive Rate (FPR)', fontsize=16)
    plt.ylabel('True Positive Rate (TPR)', fontsize=16)

plt.figure(figsize=(14, 7))
plot_roc_curve(false_positive_rate, true_positive_rate)
plt.show()
_____no_output_____
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
The red line in the middle represents a purely random classifier (e.g. a coin flip), so your classifier should be as far away from it as possible. Our Random Forest model seems to do a good job. Of course we also have a tradeoff here, because the classifier produces more false positives the higher the true positive rate is. ROC AUC Score: The ROC AUC score is the score corresponding to the ROC curve. It is computed by measuring the area under the curve, which is called the AUC. A classifier that is 100% correct would have a ROC AUC score of 1, and a completely random classifier would have a score of 0.5.
from sklearn.metrics import roc_auc_score r_a_score = roc_auc_score(Y_train, y_scores) print("ROC-AUC-Score:", r_a_score)
ROC-AUC-Score: 0.9424898007009023
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
Nice! I think that score is good enough to submit the predictions for the test set to the Kaggle leaderboard. **Submission**
submission = pd.DataFrame({ "PassengerId": test_df["PassengerId"], "Survived": Y_prediction }) submission.to_csv('submission.csv', index=False)
_____no_output_____
MIT
titanic/end-to-end-project-with-python.ipynb
MLVPRASAD/KaggleProjects
MAT281 Applications of Mathematics in Engineering. Module 02, Class 02: Data Manipulation. Objectives * Understand pandas objects * Be able to perform data manipulation. Contents * [Introduction to Pandas](pandas) * [Series](series) * [DataFrames](dataframes). Introduction to Pandas: From the official [repository](https://github.com/pandas-dev/pandas): pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with "relational" or "labeled" data both easy and intuitive. It aims to be the fundamental high-level building block for doing practical, **real world** data analysis in Python. Additionally, it has the broader goal of becoming **the most powerful and flexible open source data analysis / manipulation tool available in any language**. It is already well on its way towards this goal. Main Features * Easy handling of missing data (represented as NaN) in floating point as well as non-floating point data * Size mutability: columns can be inserted and deleted from DataFrame and higher dimensional objects * Automatic and explicit data alignment: objects can be explicitly aligned to a set of labels, or the user can simply ignore the labels and let Series, DataFrame, etc. automatically align the data for you in computations * Powerful, flexible group by functionality to perform split-apply-combine operations on data sets, for both aggregating and transforming data * Make it easy to convert ragged, differently-indexed data in other Python and NumPy data structures into DataFrame objects * Intelligent label-based slicing, fancy indexing, and subsetting of large data sets * Intuitive merging and joining data sets * Flexible reshaping and pivoting of data sets * Hierarchical labeling of axes (possible to have multiple labels per tick) * Robust IO tools for loading data from flat files (CSV and delimited), Excel files, databases, and saving/loading data from the ultrafast HDF5 format * Time series-specific functionality: date range generation and frequency conversion, moving window statistics, moving window linear regressions, date shifting and lagging, etc.
import pandas as pd pd.__version__
_____no_output_____
MIT
m02_data_analysis/m02_c02_data_manipulation/m02_c02_data_manipulation.ipynb
Taz-Ricardo/mat281_portafolio
Series: One-dimensional labeled arrays. They can be thought of as a generalization of Python dictionaries.
pd.Series?
_____no_output_____
MIT
m02_data_analysis/m02_c02_data_manipulation/m02_c02_data_manipulation.ipynb
Taz-Ricardo/mat281_portafolio
There are many ways to create a Series instance; the most common are: * From a list. * From a _numpy.array_. * From a dictionary. * From a file (for example a csv). (A short sketch of the first three options appears below.)
my_serie = pd.Series(range(3, 33, 3))
my_serie
type(my_serie)
# Press TAB and be amazed by how many methods and attributes they have!
# my_serie.
_____no_output_____
MIT
m02_data_analysis/m02_c02_data_manipulation/m02_c02_data_manipulation.ipynb
Taz-Ricardo/mat281_portafolio
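A minimal sketch of the creation options listed above (the dictionary case also illustrates the "generalized dictionary" analogy; the values are toy data):

```python
import numpy as np
import pandas as pd

s_from_list = pd.Series([10, 20, 30])
s_from_array = pd.Series(np.arange(3.0))
s_from_dict = pd.Series({"a": 1, "b": 2, "c": 3})   # keys become the index

print(s_from_list.values, s_from_array.values)
print(s_from_dict.loc["b"])                         # dictionary-style access through the index
```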
Series are one-dimensional arrays made up of _data_ and an _index_.
# Data my_serie.values type(my_serie.values) # Index my_serie.index type(my_serie.index)
_____no_output_____
MIT
m02_data_analysis/m02_c02_data_manipulation/m02_c02_data_manipulation.ipynb
Taz-Ricardo/mat281_portafolio
Did you notice that the index is of a different class? Unlike NumPy, pandas offers more flexibility for values and indices.
my_serie_2 = pd.Series(range(3, 33, 3), index=list('abcdefghij')) my_serie_2
_____no_output_____
MIT
m02_data_analysis/m02_c02_data_manipulation/m02_c02_data_manipulation.ipynb
Taz-Ricardo/mat281_portafolio
Accessing the values of a Series
my_serie_2['b'] my_serie_2.loc['b'] my_serie_2.iloc[1]
_____no_output_____
MIT
m02_data_analysis/m02_c02_data_manipulation/m02_c02_data_manipulation.ipynb
Taz-Ricardo/mat281_portafolio
```loc```?? ```iloc```??
# pd.Series.loc?
_____no_output_____
MIT
m02_data_analysis/m02_c02_data_manipulation/m02_c02_data_manipulation.ipynb
Taz-Ricardo/mat281_portafolio
To summarize: * ```loc``` is a method that refers to the object's labels. * ```iloc``` is a method that refers to the object's positions. **Tip**: If you want to edit values, always use ```loc``` and/or ```iloc```.
my_serie_2.loc['d'] = 1000 my_serie_2
_____no_output_____
MIT
m02_data_analysis/m02_c02_data_manipulation/m02_c02_data_manipulation.ipynb
Taz-Ricardo/mat281_portafolio
And what if I want to select more than one value?
my_serie_2.loc["b":"e"] # Incluso retorna el último valor! my_serie_2.iloc[1:5] # Incluso retorna el último valor!
_____no_output_____
MIT
m02_data_analysis/m02_c02_data_manipulation/m02_c02_data_manipulation.ipynb
Taz-Ricardo/mat281_portafolio
Surprise! You can also filter by conditions! In most tutorials on the internet you will find something like the following:
my_serie_2[my_serie_2 % 2 == 0]
_____no_output_____
MIT
m02_data_analysis/m02_c02_data_manipulation/m02_c02_data_manipulation.ipynb
Taz-Ricardo/mat281_portafolio
This is known as a _mask_, and it is based on the following fact:
my_serie_2 % 2 == 0  # Returns a Series with boolean values but the same index!
_____no_output_____
MIT
m02_data_analysis/m02_c02_data_manipulation/m02_c02_data_manipulation.ipynb
Taz-Ricardo/mat281_portafolio
If the Series is the result of another operation, you would have to store it in a variable in order to have a name for it and then access it. The following approach may be a bit more verbose, but it gives you more flexibility.
my_serie_2.loc[lambda s: s % 2 == 0]
_____no_output_____
MIT
m02_data_analysis/m02_c02_data_manipulation/m02_c02_data_manipulation.ipynb
Taz-Ricardo/mat281_portafolio
A lambda function is a small, anonymous function. Lambdas can take any number of arguments but contain only a single expression (a tiny example follows below). Working with dates: pandas even allows the index to consist of dates! For example, below we create a Series with the Google search trend for *data science*.
import os ds_trend = pd.read_csv(os.path.join('data', 'dataScienceTrend.csv'), index_col=0, squeeze=True) ds_trend.head(10) ds_trend.tail(10) ds_trend.dtype ds_trend.index
_____no_output_____
MIT
m02_data_analysis/m02_c02_data_manipulation/m02_c02_data_manipulation.ipynb
Taz-Ricardo/mat281_portafolio
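A tiny, self-contained illustration of the lambda note above (the names are arbitrary):

```python
double = lambda x: 2 * x   # one expression, any number of arguments
print(double(21))          # 42

# The same idea is what .loc[lambda s: s % 2 == 0] relies on:
# the lambda receives the Series and returns a boolean mask.
```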
**Watch out!** The index values are _strings_ (_object_ is a generalization). **Solution:** parse them into date elements with the ```pd.to_datetime()``` function.
# pd.to_datetime? ds_trend.index = pd.to_datetime(ds_trend.index, format='%Y-%m-%d') ds_trend.index
_____no_output_____
MIT
m02_data_analysis/m02_c02_data_manipulation/m02_c02_data_manipulation.ipynb
Taz-Ricardo/mat281_portafolio
For other kinds of parsing you can visit the documentation [here](https://docs.python.org/3/library/datetime.htmlstrftime-and-strptime-behavior). The point of date elements is to be able to perform operations that feel natural to a human. For example:
ds_trend.index.min() ds_trend.index.max() ds_trend.index.max() - ds_trend.index.min()
_____no_output_____
MIT
m02_data_analysis/m02_c02_data_manipulation/m02_c02_data_manipulation.ipynb
Taz-Ricardo/mat281_portafolio
Coming back to the Series, we can work with all of its elements, for example, quickly determining the maximum trend value.
max_trend = ds_trend.max() max_trend
_____no_output_____
MIT
m02_data_analysis/m02_c02_data_manipulation/m02_c02_data_manipulation.ipynb
Taz-Ricardo/mat281_portafolio
To determine the _index_ corresponding to the maximum value, two approaches are commonly used: * Using a mask * Using built-in methods
# Mask ds_trend[ds_trend == max_trend] # Built-in method ds_trend.idxmax()
_____no_output_____
MIT
m02_data_analysis/m02_c02_data_manipulation/m02_c02_data_manipulation.ipynb
Taz-Ricardo/mat281_portafolio
DataFrames: A two-dimensional array and the natural extension of a Series. We can think of it as a generalization of a numpy.array. Using the NBA players dataset, the flexibility of pandas becomes much more visible. Not all elements need to be of the same type!
player_data = pd.read_csv(os.path.join('data', 'player_data.csv'), index_col='name') player_data.head() type(player_data) player_data.info(memory_usage=True) player_data.dtypes
_____no_output_____
MIT
m02_data_analysis/m02_c02_data_manipulation/m02_c02_data_manipulation.ipynb
Taz-Ricardo/mat281_portafolio
You can think of a DataFrame as a collection of Series.
player_data['birth_date'].head() type(player_data['birth_date'])
_____no_output_____
MIT
m02_data_analysis/m02_c02_data_manipulation/m02_c02_data_manipulation.ipynb
Taz-Ricardo/mat281_portafolio
Exploration
player_data.describe().T player_data.describe(include='all').T player_data.max()
_____no_output_____
MIT
m02_data_analysis/m02_c02_data_manipulation/m02_c02_data_manipulation.ipynb
Taz-Ricardo/mat281_portafolio
To extract elements, the most recommended approach is the loc method.
player_data.loc['Zaid Abdul-Aziz', 'college']
_____no_output_____
MIT
m02_data_analysis/m02_c02_data_manipulation/m02_c02_data_manipulation.ipynb
Taz-Ricardo/mat281_portafolio
Avoid accessing with double brackets.
player_data['college']['Zaid Abdul-Aziz']
_____no_output_____
MIT
m02_data_analysis/m02_c02_data_manipulation/m02_c02_data_manipulation.ipynb
Taz-Ricardo/mat281_portafolio
Although it sometimes works, there is no guarantee that it always will (a short illustration follows below). [More info here.](https://pandas.pydata.org/pandas-docs/stable/indexing.htmlwhy-does-assignment-fail-when-using-chained-indexing)
player_data['position'].value_counts()
_____no_output_____
MIT
m02_data_analysis/m02_c02_data_manipulation/m02_c02_data_manipulation.ipynb
Taz-Ricardo/mat281_portafolio
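A minimal sketch of why chained indexing is risky for assignment, using toy data rather than the NBA dataset:

```python
import pandas as pd

toy = pd.DataFrame({"college": ["Duke", None]}, index=["player_a", "player_b"])

# Chained indexing performs two separate lookups; the assignment may land on a
# temporary copy and only raise a SettingWithCopyWarning instead of updating `toy`.
toy["college"]["player_b"] = "UCLA"

# A single .loc call performs one lookup and is guaranteed to modify `toy` itself.
toy.loc["player_b", "college"] = "UCLA"
```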
Missing/null values: pandas offers tools for working with null values, but you need to know them and how to apply them. For example, the ```isnull()``` method returns a boolean indicating whether a value is null. For example: which players do not have a recorded birth date?
player_data.index.shape player_data.loc[lambda x: x['birth_date'].isnull()]
_____no_output_____
MIT
m02_data_analysis/m02_c02_data_manipulation/m02_c02_data_manipulation.ipynb
Taz-Ricardo/mat281_portafolio
If we want to find all the rows that contain at least one null value:
player_data.isnull()
# pd.DataFrame.any?
rows_null_mask = player_data.isnull().any(axis=1)  # axis=1 refers to the rows
rows_null_mask.head()
player_data[rows_null_mask].head()
player_data[rows_null_mask].shape
_____no_output_____
MIT
m02_data_analysis/m02_c02_data_manipulation/m02_c02_data_manipulation.ipynb
Taz-Ricardo/mat281_portafolio
To find those that do not have any null values, the procedure is similar.
player_data.loc[lambda x: x.notnull().all(axis=1)].head()
_____no_output_____
MIT
m02_data_analysis/m02_c02_data_manipulation/m02_c02_data_manipulation.ipynb
Taz-Ricardo/mat281_portafolio
pandas even offers options for dropping null elements!
pd.DataFrame.dropna?
# Drop any record with a null
print(player_data.dropna().shape)
# Drop rows with null elements
print(player_data.dropna(axis=0).shape)
# Drop columns with null elements
print(player_data.dropna(axis=1).shape)
(4213, 7) (4213, 7) (4550, 2)
MIT
m02_data_analysis/m02_c02_data_manipulation/m02_c02_data_manipulation.ipynb
Taz-Ricardo/mat281_portafolio
Even though the linear model looks quite different from the 365-day moving average and is obviously a poor fit, for simplicity and interpretability we'll continue working with it for now.
forecast_n_days = 30
forecast_index = pd.date_range(df.index.max(), periods=forecast_n_days+1).tolist()[1:]
X = dp.out_of_sample(steps=forecast_n_days, forecast_index=forecast_index)
y_fore = pd.Series(model.predict(X), index=X.index)
y_fore.head()

from datetime import datetime
min_date = datetime.strptime("2021-05", '%Y-%m').date()
ax = df[df.index > min_date].plot(title="NEXO Price - Linear Trend Forecast", **plot_params)
ax = y_pred[y_pred.index > min_date].plot(ax=ax, linewidth=3, label="Trend")
ax = y_fore.plot(ax=ax, linewidth=3, label="Trend Forecast", color="C3")
_ = ax.legend()
_____no_output_____
MIT
src/Trend.ipynb
sekR4/cryptoracle
**This colab allows users to run inference with and share released Dall-E models.** [Dall-E Service](https://github.com/rom1504/dalle-service) | [Models](https://github.com/robvanvolt/DALLE-models) *Colab created by mega b6696*
#@title # **Setup, run this once**
from IPython.display import clear_output
!sudo apt-get -y install llvm-9-dev cmake
!git clone https://github.com/microsoft/DeepSpeed.git /tmp/Deepspeed
%cd /tmp/Deepspeed
!DS_BUILD_SPARSE_ATTN=1 ./install.sh -r
!npm install -g localtunnel
clear_output()
%cd /content/
!pip install Flask==1.1.2 Flask-Cors==3.0.9 Flask-RESTful==0.3.8 dalle-pytorch tqdm
!git clone https://github.com/rom1504/dalle-service.git
clear_output()
print("Finished setup.")

#@title # **Enter direct model download**
#@markdown # Publicly released models are located [here](https://github.com/robvanvolt/DALLE-models).
model_url = "https://github.com/johnpaulbin/DALLE-models/releases/download/model/16L_64HD_8H_512I_128T_cc12m_cc3m_3E.pt" #@param {type:"string"}
!wget "$model_url" -O dalle_checkpoint.pt
!echo '{"good": "dalle_checkpoint.pt"}' > model_paths.json
clear_output()
print("Finished download.")

#@title # **Start backend**
#@markdown ## Copy the url it provides you, and you will be able to use it in https://rom1504.github.io/dalle-service
#@markdown #### Example: https://rom1504.github.io/dalle-service?back=https://XXXX.loca.lt
from threading import Thread

def app():
    !python dalle-service/back/dalle_service.py 8000

if __name__ == '__main__':
    t1 = Thread(target = app)
    a = t1.start()

!lt --port 8000
_____no_output_____
MIT
dalle_back.ipynb
johnpaulbin/dalle-service
0. SetupPlease make sure your environment is set up according to the instructions here: https://github.com/NASA-NAVO/aas_workshop_2020_winter/blob/master/00_SETUP.mdEnsure you have the latest version of the workshop material by updating your environment:TBD 1. OverviewNASA services can be queried from Python in multiple ways.* Generic Virtual Observatory (VO) queries. * Call sequence is consistent, including for non-NASA resources. * Use the `pyvo` package: https://pyvo.readthedocs.io/en/latest/ * Known issues/caveats: https://github.com/NASA-NAVO/aas_workshop_2020_winter/blob/master/KNOWN_ISSUES.md* Astroquery interfaces * Call sequences not quite as consistent, but follow similar patterns. * See https://astroquery.readthedocs.io/en/latest/ * Informal Q&A session Tuesday, 5:30pm-6:30pm, NumFocus booth* Ad hoc archive-specific interfaces 2. VO ServicesThis workshop will introduce 4 types of VO queries:* **VO Registry** - Discover what services are available worldwide* **Simple Cone Search** - Search for catalog object within a specified cone region* **Simple Image Access** - Search for image products within a spatial region* **Simple Spectral Access** - Search for spectral products within a spatial region* **Table Access** - SQL-like queries to databases 2.1 Import Necessary Packages
# Generic VO access routines
import pyvo as vo

# For specifying coordinates and angles
from astropy.coordinates import SkyCoord
from astropy.coordinates import Angle
from astropy import units as u

# For downloading files
from astropy.utils.data import download_file

# Ignore unimportant warnings
import warnings
warnings.filterwarnings('ignore', '.*Unknown element mirrorURL.*',
                        vo.utils.xml.elements.UnknownElementWarning)
_____no_output_____
BSD-3-Clause
QuickReference.ipynb
tomdonaldson/navo-workshop
2.1 Look Up Services in VO Registry: Simple example: Find Simple Cone Search (conesearch) services related to SWIFT.
services = vo.regsearch(servicetype='conesearch', keywords=['swift']) services
_____no_output_____
BSD-3-Clause
QuickReference.ipynb
tomdonaldson/navo-workshop
2.1.1 Use different arguments/values to modify the simple example

| Argument | Description | Examples |
| :-----: | :----------- | :-------- |
| **servicetype** | Type of service | `conesearch` or `scs` for **Simple Cone Search**; `image` or `sia` for **Simple Image Access**; `spectrum` or `ssa` for **Simple Spectral Access**; `table` or `tap` for **Table Access Protocol** |
| **keyword** | List of one or more keyword(s) to match the service's metadata. Both ORs and ANDs may be specified. (OR) A list of keywords matches a service if **any** of the keywords match the service. (AND) If a keyword contains multiple space-delimited words, **all** the words must match the metadata. | `['galex', 'swift']` matches 'galex' or 'swift'; `['hst survey']` matches services mentioning both 'hst' and 'survey' |
| **waveband** | Resulting services have data in the specified waveband(s) | 'radio', 'millimeter', 'infrared', 'optical', 'uv', 'euv', 'x-ray', 'gamma-ray' |

2.1.2 Inspect the results. Using pyvo: Although not lists, `pyvo` results can be iterated over to see each individual result. The results are specialized based on the type of query, providing access to the important properties of the results. Some useful accessors with registry results are: * `short_name` - A short name * `res_title` - A more descriptive title * `res_description` - A more verbose description * `reference_url` - A link for more information * `ivoid` - A unique identifier for the service. Gives some indication of what organization is serving the data.
# Print the number of results and the 1st 4 short names and titles.
print(f'Number of results: {len(services)}\n')
for s in list(services)[:4]:  # (Treat services as list to get the subset of rows)
    print(f'{s.short_name} - {s.res_title}')
_____no_output_____
BSD-3-Clause
QuickReference.ipynb
tomdonaldson/navo-workshop
Filtering results: Of the services we found, which one(s) have 'stsci.edu' in their unique identifier?
stsci_services = [s for s in services if 'stsci.edu' in s.ivoid] for s in stsci_services: print (f'(STScI): {s.short_name} - {s.res_title}')
_____no_output_____
BSD-3-Clause
QuickReference.ipynb
tomdonaldson/navo-workshop
Using astropy: With the `to_table()` method, `pyvo` results can also be converted to Astropy `Table` objects, which offer a variety of additional features. See http://docs.astropy.org/en/stable/table/ for more on working with Astropy Tables.
# Convert to an Astropy Table
services_table = services.to_table()

# Print the column names and display 1st 3 rows with a subset of columns
print(f'\nColumn Names:\n{services_table.colnames}\n')
services_table['short_name', 'res_title', 'res_description'][:3]
_____no_output_____
BSD-3-Clause
QuickReference.ipynb
tomdonaldson/navo-workshop
2.2 Cone search: Example: Find a cone search service for the USNO-B catalog and search it around M51 with a .1 degree radius. (More inspection could be done on the service list instead of blindly choosing the first service.) The position (`pos`) is best specified with `SkyCoord` objects (see http://docs.astropy.org/en/stable/api/astropy.coordinates.SkyCoord.html). The size of the region is specified with the `radius` keyword and may be decimal degrees or an Astropy `Angle` (http://docs.astropy.org/en/stable/api/astropy.coordinates.Angle.htmlastropy.coordinates.Angle).
m51_pos = SkyCoord.from_name("m51")
services = vo.regsearch(servicetype='conesearch', keywords='usno-b')
results = services[0].search(pos=m51_pos, radius=0.1)
# Astropy Table is useful for displaying cone search results.
results.to_table()
_____no_output_____
BSD-3-Clause
QuickReference.ipynb
tomdonaldson/navo-workshop
2.3 Image search: Example: Find an image search service for GALEX, and search it around coordinates 13:37:00.950,-29:51:55.51 (M83) with a radius of .2 degrees. Download the first file in the results. Find an image service:
services = vo.regsearch(servicetype='image', keywords=['galex']) services.to_table()['ivoid', 'short_name', 'res_title']
_____no_output_____
BSD-3-Clause
QuickReference.ipynb
tomdonaldson/navo-workshop
Search one of the services: The first service looks good. Search it! For more details on using `SkyCoord`, see http://docs.astropy.org/en/stable/api/astropy.coordinates.SkyCoord.htmlastropy.coordinates.SkyCoord **NOTE**: For image searches, the size of the region is defined by the `size` keyword, which is more like a diameter than a radius.
m83_pos = SkyCoord('13h37m00.950s -29d51m55.51s')
results = services[0].search(pos=m83_pos, size=.2)
# We can look at the results.
results.to_table()
_____no_output_____
BSD-3-Clause
QuickReference.ipynb
tomdonaldson/navo-workshop
Download an image: For the first result, print the file format and download the file. If repeatedly executing this code, add `cache=True` to `download_file()` to prevent repeated downloads. See the `download_file()` documentation here: https://docs.astropy.org/en/stable/api/astropy.utils.data.download_file.htmlastropy.utils.data.download_file
print(results[0].format) file_name = download_file(results[0].getdataurl()) file_name
_____no_output_____
BSD-3-Clause
QuickReference.ipynb
tomdonaldson/navo-workshop
2.4 Spectral search: Example: Find a spectral service for x-ray data. Query it around Delta Ori with a search **diameter** of 10 arc minutes, and download the first data product. Note that the results table can be inspected for potentially useful columns. Spectral search is very similar to image search. In this example, note: * **`diameter`** defines the size of the search region * `waveband` is used in `regsearch()` * an Astropy `Angle` is used to specify radius units other than degrees.
# Search for a spectrum search service that has x-ray data.
services = vo.regsearch(servicetype='spectrum', waveband='x-ray')

# Assuming there are services and the first one is OK...
results = services[0].search(pos=SkyCoord.from_name("Delta Ori"),
                             diameter=Angle(10 * u.arcmin))

# Assuming there are results, download the first file.
print(f'Title: {results[0].title}, Format: {results[0].format}')
file_name = download_file(results[0].getdataurl())
file_name
_____no_output_____
BSD-3-Clause
QuickReference.ipynb
tomdonaldson/navo-workshop
2.5 Table search: Example: Find the HEASARC Table Access Protocol (TAP) service and get some information about the available tables.
services = vo.regsearch(servicetype='tap', keywords=['heasarc'])
print(f'{len(services)} service(s) found.')

# We found only one service.  Print some info about the service and its tables.
print(f'{services[0].describe()}')
tables = services[0].service.tables  # Queries for details of the service's tables
print(f'{len(tables)} tables:')
for t in tables:
    print(f'{t.name:30s} - {t.description}')  # A more succinct option than t.describe()
_____no_output_____
BSD-3-Clause
QuickReference.ipynb
tomdonaldson/navo-workshop
Column Information: For any table, we can list the column names and descriptions.
for c in tables['zcat'].columns: print(f'{c.name:30s} - {c.description}')
_____no_output_____
BSD-3-Clause
QuickReference.ipynb
tomdonaldson/navo-workshop
Perform a Query: Example: Perform a cone search on the ZCAT catalog at M83 with a 1.0 degree radius.
coord = SkyCoord.from_name("m83")
query = f'''
SELECT ra, dec, Radial_Velocity, radial_velocity_error, bmag, morph_type
FROM public.zcat as cat
where contains(point('ICRS',cat.ra,cat.dec),circle('ICRS',{coord.ra.deg},{coord.dec.deg},1.0))=1
'''
results = services[0].service.run_async(query)
results.to_table()
_____no_output_____
BSD-3-Clause
QuickReference.ipynb
tomdonaldson/navo-workshop
3. Astroquery Many archives have Astroquery modules for data access, including:* [HEASARC Queries (astroquery.heasarc)](https://astroquery.readthedocs.io/en/latest/heasarc/heasarc.html)* [HITRAN Queries (astroquery.hitran)](https://astroquery.readthedocs.io/en/latest/hitran/hitran.html)* [IRSA Image Server program interface (IBE) Queries (astroquery.ibe)](https://astroquery.readthedocs.io/en/latest/ibe/ibe.html)* [IRSA Queries (astroquery.irsa)](https://astroquery.readthedocs.io/en/latest/irsa/irsa.html)* [IRSA Dust Extinction Service Queries (astroquery.irsa_dust)](https://astroquery.readthedocs.io/en/latest/irsa/irsa_dust.html)* [JPL Spectroscopy Queries (astroquery.jplspec)](https://astroquery.readthedocs.io/en/latest/jplspec/jplspec.html)* [MAST Queries (astroquery.mast)](https://astroquery.readthedocs.io/en/latest/mast/mast.html)* [NASA ADS Queries (astroquery.nasa_ads)](https://astroquery.readthedocs.io/en/latest/nasa_ads/nasa_ads.html)* [NED Queries (astroquery.ned)](https://astroquery.readthedocs.io/en/latest/ned/ned.html)For more, see https://astroquery.readthedocs.io/en/latest/ 3.1 NEDExample: Get an Astropy Table containing the objects from paper 2018ApJ...858...62K. For more on the API, see https://astroquery.readthedocs.io/en/latest/ned/ned.html
from astroquery.ned import Ned objects_in_paper = Ned.query_refcode('2018ApJ...858...62K') objects_in_paper
_____no_output_____
BSD-3-Clause
QuickReference.ipynb
tomdonaldson/navo-workshop
**About this dataset**
* Age: age of the patient
* Sex: sex of the patient
* exang: exercise-induced angina (1 = yes; 0 = no)
* ca: number of major vessels (0-3)
* cp: chest pain type (Value 1: typical angina; Value 2: atypical angina; Value 3: non-anginal pain; Value 4: asymptomatic)
* trtbps: resting blood pressure (in mm Hg)
* chol: cholesterol in mg/dl fetched via BMI sensor
* fbs: fasting blood sugar > 120 mg/dl (1 = true; 0 = false)
* rest_ecg: resting electrocardiographic results (Value 0: normal; Value 1: having ST-T wave abnormality (T wave inversions and/or ST elevation or depression of > 0.05 mV); Value 2: showing probable or definite left ventricular hypertrophy by Estes' criteria)
* thalach: maximum heart rate achieved
* target: 0 = less chance of heart attack, 1 = more chance of heart attack
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

data = pd.read_csv('/content/drive/MyDrive/dataset/heart_new.csv')
data
data.info()
data.describe()
_____no_output_____
MIT
Heart_Attack_Analysis_&_Classification_Dataset.ipynb
DivyaniMaharana/manim
Data Cleaning
data.isnull() data.isnull().sum() df=data.drop(axis=0,labels=303) df sns.heatmap(df.isnull()) df.isnull().sum()
_____no_output_____
MIT
Heart_Attack_Analysis_&_Classification_Dataset.ipynb
DivyaniMaharana/manim
EDA
sns.boxplot(x='sex',y='cp',data=df) sns.pairplot(df) df.corr() plt.figure(figsize=(8, 12)) heatmap = sns.heatmap(df.corr()[['output']].sort_values(by='output', ascending=False), vmin=-1, vmax=1, annot=True, cmap='BrBG') heatmap.set_title('Features Correlating with output', fontdict={'fontsize':18}, pad=16);
_____no_output_____
MIT
Heart_Attack_Analysis_&_Classification_Dataset.ipynb
DivyaniMaharana/manim
DATA PREPROCESSING
df from sklearn.model_selection import train_test_split X=df.drop(['output'] , axis=1) y=df['output'] X.shape , y.shape X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.3,random_state=42) X_train.shape , y_train.shape X_test.shape , y_test.shape from sklearn.preprocessing import StandardScaler sc=StandardScaler() X_train=sc.fit_transform(X_train) X_test=sc.fit_transform(X_test) from sklearn.svm import SVC svc=SVC(C=10, kernel='linear', degree=5) svc.fit(X_train,y_train) y_pred=svc.predict(X_test) y_pred y_test svc.score(X_test,y_test) from sklearn.metrics import accuracy_score acc=accuracy_score(y_test,y_pred) acc from sklearn.tree import DecisionTreeClassifier dt=DecisionTreeClassifier() dt.fit(X_train,y_train) dt.predict(X_test) y_test dt.score(X_test,y_test) dt=DecisionTreeClassifier(criterion='gini',splitter='best',max_depth=1) dt.fit(X_train,y_train) dt.predict(X_test) dt.score(X_test,y_test) import pickle filename='heart attack analysis svc ml model.pkl' pickle.dump(svc,open(filename,'wb'))
_____no_output_____
MIT
Heart_Attack_Analysis_&_Classification_Dataset.ipynb
DivyaniMaharana/manim
Introduction

This Morglorb recipe uses groupings of ingredients to cover the nutritional requirements with enough overlap that a single ingredient with quality issues does not cause the whole recipe to fail. An optimizer is used to find the right amount of each ingredient to fulfill the nutritional and practical requirements.

To Do

* Remove the upper-limit constraint for nutrients that have no upper limit
* Add constraints for the NIH essential protein combinations as a limit
* Add a radar graph for vitamins showing the boundary between RDI and UL
* Add a radar graph for vitamins without an upper limit but showing the RDI
* Add a radar graph for essential proteins showing the range between RDI and UL
* Add a radar graph for essential proteins without an upper limit, but showing the RDI as the lower limit
* Add a radar graph pair for non-essential proteins with the above UL and no-UL pairing
* Add equality constraints for at least energy, and for the macronutrients if possible
# Import all of the helper libraries
from scipy.optimize import minimize
from scipy.optimize import Bounds
from scipy.optimize import least_squares, lsq_linear, dual_annealing, minimize
import pandas as pd
import numpy as np
import os
import json
from math import e, log, log10
import matplotlib.pyplot as plt
import seaborn as sns
from ipysheet import from_dataframe, to_dataframe

#!pip install seaborn
#!pip install ipysheet
#!pip install ipywidgets

# Setup the notebook context
data_dir = '../data'
pd.set_option('max_columns', 70)
_____no_output_____
MIT
notebooks/Morglorb-Peanut-Butter-SLSQP.ipynb
sekondusg/dunli
Our Data

The [tables](https://docs.google.com/spreadsheets/d/104Y7kH4OzmfsM-v2MSEoc7cIgv0aAMT2sQLAmgkx8R8/edit#gid=442191411) containing our ingredients' nutrition profiles are held in Google Sheets. The sheet names are "Ingredients" and "Nutrition Profile".
# Download our nutrition profile data from Google Sheets
google_spreadsheet_url = 'https://docs.google.com/spreadsheets/d/104Y7kH4OzmfsM-v2MSEoc7cIgv0aAMT2sQLAmgkx8R8/export?format=csv&id=104Y7kH4OzmfsM-v2MSEoc7cIgv0aAMT2sQLAmgkx8R8'
nutrition_tab = '624419712'
ingredient_tab = '1812860789'
nutrition_tab_url = f'{google_spreadsheet_url}&gid={nutrition_tab}'
ingredient_tab_url = f'{google_spreadsheet_url}&gid={ingredient_tab}'

nutrition_profile_df = pd.read_csv(nutrition_tab_url, index_col=0, verbose=True)
for col in ['RDI', 'UL', 'Target Scale', 'Target', 'Weight']:
    nutrition_profile_df[col] = nutrition_profile_df[col].astype(float)
nutrition_profile_df = nutrition_profile_df.transpose()

ingredients_df = pd.read_csv(ingredient_tab_url, index_col=0, verbose=True).transpose()
# convert all values to float
for col in ingredients_df.columns:
    ingredients_df[col] = ingredients_df[col].astype(float)
Tokenization took: 0.04 ms
Type conversion took: 1.01 ms
Parser memory cleanup took: 0.00 ms
Tokenization took: 0.06 ms
Type conversion took: 1.29 ms
Parser memory cleanup took: 0.01 ms
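A quick sanity check (an added sketch, not in the original notebook) can confirm that both sheets parsed and transposed as expected before moving on to the optimization.

# Confirm both sheets parsed and transposed as expected.
print(nutrition_profile_df.shape, ingredients_df.shape)
print('Missing ingredient values:', ingredients_df.isnull().sum().sum())
nutrition_profile_df.loc['Target'].head()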
MIT
notebooks/Morglorb-Peanut-Butter-SLSQP.ipynb
sekondusg/dunli
Problem Setup

Let's cast our data into the form $\vec{y} = A \vec{x} + \vec{b}$, where $A$ is our ingredients data, $\vec{x}$ is the quantity of each ingredient in our recipe, and $\vec{b}$ is the nutrition profile. The problem to be solved is to find the quantity of each ingredient that optimally satisfies the nutrition profile, or in our model, to minimize $|A \vec{x} - \vec{b}|$.

There are some nutrients we only want to track, not optimize. For example, we want to know how much cholesterol is contained in our recipe, but we don't want to constrain the result to hit a specific amount of cholesterol as a goal. The full (tracking-included) data are named A_full and b_full; the values to be optimized are named A and b.
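As an illustration of the idealized formulation (a minimal sketch, not the notebook's actual objective, which uses soft exponential penalties below), the $|A \vec{x} - \vec{b}|$ minimization can be posed as a bounded linear least-squares problem with scipy's lsq_linear, already imported above; the matrices here are built the same way the next cell builds A and b.

from scipy.optimize import lsq_linear
import numpy as np

# Build the trackable-nutrient matrices and solve |Ax - b| with x >= 0.
A_ls = ingredients_df.transpose()[nutrition_profile_df.loc['Report Only'] == False].astype(float)
b_ls = nutrition_profile_df.loc['Target'][nutrition_profile_df.loc['Report Only'] == False].astype(float)
ls = lsq_linear(A_ls.values, b_ls.values, bounds=(0.0, np.inf))
print(ls.cost, np.round(ls.x, 3))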
b_full = nutrition_profile_df
A_full = ingredients_df.transpose()
A = ingredients_df.transpose()[nutrition_profile_df.loc['Report Only'] == False].astype(float)
b_full = nutrition_profile_df.loc['Target']
b = nutrition_profile_df.loc['Target'][nutrition_profile_df.loc['Report Only'] == False].astype(float)
ul = nutrition_profile_df.loc['UL'][nutrition_profile_df.loc['Report Only'] == False].astype(float)
rdi = nutrition_profile_df.loc['RDI'][nutrition_profile_df.loc['Report Only'] == False].astype(float)
weight = nutrition_profile_df.loc['Weight'][nutrition_profile_df.loc['Report Only'] == False]
ul_full = nutrition_profile_df.loc['UL']
rdi_full = nutrition_profile_df.loc['RDI']

# Constrain ingredients before the optimization process. Many of the ingredients are required for non-nutritional purposes
# or are being limited to enhance flavor
#
# The bounds units are in fractions of 100g / day, i.e.: 0.5 represents 50g / day, of the ingredient
#bounds_df = pd.DataFrame(index=ingredients_df.index, data={'lower': 0.0, 'upper': np.inf})
bounds_df = pd.DataFrame(index=ingredients_df.index, data={'lower': 0.0, 'upper': 1.0e6})
bounds_df.loc['Guar gum'] = [1.5 * .01, 1.5 * .01 + .0001]
bounds_df.loc['Xanthan Gum'] = [1.5 * .01, 1.5 * .01 + .0001]
bounds_df.loc['Alpha-galactosidase enzyme (Beano)'] = [1.0, 1.0 + .0001]
bounds_df.loc['Multivitamin'] = [1.0, 1.0 + .0001]
bounds_df.loc['Corn flour, nixtamalized'] = [0, 1.0]
bounds_df.loc['Whey protein'] = [0.0, 0.15]
bounds_df.loc['Ascorbic acid'] = [0.01, 0.01 + .0001]
bounds_df.loc['Peanut butter'] = [0.70, 5.0]
bounds_df.loc['Wheat bran, crude'] = [0.5, 5.0]
bounds_df.loc['Flaxseed, fresh ground'] = [0.25, 5.0]
bounds_df.loc['Choline Bitartrate'] = [0.0, 0.05]
bounds_df.loc['Potassium chloride'] = [0.0, 0.15]

lower = bounds_df.lower.values
upper = bounds_df.upper.values
lower.shape, upper.shape

x0 = np.array(lower)
bounds = pd.DataFrame(data={'lower': lower, 'upper': upper}, dtype=float)

a = 100.; b = 2.; c = a; k = 10
a = 20.; b = 2.; c = a; k = 10
a = 10.; b = 0.1; c = a; k = 5
#u0 = (rdi + np.log(rdi)); u0.name = 'u0'
#u0 = rdi * (1 + log(a))
u0 = rdi / (1 - log(k) / a)
u1 = ul / (log(k) / c + 1)
#u1 = ul - np.log(ul); u1.name = 'u1'
#u = pd.concat([limits, pd.Series(y0, scale_limits.index, name='y0')], axis=1)

def obj(x):
    y0 = A.dot(x.transpose())
    obj_vec = (np.exp(a * (u0 - y0)/u0) + np.exp(b * (y0 - u0)/u0) + np.nan_to_num(np.exp(c * (y0 - u1)/u1))) * weight
    #print(f'obj_vec: {obj_vec[0]}, y0: {y0[0]}, u0: {u0[0]}')
    return(np.sum(obj_vec))

#rdi[26], u0[26], u1[26], ul[26]
#rdi[0:5], u0[0:5], u1[0:5], ul[0:5]
#np.log(rdi)[26]
#u1

solution = minimize(obj, x0, method='SLSQP', bounds=list(zip(lower, upper)), options={'maxiter': 1000})
solution.success

A_full.dot(solution.x).astype(int)

# Scale the ingredient nutrient amounts for the given quantity of each ingredient given by the optimizer
solution_df = A_full.transpose().mul(solution.x, axis=0)  # Scale each nutrient vector per ingredient by the amount of the ingredient
solution_df.insert(0, 'Quantity (g)', solution.x * 100)  # Scale to 100 g since that is basis for the nutrient quantities

# Add a row showing the sum of the scaled amount of each nutrient
total = solution_df.sum()
total.name = 'Total'
solution_df = solution_df.append(total)

# Plot the macro nutrient profile
# The ratio of Calories for protein:carbohydrates:fat is 4:4:9 kcal/g
pc = solution_df['Protein (g)']['Total'] * 4.0
cc = solution_df['Carbohydrates (g)']['Total'] * 4.0
fc = solution_df['Total Fat (g)']['Total'] * 9.0
tc = pc + cc + fc

p_pct = int(round(pc / tc * 100))
c_pct = int(round(cc / tc * 100))
f_pct = int(round(fc / tc * 100))
(p_pct, c_pct, f_pct)

# create data
names = f'Protein {p_pct}%', f'Carbohydrates {c_pct}%', f'Fat {f_pct}%',
size = [p_pct, c_pct, f_pct]

fig = plt.figure(figsize=(10, 5))
fig.add_subplot(1, 2, 1)

# Create a circle for the center of the plot
my_circle = plt.Circle((0, 0), 0.5, color='white')

# Give color names
cmap = plt.get_cmap('Spectral')
sm = plt.cm.ScalarMappable(cmap=cmap)
colors = ['yellow', 'orange', 'red']
plt.pie(size, labels=names, colors=colors)
#p=plt.gcf()
#p.gca().add_artist(my_circle)
fig.gca().add_artist(my_circle)
#plt.show()

fig.add_subplot(1, 2, 2)
barWidth = 1
fs = [solution_df['Soluble Fiber (g)']['Total']]
fi = [solution_df['Insoluble Fiber (g)']['Total']]
plt.bar([0], fs, color='red', edgecolor='white', width=barWidth, label=['Soluble Fiber (g)'])
plt.bar([0], fi, bottom=fs, color='yellow', edgecolor='white', width=barWidth, label=['Insoluble Fiber (g)'])
plt.show()

# Also show the Omega-3, Omega-6 ratio
# Saturated:Monounsaturated:Polyunsaturated ratios

# Prepare data as a whole for plotting by normalizing and scaling
amounts = solution_df
total = A_full.dot(solution.x)  #solution_df.loc['Total']

# Normalize as a ratio beyond RDI
norm = (total) / rdi_full
norm_ul = (ul_full) / rdi_full
nuts = pd.concat([pd.Series(norm.values, name='value'), pd.Series(norm.index, name='name')], axis=1)

# Setup categories of nutrients and a common plotting function
vitamins = ['Vitamin A (IU)', 'Vitamin B6 (mg)', 'Vitamin B12 (ug)', 'Vitamin C (mg)', 'Vitamin D (IU)',
            'Vitamin E (IU)', 'Vitamin K (ug)', 'Thiamin (mg)', 'Riboflavin (mg)', 'Niacin (mg)',
            'Folate (ug)', 'Pantothenic Acid (mg)', 'Biotin (ug)', 'Choline (mg)']
minerals = ['Calcium (g)', 'Chloride (g)', 'Chromium (ug)', 'Copper (mg)', 'Iodine (ug)', 'Iron (mg)',
            'Magnesium (mg)', 'Manganese (mg)', 'Molybdenum (ug)', 'Phosphorus (g)', 'Potassium (g)',
            'Selenium (ug)', 'Sodium (g)', 'Sulfur (g)', 'Zinc (mg)']
essential_aminoacids = ['Cystine (mg)', 'Histidine (mg)', 'Isoleucine (mg)', 'Leucine (mg)', 'Lysine (mg)',
                        'Methionine (mg)', 'Phenylalanine (mg)', 'Threonine (mg)', 'Tryptophan (mg)', 'Valine (mg)']
other_aminoacids = ['Tyrosine (mg)', 'Arginine (mg)', 'Alanine (mg)', 'Aspartic acid (mg)', 'Glutamic acid (mg)',
                    'Glycine (mg)', 'Proline (mg)', 'Serine (mg)', 'Hydroxyproline (mg)']

def plot_group(nut_names, title):
    nut_names_short = [s.split(' (')[0] for s in nut_names]  # Snip off the units from the nutrient names

    # Create a bar to indicate an upper limit
    ul_bar = (norm_ul * 1.04)[nut_names]
    ul_bar[ul_full[nut_names].isnull() == True] = 0

    # Create a bar to mask the UL bar so just the end is exposed
    ul_mask = norm_ul[nut_names]
    ul_mask[ul_full[nut_names].isnull() == True] = 0

    n = []  # normalized values for each bar
    for x, mx in zip(norm[nut_names], ul_mask.values):
        if mx == 0:  # no upper limit
            if x < 1.0:
                n.append(1.0 - (x / 2.0))
            else:
                n.append(0.50)
        else:
            n.append(1.0 - (log10(x) / log10(mx)))
    clrs = sm.to_rgba(n, norm=False)

    g = sns.barplot(x=ul_bar.values, y=nut_names_short, color='red')
    g.set_xscale('log')
    sns.barplot(x=ul_mask.values, y=nut_names_short, color='white')
    bax = sns.barplot(x=norm[nut_names], y=nut_names_short, label="Total", palette=clrs)

    # Add a legend and informative axis label
    g.set(ylabel="", xlabel="Nutrient Mass / RDI (Red Band is UL)", title=title)
    #sns.despine(left=True, bottom=True)

# Construct a group of bar charts for each nutrient group
# Setup the colormap for each bar
cmap = plt.get_cmap('Spectral')
sm = plt.cm.ScalarMappable(cmap=cmap)

#fig = plt.figure(figsize=plt.figaspect(3.))
fig = plt.figure(figsize=(20, 20))
fig.add_subplot(4, 1, 1)
plot_group(vitamins, 'Vitamin amounts relative to RDI')
fig.add_subplot(4, 1, 2)
plot_group(minerals, 'Mineral amounts relative to RDI')
fig.add_subplot(4, 1, 3)
plot_group(essential_aminoacids, 'Essential amino acid amounts relative to RDI')
fig.add_subplot(4, 1, 4)
plot_group(other_aminoacids, 'Other amino acid amounts relative to RDI')
#fig.show()
fig.tight_layout()

#solu_amount = (solution_df['Quantity (g)'] * 14).astype(int)
pd.options.display.float_format = "{:,.2f}".format
solu_amount = solution_df['Quantity (g)']
solu_amount.index.name = 'Ingredient'
solu_amount.reset_index()
_____no_output_____
MIT
notebooks/Morglorb-Peanut-Butter-SLSQP.ipynb
sekondusg/dunli
Load Data Brokers

Import CA data brokers list
fn = '../data/data_brokers/ca-data-brokers.csv'
df = pd.read_csv(fn)
df['state'] = 'CA'
ca = df[['Data Broker Name', 'Email Address', 'Website URL', 'Physical Address', 'state']].copy()
ca.rename(inplace=True, columns={
    'Data Broker Name': 'name',
    'Email Address': 'email',
    'Website URL': 'url',
    'Physical Address': 'address'
})
_____no_output_____
Unlicense
notebooks/0-computer-matching.ipynb
thbland/investigation-data-broker-lobbying
Import VT data brokers list
fn = '../data/data_brokers/vt-data-brokers.csv'
df = pd.read_csv(fn)
df['state'] = "VT"
vt = df[['Data Broker Name:', 'Address:', 'Email Address:', 'Primary Internet Address:', 'state']].copy()
vt.rename(inplace=True, columns={
    'Data Broker Name:': 'name',
    'Address:': 'address',
    'Email Address:': 'email',
    'Primary Internet Address:': 'url'
})
_____no_output_____
Unlicense
notebooks/0-computer-matching.ipynb
thbland/investigation-data-broker-lobbying
Merge the two
brokers = pd.concat([ca, vt])
_____no_output_____
Unlicense
notebooks/0-computer-matching.ipynb
thbland/investigation-data-broker-lobbying
Save as output
brokers.to_csv('../data/matching_process/brokers.csv', index=False)
_____no_output_____
Unlicense
notebooks/0-computer-matching.ipynb
thbland/investigation-data-broker-lobbying
Load Lobbyist Clients
client_list = []
folder = '../data/lobbying/'

for path, dirs, files in os.walk(folder):
    for file in files:
        fullpath = os.path.join(path, file)
        if file.endswith(".xml"):
            with open(fullpath, "rb") as data:
                tree = ET.parse(data)
                root = tree.getroot()
                for filing in root.iter('Filing'):
                    filing_info = filing.attrib
                    for client in filing.iter('Client'):
                        client_info = client.attrib
                        info = {
                            'filing.id': filing_info['ID'],
                            'filing.period': filing_info['Period'],
                            'filing.year': filing_info['Year'],
                            'client.name': client_info['ClientName'],
                            'client.id': client_info['ClientID'],
                            'client.desc': client_info['GeneralDescription'],
                            'client.state': client_info['ClientState'],
                            'client.country': client_info['ClientCountry']
                        }
                        client_list.append(info)

cf = pd.DataFrame(client_list)
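A quick inspection step (added here, not part of the original notebook) shows how many client rows were collected and how they spread across filing years before the 2020 filter is applied.

# Peek at the collected lobbying client records.
print(cf.shape)
print(cf['filing.year'].value_counts().head())
cf.head()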
_____no_output_____
Unlicense
notebooks/0-computer-matching.ipynb
thbland/investigation-data-broker-lobbying
Filter for just 2020 filings
clients = cf[cf['filing.year'] == '2020'].copy()
_____no_output_____
Unlicense
notebooks/0-computer-matching.ipynb
thbland/investigation-data-broker-lobbying
Add bridge to matches
clients['client.name.check'] = clients['client.name'].str.replace(",","").str.replace(".","").str.upper()
/Users/maddy/Documents/2021/markup/investigations-data-broker-lobbying/databrokers/lib/python3.7/site-packages/ipykernel_launcher.py:1: FutureWarning: The default value of regex will change from True to False in a future version. In addition, single character regular expressions will *not* be treated as literal strings when regex=True.
  """Entry point for launching an IPython kernel.
Unlicense
notebooks/0-computer-matching.ipynb
thbland/investigation-data-broker-lobbying
Save as output
clients.to_csv('../data/matching_process/clients.csv', index=False)
_____no_output_____
Unlicense
notebooks/0-computer-matching.ipynb
thbland/investigation-data-broker-lobbying
Guess Matches
brokers['name.check'] = brokers['name'].str.replace(",", "").str.replace(".", "").str.upper()

unique_clients = pd.DataFrame()
unique_clients['client.name.check'] = clients['client.name.check'].unique()

choices = list(brokers['name.check'].unique())
choices.extend([
    'EQUIFAX', 'EXPERIAN', 'X-MODE', 'IHS MARITIME & TRADE', 'ACXIOM',
    'DELOITTE', 'PUBLICIS GROUP', 'ORACLE', 'ACCENTURE FEDERAL SERVICES',
    'RELX', 'ELSEVIER', 'LIVERAMP', 'INMAR', 'EPSILON DATA'])

def guess(client):
    if client in choices:
        return client, 100
    pick, score = process.extractOne(client, choices)
    return pick, score
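A hypothetical spot check (not from the original notebook) illustrates what guess returns: an exact hit in choices comes back with a score of 100, while anything else falls through to process.extractOne for the best fuzzy match. The messy client string below is made up for illustration, and the exact match and score depend on the fuzzy-matching library behind process.

# Exact match: returned as-is with a score of 100.
print(guess('EQUIFAX'))

# Hypothetical messy client name: fuzzy-matched against the broker name list.
print(guess('EQUIFAX INFORMATION SERVICES LLC'))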
_____no_output_____
Unlicense
notebooks/0-computer-matching.ipynb
thbland/investigation-data-broker-lobbying
First Pass: Data Broker Name List
unique_clients['guess'] = unique_clients['client.name.check'].parallel_apply(guess)
unique_clients[['guess.name', 'guess.confidence']] = unique_clients['guess'].apply(pd.Series)
_____no_output_____
Unlicense
notebooks/0-computer-matching.ipynb
thbland/investigation-data-broker-lobbying
Export for Human Double-Checking
describe = unique_clients['guess.confidence'].describe()
guesses = unique_clients[unique_clients['guess.confidence'] > describe['75%']].sort_values(by='guess.confidence', ascending=False)
guesses.to_csv('../data/matching_process/match-guesses.csv', index=False)
_____no_output_____
Unlicense
notebooks/0-computer-matching.ipynb
thbland/investigation-data-broker-lobbying
Classification with Python

In this notebook we practice the classification algorithms that we learned in this course. We load a dataset using the Pandas library, apply the algorithms, and find the best one for this specific dataset using accuracy evaluation methods.

Let's first load the required libraries:
import itertools
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import NullFormatter
import pandas as pd
import matplotlib.ticker as ticker
from sklearn import preprocessing
%matplotlib inline
_____no_output_____
MIT
Course 8: Machine Learning with Python/The best classifier.ipynb
jonathanyeh0723/Coursera_IBM-Data-Science
About dataset

This dataset is about past loans. The __Loan_train.csv__ data set includes details of 346 customers whose loans are already paid off or defaulted. It includes the following fields:

| Field | Description |
|----------------|-----------------------------------------------------------------------------------------|
| Loan_status | Whether a loan is paid off or in collection |
| Principal | Basic principal loan amount at origination |
| Terms | Origination terms, which can be a weekly (7 days), biweekly, or monthly payoff schedule |
| Effective_date | When the loan got originated and took effect |
| Due_date | Since it's a one-time payoff schedule, each loan has one single due date |
| Age | Age of applicant |
| Education | Education of applicant |
| Gender | The gender of applicant |

Let's download the dataset
#!wget -O loan_train.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/loan_train.csv
path = 'https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/loan_train.csv'
_____no_output_____
MIT
Course 8: Machine Learning with Python/The best classifier.ipynb
jonathanyeh0723/Coursera_IBM-Data-Science
Load Data From CSV File
df = pd.read_csv(path)
df.head()
df.shape
_____no_output_____
MIT
Course 8: Machine Learning with Python/The best classifier.ipynb
jonathanyeh0723/Coursera_IBM-Data-Science
Convert the date columns to datetime objects
df['due_date'] = pd.to_datetime(df['due_date'])
df['effective_date'] = pd.to_datetime(df['effective_date'])
df.head()
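With the dates parsed, one typical next step (an assumption here, not shown in this excerpt) is to derive a day-of-week feature and a weekend flag from the effective date:

# Hypothetical feature engineering on the parsed dates.
df['dayofweek'] = df['effective_date'].dt.dayofweek
df['weekend'] = df['dayofweek'].apply(lambda x: 1 if x > 3 else 0)
df[['effective_date', 'dayofweek', 'weekend']].head()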
_____no_output_____
MIT
Course 8: Machine Learning with Python/The best classifier.ipynb
jonathanyeh0723/Coursera_IBM-Data-Science
Data visualization and pre-processing

Let's see how many of each class are in our data set
df['loan_status'].value_counts()
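To make the class balance explicit as proportions (a small added sketch, not in the original notebook), the same counts can be normalized:

# Share of each loan_status class; highlights the roughly 75/25 imbalance.
df['loan_status'].value_counts(normalize=True)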
_____no_output_____
MIT
Course 8: Machine Learning with Python/The best classifier.ipynb
jonathanyeh0723/Coursera_IBM-Data-Science
260 people have paid off the loan on time, while 86 have gone into collection.

Let's plot some columns to understand the data better:
# notice: installing seaborn might take a few minutes
!conda install -c anaconda seaborn -y
import seaborn as sns

bins = np.linspace(df.Principal.min(), df.Principal.max(), 10)
g = sns.FacetGrid(df, col="Gender", hue="loan_status", palette="Set1", col_wrap=2)
g.map(plt.hist, 'Principal', bins=bins, ec="k")
g.axes[-1].legend()
plt.show()

bins = np.linspace(df.age.min(), df.age.max(), 10)
g = sns.FacetGrid(df, col="Gender", hue="loan_status", palette="Set1", col_wrap=2)
g.map(plt.hist, 'age', bins=bins, ec="k")
g.axes[-1].legend()
plt.show()
_____no_output_____
MIT
Course 8: Machine Learning with Python/The best classifier.ipynb
jonathanyeh0723/Coursera_IBM-Data-Science