path | concatenated_notebook
---|---
stringlengths 7–265 | stringlengths 46–17M
dog_breed_classification_capstone/Demo_Dog_Breed_Classification.ipynb
|
###Markdown
Demo for Dog Breed Classification. Import the necessary modules
###Code
from model_transfer import *
import torch  # torch.load / torch.device are used explicitly below
from PIL import Image
import requests
from io import BytesIO
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Download a German Shepherd image from Wikipedia and display it
###Code
url = 'https://upload.wikimedia.org/wikipedia/commons/0/00/1._DSC_0346_%2810096362833%29.jpg'
response = requests.get(url)
img = Image.open(BytesIO(response.content))
plt.imshow(img)
_ = plt.axis('off')
###Output
_____no_output_____
###Markdown
Create the PyTorch model and load the trained weights
###Code
model_transfer = create_dog_breed_classification_model()
# The model was trained on a GPU. To make this demo run on platforms without one,
# don't assume CUDA is available: map the checkpoint to the CPU when loading.
# https://pytorch.org/tutorials/beginner/saving_loading_models.html#save-on-gpu-load-on-cpu
model_transfer.load_state_dict(torch.load('model_checkpoints/model_transfer.pt', map_location=torch.device('cpu')))
###Output
_____no_output_____
###Markdown
Run inference and see what the model predicts
###Code
output = perform_inference(model_transfer, img)
output
###Output
_____no_output_____
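###Markdown
Illustrative addition (not part of the original demo): if `output` holds one raw score per breed, a label can be recovered with a softmax and an argmax. The `class_names` list below is a placeholder; the real breed names would come from the training data, and this step is unnecessary if `perform_inference` already returns a label.
###Code
import torch
import torch.nn.functional as F

# Placeholder names purely for illustration; not the model's real class list.
class_names = ["Affenpinscher", "Afghan hound", "German shepherd dog"]

logits = torch.tensor([[0.2, 0.1, 2.5]])  # stand-in for `output` if it holds raw scores
probs = F.softmax(logits, dim=1)          # convert scores to probabilities
top_prob, top_idx = probs.max(dim=1)      # pick the most likely breed
print(class_names[top_idx.item()], round(top_prob.item(), 3))
###Output
_____no_output_____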
|
notebooks/export_lyrics_per_timespan.ipynb
|
###Markdown
Export Lyrics for Trend Analysis
###Code
import json
import pathlib
from collections import defaultdict
import math
###Output
_____no_output_____
###Markdown
Define Parameters
###Code
path_to_corpus = "../data/rebetiko_corpus.json"
output_path = "../data/subcorpora/trend_analysis/"
output_path = pathlib.Path(output_path)
output_path.mkdir(exist_ok=True, parents=True)
###Output
_____no_output_____
###Markdown
Load the Rebetiko Corpus
###Code
with open(path_to_corpus) as f:
corpus_data = json.load(f)
corpus_data = corpus_data["RECORDS"]
###Output
_____no_output_____
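###Markdown
Quick look at the structure the later cells rely on (illustrative addition, not part of the original notebook): each record is expected to carry at least `year` and `lyrics` keys.
###Code
print(len(corpus_data), "records")
print(sorted(corpus_data[0].keys()))  # should include 'year' and 'lyrics'
###Output
_____no_output_____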
###Markdown
Export Lyrics by Year
###Code
lyrics_by_year = defaultdict(str)
adapted_output_path = output_path / "lyrics_by_year"
adapted_output_path.mkdir(exist_ok=True, parents=True)
for song in corpus_data:
    if song["lyrics"] is None or song["year"] is None:
        continue
    year = song["year"]
    if 1900 <= year <= 2000:
        lyrics_by_year[year] += song["lyrics"] + "\n\n"
for year, lyrics in lyrics_by_year.items():
    filename = str(year) + ".txt"
    with open(adapted_output_path / filename, "w") as output_file:
        output_file.write(lyrics)
###Output
_____no_output_____
###Markdown
Export Lyrics by Epoch (Decades and Five-Year Periods)
###Code
lyrics_by_decade = defaultdict(str)
lyrics_by_quinquennial = defaultdict(str)
for song in corpus_data:
    if song["year"] is None or song["lyrics"] is None:
        continue
    if song["year"] <= 0:
        continue
    year = song["year"]
    # Round down to the first year of the decade (e.g. 1937 -> 1930)
    decade = int(math.ceil((int(year) - 9) / 10.0)) * 10
    # Round down to the first year of the five-year period (e.g. 1937 -> 1935)
    quinquennial = int(math.ceil((int(year) - 4) / 5.0)) * 5
    lyrics_by_decade[decade] += song["lyrics"] + "\n\n"
    lyrics_by_quinquennial[quinquennial] += song["lyrics"] + "\n\n"
adapted_output_path = output_path / "decades"
adapted_output_path.mkdir(exist_ok=True, parents=True)
for decade, lyrics in lyrics_by_decade.items():
    filename = str(decade) + "-" + str(decade + 9) + ".txt"
    with open(adapted_output_path / filename, "w") as output_file:
        output_file.write(lyrics)
adapted_output_path = output_path / "quinquennials"
adapted_output_path.mkdir(exist_ok=True, parents=True)
for quinquennial, lyrics in lyrics_by_quinquennial.items():
    filename = str(quinquennial) + "-" + str(quinquennial + 4) + ".txt"
    with open(adapted_output_path / filename, "w") as output_file:
        output_file.write(lyrics)
###Output
_____no_output_____
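###Markdown
A quick sanity check of the rounding used above (illustrative addition, not part of the original notebook): both formulas floor a year to the first year of its decade or five-year period.
###Code
import math

def decade_of(year):
    # e.g. 1937 -> 1930, 1940 -> 1940
    return int(math.ceil((year - 9) / 10.0)) * 10

def quinquennial_of(year):
    # e.g. 1937 -> 1935, 1934 -> 1930
    return int(math.ceil((year - 4) / 5.0)) * 5

assert decade_of(1937) == 1930 and decade_of(1940) == 1940
assert quinquennial_of(1937) == 1935 and quinquennial_of(1934) == 1930
###Output
_____no_output_____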
###Markdown
Write Lyrics for Defined Timespans
###Code
timespans = [
(1922, 1932),
(1906, 1932),
(1906, 1935),
(1933, 1935),
(1936, 1941),
(1942, 1945),
(1942, 1946),
(1946, 1946),
(1947, 1960),
(1942, 1960),
(1946, 1960),
(1960, 1974),
(1960, 1979),
(1947, 1974),
(1942, 1974),
(1980, 1992),
(1974, 1992),
(1900, 1909),
(1910, 1919),
(1920, 1929),
(1930, 1939),
(1940, 1949),
(1950, 1959),
(1960, 1969),
(1970, 1979),
(1980, 1989),
(1990, 1999),
]
lyrics_by_timespan = defaultdict(str)
for timespan in timespans:
    start, end = timespan
    for song in corpus_data:
        if song["lyrics"] is None or song["year"] is None:
            continue
        if start <= song["year"] <= end:
            lyrics_by_timespan[timespan] += song["lyrics"] + "\n\n"
adapted_output_path = output_path / "timespans"
adapted_output_path.mkdir(exist_ok=True, parents=True)
for timespan, lyrics in lyrics_by_timespan.items():
    filename = str(timespan[0]) + "-" + str(timespan[1]) + ".txt"
    with open(adapted_output_path / filename, "w") as output_file:
        output_file.write(lyrics)
###Output
_____no_output_____
###Markdown
Write All Lyrics to a Single File
###Code
all_lyrics = ""
for song in corpus_data:
if song["lyrics"] is None:
continue
all_lyrics += song["lyrics"]
all_lyrics += "\n\n"
output_file = open(output_path / "all_lyrics.txt", "w")
output_file.write(all_lyrics)
output_file.close()
###Output
_____no_output_____
|
module1-logistic-regression/logistic_regression_categorical_encoding.ipynb
|
###Markdown
_Lambda School Data Science, Classification 1_This sprint, your project is about water pumps in Tanzania. Can you predict which water pumps are faulty? Logistic Regression, One-Hot Encoding Objectives- begin with baselines for classification- use classification metric: accuracy- do train/validate/test split- use scikit-learn for logistic regression- do one-hot encoding- scale features- submit to predictive modeling competitions Get ready Install [category_encoders](http://contrib.scikit-learn.org/categorical-encoding/)- Local Anaconda: `conda install -c conda-forge category_encoders`- Google Colab: `pip install category_encoders` Get started on Kaggle1. [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. 2. Go to our Kaggle InClass competition website. You will be given the URL in Slack.3. Go to the Rules page. Accept the rules of the competition. Get data Option 1. Google DriveDownload files from [Google Drive](https://drive.google.com/drive/u/1/folders/1zZqKi90E2gtf-TEGf8Oh4sY7YkpAw0vf).- [train_features.csv](https://drive.google.com/uc?export=download&id=14ULvX0uOgftTB2s97uS8lIx1nHGQIB0P)- [train_labels.csv](https://drive.google.com/uc?export=download&id=1r441wLr7gKGHGLyPpKauvCuUOU556S2f)- [test_features.csv](https://drive.google.com/uc?export=download&id=1wvsYl9hbRbZuIuoaLWCsW_kbcxCdocHz)- [sample_submission.csv](https://drive.google.com/uc?export=download&id=1kfJewnmhowpUo381oSn3XqsQ6Eto23XV) Option 2. Kaggle web UI Go to our Kaggle InClass competition webpage. Go to the Data page. After you have accepted the rules of the competition, use the download buttons to download the data. Option 3. Kaggle API1. [Follow these instructions](https://github.com/Kaggle/kaggle-apiapi-credentials) to create a Kaggle “API Token” and download your `kaggle.json` file.2. Put `kaggle.json` in the correct location. - If you're using Anaconda, put the file in the directory specified in the [instructions](https://github.com/Kaggle/kaggle-apiapi-credentials). - If you're using Google Colab, upload the file to your Google Drive, and run this cell: ``` from google.colab import drive drive.mount('/content/drive') %env KAGGLE_CONFIG_DIR=/content/drive/My Drive/ ```3. Install the Kaggle API package.```pip install kaggle```4. After you have accepted the rules of the competiton, use the Kaggle API package to get the data.```kaggle competitions download -c COMPETITION-NAME``` Read data - `train_features.csv` : the training set features - `train_labels.csv` : the training set labels - `test_features.csv` : the test set features - `sample_submission.csv` : a sample submission file in the correct format
###Code
import pandas as pd
train_features = pd.read_csv('https://drive.google.com/uc?export=download&id=14ULvX0uOgftTB2s97uS8lIx1nHGQIB0P')
train_labels = pd.read_csv('https://drive.google.com/uc?export=download&id=1r441wLr7gKGHGLyPpKauvCuUOU556S2f')
test_features = pd.read_csv('https://drive.google.com/uc?export=download&id=1wvsYl9hbRbZuIuoaLWCsW_kbcxCdocHz')
sample_submission = pd.read_csv('https://drive.google.com/uc?export=download&id=1kfJewnmhowpUo381oSn3XqsQ6Eto23XV')
train_features.shape, train_labels.shape, test_features.shape, sample_submission.shape
###Output
_____no_output_____
###Markdown
FeaturesYour goal is to predict the operating condition of a waterpoint for each record in the dataset. You are provided the following set of information about the waterpoints:- `amount_tsh` : Total static head (amount water available to waterpoint)- `date_recorded` : The date the row was entered- `funder` : Who funded the well- `gps_height` : Altitude of the well- `installer` : Organization that installed the well- `longitude` : GPS coordinate- `latitude` : GPS coordinate- `wpt_name` : Name of the waterpoint if there is one- `num_private` : - `basin` : Geographic water basin- `subvillage` : Geographic location- `region` : Geographic location- `region_code` : Geographic location (coded)- `district_code` : Geographic location (coded)- `lga` : Geographic location- `ward` : Geographic location- `population` : Population around the well- `public_meeting` : True/False- `recorded_by` : Group entering this row of data- `scheme_management` : Who operates the waterpoint- `scheme_name` : Who operates the waterpoint- `permit` : If the waterpoint is permitted- `construction_year` : Year the waterpoint was constructed- `extraction_type` : The kind of extraction the waterpoint uses- `extraction_type_group` : The kind of extraction the waterpoint uses- `extraction_type_class` : The kind of extraction the waterpoint uses- `management` : How the waterpoint is managed- `management_group` : How the waterpoint is managed- `payment` : What the water costs- `payment_type` : What the water costs- `water_quality` : The quality of the water- `quality_group` : The quality of the water- `quantity` : The quantity of water- `quantity_group` : The quantity of water- `source` : The source of the water- `source_type` : The source of the water- `source_class` : The source of the water- `waterpoint_type` : The kind of waterpoint- `waterpoint_type_group` : The kind of waterpoint LabelsThere are three possible values:- `functional` : the waterpoint is operational and there are no repairs needed- `functional needs repair` : the waterpoint is operational, but needs repairs- `non functional` : the waterpoint is not operational Why doesn't Kaggle give you labels for the test set? Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> One great thing about Kaggle competitions is that they force you to think about validation sets more rigorously (in order to do well). For those who are new to Kaggle, it is a platform that hosts machine learning competitions. Kaggle typically breaks the data into two sets you can download:> 1. a **training set**, which includes the _independent variables_, as well as the _dependent variable_ (what you are trying to predict).> 2. a **test set**, which just has the _independent variables_. You will make predictions for the test set, which you can submit to Kaggle and get back a score of how well you did.> This is the basic idea needed to get started with machine learning, but to do well, there is a bit more complexity to understand. You will want to create your own training and validation sets (by splitting the Kaggle “training” data). You will just use your smaller training set (a subset of Kaggle’s training data) for building your model, and you can evaluate it on your validation set (also a subset of Kaggle’s training data) before you submit to Kaggle.> The most important reason for this is that Kaggle has split the test data into two sets: for the public and private leaderboards. 
The score you see on the public leaderboard is just for a subset of your predictions (and you don’t know which subset!). How your predictions fare on the private leaderboard won’t be revealed until the end of the competition. The reason this is important is that you could end up overfitting to the public leaderboard and you wouldn’t realize it until the very end when you did poorly on the private leaderboard. Using a good validation set can prevent this. You can check if your validation set is any good by seeing if your model has similar scores on it to compared with on the Kaggle test set. ...> Understanding these distinctions is not just useful for Kaggle. In any predictive machine learning project, you want your model to be able to perform well on new data. Why care about model validation? Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> An all-too-common scenario: a seemingly impressive machine learning model is a complete failure when implemented in production. The fallout includes leaders who are now skeptical of machine learning and reluctant to try it again. How can this happen?> One of the most likely culprits for this disconnect between results in development vs results in production is a poorly chosen validation set (or even worse, no validation set at all). Owen Zhang, [Winning Data Science Competitions](https://www.slideshare.net/OwenZhang2/tips-for-data-science-competitions/8)> Good validation is _more important_ than good models. James, Witten, Hastie, Tibshirani, [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/), Chapter 2.2, Assessing Model Accuracy> In general, we do not really care how well the method works training on the training data. Rather, _we are interested in the accuracy of the predictions that we obtain when we apply our method to previously unseen test data._ Why is this what we care about? > Suppose that we are interested test data in developing an algorithm to predict a stock’s price based on previous stock returns. We can train the method using stock returns from the past 6 months. But we don’t really care how well our method predicts last week’s stock price. We instead care about how well it will predict tomorrow’s price or next month’s price. > On a similar note, suppose that we have clinical measurements (e.g. weight, blood pressure, height, age, family history of disease) for a number of patients, as well as information about whether each patient has diabetes. We can use these patients to train a statistical learning method to predict risk of diabetes based on clinical measurements. In practice, we want this method to accurately predict diabetes risk for _future patients_ based on their clinical measurements. We are not very interested in whether or not the method accurately predicts diabetes risk for patients used to train the model, since we already know which of those patients have diabetes. Why hold out an independent test set? Owen Zhang, [Winning Data Science Competitions](https://www.slideshare.net/OwenZhang2/tips-for-data-science-competitions)> There are many ways to overfit. Beware of "multiple comparison fallacy." There is a cost in "peeking at the answer."> Good validation is _more important_ than good models. Simple training/validation split is _not_ enough. When you looked at your validation result for the Nth time, you are training models on it.> If possible, have "holdout" dataset that you do not touch at all during model build process. 
This includes feature extraction, etc.> What if holdout result is bad? Be brave and scrap the project. Hastie, Tibshirani, and Friedman, [The Elements of Statistical Learning](http://statweb.stanford.edu/~tibs/ElemStatLearn/), Chapter 7: Model Assessment and Selection> If we are in a data-rich situation, the best approach is to randomly divide the dataset into three parts: a training set, a validation set, and a test set. The training set is used to fit the models; the validation set is used to estimate prediction error for model selection; the test set is used for assessment of the generalization error of the final chosen model. Ideally, the test set should be kept in a "vault," and be brought out only at the end of the data analysis. Suppose instead that we use the test-set repeatedly, choosing the model with the smallest test-set error. Then the test set error of the final chosen model will underestimate the true test error, sometimes substantially. Andreas Mueller and Sarah Guido, [Introduction to Machine Learning with Python](https://books.google.com/books?id=1-4lDQAAQBAJ&pg=PA270)> The distinction between the training set, validation set, and test set is fundamentally important to applying machine learning methods in practice. Any choices made based on the test set accuracy "leak" information from the test set into the model. Therefore, it is important to keep a separate test set, which is only used for the final evaluation. It is good practice to do all exploratory analysis and model selection using the combination of a training and a validation set, and reserve the test set for a final evaluation - this is even true for exploratory visualization. Strictly speaking, evaluating more than one model on the test set and choosing the better of the two will result in an overly optimistic estimate of how accurate the model is. Hadley Wickham, [R for Data Science](https://r4ds.had.co.nz/model-intro.htmlhypothesis-generation-vs.hypothesis-confirmation)> There is a pair of ideas that you must understand in order to do inference correctly:> 1. Each observation can either be used for exploration or confirmation, not both.> 2. You can use an observation as many times as you like for exploration, but you can only use it once for confirmation. As soon as you use an observation twice, you’ve switched from confirmation to exploration.> This is necessary because to confirm a hypothesis you must use data independent of the data that you used to generate the hypothesis. Otherwise you will be over optimistic. There is absolutely nothing wrong with exploration, but you should never sell an exploratory analysis as a confirmatory analysis because it is fundamentally misleading.> If you are serious about doing an confirmatory analysis, one approach is to split your data into three pieces before you begin the analysis. Begin with baselines for classification Why begin with baselines?[My mentor](https://www.linkedin.com/in/jason-sanchez-62093847/) [taught me](https://youtu.be/0GrciaGYzV0?t=40s):>***Your first goal should always, always, always be getting a generalized prediction as fast as possible.*** You shouldn't spend a lot of time trying to tune your model, trying to add features, trying to engineer features, until you've actually gotten one prediction, at least. > The reason why that's a really good thing is because then ***you'll set a benchmark*** for yourself, and you'll be able to directly see how much effort you put in translates to a better prediction. 
> What you'll find by working on many models: some effort you put in, actually has very little effect on how well your final model does at predicting new observations. Whereas some very easy changes actually have a lot of effect. And so you get better at allocating your time more effectively.My mentor's advice is echoed and elaborated in several sources:[Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)> Why start with a baseline? A baseline will take you less than 1/10th of the time, and could provide up to 90% of the results. A baseline puts a more complex model into context. Baselines are easy to deploy.[Measure Once, Cut Twice: Moving Towards Iteration in Data Science](https://blog.datarobot.com/measure-once-cut-twice-moving-towards-iteration-in-data-science)> The iterative approach in data science starts with emphasizing the importance of getting to a first model quickly, rather than starting with the variables and features. Once the first model is built, the work then steadily focuses on continual improvement.[*Data Science for Business*](https://books.google.com/books?id=4ZctAAAAQBAJ&pg=PT276), Chapter 7.3: Evaluation, Baseline Performance, and Implications for Investments in Data> *Consider carefully what would be a reasonable baseline against which to compare model performance.* This is important for the data science team in order to understand whether they indeed are improving performance, and is equally important for demonstrating to stakeholders that mining the data has added value. What does baseline mean?Baseline is an overloaded term, as you can see in the links above. Baseline has multiple meanings: The score you'd get by guessing> A baseline for classification can be the most common class in the training dataset.> A baseline for regression can be the mean of the training labels. > A baseline for time-series regressions can be the value from the previous timestep. —[Will Koehrsen](https://twitter.com/koehrsen_will/status/1088863527778111488) Fast, first models that beat guessingWhat my mentor was talking about. Complete, tuned "simpler" modelCan be simpler mathematically and computationally. For example, Logistic Regression versus Deep Learning.Or can be simpler for the data scientist, with less work. For example, a model with less feature engineering versus a model with more feature engineering. Minimum performance that "matters"To go to production and get business value. Human-level performance Your goal may to be match, or nearly match, human performance, but with better speed, cost, or consistency.Or your goal may to be exceed human performance. Get majority class baseline[Will Koehrsen](https://twitter.com/koehrsen_will/status/1088863527778111488)> A baseline for classification can be the most common class in the training dataset.[*Data Science for Business*](https://books.google.com/books?id=4ZctAAAAQBAJ&pg=PT276), Chapter 7.3: Evaluation, Baseline Performance, and Implications for Investments in Data> For classification tasks, one good baseline is the _majority classifier_, a naive classifier that always chooses the majority class of the training dataset (see Note: Base rate in Holdout Data and Fitting Graphs). This may seem like advice so obvious it can be passed over quickly, but it is worth spending an extra moment here. There are many cases where smart, analytical people have been tripped up in skipping over this basic comparison. 
For example, an analyst may see a classification accuracy of 94% from her classifier and conclude that it is doing fairly well—when in fact only 6% of the instances are positive. So, the simple majority prediction classifier also would have an accuracy of 94%. Determine majority class What if we guessed the majority class for every prediction? Use classification metric: accuracy [_Classification metrics are different from regression metrics!_](https://scikit-learn.org/stable/modules/model_evaluation.html)- Don't use _regression_ metrics to evaluate _classification_ tasks.- Don't use _classification_ metrics to evaluate _regression_ tasks.[Accuracy](https://scikit-learn.org/stable/modules/model_evaluation.htmlaccuracy-score) is a common metric for classification. Accuracy is the ["proportion of correct classifications"](https://en.wikipedia.org/wiki/Confusion_matrix): the number of correct predictions divided by the total number of predictions. What is the baseline accuracy if we guessed the majority class for every prediction? Do train/validate/test split Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> You will want to create your own training and validation sets (by splitting the Kaggle “training” data). You will just use your smaller training set (a subset of Kaggle’s training data) for building your model, and you can evaluate it on your validation set (also a subset of Kaggle’s training data) before you submit to Kaggle. Sebastian Raschka, [Model Evaluation](https://sebastianraschka.com/blog/2018/model-evaluation-selection-part4.html)> Since “a picture is worth a thousand words,” I want to conclude with a figure (shown below) that summarizes my personal recommendations ... Usually, we want to do **"Model selection (hyperparameter optimization) _and_ performance estimation."**Therefore, we use **"3-way holdout method (train/validation/test split)"** or we use **"cross-validation with independent test set."** We have two options for where we choose to split:- Time- RandomTo split on time, we can use pandas.To split randomly, we can use the [**`sklearn.model_selection.train_test_split`**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) function. Use scikit-learn for logistic regression- [sklearn.linear_model.LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)- Wikipedia, [Logistic regression](https://en.wikipedia.org/wiki/Logistic_regression) Begin with baselines: fast, first models Drop non-numeric features Drop nulls if necessary Fit Logistic Regresson on train data Evaluate on validation data What predictions does a Logistic Regression return? Do one-hot encoding of categorical features Install and import [category_encoders](http://contrib.scikit-learn.org/categorical-encoding/)- Local Anaconda: `conda install -c conda-forge category_encoders`- Google Colab: `pip install category_encoders`
###Code
# !pip install category_encoders
import category_encoders as ce
###Output
_____no_output_____
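###Markdown
The markdown above describes the full workflow (majority-class baseline, accuracy, train/validation split, logistic regression, one-hot encoding, feature scaling), but no accompanying code appears before the notebook content restarts below. Here is a minimal, illustrative sketch of that pipeline, not the original author's solution. It assumes the `train_features` / `train_labels` dataframes loaded earlier, the label column `status_group`, the categorical feature `quantity`, and an `id` column to exclude, all of which match the cells that appear later in this file.
###Code
# Illustrative sketch only; column names ('status_group', 'quantity', 'id') are assumed.
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score
import category_encoders as ce

y = train_labels['status_group']
print('majority-class baseline accuracy:', y.value_counts(normalize=True).max())

X_tr, X_val, y_tr, y_val = train_test_split(
    train_features, y, train_size=0.80, stratify=y, random_state=42)

features = ['quantity'] + X_tr.select_dtypes('number').columns.drop('id').tolist()
encoder = ce.OneHotEncoder(use_cat_names=True)
scaler = StandardScaler()
X_tr_enc = scaler.fit_transform(encoder.fit_transform(X_tr[features]))
X_val_enc = scaler.transform(encoder.transform(X_val[features]))

model = LogisticRegression(solver='lbfgs', multi_class='auto', max_iter=1000)
model.fit(X_tr_enc, y_tr)
print('validation accuracy:', accuracy_score(y_val, model.predict(X_val_enc)))
###Output
_____no_output_____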
###Markdown
Ridley Leisy _Lambda School Data Science, Classification 1_This sprint, your project is about water pumps in Tanzania. Can you predict which water pumps are faulty? Logistic Regression, One-Hot Encoding Objectives- begin with baselines for classification- use classification metric: accuracy- do train/validate/test split- use scikit-learn for logistic regression- do one-hot encoding- scale features- submit to predictive modeling competitions Get ready Install [category_encoders](http://contrib.scikit-learn.org/categorical-encoding/)- Local Anaconda: `conda install -c conda-forge category_encoders`- Google Colab: `pip install category_encoders` Get started on Kaggle1. [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. 2. Go to our Kaggle InClass competition website. You will be given the URL in Slack.3. Go to the Rules page. Accept the rules of the competition. Get data Option 1. Google DriveDownload files from [Google Drive](https://drive.google.com/drive/u/1/folders/1zZqKi90E2gtf-TEGf8Oh4sY7YkpAw0vf).- [train_features.csv](https://drive.google.com/uc?export=download&id=14ULvX0uOgftTB2s97uS8lIx1nHGQIB0P)- [train_labels.csv](https://drive.google.com/uc?export=download&id=1r441wLr7gKGHGLyPpKauvCuUOU556S2f)- [test_features.csv](https://drive.google.com/uc?export=download&id=1wvsYl9hbRbZuIuoaLWCsW_kbcxCdocHz)- [sample_submission.csv](https://drive.google.com/uc?export=download&id=1kfJewnmhowpUo381oSn3XqsQ6Eto23XV) Option 2. Kaggle web UI Go to our Kaggle InClass competition webpage. Go to the Data page. After you have accepted the rules of the competition, use the download buttons to download the data. Option 3. Kaggle API1. [Follow these instructions](https://github.com/Kaggle/kaggle-apiapi-credentials) to create a Kaggle “API Token” and download your `kaggle.json` file.2. Put `kaggle.json` in the correct location. - If you're using Anaconda, put the file in the directory specified in the [instructions](https://github.com/Kaggle/kaggle-apiapi-credentials). - If you're using Google Colab, upload the file to your Google Drive, and run this cell: ``` from google.colab import drive drive.mount('/content/drive') %env KAGGLE_CONFIG_DIR=/content/drive/My Drive/ ```3. Install the Kaggle API package.```pip install kaggle```4. After you have accepted the rules of the competiton, use the Kaggle API package to get the data.```kaggle competitions download -c COMPETITION-NAME``` Read data - `train_features.csv` : the training set features - `train_labels.csv` : the training set labels - `test_features.csv` : the test set features - `sample_submission.csv` : a sample submission file in the correct format
###Code
import pandas as pd
train_features = pd.read_csv('https://drive.google.com/uc?export=download&id=14ULvX0uOgftTB2s97uS8lIx1nHGQIB0P')
train_labels = pd.read_csv('https://drive.google.com/uc?export=download&id=1r441wLr7gKGHGLyPpKauvCuUOU556S2f')
test_features = pd.read_csv('https://drive.google.com/uc?export=download&id=1wvsYl9hbRbZuIuoaLWCsW_kbcxCdocHz')
sample_submission = pd.read_csv('https://drive.google.com/uc?export=download&id=1kfJewnmhowpUo381oSn3XqsQ6Eto23XV')
train_features.shape, train_labels.shape, test_features.shape, sample_submission.shape
train_labels['status_group'].unique()
###Output
_____no_output_____
###Markdown
FeaturesYour goal is to predict the operating condition of a waterpoint for each record in the dataset. You are provided the following set of information about the waterpoints:- `amount_tsh` : Total static head (amount water available to waterpoint)- `date_recorded` : The date the row was entered- `funder` : Who funded the well- `gps_height` : Altitude of the well- `installer` : Organization that installed the well- `longitude` : GPS coordinate- `latitude` : GPS coordinate- `wpt_name` : Name of the waterpoint if there is one- `num_private` : - `basin` : Geographic water basin- `subvillage` : Geographic location- `region` : Geographic location- `region_code` : Geographic location (coded)- `district_code` : Geographic location (coded)- `lga` : Geographic location- `ward` : Geographic location- `population` : Population around the well- `public_meeting` : True/False- `recorded_by` : Group entering this row of data- `scheme_management` : Who operates the waterpoint- `scheme_name` : Who operates the waterpoint- `permit` : If the waterpoint is permitted- `construction_year` : Year the waterpoint was constructed- `extraction_type` : The kind of extraction the waterpoint uses- `extraction_type_group` : The kind of extraction the waterpoint uses- `extraction_type_class` : The kind of extraction the waterpoint uses- `management` : How the waterpoint is managed- `management_group` : How the waterpoint is managed- `payment` : What the water costs- `payment_type` : What the water costs- `water_quality` : The quality of the water- `quality_group` : The quality of the water- `quantity` : The quantity of water- `quantity_group` : The quantity of water- `source` : The source of the water- `source_type` : The source of the water- `source_class` : The source of the water- `waterpoint_type` : The kind of waterpoint- `waterpoint_type_group` : The kind of waterpoint LabelsThere are three possible values:- `functional` : the waterpoint is operational and there are no repairs needed- `functional needs repair` : the waterpoint is operational, but needs repairs- `non functional` : the waterpoint is not operational Why doesn't Kaggle give you labels for the test set? Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> One great thing about Kaggle competitions is that they force you to think about validation sets more rigorously (in order to do well). For those who are new to Kaggle, it is a platform that hosts machine learning competitions. Kaggle typically breaks the data into two sets you can download:> 1. a **training set**, which includes the _independent variables_, as well as the _dependent variable_ (what you are trying to predict).> 2. a **test set**, which just has the _independent variables_. You will make predictions for the test set, which you can submit to Kaggle and get back a score of how well you did.> This is the basic idea needed to get started with machine learning, but to do well, there is a bit more complexity to understand. You will want to create your own training and validation sets (by splitting the Kaggle “training” data). You will just use your smaller training set (a subset of Kaggle’s training data) for building your model, and you can evaluate it on your validation set (also a subset of Kaggle’s training data) before you submit to Kaggle.> The most important reason for this is that Kaggle has split the test data into two sets: for the public and private leaderboards. 
The score you see on the public leaderboard is just for a subset of your predictions (and you don’t know which subset!). How your predictions fare on the private leaderboard won’t be revealed until the end of the competition. The reason this is important is that you could end up overfitting to the public leaderboard and you wouldn’t realize it until the very end when you did poorly on the private leaderboard. Using a good validation set can prevent this. You can check if your validation set is any good by seeing if your model has similar scores on it to compared with on the Kaggle test set. ...> Understanding these distinctions is not just useful for Kaggle. In any predictive machine learning project, you want your model to be able to perform well on new data. Why care about model validation? Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> An all-too-common scenario: a seemingly impressive machine learning model is a complete failure when implemented in production. The fallout includes leaders who are now skeptical of machine learning and reluctant to try it again. How can this happen?> One of the most likely culprits for this disconnect between results in development vs results in production is a poorly chosen validation set (or even worse, no validation set at all). Owen Zhang, [Winning Data Science Competitions](https://www.slideshare.net/OwenZhang2/tips-for-data-science-competitions/8)> Good validation is _more important_ than good models. James, Witten, Hastie, Tibshirani, [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/), Chapter 2.2, Assessing Model Accuracy> In general, we do not really care how well the method works training on the training data. Rather, _we are interested in the accuracy of the predictions that we obtain when we apply our method to previously unseen test data._ Why is this what we care about? > Suppose that we are interested test data in developing an algorithm to predict a stock’s price based on previous stock returns. We can train the method using stock returns from the past 6 months. But we don’t really care how well our method predicts last week’s stock price. We instead care about how well it will predict tomorrow’s price or next month’s price. > On a similar note, suppose that we have clinical measurements (e.g. weight, blood pressure, height, age, family history of disease) for a number of patients, as well as information about whether each patient has diabetes. We can use these patients to train a statistical learning method to predict risk of diabetes based on clinical measurements. In practice, we want this method to accurately predict diabetes risk for _future patients_ based on their clinical measurements. We are not very interested in whether or not the method accurately predicts diabetes risk for patients used to train the model, since we already know which of those patients have diabetes. Why hold out an independent test set? Owen Zhang, [Winning Data Science Competitions](https://www.slideshare.net/OwenZhang2/tips-for-data-science-competitions)> There are many ways to overfit. Beware of "multiple comparison fallacy." There is a cost in "peeking at the answer."> Good validation is _more important_ than good models. Simple training/validation split is _not_ enough. When you looked at your validation result for the Nth time, you are training models on it.> If possible, have "holdout" dataset that you do not touch at all during model build process. 
This includes feature extraction, etc.> What if holdout result is bad? Be brave and scrap the project. Hastie, Tibshirani, and Friedman, [The Elements of Statistical Learning](http://statweb.stanford.edu/~tibs/ElemStatLearn/), Chapter 7: Model Assessment and Selection> If we are in a data-rich situation, the best approach is to randomly divide the dataset into three parts: a training set, a validation set, and a test set. The training set is used to fit the models; the validation set is used to estimate prediction error for model selection; the test set is used for assessment of the generalization error of the final chosen model. Ideally, the test set should be kept in a "vault," and be brought out only at the end of the data analysis. Suppose instead that we use the test-set repeatedly, choosing the model with the smallest test-set error. Then the test set error of the final chosen model will underestimate the true test error, sometimes substantially. Andreas Mueller and Sarah Guido, [Introduction to Machine Learning with Python](https://books.google.com/books?id=1-4lDQAAQBAJ&pg=PA270)> The distinction between the training set, validation set, and test set is fundamentally important to applying machine learning methods in practice. Any choices made based on the test set accuracy "leak" information from the test set into the model. Therefore, it is important to keep a separate test set, which is only used for the final evaluation. It is good practice to do all exploratory analysis and model selection using the combination of a training and a validation set, and reserve the test set for a final evaluation - this is even true for exploratory visualization. Strictly speaking, evaluating more than one model on the test set and choosing the better of the two will result in an overly optimistic estimate of how accurate the model is. Hadley Wickham, [R for Data Science](https://r4ds.had.co.nz/model-intro.htmlhypothesis-generation-vs.hypothesis-confirmation)> There is a pair of ideas that you must understand in order to do inference correctly:> 1. Each observation can either be used for exploration or confirmation, not both.> 2. You can use an observation as many times as you like for exploration, but you can only use it once for confirmation. As soon as you use an observation twice, you’ve switched from confirmation to exploration.> This is necessary because to confirm a hypothesis you must use data independent of the data that you used to generate the hypothesis. Otherwise you will be over optimistic. There is absolutely nothing wrong with exploration, but you should never sell an exploratory analysis as a confirmatory analysis because it is fundamentally misleading.> If you are serious about doing an confirmatory analysis, one approach is to split your data into three pieces before you begin the analysis. Begin with baselines for classification Why begin with baselines?[My mentor](https://www.linkedin.com/in/jason-sanchez-62093847/) [taught me](https://youtu.be/0GrciaGYzV0?t=40s):>***Your first goal should always, always, always be getting a generalized prediction as fast as possible.*** You shouldn't spend a lot of time trying to tune your model, trying to add features, trying to engineer features, until you've actually gotten one prediction, at least. > The reason why that's a really good thing is because then ***you'll set a benchmark*** for yourself, and you'll be able to directly see how much effort you put in translates to a better prediction. 
> What you'll find by working on many models: some effort you put in, actually has very little effect on how well your final model does at predicting new observations. Whereas some very easy changes actually have a lot of effect. And so you get better at allocating your time more effectively.My mentor's advice is echoed and elaborated in several sources:[Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)> Why start with a baseline? A baseline will take you less than 1/10th of the time, and could provide up to 90% of the results. A baseline puts a more complex model into context. Baselines are easy to deploy.[Measure Once, Cut Twice: Moving Towards Iteration in Data Science](https://blog.datarobot.com/measure-once-cut-twice-moving-towards-iteration-in-data-science)> The iterative approach in data science starts with emphasizing the importance of getting to a first model quickly, rather than starting with the variables and features. Once the first model is built, the work then steadily focuses on continual improvement.[*Data Science for Business*](https://books.google.com/books?id=4ZctAAAAQBAJ&pg=PT276), Chapter 7.3: Evaluation, Baseline Performance, and Implications for Investments in Data> *Consider carefully what would be a reasonable baseline against which to compare model performance.* This is important for the data science team in order to understand whether they indeed are improving performance, and is equally important for demonstrating to stakeholders that mining the data has added value. What does baseline mean?Baseline is an overloaded term, as you can see in the links above. Baseline has multiple meanings: The score you'd get by guessing> A baseline for classification can be the most common class in the training dataset.> A baseline for regression can be the mean of the training labels. > A baseline for time-series regressions can be the value from the previous timestep. —[Will Koehrsen](https://twitter.com/koehrsen_will/status/1088863527778111488) Fast, first models that beat guessingWhat my mentor was talking about. Complete, tuned "simpler" modelCan be simpler mathematically and computationally. For example, Logistic Regression versus Deep Learning.Or can be simpler for the data scientist, with less work. For example, a model with less feature engineering versus a model with more feature engineering. Minimum performance that "matters"To go to production and get business value. Human-level performance Your goal may to be match, or nearly match, human performance, but with better speed, cost, or consistency.Or your goal may to be exceed human performance. Get majority class baseline[Will Koehrsen](https://twitter.com/koehrsen_will/status/1088863527778111488)> A baseline for classification can be the most common class in the training dataset.[*Data Science for Business*](https://books.google.com/books?id=4ZctAAAAQBAJ&pg=PT276), Chapter 7.3: Evaluation, Baseline Performance, and Implications for Investments in Data> For classification tasks, one good baseline is the _majority classifier_, a naive classifier that always chooses the majority class of the training dataset (see Note: Base rate in Holdout Data and Fitting Graphs). This may seem like advice so obvious it can be passed over quickly, but it is worth spending an extra moment here. There are many cases where smart, analytical people have been tripped up in skipping over this basic comparison. 
For example, an analyst may see a classification accuracy of 94% from her classifier and conclude that it is doing fairly well—when in fact only 6% of the instances are positive. So, the simple majority prediction classifier also would have an accuracy of 94%. Determine majority class
###Code
y_train = train_labels['status_group']
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
What if we guessed the majority class for every prediction?
###Code
majority_class = y_train.mode()[0]
y_pred = [majority_class] * len(y_train)
###Output
_____no_output_____
###Markdown
Use classification metric: accuracy [_Classification metrics are different from regression metrics!_](https://scikit-learn.org/stable/modules/model_evaluation.html)- Don't use _regression_ metrics to evaluate _classification_ tasks.- Don't use _classification_ metrics to evaluate _regression_ tasks.[Accuracy](https://scikit-learn.org/stable/modules/model_evaluation.html#accuracy-score) is a common metric for classification. Accuracy is the ["proportion of correct classifications"](https://en.wikipedia.org/wiki/Confusion_matrix): the number of correct predictions divided by the total number of predictions. What is the baseline accuracy if we guessed the majority class for every prediction?
###Code
from sklearn.metrics import accuracy_score
accuracy_score(y_train,y_pred)
###Output
_____no_output_____
###Markdown
Do train/validate/test split Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> You will want to create your own training and validation sets (by splitting the Kaggle “training” data). You will just use your smaller training set (a subset of Kaggle’s training data) for building your model, and you can evaluate it on your validation set (also a subset of Kaggle’s training data) before you submit to Kaggle. Sebastian Raschka, [Model Evaluation](https://sebastianraschka.com/blog/2018/model-evaluation-selection-part4.html)> Since “a picture is worth a thousand words,” I want to conclude with a figure (shown below) that summarizes my personal recommendations ... Usually, we want to do **"Model selection (hyperparameter optimization) _and_ performance estimation."**Therefore, we use **"3-way holdout method (train/validation/test split)"** or we use **"cross-validation with independent test set."** We have two options for where we choose to split:- Time- RandomTo split on time, we can use pandas.To split randomly, we can use the [**`sklearn.model_selection.train_test_split`**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) function.
###Code
from sklearn.model_selection import train_test_split
X_train = train_features
X_train, X_test, y_train, y_test = train_test_split(X_train, y_train,
train_size=0.80, test_size=0.20,
stratify=y_train, random_state=42)
###Output
_____no_output_____
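###Markdown
The split above is random. For completeness, here is an illustrative sketch of the time-based option mentioned earlier, using pandas to hold out the most recent 20% of rows. It assumes the `date_recorded` column (listed in the data dictionary above) parses as a date; this split is not used elsewhere in the notebook.
###Code
# Illustrative time-based split; 'date_recorded' is assumed to be a parseable date column.
dates = pd.to_datetime(train_features['date_recorded'])
order = dates.sort_values().index

cutoff = int(len(order) * 0.80)
train_idx, val_idx = order[:cutoff], order[cutoff:]
X_train_time = train_features.loc[train_idx]
X_val_time = train_features.loc[val_idx]
dates.loc[train_idx].max(), dates.loc[val_idx].min()
###Output
_____no_output_____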
###Markdown
Use scikit-learn for logistic regression- [sklearn.linear_model.LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)- Wikipedia, [Logistic regression](https://en.wikipedia.org/wiki/Logistic_regression) Begin with baselines: fast, first models Drop non-numeric features
###Code
X_train_numeric = X_train.select_dtypes('number')
X_val_numeric = X_test.select_dtypes('number')
###Output
_____no_output_____
###Markdown
Drop nulls if necessary
###Code
X_train_numeric.isnull().sum()
###Output
_____no_output_____
###Markdown
Fit Logistic Regression on train data
###Code
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(solver='lbfgs',multi_class='auto',max_iter=1000)
model.fit(X_train_numeric,y_train)
###Output
/usr/local/anaconda3/lib/python3.7/site-packages/sklearn/linear_model/logistic.py:758: ConvergenceWarning: lbfgs failed to converge. Increase the number of iterations.
"of iterations.", ConvergenceWarning)
###Markdown
Evaluate on validation data
###Code
y_pred = model.predict(X_val_numeric)
accuracy_score(y_test, y_pred)
###Output
_____no_output_____
###Markdown
What predictions does a Logistic Regression return?
###Code
pd.Series(y_pred).value_counts()
y_pred_proba = model.predict_proba(X_val_numeric)
y_pred_proba.sum(axis=1)
###Output
_____no_output_____
###Markdown
Do one-hot encoding of categorical features Install and import [category_encoders](http://contrib.scikit-learn.org/categorical-encoding/)- Local Anaconda: `conda install -c conda-forge category_encoders`- Google Colab: `pip install category_encoders`
###Code
# !pip install category_encoders
import category_encoders as ce
###Output
_____no_output_____
###Markdown
Check "cardinality" of categorical features[Cardinality](https://simple.wikipedia.org/wiki/Cardinality) means the number of unique values that a feature has:> In mathematics, the cardinality of a set means the number of its elements. For example, the set A = {2, 4, 6} contains 3 elements, and therefore A has a cardinality of 3. One-hot encoding adds a dimension for each unique value of each categorical feature. So, it may not be a good choice for "high cardinality" categoricals that have dozens, hundreds, or thousands of unique values.
###Code
X_train.describe(exclude='number').T.sort_values(by='unique')
###Output
_____no_output_____
###Markdown
Explore `quantity` feature
###Code
X_train['quantity'].value_counts()
X_train['quantity'].values
###Output
_____no_output_____
###Markdown
Encode `quantity` feature
###Code
encoder = ce.OneHotEncoder(use_cat_names=True)
encoded = encoder.fit_transform(X_train['quantity'].values)
###Output
_____no_output_____
###Markdown
Do one-hot encoding & Scale features, within a complete model fitting workflow. Why and how to scale features before fitting linear modelsScikit-Learn User Guide, [Preprocessing data](https://scikit-learn.org/stable/modules/preprocessing.html)> Standardization of datasets is a common requirement for many machine learning estimators implemented in scikit-learn; they might behave badly if the individual features do not more or less look like standard normally distributed data: Gaussian with zero mean and unit variance.> The `preprocessing` module further provides a utility class `StandardScaler` that implements the `Transformer` API to compute the mean and standard deviation on a training set. The scaler instance can then be used on new data to transform it the same way it did on the training set. How to use encoders and scalers in scikit-learn- Use the **`fit_transform`** method on the **train** set- Use the **`transform`** method on the **validation** set
###Code
ce.__version__
from sklearn.preprocessing import StandardScaler
categorical_features = ['quantity']
numeric = X_train.select_dtypes('number').columns.drop('id').tolist()
features = categorical_features + numeric
features
X_train_subset = X_train[features]
X_val_subset = X_test[features]
encoder = ce.OneHotEncoder(use_cat_names=True)
X_train_encoded = encoder.fit_transform(X_train_subset)
X_val_encoded = encoder.transform(X_val_subset)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train_encoded)
X_val_scaled = scaler.transform(X_val_encoded)
model = LogisticRegression(solver='lbfgs',multi_class='auto',max_iter=1000)
model.fit(X_train_scaled, y_train)
model.score(X_val_scaled,y_test)
###Output
_____no_output_____
###Markdown
Compare original features, encoded features, & scaled features. Get & plot coefficients
###Code
model.coef_
###Output
_____no_output_____
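###Markdown
A sketch of the "plot coefficients" step named above (illustrative addition, not the original cell's code): pair each coefficient of the fitted model with its encoded feature name and draw a bar chart. It assumes the encoded/scaled model fitted above and reuses the column names of `X_train_encoded`; only the first class's coefficients are shown.
###Code
%matplotlib inline
import matplotlib.pyplot as plt

# One row of coefficients per class; plot the first class as an example.
coefficients = pd.Series(model.coef_[0], index=X_train_encoded.columns)
coefficients.sort_values().plot.barh(figsize=(8, 8))
plt.title(str(model.classes_[0]) + ' vs. rest: feature coefficients')
plt.show()
###Output
_____no_output_____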
###Markdown
_Lambda School Data Science, Classification 1_This sprint, your project is about water pumps in Tanzania. Can you predict which water pumps are faulty? Logistic Regression, One-Hot Encoding Objectives- begin with baselines for classification- use classification metric: accuracy- do train/validate/test split- use scikit-learn for logistic regression- do one-hot encoding- scale features- submit to predictive modeling competitions Get ready Install [category_encoders](http://contrib.scikit-learn.org/categorical-encoding/)- Local Anaconda: `conda install -c conda-forge category_encoders`- Google Colab: `pip install category_encoders` Get started on Kaggle1. [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. 2. Go to our Kaggle InClass competition website. You will be given the URL in Slack.3. Go to the Rules page. Accept the rules of the competition. Get data Option 1. Google DriveDownload files from [Google Drive](https://drive.google.com/drive/u/1/folders/1zZqKi90E2gtf-TEGf8Oh4sY7YkpAw0vf).- [train_features.csv](https://drive.google.com/uc?export=download&id=14ULvX0uOgftTB2s97uS8lIx1nHGQIB0P)- [train_labels.csv](https://drive.google.com/uc?export=download&id=1r441wLr7gKGHGLyPpKauvCuUOU556S2f)- [test_features.csv](https://drive.google.com/uc?export=download&id=1wvsYl9hbRbZuIuoaLWCsW_kbcxCdocHz)- [sample_submission.csv](https://drive.google.com/uc?export=download&id=1kfJewnmhowpUo381oSn3XqsQ6Eto23XV) Option 2. Kaggle web UI Go to our Kaggle InClass competition webpage. Go to the Data page. After you have accepted the rules of the competition, use the download buttons to download the data. Option 3. Kaggle API1. [Follow these instructions](https://github.com/Kaggle/kaggle-apiapi-credentials) to create a Kaggle “API Token” and download your `kaggle.json` file.2. Put `kaggle.json` in the correct location. - If you're using Anaconda, put the file in the directory specified in the [instructions](https://github.com/Kaggle/kaggle-apiapi-credentials). - If you're using Google Colab, upload the file to your Google Drive, and run this cell: ``` from google.colab import drive drive.mount('/content/drive') %env KAGGLE_CONFIG_DIR=/content/drive/My Drive/ ```3. Install the Kaggle API package.```pip install kaggle```4. After you have accepted the rules of the competiton, use the Kaggle API package to get the data.```kaggle competitions download -c COMPETITION-NAME``` Read data - `train_features.csv` : the training set features - `train_labels.csv` : the training set labels - `test_features.csv` : the test set features - `sample_submission.csv` : a sample submission file in the correct format
###Code
import pandas as pd
train_features = pd.read_csv('https://drive.google.com/uc?export=download&id=14ULvX0uOgftTB2s97uS8lIx1nHGQIB0P')
train_labels = pd.read_csv('https://drive.google.com/uc?export=download&id=1r441wLr7gKGHGLyPpKauvCuUOU556S2f')
test_features = pd.read_csv('https://drive.google.com/uc?export=download&id=1wvsYl9hbRbZuIuoaLWCsW_kbcxCdocHz')
sample_submission = pd.read_csv('https://drive.google.com/uc?export=download&id=1kfJewnmhowpUo381oSn3XqsQ6Eto23XV')
train_features.shape, train_labels.shape, test_features.shape, sample_submission.shape
train_features.head()
train_labels.head()
test_features.head()
sample_submission.head()
train_features.describe()
train_features.describe(exclude='number')
###Output
_____no_output_____
###Markdown
FeaturesYour goal is to predict the operating condition of a waterpoint for each record in the dataset. You are provided the following set of information about the waterpoints:- `amount_tsh` : Total static head (amount water available to waterpoint)- `date_recorded` : The date the row was entered- `funder` : Who funded the well- `gps_height` : Altitude of the well- `installer` : Organization that installed the well- `longitude` : GPS coordinate- `latitude` : GPS coordinate- `wpt_name` : Name of the waterpoint if there is one- `num_private` : - `basin` : Geographic water basin- `subvillage` : Geographic location- `region` : Geographic location- `region_code` : Geographic location (coded)- `district_code` : Geographic location (coded)- `lga` : Geographic location- `ward` : Geographic location- `population` : Population around the well- `public_meeting` : True/False- `recorded_by` : Group entering this row of data- `scheme_management` : Who operates the waterpoint- `scheme_name` : Who operates the waterpoint- `permit` : If the waterpoint is permitted- `construction_year` : Year the waterpoint was constructed- `extraction_type` : The kind of extraction the waterpoint uses- `extraction_type_group` : The kind of extraction the waterpoint uses- `extraction_type_class` : The kind of extraction the waterpoint uses- `management` : How the waterpoint is managed- `management_group` : How the waterpoint is managed- `payment` : What the water costs- `payment_type` : What the water costs- `water_quality` : The quality of the water- `quality_group` : The quality of the water- `quantity` : The quantity of water- `quantity_group` : The quantity of water- `source` : The source of the water- `source_type` : The source of the water- `source_class` : The source of the water- `waterpoint_type` : The kind of waterpoint- `waterpoint_type_group` : The kind of waterpoint LabelsThere are three possible values:- `functional` : the waterpoint is operational and there are no repairs needed- `functional needs repair` : the waterpoint is operational, but needs repairs- `non functional` : the waterpoint is not operational Why doesn't Kaggle give you labels for the test set? Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> One great thing about Kaggle competitions is that they force you to think about validation sets more rigorously (in order to do well). For those who are new to Kaggle, it is a platform that hosts machine learning competitions. Kaggle typically breaks the data into two sets you can download:> 1. a **training set**, which includes the _independent variables_, as well as the _dependent variable_ (what you are trying to predict).> 2. a **test set**, which just has the _independent variables_. You will make predictions for the test set, which you can submit to Kaggle and get back a score of how well you did.> This is the basic idea needed to get started with machine learning, but to do well, there is a bit more complexity to understand. You will want to create your own training and validation sets (by splitting the Kaggle “training” data). You will just use your smaller training set (a subset of Kaggle’s training data) for building your model, and you can evaluate it on your validation set (also a subset of Kaggle’s training data) before you submit to Kaggle.> The most important reason for this is that Kaggle has split the test data into two sets: for the public and private leaderboards. 
The score you see on the public leaderboard is just for a subset of your predictions (and you don’t know which subset!). How your predictions fare on the private leaderboard won’t be revealed until the end of the competition. The reason this is important is that you could end up overfitting to the public leaderboard and you wouldn’t realize it until the very end when you did poorly on the private leaderboard. Using a good validation set can prevent this. You can check if your validation set is any good by seeing if your model has similar scores on it to compared with on the Kaggle test set. ...> Understanding these distinctions is not just useful for Kaggle. In any predictive machine learning project, you want your model to be able to perform well on new data. Why care about model validation? Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> An all-too-common scenario: a seemingly impressive machine learning model is a complete failure when implemented in production. The fallout includes leaders who are now skeptical of machine learning and reluctant to try it again. How can this happen?> One of the most likely culprits for this disconnect between results in development vs results in production is a poorly chosen validation set (or even worse, no validation set at all). Owen Zhang, [Winning Data Science Competitions](https://www.slideshare.net/OwenZhang2/tips-for-data-science-competitions/8)> Good validation is _more important_ than good models. James, Witten, Hastie, Tibshirani, [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/), Chapter 2.2, Assessing Model Accuracy> In general, we do not really care how well the method works training on the training data. Rather, _we are interested in the accuracy of the predictions that we obtain when we apply our method to previously unseen test data._ Why is this what we care about? > Suppose that we are interested test data in developing an algorithm to predict a stock’s price based on previous stock returns. We can train the method using stock returns from the past 6 months. But we don’t really care how well our method predicts last week’s stock price. We instead care about how well it will predict tomorrow’s price or next month’s price. > On a similar note, suppose that we have clinical measurements (e.g. weight, blood pressure, height, age, family history of disease) for a number of patients, as well as information about whether each patient has diabetes. We can use these patients to train a statistical learning method to predict risk of diabetes based on clinical measurements. In practice, we want this method to accurately predict diabetes risk for _future patients_ based on their clinical measurements. We are not very interested in whether or not the method accurately predicts diabetes risk for patients used to train the model, since we already know which of those patients have diabetes. Why hold out an independent test set? Owen Zhang, [Winning Data Science Competitions](https://www.slideshare.net/OwenZhang2/tips-for-data-science-competitions)> There are many ways to overfit. Beware of "multiple comparison fallacy." There is a cost in "peeking at the answer."> Good validation is _more important_ than good models. Simple training/validation split is _not_ enough. When you looked at your validation result for the Nth time, you are training models on it.> If possible, have "holdout" dataset that you do not touch at all during model build process. 
This includes feature extraction, etc.> What if holdout result is bad? Be brave and scrap the project. Hastie, Tibshirani, and Friedman, [The Elements of Statistical Learning](http://statweb.stanford.edu/~tibs/ElemStatLearn/), Chapter 7: Model Assessment and Selection> If we are in a data-rich situation, the best approach is to randomly divide the dataset into three parts: a training set, a validation set, and a test set. The training set is used to fit the models; the validation set is used to estimate prediction error for model selection; the test set is used for assessment of the generalization error of the final chosen model. Ideally, the test set should be kept in a "vault," and be brought out only at the end of the data analysis. Suppose instead that we use the test-set repeatedly, choosing the model with the smallest test-set error. Then the test set error of the final chosen model will underestimate the true test error, sometimes substantially. Andreas Mueller and Sarah Guido, [Introduction to Machine Learning with Python](https://books.google.com/books?id=1-4lDQAAQBAJ&pg=PA270)> The distinction between the training set, validation set, and test set is fundamentally important to applying machine learning methods in practice. Any choices made based on the test set accuracy "leak" information from the test set into the model. Therefore, it is important to keep a separate test set, which is only used for the final evaluation. It is good practice to do all exploratory analysis and model selection using the combination of a training and a validation set, and reserve the test set for a final evaluation - this is even true for exploratory visualization. Strictly speaking, evaluating more than one model on the test set and choosing the better of the two will result in an overly optimistic estimate of how accurate the model is. Hadley Wickham, [R for Data Science](https://r4ds.had.co.nz/model-intro.htmlhypothesis-generation-vs.hypothesis-confirmation)> There is a pair of ideas that you must understand in order to do inference correctly:> 1. Each observation can either be used for exploration or confirmation, not both.> 2. You can use an observation as many times as you like for exploration, but you can only use it once for confirmation. As soon as you use an observation twice, you’ve switched from confirmation to exploration.> This is necessary because to confirm a hypothesis you must use data independent of the data that you used to generate the hypothesis. Otherwise you will be over optimistic. There is absolutely nothing wrong with exploration, but you should never sell an exploratory analysis as a confirmatory analysis because it is fundamentally misleading.> If you are serious about doing an confirmatory analysis, one approach is to split your data into three pieces before you begin the analysis. Begin with baselines for classification Why begin with baselines?[My mentor](https://www.linkedin.com/in/jason-sanchez-62093847/) [taught me](https://youtu.be/0GrciaGYzV0?t=40s):>***Your first goal should always, always, always be getting a generalized prediction as fast as possible.*** You shouldn't spend a lot of time trying to tune your model, trying to add features, trying to engineer features, until you've actually gotten one prediction, at least. > The reason why that's a really good thing is because then ***you'll set a benchmark*** for yourself, and you'll be able to directly see how much effort you put in translates to a better prediction. 
> What you'll find by working on many models: some effort you put in, actually has very little effect on how well your final model does at predicting new observations. Whereas some very easy changes actually have a lot of effect. And so you get better at allocating your time more effectively.My mentor's advice is echoed and elaborated in several sources:[Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)> Why start with a baseline? A baseline will take you less than 1/10th of the time, and could provide up to 90% of the results. A baseline puts a more complex model into context. Baselines are easy to deploy.[Measure Once, Cut Twice: Moving Towards Iteration in Data Science](https://blog.datarobot.com/measure-once-cut-twice-moving-towards-iteration-in-data-science)> The iterative approach in data science starts with emphasizing the importance of getting to a first model quickly, rather than starting with the variables and features. Once the first model is built, the work then steadily focuses on continual improvement.[*Data Science for Business*](https://books.google.com/books?id=4ZctAAAAQBAJ&pg=PT276), Chapter 7.3: Evaluation, Baseline Performance, and Implications for Investments in Data> *Consider carefully what would be a reasonable baseline against which to compare model performance.* This is important for the data science team in order to understand whether they indeed are improving performance, and is equally important for demonstrating to stakeholders that mining the data has added value. What does baseline mean?Baseline is an overloaded term, as you can see in the links above. Baseline has multiple meanings: The score you'd get by guessing> A baseline for classification can be the most common class in the training dataset.> A baseline for regression can be the mean of the training labels. > A baseline for time-series regressions can be the value from the previous timestep. —[Will Koehrsen](https://twitter.com/koehrsen_will/status/1088863527778111488) Fast, first models that beat guessingWhat my mentor was talking about. Complete, tuned "simpler" modelCan be simpler mathematically and computationally. For example, Logistic Regression versus Deep Learning.Or can be simpler for the data scientist, with less work. For example, a model with less feature engineering versus a model with more feature engineering. Minimum performance that "matters"To go to production and get business value. Human-level performance Your goal may to be match, or nearly match, human performance, but with better speed, cost, or consistency.Or your goal may to be exceed human performance. Get majority class baseline[Will Koehrsen](https://twitter.com/koehrsen_will/status/1088863527778111488)> A baseline for classification can be the most common class in the training dataset.[*Data Science for Business*](https://books.google.com/books?id=4ZctAAAAQBAJ&pg=PT276), Chapter 7.3: Evaluation, Baseline Performance, and Implications for Investments in Data> For classification tasks, one good baseline is the _majority classifier_, a naive classifier that always chooses the majority class of the training dataset (see Note: Base rate in Holdout Data and Fitting Graphs). This may seem like advice so obvious it can be passed over quickly, but it is worth spending an extra moment here. There are many cases where smart, analytical people have been tripped up in skipping over this basic comparison. 
For example, an analyst may see a classification accuracy of 94% from her classifier and conclude that it is doing fairly well—when in fact only 6% of the instances are positive. So, the simple majority prediction classifier also would have an accuracy of 94%. Determine majority class
###Code
y_train = train_labels['status_group']
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
What if we guessed the majority class for every prediction?
###Code
majority_class = y_train.mode()[0]
y_pred = [majority_class] * len(y_train)
###Output
_____no_output_____
###Markdown
Use classification metric: accuracy [_Classification metrics are different from regression metrics!_](https://scikit-learn.org/stable/modules/model_evaluation.html)- Don't use _regression_ metrics to evaluate _classification_ tasks.- Don't use _classification_ metrics to evaluate _regression_ tasks.[Accuracy](https://scikit-learn.org/stable/modules/model_evaluation.htmlaccuracy-score) is a common metric for classification. Accuracy is the ["proportion of correct classifications"](https://en.wikipedia.org/wiki/Confusion_matrix): the number of correct predictions divided by the total number of predictions. What is the baseline accuracy if we guessed the majority class for every prediction?
###Code
from sklearn.metrics import accuracy_score
accuracy_score(y_train, y_pred)
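# Hand-computed check (sketch): accuracy is just correct predictions divided by total
# predictions, so guessing the majority class should score exactly its share of the labels.
sum(y_train == majority_class) / len(y_train)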
###Output
_____no_output_____
###Markdown
Do train/validate/test split Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> You will want to create your own training and validation sets (by splitting the Kaggle “training” data). You will just use your smaller training set (a subset of Kaggle’s training data) for building your model, and you can evaluate it on your validation set (also a subset of Kaggle’s training data) before you submit to Kaggle. Sebastian Raschka, [Model Evaluation](https://sebastianraschka.com/blog/2018/model-evaluation-selection-part4.html)> Since “a picture is worth a thousand words,” I want to conclude with a figure (shown below) that summarizes my personal recommendations ... Usually, we want to do **"Model selection (hyperparameter optimization) _and_ performance estimation."**Therefore, we use **"3-way holdout method (train/validation/test split)"** or we use **"cross-validation with independent test set."** We have two options for where we choose to split:- Time- RandomTo split on time, we can use pandas.To split randomly, we can use the [**`sklearn.model_selection.train_test_split`**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) function.
###Code
from sklearn.model_selection import train_test_split
X_train = train_features
y_train = train_labels['status_group']
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train, train_size=0.80, test_size=0.20,
stratify=y_train, random_state=42)
X_train.shape, X_val.shape, y_train.shape, y_val.shape
y_train.value_counts(normalize=True)
y_val.value_counts(normalize=True)
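# Alternative (illustrative sketch, not used below): split on time instead of randomly,
# assuming 'date_recorded' parses as a date. The most recent 20% of rows become validation.
date_order = pd.to_datetime(train_features['date_recorded']).sort_values().index
cutoff = int(len(date_order) * 0.80)
X_train_time = train_features.loc[date_order[:cutoff]]
X_val_time = train_features.loc[date_order[cutoff:]]
y_train_time = train_labels['status_group'].loc[date_order[:cutoff]]
y_val_time = train_labels['status_group'].loc[date_order[cutoff:]]
X_train_time.shape, X_val_time.shape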
###Output
_____no_output_____
###Markdown
Use scikit-learn for logistic regression- [sklearn.linear_model.LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)- Wikipedia, [Logistic regression](https://en.wikipedia.org/wiki/Logistic_regression) Begin with baselines: fast, first models Drop non-numeric features
###Code
X_train_numeric = X_train.select_dtypes('number')
X_val_numeric = X_val.select_dtypes('number')
###Output
_____no_output_____
###Markdown
Drop nulls if necessary
###Code
X_train_numeric.isnull().sum()
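# If the check above shows missing values, one simple option (sketch) is to drop those
# columns from both train and validation so the two sets stay aligned:
# cols_with_nulls = X_train_numeric.columns[X_train_numeric.isnull().any()]
# X_train_numeric = X_train_numeric.drop(columns=cols_with_nulls)
# X_val_numeric = X_val_numeric.drop(columns=cols_with_nulls)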
###Output
_____no_output_____
###Markdown
Fit Logistic Regression on train data
###Code
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(solver='lbfgs', multi_class='auto', max_iter=1000)
model.fit(X_train_numeric, y_train)
###Output
/home/richmond/anaconda3/lib/python3.7/site-packages/sklearn/linear_model/logistic.py:758: ConvergenceWarning: lbfgs failed to converge. Increase the number of iterations.
"of iterations.", ConvergenceWarning)
###Markdown
Evaluate on validation data
###Code
y_pred = model.predict(X_val_numeric)
accuracy_score(y_val, y_pred)
model.score(X_val_numeric, y_val)
###Output
_____no_output_____
###Markdown
What predictions does a Logistic Regression return?
###Code
y_pred
pd.Series(y_pred).value_counts()
y_pred_proba = model.predict_proba(X_val_numeric)  # one probability per class; each row sums to 1
y_pred_proba.sum(axis=1)
y_pred_proba
proba_functional = y_pred_proba[:, 0]
proba_functional
proba_functional_needs_repair = y_pred_proba[:, 1]
proba_functional_needs_repair
proba_non_functional = y_pred_proba[:, 2]
proba_non_functional
pd.set_option('display.float_format', '{:.2f}'.format)
pd.DataFrame({'Functional': proba_functional,
'Functional Needs Repair': proba_functional_needs_repair,
'Non Functional': proba_non_functional}).describe()
###Output
_____no_output_____
###Markdown
Do one-hot encoding of categorical features Install and import [category_encoders](http://contrib.scikit-learn.org/categorical-encoding/)- Local Anaconda: `conda install -c conda-forge category_encoders`- Google Colab: `pip install category_encoders`
###Code
!pip install category_encoders
import category_encoders as ce
###Output
_____no_output_____
###Markdown
Check "cardinality" of categorical features[Cardinality](https://simple.wikipedia.org/wiki/Cardinality) means the number of unique values that a feature has:> In mathematics, the cardinality of a set means the number of its elements. For example, the set A = {2, 4, 6} contains 3 elements, and therefore A has a cardinality of 3. One-hot encoding adds a dimension for each unique value of each categorical feature. So, it may not be a good choice for "high cardinality" categoricals that have dozens, hundreds, or thousands of unique values. Explore `quantity` feature
###Code
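# Sketch of the cardinality check described above: count unique values per categorical feature.
# High-cardinality columns (hundreds or thousands of unique values) are poor candidates
# for one-hot encoding.
X_train.select_dtypes(exclude='number').nunique().sort_values(ascending=False)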
X_train['quantity'].value_counts(dropna=False)
# Recombine X_train and y_train, for exploratory data analysis
train = X_train.copy()
train['status_group'] = y_train
# Now do groupby...
train.groupby('quantity')['status_group'].value_counts(normalize=True)
X_train['quantity'].head(10)
###Output
_____no_output_____
###Markdown
Encode `quantity` feature
###Code
encoder = ce.OneHotEncoder(use_cat_names=True)
encoded = encoder.fit_transform(X_train['quantity'])
encoded.head(10)
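# Sanity check (sketch): each row's one-hot columns should sum to 1,
# since every waterpoint has exactly one 'quantity' value.
encoded.sum(axis=1).value_counts()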
###Output
_____no_output_____
###Markdown
Do one-hot encoding & Scale features, within a complete model fitting workflow. Why and how to scale features before fitting linear modelsScikit-Learn User Guide, [Preprocessing data](https://scikit-learn.org/stable/modules/preprocessing.html)> Standardization of datasets is a common requirement for many machine learning estimators implemented in scikit-learn; they might behave badly if the individual features do not more or less look like standard normally distributed data: Gaussian with zero mean and unit variance.> The `preprocessing` module further provides a utility class `StandardScaler` that implements the `Transformer` API to compute the mean and standard deviation on a training set. The scaler instance can then be used on new data to transform it the same way it did on the training set. How to use encoders and scalers in scikit-learn- Use the **`fit_transform`** method on the **train** set- Use the **`transform`** method on the **validation** set
###Code
from sklearn.preprocessing import StandardScaler
categorical_features = ['quantity']
numeric_features = X_train.select_dtypes('number').columns.drop('id').tolist()
features = categorical_features + numeric_features
X_train_subset = X_train[features]
X_val_subset = X_val[features]
encoder = ce.OneHotEncoder(use_cat_names=True)
X_train_encoded = encoder.fit_transform(X_train_subset)
X_val_encoded = encoder.transform(X_val_subset)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train_encoded)
X_val_scaled = scaler.transform(X_val_encoded)
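# Quick check (sketch): after StandardScaler, each training column should have
# mean ~0 and standard deviation ~1, as described above.
print(X_train_scaled.mean(axis=0).round(2))
print(X_train_scaled.std(axis=0).round(2))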
model = LogisticRegression(solver='lbfgs', multi_class='auto', max_iter=1000)
model.fit(X_train_scaled, y_train)
print('Validation Accuracy', model.score(X_val_scaled, y_val))
###Output
/home/richmond/anaconda3/lib/python3.7/site-packages/sklearn/preprocessing/data.py:625: DataConversionWarning: Data with input dtype int64, float64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/home/richmond/anaconda3/lib/python3.7/site-packages/sklearn/base.py:462: DataConversionWarning: Data with input dtype int64, float64 were all converted to float64 by StandardScaler.
return self.fit(X, **fit_params).transform(X)
/home/richmond/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:16: DataConversionWarning: Data with input dtype int64, float64 were all converted to float64 by StandardScaler.
app.launch_new_instance()
###Markdown
Compare original features, encoded features, & scaled features
###Code
print(X_train.shape)
X_train[:1]
print(X_train_numeric.shape)
X_train_numeric[:1]
print(X_train_encoded.shape)
X_train_encoded[:1]
print(X_train_scaled.shape)
X_train_scaled[:1]
###Output
(47520, 14)
###Markdown
Get & plot coefficients
###Code
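# model.coef_ has one row of coefficients per class for this 3-class problem;
# coef_[0] corresponds to the first class in model.classes_ (alphabetically, 'functional').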
coefficients = pd.Series(model.coef_[0], X_train_encoded.columns)
%matplotlib inline
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
coefficients.sort_values().plot.barh();
###Output
_____no_output_____
###Markdown
Submit to predictive modeling competition Write submission CSV fileThe format for the submission file is simply the row id and the predicted label (for an example, see `sample_submission.csv` on the data download page).For example, if you just predicted that all the waterpoints were functional you would have the following predictions:
```
id,status_group
50785,functional
51630,functional
17168,functional
45559,functional
49871,functional
```
Your code to generate a submission file may look like this:
```
# estimator is your scikit-learn estimator, which you've fit on X_train
# X_test is your pandas dataframe or numpy array, with the same number of rows,
# in the same order, as test_features.csv, and the same number of columns,
# in the same order, as X_train
y_pred = estimator.predict(X_test)

# Makes a dataframe with two columns, id and status_group,
# and writes to a csv file, without the index
sample_submission = pd.read_csv('sample_submission.csv')
submission = sample_submission.copy()
submission['status_group'] = y_pred
submission.to_csv('your-submission-filename.csv', index=False)
```
###Code
X_test_subset = test_features[features]
X_test_encoded = encoder.transform(X_test_subset)
X_test_scaled = scaler.transform(X_test_encoded)
all(X_test_encoded.columns == X_train_encoded.columns)
X_test_encoded.columns, X_train_encoded.columns
y_pred = model.predict(X_test_scaled)
submission = sample_submission.copy()
submission['status_group'] = y_pred
submission.to_csv('submission-01.csv', index=False)
!head submission-01.csv
###Output
id,status_group
50785,functional
51630,functional
17168,functional
45559,non functional
49871,functional
52449,functional
24806,functional
28965,non functional
36301,non functional
###Markdown
_Lambda School Data Science, Classification 1_This sprint, your project is about water pumps in Tanzania. Can you predict which water pumps are faulty? Logistic Regression, One-Hot Encoding Objectives- begin with baselines for classification- use classification metric: accuracy- do train/validate/test split- use scikit-learn for logistic regression- do one-hot encoding- scale features- submit to predictive modeling competitions Get ready Install [category_encoders](http://contrib.scikit-learn.org/categorical-encoding/)- Local Anaconda: `conda install -c conda-forge category_encoders`- Google Colab: `pip install category_encoders` Get started on Kaggle1. [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. 2. Go to our Kaggle InClass competition website. You will be given the URL in Slack.3. Go to the Rules page. Accept the rules of the competition. Get data Option 1. Google DriveDownload files from [Google Drive](https://drive.google.com/drive/u/1/folders/1zZqKi90E2gtf-TEGf8Oh4sY7YkpAw0vf).- [train_features.csv](https://drive.google.com/uc?export=download&id=14ULvX0uOgftTB2s97uS8lIx1nHGQIB0P)- [train_labels.csv](https://drive.google.com/uc?export=download&id=1r441wLr7gKGHGLyPpKauvCuUOU556S2f)- [test_features.csv](https://drive.google.com/uc?export=download&id=1wvsYl9hbRbZuIuoaLWCsW_kbcxCdocHz)- [sample_submission.csv](https://drive.google.com/uc?export=download&id=1kfJewnmhowpUo381oSn3XqsQ6Eto23XV) Option 2. Kaggle web UI Go to our Kaggle InClass competition webpage. Go to the Data page. After you have accepted the rules of the competition, use the download buttons to download the data. Option 3. Kaggle API1. [Follow these instructions](https://github.com/Kaggle/kaggle-apiapi-credentials) to create a Kaggle “API Token” and download your `kaggle.json` file.2. Put `kaggle.json` in the correct location. - If you're using Anaconda, put the file in the directory specified in the [instructions](https://github.com/Kaggle/kaggle-apiapi-credentials). - If you're using Google Colab, upload the file to your Google Drive, and run this cell: ``` from google.colab import drive drive.mount('/content/drive') %env KAGGLE_CONFIG_DIR=/content/drive/My Drive/ ```3. Install the Kaggle API package.```pip install kaggle```4. After you have accepted the rules of the competiton, use the Kaggle API package to get the data.```kaggle competitions download -c COMPETITION-NAME``` Read data - `train_features.csv` : the training set features - `train_labels.csv` : the training set labels - `test_features.csv` : the test set features - `sample_submission.csv` : a sample submission file in the correct format
###Code
import pandas as pd
train_features = pd.read_csv('https://drive.google.com/uc?export=download&id=14ULvX0uOgftTB2s97uS8lIx1nHGQIB0P')
train_labels = pd.read_csv('https://drive.google.com/uc?export=download&id=1r441wLr7gKGHGLyPpKauvCuUOU556S2f')
test_features = pd.read_csv('https://drive.google.com/uc?export=download&id=1wvsYl9hbRbZuIuoaLWCsW_kbcxCdocHz')
sample_submission = pd.read_csv('https://drive.google.com/uc?export=download&id=1kfJewnmhowpUo381oSn3XqsQ6Eto23XV')
train_features.shape, train_labels.shape, test_features.shape, sample_submission.shape
###Output
_____no_output_____
###Markdown
FeaturesYour goal is to predict the operating condition of a waterpoint for each record in the dataset. You are provided the following set of information about the waterpoints:- `amount_tsh` : Total static head (amount water available to waterpoint)- `date_recorded` : The date the row was entered- `funder` : Who funded the well- `gps_height` : Altitude of the well- `installer` : Organization that installed the well- `longitude` : GPS coordinate- `latitude` : GPS coordinate- `wpt_name` : Name of the waterpoint if there is one- `num_private` : - `basin` : Geographic water basin- `subvillage` : Geographic location- `region` : Geographic location- `region_code` : Geographic location (coded)- `district_code` : Geographic location (coded)- `lga` : Geographic location- `ward` : Geographic location- `population` : Population around the well- `public_meeting` : True/False- `recorded_by` : Group entering this row of data- `scheme_management` : Who operates the waterpoint- `scheme_name` : Who operates the waterpoint- `permit` : If the waterpoint is permitted- `construction_year` : Year the waterpoint was constructed- `extraction_type` : The kind of extraction the waterpoint uses- `extraction_type_group` : The kind of extraction the waterpoint uses- `extraction_type_class` : The kind of extraction the waterpoint uses- `management` : How the waterpoint is managed- `management_group` : How the waterpoint is managed- `payment` : What the water costs- `payment_type` : What the water costs- `water_quality` : The quality of the water- `quality_group` : The quality of the water- `quantity` : The quantity of water- `quantity_group` : The quantity of water- `source` : The source of the water- `source_type` : The source of the water- `source_class` : The source of the water- `waterpoint_type` : The kind of waterpoint- `waterpoint_type_group` : The kind of waterpoint LabelsThere are three possible values:- `functional` : the waterpoint is operational and there are no repairs needed- `functional needs repair` : the waterpoint is operational, but needs repairs- `non functional` : the waterpoint is not operational Why doesn't Kaggle give you labels for the test set? Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> One great thing about Kaggle competitions is that they force you to think about validation sets more rigorously (in order to do well). For those who are new to Kaggle, it is a platform that hosts machine learning competitions. Kaggle typically breaks the data into two sets you can download:> 1. a **training set**, which includes the _independent variables_, as well as the _dependent variable_ (what you are trying to predict).> 2. a **test set**, which just has the _independent variables_. You will make predictions for the test set, which you can submit to Kaggle and get back a score of how well you did.> This is the basic idea needed to get started with machine learning, but to do well, there is a bit more complexity to understand. You will want to create your own training and validation sets (by splitting the Kaggle “training” data). You will just use your smaller training set (a subset of Kaggle’s training data) for building your model, and you can evaluate it on your validation set (also a subset of Kaggle’s training data) before you submit to Kaggle.> The most important reason for this is that Kaggle has split the test data into two sets: for the public and private leaderboards. 
The score you see on the public leaderboard is just for a subset of your predictions (and you don’t know which subset!). How your predictions fare on the private leaderboard won’t be revealed until the end of the competition. The reason this is important is that you could end up overfitting to the public leaderboard and you wouldn’t realize it until the very end when you did poorly on the private leaderboard. Using a good validation set can prevent this. You can check if your validation set is any good by seeing if your model has similar scores on it to compared with on the Kaggle test set. ...> Understanding these distinctions is not just useful for Kaggle. In any predictive machine learning project, you want your model to be able to perform well on new data. Why care about model validation? Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> An all-too-common scenario: a seemingly impressive machine learning model is a complete failure when implemented in production. The fallout includes leaders who are now skeptical of machine learning and reluctant to try it again. How can this happen?> One of the most likely culprits for this disconnect between results in development vs results in production is a poorly chosen validation set (or even worse, no validation set at all). Owen Zhang, [Winning Data Science Competitions](https://www.slideshare.net/OwenZhang2/tips-for-data-science-competitions/8)> Good validation is _more important_ than good models. James, Witten, Hastie, Tibshirani, [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/), Chapter 2.2, Assessing Model Accuracy> In general, we do not really care how well the method works training on the training data. Rather, _we are interested in the accuracy of the predictions that we obtain when we apply our method to previously unseen test data._ Why is this what we care about? > Suppose that we are interested test data in developing an algorithm to predict a stock’s price based on previous stock returns. We can train the method using stock returns from the past 6 months. But we don’t really care how well our method predicts last week’s stock price. We instead care about how well it will predict tomorrow’s price or next month’s price. > On a similar note, suppose that we have clinical measurements (e.g. weight, blood pressure, height, age, family history of disease) for a number of patients, as well as information about whether each patient has diabetes. We can use these patients to train a statistical learning method to predict risk of diabetes based on clinical measurements. In practice, we want this method to accurately predict diabetes risk for _future patients_ based on their clinical measurements. We are not very interested in whether or not the method accurately predicts diabetes risk for patients used to train the model, since we already know which of those patients have diabetes. Why hold out an independent test set? Owen Zhang, [Winning Data Science Competitions](https://www.slideshare.net/OwenZhang2/tips-for-data-science-competitions)> There are many ways to overfit. Beware of "multiple comparison fallacy." There is a cost in "peeking at the answer."> Good validation is _more important_ than good models. Simple training/validation split is _not_ enough. When you looked at your validation result for the Nth time, you are training models on it.> If possible, have "holdout" dataset that you do not touch at all during model build process. 
This includes feature extraction, etc.> What if holdout result is bad? Be brave and scrap the project. Hastie, Tibshirani, and Friedman, [The Elements of Statistical Learning](http://statweb.stanford.edu/~tibs/ElemStatLearn/), Chapter 7: Model Assessment and Selection> If we are in a data-rich situation, the best approach is to randomly divide the dataset into three parts: a training set, a validation set, and a test set. The training set is used to fit the models; the validation set is used to estimate prediction error for model selection; the test set is used for assessment of the generalization error of the final chosen model. Ideally, the test set should be kept in a "vault," and be brought out only at the end of the data analysis. Suppose instead that we use the test-set repeatedly, choosing the model with the smallest test-set error. Then the test set error of the final chosen model will underestimate the true test error, sometimes substantially. Andreas Mueller and Sarah Guido, [Introduction to Machine Learning with Python](https://books.google.com/books?id=1-4lDQAAQBAJ&pg=PA270)> The distinction between the training set, validation set, and test set is fundamentally important to applying machine learning methods in practice. Any choices made based on the test set accuracy "leak" information from the test set into the model. Therefore, it is important to keep a separate test set, which is only used for the final evaluation. It is good practice to do all exploratory analysis and model selection using the combination of a training and a validation set, and reserve the test set for a final evaluation - this is even true for exploratory visualization. Strictly speaking, evaluating more than one model on the test set and choosing the better of the two will result in an overly optimistic estimate of how accurate the model is. Hadley Wickham, [R for Data Science](https://r4ds.had.co.nz/model-intro.htmlhypothesis-generation-vs.hypothesis-confirmation)> There is a pair of ideas that you must understand in order to do inference correctly:> 1. Each observation can either be used for exploration or confirmation, not both.> 2. You can use an observation as many times as you like for exploration, but you can only use it once for confirmation. As soon as you use an observation twice, you’ve switched from confirmation to exploration.> This is necessary because to confirm a hypothesis you must use data independent of the data that you used to generate the hypothesis. Otherwise you will be over optimistic. There is absolutely nothing wrong with exploration, but you should never sell an exploratory analysis as a confirmatory analysis because it is fundamentally misleading.> If you are serious about doing an confirmatory analysis, one approach is to split your data into three pieces before you begin the analysis. Begin with baselines for classification Why begin with baselines?[My mentor](https://www.linkedin.com/in/jason-sanchez-62093847/) [taught me](https://youtu.be/0GrciaGYzV0?t=40s):>***Your first goal should always, always, always be getting a generalized prediction as fast as possible.*** You shouldn't spend a lot of time trying to tune your model, trying to add features, trying to engineer features, until you've actually gotten one prediction, at least. > The reason why that's a really good thing is because then ***you'll set a benchmark*** for yourself, and you'll be able to directly see how much effort you put in translates to a better prediction. 
> What you'll find by working on many models: some effort you put in, actually has very little effect on how well your final model does at predicting new observations. Whereas some very easy changes actually have a lot of effect. And so you get better at allocating your time more effectively.My mentor's advice is echoed and elaborated in several sources:[Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)> Why start with a baseline? A baseline will take you less than 1/10th of the time, and could provide up to 90% of the results. A baseline puts a more complex model into context. Baselines are easy to deploy.[Measure Once, Cut Twice: Moving Towards Iteration in Data Science](https://blog.datarobot.com/measure-once-cut-twice-moving-towards-iteration-in-data-science)> The iterative approach in data science starts with emphasizing the importance of getting to a first model quickly, rather than starting with the variables and features. Once the first model is built, the work then steadily focuses on continual improvement.[*Data Science for Business*](https://books.google.com/books?id=4ZctAAAAQBAJ&pg=PT276), Chapter 7.3: Evaluation, Baseline Performance, and Implications for Investments in Data> *Consider carefully what would be a reasonable baseline against which to compare model performance.* This is important for the data science team in order to understand whether they indeed are improving performance, and is equally important for demonstrating to stakeholders that mining the data has added value. What does baseline mean?Baseline is an overloaded term, as you can see in the links above. Baseline has multiple meanings: The score you'd get by guessing> A baseline for classification can be the most common class in the training dataset.> A baseline for regression can be the mean of the training labels. > A baseline for time-series regressions can be the value from the previous timestep. —[Will Koehrsen](https://twitter.com/koehrsen_will/status/1088863527778111488) Fast, first models that beat guessingWhat my mentor was talking about. Complete, tuned "simpler" modelCan be simpler mathematically and computationally. For example, Logistic Regression versus Deep Learning.Or can be simpler for the data scientist, with less work. For example, a model with less feature engineering versus a model with more feature engineering. Minimum performance that "matters"To go to production and get business value. Human-level performance Your goal may to be match, or nearly match, human performance, but with better speed, cost, or consistency.Or your goal may to be exceed human performance. Get majority class baseline[Will Koehrsen](https://twitter.com/koehrsen_will/status/1088863527778111488)> A baseline for classification can be the most common class in the training dataset.[*Data Science for Business*](https://books.google.com/books?id=4ZctAAAAQBAJ&pg=PT276), Chapter 7.3: Evaluation, Baseline Performance, and Implications for Investments in Data> For classification tasks, one good baseline is the _majority classifier_, a naive classifier that always chooses the majority class of the training dataset (see Note: Base rate in Holdout Data and Fitting Graphs). This may seem like advice so obvious it can be passed over quickly, but it is worth spending an extra moment here. There are many cases where smart, analytical people have been tripped up in skipping over this basic comparison. 
For example, an analyst may see a classification accuracy of 94% from her classifier and conclude that it is doing fairly well—when in fact only 6% of the instances are positive. So, the simple majority prediction classifier also would have an accuracy of 94%. Determine majority class
###Code
###Output
_____no_output_____
###Markdown
What if we guessed the majority class for every prediction?
###Code
###Output
_____no_output_____
###Markdown
Use classification metric: accuracy [_Classification metrics are different from regression metrics!_](https://scikit-learn.org/stable/modules/model_evaluation.html)- Don't use _regression_ metrics to evaluate _classification_ tasks.- Don't use _classification_ metrics to evaluate _regression_ tasks.[Accuracy](https://scikit-learn.org/stable/modules/model_evaluation.htmlaccuracy-score) is a common metric for classification. Accuracy is the ["proportion of correct classifications"](https://en.wikipedia.org/wiki/Confusion_matrix): the number of correct predictions divided by the total number of predictions. What is the baseline accuracy if we guessed the majority class for every prediction?
###Code
###Output
_____no_output_____
###Markdown
Do train/validate/test split Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> You will want to create your own training and validation sets (by splitting the Kaggle “training” data). You will just use your smaller training set (a subset of Kaggle’s training data) for building your model, and you can evaluate it on your validation set (also a subset of Kaggle’s training data) before you submit to Kaggle. Sebastian Raschka, [Model Evaluation](https://sebastianraschka.com/blog/2018/model-evaluation-selection-part4.html)> Since “a picture is worth a thousand words,” I want to conclude with a figure (shown below) that summarizes my personal recommendations ... Usually, we want to do **"Model selection (hyperparameter optimization) _and_ performance estimation."**Therefore, we use **"3-way holdout method (train/validation/test split)"** or we use **"cross-validation with independent test set."** We have two options for where we choose to split:- Time- RandomTo split on time, we can use pandas.To split randomly, we can use the [**`sklearn.model_selection.train_test_split`**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) function.
###Code
###Output
_____no_output_____
###Markdown
Use scikit-learn for logistic regression- [sklearn.linear_model.LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)- Wikipedia, [Logistic regression](https://en.wikipedia.org/wiki/Logistic_regression) Begin with baselines: fast, first models Drop non-numeric features
###Code
###Output
_____no_output_____
###Markdown
Drop nulls if necessary
###Code
###Output
_____no_output_____
###Markdown
Fit Logistic Regression on train data
###Code
###Output
_____no_output_____
###Markdown
Evaluate on validation data
###Code
###Output
_____no_output_____
###Markdown
What predictions does a Logistic Regression return?
###Code
###Output
_____no_output_____
###Markdown
Do one-hot encoding of categorical features Install and import [category_encoders](http://contrib.scikit-learn.org/categorical-encoding/)- Local Anaconda: `conda install -c conda-forge category_encoders`- Google Colab: `pip install category_encoders`
###Code
# !pip install category_encoders
import category_encoders as ce
###Output
_____no_output_____
###Markdown
Check "cardinality" of categorical features[Cardinality](https://simple.wikipedia.org/wiki/Cardinality) means the number of unique values that a feature has:> In mathematics, the cardinality of a set means the number of its elements. For example, the set A = {2, 4, 6} contains 3 elements, and therefore A has a cardinality of 3. One-hot encoding adds a dimension for each unique value of each categorical feature. So, it may not be a good choice for "high cardinality" categoricals that have dozens, hundreds, or thousands of unique values.
###Code
###Output
_____no_output_____
###Markdown
Explore `quantity` feature
###Code
###Output
_____no_output_____
###Markdown
Encode `quantity` feature
###Code
###Output
_____no_output_____
###Markdown
Do one-hot encoding & Scale features, within a complete model fitting workflow. Why and how to scale features before fitting linear modelsScikit-Learn User Guide, [Preprocessing data](https://scikit-learn.org/stable/modules/preprocessing.html)> Standardization of datasets is a common requirement for many machine learning estimators implemented in scikit-learn; they might behave badly if the individual features do not more or less look like standard normally distributed data: Gaussian with zero mean and unit variance.> The `preprocessing` module further provides a utility class `StandardScaler` that implements the `Transformer` API to compute the mean and standard deviation on a training set. The scaler instance can then be used on new data to transform it the same way it did on the training set. How to use encoders and scalers in scikit-learn- Use the **`fit_transform`** method on the **train** set- Use the **`transform`** method on the **validation** set
###Code
###Output
_____no_output_____
###Markdown
Compare original features, encoded features, & scaled features
###Code
###Output
_____no_output_____
###Markdown
Get & plot coefficients
###Code
###Output
_____no_output_____
###Markdown
Submit to predictive modeling competition Write submission CSV fileThe format for the submission file is simply the row id and the predicted label (for an example, see `sample_submission.csv` on the data download page).For example, if you just predicted that all the waterpoints were functional you would have the following predictions:
```
id,status_group
50785,functional
51630,functional
17168,functional
45559,functional
49871,functional
```
Your code to generate a submission file may look like this:
```
# estimator is your scikit-learn estimator, which you've fit on X_train
# X_test is your pandas dataframe or numpy array, with the same number of rows,
# in the same order, as test_features.csv, and the same number of columns,
# in the same order, as X_train
y_pred = estimator.predict(X_test)

# Makes a dataframe with two columns, id and status_group,
# and writes to a csv file, without the index
sample_submission = pd.read_csv('sample_submission.csv')
submission = sample_submission.copy()
submission['status_group'] = y_pred
submission.to_csv('your-submission-filename.csv', index=False)
```
###Code
###Output
_____no_output_____
###Markdown
_Lambda School Data Science, Classification 1_This sprint, your project is about water pumps in Tanzania. Can you predict which water pumps are faulty? Logistic Regression, One-Hot Encoding Objectives- begin with baselines for classification- use classification metric: accuracy- do train/validate/test split- use scikit-learn for logistic regression- do one-hot encoding- scale features- submit to predictive modeling competitions Get ready Install [category_encoders](http://contrib.scikit-learn.org/categorical-encoding/)- Local Anaconda: `conda install -c conda-forge category_encoders`- Google Colab: `pip install category_encoders` Get started on Kaggle1. [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. 2. Go to our Kaggle InClass competition website. You will be given the URL in Slack.3. Go to the Rules page. Accept the rules of the competition. Get data Option 1. Google DriveDownload files from [Google Drive](https://drive.google.com/drive/u/1/folders/1zZqKi90E2gtf-TEGf8Oh4sY7YkpAw0vf).- [train_features.csv](https://drive.google.com/uc?export=download&id=14ULvX0uOgftTB2s97uS8lIx1nHGQIB0P)- [train_labels.csv](https://drive.google.com/uc?export=download&id=1r441wLr7gKGHGLyPpKauvCuUOU556S2f)- [test_features.csv](https://drive.google.com/uc?export=download&id=1wvsYl9hbRbZuIuoaLWCsW_kbcxCdocHz)- [sample_submission.csv](https://drive.google.com/uc?export=download&id=1kfJewnmhowpUo381oSn3XqsQ6Eto23XV) Option 2. Kaggle web UI Go to our Kaggle InClass competition webpage. Go to the Data page. After you have accepted the rules of the competition, use the download buttons to download the data. Option 3. Kaggle API1. [Follow these instructions](https://github.com/Kaggle/kaggle-apiapi-credentials) to create a Kaggle “API Token” and download your `kaggle.json` file.2. Put `kaggle.json` in the correct location. - If you're using Anaconda, put the file in the directory specified in the [instructions](https://github.com/Kaggle/kaggle-apiapi-credentials). - If you're using Google Colab, upload the file to your Google Drive, and run this cell: ``` from google.colab import drive drive.mount('/content/drive') %env KAGGLE_CONFIG_DIR=/content/drive/My Drive/ ```3. Install the Kaggle API package.```pip install kaggle```4. After you have accepted the rules of the competiton, use the Kaggle API package to get the data.```kaggle competitions download -c COMPETITION-NAME``` Read data - `train_features.csv` : the training set features - `train_labels.csv` : the training set labels - `test_features.csv` : the test set features - `sample_submission.csv` : a sample submission file in the correct format
###Code
import pandas as pd
train_features = pd.read_csv('https://drive.google.com/uc?export=download&id=14ULvX0uOgftTB2s97uS8lIx1nHGQIB0P')
train_labels = pd.read_csv('https://drive.google.com/uc?export=download&id=1r441wLr7gKGHGLyPpKauvCuUOU556S2f')
test_features = pd.read_csv('https://drive.google.com/uc?export=download&id=1wvsYl9hbRbZuIuoaLWCsW_kbcxCdocHz')
sample_submission = pd.read_csv('https://drive.google.com/uc?export=download&id=1kfJewnmhowpUo381oSn3XqsQ6Eto23XV')
train_features.shape, train_labels.shape, test_features.shape, sample_submission.shape
train_features.columns
train_features.isna().sum()
train_features.dtypes
###Output
_____no_output_____
###Markdown
FeaturesYour goal is to predict the operating condition of a waterpoint for each record in the dataset. You are provided the following set of information about the waterpoints:- `amount_tsh` : Total static head (amount water available to waterpoint)- `date_recorded` : The date the row was entered- `funder` : Who funded the well- `gps_height` : Altitude of the well- `installer` : Organization that installed the well- `longitude` : GPS coordinate- `latitude` : GPS coordinate- `wpt_name` : Name of the waterpoint if there is one- `num_private` : - `basin` : Geographic water basin- `subvillage` : Geographic location- `region` : Geographic location- `region_code` : Geographic location (coded)- `district_code` : Geographic location (coded)- `lga` : Geographic location- `ward` : Geographic location- `population` : Population around the well- `public_meeting` : True/False- `recorded_by` : Group entering this row of data- `scheme_management` : Who operates the waterpoint- `scheme_name` : Who operates the waterpoint- `permit` : If the waterpoint is permitted- `construction_year` : Year the waterpoint was constructed- `extraction_type` : The kind of extraction the waterpoint uses- `extraction_type_group` : The kind of extraction the waterpoint uses- `extraction_type_class` : The kind of extraction the waterpoint uses- `management` : How the waterpoint is managed- `management_group` : How the waterpoint is managed- `payment` : What the water costs- `payment_type` : What the water costs- `water_quality` : The quality of the water- `quality_group` : The quality of the water- `quantity` : The quantity of water- `quantity_group` : The quantity of water- `source` : The source of the water- `source_type` : The source of the water- `source_class` : The source of the water- `waterpoint_type` : The kind of waterpoint- `waterpoint_type_group` : The kind of waterpoint LabelsThere are three possible values:- `functional` : the waterpoint is operational and there are no repairs needed- `functional needs repair` : the waterpoint is operational, but needs repairs- `non functional` : the waterpoint is not operational Why doesn't Kaggle give you labels for the test set? Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> One great thing about Kaggle competitions is that they force you to think about validation sets more rigorously (in order to do well). For those who are new to Kaggle, it is a platform that hosts machine learning competitions. Kaggle typically breaks the data into two sets you can download:> 1. a **training set**, which includes the _independent variables_, as well as the _dependent variable_ (what you are trying to predict).> 2. a **test set**, which just has the _independent variables_. You will make predictions for the test set, which you can submit to Kaggle and get back a score of how well you did.> This is the basic idea needed to get started with machine learning, but to do well, there is a bit more complexity to understand. You will want to create your own training and validation sets (by splitting the Kaggle “training” data). You will just use your smaller training set (a subset of Kaggle’s training data) for building your model, and you can evaluate it on your validation set (also a subset of Kaggle’s training data) before you submit to Kaggle.> The most important reason for this is that Kaggle has split the test data into two sets: for the public and private leaderboards. 
The score you see on the public leaderboard is just for a subset of your predictions (and you don’t know which subset!). How your predictions fare on the private leaderboard won’t be revealed until the end of the competition. The reason this is important is that you could end up overfitting to the public leaderboard and you wouldn’t realize it until the very end when you did poorly on the private leaderboard. Using a good validation set can prevent this. You can check if your validation set is any good by seeing if your model has similar scores on it to compared with on the Kaggle test set. ...> Understanding these distinctions is not just useful for Kaggle. In any predictive machine learning project, you want your model to be able to perform well on new data. Why care about model validation? Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> An all-too-common scenario: a seemingly impressive machine learning model is a complete failure when implemented in production. The fallout includes leaders who are now skeptical of machine learning and reluctant to try it again. How can this happen?> One of the most likely culprits for this disconnect between results in development vs results in production is a poorly chosen validation set (or even worse, no validation set at all). Owen Zhang, [Winning Data Science Competitions](https://www.slideshare.net/OwenZhang2/tips-for-data-science-competitions/8)> Good validation is _more important_ than good models. James, Witten, Hastie, Tibshirani, [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/), Chapter 2.2, Assessing Model Accuracy> In general, we do not really care how well the method works training on the training data. Rather, _we are interested in the accuracy of the predictions that we obtain when we apply our method to previously unseen test data._ Why is this what we care about? > Suppose that we are interested test data in developing an algorithm to predict a stock’s price based on previous stock returns. We can train the method using stock returns from the past 6 months. But we don’t really care how well our method predicts last week’s stock price. We instead care about how well it will predict tomorrow’s price or next month’s price. > On a similar note, suppose that we have clinical measurements (e.g. weight, blood pressure, height, age, family history of disease) for a number of patients, as well as information about whether each patient has diabetes. We can use these patients to train a statistical learning method to predict risk of diabetes based on clinical measurements. In practice, we want this method to accurately predict diabetes risk for _future patients_ based on their clinical measurements. We are not very interested in whether or not the method accurately predicts diabetes risk for patients used to train the model, since we already know which of those patients have diabetes. Why hold out an independent test set? Owen Zhang, [Winning Data Science Competitions](https://www.slideshare.net/OwenZhang2/tips-for-data-science-competitions)> There are many ways to overfit. Beware of "multiple comparison fallacy." There is a cost in "peeking at the answer."> Good validation is _more important_ than good models. Simple training/validation split is _not_ enough. When you looked at your validation result for the Nth time, you are training models on it.> If possible, have "holdout" dataset that you do not touch at all during model build process. 
This includes feature extraction, etc.> What if holdout result is bad? Be brave and scrap the project. Hastie, Tibshirani, and Friedman, [The Elements of Statistical Learning](http://statweb.stanford.edu/~tibs/ElemStatLearn/), Chapter 7: Model Assessment and Selection> If we are in a data-rich situation, the best approach is to randomly divide the dataset into three parts: a training set, a validation set, and a test set. The training set is used to fit the models; the validation set is used to estimate prediction error for model selection; the test set is used for assessment of the generalization error of the final chosen model. Ideally, the test set should be kept in a "vault," and be brought out only at the end of the data analysis. Suppose instead that we use the test-set repeatedly, choosing the model with the smallest test-set error. Then the test set error of the final chosen model will underestimate the true test error, sometimes substantially. Andreas Mueller and Sarah Guido, [Introduction to Machine Learning with Python](https://books.google.com/books?id=1-4lDQAAQBAJ&pg=PA270)> The distinction between the training set, validation set, and test set is fundamentally important to applying machine learning methods in practice. Any choices made based on the test set accuracy "leak" information from the test set into the model. Therefore, it is important to keep a separate test set, which is only used for the final evaluation. It is good practice to do all exploratory analysis and model selection using the combination of a training and a validation set, and reserve the test set for a final evaluation - this is even true for exploratory visualization. Strictly speaking, evaluating more than one model on the test set and choosing the better of the two will result in an overly optimistic estimate of how accurate the model is. Hadley Wickham, [R for Data Science](https://r4ds.had.co.nz/model-intro.htmlhypothesis-generation-vs.hypothesis-confirmation)> There is a pair of ideas that you must understand in order to do inference correctly:> 1. Each observation can either be used for exploration or confirmation, not both.> 2. You can use an observation as many times as you like for exploration, but you can only use it once for confirmation. As soon as you use an observation twice, you’ve switched from confirmation to exploration.> This is necessary because to confirm a hypothesis you must use data independent of the data that you used to generate the hypothesis. Otherwise you will be over optimistic. There is absolutely nothing wrong with exploration, but you should never sell an exploratory analysis as a confirmatory analysis because it is fundamentally misleading.> If you are serious about doing an confirmatory analysis, one approach is to split your data into three pieces before you begin the analysis. Begin with baselines for classification Why begin with baselines?[My mentor](https://www.linkedin.com/in/jason-sanchez-62093847/) [taught me](https://youtu.be/0GrciaGYzV0?t=40s):>***Your first goal should always, always, always be getting a generalized prediction as fast as possible.*** You shouldn't spend a lot of time trying to tune your model, trying to add features, trying to engineer features, until you've actually gotten one prediction, at least. > The reason why that's a really good thing is because then ***you'll set a benchmark*** for yourself, and you'll be able to directly see how much effort you put in translates to a better prediction. 
> What you'll find by working on many models: some effort you put in, actually has very little effect on how well your final model does at predicting new observations. Whereas some very easy changes actually have a lot of effect. And so you get better at allocating your time more effectively.My mentor's advice is echoed and elaborated in several sources:[Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)> Why start with a baseline? A baseline will take you less than 1/10th of the time, and could provide up to 90% of the results. A baseline puts a more complex model into context. Baselines are easy to deploy.[Measure Once, Cut Twice: Moving Towards Iteration in Data Science](https://blog.datarobot.com/measure-once-cut-twice-moving-towards-iteration-in-data-science)> The iterative approach in data science starts with emphasizing the importance of getting to a first model quickly, rather than starting with the variables and features. Once the first model is built, the work then steadily focuses on continual improvement.[*Data Science for Business*](https://books.google.com/books?id=4ZctAAAAQBAJ&pg=PT276), Chapter 7.3: Evaluation, Baseline Performance, and Implications for Investments in Data> *Consider carefully what would be a reasonable baseline against which to compare model performance.* This is important for the data science team in order to understand whether they indeed are improving performance, and is equally important for demonstrating to stakeholders that mining the data has added value. What does baseline mean?Baseline is an overloaded term, as you can see in the links above. Baseline has multiple meanings: The score you'd get by guessing> A baseline for classification can be the most common class in the training dataset.> A baseline for regression can be the mean of the training labels. > A baseline for time-series regressions can be the value from the previous timestep. —[Will Koehrsen](https://twitter.com/koehrsen_will/status/1088863527778111488) Fast, first models that beat guessingWhat my mentor was talking about. Complete, tuned "simpler" modelCan be simpler mathematically and computationally. For example, Logistic Regression versus Deep Learning.Or can be simpler for the data scientist, with less work. For example, a model with less feature engineering versus a model with more feature engineering. Minimum performance that "matters"To go to production and get business value. Human-level performance Your goal may be to match, or nearly match, human performance, but with better speed, cost, or consistency.Or your goal may be to exceed human performance. Get majority class baseline[Will Koehrsen](https://twitter.com/koehrsen_will/status/1088863527778111488)> A baseline for classification can be the most common class in the training dataset.[*Data Science for Business*](https://books.google.com/books?id=4ZctAAAAQBAJ&pg=PT276), Chapter 7.3: Evaluation, Baseline Performance, and Implications for Investments in Data> For classification tasks, one good baseline is the _majority classifier_, a naive classifier that always chooses the majority class of the training dataset (see Note: Base rate in Holdout Data and Fitting Graphs). This may seem like advice so obvious it can be passed over quickly, but it is worth spending an extra moment here. There are many cases where smart, analytical people have been tripped up in skipping over this basic comparison.
For example, an analyst may see a classification accuracy of 94% from her classifier and conclude that it is doing fairly well—when in fact only 6% of the instances are positive. So, the simple majority prediction classifier also would have an accuracy of 94%. Determine majority class
###Code
train_labels['status_group'].value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
What if we guessed the majority class for every prediction?
###Code
maj_class = train_labels['status_group'].mode()[0]
y_pred = [maj_class] * len(train_labels)
###Output
_____no_output_____
###Markdown
Use classification metric: accuracy [_Classification metrics are different from regression metrics!_](https://scikit-learn.org/stable/modules/model_evaluation.html)- Don't use _regression_ metrics to evaluate _classification_ tasks.- Don't use _classification_ metrics to evaluate _regression_ tasks.[Accuracy](https://scikit-learn.org/stable/modules/model_evaluation.htmlaccuracy-score) is a common metric for classification. Accuracy is the ["proportion of correct classifications"](https://en.wikipedia.org/wiki/Confusion_matrix): the number of correct predictions divided by the total number of predictions. What is the baseline accuracy if we guessed the majority class for every prediction?
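The next cell computes this baseline on the actual waterpump labels. As a quick aside, here is a minimal toy sketch of what accuracy measures (the labels below are made up for illustration, not taken from this dataset); scikit-learn's `DummyClassifier` automates the same majority-class baseline.

```python
# Toy illustration: accuracy = correct predictions / total predictions.
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.dummy import DummyClassifier

y_true = ['functional', 'functional', 'non functional', 'functional']   # made-up toy labels
y_guess = ['functional'] * len(y_true)                                  # always guess the majority class
print(accuracy_score(y_true, y_guess))                                  # 3 correct out of 4 -> 0.75

X_fake = np.zeros((len(y_true), 1))                                     # features are ignored by the dummy model
dummy = DummyClassifier(strategy='most_frequent').fit(X_fake, y_true)
print(dummy.score(X_fake, y_true))                                      # same 0.75 majority-class baseline
```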
###Code
from sklearn.metrics import accuracy_score as acc
acc(train_labels['status_group'], y_pred)
###Output
_____no_output_____
###Markdown
Do train/validate/test split Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> You will want to create your own training and validation sets (by splitting the Kaggle “training” data). You will just use your smaller training set (a subset of Kaggle’s training data) for building your model, and you can evaluate it on your validation set (also a subset of Kaggle’s training data) before you submit to Kaggle. Sebastian Raschka, [Model Evaluation](https://sebastianraschka.com/blog/2018/model-evaluation-selection-part4.html)> Since “a picture is worth a thousand words,” I want to conclude with a figure (shown below) that summarizes my personal recommendations ... Usually, we want to do **"Model selection (hyperparameter optimization) _and_ performance estimation."**Therefore, we use **"3-way holdout method (train/validation/test split)"** or we use **"cross-validation with independent test set."** We have two options for where we choose to split:- Time- RandomTo split on time, we can use pandas.To split randomly, we can use the [**`sklearn.model_selection.train_test_split`**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) function.
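Here is a hedged sketch of what the time-based option could look like with pandas. The `date_recorded` column name and the 80/20 cutoff are assumptions for illustration; check the actual columns in `train_features`. The cell below does the random, stratified option instead.

```python
# Hedged sketch: time-based split (earliest 80% for training, latest 20% for validation).
import pandas as pd

df = train_features.copy()
df['date_recorded'] = pd.to_datetime(df['date_recorded'])   # assumed column name
df = df.sort_values('date_recorded')

cutoff = int(len(df) * 0.8)
train_time = df.iloc[:cutoff]
val_time = df.iloc[cutoff:]
```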
###Code
from sklearn.model_selection import train_test_split as tts
X_train = train_features
y_train = train_labels['status_group']
X_train, X_val, y_train, y_val = tts(X_train,
y_train,
train_size=.8,
test_size=.2,
stratify=y_train,
random_state=40)
###Output
_____no_output_____
###Markdown
Use scikit-learn for logistic regression- [sklearn.linear_model.LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)- Wikipedia, [Logistic regression](https://en.wikipedia.org/wiki/Logistic_regression) Begin with baselines: fast, first models Drop non-numeric features
###Code
X_train_num = X_train.select_dtypes('number')
X_val_num = X_val.select_dtypes('number')
###Output
_____no_output_____
###Markdown
Drop nulls if necessary Fit Logistic Regression on train data
###Code
from sklearn.linear_model import LogisticRegression as LR
model = LR(solver='lbfgs', multi_class='auto', max_iter=1000)
model.fit(X_train_num, y_train)
###Output
C:\Users\Paul\Anaconda3\lib\site-packages\sklearn\linear_model\logistic.py:947: ConvergenceWarning: lbfgs failed to converge. Increase the number of iterations.
"of iterations.", ConvergenceWarning)
###Markdown
Evaluate on validation data
###Code
y_pred = model.predict(X_val_num)
acc(y_val, y_pred)
###Output
_____no_output_____
###Markdown
What predictions does a Logistic Regression return?
###Code
y_pred
y_pred_prob = model.predict_proba(X_val_num)
y_pred_prob
###Output
_____no_output_____
###Markdown
Do one-hot encoding of categorical features Install and import [category_encoders](http://contrib.scikit-learn.org/categorical-encoding/)- Local Anaconda: `conda install -c conda-forge category_encoders`- Google Colab: `pip install category_encoders` Check "cardinality" of categorical features[Cardinality](https://simple.wikipedia.org/wiki/Cardinality) means the number of unique values that a feature has:> In mathematics, the cardinality of a set means the number of its elements. For example, the set A = {2, 4, 6} contains 3 elements, and therefore A has a cardinality of 3. One-hot encoding adds a dimension for each unique value of each categorical feature. So, it may not be a good choice for "high cardinality" categoricals that have dozens, hundreds, or thousands of unique values.
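A tiny toy example (made-up values, not the real `quantity` column) shows why cardinality matters: one-hot encoding adds one column per unique value, so cardinality controls how wide the encoded data gets.

```python
# Toy illustration of cardinality and one-hot expansion.
import pandas as pd

s = pd.Series(['dry', 'enough', 'enough', 'seasonal'])
print(s.nunique())          # cardinality = 3
print(pd.get_dummies(s))    # 3 new 0/1 columns, one per unique value
```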
###Code
import category_encoders as ce
X_train.describe(exclude='number').T.sort_values(by='unique')
###Output
_____no_output_____
###Markdown
Explore `quantity` feature
###Code
X_train['quantity'].value_counts(dropna=False)
###Output
_____no_output_____
###Markdown
Encode `quantity` feature
###Code
encoder = ce.OneHotEncoder(use_cat_names=True)
encoded = encoder.fit_transform(X_train['quantity'].values)
encoded.head()
###Output
_____no_output_____
###Markdown
Do one-hot encoding & Scale features, within a complete model fitting workflow. Why and how to scale features before fitting linear modelsScikit-Learn User Guide, [Preprocessing data](https://scikit-learn.org/stable/modules/preprocessing.html)> Standardization of datasets is a common requirement for many machine learning estimators implemented in scikit-learn; they might behave badly if the individual features do not more or less look like standard normally distributed data: Gaussian with zero mean and unit variance.> The `preprocessing` module further provides a utility class `StandardScaler` that implements the `Transformer` API to compute the mean and standard deviation on a training set. The scaler instance can then be used on new data to transform it the same way it did on the training set. How to use encoders and scalers in scikit-learn- Use the **`fit_transform`** method on the **train** set- Use the **`transform`** method on the **validation** set
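As an aside, the same encode → scale → fit sequence can also be wrapped in a scikit-learn `Pipeline`, which keeps the fit-on-train / transform-on-validation bookkeeping automatic. This is only a hedged alternative sketch, not the workflow used in the next cell; it rebuilds the same feature list that the next cell selects.

```python
# Hedged alternative sketch: same steps as the cell below, wrapped in a Pipeline.
import category_encoders as ce
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

feats = ['quantity'] + X_train.select_dtypes('number').columns.drop('id').tolist()

pipeline = make_pipeline(
    ce.OneHotEncoder(use_cat_names=True),   # encoder is fit on train only
    StandardScaler(),                       # scaler is fit on train only
    LogisticRegression(solver='lbfgs', multi_class='auto', max_iter=1000)
)
pipeline.fit(X_train[feats], y_train)
print(pipeline.score(X_val[feats], y_val))  # accuracy on the validation set
```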
###Code
from sklearn.preprocessing import StandardScaler as SS
cat_features = ['quantity']
num_features = X_train.select_dtypes('number').columns.drop(
'id').tolist()
features = cat_features + num_features
X_train_subset = X_train[features]
X_val_subset = X_val[features]
test_sub = test_features[features]
encoder = ce.OneHotEncoder(use_cat_names=True)
X_train_encoded = encoder.fit_transform(X_train_subset)
X_val_encoded = encoder.transform(X_val_subset)
test_enc = encoder.transform(test_sub)
scaler = SS()
X_train_scaled = scaler.fit_transform(X_train_encoded)
X_val_scaled = scaler.transform(X_val_encoded)
test_scaled = scaler.transform(test_enc)
model = LR(solver='lbfgs', multi_class='auto', max_iter=1000)
model.fit(X_train_scaled, y_train)
accuracy = model.score(X_val_scaled, y_val)
print(f'Acc: {accuracy}')
###Output
Acc: 0.6493265993265993
###Markdown
Compare original features, encoded features, & scaled features
###Code
X_train[:1]
X_train_num[:1]
X_train_encoded[:1]
X_train_scaled[:1]
###Output
_____no_output_____
###Markdown
Get & plot coefficients
###Code
coef = pd.Series(model.coef_[0], X_train_encoded.columns)
%matplotlib inline
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
coef.sort_values().plot.barh()
###Output
_____no_output_____
###Markdown
Submit to predictive modeling competition Write submission CSV file

The format for the submission file is simply the row id and the predicted label (for an example, see `sample_submission.csv` on the data download page).

For example, if you just predicted that all the waterpoints were functional you would have the following predictions:

```
id,status_group
50785,functional
51630,functional
17168,functional
45559,functional
49871,functional
```

Your code to generate a submission file may look like this:

```python
# estimator is your scikit-learn estimator, which you've fit on X_train
# X_test is your pandas dataframe or numpy array, with the same number of rows,
# in the same order, as test_features.csv, and the same number of columns,
# in the same order, as X_train
y_pred = estimator.predict(X_test)

# Makes a dataframe with two columns, id and status_group,
# and writes to a csv file, without the index
sample_submission = pd.read_csv('sample_submission.csv')
submission = sample_submission.copy()
submission['status_group'] = y_pred
submission.to_csv('your-submission-filename.csv', index=False)
```
###Code
sample_submission.shape, y_pred.shape, X_val_scaled.shape
y_pred = model.predict(test_scaled)
submission = sample_submission.copy()
submission['status_group'] = y_pred
submission.to_csv('sub_0.csv', index=False)
cd PycharmProjects/lambda/DS-Unit-2-Classification-1/module1-logistic-regression/
ls
###Output
Volume in drive C is OS
Volume Serial Number is AA23-606F
Directory of C:\Users\Paul\PycharmProjects\lambda\DS-Unit-2-Classification-1\module1-logistic-regression
06/03/2019 04:55 PM <DIR> .
06/03/2019 04:55 PM <DIR> ..
06/03/2019 02:01 PM <DIR> .ipynb_checkpoints
06/03/2019 04:55 PM 96,215 logistic_regression_categorical_encoding.ipynb
06/03/2019 10:56 AM 0 README.md
06/03/2019 04:54 PM 264,867 sub_0.csv
3 File(s) 361,082 bytes
3 Dir(s) 93,083,361,280 bytes free
|
Step6_Optimization-Layers-and-Activation.ipynb
|
###Markdown
Optimizing Model Hidden Layers and Activation Functions Hidden layers and Nodes

Reducing the number of `Layers` would require more `epochs` (~500-1000):

```python
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(64, activation="relu", input_dim=3))
model.add(tf.keras.layers.Dense(32, activation="relu"))
model.add(tf.keras.layers.Dense(1, activation="linear"))
```

Using too few neurons in the `Layers` will result in underfitting. Using too many neurons in the `Layers` may result in overfitting and longer training times.

There are many rule-of-thumb methods for determining the correct number of neurons to use in the hidden layers, such as the following:
- The number of hidden neurons should be between the size of the input layer and the size of the output layer.
- The number of hidden neurons should be 2/3 the size of the input layer, plus the size of the output layer.
- The number of hidden neurons should be less than twice the size of the input layer.

[link](https://stats.stackexchange.com/questions/181/how-to-choose-the-number-of-hidden-layers-and-nodes-in-a-feedforward-neural-netw)

Activation functions

Activation functions generally work best with the default hyperparameters used in popular frameworks such as Tensorflow and Pytorch. The `ReLU` function is widely used. Typical activation functions for classification ML are `softmax`, `sigmoid`, and `Tanh`. In this project, `ReLU` seems to be the best-fitting activation function.
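If you want to check that claim empirically, a minimal sketch like the one below loops over a few activations on the same small architecture and compares the final training loss. It assumes the scaled arrays `X_scaled_training` / `Y_scaled_training` prepared later in this notebook, and it is illustrative only, not part of the original workflow.

```python
# Minimal sketch: compare a few activation functions on the same small architecture.
for act in ["relu", "tanh", "sigmoid"]:
    m = tf.keras.models.Sequential([
        tf.keras.layers.Dense(64, activation=act, input_dim=3),
        tf.keras.layers.Dense(32, activation=act),
        tf.keras.layers.Dense(1, activation="linear"),
    ])
    m.compile(loss="mae", optimizer="adam")
    hist = m.fit(X_scaled_training, Y_scaled_training, epochs=50, batch_size=64, verbose=0)
    print(act, hist.history["loss"][-1])
```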
###Code
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, StandardScaler
# FUNCTIONS
from functions import *
###Output
_____no_output_____
###Markdown
Import data
###Code
filename_alldata = "data/_nanocomposite_data.csv"
alldata = pd.read_csv(filename_alldata, index_col=None, header=0)
# Drop columns which are not used for now
alldata_clean = alldata.drop(['polymer_p2', 'ratio_1_2','filler_2','wt_l2','owner','foaming'], axis=1)
alldata_clean = mapStringToNum(alldata_clean)
alldata_clean.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5000 entries, 0 to 4999
Data columns (total 4 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 polymer_1 5000 non-null int64
1 filler_1 5000 non-null int64
2 wt_l1 5000 non-null float64
3 conductivity 5000 non-null float64
dtypes: float64(2), int64(2)
memory usage: 156.4 KB
###Markdown
Prepare Dataset for TensorFlow Scaling X and Y data

X data might not need scaling, as the range of values is not high.
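For reference, `MinMaxScaler` with `feature_range=(0, 1)` rescales each feature column as

$$x_{scaled} = \frac{x - x_{min}}{x_{max} - x_{min}}$$

so every input feature ends up in [0, 1].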
###Code
X_scaler = MinMaxScaler(feature_range=(0, 1))
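# The target (conductivity) spans many orders of magnitude (the plots below use a log scale),
# so Y uses a custom scaler defined in functions.py (implementation not shown here).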
Y_scaler = superHighVariationScaler()
###Output
_____no_output_____
###Markdown
Splitting data into training and testing sets
###Code
training_data, testing_data = train_test_split(alldata_clean, test_size=0.2, random_state=25)
# Split into input features (X) and output labels (Y) variables
X_training = training_data.drop('conductivity', axis=1).values
Y_training = training_data[['conductivity']].values
# Pull out columns for X (data to train with) and Y (value to predict)
X_testing = testing_data.drop('conductivity', axis=1).values
Y_testing = testing_data[['conductivity']].values
###Output
_____no_output_____
###Markdown
Scaling data
###Code
# Scale both the training inputs and outputs
X_scaled_training = X_scaler.fit_transform(X_training)
Y_scaled_training = Y_scaler.fit_transform(Y_training)
# It's very important that the training and test data are scaled with the same scaler.
X_scaled_testing = X_scaler.transform(X_testing)
Y_scaled_testing = Y_scaler.transform(Y_testing)
###Output
_____no_output_____
###Markdown
Model build and compile
###Code
# Create model
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(256, activation="relu", input_dim=3))
model.add(tf.keras.layers.Dense(64, activation="relu"))
model.add(tf.keras.layers.Dense(8, activation="relu"))
model.add(tf.keras.layers.Dense(1, activation="linear"))
model.compile(loss=tf.keras.losses.MeanAbsolutePercentageError(),
optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001, decay=1e-6),
metrics = tf.keras.metrics.MeanSquaredLogarithmicError() )
history = model.fit(X_scaled_training, Y_scaled_training,
validation_data=(X_scaled_testing, Y_scaled_testing),
epochs=200, batch_size=64)
###Output
Epoch 1/200
63/63 [==============================] - 1s 6ms/step - loss: 104.8891 - mean_squared_logarithmic_error: 0.1858 - val_loss: 99.5350 - val_mean_squared_logarithmic_error: 0.1752
Epoch 2/200
63/63 [==============================] - 0s 3ms/step - loss: 101.4971 - mean_squared_logarithmic_error: 0.1838 - val_loss: 99.4876 - val_mean_squared_logarithmic_error: 0.1742
Epoch 3/200
63/63 [==============================] - 0s 3ms/step - loss: 101.8944 - mean_squared_logarithmic_error: 0.1817 - val_loss: 99.2869 - val_mean_squared_logarithmic_error: 0.1721
Epoch 4/200
63/63 [==============================] - 0s 3ms/step - loss: 103.1290 - mean_squared_logarithmic_error: 0.1826 - val_loss: 99.7417 - val_mean_squared_logarithmic_error: 0.1720
Epoch 5/200
63/63 [==============================] - 0s 3ms/step - loss: 100.4824 - mean_squared_logarithmic_error: 0.1817 - val_loss: 100.0342 - val_mean_squared_logarithmic_error: 0.1741
Epoch 6/200
63/63 [==============================] - 0s 3ms/step - loss: 101.7561 - mean_squared_logarithmic_error: 0.1808 - val_loss: 100.3768 - val_mean_squared_logarithmic_error: 0.1731
Epoch 7/200
63/63 [==============================] - 0s 3ms/step - loss: 101.1464 - mean_squared_logarithmic_error: 0.1808 - val_loss: 99.4036 - val_mean_squared_logarithmic_error: 0.1730
Epoch 8/200
63/63 [==============================] - 0s 3ms/step - loss: 101.0093 - mean_squared_logarithmic_error: 0.1798 - val_loss: 99.1748 - val_mean_squared_logarithmic_error: 0.1701
Epoch 9/200
63/63 [==============================] - 0s 3ms/step - loss: 100.6395 - mean_squared_logarithmic_error: 0.1783 - val_loss: 100.6448 - val_mean_squared_logarithmic_error: 0.1659
Epoch 10/200
63/63 [==============================] - 0s 3ms/step - loss: 107.9072 - mean_squared_logarithmic_error: 0.1755 - val_loss: 99.4663 - val_mean_squared_logarithmic_error: 0.1669
Epoch 11/200
63/63 [==============================] - 0s 3ms/step - loss: 101.0271 - mean_squared_logarithmic_error: 0.1720 - val_loss: 99.0417 - val_mean_squared_logarithmic_error: 0.1658
Epoch 12/200
63/63 [==============================] - 0s 3ms/step - loss: 100.1941 - mean_squared_logarithmic_error: 0.1734 - val_loss: 99.0194 - val_mean_squared_logarithmic_error: 0.1621
Epoch 13/200
63/63 [==============================] - 0s 3ms/step - loss: 101.9399 - mean_squared_logarithmic_error: 0.1710 - val_loss: 99.1778 - val_mean_squared_logarithmic_error: 0.1624
Epoch 14/200
63/63 [==============================] - 0s 3ms/step - loss: 100.5320 - mean_squared_logarithmic_error: 0.1687 - val_loss: 98.6691 - val_mean_squared_logarithmic_error: 0.1596
Epoch 15/200
63/63 [==============================] - 0s 3ms/step - loss: 103.5322 - mean_squared_logarithmic_error: 0.1688 - val_loss: 98.7074 - val_mean_squared_logarithmic_error: 0.1597
Epoch 16/200
63/63 [==============================] - 0s 3ms/step - loss: 100.8254 - mean_squared_logarithmic_error: 0.1667 - val_loss: 99.9386 - val_mean_squared_logarithmic_error: 0.1565
Epoch 17/200
63/63 [==============================] - 0s 3ms/step - loss: 100.3681 - mean_squared_logarithmic_error: 0.1673 - val_loss: 98.9317 - val_mean_squared_logarithmic_error: 0.1578
Epoch 18/200
63/63 [==============================] - 0s 3ms/step - loss: 99.2468 - mean_squared_logarithmic_error: 0.1679 - val_loss: 99.5260 - val_mean_squared_logarithmic_error: 0.1597
Epoch 19/200
63/63 [==============================] - 0s 3ms/step - loss: 99.5870 - mean_squared_logarithmic_error: 0.1669 - val_loss: 99.0273 - val_mean_squared_logarithmic_error: 0.1559
Epoch 20/200
63/63 [==============================] - 0s 3ms/step - loss: 99.5678 - mean_squared_logarithmic_error: 0.1664 - val_loss: 98.5689 - val_mean_squared_logarithmic_error: 0.1596
Epoch 21/200
63/63 [==============================] - 0s 3ms/step - loss: 102.0487 - mean_squared_logarithmic_error: 0.1660 - val_loss: 98.2283 - val_mean_squared_logarithmic_error: 0.1569
Epoch 22/200
63/63 [==============================] - 0s 3ms/step - loss: 97.7747 - mean_squared_logarithmic_error: 0.1653 - val_loss: 98.1898 - val_mean_squared_logarithmic_error: 0.1555
Epoch 23/200
63/63 [==============================] - 0s 3ms/step - loss: 97.4516 - mean_squared_logarithmic_error: 0.1639 - val_loss: 97.4197 - val_mean_squared_logarithmic_error: 0.1559
Epoch 24/200
63/63 [==============================] - 0s 3ms/step - loss: 98.9120 - mean_squared_logarithmic_error: 0.1633 - val_loss: 97.6090 - val_mean_squared_logarithmic_error: 0.1572
Epoch 25/200
63/63 [==============================] - 0s 3ms/step - loss: 98.9001 - mean_squared_logarithmic_error: 0.1641 - val_loss: 97.1878 - val_mean_squared_logarithmic_error: 0.1546
Epoch 26/200
63/63 [==============================] - 0s 3ms/step - loss: 98.4417 - mean_squared_logarithmic_error: 0.1614 - val_loss: 98.8136 - val_mean_squared_logarithmic_error: 0.1514
Epoch 27/200
63/63 [==============================] - 0s 3ms/step - loss: 98.8245 - mean_squared_logarithmic_error: 0.1625 - val_loss: 96.4319 - val_mean_squared_logarithmic_error: 0.1538
Epoch 28/200
63/63 [==============================] - 0s 3ms/step - loss: 99.4263 - mean_squared_logarithmic_error: 0.1632 - val_loss: 96.4162 - val_mean_squared_logarithmic_error: 0.1525
Epoch 29/200
63/63 [==============================] - 0s 3ms/step - loss: 101.3300 - mean_squared_logarithmic_error: 0.1636 - val_loss: 98.4439 - val_mean_squared_logarithmic_error: 0.1505
Epoch 30/200
63/63 [==============================] - 0s 3ms/step - loss: 96.5756 - mean_squared_logarithmic_error: 0.1607 - val_loss: 97.3160 - val_mean_squared_logarithmic_error: 0.1580
Epoch 31/200
63/63 [==============================] - 0s 3ms/step - loss: 100.0728 - mean_squared_logarithmic_error: 0.1597 - val_loss: 95.8190 - val_mean_squared_logarithmic_error: 0.1500
Epoch 32/200
63/63 [==============================] - 0s 3ms/step - loss: 98.1707 - mean_squared_logarithmic_error: 0.1593 - val_loss: 96.3881 - val_mean_squared_logarithmic_error: 0.1522
Epoch 33/200
63/63 [==============================] - 0s 3ms/step - loss: 97.6877 - mean_squared_logarithmic_error: 0.1573 - val_loss: 95.3114 - val_mean_squared_logarithmic_error: 0.1497
Epoch 34/200
63/63 [==============================] - 0s 3ms/step - loss: 95.6064 - mean_squared_logarithmic_error: 0.1578 - val_loss: 94.5784 - val_mean_squared_logarithmic_error: 0.1492
Epoch 35/200
63/63 [==============================] - 0s 3ms/step - loss: 96.2598 - mean_squared_logarithmic_error: 0.1563 - val_loss: 95.0697 - val_mean_squared_logarithmic_error: 0.1474
Epoch 36/200
63/63 [==============================] - 0s 3ms/step - loss: 95.7262 - mean_squared_logarithmic_error: 0.1551 - val_loss: 95.2725 - val_mean_squared_logarithmic_error: 0.1460
Epoch 37/200
63/63 [==============================] - 0s 3ms/step - loss: 96.5904 - mean_squared_logarithmic_error: 0.1541 - val_loss: 94.0375 - val_mean_squared_logarithmic_error: 0.1456
Epoch 38/200
63/63 [==============================] - 0s 3ms/step - loss: 97.5702 - mean_squared_logarithmic_error: 0.1553 - val_loss: 94.1149 - val_mean_squared_logarithmic_error: 0.1485
Epoch 39/200
63/63 [==============================] - 0s 3ms/step - loss: 95.2810 - mean_squared_logarithmic_error: 0.1531 - val_loss: 93.5517 - val_mean_squared_logarithmic_error: 0.1468
Epoch 40/200
63/63 [==============================] - 0s 3ms/step - loss: 97.1967 - mean_squared_logarithmic_error: 0.1559 - val_loss: 93.8673 - val_mean_squared_logarithmic_error: 0.1448
Epoch 41/200
63/63 [==============================] - 0s 3ms/step - loss: 94.8852 - mean_squared_logarithmic_error: 0.1533 - val_loss: 93.8628 - val_mean_squared_logarithmic_error: 0.1461
Epoch 42/200
63/63 [==============================] - 0s 3ms/step - loss: 94.9579 - mean_squared_logarithmic_error: 0.1534 - val_loss: 93.6777 - val_mean_squared_logarithmic_error: 0.1460
Epoch 43/200
63/63 [==============================] - 0s 3ms/step - loss: 94.2146 - mean_squared_logarithmic_error: 0.1527 - val_loss: 92.3212 - val_mean_squared_logarithmic_error: 0.1447
Epoch 44/200
63/63 [==============================] - 0s 3ms/step - loss: 95.2732 - mean_squared_logarithmic_error: 0.1531 - val_loss: 92.8725 - val_mean_squared_logarithmic_error: 0.1421
Epoch 45/200
63/63 [==============================] - 0s 3ms/step - loss: 94.5589 - mean_squared_logarithmic_error: 0.1499 - val_loss: 89.9434 - val_mean_squared_logarithmic_error: 0.1400
Epoch 46/200
63/63 [==============================] - 0s 3ms/step - loss: 94.1042 - mean_squared_logarithmic_error: 0.1501 - val_loss: 90.4881 - val_mean_squared_logarithmic_error: 0.1423
Epoch 47/200
63/63 [==============================] - 0s 3ms/step - loss: 94.2816 - mean_squared_logarithmic_error: 0.1497 - val_loss: 89.2198 - val_mean_squared_logarithmic_error: 0.1395
Epoch 48/200
63/63 [==============================] - 0s 3ms/step - loss: 94.1962 - mean_squared_logarithmic_error: 0.1489 - val_loss: 91.5581 - val_mean_squared_logarithmic_error: 0.1400
Epoch 49/200
63/63 [==============================] - 0s 4ms/step - loss: 93.1992 - mean_squared_logarithmic_error: 0.1483 - val_loss: 88.1647 - val_mean_squared_logarithmic_error: 0.1398
Epoch 50/200
63/63 [==============================] - 0s 3ms/step - loss: 93.6228 - mean_squared_logarithmic_error: 0.1467 - val_loss: 89.7897 - val_mean_squared_logarithmic_error: 0.1366
Epoch 51/200
63/63 [==============================] - 0s 3ms/step - loss: 94.4701 - mean_squared_logarithmic_error: 0.1461 - val_loss: 91.6798 - val_mean_squared_logarithmic_error: 0.1407
Epoch 52/200
63/63 [==============================] - 0s 3ms/step - loss: 95.5173 - mean_squared_logarithmic_error: 0.1443 - val_loss: 89.6225 - val_mean_squared_logarithmic_error: 0.1378
Epoch 53/200
63/63 [==============================] - 0s 3ms/step - loss: 89.9848 - mean_squared_logarithmic_error: 0.1433 - val_loss: 87.3974 - val_mean_squared_logarithmic_error: 0.1338
Epoch 54/200
63/63 [==============================] - 0s 3ms/step - loss: 89.2019 - mean_squared_logarithmic_error: 0.1408 - val_loss: 86.2871 - val_mean_squared_logarithmic_error: 0.1327
Epoch 55/200
63/63 [==============================] - 0s 3ms/step - loss: 90.0371 - mean_squared_logarithmic_error: 0.1402 - val_loss: 87.0603 - val_mean_squared_logarithmic_error: 0.1315
Epoch 56/200
63/63 [==============================] - 0s 3ms/step - loss: 92.2906 - mean_squared_logarithmic_error: 0.1419 - val_loss: 91.4453 - val_mean_squared_logarithmic_error: 0.1307
Epoch 57/200
63/63 [==============================] - 0s 3ms/step - loss: 89.1377 - mean_squared_logarithmic_error: 0.1377 - val_loss: 89.0525 - val_mean_squared_logarithmic_error: 0.1304
Epoch 58/200
63/63 [==============================] - 0s 3ms/step - loss: 88.3720 - mean_squared_logarithmic_error: 0.1370 - val_loss: 84.6948 - val_mean_squared_logarithmic_error: 0.1287
Epoch 59/200
63/63 [==============================] - 0s 3ms/step - loss: 89.2033 - mean_squared_logarithmic_error: 0.1369 - val_loss: 87.2822 - val_mean_squared_logarithmic_error: 0.1308
Epoch 60/200
63/63 [==============================] - 0s 3ms/step - loss: 86.5040 - mean_squared_logarithmic_error: 0.1362 - val_loss: 85.4422 - val_mean_squared_logarithmic_error: 0.1265
Epoch 61/200
63/63 [==============================] - 0s 3ms/step - loss: 88.7046 - mean_squared_logarithmic_error: 0.1353 - val_loss: 83.9103 - val_mean_squared_logarithmic_error: 0.1263
Epoch 62/200
63/63 [==============================] - 0s 3ms/step - loss: 88.3612 - mean_squared_logarithmic_error: 0.1337 - val_loss: 86.5970 - val_mean_squared_logarithmic_error: 0.1230
Epoch 63/200
63/63 [==============================] - 0s 3ms/step - loss: 86.7880 - mean_squared_logarithmic_error: 0.1328 - val_loss: 84.6663 - val_mean_squared_logarithmic_error: 0.1284
Epoch 64/200
63/63 [==============================] - 0s 3ms/step - loss: 87.0158 - mean_squared_logarithmic_error: 0.1320 - val_loss: 86.8915 - val_mean_squared_logarithmic_error: 0.1297
Epoch 65/200
63/63 [==============================] - 0s 3ms/step - loss: 90.7955 - mean_squared_logarithmic_error: 0.1315 - val_loss: 82.7680 - val_mean_squared_logarithmic_error: 0.1233
Epoch 66/200
63/63 [==============================] - 0s 3ms/step - loss: 86.7194 - mean_squared_logarithmic_error: 0.1303 - val_loss: 84.1497 - val_mean_squared_logarithmic_error: 0.1267
Epoch 67/200
63/63 [==============================] - 0s 3ms/step - loss: 84.5823 - mean_squared_logarithmic_error: 0.1297 - val_loss: 81.4445 - val_mean_squared_logarithmic_error: 0.1245
Epoch 68/200
63/63 [==============================] - 0s 3ms/step - loss: 87.2174 - mean_squared_logarithmic_error: 0.1285 - val_loss: 80.6087 - val_mean_squared_logarithmic_error: 0.1230
Epoch 69/200
63/63 [==============================] - 0s 3ms/step - loss: 81.3429 - mean_squared_logarithmic_error: 0.1272 - val_loss: 80.0888 - val_mean_squared_logarithmic_error: 0.1180
Epoch 70/200
63/63 [==============================] - 0s 3ms/step - loss: 83.0460 - mean_squared_logarithmic_error: 0.1250 - val_loss: 81.2358 - val_mean_squared_logarithmic_error: 0.1196
Epoch 71/200
63/63 [==============================] - 0s 3ms/step - loss: 81.9700 - mean_squared_logarithmic_error: 0.1266 - val_loss: 78.6166 - val_mean_squared_logarithmic_error: 0.1154
Epoch 72/200
63/63 [==============================] - 0s 3ms/step - loss: 80.7523 - mean_squared_logarithmic_error: 0.1222 - val_loss: 79.5919 - val_mean_squared_logarithmic_error: 0.1166
Epoch 73/200
63/63 [==============================] - 0s 3ms/step - loss: 78.9497 - mean_squared_logarithmic_error: 0.1235 - val_loss: 78.4522 - val_mean_squared_logarithmic_error: 0.1145
Epoch 74/200
63/63 [==============================] - 0s 3ms/step - loss: 78.2652 - mean_squared_logarithmic_error: 0.1216 - val_loss: 76.5593 - val_mean_squared_logarithmic_error: 0.1138
Epoch 75/200
63/63 [==============================] - 0s 3ms/step - loss: 82.1060 - mean_squared_logarithmic_error: 0.1204 - val_loss: 78.9325 - val_mean_squared_logarithmic_error: 0.1111
Epoch 76/200
63/63 [==============================] - 0s 3ms/step - loss: 79.9692 - mean_squared_logarithmic_error: 0.1195 - val_loss: 74.9544 - val_mean_squared_logarithmic_error: 0.1133
Epoch 77/200
63/63 [==============================] - 0s 3ms/step - loss: 76.6096 - mean_squared_logarithmic_error: 0.1195 - val_loss: 74.5828 - val_mean_squared_logarithmic_error: 0.1097
Epoch 78/200
63/63 [==============================] - 0s 3ms/step - loss: 77.7061 - mean_squared_logarithmic_error: 0.1179 - val_loss: 74.9138 - val_mean_squared_logarithmic_error: 0.1113
Epoch 79/200
63/63 [==============================] - 0s 3ms/step - loss: 75.1152 - mean_squared_logarithmic_error: 0.1167 - val_loss: 72.5842 - val_mean_squared_logarithmic_error: 0.1079
Epoch 80/200
63/63 [==============================] - 0s 3ms/step - loss: 74.7399 - mean_squared_logarithmic_error: 0.1160 - val_loss: 73.5968 - val_mean_squared_logarithmic_error: 0.1079
Epoch 81/200
63/63 [==============================] - 0s 4ms/step - loss: 77.0355 - mean_squared_logarithmic_error: 0.1172 - val_loss: 74.8247 - val_mean_squared_logarithmic_error: 0.1113
Epoch 82/200
63/63 [==============================] - 0s 3ms/step - loss: 72.4570 - mean_squared_logarithmic_error: 0.1166 - val_loss: 71.8861 - val_mean_squared_logarithmic_error: 0.1093
Epoch 83/200
63/63 [==============================] - 0s 4ms/step - loss: 74.2396 - mean_squared_logarithmic_error: 0.1158 - val_loss: 73.0308 - val_mean_squared_logarithmic_error: 0.1095
Epoch 84/200
63/63 [==============================] - 0s 3ms/step - loss: 71.8431 - mean_squared_logarithmic_error: 0.1154 - val_loss: 71.5406 - val_mean_squared_logarithmic_error: 0.1084
Epoch 85/200
63/63 [==============================] - 0s 3ms/step - loss: 71.2986 - mean_squared_logarithmic_error: 0.1151 - val_loss: 70.6712 - val_mean_squared_logarithmic_error: 0.1071
Epoch 86/200
63/63 [==============================] - 0s 3ms/step - loss: 73.7692 - mean_squared_logarithmic_error: 0.1147 - val_loss: 70.1696 - val_mean_squared_logarithmic_error: 0.1044
Epoch 87/200
63/63 [==============================] - 0s 3ms/step - loss: 70.8436 - mean_squared_logarithmic_error: 0.1094 - val_loss: 68.4915 - val_mean_squared_logarithmic_error: 0.0998
###Markdown
Plotting predictions vs testing data
###Code
# Calculate predictions
PredValSet = model.predict(X_scaled_testing)
PredValSet2 = Y_scaler.inverse_transform(PredValSet)
compdata = testing_data.copy()
compdata = mapNumToString (compdata)
compdata['labels'] = compdata['polymer_1'] + "-" + compdata['filler_1']
compdata['type'] = 'Experiment'
compdata2 = compdata.copy()
compdata2['type'] = 'Predicted'
compdata2['conductivity'] = PredValSet2
compdata = compdata.append(compdata2, ignore_index = True)
g = sns.relplot(data=compdata ,x="wt_l1", y ="conductivity", hue="type", col="labels", kind="scatter", col_wrap =3 );
g.set_xlabels("weight fraction (%)");
g.set_ylabels("conductivity (S/m)");
g.set(yscale="log");
###Output
_____no_output_____
###Markdown
Extrapolation: Estimate higher wt (>25%)
###Code
filename_unknowndata7 = "data-evaluation/HDPE_SWCNT_data-set-7.csv"
unknowndata7 = pd.read_csv(filename_unknowndata7, index_col=None, header=0)
unknowndata7.drop(['polymer_p2', 'ratio_1_2','filler_2','wt_l2','owner','foaming'], axis=1, inplace=True) #,'foaming'
unknowndata7_clean = unknowndata7.copy()
unknowndata7_clean = mapStringToNum(unknowndata7_clean)
# Pull out columns for X (data to train with) and Y (value to predict)
X_unknowndata7 = unknowndata7_clean.drop('conductivity', axis=1).values
X_scaled_unknowndata7 = X_scaler.transform(X_unknowndata7)
# Calculate predictions
PredValSet_unknowndata7 = model.predict(X_scaled_unknowndata7)
PredValSet_unknowndata72 = Y_scaler.inverse_transform(PredValSet_unknowndata7)
compdata = unknowndata7.copy()
compdata['labels'] = compdata['polymer_1'] + "-" + compdata['filler_1'] + "_predicted_unknown"
compdata['conductivity'] = PredValSet_unknowndata72
######################
filename_data8 = "data-evaluation/HDPE_SWCNT_data-set-8.csv"
data8 = pd.read_csv(filename_data8, index_col=None, header=0)
data8['labels']= data8['polymer_1'] + "-" + data8['filler_1'] + "_actual_data"
######################
compdata = compdata.append(data8, ignore_index = True)
fig_dims = (15, 6)
fig, ax = plt.subplots(figsize=fig_dims)
plt.xlabel("weight fraction (%)")
plt.ylabel("conductivity (S/m)")
plt.yscale("log")
g = sns.scatterplot(data=compdata ,x="wt_l1", y ="conductivity", hue="labels" , ax = ax ,markers=["-","x"] );
###Output
_____no_output_____
###Markdown
Predicting unknown case - treated HDPE + SWCNT
###Code
filename_HDPEtreated_SWCNT = "data-evaluation/HDPEtreated_SWCNT_data-set-6.csv"
data_HDPEtreated_SWCNT = pd.read_csv(filename_HDPEtreated_SWCNT, index_col=None, header=0)
data_HDPEtreated_SWCNT_clean = data_HDPEtreated_SWCNT.drop(
['polymer_p2', 'ratio_1_2','filler_2','wt_l2','owner','foaming'], axis=1)
unknowndata = data_HDPEtreated_SWCNT_clean.copy()
unknowndata['conductivity'] = float("NaN")
unknowndata_clean = unknowndata.copy()
unknowndata_clean = mapStringToNum (unknowndata_clean)
# Pull out columns for X (data to train with) and Y (value to predict)
X_unknowndata = unknowndata_clean.drop('conductivity', axis=1).values
X_scaled_unknowndata = X_scaler.transform(X_unknowndata)
# Calculate predictions
PredValSet_unknowndata = model.predict(X_scaled_unknowndata)
PredValSet_unknowndata2 = Y_scaler.inverse_transform(PredValSet_unknowndata)
compdata = unknowndata.copy()
compdata['labels'] = compdata['polymer_1'] + "-" + compdata['filler_1'] + "_predicted_unknown"
compdata['conductivity'] = PredValSet_unknowndata2
alldata['labels'] = alldata['polymer_1'] + "-" + alldata['filler_1']
compdata1 = alldata[alldata['filler_1'] != "GNP" ].copy()
compdata = compdata.append(compdata1, ignore_index = True)
# reduce data rows to 5% (sparse data)
drop_indices = np.random.choice(compdata.index, int(np.ceil(len(compdata.index) * 0.95) ) , replace=False)
compdata_subset = compdata.drop(drop_indices)
fig_dims = (15, 6)
fig, ax = plt.subplots(figsize=fig_dims)
plt.xlabel("weight fraction (%)")
plt.ylabel("conductivity (S/m)")
plt.yscale("log")
plt.xlim([0,25])
g = sns.scatterplot(data=compdata_subset ,x="wt_l1", y ="conductivity", hue="labels" , ax = ax )
###Output
_____no_output_____
|
notebooks/Evaluations/Continuous_Timeseries/All_Depths_ORCA/PointWells/201905_Hindcast/2015_PointWells_Evaluations.ipynb
|
###Markdown
This notebook contains Hovmoller plots that compare the model output over many different depths to the results from the ORCA Buoy data.
###Code
import sys
sys.path.append('/ocean/kflanaga/MEOPAR/analysis-keegan/notebooks/Tools')
import numpy as np
import matplotlib.pyplot as plt
import os
import pandas as pd
import netCDF4 as nc
import xarray as xr
import datetime as dt
from salishsea_tools import evaltools as et, viz_tools, places
import gsw
import matplotlib.gridspec as gridspec
import matplotlib as mpl
import matplotlib.dates as mdates
import cmocean as cmo
import scipy.interpolate as sinterp
import math
from scipy import io
import pickle
import cmocean
import json
import Keegan_eval_tools as ket
from collections import OrderedDict
from matplotlib.colors import LogNorm
fs=16
mpl.rc('xtick', labelsize=fs)
mpl.rc('ytick', labelsize=fs)
mpl.rc('legend', fontsize=fs)
mpl.rc('axes', titlesize=fs)
mpl.rc('axes', labelsize=fs)
mpl.rc('figure', titlesize=fs)
mpl.rc('font', size=fs)
mpl.rc('font', family='sans-serif', weight='normal', style='normal')
import warnings
#warnings.filterwarnings('ignore')
from IPython.display import Markdown, display
%matplotlib inline
ptrcloc='/ocean/kflanaga/MEOPAR/savedData/201905_ptrc_data'
modver='HC201905' #HC202007 is the other option.
gridloc='/ocean/kflanaga/MEOPAR/savedData/201905_grid_data'
ORCAloc='/ocean/kflanaga/MEOPAR/savedData/ORCAData'
year=2019
mooring='Twanoh'
# Parameters
year = 2015
modver = "HC201905"
mooring = "PointWells"
ptrcloc = "/ocean/kflanaga/MEOPAR/savedData/201905_ptrc_data"
gridloc = "/ocean/kflanaga/MEOPAR/savedData/201905_grid_data"
ORCAloc = "/ocean/kflanaga/MEOPAR/savedData/ORCAData"
orca_dict=io.loadmat(f'{ORCAloc}/{mooring}.mat')
def ORCA_dd_to_dt(date_list):
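    # Convert ORCA fractional year-day values (days elapsed since 1999-12-31) to UTC datetimes,
    # passing NaN entries through unchanged.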
UTC=[]
for yd in date_list:
if np.isnan(yd) == True:
UTC.append(float("NaN"))
else:
start = dt.datetime(1999,12,31)
delta = dt.timedelta(yd)
offset = start + delta
time=offset.replace(microsecond=0)
UTC.append(time)
return UTC
obs_tt=[]
for i in range(len(orca_dict['Btime'][1])):
obs_tt.append(np.nanmean(orca_dict['Btime'][:,i]))
#I should also change this obs_tt thing I have here into datetimes
YD_rounded=[]
for yd in obs_tt:
if np.isnan(yd) == True:
YD_rounded.append(float("NaN"))
else:
YD_rounded.append(math.floor(yd))
obs_dep=[]
for i in orca_dict['Bdepth']:
obs_dep.append(np.nanmean(i))
grid=xr.open_mfdataset(gridloc+f'/ts_{modver}_{year}_{mooring}.nc')
tt=np.array(grid.time_counter)
mod_depth=np.array(grid.deptht)
mod_votemper=(grid.votemper.isel(y=0,x=0))
mod_vosaline=(grid.vosaline.isel(y=0,x=0))
mod_votemper = (np.array(mod_votemper))
mod_votemper = np.ma.masked_equal(mod_votemper,0).T
mod_vosaline = (np.array(mod_vosaline))
mod_vosaline = np.ma.masked_equal(mod_vosaline,0).T
def Process_ORCA(orca_var,depths,dates,year):
# Transpose the columns so that a yearday column can be added.
df_1=pd.DataFrame(orca_var).transpose()
df_YD=pd.DataFrame(dates,columns=['yearday'])
df_1=pd.concat((df_1,df_YD),axis=1)
#Group by yearday so that you can take the daily mean values.
dfg=df_1.groupby(by='yearday')
df_mean=dfg.mean()
df_mean=df_mean.reset_index()
# Convert the yeardays to datetime UTC
UTC=ORCA_dd_to_dt(df_mean['yearday'])
df_mean['yearday']=UTC
# Select the range of dates that you would like.
df_year=df_mean[(df_mean.yearday >= dt.datetime(year,1,1))&(df_mean.yearday <= dt.datetime(year,12,31))]
df_year=df_year.set_index('yearday')
#Add in any missing date values
idx=pd.date_range(df_year.index[0],df_year.index[-1])
df_full=df_year.reindex(idx,fill_value=-1)
#Transpose again so that you can add a depth column.
df_full=df_full.transpose()
df_full['depth']=obs_dep
# Remove any rows that have NA values for depth.
df_full=df_full.dropna(how='all',subset=['depth'])
df_full=df_full.set_index('depth')
#Mask any NA values and any negative values.
df_final=np.ma.masked_invalid(np.array(df_full))
df_final=np.ma.masked_less(df_final,0)
return df_final, df_full.index, df_full.columns
###Output
_____no_output_____
###Markdown
Map of Buoy Location.
###Code
lon,lat=places.PLACES[mooring]['lon lat']
fig, ax = plt.subplots(1,1,figsize = (6,6))
with nc.Dataset('/data/vdo/MEOPAR/NEMO-forcing/grid/bathymetry_201702.nc') as bathy:
viz_tools.plot_coastline(ax, bathy, coords = 'map',isobath=.1)
color=('firebrick')
ax.plot(lon, lat,'o',color = 'firebrick', label=mooring)
ax.set_ylim(47, 49)
ax.legend(bbox_to_anchor=[1,.6,0.45,0])
ax.set_xlim(-124, -122);
ax.set_title('Buoy Location');
###Output
_____no_output_____
###Markdown
Temperature
###Code
df,dep,tim= Process_ORCA(orca_dict['Btemp'],obs_dep,YD_rounded,year)
date_range=(dt.datetime(year,1,1),dt.datetime(year,12,31))
ax=ket.hovmoeller(df,dep,tim,(2,15),date_range,title='Observed Temperature Series',
var_title='Temperature (C$^0$)',vmax=23,vmin=8,cmap=cmo.cm.thermal)
ax=ket.hovmoeller(mod_votemper, mod_depth, tt, (2,15),date_range, title='Modeled Temperature Series',
var_title='Temperature (C$^0$)',vmax=23,vmin=8,cmap=cmo.cm.thermal)
###Output
/ocean/kflanaga/MEOPAR/analysis-keegan/notebooks/Tools/Keegan_eval_tools.py:816: UserWarning: 'set_params()' not defined for locator of type <class 'matplotlib.dates.AutoDateLocator'>
plt.locator_params(axis="x", nbins=20)
###Markdown
Salinity
###Code
df,dep,tim= Process_ORCA(orca_dict['Bsal'],obs_dep,YD_rounded,year)
ax=ket.hovmoeller(df,dep,tim,(2,15),date_range,title='Observed Absolute Salinity Series',
var_title='SA (g/kg)',vmax=31,vmin=14,cmap=cmo.cm.haline)
ax=ket.hovmoeller(mod_vosaline, mod_depth, tt, (2,15),date_range,title='Modeled Absolute Salinity Series',
var_title='SA (g/kg)',vmax=31,vmin=14,cmap=cmo.cm.haline)
grid.close()
bio=xr.open_mfdataset(ptrcloc+f'/ts_{modver}_{year}_{mooring}.nc')
tt=np.array(bio.time_counter)
mod_depth=np.array(bio.deptht)
mod_flagellatets=(bio.flagellates.isel(y=0,x=0))
mod_ciliates=(bio.ciliates.isel(y=0,x=0))
mod_diatoms=(bio.diatoms.isel(y=0,x=0))
mod_Chl = np.array((mod_flagellatets+mod_ciliates+mod_diatoms)*1.8)
mod_Chl = np.ma.masked_equal(mod_Chl,0).T
df,dep,tim= Process_ORCA(orca_dict['Bfluor'],obs_dep,YD_rounded,year)
ax=ket.hovmoeller(df,dep,tim,(2,15),date_range,title='Observed Chlorophyll Series',
var_title='Chlorophyll (mg Chl/m$^3$)',vmin=0,vmax=30,cmap=cmo.cm.algae)
ax=ket.hovmoeller(mod_Chl, mod_depth, tt, (2,15),date_range,title='Modeled Chlorophyll Series',
var_title='Chlorophyll (mg Chl/m$^3$)',vmin=0,vmax=30,cmap=cmo.cm.algae)
bio.close()
###Output
_____no_output_____
|
examples/05_Scalable_GP_Regression_Multidimensional/KISSGP_Deep_Kernel_Regression_CUDA.ipynb
|
###Markdown
Deep Kernel Learning GP Regression (w/ KISS-GP) OverviewIn this notebook, we'll give a brief tutorial on how to use deep kernel learning for regression on a medium scale dataset using SKI. This also demonstrates how to incorporate standard PyTorch modules into a Gaussian process model.
###Code
import math
import torch
import gpytorch
from matplotlib import pyplot as plt
# Make plots inline
%matplotlib inline
###Output
_____no_output_____
###Markdown
Loading DataFor this example notebook, we'll be using the `elevators` UCI dataset used in the paper. Running the next cell downloads a copy of the dataset that has already been scaled and normalized appropriately. For this notebook, we'll simply be splitting the data using the first 80% of the data as training and the last 20% as testing.**Note**: Running the next cell will attempt to download a ~400 KB dataset file to the current directory.
###Code
import urllib.request
import os.path
from scipy.io import loadmat
from math import floor
if not os.path.isfile('elevators.mat'):
print('Downloading \'elevators\' UCI dataset...')
urllib.request.urlretrieve('https://drive.google.com/uc?export=download&id=1jhWL3YUHvXIaftia4qeAyDwVxo6j1alk', 'elevators.mat')
data = torch.Tensor(loadmat('elevators.mat')['data'])
X = data[:, :-1]
X = X - X.min(0)[0]
X = 2 * (X / X.max(0)[0]) - 1
y = data[:, -1]
# Use the first 80% of the data for training, and the last 20% for testing.
train_n = int(floor(0.8*len(X)))
train_x = X[:train_n, :].contiguous().cuda()
train_y = y[:train_n].contiguous().cuda()
test_x = X[train_n:, :].contiguous().cuda()
test_y = y[train_n:].contiguous().cuda()
###Output
_____no_output_____
###Markdown
Defining the DKL Feature ExtractorNext, we define the neural network feature extractor used to define the deep kernel. In this case, we use a fully connected network with the architecture `d -> 1000 -> 500 -> 50 -> 2`, as described in the original DKL paper. All of the code below uses standard PyTorch implementations of neural network layers.
###Code
data_dim = train_x.size(-1)
class LargeFeatureExtractor(torch.nn.Sequential):
def __init__(self):
super(LargeFeatureExtractor, self).__init__()
self.add_module('linear1', torch.nn.Linear(data_dim, 1000))
self.add_module('relu1', torch.nn.ReLU())
self.add_module('linear2', torch.nn.Linear(1000, 500))
self.add_module('relu2', torch.nn.ReLU())
self.add_module('linear3', torch.nn.Linear(500, 50))
self.add_module('relu3', torch.nn.ReLU())
self.add_module('linear4', torch.nn.Linear(50, 2))
feature_extractor = LargeFeatureExtractor().cuda()
###Output
_____no_output_____
###Markdown
Defining the GP ModelWe now define the GP model. For more details on the use of GP models, see our simpler examples. This model uses a `GridInterpolationKernel` (SKI) with an RBF base kernel. The forward methodIn deep kernel learning, the forward method is where most of the interesting new stuff happens. Before calling the mean and covariance modules on the data as in the simple GP regression setting, we first pass the input data `x` through the neural network feature extractor. Then, to ensure that the output features of the neural network remain in the grid bounds expected by SKI, we scale the resulting features to be between -1 and 1.Only after this processing do we call the mean and covariance module of the Gaussian process. This example also demonstrates the flexibility of defining GP models that allow for learned transformations of the data (in this case, via a neural network) before calling the mean and covariance function. Because the neural network in this case maps to two final output features, we will have no problem using SKI.
###Code
class GPRegressionModel(gpytorch.models.ExactGP):
def __init__(self, train_x, train_y, likelihood):
super(GPRegressionModel, self).__init__(train_x, train_y, likelihood)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.GridInterpolationKernel(
gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel(ard_num_dims=2)),
num_dims=2, grid_size=100
)
self.feature_extractor = feature_extractor
def forward(self, x):
# We're first putting our data through a deep net (feature extractor)
# We're also scaling the features so that they're nice values
projected_x = self.feature_extractor(x)
projected_x = projected_x - projected_x.min(0)[0]
projected_x = 2 * (projected_x / projected_x.max(0)[0]) - 1
mean_x = self.mean_module(projected_x)
covar_x = self.covar_module(projected_x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
likelihood = gpytorch.likelihoods.GaussianLikelihood().cuda()
model = GPRegressionModel(train_x, train_y, likelihood).cuda()
###Output
_____no_output_____
###Markdown
Training the modelThe cell below trains the DKL model above, learning both the hyperparameters of the Gaussian process **and** the parameters of the neural network in an end-to-end fashion using Type-II MLE. We run 60 iterations of training using the `Adam` optimizer built in to PyTorch. With a decent GPU, this should only take a few seconds.
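For reference (a standard result, not something specific to this notebook), the exact marginal log likelihood being maximized for a Gaussian likelihood is

$$\log p(\mathbf{y} \mid X, \theta) = -\tfrac{1}{2}\,\tilde{\mathbf{y}}^\top (K_\theta + \sigma^2 I)^{-1}\tilde{\mathbf{y}} \;-\; \tfrac{1}{2}\log\left|K_\theta + \sigma^2 I\right| \;-\; \tfrac{n}{2}\log 2\pi,$$

where $\tilde{\mathbf{y}}$ is the vector of training targets with the (constant) mean subtracted, $K_\theta$ is the kernel matrix evaluated on the extracted features, and $\sigma^2$ is the likelihood noise.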
###Code
# Find optimal model hyperparameters
model.train()
likelihood.train()
# Use the adam optimizer
optimizer = torch.optim.Adam([
{'params': model.feature_extractor.parameters()},
{'params': model.covar_module.parameters()},
{'params': model.mean_module.parameters()},
{'params': model.likelihood.parameters()},
], lr=0.01)
# "Loss" for GPs - the marginal log likelihood
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
training_iterations = 60
def train():
for i in range(training_iterations):
# Zero backprop gradients
optimizer.zero_grad()
# Get output from model
output = model(train_x)
# Calc loss and backprop derivatives
loss = -mll(output, train_y)
loss.backward()
print('Iter %d/%d - Loss: %.3f' % (i + 1, training_iterations, loss.item()))
optimizer.step()
# See dkl_mnist.ipynb for explanation of this flag
with gpytorch.settings.use_toeplitz(True):
%time train()
###Output
/home/jrg365/gpytorch/gpytorch/utils/cholesky.py:14: UserWarning: torch.potrf is deprecated in favour of torch.cholesky and will be removed in the next release. Please use torch.cholesky instead and note that the :attr:`upper` argument in torch.cholesky defaults to ``False``.
potrf_list = [sub_mat.potrf() for sub_mat in mat.view(-1, *mat.shape[-2:])]
/home/jrg365/gpytorch/gpytorch/lazy/added_diag_lazy_tensor.py:66: UserWarning: torch.potrf is deprecated in favour of torch.cholesky and will be removed in the next release. Please use torch.cholesky instead and note that the :attr:`upper` argument in torch.cholesky defaults to ``False``.
ld_one = lr_flipped.potrf().diag().log().sum() * 2
###Markdown
Making PredictionsThe next cell gets the predictive covariance for the test set (and also technically gets the predictive mean, stored in `preds.mean`) using the standard SKI testing code, with no acceleration or precomputation.
###Code
model.eval()
likelihood.eval()
with torch.no_grad(), gpytorch.settings.use_toeplitz(False), gpytorch.settings.fast_pred_var():
preds = model(test_x)
print('Test MAE: {}'.format(torch.mean(torch.abs(preds.mean - test_y))))
###Output
Test MAE: 0.07841506600379944
###Markdown
Deep Kernel Learning GP Regression (w/ KISS-GP) OverviewIn this notebook, we'll give a brief tutorial on how to use deep kernel learning for regression on a medium scale dataset using SKI. This also demonstrates how to incorporate standard PyTorch modules in to a Gaussian process model.
###Code
import math
import torch
import gpytorch
from matplotlib import pyplot as plt
# Make plots inline
%matplotlib inline
###Output
_____no_output_____
###Markdown
Loading DataFor this example notebook, we'll be using the `elevators` UCI dataset used in the paper. Running the next cell downloads a copy of the dataset that has already been scaled and normalized appropriately. For this notebook, we'll simply be splitting the data using the first 80% of the data as training and the last 20% as testing.**Note**: Running the next cell will attempt to download a ~400 KB dataset file to the current directory.
###Code
import urllib.request
import os.path
from scipy.io import loadmat
from math import floor
if not os.path.isfile('elevators.mat'):
print('Downloading \'elevators\' UCI dataset...')
urllib.request.urlretrieve('https://drive.google.com/uc?export=download&id=1jhWL3YUHvXIaftia4qeAyDwVxo6j1alk', 'elevators.mat')
data = torch.Tensor(loadmat('elevators.mat')['data'])
X = data[:, :-1]
X = X - X.min(0)[0]
X = 2 * (X / X.max(0)[0]) - 1
y = data[:, -1]
# Use the first 80% of the data for training, and the last 20% for testing.
train_n = int(floor(0.8*len(X)))
train_x = X[:train_n, :].contiguous().cuda()
train_y = y[:train_n].contiguous().cuda()
test_x = X[train_n:, :].contiguous().cuda()
test_y = y[train_n:].contiguous().cuda()
###Output
_____no_output_____
###Markdown
Defining the DKL Feature ExtractorNext, we define the neural network feature extractor used to define the deep kernel. In this case, we use a fully connected network with the architecture `d -> 1000 -> 500 -> 50 -> 2`, as described in the original DKL paper. All of the code below uses standard PyTorch implementations of neural network layers.
###Code
data_dim = train_x.size(-1)
class LargeFeatureExtractor(torch.nn.Sequential):
def __init__(self):
super(LargeFeatureExtractor, self).__init__()
self.add_module('linear1', torch.nn.Linear(data_dim, 1000))
self.add_module('relu1', torch.nn.ReLU())
self.add_module('linear2', torch.nn.Linear(1000, 500))
self.add_module('relu2', torch.nn.ReLU())
self.add_module('linear3', torch.nn.Linear(500, 50))
self.add_module('relu3', torch.nn.ReLU())
self.add_module('linear4', torch.nn.Linear(50, 2))
feature_extractor = LargeFeatureExtractor().cuda()
###Output
_____no_output_____
###Markdown
Defining the GP ModelWe now define the GP model. For more details on the use of GP models, see our simpler examples. This model uses a `GridInterpolationKernel` (SKI) with an RBF base kernel. The forward methodIn deep kernel learning, the forward method is where most of the interesting new stuff happens. Before calling the mean and covariance modules on the data as in the simple GP regression setting, we first pass the input data `x` through the neural network feature extractor. Then, to ensure that the output features of the neural network remain in the grid bounds expected by SKI, we scales the resulting features to be between 0 and 1.Only after this processing do we call the mean and covariance module of the Gaussian process. This example also demonstrates the flexibility of defining GP models that allow for learned transformations of the data (in this case, via a neural network) before calling the mean and covariance function. Because the neural network in this case maps to two final output features, we will have no problem using SKI.
###Code
class GPRegressionModel(gpytorch.models.ExactGP):
def __init__(self, train_x, train_y, likelihood):
super(GPRegressionModel, self).__init__(train_x, train_y, likelihood)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.GridInterpolationKernel(
gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel(ard_num_dims=2)),
num_dims=2, grid_size=100
)
self.feature_extractor = feature_extractor
def forward(self, x):
# We're first putting our data through a deep net (feature extractor)
# We're also scaling the features so that they're nice values
projected_x = self.feature_extractor(x)
projected_x = projected_x - projected_x.min(0)[0]
projected_x = 2 * (projected_x / projected_x.max(0)[0]) - 1
mean_x = self.mean_module(projected_x)
covar_x = self.covar_module(projected_x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
likelihood = gpytorch.likelihoods.GaussianLikelihood().cuda()
model = GPRegressionModel(train_x, train_y, likelihood).cuda()
###Output
_____no_output_____
###Markdown
Training the modelThe cell below trains the DKL model above, learning both the hyperparameters of the Gaussian process **and** the parameters of the neural network in an end-to-end fashion using Type-II MLE. We run 20 iterations of training using the `Adam` optimizer built in to PyTorch. With a decent GPU, this should only take a few seconds.
###Code
# Find optimal model hyperparameters
model.train()
likelihood.train()
# Use the adam optimizer
optimizer = torch.optim.Adam([
{'params': model.feature_extractor.parameters()},
{'params': model.covar_module.parameters()},
{'params': model.mean_module.parameters()},
{'params': model.likelihood.parameters()},
], lr=0.01)
# "Loss" for GPs - the marginal log likelihood
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
training_iterations = 60
def train():
for i in range(training_iterations):
# Zero backprop gradients
optimizer.zero_grad()
# Get output from model
output = model(train_x)
# Calc loss and backprop derivatives
loss = -mll(output, train_y)
loss.backward()
print('Iter %d/%d - Loss: %.3f' % (i + 1, training_iterations, loss.item()))
optimizer.step()
# See dkl_mnist.ipynb for explanation of this flag
with gpytorch.settings.use_toeplitz(True):
%time train()
###Output
/home/jrg365/gpytorch/gpytorch/utils/cholesky.py:14: UserWarning: torch.potrf is deprecated in favour of torch.cholesky and will be removed in the next release. Please use torch.cholesky instead and note that the :attr:`upper` argument in torch.cholesky defaults to ``False``.
potrf_list = [sub_mat.potrf() for sub_mat in mat.view(-1, *mat.shape[-2:])]
/home/jrg365/gpytorch/gpytorch/lazy/added_diag_lazy_tensor.py:66: UserWarning: torch.potrf is deprecated in favour of torch.cholesky and will be removed in the next release. Please use torch.cholesky instead and note that the :attr:`upper` argument in torch.cholesky defaults to ``False``.
ld_one = lr_flipped.potrf().diag().log().sum() * 2
###Markdown
Making PredictionsThe next cell gets the predictive covariance for the test set (and also technically gets the predictive mean, stored in `preds.mean`) using the standard SKI testing code, with no acceleration or precomputation.
###Code
model.eval()
likelihood.eval()
with torch.no_grad(), gpytorch.settings.use_toeplitz(False), gpytorch.settings.fast_pred_var():
preds = model(test_x)
print('Test MAE: {}'.format(torch.mean(torch.abs(preds.mean - test_y))))
###Output
Test MAE: 0.07841506600379944
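###Markdown
The `preds` object above is a multivariate normal distribution, so besides the mean it also exposes the predictive uncertainty. The cell below is a small optional sketch (not part of the original example) showing one way to inspect the per-point predictive variance and the 2-sigma confidence region using standard GPyTorch attributes.
###Code
with torch.no_grad(), gpytorch.settings.use_toeplitz(False), gpytorch.settings.fast_pred_var():
    preds = model(test_x)
    # Per-point predictive variance and a (lower, upper) 2-sigma confidence region
    pred_variance = preds.variance
    lower, upper = preds.confidence_region()
print('Variance of first 5 test points:', pred_variance[:5])
print('2-sigma bounds of first test point: [{:.3f}, {:.3f}]'.format(lower[0].item(), upper[0].item()))
###Output
_____no_output_____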
###Markdown
Deep Kernel Learning GP Regression (w/ KISS-GP) OverviewIn this notebook, we'll give a brief tutorial on how to use deep kernel learning for regression on a medium scale dataset using SKI. This also demonstrates how to incorporate standard PyTorch modules in to a Gaussian process model.
###Code
import math
import torch
import gpytorch
from matplotlib import pyplot as plt
# Make plots inline
%matplotlib inline
###Output
_____no_output_____
###Markdown
Loading DataFor this example notebook, we'll be using the `elevators` UCI dataset used in the paper. Running the next cell downloads a copy of the dataset that has already been scaled and normalized appropriately. For this notebook, we'll simply be splitting the data using the first 80% of the data as training and the last 20% as testing.**Note**: Running the next cell will attempt to download a ~400 KB dataset file to the current directory.
###Code
import urllib.request
import os.path
from scipy.io import loadmat
from math import floor
if not os.path.isfile('elevators.mat'):
print('Downloading \'elevators\' UCI dataset...')
urllib.request.urlretrieve('https://drive.google.com/uc?export=download&id=1jhWL3YUHvXIaftia4qeAyDwVxo6j1alk', 'elevators.mat')
data = torch.Tensor(loadmat('elevators.mat')['data'])
X = data[:, :-1]
X = X - X.min(0)[0]
X = 2 * (X / X.max(0)[0]) - 1
y = data[:, -1]
# Use the first 80% of the data for training, and the last 20% for testing.
train_n = int(floor(0.8*len(X)))
train_x = X[:train_n, :].contiguous().cuda()
train_y = y[:train_n].contiguous().cuda()
test_x = X[train_n:, :].contiguous().cuda()
test_y = y[train_n:].contiguous().cuda()
###Output
_____no_output_____
###Markdown
Defining the DKL Feature ExtractorNext, we define the neural network feature extractor used to define the deep kernel. In this case, we use a fully connected network with the architecture `d -> 1000 -> 500 -> 50 -> 2`, as described in the original DKL paper. All of the code below uses standard PyTorch implementations of neural network layers.
###Code
data_dim = train_x.size(-1)
class LargeFeatureExtractor(torch.nn.Sequential):
def __init__(self):
super(LargeFeatureExtractor, self).__init__()
self.add_module('linear1', torch.nn.Linear(data_dim, 1000))
self.add_module('relu1', torch.nn.ReLU())
self.add_module('linear2', torch.nn.Linear(1000, 500))
self.add_module('relu2', torch.nn.ReLU())
self.add_module('linear3', torch.nn.Linear(500, 50))
self.add_module('relu3', torch.nn.ReLU())
self.add_module('linear4', torch.nn.Linear(50, 2))
feature_extractor = LargeFeatureExtractor().cuda()
###Output
_____no_output_____
###Markdown
Defining the GP ModelWe now define the GP model. For more details on the use of GP models, see our simpler examples. This model uses a `GridInterpolationKernel` (SKI) with an RBF base kernel. The forward methodIn deep kernel learning, the forward method is where most of the interesting new stuff happens. Before calling the mean and covariance modules on the data as in the simple GP regression setting, we first pass the input data `x` through the neural network feature extractor. Then, to ensure that the output features of the neural network remain in the grid bounds expected by SKI, we scale the resulting features to lie between -1 and 1, matching the scaling applied in the code below. Only after this processing do we call the mean and covariance module of the Gaussian process. This example also demonstrates the flexibility of defining GP models that allow for learned transformations of the data (in this case, via a neural network) before calling the mean and covariance function. Because the neural network in this case maps to two final output features, we will have no problem using SKI.
###Code
class GPRegressionModel(gpytorch.models.ExactGP):
def __init__(self, train_x, train_y, likelihood):
super(GPRegressionModel, self).__init__(train_x, train_y, likelihood)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.GridInterpolationKernel(
gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel(ard_num_dims=2)),
num_dims=2, grid_size=100
)
self.feature_extractor = feature_extractor
def forward(self, x):
# We're first putting our data through a deep net (feature extractor)
# We're also scaling the features so that they're nice values
projected_x = self.feature_extractor(x)
projected_x = projected_x - projected_x.min(0)[0]
projected_x = 2 * (projected_x / projected_x.max(0)[0]) - 1
mean_x = self.mean_module(projected_x)
covar_x = self.covar_module(projected_x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
likelihood = gpytorch.likelihoods.GaussianLikelihood().cuda()
model = GPRegressionModel(train_x, train_y, likelihood).cuda()
###Output
_____no_output_____
###Markdown
Training the modelThe cell below trains the DKL model above, learning both the hyperparameters of the Gaussian process **and** the parameters of the neural network in an end-to-end fashion using Type-II MLE. We run 40 iterations of training using the `SGD` optimizer built into PyTorch (with weight decay on the feature extractor parameters). With a decent GPU, this should only take a few seconds.
###Code
# Find optimal model hyperparameters
model.train()
likelihood.train()
# Use the SGD optimizer
optimizer = torch.optim.SGD([
{'params': model.feature_extractor.parameters(), 'weight_decay': 1e-3},
{'params': model.covar_module.parameters()},
{'params': model.mean_module.parameters()},
{'params': model.likelihood.parameters()},
], lr=0.1)
# "Loss" for GPs - the marginal log likelihood
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
training_iterations = 40
def train():
for i in range(training_iterations):
# Zero backprop gradients
optimizer.zero_grad()
# Get output from model
output = model(train_x)
# Calc loss and backprop derivatives
loss = -mll(output, train_y)
loss.backward()
print('Iter %d/%d - Loss: %.3f' % (i + 1, training_iterations, loss.item()))
optimizer.step()
# See dkl_mnist.ipynb for explanation of this flag
with gpytorch.settings.use_toeplitz(True):
%time train()
###Output
Iter 1/40 - Loss: 0.942
Iter 2/40 - Loss: 0.925
Iter 3/40 - Loss: 0.905
Iter 4/40 - Loss: 0.878
Iter 5/40 - Loss: 0.856
Iter 6/40 - Loss: 0.831
Iter 7/40 - Loss: 0.806
Iter 8/40 - Loss: 0.784
Iter 9/40 - Loss: 0.762
Iter 10/40 - Loss: 0.738
Iter 11/40 - Loss: 0.717
Iter 12/40 - Loss: 0.694
Iter 13/40 - Loss: 0.672
Iter 14/40 - Loss: 0.649
Iter 15/40 - Loss: 0.626
Iter 16/40 - Loss: 0.603
Iter 17/40 - Loss: 0.580
Iter 18/40 - Loss: 0.557
Iter 19/40 - Loss: 0.536
Iter 20/40 - Loss: 0.514
Iter 21/40 - Loss: 0.490
Iter 22/40 - Loss: 0.468
Iter 23/40 - Loss: 0.448
Iter 24/40 - Loss: 0.425
Iter 25/40 - Loss: 0.400
Iter 26/40 - Loss: 0.379
Iter 27/40 - Loss: 0.358
Iter 28/40 - Loss: 0.338
Iter 29/40 - Loss: 0.317
Iter 30/40 - Loss: 0.292
Iter 31/40 - Loss: 0.275
Iter 32/40 - Loss: 0.256
Iter 33/40 - Loss: 0.239
Iter 34/40 - Loss: 0.237
Iter 35/40 - Loss: 0.215
Iter 36/40 - Loss: 0.201
Iter 37/40 - Loss: 0.181
Iter 38/40 - Loss: 0.156
Iter 39/40 - Loss: 0.128
Iter 40/40 - Loss: 0.111
CPU times: user 15.2 s, sys: 4.52 s, total: 19.7 s
Wall time: 19.7 s
###Markdown
Making PredictionsThe next cell gets the predictive covariance for the test set (and also technically gets the predictive mean, stored in `preds.mean`) using the standard SKI testing code, with no acceleration or precomputation.
###Code
model.eval()
likelihood.eval()
with torch.no_grad(), gpytorch.settings.use_toeplitz(False), gpytorch.settings.fast_pred_var():
preds = model(test_x)
print('Test MAE: {}'.format(torch.mean(torch.abs(preds.mean - test_y))))
###Output
Test MAE: 0.11873025447130203
|
python/pandas/dataframe.ipynb
|
###Markdown
How to create a Pandas Dataframe in PythonIn Pandas, `DataFrame` is the primary data structure for holding **tabular data**. You can create it using the `DataFrame` constructor `pandas.DataFrame()` or by importing data directly from various data sources. Simply put: a `Series` is like a **column**, and a `DataFrame` is the **whole table**. Tabular datasets which are located in large external databases or are present in files of different formats such as .csv files or excel files can be read into Python using the pandas library in the form of a `DataFrame`. Syntax`pandas.DataFrame(data=None, index=None, columns=None, dtype=None, copy=False)` PurposeTo create a two-dimensional, spreadsheet-like data structure for storing data in a tabular format Parameters* `data` - dictionary or list (default: None). It will be used to populate the rows and columns of the DataFrame* `index` - index or Array (default: None). It is used to specify the feature of the dataset whose values will be used to mark and identify each row of the dataset. Although its default value is ‘None’, if the index is not specified, integer values ranging from 0 to one less than the total number of rows present in the dataset will be used as index* `columns` - index or Array (default: None). It is used to specify the column or the feature names of the dataset. Although its default value is ‘None’, if the columns parameter is not specified then integer values ranging from zero to one less than the total number of features present in the dataset will be used as column names* `dtype` - dtype (default: None). It is used to force the DataFrame to be created with only those values or to convert the values to the specified dtype. If this parameter is not specified then the DataFrame will infer the data types of each feature on the basis of the values present in them* `copy` - Boolean (default: False). It is used to copy data from the inputs ReturnsA two-dimensional data structure containing data in a tabular format i.e., rows and columns. Creating a basic single column Pandas DataFrameA basic `DataFrame` can be made by using a `list`:
###Code
# Create a single column dataframe
import pandas as pd
data_list = ['India', 'China', 'United States', 'Pakistan', 'Indonesia']
df = pd.DataFrame(data_list)
print('type:', type(df), '\n-------------------------------------------')
print(df)
# That creates a default column name (0) and index names (0,1,2,3..).
###Output
type: <class 'pandas.core.frame.DataFrame'>
-------------------------------------------
0
0 India
1 China
2 United States
3 Pakistan
4 Indonesia
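###Markdown
The constructor parameters described above (`columns`, `index`, `dtype`) can also be combined in a single call. The next cell is a small illustrative sketch (not from the original text) that creates a one-column `DataFrame` with explicit column names, row labels and a forced dtype.
###Code
# Sketch: combining the data, columns, index and dtype parameters in one call
populations = [[1393409038], [1444216107], [332129157]]
df_typed = pd.DataFrame(data=populations,
                        columns=['Population'],
                        index=['India', 'China', 'United States'],
                        dtype='int64')
print(df_typed)
print(df_typed.dtypes)
###Output
_____no_output_____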
###Markdown
Making a DataFrame from a dictionary of listsA `pandas.DataFrame` can be created using a `dictionary` in which the keys are column names and an array or list of feature values is passed as the value for each key. This `dictionary` is then passed as a value to the data parameter of the DataFrame constructor:
###Code
# Create a dictionary where the keys are the feature names
# And the values are a list of the feature values
data_dict = {'Country': ['India', 'China', 'United States', 'Pakistan', 'Indonesia'],
'Population': [1393409038, 1444216107, 332129157, 225199937, 276361783],
'Currency': ['Indian Rupee', 'Renminbi', 'US Dollar', 'Pakistani Rupee', 'Indonesian Rupiah']}
df = pd.DataFrame(data=data_dict)
print(df)
###Output
Country Population Currency
0 India 1393409038 Indian Rupee
1 China 1444216107 Renminbi
2 United States 332129157 US Dollar
3 Pakistan 225199937 Pakistani Rupee
4 Indonesia 276361783 Indonesian Rupiah
###Markdown
Making a DataFrame from a list of listsA list of lists means a `list` in which each element itself is a `list`. Each element in such a list forms a row of the `DataFrame`. Therefore, the number of rows of the Pandas DataFrame is equal to the number of elements of the outer list:
###Code
# Create a list of lists where
# Each inner list is a row of the DataFrame
data_list = [['India', 1393409038, 'Indian Rupee'],
['China', 1444216107, 'Renminbi'],
['United States', 332129157, 'US Dollar'],
['Pakistan', 225199937, 'Pakistani Rupee'],
['Indonesia', 276361783, 'Indonesian Rupiah']]
df = pd.DataFrame(data=data_list,
columns=['Country', 'Population', 'Currency'])
print(df)
###Output
Country Population Currency
0 India 1393409038 Indian Rupee
1 China 1444216107 Renminbi
2 United States 332129157 US Dollar
3 Pakistan 225199937 Pakistani Rupee
4 Indonesia 276361783 Indonesian Rupiah
###Markdown
The elements of the inner lists, that is, the lists within data_list are the values of the different features across each row. Also, see that the column names have been passed as a list to the columns parameter. Making a DataFrame from a list of dictionariesA list of dictionaries means a `list` in which each element is a `dictionary`. In the dictionary, the keys are the **column names** and the values are the corresponding **column values**:
###Code
# Create a list of dictionaries where the keys are the column names
# And the values are a particular feature value.
list_of_dicts = [{'Country': 'India', 'Population': 139409038, 'Currency': 'Indian Rupee'},
{'Country': 'China', 'Population': 1444216107, 'Currency': 'Renminbi'},
{'Country': 'United States', 'Population': 332129157, 'Currency': 'US Dollar'},
{'Country': 'Pakistan', 'Population': 225199937, 'Currency': 'Pakistani Rupee'},
{'Country': 'Indonesia', 'Population': 276361763, 'Currency': 'Indonesian Rupiah'}, ]
df = pd.DataFrame(list_of_dicts)
print(df)
###Output
Country Population Currency
0 India 139409038 Indian Rupee
1 China 1444216107 Renminbi
2 United States 332129157 US Dollar
3 Pakistan 225199937 Pakistani Rupee
4 Indonesia 276361763 Indonesian Rupiah
###Markdown
Making a DataFrame from a Numpy arrayA multi-dimensional `numpy` array can also be used for creating a `DataFrame`. It looks similar to the list of lists where there is an outer array and the inner arrays form the rows of the `DataFrame`:
###Code
import numpy as np
data_nparray = np.array([['India', 1393409038, 'Indian Rupee'],
['China', 1444216107, 'Renminbi'],
['United States', 332129157, 'US Dollar'],
['Pakistan', 225199937, 'Pakistani Rupee'],
['Indonesia', 276361783, 'Indonesian Rupiah']])
df = pd.DataFrame(data=data_nparray)
print(df)
###Output
0 1 2
0 India 1393409038 Indian Rupee
1 China 1444216107 Renminbi
2 United States 332129157 US Dollar
3 Pakistan 225199937 Pakistani Rupee
4 Indonesia 276361783 Indonesian Rupiah
###Markdown
For column names, you need to pass a list of column names to the columns parameter:
###Code
data_nparray = np.array([['India', 1393409038, 'Indian Rupee'],
['China', 1444216107, 'Renminbi'],
['United States', 332129157, 'US Dollar'],
['Pakistan', 225199937, 'Pakistani Rupee'],
['Indonesia', 276361783, 'Indonesian Rupiah']])
df = pd.DataFrame(data=data_nparray,
columns=['Country', 'Population', 'Currency'])
print(df)
###Output
Country Population Currency
0 India 1393409038 Indian Rupee
1 China 1444216107 Renminbi
2 United States 332129157 US Dollar
3 Pakistan 225199937 Pakistani Rupee
4 Indonesia 276361783 Indonesian Rupiah
###Markdown
Alternatively, you can also make a `dictionary` of numpy arrays where the keys would be the column names and the corresponding values to each key would be the inner arrays which are the feature values:
###Code
data_array = np.array(
[['India', 'China', 'United States', 'Pakistan', 'Indonesia'],
[1393409038, 1444216107, 332129157, 225199937, 276361783],
['Indian Rupee', 'Renminbi', 'US Dollar', 'Pakistani Rupee', 'Indonesian Rupiah']])
# Create a dictionary where the keys are the column names
# And each element of data_array is the feature value
dict_array = {
'Country': data_array[0],
'Population': data_array[1],
'Currency': data_array[2]
}
df = pd.DataFrame(dict_array)
print(df)
###Output
Country Population Currency
0 India 1393409038 Indian Rupee
1 China 1444216107 Renminbi
2 United States 332129157 US Dollar
3 Pakistan 225199937 Pakistani Rupee
4 Indonesia 276361783 Indonesian Rupiah
###Markdown
Making a DataFrame using the zip functionThe `zip` function can be used to combine multiple objects into a single object which can then be passed into the `pandas.DataFrame` function for making the `DataFrame`:
###Code
# Create the countries list (1st object)
countries = ['India', 'China', 'United States', 'Pakistan', 'Indonesia']
# Create the population list (2nd object)
population = [1393409038, 1444216107, 332129157, 225199937, 276361783]
# Create the currency list (3rd object)
currency = ['Indian Rupee', 'Renminbi', 'US Dollar',
'Pakistani Rupee', 'Indonesian Rupiah']
# Zip the three objects
data_zipped = zip(countries, population, currency)
data_zipped_list = list(zip(countries, population, currency))
print('type:', type(data_zipped))
print(data_zipped_list, '\n------------------------------------------------------------------------')
# Pass the zipped object as the data parameter and mention the column names explicitly
df = pd.DataFrame(data_zipped,
columns=['Country', 'Population', 'Currency'])
print(df)
###Output
type: <class 'zip'>
[('India', 1393409038, 'Indian Rupee'), ('China', 1444216107, 'Renminbi'), ('United States', 332129157, 'US Dollar'), ('Pakistan', 225199937, 'Pakistani Rupee'), ('Indonesia', 276361783, 'Indonesian Rupiah')]
------------------------------------------------------------------------
Country Population Currency
0 India 1393409038 Indian Rupee
1 China 1444216107 Renminbi
2 United States 332129157 US Dollar
3 Pakistan 225199937 Pakistani Rupee
4 Indonesia 276361783 Indonesian Rupiah
###Markdown
Making Indexed Pandas DataFramesPandas DataFrames having a pre-defined index can also be made by passing a list of indices to the index parameter:
###Code
# Create the DataFrame
data_dict = {'Country': ['India', 'China', 'United States', 'Pakistan', 'Indonesia'],
'Population': [1393409038, 1444216107, 332129157, 225199937, 276361783],
'Currency': ['Indian Rupee', 'Renminbi', 'US Dollar', 'Pakistani Rupee', 'Indonesian Rupiah']}
# Make the list of indices
indices = ['Ind', 'Chi', 'US', 'Pak', 'Indo']
# Pass the indices to the index parameter
df = pd.DataFrame(data=data_dict, index=indices)
print(df)
###Output
Country Population Currency
Ind India 1393409038 Indian Rupee
Chi China 1444216107 Renminbi
US United States 332129157 US Dollar
Pak Pakistan 225199937 Pakistani Rupee
Indo Indonesia 276361783 Indonesian Rupiah
###Markdown
Making a new DataFrame from existing DataFrames pandas.concatYou can also make new DataFrames from existing DataFrames using the `pandas.concat` function. The DataFrames can be joined or concatenated both vertically or horizontally as required. Joining two DataFrames horizontallyYou can join two DataFrames horizontally by setting the value of the axis parameter to `1`:
###Code
# -- Joining Horizontally
# Create 1st DataFrame
countries = ['India', 'China', 'United States', 'Pakistan', 'Indonesia']
df1 = pd.DataFrame(countries, columns=['Country'])
print(df1, '\n----------------')
# Create 2nd DataFrame
df2_data = {
'Population': [1393409038, 1444216107, 332129157, 225199937, 276361783],
'Currency': ['Indian Rupee', 'Renminbi', 'US Dollar', 'Pakistani Rupee', 'Indonesian Rupiah']
}
df2 = pd.DataFrame(df2_data)
print(df2, '\n----------------')
# Join the two DataFrames horizontally by setting the axis value equal to 1
# Axis: { y: 0, x: 1, z: 2}
df_joined = pd.concat([df1, df2], axis=1)
print(df_joined)
###Output
Country
0 India
1 China
2 United States
3 Pakistan
4 Indonesia
----------------
Population Currency
0 1393409038 Indian Rupee
1 1444216107 Renminbi
2 332129157 US Dollar
3 225199937 Pakistani Rupee
4 276361783 Indonesian Rupiah
----------------
Country Population Currency
0 India 1393409038 Indian Rupee
1 China 1444216107 Renminbi
2 United States 332129157 US Dollar
3 Pakistan 225199937 Pakistani Rupee
4 Indonesia 276361783 Indonesian Rupiah
###Markdown
Joining two DataFrames verticallyYou can also join two DataFrames vertically if they have the same column names by setting the value of the axis parameter to `0`:
###Code
# -- Joining Vertically
# Create the 1st DataFrame
df_top_data = {
'Country': ['India', 'China', 'United States'],
'Population': [1393409038, 1444216107, 332129157],
'Currency': ['Indian Rupee', 'Renminbi', 'US Dollar']
}
df_top = pd.DataFrame(df_top_data)
print(df_top, '\n----------------')
# Create the 2nd DataFrame
df_bottom_data = {
'Country': ['Pakistan', 'Indonesia'],
'Population': [225199937, 276361783],
'Currency': ['Pakistani Rupee', 'Indonesian Rupiah']
}
df_bottom = pd.DataFrame(df_bottom_data)
print(df_bottom, '\n----------------')
# Join the two DataFrames vertically by setting the axis value equal to 0
df_joined = pd.concat([df_top, df_bottom], axis=0)
print(df_joined)
###Output
Country Population Currency
0 India 1393409038 Indian Rupee
1 China 1444216107 Renminbi
2 United States 332129157 US Dollar
----------------
Country Population Currency
0 Pakistan 225199937 Pakistani Rupee
1 Indonesia 276361783 Indonesian Rupiah
----------------
Country Population Currency
0 India 1393409038 Indian Rupee
1 China 1444216107 Renminbi
2 United States 332129157 US Dollar
0 Pakistan 225199937 Pakistani Rupee
1 Indonesia 276361783 Indonesian Rupiah
###Markdown
Making pandas DataFrames from text filesThe `pandas.read_csv` function is one of the most popular functions used to read external text files.Even though the name of the function says ‘csv’, it can read other types of text files, which are often exported from different databases and can therefore come in different formats (.csv, .txt, etc.) or encodings (utf-8, ascii, etc.). The `pandas.read_csv` function [offers a number of parameters](https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html) which can be configured for reading and parsing the files as required. Now, you will see how to load a dataset using the `read_csv` function:
###Code
# Enter the path where the file is located
df = pd.read_csv('listings.csv')
print(df)
###Output
id name host_id \
0 2818 Quiet Garden View Room & Super Fast WiFi 3159
1 20168 Studio with private bathroom in the centre 1 59484
2 27886 Romantic, stylish B&B houseboat in canal district 97647
3 28871 Comfortable double room 124245
4 29051 Comfortable single room 124245
... ... ... ...
5551 53661558 apartment center Amsterdam 434282364
5552 53668495 Full flat with garden for families near Wester... 11768620
5553 53669920 Ground Floor of a 200 Year Old Canal House 434634113
5554 53670575 Bright and cozy apartment in Amsterdam 172920739
5555 53671581 Large home close to center transport shops 267671916
host_name neighbourhood_group \
0 Daniel NaN
1 Alexander NaN
2 Flip NaN
3 Edwin NaN
4 Edwin NaN
... ... ...
5551 Leonardo Romero NaN
5552 Helena NaN
5553 Kyah NaN
5554 Kevin NaN
5555 Fauna NaN
neighbourhood latitude longitude \
0 Oostelijk Havengebied - Indische Buurt 52.364350 4.943580
1 Centrum-Oost 52.364070 4.893930
2 Centrum-West 52.387610 4.891880
3 Centrum-West 52.367750 4.890920
4 Centrum-Oost 52.365840 4.891110
... ... ... ...
5551 De Pijp - Rivierenbuurt 52.342191 4.906892
5552 Bos en Lommer 52.383912 4.856359
5553 Centrum-West 52.375168 4.881894
5554 Bos en Lommer 52.378442 4.854791
5555 Bijlmer-Centrum 52.322727 4.936015
room_type price minimum_nights number_of_reviews last_review \
0 Private room 60 3 285 2021-11-21
1 Private room 106 1 339 2020-03-27
2 Private room 135 2 226 2021-10-20
3 Private room 75 2 370 2021-11-25
4 Private room 55 2 520 2021-11-26
... ... ... ... ... ...
5551 Entire home/apt 120 20 0 NaN
5552 Entire home/apt 140 5 0 NaN
5553 Private room 200 2 0 NaN
5554 Entire home/apt 116 4 0 NaN
5555 Entire home/apt 800 1 0 NaN
reviews_per_month calculated_host_listings_count availability_365 \
0 2.83 1 70
1 3.53 2 0
2 2.11 1 8
3 4.63 2 238
4 5.57 2 267
... ... ... ...
5551 NaN 1 365
5552 NaN 1 13
5553 NaN 1 363
5554 NaN 1 349
5555 NaN 1 365
number_of_reviews_ltm license
0 7 0363 5F3A 5684 6750 D14D
1 0 0363 CBB3 2C10 0C2A 1E29
2 7 0363 974D 4986 7411 88D8
3 34 0363 607B EA74 0BD8 2F6F
4 41 0363 607B EA74 0BD8 2F6F
... ... ...
5551 0 Exempt
5552 0 0363 C724 A106 AE99 0B0B
5553 0 0363 8553 5196 A6A0 2A7F
5554 0 0363 4E2B 7976 261A FF95
5555 0 NaN
[5556 rows x 18 columns]
###Markdown
'sep' parameterThe default character by which pandas separates the values of different columns in a row is a comma `,`. You can set a different separator by changing the `sep` parameter:
###Code
# Define the sep parameter
df = pd.read_csv('listings.csv', sep=',')
print(df)
###Output
id name host_id \
0 2818 Quiet Garden View Room & Super Fast WiFi 3159
1 20168 Studio with private bathroom in the centre 1 59484
2 27886 Romantic, stylish B&B houseboat in canal district 97647
3 28871 Comfortable double room 124245
4 29051 Comfortable single room 124245
... ... ... ...
5551 53661558 apartment center Amsterdam 434282364
5552 53668495 Full flat with garden for families near Wester... 11768620
5553 53669920 Ground Floor of a 200 Year Old Canal House 434634113
5554 53670575 Bright and cozy apartment in Amsterdam 172920739
5555 53671581 Large home close to center transport shops 267671916
host_name neighbourhood_group \
0 Daniel NaN
1 Alexander NaN
2 Flip NaN
3 Edwin NaN
4 Edwin NaN
... ... ...
5551 Leonardo Romero NaN
5552 Helena NaN
5553 Kyah NaN
5554 Kevin NaN
5555 Fauna NaN
neighbourhood latitude longitude \
0 Oostelijk Havengebied - Indische Buurt 52.364350 4.943580
1 Centrum-Oost 52.364070 4.893930
2 Centrum-West 52.387610 4.891880
3 Centrum-West 52.367750 4.890920
4 Centrum-Oost 52.365840 4.891110
... ... ... ...
5551 De Pijp - Rivierenbuurt 52.342191 4.906892
5552 Bos en Lommer 52.383912 4.856359
5553 Centrum-West 52.375168 4.881894
5554 Bos en Lommer 52.378442 4.854791
5555 Bijlmer-Centrum 52.322727 4.936015
room_type price minimum_nights number_of_reviews last_review \
0 Private room 60 3 285 2021-11-21
1 Private room 106 1 339 2020-03-27
2 Private room 135 2 226 2021-10-20
3 Private room 75 2 370 2021-11-25
4 Private room 55 2 520 2021-11-26
... ... ... ... ... ...
5551 Entire home/apt 120 20 0 NaN
5552 Entire home/apt 140 5 0 NaN
5553 Private room 200 2 0 NaN
5554 Entire home/apt 116 4 0 NaN
5555 Entire home/apt 800 1 0 NaN
reviews_per_month calculated_host_listings_count availability_365 \
0 2.83 1 70
1 3.53 2 0
2 2.11 1 8
3 4.63 2 238
4 5.57 2 267
... ... ... ...
5551 NaN 1 365
5552 NaN 1 13
5553 NaN 1 363
5554 NaN 1 349
5555 NaN 1 365
number_of_reviews_ltm license
0 7 0363 5F3A 5684 6750 D14D
1 0 0363 CBB3 2C10 0C2A 1E29
2 7 0363 974D 4986 7411 88D8
3 34 0363 607B EA74 0BD8 2F6F
4 41 0363 607B EA74 0BD8 2F6F
... ... ...
5551 0 Exempt
5552 0 0363 C724 A106 AE99 0B0B
5553 0 0363 8553 5196 A6A0 2A7F
5554 0 0363 4E2B 7976 261A FF95
5555 0 NaN
[5556 rows x 18 columns]
###Markdown
Practical Tips* The `chunksize` parameter of the `read_csv` function is very useful for datasets that are too big to fit into memory. By defining `chunksize`, pandas will load only `chunksize` rows at a time and process them before loading the next chunk (a short chunked-reading sketch appears after the next cell)* Try to manually check the dataset before loading it into Python. This can give you an idea of the delimiter (`sep`) used, or whether there are any rows at the beginning or the end of the dataset which should be ignored while loading it* If you wish to pass a list of column names different from the ones present in the dataset, you can do so using the `names` parameter. This will push the original header row into the data, but that row can be skipped by setting `skiprows=1` Importing and Viewing DataAs we know, `pandas` allows us to convert data from different formats, such as a CSV file, into a `DataFrame` object (the primary `pandas` data structure). We can then import a CSV file as a `DataFrame` using the `pd.read_csv()` function, which takes in the path of the file you want to import. To view the `DataFrame` in a Jupyter notebook, we simply type the name of the variable.**NOTE:** `display` is a function in the `IPython.display` module that runs the appropriate dunder (magic) method to get the appropriate data to ... display
###Code
# Import pandas package
import pandas as pd
# Read the airbnb NYC `listings.csv` file
df = pd.read_csv("listings.csv")
print('type:', type(df), '\n--------')
# Display the pandas DataFrame
display(df)
###Output
type: <class 'pandas.core.frame.DataFrame'>
--------
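###Markdown
Following up on the `chunksize` tip from the practical tips above: very large files can be read in pieces instead of all at once. The cell below is a short sketch (assuming the same `listings.csv` file) that reads the file 1000 rows at a time and simply counts the rows per chunk; the full `df` loaded above is still what the rest of the notebook uses.
###Code
# Sketch: read listings.csv in chunks of 1000 rows at a time
row_counts = []
for chunk in pd.read_csv('listings.csv', chunksize=1000):
    row_counts.append(len(chunk))
print('Rows per chunk:', row_counts)
print('Total rows:', sum(row_counts))
###Output
_____no_output_____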
###Markdown
Since there are so many rows in the `DataFrame`, we see that most of the data is truncated. We can view just the first or last few entries in the `DataFrame` using the `.head()` and `.tail()` methods.
###Code
# The first few entries
print('The first few entries:\n', df.head(), '\n===================================================================================')
# The last few entries
print('The last few entries:\n', df.tail())
###Output
The first few entries:
id name host_id \
0 2818 Quiet Garden View Room & Super Fast WiFi 3159
1 20168 Studio with private bathroom in the centre 1 59484
2 27886 Romantic, stylish B&B houseboat in canal district 97647
3 28871 Comfortable double room 124245
4 29051 Comfortable single room 124245
host_name neighbourhood_group neighbourhood \
0 Daniel NaN Oostelijk Havengebied - Indische Buurt
1 Alexander NaN Centrum-Oost
2 Flip NaN Centrum-West
3 Edwin NaN Centrum-West
4 Edwin NaN Centrum-Oost
latitude longitude room_type price minimum_nights \
0 52.36435 4.94358 Private room 60 3
1 52.36407 4.89393 Private room 106 1
2 52.38761 4.89188 Private room 135 2
3 52.36775 4.89092 Private room 75 2
4 52.36584 4.89111 Private room 55 2
number_of_reviews last_review reviews_per_month \
0 285 2021-11-21 2.83
1 339 2020-03-27 3.53
2 226 2021-10-20 2.11
3 370 2021-11-25 4.63
4 520 2021-11-26 5.57
calculated_host_listings_count availability_365 number_of_reviews_ltm \
0 1 70 7
1 2 0 0
2 1 8 7
3 2 238 34
4 2 267 41
license
0 0363 5F3A 5684 6750 D14D
1 0363 CBB3 2C10 0C2A 1E29
2 0363 974D 4986 7411 88D8
3 0363 607B EA74 0BD8 2F6F
4 0363 607B EA74 0BD8 2F6F
===================================================================================
The last few entries:
id name host_id \
5551 53661558 apartment center Amsterdam 434282364
5552 53668495 Full flat with garden for families near Wester... 11768620
5553 53669920 Ground Floor of a 200 Year Old Canal House 434634113
5554 53670575 Bright and cozy apartment in Amsterdam 172920739
5555 53671581 Large home close to center transport shops 267671916
host_name neighbourhood_group neighbourhood \
5551 Leonardo Romero NaN De Pijp - Rivierenbuurt
5552 Helena NaN Bos en Lommer
5553 Kyah NaN Centrum-West
5554 Kevin NaN Bos en Lommer
5555 Fauna NaN Bijlmer-Centrum
latitude longitude room_type price minimum_nights \
5551 52.342191 4.906892 Entire home/apt 120 20
5552 52.383912 4.856359 Entire home/apt 140 5
5553 52.375168 4.881894 Private room 200 2
5554 52.378442 4.854791 Entire home/apt 116 4
5555 52.322727 4.936015 Entire home/apt 800 1
number_of_reviews last_review reviews_per_month \
5551 0 NaN NaN
5552 0 NaN NaN
5553 0 NaN NaN
5554 0 NaN NaN
5555 0 NaN NaN
calculated_host_listings_count availability_365 number_of_reviews_ltm \
5551 1 365 0
5552 1 13 0
5553 1 363 0
5554 1 349 0
5555 1 365 0
license
5551 Exempt
5552 0363 C724 A106 AE99 0B0B
5553 0363 8553 5196 A6A0 2A7F
5554 0363 4E2B 7976 261A FF95
5555 NaN
###Markdown
Selecting ColumnsTypically, we will only want a subset of the available columns in our `DataFrame`. We can select a single column using single brackets `[...]` and the name of the column:
###Code
# Results for a single column
print('Single column `name`:')
display(df['name'])
###Output
Single column `name`:
###Markdown
The result is a `Series` object with its own set of attributes and methods. These objects are like arrays and are the building blocks of `DataFrames`**NOTE:** Each `DataFrame` is made up of a set of `Series`.To select multiple columns at once, we use double brackets `[[...]]` and commas `,` between column names:
###Code
# Results for multiple columns
hosts = df[['host_id', 'host_name']]
hosts.head()
###Output
_____no_output_____
###Markdown
Pandas Data SelectionThere are 2 main options for selection and indexing activities in Pandas:* Selecting data by row numbers (`.iloc`)* Selecting data by label or by a conditional statement (`.loc`) Pandas iloc data selectionThe `iloc` indexer for a Pandas `DataFrame` is used for **integer-location based indexing / selection** by position.The `iloc` indexer syntax is `data.iloc[<row selection>, <column selection>]`. `iloc` in pandas is used to select rows and columns **by number**, in the order that they appear in the DataFrame. You can imagine that each row has a row number from `0` to `data.shape[0] - 1`, and `iloc[]` allows selections based on these numbers. The same applies for columns (ranging from `0` to `data.shape[1] - 1`). There are two “arguments” to `iloc` – a `row selector`, and a `column selector`. For example:
###Code
display(df)
# Single selections using iloc and DataFrame
# Returns a `Series` data type
# Rows:
print('1st row of DataFrame:\n', df.iloc[0], '\n-----------------------', sep='') # 1st row of DataFrame
print('2nd row of DataFrame:\n', df.iloc[1], '\n-----------------------', sep='') # 2nd row of DataFrame
print('last row of DataFrame\n', df.iloc[-1], '\n-----------------------', sep='') # last row of DataFrame
# Columns:
print('1st column of DataFrame:\n', df.iloc[:,0], '\n-----------------------', sep='') # 1st column of DataFrame
print('2nd column of DataFrame:\n', df.iloc[:,1], '\n-----------------------', sep='') # 2nd column of DataFrame
print('last column of DataFrame:\n', df.iloc[:,-1], sep='') # last column of DataFrame
###Output
_____no_output_____
###Markdown
Multiple columns and rows can be selected together using the `.iloc` indexer:
###Code
# Multiple row and column selections using iloc and DataFrame
print('1st five rows of DataFrame:\n', df.iloc[0:5], '\n-----------------------', sep='')
print('1st two columns of DataFrame with all rows:\n', df.iloc[:, 0:2], sep='')
###Output
1st five rows of DataFrame:
id name host_id \
0 2818 Quiet Garden View Room & Super Fast WiFi 3159
1 20168 Studio with private bathroom in the centre 1 59484
2 27886 Romantic, stylish B&B houseboat in canal district 97647
3 28871 Comfortable double room 124245
4 29051 Comfortable single room 124245
host_name neighbourhood_group neighbourhood \
0 Daniel NaN Oostelijk Havengebied - Indische Buurt
1 Alexander NaN Centrum-Oost
2 Flip NaN Centrum-West
3 Edwin NaN Centrum-West
4 Edwin NaN Centrum-Oost
latitude longitude room_type price minimum_nights \
0 52.36435 4.94358 Private room 60 3
1 52.36407 4.89393 Private room 106 1
2 52.38761 4.89188 Private room 135 2
3 52.36775 4.89092 Private room 75 2
4 52.36584 4.89111 Private room 55 2
number_of_reviews last_review reviews_per_month \
0 285 2021-11-21 2.83
1 339 2020-03-27 3.53
2 226 2021-10-20 2.11
3 370 2021-11-25 4.63
4 520 2021-11-26 5.57
calculated_host_listings_count availability_365 number_of_reviews_ltm \
0 1 70 7
1 2 0 0
2 1 8 7
3 2 238 34
4 2 267 41
license
0 0363 5F3A 5684 6750 D14D
1 0363 CBB3 2C10 0C2A 1E29
2 0363 974D 4986 7411 88D8
3 0363 607B EA74 0BD8 2F6F
4 0363 607B EA74 0BD8 2F6F
-----------------------
1st two columns of DataFrame with all rows:
id name
0 2818 Quiet Garden View Room & Super Fast WiFi
1 20168 Studio with private bathroom in the centre 1
2 27886 Romantic, stylish B&B houseboat in canal district
3 28871 Comfortable double room
4 29051 Comfortable single room
... ... ...
5551 53661558 apartment center Amsterdam
5552 53668495 Full flat with garden for families near Wester...
5553 53669920 Ground Floor of a 200 Year Old Canal House
5554 53670575 Bright and cozy apartment in Amsterdam
5555 53671581 Large home close to center transport shops
[5556 rows x 2 columns]
###Markdown
There are two gotchas to remember when using `iloc` in this manner:**NOTE:** `.iloc` returns a `pandas.Series` when 1 row is selected, and a `pandas.DataFrame` when multiple rows are selected, or if any column in full is selected. To counter this, pass a single-valued list if you require `DataFrame` output.* When using `.loc` or `.iloc`, you can control the output format by passing lists or single values to the selectors* When selecting multiple columns or multiple rows in this manner, remember that a slice selection such as `[1:5]` runs from the first number up to the second number minus one, i.e. `[1:5]` gives 1, 2, 3, 4, and `[x:y]` goes from `x` to `y-1`. Pandas loc data selectionThe Pandas `.loc` indexer can be used with `DataFrames` for 2 different use cases:* Selecting rows by label / index* Selecting rows with a boolean / conditional lookupThe `loc` indexer is used with the same syntax as `iloc`: `data.loc[<row selection>, <column selection>]`. Label-based / Index-based indexing using .locSelections using the `loc` method are based on the index of the DataFrame (if any). Where the index is set on a DataFrame, using `df.set_index()`, the `.loc` method directly selects rows based on their index values:DataFrame
###Code
df = pd.DataFrame({'month': [1, 4, 7, 10],
'year': [2012, 2014, 2013, 2014],
'sale': [55, 40, 84, 31]})
display(df)
print('------------')
# `inplace` (bool, default False)
# If True, modifies the DataFrame in place (do not create a new object)
df.set_index("month", inplace=True)
display(df)
###Output
_____no_output_____
###Markdown
`month` is now set as the index of the sample DataFrame. With the index set, we can directly select rows for different “month” values using `.loc[]` – either singly, or in multiples.`.loc` is used by `pandas` for label based lookups in dataframes:
###Code
print('type:', type(df.loc[1]))
display(df.loc[1])
print('\n-----------')
print('type:', type(df.loc[[1, 10]]))
display(df.loc[[1, 10]])
###Output
type: <class 'pandas.core.series.Series'>
###Markdown
**NOTE:** that the 1st example returns a `Series`, and the second returns a `DataFrame`. You can achieve a single-column DataFrame by passing a single-element list to the `.loc` operation. Select columns with `.loc` using the names of the columns:
###Code
display(df.loc[[1, 10], ["sale"]])
###Output
_____no_output_____
###Markdown
When using the `.loc` indexer, columns are referred to by names using lists of strings, or “`:`” slices.You can select ranges of index labels – the selection `data.loc['index_name_1':'index_name_2', 'column_name_1':'column_name_2']` will return all rows in the data frame between the index entries for “index_name_1” and “index_name_2” (both included):
###Code
# NOTE: this cell assumes the employees DataFrame (index 'a'..'g', columns
# 'Name', 'Age', 'City', 'Experience') that is constructed further below in this notebook
# Select rows with index values 'a' and 'e', with all columns between 'Age' and 'Experience'
display(df)
print('-------------\n')
display(df.loc[['a', 'e'], 'Age':'Experience'])
print('-------------\n')
# Select same rows, with just 'City' column
display(df.loc['a':'g', ['City']])
print('\n-------------')
# Change the index to be based on the 'Name' column
# set_index(inplace=True) returns None, so display the DataFrame afterwards
df.set_index('Name', inplace=True)
display(df)
# select the row with 'Name' = Veena
display(df.loc['Veena'])
###Output
_____no_output_____
###Markdown
Pandas Loc Boolean / Logical indexingConditional selection with boolean arrays using `data.loc[]` is the common method. With boolean indexing or logical selection, you pass an array or `Series` of `True` / `False` values to the `.loc` indexer to select the rows where your `Series` has `True` values.In most use cases, you will make selections based on the values of different columns in your data set.For example, the expression `df['Experience'] == 7` produces a Pandas `Series` with a `True`/`False` value for every row in the `df` DataFrame, with “`True`” values for the rows where the Experience is 7. These types of boolean arrays can be passed directly to the `.loc` indexer as so:
###Code
display(df)
print("df['Experience'] == 7:\n", df['Experience'] == 7,
'\n----------\ntype: ', type(df['Experience'] == 7),
sep='')
display(df.loc[df['Experience'] == 7])
###Output
_____no_output_____
###Markdown
Using a boolean `True`/`False` series to select rows in a pandas data frame – all rows with Experience of 7 are selected. As before, a second argument can be passed to `.loc` to select particular columns out of the DataFrame. Again, columns are referred to by name for the `loc` indexer and can be: * A single string* A list of columns* A slice “:” operation Multiple column selection example using .locSelecting multiple columns with `loc` can be achieved by passing column names to the second argument of `.loc[]`**NOTE:** that when selecting columns, if 1 column only is selected, the `.loc` operator returns a `Series`. For a single column DataFrame, use a one-element list to keep the `DataFrame` format, for example:
###Code
display(df)
display(df.loc[df['City'] == 'Delhi', 'Experience'])
print('----------\ntype:', type(df.loc[df['City'] == 'Delhi', 'Experience']))
display(df.loc[df['City'] == 'Delhi', ['Experience']])
print('----------\ntype:', type(df.loc[df['City'] == 'Delhi', ['Experience']]))
###Output
_____no_output_____
###Markdown
If selections of a single column are made as a `str`, a series is returned from `.loc`. Pass a `list` to get a `DataFrame` back.Make sure you understand the following additional examples of `.loc` selections for clarity:
###Code
import pandas as pd
display(df)
# Select rows where the City ends with 'hi', include all columns
# na=False - specifying `na` to be `False` instead of `NaN`.
display(df.loc[df['City'].str.endswith("hi", na=False)])
# Select rows with City equal to some values, all columns
display(df.loc[df['City'].isin(['Delhi', 'Mumbai', 'Colombo'])])
display(df.loc[df['City'].str.endswith("hi", na=False) & (df['Experience'] == 7)])
# Select rows with Experience strictly between 5 and 11, and just return 'Age' and 'City' columns
display(df.loc[(df['Experience'] > 5) & (df['Experience'] < 11), ['Age', 'City']] )
# A lambda function that yields True/False values can also be used.
# Select rows where the City name is exactly 5 characters long.
display(df.loc[df['City'].apply(lambda x: False if pd.isna(x) else len(x) == 5)])
###Output
_____no_output_____
###Markdown
Logical selections and boolean `Series` can also be passed to the generic `[]` indexer of a `pandas.DataFrame` and will give the same results: `data.loc[data[‘id’] == 9] == data[data[‘id’] == 9]`. Get unique values in columns of a Dataframe* `Series.unique()` - it returns a numpy array of the unique elements in the series object* `Series.nunique(self, dropna=True)` - it returns the count of unique elements in the series object* `DataFrame.nunique(self, axis=0, dropna=True)` - it returns the count of unique elements along a given axis**NOTE:*** If `axis = 0` (by default) - it returns a series object containing the count of unique elements in each **column*** If `axis = 1` - it returns a series object containing the count of unique elements in each **row**Now let’s use these functions to find unique element related information from a dataframe:
###Code
import numpy as np
import pandas as pd
# List of Tuples
empoyees = [
('jack', 34, 'Sydney', 5),
('Riti', 31, 'Delhi' , 7),
('Aadi', 16, np.NaN, 11),
('Mohit', 31,'Delhi' , 7),
('Veena', np.NaN, 'Delhi' , 4),
('Shaunak', 35, 'Mumbai', 5 ),
('Shaun', 35, 'Colombo', 11)
]
# Create a DataFrame object
df = pd.DataFrame(empoyees,
columns=['Name', 'Age', 'City', 'Experience'],
index=['a', 'b', 'c', 'd', 'e', 'f', 'g'])
display(df)
###Output
_____no_output_____
###Markdown
Find unique values in a single column. To fetch the unique values in column ‘Age’ of the above created dataframe, we will call `unique()` function on the column:
###Code
# Get a unique values in column 'Age' of the dataframe
# Returns `numpy.ndarray`
unique_values = df['Age'].unique()
print('type:', type(unique_values))
print(unique_values)
###Output
_____no_output_____
###Markdown
Count unique values in a single columnIf we are interested in the **count of unique elements** in a column, then we can use the `nunique()` function:
###Code
# Count unique values in column 'Age' of the dataframe
# Returns `int`
count_of_unique_values = df['Age'].nunique()
print('type:', type(count_of_unique_values))
print(count_of_unique_values)
###Output
_____no_output_____
###Markdown
Include NaN while counting the unique elements in a columnUsing `nunique()` with default arguments doesn’t include `NaN` while counting the unique elements, if we want to include `NaN` too then we need to pass the `dropna` argument:
###Code
# Count unique values in column 'Age' including NaN
count_of_unique_values = df['Age'].nunique(dropna=False)
print(count_of_unique_values)
###Output
_____no_output_____
###Markdown
Count unique values in each column of the dataframeIn `Dataframe.nunique()` default value of axis is `0` (it returns the count of unique elements in each column):
###Code
# Get a series object containing the count of unique elements
# in each column of dataframe
# Retunrs `pandas.Series`
count_of_unique_values = df.nunique()
print('type:', type(count_of_unique_values))
print(count_of_unique_values)
###Output
_____no_output_____
###Markdown
It didn’t include the `NaN` while counting because the default value of the `dropna` argument is `True`. To include the `NaN`, pass the `dropna` argument as `False`:
###Code
# Count unique elements in each column including NaN
count_of_unique_values = df.nunique(dropna=False)
print(count_of_unique_values)
###Output
_____no_output_____
###Markdown
Get unique values in multiple columnsTo get the unique values in multiple columns of a dataframe, we can merge the contents of those columns to create a single `Series` object and then call the `unique()` function on that series object:
###Code
# Get unique elements in multiple columns i.e. Name & Age
# Returns 'numpy.ndarray'
unique_values = pd.concat([df['Name'], df['Age']]).unique()
print('type:', type(unique_values))
print(unique_values)
###Output
_____no_output_____
###Markdown
The result is a new `DataFrame` object with the selected columns. It is useful to select the columns you are interested in analyzing before moving onto the analysis, especially if the data is wide with many unnecessary variables. Changing Column Formatting`pandas` does a relatively good job of understanding **what data types each column is meant to be stored as**. However, sometimes, we would like to change the default type. For example, dates are commonly seen in DataFrames. `pandas` has a useful built-in datetime data type to handle dates, which allows us to extract useful information like the year and month for a particular row.To check the data types of the columns we call the `.dtypes` attribute of the `DataFrame`. To convert a column to a datetime type, we use the `pd.to_datetime()` function (similar conversions exist for the other supported data types, e.g. `.astype(str)` to store a column as strings).Specifically, we want to convert the `last_review` column to a datetime column. So we select it as seen in the previous section and set it equal to the result of the operation. Datetime series have a `.dt` attribute with built-in attributes and functions. We select the `.year` attribute of the newly typed datetime column, `last_review`, to get the year of each row.
###Code
# Show the data types for each column `<column_name> ... <column_type>`
df.dtypes
# Change the type of a column to datetime
df['last_review'] = pd.to_datetime(df['last_review'])
df.dtypes
# extract the year from a datetime series
df['year'] = df['last_review'].dt.year
df['year'].head()
###Output
_____no_output_____
###Markdown
Series String FunctionsAnother useful data cleaning tool is removing leading and trailing whitespace `' '` from string data. This can be done using the `strip()` method.
###Code
# Strip leading and trailing spaces from a string series
df['name'] = df['name'].str.strip()
df['name'].head()
###Output
_____no_output_____
###Markdown
`pandas` series have built-in function for making all of the inputs lowercase using `.str.lower()`. This is particularly useful in cases when the data do not have standard capitalization practices which could lead to classifying the same entity as two separate entities. Here, we are lower-casing all listing names that can be found in the name column, and assigning the updated listing names to the newly created `name_lower` column:
###Code
# Lowercase all strings in a series
df['name_lower'] = df['name'].str.lower()
df['name_lower'].head()
###Output
_____no_output_____
###Markdown
Combining ColumnsIf we want to make calculations between columns, we can easily do this by applying the operation to each of the `Series`. Here, we are calculating the minimum revenue a listing generates, as the product of the minimum number of nights and the price per night:
###Code
# Calculate using 2 columns
df['min_revenue'] = df['minimum_nights'] * df['price']
df[['minimum_nights', 'price', 'min_revenue']].head()
###Output
_____no_output_____
###Markdown
Descriptive Analysis Summary StatisticsWe can compute some interesting statistics to answer some business questions. The first question we may have is what the average and median price is for the listings in our data. We use the built-in `.mean()` and `.median()` methods to compute these:**NOTE:** * The `.mean()` - (average) of a data set is found by adding all numbers in the data set and then dividing by the number of values in the set* The `.median()` - is the middle value when a data set is ordered from least to greatest
###Code
# Get the mean price
print(df['price'].mean(), '\n=================')
# Get the median price
df['price'].median()
###Output
_____no_output_____
###Markdown
We should rely more on `.median()` prices for our analysis to get a sense of typical listings. Grouped StatisticsWe can also conduct these calculations on groupings of data using the `.groupby()` method. This function is very similar to using pivot tables in Excel, as we select a subset of columns in our data and then conduct aggregate calculations on them.When you use `as_index = False`, you indicate to `groupby()` that you don't want to set the grouping column as the index. When both implementations yield the same results, use `as_index = False` because it will save you some typing and an unnecessary `pandas` operation.
###Code
# Get the mean grouped by type of room
print(df[['room_type', 'price']].groupby('room_type', as_index=False).mean(),
'\n==============================')
# Get the median grouped by type of room
print(df[['room_type', 'price']].groupby('room_type', as_index=False).median())
###Output
_____no_output_____
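###Markdown
`.groupby()` is not limited to a single statistic per group. As a small additional sketch (using the same listings `df` as the surrounding cells), `.agg()` computes several aggregates in one call.
###Code
# Sketch: mean, median and count of price per room type in one call
print(df[['room_type', 'price']].groupby('room_type').agg(['mean', 'median', 'count']))
###Output
_____no_output_____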
###Markdown
If we would like to group on additional variables, we can input a list rather than a string as the 1st argument of `.groupby()`:
###Code
# Get the median grouped by type of room and year
df[['room_type', 'year', 'price']].groupby(['room_type', 'year'], as_index=False).median()
###Output
_____no_output_____
###Markdown
Filtering DataOften, we are only interested in a subset of the rows in our dataset. For example, we may only be interested in listings under $1000 as they are more common and closer to the typical listing. We do this by passing a Boolean expression into single brackets as shown below:
###Code
# get all rows with price < 1000
df_under_1000 = df[df['price'] < 1000]
df_under_1000.head()
###Output
_____no_output_____
###Markdown
We can also pass in multiple filters by surrounding each expression in parentheses `()` and using either `&` (for `and` expressions) or `|` (for `or` expressions). **NOTE:** You will get an error if you do not surround the expressions with parentheses.
###Code
# Get all rows with price < 1000 and year equal to 2020
df_under_1000 = df[(df['price'] < 1000) & (df['year'] == 2020)]
df_under_1000.head()
###Output
_____no_output_____
###Markdown
Plotting`pandas` also has built-in plotting capabilities. For example, we can see the distribution of prices for each listing in our dataset using a histogram in one line of code. Note, we use the under $1000 DataFrame here as we cannot see the bars very clearly when including all prices:
###Code
# Distribution of prices under $1000
ax = df_under_1000['price'].plot.hist(bins=40)
###Output
_____no_output_____
###Markdown
Import / Export of Dataframes[Pandas: How to Read and Write Files](https://realpython.com/pandas-read-write-files/read-a-csv-file)* You can save your Pandas `DataFrame` as a "file_type" file with the `.to_{file_type}` pandas function* Once your data is saved in a "file_type" file, you’ll likely want to load and use it from time to time. You can do that with the Pandas `.read_{file_type}` function:

| Format Type | Data Description | Reader | Writer |
| --- | --- | --- | --- |
| text | CSV | read_csv | to_csv |
| text | Fixed-Width Text File | read_fwf | |
| text | JSON | read_json | to_json |
| text | HTML | read_html | to_html |
| text | LaTeX | | Styler.to_latex |
| text | XML | read_xml | to_xml |
| text | Local clipboard | read_clipboard | to_clipboard |
| binary | MS Excel | read_excel | to_excel |
| binary | OpenDocument | read_excel | |
| binary | HDF5 Format | read_hdf | to_hdf |
| binary | Feather Format | read_feather | to_feather |
| binary | Parquet Format | read_parquet | to_parquet |
| binary | ORC Format | read_orc | |
| binary | Stata | read_stata | to_stata |
| binary | SAS | read_sas | |
| binary | SPSS | read_spss | |
| binary | Python Pickle Format | read_pickle | to_pickle |
| SQL | SQL | read_sql | to_sql |
| SQL | Google BigQuery | read_gbq | to_gbq |

Write a CSV FileYou can save your Pandas `DataFrame` as a CSV file with `.to_csv()`:
###Code
import pandas as pd
data = {
'a': 1,
'b': 2,
'c': 3,
'd': 4,
'e': 5,
'f': 6
}
df = pd.DataFrame(data, index=range(len(data)))
display(df)
df.to_csv('test.csv', index = False)
###Output
_____no_output_____
###Markdown
You’ve created the file `test.csv` in your current working directory. By default, the first column would contain the row labels; in some cases you’ll find them irrelevant, and if you don’t want to keep them you can pass the argument `index=False` to `.to_csv()` (as done above). Read a CSV FileOnce your data is saved in a CSV file, you’ll likely want to load and use it from time to time. You can do that with the Pandas `.read_csv()` function:
###Code
# Pass the `index_col=0` argument to `.read_csv()` if the file's 1st column
# should be used as the index (not needed here, since the file was written with `index=False`)
df = pd.read_csv('test.csv')
display(df)
###Output
_____no_output_____
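###Markdown
The same pattern works for the other formats in the table above. As a small additional sketch, the cell below round-trips the same `DataFrame` through JSON using `.to_json()` and `pd.read_json()`.
###Code
# Sketch: write the DataFrame to a JSON file and read it back
df.to_json('test.json', orient='records')
df_from_json = pd.read_json('test.json', orient='records')
display(df_from_json)
###Output
_____no_output_____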
|
casa/L-band RFI frequency flagging.ipynb
|
###Markdown
Known RFI regions for MeerKAT L-band See the `katdal` notebook [Visualising MeerKAT data.ipynb](https://github.com/ska-sa/MeerKAT-Cookbook/blob/add_large_file_examples/katdal/Visualising%20MeerKAT%20data.ipynb) for details
###Code
msfile='1548939342.ms'
listobs(msfile)
plotms(vis=msfile,
xaxis='freq',
yaxis='amp',
averagedata=True,
avgtime='900',
field='PKS1934-63',
antenna='*&&&',
coloraxis='corr',
plotrange=[0.850, 1.712, 0, 0])
###Output
_____no_output_____
###Markdown
Bandpass edges and the Milky Way
###Code
flagdata(vis=msfile, mode='manual', spw='*:856MHZ~880MHZ', action='apply');
flagdata(vis=msfile, mode='manual', spw='*:1658MHz~1800MHZ', action='apply');
flagdata(vis=msfile, mode='manual', spw='*:1420.0MHz~1421.3MHZ', action='apply');
plotms(vis=msfile,
xaxis='freq',
yaxis='amp',
averagedata=True,
avgtime='900',
field='PKS1934-63',
antenna='*&&&',
coloraxis='corr',
plotrange=[0.850, 1.712, 0, 0])
###Output
_____no_output_____
###Markdown
GSM and Aviation
###Code
flagdata(vis=msfile, mode='manual', spw='*:900MHz~915MHZ', action='apply');
flagdata(vis=msfile, mode='manual', spw='*:925MHz~960MHZ', action='apply');
flagdata(vis=msfile, mode='manual', spw='*:1080MHz~1095MHZ', action='apply');
plotms(vis=msfile,
xaxis='freq',
yaxis='amp',
averagedata=True,
avgtime='900',
field='PKS1934-63',
antenna='*&&&',
coloraxis='corr',
plotrange=[0.850, 1.712, 0, 0])
###Output
_____no_output_____
###Markdown
GPS
###Code
flagdata(vis=msfile, mode='manual', spw='*:1565MHz~1585MHZ', action='apply');
flagdata(vis=msfile, mode='manual', spw='*:1217MHz~1237MHZ', action='apply');
flagdata(vis=msfile, mode='manual', spw='*:1375MHz~1387MHZ', action='apply');
flagdata(vis=msfile, mode='manual', spw='*:1166MHz~1186MHZ', action='apply');
plotms(vis=msfile,
xaxis='freq',
yaxis='amp',
averagedata=True,
avgtime='900',
field='PKS1934-63',
antenna='*&&&',
coloraxis='corr',
plotrange=[0.850, 1.712, 0, 0])
###Output
_____no_output_____
###Markdown
GLONASS
###Code
flagdata(vis=msfile, mode='manual', spw='*:1592MHz~1610MHZ', action='apply');
flagdata(vis=msfile, mode='manual', spw='*:1242MHz~1249MHZ', action='apply');
plotms(vis=msfile,
xaxis='freq',
yaxis='amp',
averagedata=True,
avgtime='900',
field='PKS1934-63',
antenna='*&&&',
coloraxis='corr',
plotrange=[0.850, 1.712, 0, 0])
###Output
_____no_output_____
###Markdown
Galileo
###Code
flagdata(vis=msfile, mode='manual', spw='*:1191MHz~1217MHZ', action='apply');
flagdata(vis=msfile, mode='manual', spw='*:1260MHz~1300MHZ', action='apply');
plotms(vis=msfile,
xaxis='freq',
yaxis='amp',
averagedata=True,
avgtime='900',
field='PKS1934-63',
antenna='*&&&',
coloraxis='corr',
plotrange=[0.850, 1.712, 0, 0])
###Output
_____no_output_____
###Markdown
Afristar
###Code
flagdata(vis=msfile, mode='manual', spw='*:1453MHz~1490MHZ', action='apply');
plotms(vis=msfile,
xaxis='freq',
yaxis='amp',
averagedata=True,
avgtime='900',
field='PKS1934-63',
antenna='*&&&',
coloraxis='corr',
plotrange=[0.850, 1.712, 0, 0])
###Output
_____no_output_____
###Markdown
IRIDIUM
###Code
flagdata(vis=msfile, mode='manual', spw='*:1616MHz~1626MHZ', action='apply');
plotms(vis=msfile,
xaxis='freq',
yaxis='amp',
averagedata=True,
avgtime='900',
field='PKS1934-63',
antenna='*&&&',
coloraxis='corr',
plotrange=[0.850, 1.712, 0, 0])
###Output
_____no_output_____
###Markdown
Inmarsat
###Code
flagdata(vis=msfile, mode='manual', spw='*:1526MHz~1554MHZ', action='apply');
plotms(vis=msfile,
xaxis='freq',
yaxis='amp',
averagedata=True,
avgtime='900',
field='PKS1934-63',
antenna='*&&&',
coloraxis='corr',
plotrange=[0.850, 1.712, 0, 0])
###Output
_____no_output_____
###Markdown
Alkantpan
###Code
flagdata(vis=msfile, mode='manual', spw='*:1600MHz', action='apply');
plotms(vis=msfile,
xaxis='freq',
yaxis='amp',
averagedata=True,
avgtime='900',
field='PKS1934-63',
antenna='*&&&',
coloraxis='corr',
plotrange=[0.850, 1.712, 0, 0])
###Output
_____no_output_____
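###Markdown
The cells above apply one `flagdata` call per known RFI band. A compact alternative is to drive the same call from a single dictionary of named frequency ranges, as sketched below; the ranges listed are only a subset copied from the cells above, and the `mode='summary'` call at the end reports the overall flagged fraction.
###Code
# Sketch: apply the per-band flags from one dictionary of known RFI ranges
rfi_ranges = {'GSM': ['*:900MHz~915MHz', '*:925MHz~960MHz'],
              'Aviation': ['*:1080MHz~1095MHz'],
              'GPS': ['*:1565MHz~1585MHz', '*:1217MHz~1237MHz']}
for label, spw_list in rfi_ranges.items():
    for spw in spw_list:
        print('Flagging {}: {}'.format(label, spw))
        flagdata(vis=msfile, mode='manual', spw=spw, action='apply')
# Optional: check how much of the data is now flagged
flag_stats = flagdata(vis=msfile, mode='summary')
print('Total flagged fraction: {:.1%}'.format(flag_stats['flagged'] / flag_stats['total']))
###Output
_____no_output_____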
|
08_Transfer_Learning_zh_CN.ipynb
|
###Markdown
TensorFlow Tutorial #08 Transfer Learning by [Magnus Erik Hvass Pedersen](http://www.hvass-labs.org/)/ [GitHub](https://github.com/Hvass-Labs/TensorFlow-Tutorials) / [Videos on YouTube](https://www.youtube.com/playlist?list=PL9Hr9sNUjfsmEu1ZniY0XpHSzl5uihcXZ) Chinese translation: [thrillerist](https://zhuanlan.zhihu.com/insight-pixel)/[Github](https://github.com/thrillerist/TensorFlow-Tutorials) Introduction In the previous Tutorial #07 we saw how to use the pre-trained Inception model for image classification. Unfortunately the Inception model seemed unable to classify images of people. The reason was the training set used for that model, which has some confusing class labels. The Inception model is actually quite capable of extracting useful information from images, so we can instead train it on another dataset. However, training such a model on a new dataset would take several weeks on a very powerful and expensive computer. Instead we can reuse the pre-trained Inception model and merely replace the final layer that does the classification. This method is called transfer learning. This tutorial builds on the previous one, so you should be familiar with the Inception model from Tutorial #07, as well as the earlier tutorials on how to build and train neural networks in TensorFlow. Part of the code for this tutorial is in the `inception.py` file. Flowchart The chart below shows how the data flows when using the Inception model for transfer learning. First we input and process an image with the Inception model. Just prior to the model's final classification layer, we save the so-called transfer-values to a cache file. The reason for using a cache file is that it takes a long time to process an image with the Inception model. My laptop with a Quad-Core 2 GHz CPU can process about 3 images per second with the Inception model. If each image has to be processed more than once, saving the transfer-values can save a lot of time. The transfer-values are also sometimes called bottleneck-values, but that term can be confusing, so it is not used here. When all the images in the new dataset have been processed with the Inception model and the resulting transfer-values saved to a cache file, we can use those transfer-values as the input to another neural network. We then train that second neural network to classify the new dataset, so the network learns how to classify images based on the transfer-values from the Inception model. In this way the Inception model extracts useful information from the images and another neural network does the actual classification.
###Code
from IPython.display import Image, display
Image('images/08_transfer_learning_flowchart.png')
###Output
_____no_output_____
###Markdown
Imports
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
import time
from datetime import timedelta
import os
# Functions and classes for loading and using the Inception model.
import inception
# We use Pretty Tensor to define the new classifier.
import prettytensor as pt
###Output
_____no_output_____
###Markdown
This was developed using Python 3.5.2 (Anaconda) and the TensorFlow version is:
###Code
tf.__version__
###Output
_____no_output_____
###Markdown
PrettyTensor version:
###Code
pt.__version__
###Output
_____no_output_____
###Markdown
Load the CIFAR-10 Data
###Code
import cifar10
###Output
_____no_output_____
###Markdown
The data dimensions are already defined in the cifar10 module, so we just import what we need.
###Code
from cifar10 import num_classes
###Output
_____no_output_____
###Markdown
Set the path for where the dataset is saved on your computer.
###Code
# cifar10.data_path = "data/CIFAR-10/"
###Output
_____no_output_____
###Markdown
The CIFAR-10 dataset is about 163 MB and will be downloaded automatically if it is not found in the given path.
###Code
cifar10.maybe_download_and_extract()
###Output
Data has apparently already been downloaded and unpacked.
###Markdown
Load the class names.
###Code
class_names = cifar10.load_class_names()
class_names
###Output
Loading data: data/CIFAR-10/cifar-10-batches-py/batches.meta
###Markdown
Load the training set. This function returns the images, the class numbers as integers, and the class numbers as One-Hot encoded arrays called labels.
###Code
images_train, cls_train, labels_train = cifar10.load_training_data()
###Output
Loading data: data/CIFAR-10/cifar-10-batches-py/data_batch_1
Loading data: data/CIFAR-10/cifar-10-batches-py/data_batch_2
Loading data: data/CIFAR-10/cifar-10-batches-py/data_batch_3
Loading data: data/CIFAR-10/cifar-10-batches-py/data_batch_4
Loading data: data/CIFAR-10/cifar-10-batches-py/data_batch_5
###Markdown
Load the test set.
###Code
images_test, cls_test, labels_test = cifar10.load_test_data()
###Output
Loading data: data/CIFAR-10/cifar-10-batches-py/test_batch
###Markdown
The CIFAR-10 dataset has now been loaded. It consists of 60,000 images and their associated labels (the classes of the images). The dataset is split into two mutually exclusive subsets: the training set and the test set.
###Code
print("Size of:")
print("- Training-set:\t\t{}".format(len(images_train)))
print("- Test-set:\t\t{}".format(len(images_test)))
###Output
Size of:
- Training-set: 50000
- Test-set: 10000
###Markdown
Helper function for plotting images. This function plots 9 images in a 3x3 grid and writes the true and predicted classes below each image.
###Code
def plot_images(images, cls_true, cls_pred=None, smooth=True):
assert len(images) == len(cls_true)
# Create figure with sub-plots.
fig, axes = plt.subplots(3, 3)
# Adjust vertical spacing.
if cls_pred is None:
hspace = 0.3
else:
hspace = 0.6
fig.subplots_adjust(hspace=hspace, wspace=0.3)
# Interpolation type.
if smooth:
interpolation = 'spline16'
else:
interpolation = 'nearest'
for i, ax in enumerate(axes.flat):
# There may be less than 9 images, ensure it doesn't crash.
if i < len(images):
# Plot image.
ax.imshow(images[i],
interpolation=interpolation)
# Name of the true class.
cls_true_name = class_names[cls_true[i]]
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true_name)
else:
# Name of the predicted class.
cls_pred_name = class_names[cls_pred[i]]
xlabel = "True: {0}\nPred: {1}".format(cls_true_name, cls_pred_name)
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Plot a few images to see if the data is correct.
###Code
# Get the first images from the test-set.
images = images_test[0:9]
# Get the true classes for those images.
cls_true = cls_test[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true, smooth=False)
###Output
_____no_output_____
###Markdown
Download the Inception Model. The Inception model is downloaded from the internet. This is the default directory where the data files are saved. The directory is created automatically if it does not exist.
###Code
# inception.data_dir = 'inception/'
###Output
_____no_output_____
###Markdown
The Inception model is downloaded automatically if it does not already exist in the directory. It is about 85 MB. See Tutorial #07 for more details.
###Code
inception.maybe_download()
###Output
Downloading Inception v3 Model ...
Data has apparently already been downloaded and unpacked.
###Markdown
Load the Inception Model. Load the model so it is ready for classifying images. Note the warning messages, which may cause the program to fail in the future.
###Code
model = inception.Inception()
###Output
_____no_output_____
###Markdown
Calculate Transfer-Values. Import the helper function for obtaining the transfer-values from the Inception model.
###Code
from inception import transfer_values_cache
###Output
_____no_output_____
###Markdown
Set the file paths for the cache files of the training set and the test set.
###Code
file_path_cache_train = os.path.join(cifar10.data_path, 'inception_cifar10_train.pkl')
file_path_cache_test = os.path.join(cifar10.data_path, 'inception_cifar10_test.pkl')
print("Processing Inception transfer-values for training-images ...")
# Scale images because Inception needs pixels to be between 0 and 255,
# while the CIFAR-10 functions return pixels between 0.0 and 1.0
images_scaled = images_train * 255.0
# If transfer-values have already been calculated then reload them,
# otherwise calculate them and save them to a cache-file.
transfer_values_train = transfer_values_cache(cache_path=file_path_cache_train,
images=images_scaled,
model=model)
print("Processing Inception transfer-values for test-images ...")
# Scale images because Inception needs pixels to be between 0 and 255,
# while the CIFAR-10 functions return pixels between 0.0 and 1.0
images_scaled = images_test * 255.0
# If transfer-values have already been calculated then reload them,
# otherwise calculate them and save them to a cache-file.
transfer_values_test = transfer_values_cache(cache_path=file_path_cache_test,
images=images_scaled,
model=model)
###Output
Processing Inception transfer-values for test-images ...
- Data loaded from cache-file: data/CIFAR-10/inception_cifar10_test.pkl
###Markdown
Check the shape of the array holding the transfer-values. There are 50,000 images in the training set and each image has 2048 transfer-values.
###Code
transfer_values_train.shape
###Output
_____no_output_____
###Markdown
Similarly, there are 10,000 images in the test set, each with 2048 transfer-values.
###Code
transfer_values_test.shape
###Output
_____no_output_____
###Markdown
Helper function for plotting transfer-values.
###Code
def plot_transfer_values(i):
print("Input image:")
# Plot the i'th image from the test-set.
plt.imshow(images_test[i], interpolation='nearest')
plt.show()
print("Transfer-values for the image using Inception model:")
# Transform the transfer-values into an image.
img = transfer_values_test[i]
img = img.reshape((32, 64))
# Plot the image for the transfer-values.
plt.imshow(img, interpolation='nearest', cmap='Reds')
plt.show()
plot_transfer_values(i=16)
plot_transfer_values(i=17)
###Output
Input image:
###Markdown
Analysis of Transfer-Values using PCA. Use Principal Component Analysis (PCA) from scikit-learn to reduce the transfer-values from 2048 dimensions to 2 dimensions so they can be plotted.
###Code
from sklearn.decomposition import PCA
###Output
_____no_output_____
###Markdown
Create a new PCA object and set the target dimensionality to 2.
###Code
pca = PCA(n_components=2)
###Output
_____no_output_____
###Markdown
Calculating the PCA takes a while, so the number of samples is limited to 3000. You can use the whole training set if you like.
###Code
transfer_values = transfer_values_train[0:3000]
###Output
_____no_output_____
###Markdown
Get the class numbers for the samples you selected.
###Code
cls = cls_train[0:3000]
###Output
_____no_output_____
###Markdown
Check that the array has 3000 samples, each with 2048 transfer-values.
###Code
transfer_values.shape
###Output
_____no_output_____
###Markdown
Use PCA to reduce the transfer-values from 2048 to 2 dimensions.
###Code
transfer_values_reduced = pca.fit_transform(transfer_values)
###Output
_____no_output_____
###Markdown
The array now has 3000 samples with two values each.
###Code
transfer_values_reduced.shape
###Output
_____no_output_____
###Markdown
Helper function for plotting the reduced transfer-values.
###Code
def plot_scatter(values, cls):
# Create a color-map with a different color for each class.
import matplotlib.cm as cm
cmap = cm.rainbow(np.linspace(0.0, 1.0, num_classes))
# Get the color for each sample.
colors = cmap[cls]
# Extract the x- and y-values.
x = values[:, 0]
y = values[:, 1]
# Plot it.
plt.scatter(x, y, color=colors)
plt.show()
###Output
_____no_output_____
###Markdown
Plot the transfer-values that have been reduced with PCA. Ten different colours are used for the different classes in the CIFAR-10 dataset. The colours are grouped together but with a lot of overlap. This may be because PCA cannot properly separate the transfer-values.
###Code
plot_scatter(transfer_values_reduced, cls)
###Output
_____no_output_____
###Markdown
Analysis of Transfer-Values using t-SNE
###Code
from sklearn.manifold import TSNE
###Output
_____no_output_____
###Markdown
Another method for dimensionality reduction is t-SNE. Unfortunately t-SNE is very slow, so we first reduce the dimensionality from 2048 to 50 using PCA.
###Code
pca = PCA(n_components=50)
transfer_values_50d = pca.fit_transform(transfer_values)
###Output
_____no_output_____
###Markdown
Create a new t-SNE object for the final dimensionality reduction and set the target dimensionality to 2.
###Code
tsne = TSNE(n_components=2)
###Output
_____no_output_____
###Markdown
Perform the final dimensionality reduction with t-SNE. The t-SNE implementation currently in scikit-learn may not handle data with many samples, so the program may crash if you use the full training set.
###Code
transfer_values_reduced = tsne.fit_transform(transfer_values_50d)
###Output
_____no_output_____
###Markdown
Check that the array has 3000 samples with two transfer-values each.
###Code
transfer_values_reduced.shape
###Output
_____no_output_____
###Markdown
Plot the transfer-values reduced to 2 dimensions with t-SNE. Compared to the PCA result above it shows better separation. This means the transfer-values obtained from the Inception model appear to contain enough information to classify the CIFAR-10 images, although there is still some overlap, so the separation is not perfect.
###Code
plot_scatter(transfer_values_reduced, cls)
###Output
_____no_output_____
###Markdown
New Classifier in TensorFlow. We will now create a new neural network in TensorFlow. It takes the transfer-values from the Inception model as input and outputs the predicted classes for the CIFAR-10 images. It is assumed you are already familiar with building neural networks in TensorFlow, otherwise see Tutorial #03. Placeholder Variables. First we need the array length of the transfer-values, which is stored as a variable in the Inception model object.
###Code
transfer_len = model.transfer_len
###Output
_____no_output_____
###Markdown
Now create a placeholder variable for the input transfer-values that are fed into our new network. Its shape is `[None, transfer_len]`, where `None` means the input array can hold an arbitrary number of samples, each with 2048 elements, i.e. `transfer_len`.
###Code
x = tf.placeholder(tf.float32, shape=[None, transfer_len], name='x')
###Output
_____no_output_____
###Markdown
Define another placeholder variable for the true class labels of the input images. These are One-Hot encoded arrays with 10 elements, one for each possible class in the dataset.
###Code
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
###Output
_____no_output_____
###Markdown
Calculate the true class as an integer. This could also have been a placeholder variable.
###Code
y_true_cls = tf.argmax(y_true, dimension=1)
###Output
_____no_output_____
###Markdown
Neural Network. Create the neural network for classifying the CIFAR-10 dataset. It takes the transfer-values from the Inception model as input, stored in the placeholder variable `x`. The network outputs the predicted class `y_pred`. See Tutorial #03 for more details on using Pretty Tensor to construct neural networks.
###Code
# Wrap the transfer-values as a Pretty Tensor object.
x_pretty = pt.wrap(x)
with pt.defaults_scope(activation_fn=tf.nn.relu):
y_pred, loss = x_pretty.\
fully_connected(size=1024, name='layer_fc1').\
softmax_classifier(num_classes=num_classes, labels=y_true)
###Output
_____no_output_____
###Markdown
Optimization Method. Create a variable to keep track of the number of optimization iterations performed so far.
###Code
global_step = tf.Variable(initial_value=0,
name='global_step', trainable=False)
###Output
_____no_output_____
###Markdown
Method for optimizing the new neural network.
###Code
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss, global_step)
###Output
_____no_output_____
###Markdown
Classification Accuracy. The network's output y_pred is an array with 10 elements. The class number is the index of the largest element in the array.
###Code
y_pred_cls = tf.argmax(y_pred, dimension=1)
###Output
_____no_output_____
###Markdown
Create a vector of booleans indicating whether the true class equals the predicted class for each image.
###Code
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
###Output
_____no_output_____
###Markdown
The classification accuracy is calculated by first type-casting the vector of booleans to floats, so False becomes 0 and True becomes 1, and then taking the average of these numbers.
###Code
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
###Output
_____no_output_____
###Markdown
TensorFlow Run. Create the TensorFlow Session. Once the TensorFlow graph has been created, we need to create a TensorFlow session to run the graph.
###Code
session = tf.Session()
###Output
_____no_output_____
###Markdown
Initialize Variables. The weights and biases variables need to be initialized before we start optimizing them.
###Code
session.run(tf.global_variables_initializer())
###Output
_____no_output_____
###Markdown
Helper function for getting a random training batch. There are 50,000 images (and arrays of transfer-values) in the training set. It would take a long time to calculate the model's gradient using all these images (transfer-values), so we only use a small batch of images (transfer-values) in each iteration of the optimizer. If your computer crashes or becomes very slow because it runs out of RAM, you should try lowering this number, but you may then also need to perform more optimization iterations.
###Code
train_batch_size = 64
###Output
_____no_output_____
###Markdown
Function for selecting a random batch of transfer-values from the training set.
###Code
def random_batch():
# Number of images (transfer-values) in the training-set.
num_images = len(transfer_values_train)
# Create a random index.
idx = np.random.choice(num_images,
size=train_batch_size,
replace=False)
# Use the random index to select random x and y-values.
# We use the transfer-values instead of images as x-values.
x_batch = transfer_values_train[idx]
y_batch = labels_train[idx]
return x_batch, y_batch
###Output
_____no_output_____
###Markdown
Helper function for performing optimization iterations. This function performs a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration a new batch of data is selected from the training set, and TensorFlow executes the optimizer on those training samples. The progress is printed every 100 iterations.
###Code
def optimize(num_iterations):
# Start-time used for printing time-usage below.
start_time = time.time()
for i in range(num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images (transfer-values) and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = random_batch()
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
# We also want to retrieve the global_step counter.
i_global, _ = session.run([global_step, optimizer],
feed_dict=feed_dict_train)
# Print status to screen every 100 iterations (and last).
if (i_global % 100 == 0) or (i == num_iterations - 1):
# Calculate the accuracy on the training-batch.
batch_acc = session.run(accuracy,
feed_dict=feed_dict_train)
# Print status.
msg = "Global Step: {0:>6}, Training Batch Accuracy: {1:>6.1%}"
print(msg.format(i_global, batch_acc))
# Ending time.
end_time = time.time()
# Difference between start and end-times.
time_dif = end_time - start_time
# Print the time-usage.
print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
###Output
_____no_output_____
###Markdown
Helper functions for showing results. Helper function for plotting example errors. Function for plotting examples of images from the test set that have been mis-classified.
###Code
def plot_example_errors(cls_pred, correct):
# This function is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = images_test[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = cls_test[incorrect]
n = min(9, len(images))
# Plot the first n images.
plot_images(images=images[0:n],
cls_true=cls_true[0:n],
cls_pred=cls_pred[0:n])
###Output
_____no_output_____
###Markdown
Helper function for plotting the confusion matrix.
###Code
# Import a function from sklearn to calculate the confusion-matrix.
from sklearn.metrics import confusion_matrix
def plot_confusion_matrix(cls_pred):
# This is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_test, # True class for test-set.
y_pred=cls_pred) # Predicted class.
# Print the confusion matrix as text.
for i in range(num_classes):
# Append the class-name to each line.
class_name = "({}) {}".format(i, class_names[i])
print(cm[i, :], class_name)
# Print the class-numbers for easy reference.
class_numbers = [" ({0})".format(i) for i in range(num_classes)]
print("".join(class_numbers))
###Output
_____no_output_____
###Markdown
Helper function for calculating classifications. This function calculates the predicted classes of images and also returns a boolean array indicating whether the classification of each image is correct. The calculation is split into batches because it might otherwise use too much memory. If your computer crashes, try lowering the batch-size.
###Code
# Split the data-set in batches of this size to limit RAM usage.
batch_size = 256
def predict_cls(transfer_values, labels, cls_true):
# Number of images.
num_images = len(transfer_values)
# Allocate an array for the predicted classes which
# will be calculated in batches and filled into this array.
cls_pred = np.zeros(shape=num_images, dtype=np.int)
# Now calculate the predicted classes for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_images:
# The ending index for the next batch is denoted j.
j = min(i + batch_size, num_images)
# Create a feed-dict with the images and labels
# between index i and j.
feed_dict = {x: transfer_values[i:j],
y_true: labels[i:j]}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
return correct, cls_pred
###Output
_____no_output_____
###Markdown
Calculate the predicted classes for the test set.
###Code
def predict_cls_test():
return predict_cls(transfer_values = transfer_values_test,
labels = labels_test,
cls_true = cls_test)
###Output
_____no_output_____
###Markdown
Helper function for calculating the classification accuracy. This function calculates the classification accuracy given a boolean array of whether each image was correctly classified. For example, `cls_accuracy([True, True, False, False, False]) = 2/5 = 0.4`.
###Code
def classification_accuracy(correct):
# When averaging a boolean array, False means 0 and True means 1.
# So we are calculating: number of True / len(correct) which is
# the same as the classification accuracy.
# Return the classification accuracy
# and the number of correct classifications.
return correct.mean(), correct.sum()
###Output
_____no_output_____
###Markdown
Helper function for showing the classification accuracy. Function for printing the classification accuracy on the test set. It takes a while to compute the classifications for all the images in the test set, which is why the functions above are called directly from this function, so the classifications do not have to be recalculated by each function.
###Code
def print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False):
# For all the images in the test-set,
# calculate the predicted classes and whether they are correct.
correct, cls_pred = predict_cls_test()
# Classification accuracy and the number of correct classifications.
acc, num_correct = classification_accuracy(correct)
# Number of images being classified.
num_images = len(correct)
# Print the accuracy.
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, num_correct, num_images))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
###Output
_____no_output_____
###Markdown
Results. Performance before any optimization. The accuracy on the test set is very low because the model has only been initialized and not optimized at all, so it just classifies the images randomly.
###Code
print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False)
###Output
Accuracy on Test-Set: 9.4% (939 / 10000)
###Markdown
Performance after 10,000 optimization iterations. After 10,000 optimization iterations the classification accuracy on the test set is about 90%. In comparison, the accuracy in the earlier Tutorial #06 was below 80%.
###Code
optimize(num_iterations=10000)
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
###Output
Accuracy on Test-Set: 90.7% (9069 / 10000)
Example errors:
###Markdown
Close TensorFlow Sessions. We are now done using TensorFlow, so we close the sessions to release their resources. Note that we need to close two TensorFlow sessions, one for each model object.
###Code
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# model.close()
# session.close()
###Output
_____no_output_____
|
course_projects/Keras_HandwritingRecognitionMNIST.ipynb
|
###Markdown
KerasKeras is a high-level API for TensorFlow. It has scikit-learn integration and was built around deep learning concepts, so it is very easy to construct the layers and implement the optimization functions. The objective here is to build the same neural network used with the TensorFlow low-level API on the MNIST data set, but this time with Keras.
###Code
from tensorflow import keras
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import RMSprop
###Output
_____no_output_____
###Markdown
Load MNIST dataset
###Code
(mnist_train_images, mnist_train_labels), (mnist_test_images, mnist_test_labels) = mnist.load_data()
###Output
_____no_output_____
###Markdown
Convert the data into Keras/TensorFlow format
###Code
train_images = mnist_train_images.reshape(60000, 784)
test_images = mnist_test_images.reshape(10000, 784)
train_images = train_images.astype('float32')
test_images = test_images.astype('float32')
###Output
_____no_output_____
###Markdown
Divide the image data by 255 to normalize
###Code
train_images /= 255
test_images /= 255
###Output
_____no_output_____
###Markdown
One-hot encode the labels
###Code
y_train = keras.utils.to_categorical(mnist_train_labels, 10)
y_test = keras.utils.to_categorical(mnist_test_labels, 10)
###Output
_____no_output_____
###Markdown
Same visualization of training images
###Code
import matplotlib.pyplot as plt
def display_sample(num):
#Print the label
label = y_train[num].argmax(axis=0)
#Reshape to a 28x28 image
image = train_images[num].reshape([28,28])
plt.title('Sample: %d Label: %d' % (num, label))
plt.imshow(image, cmap=plt.get_cmap('gray_r'))
plt.show()
display_sample(500)
###Output
_____no_output_____
###Markdown
Neural network in KerasBuilding it is straightforward, and it is the same network as with the low-level API:- The input layer of 784 features feeds into a ReLU layer of 784 nodes- This layer feeds into another ReLU layer of 512 nodes- The 512-node layer goes into 10 nodes with softmax applied
###Code
model = Sequential()
model.add(Dense(784, activation='relu', input_shape=(784,)))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu', input_shape=(784,)))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))
###Output
_____no_output_____
###Markdown
Summary of the model
###Code
model.summary()
###Output
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_5 (Dense) (None, 784) 615440
_________________________________________________________________
dropout (Dropout) (None, 784) 0
_________________________________________________________________
dense_6 (Dense) (None, 512) 401920
_________________________________________________________________
dropout_1 (Dropout) (None, 512) 0
_________________________________________________________________
dense_7 (Dense) (None, 10) 5130
=================================================================
Total params: 1,022,490
Trainable params: 1,022,490
Non-trainable params: 0
_________________________________________________________________
###Markdown
Optimizer and loss function
###Code
model.compile(loss='categorical_crossentropy',
optimizer=RMSprop(),
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Training the model10 epochs with a batch size of 100. Keras is slower and this can take some time.
###Code
history = model.fit(train_images, y_train,
batch_size=100,
epochs=10,
verbose=2,
validation_data=(test_images, y_test))
score = model.evaluate(test_images, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
###Output
Test loss: 0.10906815528869629
Test accuracy: 0.9824000000953674
###Markdown
Even with just 10 epochs, it outperformed the TensorFlow version - **96% (low-level API) vs 98% (Keras)** Visualize the wrong ones
###Code
for x in range(500):
test_image = test_images[x,:].reshape(1,784)
predicted_cat = model.predict(test_image).argmax()
label = y_test[x].argmax()
if (predicted_cat != label):
plt.title('Prediction: %d Label: %d' % (predicted_cat, label))
plt.imshow(test_image.reshape([28,28]), cmap=plt.get_cmap('gray_r'))
plt.show()
###Output
_____no_output_____
|
notebook/UserBasedCollaborativeFiltering.ipynb
|
###Markdown
Load data
###Code
# Imports needed below; the UserBased class comes from this project's own module
import os
import pickle
from concurrent import futures
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
DATA_DIR = os.path.join('..', 'data', 'processed', 'filtering')
with open(os.path.join(DATA_DIR, 'user_to_items.pickle'), 'rb') as file:
user_to_items = pickle.load(file)
with open(os.path.join(DATA_DIR, 'train_ratings.pickle'), 'rb') as file:
train_ratings = pickle.load(file)
with open(os.path.join(DATA_DIR, 'test_ratings.pickle'), 'rb') as file:
test_ratings = pickle.load(file)
# get number of users and items
M = 1 + max(
max([i[0] for i in train_ratings.keys()]),
max([i[0] for i in test_ratings.keys()])
)
N = 1 + max(
max([i[1] for i in train_ratings.keys()]),
max([i[1] for i in test_ratings.keys()])
)
M, N
###Output
_____no_output_____
###Markdown
Fit model
###Code
MIN_NEIGHBORS = 10
MAX_NEIGHBORS = 240
MIN_COMMON_ITEMS = 2
STEP = 10
TRIALS = MAX_NEIGHBORS - MIN_NEIGHBORS + 1
N_WORKERS = 12
[(MIN_NEIGHBORS + i * TRIALS // N_WORKERS, MIN_NEIGHBORS +(i + 1) * TRIALS // N_WORKERS) for i in range(N_WORKERS)]
def parallel_fit(min_neighbors, max_neighbors, step=STEP):
train_scores, test_scores = [], []
for neighbors in range(min_neighbors, max_neighbors, step):
ubcf = UserBased(M, N, neighbors=neighbors, min_common_items=MIN_COMMON_ITEMS)
ubcf.fit(train_ratings, user_to_items)
train_scores.append(ubcf.score(train_ratings))
test_scores.append(ubcf.score(test_ratings))
return train_scores, test_scores
pool = futures.ProcessPoolExecutor(N_WORKERS)
fs = [
pool.submit(
parallel_fit,
MIN_NEIGHBORS + i * TRIALS // N_WORKERS,
MIN_NEIGHBORS +(i + 1) * TRIALS // N_WORKERS
)
for i in range(N_WORKERS)
]
futures.wait(fs)
result = [f.result() for f in fs]
train_loss = np.concatenate([lst[0] for lst in result])
test_loss = np.concatenate([lst[1] for lst in result])
sns.lineplot(np.arange(MIN_NEIGHBORS, MAX_NEIGHBORS + 1, STEP), train_loss)
sns.lineplot(np.arange(MIN_NEIGHBORS, MAX_NEIGHBORS + 1, STEP), test_loss)
plt.xticks(np.arange(MIN_NEIGHBORS, MAX_NEIGHBORS + 1, STEP * 2))
plt.yticks(np.arange(0.5, 1.05, 0.05))
plt.legend(['train', 'test'])
plt.xlabel('Neighbors')
plt.ylabel('RMSE');
plt.savefig("ubcf.png")
test_loss
###Output
_____no_output_____
|
examples/embryos_dic.ipynb
|
###Markdown
Introduction**Date**: 13th August 2020 **Author**: Nelson Gonzabato Hello and welcome to another notebook. After a very long break from Kaggle, I have returned and decided to share what I am currently working on. **What's new in this version of the notebook?**I have added text to the notebook. More importantly, I have fixed issues with `load_augmentations` that flipped images. It is defined in this notebook and will be committed to `cytounet` later. Thank you for reading and hope you like it.As always, please let me know what could be improved. **Notebook Aims**Image data is an integral part of the biomedical research industry, especially in experiments that aim to image different stages of the cell or organisms. From imaging embryonic events to histopathological imaging, microscopy is extremely important.Despite its importance, microscopy tasks are often time consuming and require years of expert experience to not only obtain datasets but also perform such tasks as classifying normal vs diseased samples or simply counting the number of cells in an image.**Introducing cytounet**To simplify image segmentation, I have written up a small deep learning based Keras/Tensorflow python package [cytounet](https://github.com/Nelson-Gon/cytounet/tree/master/cytounet). It should be noted that despite its relative implementation simplicity, deep learning still has some flaws and limitations that we discuss at the end of this notebook.With that short intro, let us dive right into the code. For this task, we are going to generate image labels (masks) for embryonic images from the [Broad Institute](https://data.broadinstitute.org/bbbc/BBBC009/). Cloning the repositoryIn this notebook as stated above, we use `cytounet`, an implementation of the Unet [algorithm](https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/). The name `cytounet` reflects the fact that this is an implementation of the Unet algorithm for biological data (`cyto`). This does not mean that it is limited to biological data. You can play around with different non-biological datasets and judge for yourself how well it works.
###Code
import tensorflow as tf
physical_devices = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], True)
! git clone https://github.com/Nelson-Gon/cytounet.git
###Output
Cloning into 'cytounet'...
remote: Enumerating objects: 70, done.[K
remote: Counting objects: 100% (70/70), done.[K
remote: Compressing objects: 100% (50/50), done.[K
remote: Total 966 (delta 40), reused 49 (delta 20), pack-reused 896[K
Receiving objects: 100% (966/966), 51.22 MiB | 28.02 MiB/s, done.
Resolving deltas: 100% (427/427), done.
###Markdown
For convenience, we shall `c`hange `d`irectory into our newly cloned repository.
###Code
%cd cytounet
###Output
/kaggle/working/cytounet
###Markdown
**Importing relevant modules**Within cytounet are a few functions that will be useful for our pipeline. For convenience, we import everything from these modules.
###Code
from cytounet.model import *
from cytounet.data import *
from cytounet.augmentation import *
###Output
_____no_output_____
###Markdown
**What does our data look like?**It is often important to understand what our data looks like. For our purposes, it is especially important to know the file format of our images. It is also a useful idea to `l`ist files in our directory to simply confirm that it in fact is not empty.
###Code
! ls examples/BBBC003_v1/
###Output
images truth
###Markdown
**Image Transformations** First, we need to define a dictionary to define what kind of transformations will be used in our processing functions(`generate_*_data`) which generate augmented images that are then fed to our model on each epoch. This can also be useful if you need to synthesize data to escape the "curse" of small datasets which apparently do not work well for deep learning methods. Of particular interest here is the `rescale` argument which will allow us to transform our images to a form that is recognizable by our model
###Code
data_generator_args = dict(rotation_range=0.1,
rescale = 1./255,
width_shift_range=0.1,
height_shift_range=0.1,
shear_range=0.1,
zoom_range=0.1,
horizontal_flip=True,
fill_mode='nearest')
! if [ ! -d "aug" ]; then mkdir aug;fi
###Output
_____no_output_____
###Markdown
**Generate training data**Using the above arguments,we generate our train data. We also generate "fake" data which we save to `aug` to use this as our test data. You can also use this as your validation dataset and feed it to `generate_validation_data`.
###Code
train_gen = generate_train_data(5, "examples/BBBC003_v1","images", "truth",aug_dict = data_generator_args,
seed = 2, target_size = (512, 512), save_to_dir="aug")
for i, batch in enumerate(train_gen):
if i>= 5:
break
###Output
Found 15 images belonging to 1 classes.
Found 15 images belonging to 1 classes.
###Markdown
This section moves our images to target directories, it should be run to generate our test data.
###Code
! ls aug | wc -l
###Output
60
###Markdown
The above tells us that we have generated 60 fake images.
###Code
! if [ ! -d "aug/images" ]; then mkdir aug/images aug/masks;fi
! mv aug/image_* aug/images && mv aug/mask_* aug/masks && ls aug/masks | wc -l
def load_augmentations(image_path, mask_path, image_prefix="image", mask_prefix="mask"):
image_name_arr = glob.glob(os.path.join(image_path, "{}*.png".format(image_prefix)))
image_arr = []
mask_arr = []
for index, item in enumerate(image_name_arr):
img = image.load_img(item, color_mode="grayscale", target_size = (512, 512))
img = image.img_to_array(img)
mask = image.load_img(item.replace(image_path, mask_path).replace(image_prefix, mask_prefix),
color_mode="grayscale", target_size = (512, 512))
mask = image.img_to_array(mask)
image_arr.append(img)
mask_arr.append(mask)
image_arr = np.array(image_arr)
mask_arr = np.array(mask_arr)
return image_arr, mask_arr
images, masks = load_augmentations("aug/images","aug/masks")
###Output
_____no_output_____
###Markdown
**View newly generated data**
###Code
show_images(images, number=10)
show_images(masks, number = 10)
###Output
_____no_output_____
###Markdown
Model building and trainingNext we build our model by calling `unet`. Here, we use binary cross entropy and accuracy as our loss function and metric respectively. You can alternatively use dice coeffiecient, jaccard similarity, mean IOU and so on as you may wish or as theory may allow. I have found `dice_coef` not to work very well so far. We also use `Adam` as our default optimizer. One could use `SGD` instead. Please try it out and let me know what your results are.
###Code
model = unet(input_size = (512, 512, 1), learning_rate = 1e-4, metrics=["accuracy"],
loss=["binary_crossentropy"])
###Output
_____no_output_____
###Markdown
In training our model, most of these hyperparameters are randomly chosen. This is one of the limitations of deep and machine learning in my opinion. One could perform a hyperparameter grid search but at the time of writing, this is not yet implemented in this package.
###Code
history = train(model, train_gen, epochs = 5, steps_per_epoch=150, save_as="unet_embryo.hdf5")
###Output
Epoch 1/5
150/150 [==============================] - 87s 581ms/step - loss: 0.5974 - accuracy: 0.6837
Epoch 2/5
150/150 [==============================] - 87s 579ms/step - loss: 0.0362 - accuracy: 0.9931
Epoch 3/5
150/150 [==============================] - 87s 579ms/step - loss: 0.0081 - accuracy: 0.9957
Epoch 4/5
150/150 [==============================] - 86s 574ms/step - loss: 0.0057 - accuracy: 0.9962
Epoch 5/5
150/150 [==============================] - 86s 573ms/step - loss: 0.0048 - accuracy: 0.9964
###Markdown
Predict on our generated dataFinally, we predict on our generated "fake" data and see how well our model does.
###Code
results = predict(model_object=unet(),test_path="aug/images", model_weights="unet_embryo.hdf5",
image_length=15, image_suffix="png")
###Output
15/15 [==============================] - 0s 6ms/step
###Markdown
ResultsFrom these results, it is clear that our model is overfitting on our dataset. We could overcome this by using regularizers such as `L1` and `L2` which use the absolute weights and sum square of the weights for the penalty respectively. Another solution is to use a simpler model and build on this to see how model complexity affects the model's output.
###Code
show_images(results, number = 10)
###Output
_____no_output_____
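###Markdown
As noted above, one way to fight the overfitting is weight regularization. Whether `cytounet`'s `unet()` exposes a regularizer argument is not shown here, so the sketch below only demonstrates the generic Keras mechanism on a stand-alone convolutional layer; the layer sizes and the penalty factor are arbitrary choices, not values from this notebook.
###Code
from tensorflow.keras import layers, models, regularizers

# Minimal sketch of adding an L2 (weight decay) penalty to a Keras layer.
# This is NOT cytounet's unet(); it only illustrates the mechanism.
demo = models.Sequential([
    layers.Conv2D(16, (3, 3), activation="relu", padding="same",
                  kernel_regularizer=regularizers.l2(1e-4),  # penalty factor is arbitrary
                  input_shape=(512, 512, 1)),
    layers.Conv2D(1, (1, 1), activation="sigmoid"),
])
demo.compile(optimizer="adam", loss="binary_crossentropy")
demo.summary()
###Output
_____no_output_____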
|
scikit-learn-official-examples/linear_model/plot_lasso_and_elasticnet.ipynb
|
###Markdown
Lasso and Elastic Net for Sparse SignalsEstimates Lasso and Elastic-Net regression models on a manually generatedsparse signal corrupted with an additive noise. Estimated coefficients arecompared with the ground-truth.
###Code
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import r2_score
# #############################################################################
# Generate some sparse data to play with
np.random.seed(42)
n_samples, n_features = 50, 200
X = np.random.randn(n_samples, n_features)
coef = 3 * np.random.randn(n_features)
inds = np.arange(n_features)
np.random.shuffle(inds)
coef[inds[10:]] = 0 # sparsify coef
y = np.dot(X, coef)
# add noise
y += 0.01 * np.random.normal(size=n_samples)
# Split data in train set and test set
n_samples = X.shape[0]
X_train, y_train = X[:n_samples // 2], y[:n_samples // 2]
X_test, y_test = X[n_samples // 2:], y[n_samples // 2:]
# #############################################################################
# Lasso
from sklearn.linear_model import Lasso
alpha = 0.1
lasso = Lasso(alpha=alpha)
y_pred_lasso = lasso.fit(X_train, y_train).predict(X_test)
r2_score_lasso = r2_score(y_test, y_pred_lasso)
print(lasso)
print("r^2 on test data : %f" % r2_score_lasso)
# #############################################################################
# ElasticNet
from sklearn.linear_model import ElasticNet
enet = ElasticNet(alpha=alpha, l1_ratio=0.7)
y_pred_enet = enet.fit(X_train, y_train).predict(X_test)
r2_score_enet = r2_score(y_test, y_pred_enet)
print(enet)
print("r^2 on test data : %f" % r2_score_enet)
plt.plot(enet.coef_, color='lightgreen', linewidth=2,
label='Elastic net coefficients')
plt.plot(lasso.coef_, color='gold', linewidth=2,
label='Lasso coefficients')
plt.plot(coef, '--', color='navy', label='original coefficients')
plt.legend(loc='best')
plt.title("Lasso R^2: %f, Elastic Net R^2: %f"
% (r2_score_lasso, r2_score_enet))
plt.show()
###Output
_____no_output_____
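###Markdown
A quick follow-up check, not part of the original example: count how many coefficients each model keeps non-zero, to see how well the sparsity of the true signal (10 non-zero coefficients) is recovered.
###Code
print("Non-zero coefficients (true signal): %d" % np.sum(coef != 0))
print("Non-zero coefficients (Lasso):       %d" % np.sum(lasso.coef_ != 0))
print("Non-zero coefficients (ElasticNet):  %d" % np.sum(enet.coef_ != 0))
###Output
_____no_output_____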
|
content/jupyter.ipynb
|
###Markdown
Jupyter notebook pageThis is a raw Jupyter notebook page, in `.ipynb` format. Usage Basic code execution works:
###Code
!hostname
print(sum(range(10)))
###Output
_____no_output_____
|
Lab04/Clustering.ipynb
|
###Markdown
ClusteringIn this exercise, you will use K-Means clustering to segment customer data into five clusters. Import the LibrariesYou will use the **KMeans** class to create your model. This will require a vector of features, so you will also use the **VectorAssembler** class.
###Code
from pyspark.ml.clustering import KMeans
from pyspark.ml.feature import VectorAssembler
###Output
_____no_output_____
###Markdown
Load Source DataThe source data for your clusters is in a comma-separated values (CSV) file, and includes the following features:- CustomerName: The customer's name- Age: The customer's age in years- MaritalStatus: The customer's marital status (1=Married, 0 = Unmarried)- IncomeRange: The top-level for the customer's income range (for example, a value of 25,000 means the customer earns up to 25,000)- Gender: A numeric value indicating gender (1 = female, 2 = male)- TotalChildren: The total number of children the customer has- ChildrenAtHome: The number of children the customer has living at home.- Education: A numeric value indicating the highest level of education the customer has attained (1=Started High School to 5=Post-Graduate Degree)- Occupation: A numeric value indicating the type of occupation of the customer (0=Unskilled manual work to 5=Professional)- HomeOwner: A numeric code to indicate home-ownership (1 - home owner, 0 = not a home owner)- Cars: The number of cars owned by the customer.
###Code
customers = spark.read.csv('wasb://spark@<YOUR_ACCOUNT>.blob.core.windows.net/data/customers.csv', inferSchema=True, header=True)
customers.show()
###Output
_____no_output_____
###Markdown
Create the K-Means ModelYou will use the features in the customer data to create a K-Means model with a k value of 5. This will be used to generate 5 clusters.
###Code
assembler = VectorAssembler(inputCols = ["Age", "MaritalStatus", "IncomeRange", "Gender", "TotalChildren", "ChildrenAtHome", "Education", "Occupation", "HomeOwner", "Cars"], outputCol="features")
train = assembler.transform(customers)
kmeans = KMeans(featuresCol=assembler.getOutputCol(), predictionCol="cluster", k=5, seed=0)
model = kmeans.fit(train)
print ("Model Created!")
###Output
_____no_output_____
###Markdown
Get the Cluster CentersThe cluster centers are indicated as vector coordinates.
###Code
centers = model.clusterCenters()
print("Cluster Centers: ")
for center in centers:
print(center)
###Output
_____no_output_____
###Markdown
Predict ClustersNow that you have trained the model, you can use it to segment the customer data into 5 clusters and show each customer with their allocated cluster.
###Code
prediction = model.transform(train)
prediction.groupBy("cluster").count().orderBy("cluster").show()
prediction.select("CustomerName", "cluster").show(50)
###Output
_____no_output_____
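###Markdown
If you want a quick sanity check on how compact and well-separated the clusters are, recent Spark versions (2.3 and later) ship a **ClusteringEvaluator** that computes the silhouette score directly on the prediction DataFrame. This is an optional extra, not part of the lab instructions, and it assumes your cluster runs a Spark version that includes the evaluator.
###Code
from pyspark.ml.evaluation import ClusteringEvaluator

# Silhouette score: closer to 1 means tighter, better-separated clusters
evaluator = ClusteringEvaluator(predictionCol="cluster", featuresCol="features")
silhouette = evaluator.evaluate(prediction)
print("Silhouette with squared euclidean distance = " + str(silhouette))
###Output
_____no_output_____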
|
buy-low.ipynb
|
###Markdown
Buy Low, Sell High Backtesting a Long-Term Rolling Average Investment Strategy vs. Dollar Cost Averaging By: Jeff Hale "Buy low and sell high" is standard investment advice. However, investors buy when they think the price is going to rise and sell when they think the price is going to fall. Often, investors end up chasing the market: they sell when fear has taken over and prices have already fallen, and they buy when everyone is euphoric and prices are already high. Let's buy when things are below the 20 year moving average. I'm certainly not the first person to test an investment strategy like this one, but dollar cost averaging seems to be the prevailing wisdom. This is also a good exercise in building a basic back-testing process with python, pandas, and numpy. Dollar cost averaging is another recommended investment strategy. Let's use that as a baseline. Rough plan: 1. Import S&P 500 monthly data, from [Quandl](https://www.quandl.com). 2. Build DataFrames. 3. Code strategies to buy and sell based on triggers. 4. Evaluate. 5. Iterate. We'll use the inflation adjusted data. We want to make sure we go back a good ways. The past thirty years in the US economy have had their ups and downs, but money has largely been cheap and inflation has largely been low. We'll look at data going back 50 years for this example. Let's code and evaluate. Assumptions For our simulations, let's say you are an individual investor with 1k of cash to invest per month. We'll assume you can invest this tax free in a retirement account. Transaction cost is assumed to be zero with the [Vanguard](https://investor.vanguard.com/etf/profile/fees/voo) S&P 500 ETF (VOO) because it is free to buy and sell. Expense ratio is 0.04 percent. Quarterly distributions per share from the fund have been over one dollar for the past few quarters. Based on historic performance, there are expected to be no taxable capital gains distributions. Shares are currently $229.64 as of Jan. 2, 2019, COB. For now, we'll exclude the above concerns.
###Code
# essentials
import pdb
import numpy as np
import pandas as pd
# visualizations
import matplotlib.pyplot as plt
import seaborn as sns
# quandl
import quandl
# reproducibility
np.random.seed(34)
# Jupyter magic
%reload_ext autoreload
%autoreload 2
%matplotlib inline
# formatting
sns.set()
pd.options.display.float_format = '{:,.2f}'.format
###Output
_____no_output_____
###Markdown
Price Data We need the S&P 500 price data. We can get inflation adjusted or non-inflation adjusted monthly prices. We'll use the inflation adjusted series for real returns. We'll specify that we want the result in a numpy array. Insert your own authtoken; tokens are free at Quandl.
###Code
price_data = quandl.get(
"MULTPL/SP500_INFLADJ_MONTH",
authtoken="17ShEkbGYrhcJ7Qw8DvJ", # use your own free auth token, please
returns="numpy"
)
df = pd.DataFrame(price_data)
df.tail(20)
###Output
_____no_output_____
###Markdown
We have a few months with both the beginning and ending dates. Even data meant to be consumed from a financial data firm is a bit messy!Let's drop the duplicate months and plot.
###Code
df = df.drop([1773, 1776, 1778])
df.tail()
sns.lineplot(data=df, x='Date', y='Value')
###Output
_____no_output_____
###Markdown
That chart always makes me nervous. Stock market curves that go up like that usually come back to earth eventually. This one is inflation adjusted, too. Compute Moving AverageLet's make a 20 year moving average. We'll set 20 as a constant.
###Code
# years to lookback for rolling average
LOOKBACK = 20
###Output
_____no_output_____
###Markdown
First we need to move the date into the index.
###Code
df = df.set_index('Date')
df.head()
###Output
_____no_output_____
###Markdown
The index is now a DatetimeIndex (you can confirm with `type(df.index)`). Looks good. Create the rolling average column.
###Code
df['moving_avg'] = df['Value'].rolling((12*LOOKBACK)).mean()
df.head()
df.tail()
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 1778 entries, 1871-01-01 to 2019-02-01
Data columns (total 2 columns):
Value 1778 non-null float64
moving_avg 1539 non-null float64
dtypes: float64(2)
memory usage: 41.7 KB
###Markdown
We're going to have NaNs for the first 20 years of data because there isn't data to look back over and average. That's fine. We're not that concerned about returns from the 1800s anyway. Those are less likely to have meaningful information for today's investments than more recent data. Let's drop those rows with NaNs.
###Code
df = df.dropna()
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 1539 entries, 1890-12-01 to 2019-02-01
Data columns (total 2 columns):
Value 1539 non-null float64
moving_avg 1539 non-null float64
dtypes: float64(2)
memory usage: 36.1 KB
###Markdown
Cool. We dropped the rows with NaNs.
###Code
ax1 = sns.lineplot(data=df, x=df.index, y='Value', color='green')
ax1.set_xlabel('Date')
ax1.set_ylabel('Price')
ax2 = plt.twinx()
ax2 = sns.lineplot(data=df, x=df.index, y='moving_avg', color='orange')
ax2.set_ylabel('Moving Average')
ax1.legend(labels=["Price"], loc = 'upper left')
ax2.legend(labels=["Moving Average"], loc = 'center left')
###Output
_____no_output_____
###Markdown
Create Standard Deviation Columns and Buy/Sell SignalsNow let's implement our trading strategy. Let's try buying when the price is 1 standard deviation below the rolling average.
###Code
df.moving_avg.std()
###Output
_____no_output_____
###Markdown
Let's make a column for the std dev and a column for our buy signal.
###Code
df['moving_avg_sd'] = df.moving_avg.rolling((12*LOOKBACK)).std()
df.tail()
###Output
_____no_output_____
###Markdown
Buy if the price drops more than one standard deviation below the moving average.
###Code
df['buy'] = df.Value < (df.moving_avg - df.moving_avg_sd)
df.tail()
df.buy.value_counts()
###Output
_____no_output_____
###Markdown
Ok. We'd buy in 254 months. Let's make a *sell* column for when the value is greater than 1 std deviation above the average.
###Code
df['sell'] = df.Value > (df.moving_avg + df.moving_avg_sd)
df.tail()
df.sell.value_counts()
###Output
_____no_output_____
###Markdown
Ok. We'd sell just over half the time. Trading Let's add a column for the % change in price
###Code
df['pct_change'] = df['Value'].pct_change()
df.tail()
###Output
_____no_output_____
###Markdown
Let's add columns for uninvested cash value, invested value, and total value.
###Code
df['cash_bal'] = 0
df['invested_bal'] = 0
df['total_bal'] = df.cash_bal + df.invested_bal
df.tail()
###Output
_____no_output_____
###Markdown
Variables Let's set our variables. We have 1k per month of new cash in from our non-investment earnings (e.g. our wages).We'll buy with 20% our cash balance if our buy signal is reached.We'll sell 20% of our invested value if our sell signal is reached.
###Code
def allocate_gain_loss():
# change invested balance by gain or loss since the past month
df['invested_bal'] *= (1 + df['pct_change'])
df.tail()
df.info()
df.head()
###Output
_____no_output_____
###Markdown
Let's delete the early rows with NaNs.
###Code
df = df.dropna()
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 1300 entries, 1910-11-01 to 2019-02-01
Data columns (total 9 columns):
Value 1300 non-null float64
moving_avg 1300 non-null float64
moving_avg_sd 1300 non-null float64
buy 1300 non-null bool
sell 1300 non-null bool
pct_change 1300 non-null float64
cash_bal 1300 non-null int64
invested_bal 1300 non-null int64
total_bal 1300 non-null int64
dtypes: bool(2), float64(4), int64(3)
memory usage: 83.8 KB
###Markdown
And let's simulate the past 30 years of investing. After all, this is modelling retirement investments. Later we can look at simulating different investing start dates.
###Code
df = df.iloc[-360:]
df.head()
df.tail()
df.buy.value_counts()
df.sell.value_counts()
###Output
_____no_output_____
###Markdown
Now let's make a function to execute our buy, sell, or hold decision and fill our dataframe
###Code
def invest(
df, # (pandas DataFrame): the DataFrame with the investment data
monthly_new_capital = 1000, # monthly contribution
cash_bal = 0, # initial cash_bal
invested_bal = 0, # initial invested_bal
monthly_sell_pct = .20, # % of invested $ to sell
monthly_buy_pct = .20, # % of cash balance to invest
):
for i, row in df.iterrows():
# add outside contribution to cash balance
df.loc[i, 'cash_bal'] = cash_bal + monthly_new_capital
# update cash_bal for buy, sell, hold calcs
cash_bal = df.loc[i, 'cash_bal']
# update invested balance by gain or loss since the past month
df.loc[i, 'invested_bal'] = invested_bal * (1 + df.loc[i, 'pct_change'])
# update invested_bal for buy, sell, hold calcs
invested_bal = df.loc[i, 'invested_bal']
# buy, sell, or hold
if df.loc[i,'buy'] == True:
new_investment = monthly_buy_pct * cash_bal
df.loc[i, 'invested_bal'] = invested_bal + new_investment
df.loc[i, 'cash_bal'] = cash_bal - new_investment
else:
if df.loc[i,'sell'] == True:
withdrawal = monthly_sell_pct * invested_bal
df.loc[i, 'invested_bal'] = invested_bal - withdrawal
df.loc[i, 'cash_bal'] = cash_bal + withdrawal
cash_bal = df.loc[i, 'cash_bal']
invested_bal = df.loc[i, 'invested_bal']
# make summary dataframe columns
df['total_bal'] = df.cash_bal + df.invested_bal
df['row'] = list(range(1, len(df) + 1))  # months contributed so far (1-based so the first month counts)
df["total_contribution"] = monthly_new_capital * df['row']
df['return'] = df['total_bal'] - df['total_contribution']
df['percent_return'] = df['return']/ df['total_contribution']
return df
invest(df)
df.head()
df.tail()
###Output
_____no_output_____
###Markdown
Let's compute the annualized Sharpe ratio
###Code
returns = df['return']
sharpe_ratio = np.sqrt(12) * (returns.mean() / returns.std())
sharpe_ratio
###Output
_____no_output_____
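###Markdown
The calculation above feeds the cumulative dollar `return` column into the Sharpe formula. A more conventional variant, sketched below, uses the strategy's monthly percentage returns instead, here approximated from the month-over-month change in the total balance; it ignores the new monthly contributions and assumes a 0% risk-free rate, so treat it as a rough comparison only.
###Code
# Sharpe ratio from monthly percentage changes of the total balance
monthly_returns = df['total_bal'].pct_change().dropna()
sharpe_monthly = np.sqrt(12) * monthly_returns.mean() / monthly_returns.std()
sharpe_monthly
###Output
_____no_output_____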
###Markdown
Chart ItLet's chart the total balance with the component cash and invested balances.
###Code
df.loc[:, ['cash_bal', 'invested_bal']].plot.area()
###Output
_____no_output_____
###Markdown
Ok. That's a decent return. Let's see how it compares to an investment where you just invest your contribution into the S&P500 each month. Baseline Invest 1k per month in S&P 500 EFT.
###Code
df_b = df.copy()
###Output
_____no_output_____
###Markdown
Set df_b.buy = True. We're buying every month!
###Code
df_b['buy'] = True
df_b = invest(df_b, monthly_new_capital=1000, monthly_buy_pct = 1.0)
df_b.head()
df_b.tail()
df_b.loc[:, ['cash_bal', 'invested_bal']].plot.area()
###Output
_____no_output_____
###Markdown
You would have been much better off investing every month than following the earlier market timing strategy. Playing with investment options Sell or buy 50% of holdings or cash each time.
###Code
df_bets = df.copy()
df_bets = invest(df_bets, monthly_new_capital=1000, monthly_buy_pct = .5, monthly_sell_pct = .5)
df_bets.tail()
###Output
_____no_output_____
###Markdown
Let's try changing our buy and sell points. Let's try both at .5 sd from the moving average.
###Code
df['sell'] = df.Value > (df.moving_avg +(0.5 * df.moving_avg_sd))
df['buy'] = df.Value < (df.moving_avg - (0.5 * df.moving_avg_sd))
df_sd = df.copy()
df_sd = invest(df_sd, monthly_new_capital=1000, monthly_buy_pct = 0.5, monthly_sell_pct = 0.5)
df_sd.tail()
###Output
_____no_output_____
|
ipython/localUncertainty.ipynb
|
###Markdown
First Order Local Uncertainty Analysis for Chemical Reaction SystemsThis ipython notebook performs first order local uncertainty analysis for a chemical reaction system using an RMG-generated model.
###Code
from rmgpy.tools.uncertainty import Uncertainty
from rmgpy.tools.canteraModel import getRMGSpeciesFromUserSpecies
from rmgpy.species import Species
from IPython.display import display, Image
import os
# Define the CHEMKIN and Dictionary file paths. This is a reduced phenyldodecane (PDD) model.
# Must use annotated chemkin file
chemkinFile = 'uncertainty/chem_annotated.inp'
dictFile = 'uncertainty/species_dictionary.txt'
# Alternatively, uncomment the following lines and comment out the lines above to use the minimal model,
# which will not take as long to process
# Make sure to also uncomment the specified lines two code blocks down which are related
# chemkinFile = 'data/minimal_model/chem_annotated.inp'
# dictFile = 'data/minimal_model/species_dictionary.txt'
###Output
_____no_output_____
###Markdown
Initialize the `Uncertainty` class object with the model.
###Code
uncertainty = Uncertainty(outputDirectory='uncertainty')
uncertainty.loadModel(chemkinFile, dictFile)
###Output
_____no_output_____
###Markdown
We can now perform stand-alone sensitivity analysis.
###Code
# Map the species to the objects within the Uncertainty class
PDD = Species().fromSMILES("CCCCCCCCCCCCc1ccccc1")
C11ene=Species().fromSMILES("CCCCCCCCCC=C")
ETHBENZ=Species().fromSMILES("CCc1ccccc1")
mapping = getRMGSpeciesFromUserSpecies([PDD,C11ene,ETHBENZ], uncertainty.speciesList)
initialMoleFractions = {mapping[PDD]: 1.0}
T = (623,'K')
P = (350,'bar')
terminationTime = (72, 'h')
sensitiveSpecies=[mapping[PDD], mapping[C11ene]]
# If you used the minimal model, uncomment the following lines and comment out the lines above
# ethane = Species().fromSMILES('CC')
# C2H4 = Species().fromSMILES('C=C')
# Ar = Species().fromSMILES('[Ar]')
# mapping = getRMGSpeciesFromUserSpecies([ethane, C2H4, Ar], uncertainty.speciesList)
# # Define the reaction conditions
# initialMoleFractions = {mapping[ethane]: 1.0, mapping[Ar]:50.0}
# T = (1300,'K')
# P = (1,'atm')
# terminationTime = (5e-4, 's')
# sensitiveSpecies=[mapping[ethane], mapping[C2H4]]
# Perform the sensitivity analysis
uncertainty.sensitivityAnalysis(initialMoleFractions, sensitiveSpecies, T, P, terminationTime, number=5, fileformat='.png')
# Show the sensitivity plots
for species in sensitiveSpecies:
print '{}: Reaction Sensitivities'.format(species)
index = species.index
display(Image(filename=os.path.join(uncertainty.outputDirectory,'solver','sensitivity_1_SPC_{}_reactions.png'.format(index))))
print '{}: Thermo Sensitivities'.format(species)
display(Image(filename=os.path.join(uncertainty.outputDirectory,'solver','sensitivity_1_SPC_{}_thermo.png'.format(index))))
###Output
_____no_output_____
###Markdown
If we want to run local uncertainty analysis, we must assign all the uncertainties using the `Uncertainty` class' `assignParameterUncertainties` function. `ThermoParameterUncertainty` and `KineticParameterUncertainty` classes may be customized and passed into this function if non-default constants for constructing the uncertainties are desired. This must be done after the parameter sources are properly extracted from the model. Thermo Uncertainty Each species is assigned a uniform uncertainty distribution in free energy:$G \in [G_{min},G_{max}]$$dG = (G_{max} - G_{min})/2$Several parameters are used to formulate $dG$. These are $dG_{library}$, $dG_{QM}$, $dG_{GAV}$, and $dG_{group}$. $dG = \delta_{library} dG_{library} + \delta_{QM} dG_{QM} +\delta_{GAV} dG_{GAV} +\sum_{group} w_{group} dG_{group}$where $\delta$ is the Kronecker delta, which equals one if the species thermochemistry parameter contains the particular source type, and $w_{group}$ is the weight of the thermo group used to construct the species thermochemistry in the group additivity method. Kinetics Uncertainty Each reaction is assigned a uniform uncertainty distribution in the overall ln(k), or ln(A):$d \ln (k) \in [\ln(k_{min}),\ln(k_{max})]$$d\ln(k) = [\ln(k_{max})-\ln(k_{min})]/2$The parameters used to formulate $d \ln(k)$ are $d\ln(k_{library})$, $d\ln(k_{training})$, $d\ln(k_{pdep})$, $d\ln(k_{family})$, $d\ln(k_{non-exact})$, and $d\ln(k_{rule})$. For library, training, and pdep reactions, the kinetic uncertainty is assigned according to their uncertainty type. For kinetics estimated using RMG's rate rules, the following formula is used to calculate the uncertainty:$d \ln (k) = d\ln(k_{family}) + \log_{10}(N+1) \, d\ln(k_{non-exact})+\sum_{rule} w_{rule} d \ln(k_{rule})$where $N$ is the total number of rate rules used and $w_{rule}$ is the weight of the rate rule used to estimate the kinetics.
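As a rough illustration of how the rate-rule expression above combines its terms, here is a small sketch with invented weights and uncertainty constants (all numbers are placeholders, not RMG defaults):

```python
import numpy as np

# Hypothetical per-source uncertainties in ln(k); placeholder values only
dlnk_family = 1.0                                    # family baseline uncertainty
dlnk_nonexact = 3.5                                  # penalty per decade of averaged rules
rule_uncertainties = {'rule_A': 0.5, 'rule_B': 1.2}  # d ln k for each rate rule used
rule_weights = {'rule_A': 0.5, 'rule_B': 0.5}        # averaging weights (sum to 1)

N = len(rule_uncertainties)  # total number of rate rules used
dlnk = (dlnk_family
        + np.log10(N + 1) * dlnk_nonexact
        + sum(rule_weights[r] * rule_uncertainties[r] for r in rule_uncertainties))
print(dlnk)  # overall d ln(k) for this rate-rule estimate
```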
###Code
uncertainty.loadDatabase()
uncertainty.extractSourcesFromModel()
uncertainty.assignParameterUncertainties()
###Output
_____no_output_____
###Markdown
The first order local uncertainty, or variance $(d\ln c_i)^2$, for the concentration of species $i$ is defined as:$(d\ln c_i)^2 = \sum_j \left(\frac{d\ln c_i}{d\ln k_j}d\ln k_j\right)^2 + \sum_k \left(\frac{d\ln c_i}{dG_k}dG_k\right)^2$We have previously performed the sensitivity analysis. Now we perform the local uncertainty analysis and apply the formula above using the parameter uncertainties and plot the results. This first analysis considers the parameters to be independent. In other words, even when multiple species thermochemistries depend on a single thermo group or multiple reaction rate coefficients depend on a particular rate rule, each value is considered independent of each other. This typically results in a much larger uncertainty value than in reality due to cancellation error.
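Schematically, this propagation is just a sum of squared (sensitivity × uncertainty) terms. A minimal NumPy sketch with made-up sensitivities and uncertainties (purely illustrative, not values from this model):

```python
import numpy as np

# Hypothetical sensitivities of ln(c_i) and parameter uncertainties (placeholder values)
dlnc_dlnk = np.array([0.8, -0.3, 0.1])   # d ln c_i / d ln k_j for three reactions
dlnk      = np.array([1.0, 0.5, 2.0])    # uncertainties d ln k_j
dlnc_dG   = np.array([0.02, -0.05])      # d ln c_i / d G_k for two species (per kcal/mol)
dG        = np.array([2.0, 1.5])         # uncertainties d G_k (kcal/mol)

var_lnc = np.sum((dlnc_dlnk * dlnk) ** 2) + np.sum((dlnc_dG * dG) ** 2)
print(np.sqrt(var_lnc))  # overall uncertainty d ln c_i
```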
###Code
uncertainty.localAnalysis(sensitiveSpecies, correlated=False, number=5, fileformat='.png')
# Show the uncertainty plots
for species in sensitiveSpecies:
print '{}: Thermo Uncertainty Contributions'.format(species)
display(Image(filename=os.path.join(uncertainty.outputDirectory,'thermoLocalUncertainty_{}.png'.format(species.toChemkin()))))
print '{}: Reaction Uncertainty Contributions'.format(species)
display(Image(filename=os.path.join(uncertainty.outputDirectory,'kineticsLocalUncertainty_{}.png'.format(species.toChemkin()))))
###Output
_____no_output_____
###Markdown
Correlated Uncertainty A more accurate picture of the uncertainty in a mechanism estimated using groups and rate rules requires accounting for the correlated errors resulting from using the same groups in multiple parameters. This requires us to track the original sources: the groups and the rate rules, which constitute each parameter. These errors may cancel in the final uncertainty calculation. Note, however, that the error stemming from the estimation method itself does not cancel. For thermochemistry, the error terms described previously are $dG_{library}$, $dG_{QM}$, $dG_{GAV}$, and $dG_{group}$. Of these, $dG_{GAV}$ is an uncorrelated independent residual error, whereas the other terms are correlated. Noting this distinction, we can re-categorize and index these two types of parameters in terms of correlated sources $dG_{corr,y}$ and uncorrelated sources $dG_{res,z}$. For kinetics, the error terms described previously are $d\ln(k_{library})$, $d\ln(k_{training})$, $d\ln(k_{pdep})$, $d\ln(k_{family})$, $d\ln(k_{non-exact})$, and $d\ln(k_{rule})$. Of these, $d\ln(k_{family})$ and $d\ln(k_{non-exact})$ are uncorrelated independent error terms resulting from the method of estimation. Again, we re-categorize the correlated versus non-correlated sources as $d\ln k_{corr,v}$ and $d\ln k_{res,w}$, respectively. The first order local uncertainty, or variance $(d\ln c_{corr,i})^2$, for the concentration of species $i$ becomes:$(d\ln c_{corr,i})^2 = \sum_v \left(\frac{d\ln c_i}{d\ln k_{corr,v}}d\ln k_{corr,v}\right)^2 + \sum_w \left(\frac{d\ln c_i}{d\ln k_{res,w}}d\ln k_{res,w}\right)^2 + \sum_y \left(\frac{d\ln c_i}{dG_{corr,y}}dG_{corr,y}\right)^2 + \sum_z \left(\frac{d\ln c_i}{dG_{res,z}}dG_{res,z}\right)^2$where the differential terms can be computed as:$\frac{d\ln c_i}{d\ln k_{corr,v}} = \sum_j \frac{d\ln c_i}{d\ln k_j}\frac{d\ln k_j}{d\ln k_{corr,v}}$$\frac{d\ln c_i}{d G_{corr,y}} = \sum_k \frac{d\ln c_i}{dG_k}\frac{dG_k}{dG_{corr,y}}$
###Code
uncertainty.assignParameterUncertainties(correlated=True)
uncertainty.localAnalysis(sensitiveSpecies, correlated=True, number=10, fileformat='.png')
# Show the uncertainty plots
for species in sensitiveSpecies:
print '{}: Thermo Uncertainty Contributions'.format(species)
display(Image(filename=os.path.join(uncertainty.outputDirectory,'thermoLocalUncertainty_{}.png'.format(species.toChemkin()))))
print '{}: Reaction Uncertainty Contributions'.format(species)
display(Image(filename=os.path.join(uncertainty.outputDirectory,'kineticsLocalUncertainty_{}.png'.format(species.toChemkin()))))
###Output
_____no_output_____
###Markdown
First Order Local Uncertainty Analysis for Chemical Reaction Systems This IPython notebook performs first-order local uncertainty analysis for a chemical reaction system using an RMG-generated model. Step 1: Define mechanism files and simulation settings Two examples are provided below. You should only run one of the two blocks.
###Code
# This is a small phenyldodecane pyrolysis model
# Must use annotated chemkin file
chemkinFile = './data/pdd_model/chem_annotated.inp'
dictFile = './data/pdd_model/species_dictionary.txt'
# Initialize the Uncertainty class instance and load the model
uncertainty = Uncertainty(outputDirectory='./temp/uncertainty')
uncertainty.loadModel(chemkinFile, dictFile)
# Map the species to the objects within the Uncertainty class
PDD = Species().fromSMILES("CCCCCCCCCCCCc1ccccc1")
C11ene=Species().fromSMILES("CCCCCCCCCC=C")
ETHBENZ=Species().fromSMILES("CCc1ccccc1")
mapping = getRMGSpeciesFromUserSpecies([PDD,C11ene,ETHBENZ], uncertainty.speciesList)
# Define the reaction conditions
initialMoleFractions = {mapping[PDD]: 1.0}
T = (623, 'K')
P = (350, 'bar')
terminationTime = (72, 'h')
sensitiveSpecies=[mapping[PDD], mapping[C11ene]]
# This is an even smaller ethane pyrolysis model
# Must use annotated chemkin file
chemkinFile = 'data/ethane_model/chem_annotated.inp'
dictFile = 'data/ethane_model/species_dictionary.txt'
# Initialize the Uncertainty class instance and load the model
uncertainty = Uncertainty(outputDirectory='./temp/uncertainty')
uncertainty.loadModel(chemkinFile, dictFile)
# Map the species to the objects within the Uncertainty class
ethane = Species().fromSMILES('CC')
C2H4 = Species().fromSMILES('C=C')
mapping = getRMGSpeciesFromUserSpecies([ethane, C2H4], uncertainty.speciesList)
# Define the reaction conditions
initialMoleFractions = {mapping[ethane]: 1.0}
T = (1300, 'K')
P = (1, 'bar')
terminationTime = (0.5, 'ms')
sensitiveSpecies=[mapping[ethane], mapping[C2H4]]
###Output
_____no_output_____
###Markdown
Step 2: Run sensitivity analysisLocal uncertainty analysis uses the results from a first-order sensitivity analysis. This analysis is done using RMG's native solver.
###Code
# Perform the sensitivity analysis
uncertainty.sensitivityAnalysis(initialMoleFractions, sensitiveSpecies, T, P, terminationTime, number=5, fileformat='.png')
# Show the sensitivity plots
for species in sensitiveSpecies:
print '{}: Reaction Sensitivities'.format(species)
index = species.index
display(Image(filename=os.path.join(uncertainty.outputDirectory,'solver','sensitivity_1_SPC_{}_reactions.png'.format(index))))
print '{}: Thermo Sensitivities'.format(species)
display(Image(filename=os.path.join(uncertainty.outputDirectory,'solver','sensitivity_1_SPC_{}_thermo.png'.format(index))))
###Output
_____no_output_____
###Markdown
Step 3: Uncertainty assignment and propagation of uncorrelated parameters If we want to run local uncertainty analysis, we must assign all the uncertainties using the `Uncertainty` class' `assignParameterUncertainties` function. `ThermoParameterUncertainty` and `KineticParameterUncertainty` classes may be customized and passed into this function if non-default constants for constructing the uncertainties are desired. This must be done after the parameter sources are properly extracted from the model. Thermo Uncertainty Each species is assigned a uniform uncertainty distribution in free energy:$$G \in [G_{min},G_{max}]$$We will propagate the standard deviation in free energy, which for a uniform distribution is defined as follows:$$\Delta G = \frac{1}{\sqrt{12}}(G_{max} - G_{min})$$Several parameters are used to formulate $\Delta G$. These are $\Delta G_\mathrm{library}$, $\Delta G_\mathrm{QM}$, $\Delta G_\mathrm{GAV}$, and $\Delta G_\mathrm{group}$. $$\Delta G = \delta_\mathrm{library} \Delta G_\mathrm{library} + \delta_\mathrm{QM} \Delta G_\mathrm{QM} + \delta_\mathrm{GAV} \left( \Delta G_\mathrm{GAV} + \sum_{\mathrm{group}\; j} d_{j} \Delta G_{\mathrm{group},j} \right)$$where $\delta$ is the Kronecker delta function which equals one if the species thermochemistry parameter contains the particular source type and $d_{j}$ is the degeneracy (number of appearances) of the thermo group used to construct the species thermochemistry in the group additivity method. Kinetics Uncertainty Each reaction is assigned a uniform uncertainty distribution in the overall $\ln k$, or $\ln A$:$$\ln k \in [\ln(k_{min}),\ln(k_{max})]$$Again, we use the standard deviation of this distribution:$$\Delta \ln(k) = \frac{1}{\sqrt{12}}(\ln k_{max} - \ln k_{min})$$The parameters used to formulate $\Delta \ln k$ are $\Delta \ln k_\mathrm{library}$, $\Delta \ln k_\mathrm{training}$, $\Delta \ln k_\mathrm{pdep}$, $\Delta \ln k_\mathrm{family}$, $\Delta \ln k_\mathrm{non-exact}$, and $\Delta \ln k_\mathrm{rule}$. For library, training, and pdep reactions, the kinetic uncertainty is assigned according to their uncertainty type. For kinetics estimated using RMG's rate rules, the following formula is used to calculate the uncertainty:$$\Delta \ln k_\mathrm{rate\; rules} = \Delta\ln k_\mathrm{family} + \log_{10}(N+1) \left(\Delta\ln k_\mathrm{non-exact}\right) + \sum_{\mathrm{rule}\; i} w_i \Delta \ln k_{\mathrm{rule},i}$$where $N$ is the total number of rate rules used and $w_{i}$ is the weight of the rate rule in the averaging scheme for that kinetics estimate.
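To make the group-additivity branch of this formula concrete, here is a tiny sketch with invented group names, degeneracies, and uniform-distribution widths (placeholders only, not RMG's actual uncertainty constants):

```python
import math

def uniform_std(width):
    # standard deviation of a uniform distribution of total width `width`
    return width / math.sqrt(12)

dG_GAV = uniform_std(3.0)  # base group-additivity uncertainty (kcal/mol), hypothetical
# group name: (degeneracy d_j, Delta G_group_j), hypothetical values
groups = {'group_A': (2, uniform_std(0.6)),
          'group_B': (4, uniform_std(0.8))}

dG = dG_GAV + sum(d_j * dG_j for d_j, dG_j in groups.values())
print(dG)  # overall Delta G for this group-additivity estimate
```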
###Code
# NOTE: You must load the database with the same settings which were used to generate the model.
# This includes any thermo or kinetics libraries which were used.
uncertainty.loadDatabase(
thermoLibraries=['DFT_QCI_thermo', 'primaryThermoLibrary'],
kineticsFamilies='default',
reactionLibraries=[],
)
uncertainty.extractSourcesFromModel()
uncertainty.assignParameterUncertainties()
###Output
_____no_output_____
###Markdown
The first order local uncertainty, or variance $(d\ln c_i)^2$, for the concentration of species $i$ is defined as:$$(\Delta \ln c_i)^2 = \sum_{\mathrm{reactions}\; m} \left(\frac{\partial\ln c_i}{\partial\ln k_m}\right)^2 (\Delta \ln k_m)^2 + \sum_{\mathrm{species}\; n} \left(\frac{\partial\ln c_i}{\partial G_n}\right)^2(\Delta G_n)^2$$We have previously performed the sensitivity analysis. Now we perform the local uncertainty analysis and apply the formula above using the parameter uncertainties and plot the results. This first analysis considers the parameters to be independent. In other words, even when multiple species thermochemistries depend on a single thermo group or multiple reaction rate coefficients depend on a particular rate rule, each value is considered independent of each other. This typically results in a much larger uncertainty value than in reality due to cancellation error.
###Code
result = uncertainty.localAnalysis(sensitiveSpecies, correlated=False, number=5, fileformat='.png')
print process_local_results(result, sensitiveSpecies, number=5)[1]
# Show the uncertainty plots
for species in sensitiveSpecies:
print '{}: Thermo Uncertainty Contributions'.format(species)
display(Image(filename=os.path.join(uncertainty.outputDirectory, 'uncorrelated', 'thermoLocalUncertainty_{}.png'.format(species.toChemkin()))))
print '{}: Reaction Uncertainty Contributions'.format(species)
display(Image(filename=os.path.join(uncertainty.outputDirectory, 'uncorrelated', 'kineticsLocalUncertainty_{}.png'.format(species.toChemkin()))))
###Output
_____no_output_____
###Markdown
Step 4: Uncertainty assignment and propagation of correlated parameters A more accurate picture of the uncertainty in a mechanism estimated using groups and rate rules requires accounting for the correlated errors resulting from using the same groups in multiple parameters. This requires us to track the original sources: the groups and the rate rules, which constitute each parameter. These errors may cancel in the final uncertainty calculation. Note, however, that the error stemming from the estimation method itself does not cancel. For thermochemistry, the error terms described previously are $\Delta G_\mathrm{library}$, $\Delta G_\mathrm{QM}$, $\Delta G_\mathrm{GAV}$, and $\Delta G_\mathrm{group}$. Of these, $\Delta G_\mathrm{GAV}$ is an uncorrelated residual error, whereas the other terms are correlated. The set of correlated and uncorrelated parameters can be thought of instead as a set of independent parameters, $\Delta G_{ind,w}$. For kinetics, the error terms described previously are $\Delta \ln k_\mathrm{library}$, $\Delta \ln k_\mathrm{training}$, $\Delta \ln k_\mathrm{pdep}$, $\Delta \ln k_\mathrm{family}$, $\Delta \ln k_\mathrm{non-exact}$, and $\Delta \ln k_\mathrm{rule}$. Of these, $\Delta \ln k_\mathrm{family}$ and $\Delta \ln k_\mathrm{non-exact}$ are uncorrelated error terms resulting from the method of estimation. Again, we consider the set of correlated and uncorrelated parameters as the set of independent parameters, $\Delta\ln k_{ind,v}$. The first order local uncertainty, or variance $(\Delta\ln c_i)^2$, for the concentration of species $i$ becomes:$$(\Delta \ln c_i)^2 = \sum_v \left(\frac{\partial\ln c_i}{\partial\ln k_{ind,v}}\right)^2 \left(\Delta\ln k_{ind,v}\right)^2 + \sum_w \left(\frac{\partial\ln c_i}{\partial G_{ind,w}}\right)^2 \left(\Delta G_{ind,w}\right)^2$$where the differential terms can be computed as:$$\frac{\partial\ln c_i}{\partial\ln k_{ind,v}} = \sum_m \frac{\partial\ln c_i}{\partial\ln k_m} \frac{\partial\ln k_m}{\partial\ln k_{ind,v}}$$$$\frac{\partial\ln c_i}{\partial G_{ind,w}} = \sum_n \frac{\partial\ln c_i}{\partial G_n} \frac{\partial G_n}{\partial G_{ind,w}}$$
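The chain rule above amounts to a matrix product between the reaction-level sensitivities and the mapping from reactions onto the independent sources. A toy sketch with invented numbers (not taken from this model):

```python
import numpy as np

dlnc_dlnk = np.array([0.8, -0.3, 0.1])   # d ln c_i / d ln k_m for three reactions (hypothetical)
dlnk_dsrc = np.array([[1.0, 0.0],        # d ln k_m / d ln k_ind,v: how each reaction
                      [0.5, 1.0],        # depends on two independent kinetic sources
                      [0.0, 1.0]])
dsrc = np.array([1.2, 0.7])              # uncertainties of the independent sources

dlnc_dsrc = dlnc_dlnk @ dlnk_dsrc        # chain rule: sensitivities w.r.t. the sources
var_lnc = np.sum((dlnc_dsrc * dsrc) ** 2)
print(np.sqrt(var_lnc))
```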
###Code
uncertainty.assignParameterUncertainties(correlated=True)
result = uncertainty.localAnalysis(sensitiveSpecies, correlated=True, number=10, fileformat='.png')
print process_local_results(result, sensitiveSpecies, number=5)[1]
# Show the uncertainty plots
for species in sensitiveSpecies:
print '{}: Thermo Uncertainty Contributions'.format(species)
display(Image(filename=os.path.join(uncertainty.outputDirectory, 'correlated', 'thermoLocalUncertainty_{}.png'.format(species.toChemkin()))))
print '{}: Reaction Uncertainty Contributions'.format(species)
display(Image(filename=os.path.join(uncertainty.outputDirectory, 'correlated', 'kineticsLocalUncertainty_{}.png'.format(species.toChemkin()))))
###Output
_____no_output_____
|
deprecated/4. Data Visualization - Categorical Variable.ipynb
|
###Markdown
Load Dataset
###Code
import numpy as np
import pandas as pd
import plotly.graph_objs as go
from plotly.offline import iplot
train = pd.read_csv('./data/train_clean.csv')
test = pd.read_csv('./data/test_clean.csv')
print('Train:')
print(train.info(verbose=False), '\n')
print('Test:')
print(test.info(verbose=False))
###Output
Train:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 722142 entries, 0 to 722141
Columns: 68 entries, loan_amnt to credit_length
dtypes: float64(16), int64(41), object(11)
memory usage: 374.6+ MB
None
Test:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 98727 entries, 0 to 98726
Columns: 68 entries, loan_amnt to credit_length
dtypes: float64(16), int64(41), object(11)
memory usage: 51.2+ MB
None
###Markdown
Data Basic Information.
###Code
# imbalanced dataset
target1 = train['target'].sum()
target0 = (1 - train['target']).sum()
print('Target 0:\t', target0, '\t', np.round(target0 / len(train), 4))
print('Target 1:\t', target1, '\t', np.round(target1 / len(train), 4))
print('0/1 Ratio:\t', np.round(target0 / target1, 4))
# visualize the target count distribution
data = [go.Bar(x=['status 0'], y=[target0], name='Status 0'),
go.Bar(x=['status 1'], y=[target1], name='Status 1')]
margin=go.layout.Margin(l=50, r=50, b=30, t=40, pad=4)
legend = dict(orientation='h', xanchor='auto', y=-0.2)
layout = go.Layout(title='Loan Status Count Plot', xaxis=dict(title='Loan Status'),
yaxis=dict(title='Count'), autosize=False, width=700, height=400,
margin=margin, legend=legend)
fig = go.Figure(data=data, layout=layout)
iplot(fig)
###Output
_____no_output_____
###Markdown
Visualization
###Code
# define categorical and numerical features
cat_features = ['term', 'home_ownership', 'verification_status', 'purpose',
'title', 'addr_state', 'initial_list_status', 'application_type',
'grade', 'sub_grade']
num_features = ['loan_amnt', 'int_rate', 'installment_ratio', 'emp_length', 'annual_inc',
'dti', 'delinq_2yrs', 'inq_last_6mths', 'open_acc', 'pub_rec',
'revol_bal', 'revol_util', 'total_acc', 'collections_12_mths_ex_med',
'acc_now_delinq', 'tot_coll_amt', 'tot_cur_bal', 'total_rev_hi_lim',
'acc_open_past_24mths', 'avg_cur_bal', 'bc_open_to_buy', 'bc_util',
'chargeoff_within_12_mths', 'delinq_amnt', 'mo_sin_old_il_acct',
'mo_sin_old_rev_tl_op', 'mo_sin_rcnt_rev_tl_op', 'mo_sin_rcnt_tl',
'mort_acc', 'mths_since_recent_bc', 'mths_since_recent_inq',
'num_accts_ever_120_pd', 'num_actv_bc_tl', 'num_actv_rev_tl',
'num_bc_sats', 'num_bc_tl', 'num_il_tl', 'num_op_rev_tl',
'num_rev_accts', 'num_rev_tl_bal_gt_0', 'num_sats', 'num_tl_120dpd_2m',
'num_tl_30dpd', 'num_tl_90g_dpd_24m', 'num_tl_op_past_12m',
'pct_tl_nvr_dlq', 'percent_bc_gt_75', 'pub_rec_bankruptcies',
'tax_liens', 'tot_hi_cred_lim', 'total_bal_ex_mort', 'total_bc_limit',
'total_il_high_credit_limit', 'credit_length']
features = cat_features + num_features
# define numerical and categorical features
print('Categorical feature:\t', len(cat_features))
print('Numerical feature:\t', len(num_features))
print('Total feature:\t\t', len(features))
###Output
Categorical feature: 10
Numerical feature: 54
Total feature: 64
###Markdown
Categorical Features
###Code
# term
feature = 'term'
iplot(categorical_plot(data=train, feature=feature, width=1000, height=450))
# home_ownership
feature = 'home_ownership'
iplot(categorical_plot(data=train, feature=feature, width=1000, height=450))
# verification_status
feature = 'verification_status'
iplot(categorical_plot(data=train, feature=feature, width=1000, height=450))
# purpose
feature = 'purpose'
iplot(categorical_plot(data=train, feature=feature, width=1000, height=600))
# title
feature = 'title'
iplot(categorical_plot(data=train, feature=feature, width=1000, height=600))
# addr_state
state_count = train.groupby('addr_state')['target'].count().reset_index()
state_count = state_count.sort_values(by='target', ascending=False)
# visualization
scl = [[0.0, 'rgb(242,240,247)'], [0.2, 'rgb(218,218,235)'],
[0.4, 'rgb(188,189,220)'], [0.6, 'rgb(158,154,200)'],
[0.8, 'rgb(117,107,177)'], [1.0, 'rgb(84,39,143)']]
data = [dict(type='choropleth', colorscale=scl, autocolorscale=False,
locations=state_count['addr_state'], z=state_count['target'],
locationmode='USA-states', colorbar=dict(title='Counts'),
marker=dict(line=dict(color = 'rgb(255,255,255)', width=2)))]
geo = dict(scope='usa', projection=dict(type='albers usa'),
showlakes=True, lakecolor='rgb(255, 255, 255)')
layout = dict(title='Loan Count Distribution by State', geo=geo,
margin=go.Margin(l=50, r=50, b=50, t=40, pad=4),
width=1000, height=600)
fig = dict(data=data, layout=layout)
iplot(fig)
# addr_state
state_rate = train.groupby('addr_state')['target'].mean().reset_index()
state_rate = state_rate.sort_values(by='target', ascending=False)
# visualization
scl = [[0.0, 'rgb(242,240,247)'], [0.2, 'rgb(218,218,235)'],
[0.4, 'rgb(188,189,220)'], [0.6, 'rgb(158,154,200)'],
[0.8, 'rgb(117,107,177)'], [1.0, 'rgb(84,39,143)']]
data = [dict(type='choropleth', colorscale=scl, autocolorscale=False,
locations=state_rate['addr_state'], z=state_rate['target'],
locationmode='USA-states', colorbar=dict(title='Default Rate'),
marker=dict(line=dict(color = 'rgb(255,255,255)', width=2)))]
geo = dict(scope='usa', projection=dict(type='albers usa'),
showlakes=True, lakecolor='rgb(255, 255, 255)')
layout = dict(title='Loan Default Rate Distribution by State', geo=geo,
margin=go.Margin(l=50, r=50, b=50, t=40, pad=4),
width=1000, height=600)
fig = dict(data=data, layout=layout)
iplot(fig)
# initial_list_status
feature = 'initial_list_status'
iplot(categorical_plot(data=train, feature=feature, width=1000, height=450))
# application_type
feature = 'application_type'
iplot(categorical_plot(data=train, feature=feature, width=1000, height=450))
# grade
feature = 'grade'
iplot(categorical_plot(data=train, feature=feature, width=1000, height=450))
# sub_grade
feature = 'sub_grade'
iplot(categorical_plot(data=train, feature=feature, width=1000, height=500))
###Output
_____no_output_____
###Markdown
Discrete Features
###Code
# emp_length
feature = 'emp_length'
fig = numerical_plot(data=train, feature=feature, width=1000, height=450, bins=50)
iplot(fig)
# delinq_2yrs
feature = 'delinq_2yrs'
fig = numerical_plot(data=train, feature=feature, width=1000, height=450, bins=50)
iplot(fig)
# inq_last_6mths
feature = 'inq_last_6mths'
fig = numerical_plot(data=train, feature=feature, width=1000, height=450, bins=50)
iplot(fig)
# pub_rec
feature = 'pub_rec'
fig = numerical_plot(data=train, feature=feature, width=1000, height=450, bins=50)
iplot(fig)
# collections_12_mths_ex_med
feature = 'collections_12_mths_ex_med'
fig = numerical_plot(data=train, feature=feature, width=1000, height=450, bins=50)
iplot(fig)
# acc_now_delinq
feature = 'acc_now_delinq'
fig = numerical_plot(data=train, feature=feature, width=1000, height=450, bins=50)
iplot(fig)
# acc_open_past_24mths
feature = 'acc_open_past_24mths'
fig = numerical_plot(data=train, feature=feature, width=1000, height=450, bins=50)
iplot(fig)
# chargeoff_within_12_mths
feature = 'chargeoff_within_12_mths'
fig = numerical_plot(data=train, feature=feature, width=1000, height=450, bins=50)
iplot(fig)
# mort_acc
feature = 'mort_acc'
fig = numerical_plot(data=train, feature=feature, width=1000, height=450, bins=50)
iplot(fig)
# mths_since_recent_inq
feature = 'mths_since_recent_inq'
fig = numerical_plot(data=train, feature=feature, width=1000, height=450, bins=50)
iplot(fig)
# num_accts_ever_120_pd
feature = 'num_accts_ever_120_pd'
fig = numerical_plot(data=train, feature=feature, width=1000, height=450, bins=50)
iplot(fig)
# num_actv_bc_tl
feature = 'num_actv_bc_tl'
fig = numerical_plot(data=train, feature=feature, width=1000, height=450, bins=50)
iplot(fig)
# num_actv_rev_tl
feature = 'num_actv_rev_tl'
fig = numerical_plot(data=train, feature=feature, width=1000, height=450, bins=50)
iplot(fig)
# num_bc_sats
feature = 'num_bc_sats'
fig = numerical_plot(data=train, feature=feature, width=1000, height=450, bins=50)
iplot(fig)
# num_bc_tl
feature = 'num_bc_tl'
fig = numerical_plot(data=train, feature=feature, width=1000, height=450, bins=50)
iplot(fig)
# num_il_tl
feature = 'num_il_tl'
fig = numerical_plot(data=train, feature=feature, width=1000, height=450, bins=50)
iplot(fig)
# num_op_rev_tl
feature = 'num_op_rev_tl'
fig = numerical_plot(data=train, feature=feature, width=1000, height=450, bins=50)
iplot(fig)
# num_rev_accts
feature = 'num_rev_accts'
fig = numerical_plot(data=train, feature=feature, width=1000, height=450, bins=50)
iplot(fig)
# num_rev_tl_bal_gt_0
feature = 'num_rev_tl_bal_gt_0'
fig = numerical_plot(data=train, feature=feature, width=1000, height=450, bins=50)
iplot(fig)
# num_sats
feature = 'num_sats'
fig = numerical_plot(data=train, feature=feature, width=1000, height=450, bins=50)
iplot(fig)
# num_tl_120dpd_2m
feature = 'num_tl_120dpd_2m'
fig = numerical_plot(data=train, feature=feature, width=1000, height=450, bins=50)
iplot(fig)
# num_tl_30dpd
feature = 'num_tl_30dpd'
fig = numerical_plot(data=train, feature=feature, width=1000, height=450, bins=50)
iplot(fig)
# num_tl_90g_dpd_24m
feature = 'num_tl_90g_dpd_24m'
fig = numerical_plot(data=train, feature=feature, width=1000, height=450, bins=50)
iplot(fig)
# num_tl_op_past_12m
feature = 'num_tl_op_past_12m'
fig = numerical_plot(data=train, feature=feature, width=1000, height=450, bins=50)
iplot(fig)
# pub_rec_bankruptcies
feature = 'pub_rec_bankruptcies'
fig = numerical_plot(data=train, feature=feature, width=1000, height=450, bins=50)
iplot(fig)
# tax_liens
feature = 'tax_liens'
fig = numerical_plot(data=train, feature=feature, width=1000, height=450, bins=50)
iplot(fig)
# credit_length
feature = 'credit_length'
fig = numerical_plot(data=train, feature=feature, width=1000, height=450, bins=50)
iplot(fig)
###Output
_____no_output_____
|
quanteconomics/ddp_intro_py.ipynb
|
###Markdown
DiscreteDP***Getting Started with a Simple Example*** **Daisuke Oyama***Faculty of Economics, University of Tokyo* This notebook demonstrates via a simple example how to use the `DiscreteDP` module.
###Code
import numpy as np
from scipy import sparse
from quantecon.markov import DiscreteDP
###Output
_____no_output_____
###Markdown
A two-state example Let us consider the following two-state dynamic program, taken from Puterman (2005), Section 3.1, pp.33-35; see also Example 6.2.1, pp.155-156.* There are two possible states $0$ and $1$.* At state $0$, you may choose either "stay", say action $0$, or "move", action $1$.* At state $1$, there is no way to move, so that you can only stay, i.e., $0$ is the only available action. (You may alternatively distinguish between the action "stay" at state $0$ and that at state $1$, and call the latter action $2$; but here we choose to refer to both actions as action $0$.)* At state $0$, if you choose action $0$ (stay), then you receive a reward $5$, and in the next period the state will remain at $0$ with probability $1/2$, but it moves to $1$ with probability $1/2$.* If you choose action $1$ (move), then you receive a reward $10$, and the state in the next period will be $1$ with probability $1$.* At state $1$, where the only action you can take is $0$ (stay), you receive a reward $-1$, and the state will remain at $1$ with probability $1$.* You want to maximize the sum of discounted expected reward flows with discount factor $\beta \in [0, 1)$. The optimization problem consists of:* the state space: $S = \{0, 1\}$;* the action space: $A = \{0, 1\}$;* the set of feasible state-action pairs $\mathit{SA} = \{(0, 0), (0, 1), (1, 0)\} \subset S \times A$;* the reward function $r\colon \mathit{SA} \to \mathbb{R}$, where $$ r(0, 0) = 5,\ r(0, 1) = 10,\ r(1, 0) = -1; $$* the transition probability function $q \colon \mathit{SA} \to \Delta(S)$, where $$ \begin{aligned} &(q(0 | 0, 0), q(1 | 0, 0)) = (1/2, 1/2), \\ &(q(0 | 0, 1), q(1 | 0, 1)) = (0, 1), \\ &(q(0 | 1, 0), q(1 | 1, 0)) = (0, 1); \end{aligned} $$ * the discount factor $\beta \in [0, 1)$. The Bellman equation for this problem is:$$\begin{aligned}v(0) &= \max \left\{5 + \beta \left(\frac{1}{2} v(0) + \frac{1}{2} v(1)\right), 10 + \beta v(1)\right\}, \\v(1) &= (-1) + \beta v(1).\end{aligned}$$ This problem is simple enough to solve by hand: the optimal value function $v^*$ is given by$$\begin{aligned}&v(0) =\begin{cases}\dfrac{5 - 5.5 \beta}{(1 - 0.5 \beta) (1 - \beta)} & \text{if $\beta > \frac{10}{11}$} \\\dfrac{10 - 11 \beta}{1 - \beta} & \text{otherwise},\end{cases}\\&v(1) = -\frac{1}{1 - \beta},\end{aligned}$$and the optimal policy function $\sigma^*$ is given by$$\begin{aligned}&\sigma^*(0) =\begin{cases}0 & \text{if $\beta > \frac{10}{11}$} \\1 & \text{otherwise},\end{cases}\\&\sigma^*(1) = 0.\end{aligned}$$
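As a quick check of where the threshold $\beta = 10/11$ comes from: since state $1$ is absorbing, $v(1) = -1/(1-\beta)$. If "stay" is used at state $0$, its value solves $v(0) = 5 + \beta\left(\tfrac{1}{2}v(0) + \tfrac{1}{2}v(1)\right)$, which gives $\dfrac{5 - 5.5\beta}{(1 - 0.5\beta)(1-\beta)}$, while "move" gives $10 + \beta v(1) = \dfrac{10 - 11\beta}{1-\beta}$. Comparing the two and simplifying yields $5.5\beta^2 - 10.5\beta + 5 \le 0$, which holds exactly for $\beta \in [10/11, 1)$, so "stay" is optimal at state $0$ precisely when $\beta \ge 10/11$.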
###Code
def v_star(beta):
v = np.empty(2)
v[1] = -1 / (1 - beta)
if beta > 10/11:
v[0] = (5 - 5.5*beta) / ((1 - 0.5*beta) * (1 - beta))
else:
v[0] = (10 - 11*beta) / (1 - beta)
return v
###Output
_____no_output_____
###Markdown
We want to solve this problem numerically by using the `DiscreteDP` class. We will set $\beta = 0.95$ ($> 10/11$), for which the analytical solution is: $\sigma^* = (0, 0)$ and
###Code
v_star(beta=0.95)
###Output
_____no_output_____
###Markdown
Formulating the model There are two ways to represent the data for instantiating a `DiscreteDP` object. Let $n$, $m$, and $L$ denote the numbers of states, actions, and feasible state-action pairs, respectively; in the above example, $n = 2$, $m = 2$, and $L = 3$.1. `DiscreteDP(R, Q, beta)` with parameters: * $n \times m$ reward array `R`, * $n \times m \times n$ transition probability array `Q`, and * discount factor `beta`, where `R[s, a]` is the reward for action `a` when the state is `s` and `Q[s, a, s']` is the probability that the state in the next period is `s'` when the current state is `s` and the action chosen is `a`.2. `DiscreteDP(R, Q, beta, s_indices, a_indices)` with parameters: * length $L$ reward vector `R`, * $L \times n$ transition probability array `Q`, * discount factor `beta`, * length $L$ array `s_indices`, and * length $L$ array `a_indices`, where the pairs `(s_indices[0], a_indices[0])`, ..., `(s_indices[L-1], a_indices[L-1])` enumerate feasible state-action pairs, and `R[i]` is the reward for action `a_indices[i]` when the state is `s_indices[i]` and `Q[i, s']` is the probability that the state in the next period is `s'` when the current state is `s_indices[i]` and the action chosen is `a_indices[i]`. Creating a `DiscreteDP` instance Let us illustrate the two formulations by the simple example at the outset. Product formulation This formulation is straightforward when the number of feasible actions is constant across states so that the set of feasible state-action pairs is naturally represented by the product $S \times A$, while any problem can actually be represented in this way by defining the reward `R[s, a]` to be $-\infty$ when action `a` is infeasible under state `s`. To apply this approach to the current example, we consider the effectively equivalent problem in which at both states $0$ and $1$, both actions $0$ (stay) and $1$ (move) are available, but action $1$ yields a reward $-\infty$ at state $1$. The reward array `R` is an $n \times m$ 2-dimensional array:
###Code
R = [[5, 10],
[-1, -float('inf')]]
###Output
_____no_output_____
###Markdown
The transition probability array `Q` is an $n \times m \times n$ 3-dimenstional array:
###Code
Q = [[(0.5, 0.5), (0, 1)],
[(0, 1), (0.5, 0.5)]] # Probabilities in Q[1, 1] are arbitrary
###Output
_____no_output_____
###Markdown
Note that the transition probabilities for action $(s, a) = (1, 1)$ are arbitrary,since $a = 1$ is infeasible at $s = 1$ in the original problem. Let us set the discount factor $\beta$ to be $0.95$:
###Code
beta = 0.95
###Output
_____no_output_____
###Markdown
We are ready to create a `DiscreteDP` instance:
###Code
ddp = DiscreteDP(R, Q, beta)
###Output
_____no_output_____
###Markdown
State-action pairs formulation When the number of feasible actions varies across states,it can be inefficient in terms of memory usageto extend the domain by treating infeasible actionsto be "feasible but yielding reward $-\infty$".This formulation takes the set of feasible state-action pairs as is,defining `R` to be a 1-dimensional array of length `L`and `Q` to be a 2-dimensional array of shape `(L, n)`,where `L` is the number of feasible state-action pairs. First, we have to list all the feasible state-action pairs.For our example, they are: $(s, a) = (0, 0), (0, 1), (1, 0)$. We have arrays `s_indices` and ` a_indices` of length $3$contain the indices of states and actions, respectively.
###Code
s_indices = [0, 0, 1] # State indices
a_indices = [0, 1, 0] # Action indices
###Output
_____no_output_____
###Markdown
The reward vector `R` is a length $L$ 1-dimensional array:
###Code
# Rewards for (s, a) = (0, 0), (0, 1), (1, 0), respectively
R = [5, 10, -1]
###Output
_____no_output_____
###Markdown
The transition probability array `Q` is an $L \times n$ 2-dimensional array:
###Code
# Probability vectors for (s, a) = (0, 0), (0, 1), (1, 0), respectively
Q = [(0.5, 0.5), (0, 1), (0, 1)]
###Output
_____no_output_____
###Markdown
For the discount factor, set $\beta = 0.95$ as before:
###Code
beta = 0.95
###Output
_____no_output_____
###Markdown
Now create a `DiscreteDP` instance:
###Code
ddp_sa = DiscreteDP(R, Q, beta, s_indices, a_indices)
###Output
_____no_output_____
###Markdown
Notes Importantly, this formulation allows us to represent the transition probability array `Q` as a [`scipy.sparse`](http://docs.scipy.org/doc/scipy/reference/sparse.html) matrix (of any format), which is useful for large and sparse problems. For example, let us convert the above ndarray `Q` to the Coordinate (coo) format:
###Code
import scipy.sparse
Q = scipy.sparse.coo_matrix(Q)
###Output
_____no_output_____
###Markdown
Pass it to `DiscreteDP` with the other parameters:
###Code
ddp_sparse = DiscreteDP(R, Q, beta, s_indices, a_indices)
###Output
_____no_output_____
###Markdown
Internally, the matrix `Q` is converted to the Compressed Sparse Row (csr) format:
###Code
ddp_sparse.Q
ddp_sparse.Q.toarray()
###Output
_____no_output_____
###Markdown
Solving the model Now let us solve our model. Currently, `DiscreteDP` supports the following solution algorithms:* policy iteration;* value iteration;* modified policy iteration.(The methods are the same across the formulations.) Policy iteration We solve the model first by policy iteration, which gives the exact solution:
###Code
v_init = [0, 0] # Initial value function, optional(default=max_a r(s, a))
res = ddp.solve(method='policy_iteration', v_init=v_init)
###Output
_____no_output_____
###Markdown
`res` contains the information about the solution result:
###Code
res
###Output
_____no_output_____
###Markdown
The optimal policy function:
###Code
res.sigma
###Output
_____no_output_____
###Markdown
The optimal value function:
###Code
res.v
###Output
_____no_output_____
###Markdown
This coincides with the analytical solution:
###Code
v_star(beta)
np.allclose(res.v, v_star(beta))
###Output
_____no_output_____
###Markdown
The number of iterations:
###Code
res.num_iter
###Output
_____no_output_____
###Markdown
Verify that the value of the policy `[0, 0]` is actually equal to the optimal value `v`:
###Code
ddp.evaluate_policy(res.sigma)
ddp.evaluate_policy(res.sigma) == res.v
###Output
_____no_output_____
###Markdown
`res.mc` is the controlled Markov chain given by the optimal policy `[0, 0]`:
###Code
res.mc
###Output
_____no_output_____
###Markdown
Value iteration Next, solve the model by value iteration, which returns an $\varepsilon$-optimal solution for a specified value of $\varepsilon$:
###Code
epsilon = 1e-2    # Convergence tolerance, optional (default=1e-3)
v_init = [0, 0] # Initial value function, optional(default=max_a r(s, a))
res_vi = ddp.solve(method='value_iteration', v_init=v_init,
epsilon=epsilon)
res_vi
###Output
_____no_output_____
###Markdown
The computed policy function `res_vi.sigma` is an $\varepsilon$-optimal policy, and the value function `res_vi.v` is an $\varepsilon/2$-approximation of the true optimal value function.
###Code
np.abs(v_star(beta) - res_vi.v).max()
###Output
_____no_output_____
###Markdown
Modified policy iteration Finally, solve the model by modified policy iteration:
###Code
epsilon = 1e-2    # Convergence tolerance, optional (default=1e-3)
v_init = [0, 0] # Initial value function, optional(default=max_a r(s, a))
res_mpi = ddp.solve(method='modified_policy_iteration', v_init=v_init,
epsilon=epsilon)
res_mpi
###Output
_____no_output_____
###Markdown
Modified policy iteration also returns an $\varepsilon$-optimal policy function and an $\varepsilon/2$-approximate value function:
###Code
np.abs(v_star(beta) - res_mpi.v).max()
###Output
_____no_output_____
|
code/chapter03_DL-basics/3.9_mlp-scratch.ipynb
|
###Markdown
3.9 Implementing a Multilayer Perceptron from Scratch
###Code
import torch
import numpy as np
import sys
sys.path.append("..") # 为了导入上层目录的d2lzh_pytorch
import d2lzh_pytorch as d2l
print(torch.__version__)
###Output
0.4.1
###Markdown
3.9.1 Obtaining and Reading the Data
###Code
batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
###Output
_____no_output_____
###Markdown
3.9.2 Defining the Model Parameters
###Code
num_inputs, num_outputs, num_hiddens = 784, 10, 256
W1 = torch.tensor(np.random.normal(0, 0.01, (num_inputs, num_hiddens)), dtype=torch.float)
b1 = torch.zeros(num_hiddens, dtype=torch.float)
W2 = torch.tensor(np.random.normal(0, 0.01, (num_hiddens, num_outputs)), dtype=torch.float)
b2 = torch.zeros(num_outputs, dtype=torch.float)
params = [W1, b1, W2, b2]
for param in params:
param.requires_grad_(requires_grad=True)
###Output
_____no_output_____
###Markdown
3.9.3 Defining the Activation Function
###Code
def relu(X):
return torch.max(input=X, other=torch.tensor(0.0))
###Output
_____no_output_____
###Markdown
3.9.4 Defining the Model
###Code
def net(X):
X = X.view((-1, num_inputs))
H = relu(torch.matmul(X, W1) + b1)
return torch.matmul(H, W2) + b2
###Output
_____no_output_____
###Markdown
3.9.5 Defining the Loss Function
###Code
loss = torch.nn.CrossEntropyLoss()
###Output
_____no_output_____
###Markdown
3.9.6 Training the Model
###Code
num_epochs, lr = 5, 100.0
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size, params, lr)
###Output
epoch 1, loss 0.0030, train acc 0.714, test acc 0.753
epoch 2, loss 0.0019, train acc 0.821, test acc 0.777
epoch 3, loss 0.0017, train acc 0.842, test acc 0.834
epoch 4, loss 0.0015, train acc 0.857, test acc 0.839
epoch 5, loss 0.0014, train acc 0.865, test acc 0.845
###Markdown
3.9 Implementing a Multilayer Perceptron from Scratch
###Code
import torch
import numpy as np
import sys
sys.path.append("..") # 为了导入上层目录的d2lzh_pytorch
import d2lzh_pytorch as d2l
print(torch.__version__)
###Output
1.4.0
###Markdown
3.9.1 Obtaining and Reading the Data
###Code
batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
###Output
_____no_output_____
###Markdown
3.9.2 Defining the Model Parameters
###Code
num_inputs, num_outputs, num_hiddens = 784, 10, 256
W1 = torch.tensor(np.random.normal(0, 0.01, (num_inputs, num_hiddens)), dtype=torch.float)
b1 = torch.zeros(num_hiddens, dtype=torch.float)
W2 = torch.tensor(np.random.normal(0, 0.01, (num_hiddens, num_outputs)), dtype=torch.float)
b2 = torch.zeros(num_outputs, dtype=torch.float)
params = [W1, b1, W2, b2]
for param in params:
param.requires_grad_(requires_grad=True)
###Output
_____no_output_____
###Markdown
3.9.3 Defining the Activation Function
###Code
def relu(X):
return torch.max(input=X, other=torch.tensor(0.0))
###Output
_____no_output_____
###Markdown
3.9.4 Defining the Model
###Code
def net(X):
X = X.view((-1, num_inputs))
H = relu(torch.matmul(X, W1) + b1)
return torch.matmul(H, W2) + b2
###Output
_____no_output_____
###Markdown
3.9.5 Defining the Loss Function
###Code
loss = torch.nn.CrossEntropyLoss()
###Output
_____no_output_____
###Markdown
3.9.6 Training the Model
###Code
num_epochs, lr = 5, 100.0
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size, params, lr)
###Output
epoch 1, loss 0.0031, train acc 0.711, test acc 0.766
epoch 2, loss 0.0019, train acc 0.823, test acc 0.827
epoch 3, loss 0.0017, train acc 0.842, test acc 0.833
epoch 4, loss 0.0015, train acc 0.856, test acc 0.817
epoch 5, loss 0.0015, train acc 0.863, test acc 0.856
###Markdown
3.9 Implementing a Multilayer Perceptron from Scratch
###Code
import torch
import numpy as np
import sys
sys.path.append("D:/Deeplearning/Dive-into-DL-PyTorch-master/code") # 为了导入上层目录的d2lzh_pytorch
import d2lzh_pytorch as d2l
print(torch.__version__)
###Output
1.3.0+cu92
###Markdown
3.9.1 Obtaining and Reading the Data
###Code
batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
###Output
_____no_output_____
###Markdown
3.9.2 Defining the Model Parameters
###Code
num_inputs, num_outputs, num_hiddens = 784, 10, 256
W1 = torch.tensor(np.random.normal(0, 0.01, (num_inputs, num_hiddens)), dtype=torch.float)
b1 = torch.zeros(num_hiddens, dtype=torch.float)
W2 = torch.tensor(np.random.normal(0, 0.01, (num_hiddens, num_outputs)), dtype=torch.float)
b2 = torch.zeros(num_outputs, dtype=torch.float)
params = [W1, b1, W2, b2]
for param in params:
param.requires_grad_(requires_grad=True)
###Output
_____no_output_____
###Markdown
3.9.3 Defining the Activation Function
###Code
def relu(X):
return torch.max(input=X, other=torch.tensor(0.0))
###Output
_____no_output_____
###Markdown
3.9.4 Defining the Model
###Code
def net(X):
X = X.view((-1, num_inputs))
H = relu(torch.matmul(X, W1) + b1)
return torch.matmul(H, W2) + b2
###Output
_____no_output_____
###Markdown
3.9.5 Defining the Loss Function
###Code
loss = torch.nn.CrossEntropyLoss()
###Output
_____no_output_____
###Markdown
3.9.6 Training the Model
###Code
num_epochs, lr = 5, 100.0
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size, params, lr)
###Output
epoch 1, loss 0.0031, train acc 0.706, test acc 0.762
epoch 2, loss 0.0019, train acc 0.823, test acc 0.830
epoch 3, loss 0.0017, train acc 0.844, test acc 0.828
epoch 4, loss 0.0015, train acc 0.856, test acc 0.807
epoch 5, loss 0.0015, train acc 0.862, test acc 0.852
###Markdown
3.9 Implementing a Multilayer Perceptron from Scratch We learned the principles of the multilayer perceptron in the previous section. Below, we implement a multilayer perceptron by hand. First, import the packages or modules required for the implementation.
###Code
import tensorflow as tf
import numpy as np
import sys
print(tf.__version__)
###Output
2.0.0
###Markdown
3.9.1 Obtaining and Reading the Data Here we continue to use the Fashion-MNIST dataset. We will use a multilayer perceptron to classify the images.
###Code
from tensorflow.keras.datasets import fashion_mnist
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
batch_size = 256
x_train = tf.cast(x_train, tf.float32)
x_test = tf.cast(x_test, tf.float32)
x_train = x_train/255.0
x_test = x_test/255.0
train_iter = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(batch_size)
test_iter = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(batch_size)
###Output
_____no_output_____
###Markdown
3.9.2 Defining the Model Parameters As introduced in Section 3.6 (implementation of softmax regression from scratch), images in the Fashion-MNIST dataset have shape 28×28 and there are 10 classes. In this section we again represent each image by a vector of length 28×28=784. Therefore, the number of inputs is 784 and the number of outputs is 10. In the experiment, we set the hyperparameter for the number of hidden units to 256.
###Code
num_inputs, num_outputs, num_hiddens = 784, 10, 256
w1 = tf.Variable(tf.random.truncated_normal([num_inputs, num_hiddens], stddev=0.1))
b1 = tf.Variable(tf.random.truncated_normal([num_hiddens], stddev=0.1))
w2 = tf.Variable(tf.random.truncated_normal([num_hiddens, num_outputs], stddev=0.1))
b2=tf.Variable(tf.random.truncated_normal([num_outputs], stddev=0.1))
###Output
_____no_output_____
###Markdown
3.9.3 Defining the Activation Function Here we use the basic max function to implement ReLU, rather than calling a relu function directly.
###Code
def relu(x):
return tf.math.maximum(x,0)
def net(x,w1,b1,w2,b2):
x = tf.reshape(x,shape=[-1,num_inputs])
h = relu(tf.matmul(x,w1) + b1 )
y = tf.math.softmax( tf.matmul(h,w2) + b2 )
return y
###Output
_____no_output_____
###Markdown
3.9.5. Defining the Loss Function For better numerical stability, we directly use the function provided by TensorFlow that combines the softmax operation with the cross-entropy loss calculation.
###Code
def loss(y_hat,y_true):
return tf.losses.sparse_categorical_crossentropy(y_true,y_hat)
###Output
_____no_output_____
###Markdown
3.9.6. Training the Model
###Code
def acc(y_hat,y):
return np.mean((tf.argmax(y_hat,axis=1) == y))
num_epochs, lr = 5, 0.5
for epoch in range(num_epochs):
loss_all = 0
for x,y in train_iter:
with tf.GradientTape() as tape:
y_hat = net(x,w1,b1,w2,b2)
l = tf.reduce_mean(loss(y_hat,y))
loss_all += l.numpy()
grads = tape.gradient(l, [w1, b1, w2, b2])
w1.assign_sub(grads[0])
b1.assign_sub(grads[1])
w2.assign_sub(grads[2])
b2.assign_sub(grads[3])
print(epoch, 'loss:', l.numpy())
total_correct, total_number = 0, 0
for x,y in test_iter:
with tf.GradientTape() as tape:
y_hat = net(x,w1,b1,w2,b2)
y=tf.cast(y,'int64')
correct=acc(y_hat,y)
print(epoch,"test_acc:", correct)
###Output
0 loss: 0.7799275
0 test_acc: 0.875
1 loss: 0.72887945
1 test_acc: 0.9375
2 loss: 0.72454
2 test_acc: 0.8125
3 loss: 0.5607478
3 test_acc: 0.875
4 loss: 0.5008962
4 test_acc: 0.9375
###Markdown
3.9 Implementing a Multilayer Perceptron from Scratch
###Code
import torch
import numpy as np
import sys
sys.path.append("..") # 为了导入上层目录的d2lzh_pytorch
import d2lzh_pytorch as d2l
print(torch.__version__)
###Output
1.1.0
###Markdown
3.9.1 Obtaining and Reading the Data
###Code
batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
###Output
_____no_output_____
###Markdown
3.9.2 Defining the Model Parameters
###Code
num_inputs, num_outputs, num_hiddens = 784, 10, 256
W1 = torch.tensor(np.random.normal(0, 0.01, (num_inputs, num_hiddens)), dtype=torch.float)
b1 = torch.zeros(num_hiddens, dtype=torch.float)
W2 = torch.tensor(np.random.normal(0, 0.01, (num_hiddens, num_outputs)), dtype=torch.float)
b2 = torch.zeros(num_outputs, dtype=torch.float)
params = [W1, b1, W2, b2]
for param in params:
param.requires_grad_(requires_grad=True)
###Output
_____no_output_____
###Markdown
3.9.3 Defining the Activation Function
###Code
def relu(X):
return torch.max(input=X, other=torch.tensor(0.0))
###Output
_____no_output_____
###Markdown
3.9.4 Defining the Model
###Code
def net(X):
X = X.view((-1, num_inputs))
H = relu(torch.matmul(X, W1) + b1)
return torch.matmul(H, W2) + b2
###Output
_____no_output_____
###Markdown
3.9.5 Defining the Loss Function
###Code
loss = torch.nn.CrossEntropyLoss()
###Output
_____no_output_____
###Markdown
3.9.6 Training the Model
###Code
num_epochs, lr = 5, 100.0
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size, params, lr)
###Output
epoch 1, loss 0.0014, train acc 0.870, test acc 0.855
epoch 2, loss 0.0013, train acc 0.875, test acc 0.856
epoch 3, loss 0.0013, train acc 0.878, test acc 0.855
epoch 4, loss 0.0012, train acc 0.882, test acc 0.866
epoch 5, loss 0.0012, train acc 0.887, test acc 0.851
###Markdown
3.9 Implementing a Multilayer Perceptron from Scratch We learned the principles of the multilayer perceptron in the previous section. Below, we implement a multilayer perceptron by hand. First, import the packages or modules required for the implementation.
###Code
import tensorflow as tf
import numpy as np
import sys
sys.path.append("..") # 为了导入上层目录的d2lzh_tensorflow
import d2lzh_tensorflow2 as d2l
print(tf.__version__)
###Output
2.1.0
###Markdown
3.9.1 Obtaining and Reading the Data Here we continue to use the Fashion-MNIST dataset. We will use a multilayer perceptron to classify the images.
###Code
from tensorflow.keras.datasets import fashion_mnist
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
batch_size = 256
x_train = tf.cast(x_train, tf.float32)
x_test = tf.cast(x_test, tf.float32)
x_train = x_train/255.0
x_test = x_test/255.0
train_iter = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(batch_size)
test_iter = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(batch_size)
###Output
_____no_output_____
###Markdown
3.9.2 Defining the Model Parameters As introduced in Section 3.6 (implementation of softmax regression from scratch), images in the Fashion-MNIST dataset have shape 28×28 and there are 10 classes. In this section we again represent each image by a vector of length 28×28=784. Therefore, the number of inputs is 784 and the number of outputs is 10. In the experiment, we set the hyperparameter for the number of hidden units to 256.
###Code
num_inputs, num_outputs, num_hiddens = 784, 10, 256
W1 = tf.Variable(tf.random.normal(shape=(num_inputs, num_hiddens),mean=0, stddev=0.01, dtype=tf.float32))
b1 = tf.Variable(tf.zeros(num_hiddens, dtype=tf.float32))
W2 = tf.Variable(tf.random.normal(shape=(num_hiddens, num_outputs),mean=0, stddev=0.01, dtype=tf.float32))
b2 = tf.Variable(tf.random.normal([num_outputs], stddev=0.1))
###Output
_____no_output_____
###Markdown
3.9.3 Defining the Activation Function Here we use the basic max function to implement ReLU, rather than calling a relu function directly.
###Code
def relu(x):
return tf.math.maximum(x,0)
def net(X):
X = tf.reshape(X, shape=[-1, num_inputs])
h = relu(tf.matmul(X, W1) + b1)
return tf.math.softmax(tf.matmul(h, W2) + b2)
###Output
_____no_output_____
###Markdown
3.9.5. Defining the Loss Function For better numerical stability, we directly use the function provided by TensorFlow that combines the softmax operation with the cross-entropy loss calculation.
###Code
def loss(y_hat,y_true):
return tf.losses.sparse_categorical_crossentropy(y_true,y_hat)
###Output
_____no_output_____
###Markdown
3.9.6. Training the Model
###Code
num_epochs, lr = 5, 0.5
num_epochs, lr = 5, 0.5
params = [W1, b1, W2, b2]
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size, params, lr)
###Output
epoch 1, loss 0.7892, train acc 0.703, test acc 0.818
epoch 2, loss 0.4805, train acc 0.822, test acc 0.840
epoch 3, loss 0.4183, train acc 0.844, test acc 0.851
epoch 4, loss 0.3853, train acc 0.857, test acc 0.857
epoch 5, loss 0.3607, train acc 0.867, test acc 0.864
###Markdown
3.9 Implementing a Multilayer Perceptron from Scratch
###Code
import torch
import numpy as np
import sys
sys.path.append("..") # 为了导入上层目录的d2lzh_pytorch
import d2lzh_pytorch as d2l
print(torch.__version__)
###Output
1.10.0
###Markdown
3.9.1 Obtaining and Reading the Data
###Code
batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
###Output
_____no_output_____
###Markdown
3.9.2 Defining the Model Parameters
###Code
num_inputs, num_outputs, num_hiddens = 784, 10, 256
W1 = torch.tensor(np.random.normal(0, 0.01, (num_inputs, num_hiddens)), dtype=torch.float)
b1 = torch.zeros(num_hiddens, dtype=torch.float)
W2 = torch.tensor(np.random.normal(0, 0.01, (num_hiddens, num_outputs)), dtype=torch.float)
b2 = torch.zeros(num_outputs, dtype=torch.float)
params = [W1, b1, W2, b2]
for param in params:
param.requires_grad_(requires_grad=True)
###Output
_____no_output_____
###Markdown
3.9.3 Defining the Activation Function
###Code
def relu(X):
return torch.max(input=X, other=torch.tensor(0.0))
###Output
_____no_output_____
###Markdown
3.9.4 Defining the Model
###Code
def net(X):
X = X.view((-1, num_inputs))
H = relu(torch.matmul(X, W1) + b1)
return torch.matmul(H, W2) + b2
###Output
_____no_output_____
###Markdown
3.9.5 Defining the Loss Function
###Code
loss = torch.nn.CrossEntropyLoss()
###Output
_____no_output_____
###Markdown
3.9.6 Training the Model
###Code
num_epochs, lr = 5, 100.0
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size, params, lr)
###Output
epoch 1, loss 0.0030, train acc 0.715, test acc 0.788
epoch 2, loss 0.0019, train acc 0.820, test acc 0.780
epoch 3, loss 0.0017, train acc 0.842, test acc 0.842
epoch 4, loss 0.0015, train acc 0.857, test acc 0.840
epoch 5, loss 0.0015, train acc 0.865, test acc 0.834
|
Desafios/DesafioQodaR1_facil.ipynb
|
###Markdown
> **Qoda Challenge - Easy** **Questionnaire prepared by Qoda*** Answers: Rodrigo da Costa Aglinskas* Date: 16/10/2021 Create an algorithm in R that displays the message "Alo mundo" ("Hello world") on the screen.
###Code
x <- "Hello World!"
print(x)
###Output
[1] "Hello World!"
###Markdown
Create an algorithm in R that asks for two numbers and prints their sum.
###Code
soma.num1 <- readline(prompt="Digite um número para somar: ")
soma.num2 <- readline(prompt="Digite outro número para somar: ")
soma.num1 <- as.integer(soma.num1)
soma.num2 <- as.integer(soma.num2)
print(sum(soma.num1, soma.num2))
###Output
Digite um número para somar: 5
Digite outro número para somar: 4
[1] 9
###Markdown
Create an algorithm in R that asks for a number and then displays the message "The number entered was [number]".
###Code
df.num <- readline(prompt="Digite um nr: ")
df.num <- as.integer(df.num)
print(paste("O número digitado foi: ", df.num))
###Output
Digite um nr: 55
[1] "O número digitado foi: 55"
###Markdown
Create an algorithm in R that converts meters to centimeters.
###Code
metros <- readline(prompt="Digite um tamanho em metros para conversão em cm: ")
metros <- as.integer(metros)
cm <- metros * 100
print(paste('O tamanho convertido é',cm,'cm.'))
###Output
Digite um tamanho em metros para conversão em cm: 5
[1] "O tamanho convertido é 500 cm."
###Markdown
Create an algorithm in R that asks for the 4 bimonthly grades and displays the average.
###Code
nota1 <- readline(prompt="Digite a nota do 1º bimestre: ")
nota2 <- readline(prompt="Digite a nota do 2º bimestre: ")
nota3 <- readline(prompt="Digite a nota do 3º bimestre: ")
nota4 <- readline(prompt="Digite a nota do 4º bimestre: ")
nota1 <- as.double(nota1)
nota2 <- as.double(nota2)
nota3 <- as.double(nota3)
nota4 <- as.double(nota4)
media <- (nota1 + nota2 + nota3 + nota4)/4
print(paste('A média é',media))
###Output
Digite a nota do 1º bimestre: 3
Digite a nota do 2º bimestre: 9
Digite a nota do 3º bimestre: 9
Digite a nota do 4º bimestre: 4
[1] "A média é 4.5"
###Markdown
Create an algorithm in R that asks for the radius of a circle, then calculates and displays its area.
###Code
raio <- readline(prompt="Digite o raio do círculo: ")
raio <- as.double(raio)
area <- pi*(raio^2)
diametro <- raio * 2
print(paste('A área é',format(round(area, 2), nsmall = 2), 'e o diâmetro é',diametro))
###Output
Digite o raio do círculo: 5
[1] "A área é 78.54 e o diâmetro é 10"
###Markdown
Create an algorithm in R that calculates the area of a square and then shows the user twice that area.
###Code
base <- readline(prompt="Digite a base do quadrado: ")
altura <- readline(prompt="Digite a altura do quadrado: ")
base <- as.double(base)
altura <- as.double(altura)
areaQ <- base * altura
print(paste('A área do quadrado é: ',format(round(areaQ, 2), nsmall = 2)))
###Output
Digite a base do quadrado: 35
Digite a altura do quadrado: 12
[1] "A área do quadrado é: 420.00"
###Markdown
Create an algorithm in R that asks how much you earn per hour and the number of hours worked in the month. Calculate and display your total salary for that month.
###Code
salarioHora <- readline(prompt = 'Digite o valor da hora trabalhada: ')
horas <- readline(prompt = 'Digite a quantidade de horas trabalhadas: ')
salarioHora <- as.double(salarioHora)
horas <- as.double(horas)
salarioMes <- salarioHora * horas
print(paste('O salário a receber será de R$ :',salarioMes))
###Output
Digite o valor da hora trabalhada: 35
Digite a quantidade de horas trabalhadas: 720
[1] "O salário a receber será de R$ : 25200"
###Markdown
Write an algorithm in R that asks for the temperature in degrees Fahrenheit, then converts it and shows the temperature in degrees Celsius: C = (5 * (F-32) / 9).
###Code
print('########Conversor Farenheit para Celsius em R ########')
far <- readline(prompt = 'Digite a temperatura em Farenheit: ')
far <- as.double(far)
celsius <- (5*(far-32)/9)
print(paste('A temperatura em Celsius é: ',celsius, 'ºC'))
###Output
[1] "########Conversor Farenheit para Celsius em R ########"
Digite a temperatura em Farenheit: 89.6
[1] "A temperatura em Celsius é: 32 ºC"
###Markdown
Write an algorithm in R that asks for the temperature in degrees Celsius, then converts it and shows it in degrees Fahrenheit.
###Code
print('########Conversor Celsius para Farenheit########')
cel <- readline(prompt = 'Digite a temperatura em Celsius: ')
cel <- as.double(cel)
farenheit <- (cel * 9/5) + 32
print(paste('A temperatura em Farenheit é: ',farenheit, 'ºC'))
###Output
[1] "########Conversor Celsius para Farenheit########"
Digite a temperatura em Celsius: 32
[1] "A temperatura em Farenheit é: 89.6 ºC"
###Markdown
Write an algorithm in R that asks for 2 integers and one real number, then calculates and shows:* the sum of double the first with half of the second.* the sum of triple the first with the third.* the third raised to the cube.
###Code
int1 <- readline(prompt = 'Digite um número inteiro: ')
int2 <- readline(prompt= 'Digite outro número inteiro: ')
real <- readline(prompt = 'Digite um número real: ')
int1 <- as.integer(int1)
int2 <- as.integer(int2)
real <- as.double(real)
soma1 <- (int1*2)+(int2/2)
soma2 <- (int1*3)+real
cubo <- real^3
print(paste('A soma do dobro do primeiro com metade do segundo é: ', soma1))
print(paste('A soma do triplo do primeiro com o terceiro é: ', soma2))
print(paste('O terceiro elevado ao cubo é: ',format(round(cubo, 2), nsmall = 2)))
###Output
Digite um número inteiro: 4
Digite outro número inteiro: 8
Digite um número real: 1.9
[1] "A soma do dobro do primeiro com metade do segundo é: 12"
[1] "A soma do triplo do primeiro com o terceiro é: 13.9"
[1] "O terceiro elevado ao cubo é: 6.86"
|
Pymaceuticals/.ipynb_checkpoints/pymaceuticals_starter (2)-checkpoint.ipynb
|
###Markdown
Tumor Response to Treatment
###Code
# Assumed setup for this notebook (the cells defining these are not shown here):
# pandas, numpy and matplotlib imports, plus merge_table, the merged study DataFrame.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# Store the Mean Tumor Volume Data Grouped by Drug and Timepoint
Merge_table_drug = merge_table[["Drug", "Timepoint", "Tumor Volume (mm3)"]]
# Convert to DataFrame
Mean_Tumor_Volume = Merge_table_drug.groupby(["Drug", "Timepoint"]).mean()
# Preview DataFrame
Mean_Tumor_Volume = Mean_Tumor_Volume.reset_index()
Mean_Tumor_Volume
# Store the Standard Error of Tumor Volumes Grouped by Drug and Timepoint
Merge_table_drug_error = merge_table[["Drug", "Timepoint", "Tumor Volume (mm3)"]]
# Convert to DataFrame
Mean_Tumor_Volume_error = Merge_table_drug.groupby(["Drug", "Timepoint"]).sem()
# Preview DataFrame
Mean_Tumor_Volume_error = Mean_Tumor_Volume_error.reset_index()
Mean_Tumor_Volume_error
# Minor Data Munging to Re-Format the Data Frames
New_Mean_Tumor_Volume = Mean_Tumor_Volume.pivot(index='Timepoint', columns='Drug')
New_Mean_Tumor_Volume.columns = New_Mean_Tumor_Volume.columns.droplevel(0)
# Preview that Reformatting worked
New_Mean_Tumor_Volume.head()
# Generate the Plot (with Error Bars)
Treatment_Tumor_Volume = New_Mean_Tumor_Volume[['Capomulin', 'Infubinol', 'Ketapril', 'Placebo']]
# Save the Figure
Treatment_Tumor_Volume.plot(style=['ro:','b^:','gs:', '^k:']).grid(axis='y')
plt.title("Tumer Response to Treatment")
plt.xlabel("Time(Days)")
plt.ylabel("Tumor Volume (mm3)")
plt.ylim(32,73)
plt.xlim(-2,47)
plt.savefig("../Images/Treatment2.png")
plt.show()
###Output
_____no_output_____
###Markdown
Metastatic Response to Treatment
###Code
# Store the Mean Met. Site Data Grouped by Drug and Timepoint
Merge_table_Met = merge_table[["Drug", "Timepoint", "Metastatic Sites"]]
# Convert to DataFrame
Mean_Met_Sites = Merge_table_Met.groupby(["Drug", "Timepoint"]).mean()
# Preview DataFrame
Mean_Met_Sites = Mean_Met_Sites.reset_index()
Mean_Met_Sites.head()
# Store the Standard Error associated with Met. Sites Grouped by Drug and Timepoint
Merge_table_Met_error = merge_table[["Drug", "Timepoint", "Metastatic Sites"]]
# Convert to DataFrame
Mean_Met_error = Merge_table_Met.groupby(["Drug", "Timepoint"]).sem()
# Preview DataFrame
Mean_Met_error = Mean_Met_error.reset_index()
Mean_Met_error.head()
# Minor Data Munging to Re-Format the Data Frames
New_Mean_Met_Sites = Mean_Met_Sites.pivot(index='Timepoint', columns='Drug')
New_Mean_Met_Sites.columns = New_Mean_Met_Sites.columns.droplevel(0)
# Preview that Reformatting worked
New_Mean_Met_Sites
Treatment_Met_Sites = New_Mean_Met_Sites[['Capomulin', 'Infubinol', 'Ketapril', 'Placebo']]
# Save the Figure
Treatment_Met_Sites.plot(style=['ro:','b^:','gs:', '^k:']).grid(axis='y')
plt.title("Metastatic Spread During Treatment")
plt.xlabel("Treatment Duration(Days)")
plt.ylabel("Met. Sites")
plt.ylim(-0.2,3.8)
plt.xlim(-2,47)
plt.savefig("../Images/Met2.png")
plt.show()
###Output
_____no_output_____
###Markdown
 Survival Rates
###Code
# Store the Count of Mice Grouped by Drug and Timepoint (we can pass any metric)
Merge_table_Mice = merge_table[["Drug", "Timepoint", "Mouse ID"]]
# Convert to DataFrame
Merge_table_Mice_count = Merge_table_Mice.groupby(["Drug", "Timepoint"]).count()
# Preview DataFrame
Merge_table_Mice_count = Merge_table_Mice_count.reset_index()
Merge_table_Mice_count.head()
# Minor Data Munging to Re-Format the Data Frames
New_Mice_Survival = Merge_table_Mice_count.pivot(index='Timepoint', columns='Drug')
New_Mice_Survival.columns = New_Mice_Survival.columns.droplevel(0)
# Preview that Reformatting worked
New_Mice_Survival
Treatment_Mice_Survival = New_Mice_Survival[['Capomulin', 'Infubinol', 'Ketapril', 'Placebo']]
Treatment_Mice_Survival = Treatment_Mice_Survival.div(25)*100
# Generate the Plot (Accounting for percentages)
Treatment_Mice_Survival.plot(style=['ro:','b^:','gs:', '^k:'], grid=True)
plt.title("Survival During Treatment")
plt.xlabel("Time(Dyas)")
plt.ylabel("Survival Rate(%)")
plt.ylim(34,102)
plt.xlim(-2,47)
plt.savefig("../Images/Mice2.png")
plt.show()
# Save the Figure
# Show the Figure
plt.show()
###Output
_____no_output_____
###Markdown
 Summary Bar Graph
###Code
# Calculate the percent changes for each drug
# Display the data to confirm
#pd.options.display.float_format = '{:.2%}'.format
Treatment_Tumor_Volume
total = len(Treatment_Tumor_Volume.index)
New_Treatment = Treatment_Tumor_Volume.pct_change(periods = (total-1))
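# Added note: pct_change with periods = total-1 compares the final timepoint (day 45)
# against the first (day 0), so the surviving row holds each drug's overall
# % tumor volume change over the study.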
New_Treatment = New_Treatment.iloc[total-1:]*100
#New_Treatment = New_Treatment.style.format('{:.2%}')
New_Treatment = New_Treatment.round(0)
New_Treatment = New_Treatment.T.reset_index()
New_Treatment = New_Treatment.set_index('Drug')
subset = New_Treatment[45]
tuples = tuple(zip(subset.index, subset))
labels = [val[0] for val in tuples]
y_labels = [val[1] for val in tuples]
x = np.arange(len(labels))
width = 1
fig, ax = plt.subplots()
plt.ylim(-30, 70)
ax.set_title("Tumor Change Over 45 Day Treatment")
ax.set_ylabel("% Tumor Volume Change")
plt.tick_params(axis='x', rotation=0)
plt.grid(True)
ax.set_xticks(x)
ax.set_xticklabels(labels)
bars = []
def autolabel(y_labels):
for i, v in enumerate(y_labels):
if v > 0:
level = 2; bar = 'red'
else: level = -10; bar = 'green'
bars.append(bar)
ax.text(i-0.25 , level, str(v)+'%', color="white",
size = 12,
ha='right',
va='bottom',
multialignment = 'center')
autolabel(y_labels)
rects1 = ax.bar(x - width/2, y_labels, width, color=bars)
plt.savefig("../Images/Final2.png")
plt.show()
# Store all Relevant Percent Changes into a Tuple
# Splice the data between passing and failing drugs
# Orient widths. Add labels, tick marks, etc.
# Use functions to label the percentages of changes
# Call functions to implement the function calls
# Save the Figure
# Show the Figure
fig.show()
###Output
_____no_output_____
|
module3-ridge-regression/ LECTURE WITH EXAMPLES- LS_DS9_214.ipynb
|
###Markdown
Lambda School Data Science*Unit 2, Sprint 1, Module 4*--- Logistic Regression- do train/validate/test split- begin with baselines for classification- express and explain the intuition and interpretation of Logistic Regression- use sklearn.linear_model.LogisticRegression to fit and interpret Logistic Regression modelsLogistic regression is the baseline for classification models, as well as a handy way to predict probabilities (since those too live in the unit interval). While relatively simple, it is also the foundation for more sophisticated classification techniques such as neural networks (many of which can effectively be thought of as networks of logistic models). SetupRun the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.Libraries:- category_encoders- numpy- pandas- scikit-learn
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
###Output
_____no_output_____
###Markdown
Do train/validate/test split Overview Predict Titanic survival 🚢Kaggle is a platform for machine learning competitions. [Kaggle has used the Titanic dataset](https://www.kaggle.com/c/titanic/data) for their most popular "getting started" competition. Kaggle splits the data into train and test sets for participants. Let's load both:
###Code
import pandas as pd
train = pd.read_csv(DATA_PATH+'titanic/train.csv')
test = pd.read_csv(DATA_PATH+'titanic/test.csv')
###Output
_____no_output_____
###Markdown
Notice that the train set has one more column than the test set:
###Code
train.shape, test.shape
###Output
_____no_output_____
###Markdown
Which column is in train but not test? The target!
###Code
set(train.columns) - set(test.columns)
###Output
_____no_output_____
###Markdown
Why doesn't Kaggle give you the target for the test set? Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> One great thing about Kaggle competitions is that they force you to think about validation sets more rigorously (in order to do well). For those who are new to Kaggle, it is a platform that hosts machine learning competitions. Kaggle typically breaks the data into two sets you can download:>> 1. a **training set**, which includes the _independent variables,_ as well as the _dependent variable_ (what you are trying to predict).>> 2. a **test set**, which just has the _independent variables._ You will make predictions for the test set, which you can submit to Kaggle and get back a score of how well you did.>> This is the basic idea needed to get started with machine learning, but to do well, there is a bit more complexity to understand. **You will want to create your own training and validation sets (by splitting the Kaggle “training” data). You will just use your smaller training set (a subset of Kaggle’s training data) for building your model, and you can evaluate it on your validation set (also a subset of Kaggle’s training data) before you submit to Kaggle.**>> The most important reason for this is that Kaggle has split the test data into two sets: for the public and private leaderboards. The score you see on the public leaderboard is just for a subset of your predictions (and you don’t know which subset!). How your predictions fare on the private leaderboard won’t be revealed until the end of the competition. The reason this is important is that you could end up overfitting to the public leaderboard and you wouldn’t realize it until the very end when you did poorly on the private leaderboard. Using a good validation set can prevent this. You can check if your validation set is any good by seeing if your model has similar scores on it to compared with on the Kaggle test set. ...>> Understanding these distinctions is not just useful for Kaggle. In any predictive machine learning project, you want your model to be able to perform well on new data. 2-way train/test split is not enough Hastie, Tibshirani, and Friedman, [The Elements of Statistical Learning](http://statweb.stanford.edu/~tibs/ElemStatLearn/), Chapter 7: Model Assessment and Selection> If we are in a data-rich situation, the best approach is to randomly divide the dataset into three parts: a training set, a validation set, and a test set. The training set is used to fit the models; the validation set is used to estimate prediction error for model selection; the test set is used for assessment of the generalization error of the final chosen model. Ideally, the test set should be kept in a "vault," and be brought out only at the end of the data analysis. Suppose instead that we use the test-set repeatedly, choosing the model with the smallest test-set error. Then the test set error of the final chosen model will underestimate the true test error, sometimes substantially. Andreas Mueller and Sarah Guido, [Introduction to Machine Learning with Python](https://books.google.com/books?id=1-4lDQAAQBAJ&pg=PA270)> The distinction between the training set, validation set, and test set is fundamentally important to applying machine learning methods in practice. Any choices made based on the test set accuracy "leak" information from the test set into the model. Therefore, it is important to keep a separate test set, which is only used for the final evaluation. 
It is good practice to do all exploratory analysis and model selection using the combination of a training and a validation set, and reserve the test set for a final evaluation - this is even true for exploratory visualization. Strictly speaking, evaluating more than one model on the test set and choosing the better of the two will result in an overly optimistic estimate of how accurate the model is. Hadley Wickham, [R for Data Science](https://r4ds.had.co.nz/model-intro.htmlhypothesis-generation-vs.hypothesis-confirmation)> There is a pair of ideas that you must understand in order to do inference correctly:>> 1. Each observation can either be used for exploration or confirmation, not both.>> 2. You can use an observation as many times as you like for exploration, but you can only use it once for confirmation. As soon as you use an observation twice, you’ve switched from confirmation to exploration.>> This is necessary because to confirm a hypothesis you must use data independent of the data that you used to generate the hypothesis. Otherwise you will be over optimistic. There is absolutely nothing wrong with exploration, but you should never sell an exploratory analysis as a confirmatory analysis because it is fundamentally misleading.>> If you are serious about doing an confirmatory analysis, one approach is to split your data into three pieces before you begin the analysis. Sebastian Raschka, [Model Evaluation](https://sebastianraschka.com/blog/2018/model-evaluation-selection-part4.html)> Since “a picture is worth a thousand words,” I want to conclude with a figure (shown below) that summarizes my personal recommendations ...Usually, we want to do **"Model selection (hyperparameter optimization) _and_ performance estimation."** (The green box in the diagram.)Therefore, we usually do **"3-way holdout method (train/validation/test split)"** or **"cross-validation with independent test set."** What's the difference between Training, Validation, and Testing sets? Brandon Rohrer, [Training, Validation, and Testing Data Sets](https://end-to-end-machine-learning.teachable.com/blog/146320/training-validation-testing-data-sets)> The validation set is for adjusting a model's hyperparameters. The testing data set is the ultimate judge of model performance.>> Testing data is what you hold out until very last. You only run your model on it once. You don’t make any changes or adjustments to your model after that. ... Follow Along> You will want to create your own training and validation sets (by splitting the Kaggle “training” data).Do this, using the [sklearn.model_selection.train_test_split](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) function:
###Code
from sklearn.model_selection import train_test_split
train, val = train_test_split(train, random_state=42)
train.shape, val.shape
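
# Added sketch (not executed here): the Challenge below asks for a 3-way
# train/validate/test split, which can be done with two successive calls, e.g.
#   df = pd.read_csv(DATA_PATH+'titanic/train.csv')
#   train, test = train_test_split(df, random_state=42)
#   train, val = train_test_split(train, random_state=42)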
###Output
_____no_output_____
###Markdown
Challenge For your assignment, you'll do a 3-way train/validate/test split.Then next sprint, you'll begin to participate in a private Kaggle challenge, just for your cohort! You will be provided with data split into 2 sets: training and test. You will create your own training and validation sets, by splitting the Kaggle "training" data, so you'll end up with 3 sets total. Begin with baselines for classification Overview We'll begin with the **majority class baseline.**[Will Koehrsen](https://twitter.com/koehrsen_will/status/1088863527778111488)> A baseline for classification can be the most common class in the training dataset.[*Data Science for Business*](https://books.google.com/books?id=4ZctAAAAQBAJ&pg=PT276), Chapter 7.3: Evaluation, Baseline Performance, and Implications for Investments in Data> For classification tasks, one good baseline is the _majority classifier,_ a naive classifier that always chooses the majority class of the training dataset (see Note: Base rate in Holdout Data and Fitting Graphs). This may seem like advice so obvious it can be passed over quickly, but it is worth spending an extra moment here. There are many cases where smart, analytical people have been tripped up in skipping over this basic comparison. For example, an analyst may see a classification accuracy of 94% from her classifier and conclude that it is doing fairly well—when in fact only 6% of the instances are positive. So, the simple majority prediction classifier also would have an accuracy of 94%. Follow Along Determine majority class
###Code
train.head()
target = 'Survived'
y_train = train[target]
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
What if we guessed the majority class for every prediction?
###Code
# Just for demonstration
majority_class = y_train.mode()[0]
y_pred = [majority_class] * len(y_train)
###Output
_____no_output_____
###Markdown
Use a classification metric: accuracy[Classification metrics are different from regression metrics!](https://scikit-learn.org/stable/modules/model_evaluation.html)- Don't use _regression_ metrics to evaluate _classification_ tasks.- Don't use _classification_ metrics to evaluate _regression_ tasks.[Accuracy](https://scikit-learn.org/stable/modules/model_evaluation.htmlaccuracy-score) is a common metric for classification. Accuracy is the ["proportion of correct classifications"](https://en.wikipedia.org/wiki/Confusion_matrix): the number of correct predictions divided by the total number of predictions. What is the baseline accuracy if we guessed the majority class for every prediction?
###Code
# Training accuracy of majority class baseline =
# frequency of majority class (aka base rate)
from sklearn.metrics import accuracy_score
accuracy_score(y_train, y_pred)
# Validation accuracy of majority class baseline =
# usually similar to Train accuracy
y_val = val[target]
y_pred = [majority_class] * len(y_val)
accuracy_score(y_val, y_pred)
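
# Added illustration: the same accuracy computed by hand as
# "number of correct predictions divided by the total number of predictions".
sum(p == a for p, a in zip(y_pred, y_val)) / len(y_val)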
###Output
_____no_output_____
###Markdown
Challenge In your assignment, your Sprint Challenge, and your upcoming Kaggle challenge, you'll begin with the majority class baseline. How quickly can you beat this baseline? Express and explain the intuition and interpretation of Logistic Regression OverviewTo help us get an intuition for *Logistic* Regression, let's start by trying *Linear* Regression instead, and see what happens... Follow Along Linear Regression?
###Code
train.describe()
# 1. Import estimator class
from sklearn.linear_model import LinearRegression
# 2. Instantiate this class
linear_reg = LinearRegression()
# 3. Arrange X feature matrices (already did y target vectors)
features = ['Pclass', 'Age', 'Fare']
X_train = train[features]
X_val = val[features]
# Impute missing values
from sklearn.impute import SimpleImputer
imputer = SimpleImputer()
X_train_imputed = imputer.fit_transform(X_train)
X_val_imputed = imputer.transform(X_val)
# 4. Fit the model
linear_reg.fit(X_train_imputed, y_train)
# 5. Apply the model to new data.
# The predictions look like this ...
linear_reg.predict(X_val_imputed)
# Get coefficients
pd.Series(linear_reg.coef_, features)
test_case = [[1, 5, 500]] # 1st class, 5-year old, Rich
linear_reg.predict(test_case)
###Output
_____no_output_____
###Markdown
Logistic Regression!
###Code
from sklearn.linear_model import LogisticRegression
log_reg = LogisticRegression(solver='lbfgs')
log_reg.fit(X_train_imputed, y_train)
print('Validation Accuracy', log_reg.score(X_val_imputed, y_val))
# The predictions look like this
log_reg.predict(X_val_imputed)
log_reg.predict(test_case)
log_reg.predict_proba(test_case)
# What's the math?
pd.Series(log_reg.coef_[0], features)
log_reg.intercept_
# The logistic sigmoid "squishing" function, implemented to accept numpy arrays
import numpy as np
def sigmoid(x):
return 1 / (1 + np.e**(-x))
sigmoid(log_reg.intercept_ + np.dot(log_reg.coef_, np.transpose(test_case)))
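
# Added note: this manual computation reproduces the positive-class probability
# returned by log_reg.predict_proba(test_case) above (its second column).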
###Output
_____no_output_____
###Markdown
So, clearly a more appropriate model in this situation! For more on the math, [see this Wikipedia example](https://en.wikipedia.org/wiki/Logistic_regressionProbability_of_passing_an_exam_versus_hours_of_study). Use sklearn.linear_model.LogisticRegression to fit and interpret Logistic Regression models OverviewNow that we have more intuition and interpretation of Logistic Regression, let's use it within a realistic, complete scikit-learn workflow, with more features and transformations. Follow AlongSelect these features: `['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked']`(Why shouldn't we include the `Name` or `Ticket` features? What would happen here?) Fit this sequence of transformers & estimator:- [category_encoders.one_hot.OneHotEncoder](https://contrib.scikit-learn.org/categorical-encoding/onehot.html)- [sklearn.impute.SimpleImputer](https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html)- [sklearn.preprocessing.StandardScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html)- [sklearn.linear_model.LogisticRegressionCV](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegressionCV.html)Get validation accuracy.
###Code
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler
features = ['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked']
target = 'Survived'
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
X_train.shape, y_train.shape, X_val.shape, y_val.shape
encoder = ce.OneHotEncoder(use_cat_names=True)
X_train_encoded = encoder.fit_transform(X_train)
X_val_encoded = encoder.transform(X_val)
X_train_encoded.head()
X_val_encoded.head()
imputer = SimpleImputer()
X_train_imputed = imputer.fit_transform(X_train_encoded)
X_val_imputed = imputer.transform(X_val_encoded)
X_train_imputed[:5]
X_val_imputed[:5]
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train_imputed)
X_val_scaled = scaler.transform(X_val_imputed)
X_train_scaled[:5]
model = LogisticRegressionCV(cv=5, n_jobs=-1, random_state=42)
model.fit(X_train_scaled, y_train)
print('Validation Accuracy', model.score(X_val_scaled, y_val))
###Output
Validation Accuracy 0.8071748878923767
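###Markdown
 As an added aside (not part of the original lesson), the same encode -> impute -> scale -> fit sequence can be chained into a single scikit-learn pipeline, reusing the classes imported above:
###Code
from sklearn.pipeline import make_pipeline
pipe = make_pipeline(
    ce.OneHotEncoder(use_cat_names=True),
    SimpleImputer(),
    StandardScaler(),
    LogisticRegressionCV(cv=5, n_jobs=-1, random_state=42),
)
pipe.fit(X_train, y_train)
print('Validation Accuracy', pipe.score(X_val, y_val))
###Output
_____no_output_____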
###Markdown
Plot coefficients:
###Code
%matplotlib inline
coefficients = pd.Series(model.coef_[0], X_train_encoded.columns)
coefficients.sort_values().plot.barh();
###Output
_____no_output_____
###Markdown
Generate [Kaggle](https://www.kaggle.com/c/titanic) submission:
###Code
X_test = test[features]
X_test_encoded = encoder.transform(X_test)
X_test_imputed = imputer.transform(X_test_encoded)
X_test_scaled = scaler.transform(X_test_imputed)
y_pred = model.predict(X_test_scaled)
submission = test[['PassengerId']].copy()
submission['Survived'] = y_pred
submission.to_csv('titanic-submission-01.csv', index=False)
###Output
_____no_output_____
|
amath563/hw2/hw2_rd.ipynb
|
###Markdown
Load RD trajectoriesTrajectories have been generated in MATLAB for randomly generated initial conditions.
###Code
# Assumed imports for this notebook (earlier setup cells, including the Keras model
# utilities and the plot_losses callback, are not shown here):
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from scipy.io import loadmat

N = 128
T = 201
num_iter = 20
num_tests = 1
RD_all_data = np.zeros((num_iter-num_tests,T,N,2*N))
RD_input_data = np.zeros(((T-1)*(num_iter-num_tests),N,2*N))
RD_target_data = np.zeros(((T-1)*(num_iter-num_tests),N,2*N))
for i in range(num_iter-num_tests):
d = loadmat('PDECODES/RD_data/N'+str(N)+'/iter'+str(i+1)+'.mat')
u = d['u']
v = d['v']
RD_all_data[i,:,:,:N] = u[:,:,:].T
RD_all_data[i,:,:,N:] = v[:,:,:].T
RD_input_data[i*(T-1):(i+1)*(T-1),:,:] = RD_all_data[i,:-1,:,:]
RD_target_data[i*(T-1):(i+1)*(T-1),:,:] = RD_all_data[i,1:,:,:]
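    # Inputs and targets are consecutive snapshots of the same trajectory, so the
    # network is trained to learn the one-time-step advance map u(t) -> u(t+dt).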
# RD_input_data[i*(T-1):(i+1)*(T-1),:,:N] = u[:,:,:-1].T
# RD_input_data[i*(T-1):(i+1)*(T-1),:,N:] = v[:,:,:-1].T
# RD_target_data[i*(T-1):(i+1)*(T-1),:,:N] = u[:,:,1:].T
# RD_target_data[i*(T-1):(i+1)*(T-1),:,N:] = v[:,:,1:].T
RD_test_data = np.zeros((T*num_tests,N,2*N))
for i in range(num_tests):
d = loadmat('PDECODES/RD_data/N'+str(N)+'/iter'+str(num_iter-i)+'.mat')
u = d['u']
v = d['v']
RD_test_data[i*T:(i+1)*T,:,:N] = u.T
RD_test_data[i*T:(i+1)*T,:,N:] = v.T
RD_input_data.shape
plt.pcolormesh(RD_input_data[200,:,:N])
plt.axis('image')
plt.show()
plt.pcolormesh(RD_input_data[200,:,N:])
plt.axis('image')
plt.show()
plt.figure(figsize=(6,3.3))
m = plt.pcolormesh(np.reshape(uu[:,i],(N,2*N)))
m.set_rasterized(True)
plt.axis('image')
plt.savefig('img/svd_mode_'+str(i+1)+'.pdf')
plt.gcf().get_size_inches()
i=14
mpl.rcParams['text.usetex'] = True
plt.figure(figsize=(6,3.3))
m = plt.pcolormesh(RD_test_data[15,:,:])
m.set_rasterized(True)
plt.axis('image')
plt.savefig('img/prediction_t'+str(i+1)+'.pdf')
plt.gcf().get_size_inches()
###Output
_____no_output_____
###Markdown
Train Neural Network on Full Data
###Code
model = Sequential()
model.add(Dense(N*2*N, activation='tan_sig', use_bias=True, input_shape=(N*2*N,)))
model.add(Dense(N*2*N, activation='sigmoid', use_bias=True))
#model.add(Dense(N*2*N, activation='linear', use_bias=True))
model.add(Dense(N*2*N))
sgd1 = keras.optimizers.SGD(lr=0.001, decay=1e-15, momentum=1, nesterov=True)
adam1 = keras.optimizers.Adam(lr=.02, beta_1=0.9, beta_2=0.999, epsilon=None, decay=1e-4, amsgrad=True, clipvalue=0.5)
nadam1 = keras.optimizers.Nadam(lr=0.02, beta_1=0.9, beta_2=0.999, epsilon=None, schedule_decay=0.004)
rmsprop1 = keras.optimizers.RMSprop(lr=0.01, rho=0.9, epsilon=None, decay=0.0)
model.compile(loss='mean_squared_error', optimizer=adam1, metrics=['accuracy'])
#plot_model(model, to_file='model.pdf', show_shapes=True)
mpl.rcParams['text.usetex'] = False
model.fit(
np.reshape(RD_input_data,(-1,N*2*N)),
np.reshape(RD_target_data,(-1,N*2*N)),
epochs=1000, batch_size=800, shuffle=True, callbacks=[plot_losses], validation_split=0.0)
RD_NN_prediction = np.zeros(np.reshape(RD_test_data[0:T],(-1,N*2*N)).shape)
RD_NN_prediction[0] = np.reshape(RD_test_data[0],(-1,N*2*N))
for k in range(T-1):
RD_NN_prediction[k+1] = model.predict(np.array([RD_NN_prediction[k]]))
np.reshape(RD_NN_prediction,(-1,N,2*N)).shape
mpl.rcParams['text.usetex'] = True
i=15
m = plt.pcolormesh(np.reshape(RD_NN_prediction,(-1,N,2*N))[i])
m.set_rasterized(True)
plt.axis('image')
plt.savefig('img/predicted_RD_'+str(i)+'_trajectory.pdf')
np.reshape(np.reshape(RD_input_data,(-1,N*2*N)),(-1,N,2*N))
RD_input_data
###Output
_____no_output_____
###Markdown
Compute SVDReshape the data and compute a rank-k approximation to find a fixed subspace onto which we project our spatial points at each time.
###Code
RD_all_data.shape
RD_all_data_reshaped = np.reshape(RD_all_data[:,:,:,],(-1,2*N*N)).T
np.shape(RD_all_data_reshaped)
[uu,ss,vvh] = np.linalg.svd(RD_all_data_reshaped,full_matrices=False)
mpl.rcParams['text.usetex'] = True
plt.scatter(np.arange(len(ss)),ss,color='k')
plt.savefig('img/singular_values.pdf')
i=0
plt.figure(figsize=(6,3.3))
m = plt.pcolormesh(np.reshape(uu[:,i],(N,2*N)))
m.set_rasterized(True)
plt.axis('image')
plt.savefig('img/svd_mode_'+str(i+1)+'.pdf')
plt.gcf().get_size_inches()
plt.figure(figsize=(6,3.3))
plt.plot(np.arange(len(vvh[i])),vvh[i],color='k')
plt.savefig('img/svd_coeff_'+str(i+1)+'.pdf')
plt.gcf().get_size_inches()
###Output
_____no_output_____
###Markdown
Set rank and take reduced SVD
###Code
rank = 100
u = uu[:,:rank]
s = ss[:rank]
vh = vvh[:rank]
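
# Added check (not in the original notebook): fraction of the total energy (sum of
# squared singular values) retained by the rank-100 truncation.
print(np.sum(s**2) / np.sum(ss**2))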
SVD_input_data = np.delete(vh,np.s_[200::201],axis=1).T
SVD_target_data = np.delete(vh,np.s_[1::201],axis=1).T
SVD_input_data.shape
plt.figure(figsize=(6,3.3))
m = plt.pcolormesh(np.reshape([email protected](s)@SVD_input_data[180],(N,2*N)))
m.set_rasterized(True)
plt.axis('image')
plt.savefig('img/uv_t180.pdf')
plt.figure(figsize=(6,3.3))
m = plt.pcolormesh(RD_all_data[0,180])
m.set_rasterized(True)
plt.axis('image')
plt.savefig('img/svd_t180.pdf')
###Output
_____no_output_____
###Markdown
Train Net on SVD data
###Code
model = Sequential()
model.add(Dense(2*rank, activation='tan_sig', use_bias=True, input_shape=(rank,)))
model.add(Dense(2*rank, activation='sigmoid', use_bias=True))
model.add(Dense(2*rank, activation='linear', use_bias=True))
model.add(Dense(rank))
sgd1 = keras.optimizers.SGD(lr=0.001, decay=1e-15, momentum=1, nesterov=True)
adam1 = keras.optimizers.Adam(lr=.02, beta_1=0.9, beta_2=0.999, epsilon=None, decay=1e-4, amsgrad=True, clipvalue=0.5)
nadam1 = keras.optimizers.Nadam(lr=0.02, beta_1=0.9, beta_2=0.999, epsilon=None, schedule_decay=0.004)
rmsprop1 = keras.optimizers.RMSprop(lr=0.01, rho=0.9, epsilon=None, decay=0.0)
model.compile(loss='mean_squared_error', optimizer=adam1, metrics=['accuracy'])
mpl.rcParams['text.usetex'] = False
model.fit(
SVD_input_data,
SVD_target_data,
epochs=1000, batch_size=80, shuffle=True, callbacks=[plot_losses], validation_split=0.0)
SVD_test_data = np.reshape(RD_test_data[0:T],(-1,N*2*N))@u
SVD_NN_prediction = np.zeros(SVD_test_data.shape)
SVD_NN_prediction[0] = SVD_test_data[0]
for k in range(T-1):
SVD_NN_prediction[k+1] = model.predict(np.array([SVD_NN_prediction[k]]))
plt.figure(figsize=(6,3.3))
m = plt.pcolormesh(np.reshape([email protected](s)@SVD_NN_prediction[15],(N,2*N)))
m.set_rasterized(True)
plt.axis('image')
plt.savefig('img/svd_prediction_t15.pdf')
SVD_input_data[0].shape
plt.figure()
plt.scatter(np.arange(T),SVD_NN_prediction[:,0])
plt.scatter(np.arange(T-1),SVD_input_data[:,0])
plt.show()
%matplotlib notebook
import matplotlib.animation
t = np.arange(T)
fig, ax = plt.subplots()
def animate(i):
plt.pcolormesh(np.reshape([email protected](s)@SVD_NN_prediction[i],(N,2*N)))
# plt.pcolormesh(RD_all_data[0,i])
plt.axis('image')
ani = matplotlib.animation.FuncAnimation(fig, animate, frames=len(t))
plt.show()
###Output
_____no_output_____
|
benchmarks/metabolites/metabolie_grounding.ipynb
|
###Markdown
Grounding a list of metabolites
###Code
import re
from gilda import ground
###Output
INFO: [2020-12-17 09:52:51] /Users/ben/Dropbox/postdoc/darpa/src/deft/adeft/recognize.py - OneShotRecognizer not available. Extension module for AlignmentBasedScorer is missing
###Markdown
We define some basic functions to load the strings and run Gilda on them. We also define a function to print grounding stats and print out any ungrounded strings.
###Code
def load_texts():
with open('plasmax_name_to_kegg.txt') as fh:
texts = [l.strip().split(',')[0] for l in fh.readlines()][1:]
return sorted(set(texts))
def ground_texts(texts, grounding_fun):
return {text: grounding_fun(text) for text in texts}
def print_grounding_stats(groundings):
grounded = [t for t, g in groundings.items() if g]
ungrounded = [t for t, g in groundings.items() if not g]
num_texts = len(groundings)
print('Grounded: %d/%d (%.2f%%)' % (len(grounded), num_texts, 100*len(grounded)/num_texts))
print(ungrounded)
###Output
_____no_output_____
###Markdown
First we try running Gilda without any modifications and see what happens
###Code
texts = load_texts()
results = ground_texts(texts, ground)
print_grounding_stats(results)
###Output
Grounded: 163/230 (70.87%)
['2-aminomuconicacid', '2-hg', '2/3-phosphoglycerate', '4-pyridoxicacid', '5-phosphoribosyl-1-pyrophosphate', '6-phosphogluconate', 'aconitate', 'akg', 'argininosuccinate', 'ascorbicacid', 'carbamoyl_phosphate', 'carbamoylaspartate', 'carbamoylphosphate', 'cmp-acetylneuraminicacid', 'cysteicacid', 'dihydroacetonephosphate', 'dihydroxyacetonephosphate', 'fructose-16-bisphosphate', 'fructose1-6-bisphosphate', 'fructose1_6-biphosphate', 'glucosamine-6-phosphate', 'glucosamine6-phosphate', 'glutathioneoxidized', 'hydroxyphenyllacticacid', 'indole-3-lacticacid', 'isethionicacid', 'kiv', 'kmv+kic', 'kynurenic_acid', 'kynurenicacid', 'lactoylgsh', 'linoleicacid', 'mannitol/sorbitol', 'methioninesulfoxide', 'methyltryptophan', 'mevalonicacid', 'mevalonicacid5-pyrophosphate', 'myristic_acid', 'myristicacid', 'n-acetylglutamate', 'n-acetylneuramicacid', 'n-methylnicotinamide(nmnm)', 'nicotinamide/picolinamide', 'nicotinamidemononucleotide(nmn)', 'oleic_acid', 'oleicacid', 'oleoamide', 'oroticacid', 'palmitic_acid', 'palmiticacid', 'palmitoleicacid', 'palmitoylcarnitinec16', 'pentose5-phosphates', 'phenolsulphate', 'pipecolicacid', 'pyridoxide', 'pyroglutamicacid', 'quinolinicacid', 'ribitol/arabitol', 'sedoheptulose7-phosphate', 'seduheptulose7-phosphate', 'staericacid', 'stearicacid', 'stereamide', 'succinicglutathione', 'succinylglutathione', 'uricacid']
###Markdown
It looks like 71% was grounded. Here is one example: each result is a list of ScoredMatch objects, each containing a Term and some metadata. The grounding is included in the Term. Matches are sorted by decreasing score, with the highest scoring match on top.
###Code
print(results['lactate'])
print(results['lactate'][0].term.db, results['lactate'][0].term.id)
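# Added note: the numeric match score (1.0 here, visible in the repr above) is also
# available directly via results['lactate'][0].score.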
###Output
[ScoredMatch(Term(lactate,lactate,CHEBI,CHEBI:24996,lactate,assertion,famplex,None),1.0,Match(query=lactate,ref=lactate,exact=True,space_mismatch=False,dash_mismatches={},cap_combos=[]))]
CHEBI CHEBI:24996
###Markdown
Upon examination, the entries in the ungrounded list have some patterns of issues that can be fixed with some preprocessing in the `preprocess_text` function. We can then define and use `ground_preprocess` which preprocesses each text before grounding it with Gilda.
###Code
typos = {
'stereamide': 'stearamide',
'staericacid': 'stearicacid',
}
def preprocess_text(text):
if text in typos:
text = typos[text]
# Example: nicotinamidemononucleotide(nmn)
text = re.sub('(\([a-zA-Z]+\))$', '', text)
# Example: palmitic_acid
text = text.replace('_', ' ')
# Example: pipecolicacid
suffixes = ['acid', 'mononucleotide']
for suffix in suffixes:
text = re.sub('([^ ])(%s)$' % suffix, '\\1 %s' % suffix, text)
# Example: nicotinamide/picolinamide
if '/' in text:
text = text.split('/')[0]
return text
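
# Added examples of what the rules above produce:
#   preprocess_text('palmitic_acid') -> 'palmitic acid'
#   preprocess_text('nicotinamidemononucleotide(nmn)') -> 'nicotinamide mononucleotide'
#   preprocess_text('ribitol/arabitol') -> 'ribitol'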
def ground_preprocess(text):
text = preprocess_text(text)
return ground(text)
results = ground_texts(texts, ground_preprocess)
print_grounding_stats(results)
###Output
Grounded: 195/230 (84.78%)
['2-hg', '2/3-phosphoglycerate', '5-phosphoribosyl-1-pyrophosphate', '6-phosphogluconate', 'aconitate', 'akg', 'argininosuccinate', 'carbamoylaspartate', 'carbamoylphosphate', 'dihydroacetonephosphate', 'dihydroxyacetonephosphate', 'fructose-16-bisphosphate', 'fructose1-6-bisphosphate', 'fructose1_6-biphosphate', 'glucosamine-6-phosphate', 'glucosamine6-phosphate', 'glutathioneoxidized', 'hydroxyphenyllacticacid', 'kiv', 'kmv+kic', 'lactoylgsh', 'methioninesulfoxide', 'methyltryptophan', 'mevalonicacid5-pyrophosphate', 'n-acetylglutamate', 'n-acetylneuramicacid', 'oleoamide', 'palmitoylcarnitinec16', 'pentose5-phosphates', 'phenolsulphate', 'pyridoxide', 'sedoheptulose7-phosphate', 'seduheptulose7-phosphate', 'succinicglutathione', 'succinylglutathione']
###Markdown
Gilda doesn't have the right synonyms to find groundings for these remaining ungrounded texts. Standardizing the results INDRA offers utilities to map identifiers and standardize names which can be useful in this setting, see https://indra.readthedocs.io/en/latest/modules/ontology/standardize.html.
###Code
from indra.ontology.standardize import standardize_name_db_refs
standardize_name_db_refs({results['lactate'][0].term.db: results['lactate'][0].term.id})
###Output
INFO: [2020-12-17 09:53:11] indra.ontology.bio.ontology - Loading INDRA bio ontology from cache at /Users/ben/.indra/bio_ontology/1.5/bio_ontology.pkl
###Markdown
We see that the standard name for this entry from CHEBI is `lactate` and we were able to get CAS and PUBCHEM mappings for it. We can also look at ontological information for the grounded entries via INDRA as follows, with the example of `glutamine`. It looks like `glutamine` has a lot of children in the ChEBI ontology.
###Code
from indra.ontology.bio import bio_ontology
glutamine_term = ground('glutamine')[0].term
children = bio_ontology.get_children(glutamine_term.db, glutamine_term.id)
for child in children:
print(bio_ontology.get_name(*child), child[1])
###Output
N(2)-acetyl-D-glutamine CHEBI:144430
N(2)-acylglutamine CHEBI:83985
alpha-chrysopine CHEBI:83080
poly-L-glutamic acid CHEBI:26173
Gln-Cys-Cys CHEBI:144458
Ala-Met-Gln-Gln CHEBI:137239
alpha-N-peptidyl-L-glutamine CHEBI:16376
Cys-Met-Gln CHEBI:144427
N-(gamma-L-glutamyl)-2-naphthylamine CHEBI:90444
Asn-Met-Gln-Pro CHEBI:138505
gamma-glutamylputrescine CHEBI:48006
N(5)-phenyl-L-glutamine CHEBI:79289
Dnp-Gln CHEBI:72487
Glu-Phe-Gln-Gln CHEBI:73488
Gln-Trp CHEBI:141431
(4-\{4-[2-(gamma-L-glutamylamino)ethyl]phenoxymethyl\}furan-2-yl)methanamine CHEBI:88248
Tnp-Gln CHEBI:72495
N(2)-[(2E)-3-methylhex-2-enoyl]-L-glutamine CHEBI:145321
coprine CHEBI:3875
N(5)-ethyl-L-glutamine CHEBI:17394
N(2)-phenylacetylglutamine CHEBI:8087
L-glutamine derivative CHEBI:24317
Arg-Asn-Gln-Arg CHEBI:73397
ophthalmic acid CHEBI:84058
5,6,7,8-tetrahydrofolyl-L-glutamic acid CHEBI:27650
Asp-Gln-Arg CHEBI:73447
10-formyltetrahydrofolyl glutamate CHEBI:19111
gamma-glutamyltyramine CHEBI:84215
N(2)-[4-(2,4-dichlorophenoxy)butanoyl]-L-glutamine CHEBI:144862
Gln-Gln CHEBI:73846
Gln-Val CHEBI:141433
N(2)-phenylacetyl-L-glutamine CHEBI:17884
mannopine CHEBI:80662
Gln-Phe-Trp-Tyr CHEBI:73464
glutaminium CHEBI:32679
gamma-glutamyl-gamma-aminobutyraldehyde CHEBI:61521
(S)-proglumide CHEBI:76268
10-formyltetrahydrofolyl-(Glu)n CHEBI:134412
Aceglutamide aluminum CHEBI:31161
Asp-Gln-Ser CHEBI:73448
chrysopine CHEBI:83079
tetrahydrofolyl-poly(L-glutamic acid) macromolecule CHEBI:68512
Asn-Gln CHEBI:73421
Asp-Phe-Asp-Gln CHEBI:73437
Ala-Gln-Pro CHEBI:73347
peptidyl-L-glutamyl 5-glycerophosphoethanolamine CHEBI:25912
D-glutamine CHEBI:17061
Gln-Leu CHEBI:141429
Gln-Leu-Leu-Pro CHEBI:73463
gamma-Glu-Gln CHEBI:73707
gamma-L-glutamylputrescinium(1+) CHEBI:58731
Leu-Thr-Gln CHEBI:73574
Gln-Tyr CHEBI:141432
2-methyl-L-glutamine CHEBI:43949
Lys-Gln CHEBI:73600
D-glutaminium CHEBI:32673
glutaurine CHEBI:27694
L-glutamine amide CHEBI:21309
L-glutamine 2-naphthylamide CHEBI:90446
Gln-Asn CHEBI:141428
N-L-glutamyl-poly-L-glutamic acid CHEBI:21490
N-(gamma-L-glutamyl)-L-alaninol CHEBI:85894
gamma-glutamyl-beta-cyanoalanine CHEBI:10565
gamma-glutamyl-beta-aminopropiononitrile CHEBI:28092
tetrahydrofolyl-poly(glutamic acid) macromolecule CHEBI:28624
10-formyltetrahydrofolyl-L-glutamate CHEBI:27862
indigoidine CHEBI:79296
Theanine glucoside CHEBI:136628
N-oleoyl-L-glutamine CHEBI:136615
beta-chrysopine CHEBI:83081
Glu-Asp-Gln-Gln CHEBI:73465
N(5)-methyl-L-glutamine CHEBI:17592
Ala-Leu-Thr-Gln CHEBI:73372
N(2)-acyl-L-glutamine CHEBI:17008
Gln-Phe CHEBI:141430
(2S,4S)-Pinnatanine CHEBI:143049
gamma-L-glutamylputrescine CHEBI:48005
(R)-proglumide CHEBI:76267
D-glutamine derivative CHEBI:83987
Arg-Asp-Gln-Ser CHEBI:137242
N-(indol-3-ylacetyl)glutamine CHEBI:70811
N(2)-(3-hydroxy-3-methylhexanoyl)-L-glutamine CHEBI:145323
N(2)-acetylglutamine CHEBI:73685
tetrahydrofolyl glutamate CHEBI:26908
N-acetyl-L-glutamine CHEBI:21553
5,10-methylenetetrahydrofolylpolyglutamate CHEBI:20503
N(2)-benzoyl-N,N-dipropyl-alpha-glutamine CHEBI:76266
(2R)-2-amino-5-[2-(3,4-dihydroxyphenyl)ethylamino]-5-oxopentanoic acid CHEBI:125658
Glu-Gln CHEBI:141435
glutamine derivative CHEBI:70813
3'-L-glutaminyl-AMP CHEBI:131558
L-glutamyl 5-glycerophosphoethanolamine CHEBI:21311
N(5)-phospho-L-glutamine CHEBI:139506
phenylacetylglutamine CHEBI:25982
Leu-Asp-Gln CHEBI:73561
L-glutaminium CHEBI:32666
Asp-Leu-Asp-Gln CHEBI:73428
Asp-Leu-Leu-Gln CHEBI:73429
N(5)-alkyl-L-glutamine CHEBI:21844
N(2)-[4-(indol-3-yl)butanoyl]-L-glutamine CHEBI:144365
peptidyl-glutamine CHEBI:25919
Glu-Glu-Gln CHEBI:144559
Ala-Asn-Gln-Ser CHEBI:73331
L-glutamine CHEBI:18050
N-isopropyl-L-glutamine CHEBI:85891
|
Python/Unlimited-Precision.ipynb
|
###Markdown
Computing with Unlimited Precision *Python* provides the module fractions that implements *rational numbers* through the function Fraction that is implemented in this module. We can load this function as follows:
###Code
from fractions import Fraction
###Output
_____no_output_____
###Markdown
The function Fraction expects two arguments, the *numerator* and the *denominator*. Mathematically, we have$$ \texttt{Fraction}(p, q) = \frac{p}{q}. $$For example, we can compute the sum $\frac{1}{2} + \frac{1}{3}$ as follows:
###Code
sum = Fraction(1, 2) + Fraction(1, 3)
print(sum)
###Output
5/6
###Markdown
Let us compute Euler's number $e$. The easiest way to compute $e$ is as an infinite series. We have that$$ e = \sum\limits_{n=0}^\infty \frac{1}{n!} $$Here $n!$ denotes the *factorial* of $n$, which is defined as follows:$$ n! = 1 \cdot 2 \cdot 3 \cdot {\dots} \cdot n. $$
###Code
def factorial(n):
"compute the factorial of n"
result = 1
for i in range(1, n+1):
result *= i
return result
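
# Note (added): Python's standard library provides the same computation as
# math.factorial(n), which could replace this hand-written loop.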
###Output
_____no_output_____
###Markdown
Let's check that our definition of the factorial works as expected.
###Code
for i in range(10):
print(i, '! = ', factorial(i), sep='')
###Output
0! = 1
1! = 1
2! = 2
3! = 6
4! = 24
5! = 120
6! = 720
7! = 5040
8! = 40320
9! = 362880
###Markdown
Let's approximate $e$ by the following sum:$$ e \approx \sum\limits_{i=0}^n \frac{1}{i!} $$Setting $n=100$ should be sufficient to compute $e$ to a hundred decimal places.
###Code
n = 100
e = 0
for i in range(n+1):
e += Fraction(1, factorial(i))
e
###Output
_____no_output_____
###Markdown
Multiply $e$ by $10^{100}$ and round so that we get the first 100 decimal places of $e$:
###Code
eTimesBig = e * 10 ** n
s = str(round(eTimesBig))
###Output
_____no_output_____
###Markdown
Insert a '.' after the first digit:
###Code
print(s[0], '.', s[1:], sep='')
###Output
2.7182818284590452353602874713526624977572470936999595749669676277240766303535475945713821785251664274
###Markdown
Computing with Unlimited Precision *Python* provides the module fractions that implements *rational numbers* through the function Fraction that is implemented in this module. We can load this function as follows:
###Code
from fractions import Fraction
###Output
_____no_output_____
###Markdown
The function Fraction expects two arguments, the *numerator* and the *denominator*. Mathematically, we have$$ \texttt{Fraction}(p, q) = \frac{p}{q}. $$For example, we can compute the sum $\frac{1}{2} + \frac{1}{3}$ as follows:
###Code
sum = Fraction(1, 2) + Fraction(1, 3)
print(sum)
1/2 + 1/3
###Output
_____no_output_____
###Markdown
Let us compute Euler's number $e$. The easiest way to compute $e$ is as an infinite series. We have that$$ e = \sum\limits_{n=0}^\infty \frac{1}{n!}. $$Here $n!$ denotes the *factorial* of $n$, which is defined as follows:$$ n! = 1 \cdot 2 \cdot 3 \cdot {\dots} \cdot n. $$The function `factorial` takes a natural number `n` and returns `n!`.
###Code
def factorial(n):
"returns the factorial of n"
result = 1
for i in range(1, n+1):
result *= i
return result
###Output
_____no_output_____
###Markdown
Let's check that our definition of the factorial works as expected.
###Code
for i in range(10):
print(i, '! = ', factorial(i), sep='')
###Output
_____no_output_____
###Markdown
Let's approximate $e$ by the following sum:$$ e \approx \sum\limits_{i=0}^n \frac{1}{i!} $$Setting $n=100$ should be sufficient to compute $e$ to a hundred decimal places.
###Code
n = 100
e = 0
for i in range(n+1):
e += Fraction(1, factorial(i))
print(e)
###Output
_____no_output_____
###Markdown
As a fraction, that result is not helpful. Let us convert it into a floating point representation bymultiply $e$ by $10^{100}$ and rounding so that we get the first 100 decimal places of $e$:
###Code
eTimesBig = e * 10 ** n
eTimesBig
s = str(round(eTimesBig))
s
###Output
_____no_output_____
###Markdown
Insert a '.' after the first digit:
###Code
print(s[0], '.', s[1:], sep='')
###Output
_____no_output_____
###Markdown
Computing with Unlimited Precision *Python* provides the module fractions that implements *rational numbers* through the function Fraction that is implemented in this module. We can load this function as follows:
###Code
from fractions import Fraction
###Output
_____no_output_____
###Markdown
The function Fraction expects two arguments, the *numerator* and the *denominator*. Mathematically, we have$$ \texttt{Fraction}(p, q) = \frac{p}{q}. $$For example, we can compute the sum $\frac{1}{2} + \frac{1}{3}$ as follows:
###Code
sum = Fraction(1, 2) + Fraction(1, 3)
print(sum)
###Output
_____no_output_____
###Markdown
Let us compute Euler's number $e$. The easiest way to compute $e$ is as an infinite series. We have that$$ e = \sum\limits_{n=0}^\infty \frac{1}{n!}. $$Here $n!$ denotes the *factorial* of $n$, which is defined as follows:$$ n! = 1 \cdot 2 \cdot 3 \cdot {\dots} \cdot n. $$The function `factorial` takes a natural number `n` and returns `n!`.
###Code
def factorial(n):
"returns the factorial of n"
result = 1
for i in range(1, n+1):
result *= i
return result
###Output
_____no_output_____
###Markdown
Let's check that our definition of the factorial works as expected.
###Code
for i in range(10):
print(i, '! = ', factorial(i), sep='')
###Output
_____no_output_____
###Markdown
Let's approximate $e$ by the following sum:$$ e \approx \sum\limits_{i=0}^n \frac{1}{i!} $$Setting $n=100$ should be sufficient to compute $e$ to a hundred decimal places.
###Code
n = 100
e = 0
for i in range(n+1):
e += Fraction(1, factorial(i))
print(e)
###Output
_____no_output_____
###Markdown
Multiply $e$ by $10^{100}$ and round so that we get the first 100 decimal places of $e$:
###Code
eTimesBig = e * 10 ** n
s = str(round(eTimesBig))
###Output
_____no_output_____
###Markdown
Insert a '.' after the first digit:
###Code
print(s[0], '.', s[1:], sep='')
###Output
_____no_output_____
|
handson-ml/Housing.ipynb
|
###Markdown
California housing prices [Func] for fetching the Housing Data
###Code
import os
import tarfile
from six.moves import urllib
DOWNLOAD_ROOT = "https://raw.githubusercontent.com/ageron/handson-ml/master/"
HOUSING_PATH = "datasets/housing"
HOUSING_URL = DOWNLOAD_ROOT + HOUSING_PATH + "/housing.tgz"
def fetch_housing_data(housing_url=HOUSING_URL, housing_path=HOUSING_PATH):
if not os.path.isdir(housing_path):
os.makedirs(housing_path)
tgz_path = os.path.join(housing_path, "housing.tgz")
#urllib.request.urlretrieve(housing_url, tgz_path)
housing_tgz = tarfile.open(tgz_path)
housing_tgz.extractall(path=housing_path)
housing_tgz.close()
###Output
_____no_output_____
###Markdown
Downloading the data
###Code
fetch_housing_data()
###Output
_____no_output_____
###Markdown
[Func] for Loading the data
###Code
import pandas as pd
def load_housing_data(housing_path=HOUSING_PATH):
csv_path = os.path.join(housing_path, "housing.csv")
return pd.read_csv(csv_path)
###Output
_____no_output_____
###Markdown
Loading the data
###Code
housing = load_housing_data()
###Output
_____no_output_____
###Markdown
Quick look at the data
###Code
housing.head()
###Output
_____no_output_____
###Markdown
Quick description of the data
###Code
housing.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 20640 entries, 0 to 20639
Data columns (total 10 columns):
longitude 20640 non-null float64
latitude 20640 non-null float64
housing_median_age 20640 non-null float64
total_rooms 20640 non-null float64
total_bedrooms 20433 non-null float64
population 20640 non-null float64
households 20640 non-null float64
median_income 20640 non-null float64
median_house_value 20640 non-null float64
ocean_proximity 20640 non-null object
dtypes: float64(9), object(1)
memory usage: 1.6+ MB
###Markdown
Show attr categories
###Code
housing['ocean_proximity'].value_counts()
###Output
_____no_output_____
###Markdown
summary of the numerical attributes
###Code
housing.describe()
###Output
_____no_output_____
###Markdown
Plot a histogram for each numerical attribute.
###Code
# only in a Jupyter notebook
%matplotlib inline
import matplotlib.pyplot as plt
housing.hist(bins=50, figsize=(20,15))
plt.show()
###Output
_____no_output_____
###Markdown
Create a Test Set
###Code
import numpy as np
def split_train_test(data, test_ratio):
np.random.seed(42)
shuffled_indices = np.random.permutation(len(data))
test_set_size = int(len(data) * test_ratio)
test_indices = shuffled_indices[:test_set_size]
train_indices = shuffled_indices[test_set_size:]
return data.iloc[train_indices], data.iloc[test_indices]
train_set, test_set = split_train_test(housing, 0.2)
print(len(train_set), "train +", len(test_set), "test")
###Output
16512 train + 4128 test
###Markdown
Use each instance’s identifier to decide whether or not it should go in the test set
###Code
import hashlib
def test_set_check(identifier, test_ratio, hash):
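    # The last byte of the MD5 digest is (roughly) uniform over 0-255, so an instance
    # lands in the test set with probability ~test_ratio, and the assignment is stable
    # across runs because it depends only on the identifier.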
return hash(np.int64(identifier)).digest()[-1] < 256 * test_ratio
def split_train_test_by_id(data, test_ratio, id_column, hash=hashlib.md5):
ids = data[id_column]
in_test_set = ids.apply(lambda id_: test_set_check(id_, test_ratio, hash))
return data.loc[~in_test_set], data.loc[in_test_set]
###Output
_____no_output_____
###Markdown
Using the row index as the ID
###Code
housing_with_id = housing.reset_index() # adds an `index` column
train_set, test_set = split_train_test_by_id(housing_with_id, 0.2, "index")
###Output
_____no_output_____
###Markdown
Using district’s latitude and longitude as the ID | BETTER as it's more stable
###Code
housing_with_id["id"] = housing["longitude"] * 1000 + housing["latitude"]
train_set, test_set = split_train_test_by_id(housing_with_id, 0.2, "id")
###Output
_____no_output_____
###Markdown
Using Scikit-Learn functions to split the data
###Code
from sklearn.model_selection import train_test_split
train_set, test_set = train_test_split(housing, test_size=0.2, random_state=42)
###Output
_____no_output_____
###Markdown
Median Income
###Code
housing['median_income'].hist()
###Output
_____no_output_____
###Markdown
income category attribute
###Code
housing["income_cat"] = np.ceil(housing["median_income"] / 1.5)
housing["income_cat"].where(housing["income_cat"] < 5, 5.0, inplace=True)
housing["income_cat"].hist()
###Output
_____no_output_____
###Markdown
stratified sampling
###Code
from sklearn.model_selection import StratifiedShuffleSplit
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_index, test_index in split.split(housing, housing["income_cat"]):
strat_train_set = housing.loc[train_index]
strat_test_set = housing.loc[test_index]
housing["income_cat"].value_counts() / len(housing)
###Output
_____no_output_____
###Markdown
remove the income_cat attribute so the data is back to its original state
###Code
for set in (strat_train_set, strat_test_set):
set.drop(["income_cat"], axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
Discover and Visualize the Data to Gain Insights
###Code
housing = strat_train_set.copy()
###Output
_____no_output_____
###Markdown
Visualizing Geographical Data
###Code
housing.plot(kind="scatter", x="longitude", y="latitude")
# visualize the places where there is a high density of data points
housing.plot(kind="scatter", x="longitude", y="latitude", alpha=0.1)
###Output
_____no_output_____
###Markdown
housing prices
###Code
housing.plot(kind="scatter", x="longitude", y="latitude", alpha=0.1,
s=housing["population"]/100, label="population",
c="median_house_value", cmap=plt.get_cmap("jet"), colorbar=True,)
plt.legend()
###Output
_____no_output_____
###Markdown
Looking for Correlations
###Code
corr_matrix = housing.corr()
corr_matrix["median_house_value"].sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
Using Pandas’ scatter_matrix function- which plots every numerical attribute against every other numerical attribute
###Code
from pandas.tools.plotting import scatter_matrix
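# Note (added): in newer pandas versions scatter_matrix lives in pandas.plotting,
# hence the FutureWarning shown in the output below.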
attributes = ["median_house_value", "median_income", "total_rooms", "housing_median_age"]
scatter_matrix(housing[attributes], figsize=(12, 8))
###Output
/Users/amrmkayid/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:4: FutureWarning: 'pandas.tools.plotting.scatter_matrix' is deprecated, import 'pandas.plotting.scatter_matrix' instead.
after removing the cwd from sys.path.
###Markdown
zoom in on correlation scatterplot between median house value & the median income
###Code
housing.plot(kind="scatter", x="median_income", y="median_house_value", alpha=0.1)
###Output
_____no_output_____
###Markdown
Experimenting with Attribute Combinations
###Code
housing["rooms_per_household"] = housing["total_rooms"] / housing["households"]
housing["bedrooms_per_room"] = housing["total_bedrooms"] / housing["total_rooms"]
housing["population_per_household"]=housing["population"] / housing["households"]
corr_matrix = housing.corr()
corr_matrix["median_house_value"].sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
Prepare the Data for Machine Learning Algorithms separate the predictors and the labels
###Code
housing = strat_train_set.drop("median_house_value", axis=1) # drop labels for training set
housing_labels = strat_train_set["median_house_value"].copy()
###Output
_____no_output_____
###Markdown
Data Cleaning
###Code
sample_incomplete_rows = housing[housing.isnull().any(axis=1)].head()
sample_incomplete_rows
sample_incomplete_rows.dropna(subset=["total_bedrooms"]) # option 1
sample_incomplete_rows.drop("total_bedrooms", axis=1) # option 2
median = housing["total_bedrooms"].median()
sample_incomplete_rows["total_bedrooms"].fillna(median, inplace=True) # option 3
sample_incomplete_rows
###Output
_____no_output_____
###Markdown
Using sklearn Imputer
###Code
from sklearn.preprocessing import Imputer
imputer = Imputer(strategy="median")
# Dropping the non-numerical attribute
housing_num = housing.drop("ocean_proximity", axis=1)
imputer.fit(housing_num)
print(imputer.statistics_)
print(housing_num.median().values)
X = imputer.transform(housing_num)
housing_tr = pd.DataFrame(X, columns=housing_num.columns, index = list(housing.index.values))
housing_tr.loc[sample_incomplete_rows.index.values]
imputer.strategy
housing_tr = pd.DataFrame(X, columns=housing_num.columns)
housing_tr.head()
###Output
_____no_output_____
###Markdown
convert these text labels to numbers.
###Code
housing_cat = housing[['ocean_proximity']]
housing_cat.head(10)
from future_encoders import OrdinalEncoder
ordinal_encoder = OrdinalEncoder()
housing_cat_encoded = ordinal_encoder.fit_transform(housing_cat)
housing_cat_encoded[:10]
ordinal_encoder.categories_
###Output
_____no_output_____
###Markdown
one-hot encoding
###Code
# from sklearn.preprocessing import OneHotEncoder -> Error!!!
from future_encoders import OneHotEncoder
encoder = OneHotEncoder()
housing_cat_1hot = encoder.fit_transform(housing_cat_encoded.reshape(-1,1))
## a sparse matrix only stores the location of the nonzero elements.
housing_cat_1hot
housing_cat_1hot.toarray()
###Output
_____no_output_____
###Markdown
Using Sklearn LabelBinarizer | text categories -> integer categories -> one-hot vectors
###Code
from sklearn.preprocessing import LabelBinarizer
encoder = LabelBinarizer()
housing_cat_1hot = encoder.fit_transform(housing_cat)
print(housing_cat_1hot)
###Output
[[1 0 0 0 0]
[1 0 0 0 0]
[0 0 0 0 1]
...
[0 1 0 0 0]
[1 0 0 0 0]
[0 0 0 1 0]]
###Markdown
Custom Transformers
###Code
from sklearn.base import BaseEstimator, TransformerMixin
rooms_ix, bedrooms_ix, population_ix, household_ix = 3, 4, 5, 6
class CombinedAttributesAdder(BaseEstimator, TransformerMixin):
"""
a small transformer class that adds the combined attributes
"""
def __init__(self, add_bedrooms_per_room = True): # no *args or **kargs
self.add_bedrooms_per_room = add_bedrooms_per_room
def fit(self, X, y=None):
return self # nothing else to do
def transform(self, X, y=None):
rooms_per_household = X[:, rooms_ix] / X[:, household_ix]
population_per_household = X[:, population_ix] / X[:, household_ix]
if self.add_bedrooms_per_room:
bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix]
return np.c_[X, rooms_per_household, population_per_household, bedrooms_per_room]
else:
return np.c_[X, rooms_per_household, population_per_household]
attr_adder = CombinedAttributesAdder(add_bedrooms_per_room=False)
housing_extra_attribs = attr_adder.transform(housing.values)
housing_extra_attribs = pd.DataFrame(
housing_extra_attribs,
columns=list(housing.columns)+["rooms_per_household", "population_per_household"])
housing_extra_attribs.head()
###Output
_____no_output_____
###Markdown
Feature Scaling- **min-max scaling (normalization)**: > values are shifted and rescaled so that they end up ranging from 0 to 1- **standardization**: > it subtracts the mean value (so standardized values always have a zero mean), and then it divides by the standard deviation so that the resulting distribution has unit variance. Unlike min-max scaling, standardization does not bound values to a specific range (a short comparison example follows the pipeline cell below) Transformation Pipelines- many data transformation steps that need to be executed in the right order
###Code
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
num_pipeline = Pipeline([
('imputer', Imputer(strategy="median")),
('attribs_adder', CombinedAttributesAdder()),
('std_scaler', StandardScaler()),
])
housing_num_tr = num_pipeline.fit_transform(housing_num)
housing_num_tr
###Output
_____no_output_____
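###Markdown
 To make the min-max vs. standardization comparison above concrete, the optional sketch below applies both scalers to the numerical attributes. This cell is not part of the original workflow; it assumes `MinMaxScaler` from scikit-learn and reuses the `housing_num` dataframe defined earlier (missing values are filled with the median only for this comparison).
###Code
# Optional sketch: compare min-max scaling and standardization side by side.
# Not part of the main pipeline; NaNs are filled only for this illustration.
from sklearn.preprocessing import MinMaxScaler

housing_num_filled = housing_num.fillna(housing_num.median())
minmax_scaled = MinMaxScaler().fit_transform(housing_num_filled)
std_scaled = StandardScaler().fit_transform(housing_num_filled)

print("min-max scaled range :", minmax_scaled.min(), "to", minmax_scaled.max())
print("standardized mean/std:", std_scaled.mean().round(3), "/", std_scaled.std().round(3))
###Output
_____no_output_____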
###Markdown
Joining these transformations into a single pipeline OLD Method
###Code
# from sklearn.base import BaseEstimator, TransformerMixin
# # Create a class to select numerical or categorical columns
# class OldDataFrameSelector(BaseEstimator, TransformerMixin):
# """
# transforms the data by selecting the desired attributes (numerical or categorical),
# dropping the rest, and con‐ verting the resulting DataFrame to a NumPy array
# """
# def __init__(self, attribute_names):
# self.attribute_names = attribute_names
# def fit(self, X, y=None):
# return self
# def transform(self, X):
# return X[self.attribute_names].values
# from sklearn.pipeline import FeatureUnion
# num_attribs = list(housing_num)
# cat_attribs = ["ocean_proximity"]
# num_pipeline = Pipeline([
# ('selector', OldDataFrameSelector(num_attribs)),
# ('imputer', Imputer(strategy="median")),
# ('attribs_adder', CombinedAttributesAdder()),
# ('std_scaler', StandardScaler()),
# ])
# cat_pipeline = Pipeline([
# ('selector', OldDataFrameSelector(cat_attribs)),
# ('label_binarizer', LabelBinarizer()),
# ])
# full_pipeline = FeatureUnion(transformer_list=[
# ("num_pipeline", num_pipeline),
# ("cat_pipeline", cat_pipeline),
# ])
# housing_prepared = full_pipeline.fit_transform(housing)
# housing_prepared
# housing_prepared.shape
from future_encoders import ColumnTransformer
num_attribs = list(housing_num)
cat_attribs = ["ocean_proximity"]
full_pipeline = ColumnTransformer([
("num", num_pipeline, num_attribs),
("cat", OneHotEncoder(), cat_attribs),
])
housing_prepared = full_pipeline.fit_transform(housing)
housing_prepared
housing_prepared.shape
###Output
_____no_output_____
###Markdown
Select and Train a Model train a Linear Regression model
###Code
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(housing_prepared, housing_labels)
some_data = housing.iloc[:5]
some_labels = housing_labels.iloc[:5]
some_data_prepared = full_pipeline.transform(some_data)
print("Predictions:\t", lin_reg.predict(some_data_prepared))
print("Labels:\t\t", list(some_labels))
###Output
Predictions: [210644.60459286 317768.80697211 210956.43331178 59218.98886849
189747.55849879]
Labels: [286600.0, 340600.0, 196900.0, 46300.0, 254500.0]
###Markdown
measure this regression model’s RMSE on the whole training set
###Code
from sklearn.metrics import mean_squared_error
housing_predictions = lin_reg.predict(housing_prepared)
lin_mse = mean_squared_error(housing_labels, housing_predictions)
lin_rmse = np.sqrt(lin_mse)
lin_rmse
###Output
_____no_output_____
###Markdown
train a Decision Tree
###Code
from sklearn.tree import DecisionTreeRegressor
tree_reg = DecisionTreeRegressor()
tree_reg.fit(housing_prepared, housing_labels)
housing_predictions = tree_reg.predict(housing_prepared)
tree_mse = mean_squared_error(housing_labels, housing_predictions)
tree_rmse = np.sqrt(tree_mse)
tree_rmse
###Output
_____no_output_____
###Markdown
Better Evaluation Using Cross-Validation
###Code
def display_scores(scores):
print("Scores:", scores)
print("Mean:", scores.mean())
print("Standard deviation:", scores.std())
from sklearn.model_selection import cross_val_score
scores = cross_val_score(tree_reg, housing_prepared, housing_labels,
scoring="neg_mean_squared_error", cv=10)
tree_rmse_scores = np.sqrt(-scores)
display_scores(tree_rmse_scores)
lin_scores = cross_val_score(lin_reg, housing_prepared, housing_labels, scoring="neg_mean_squared_error", cv=10)
lin_rmse_scores = np.sqrt(-lin_scores)
display_scores(lin_rmse_scores)
###Output
Scores: [66782.73843989 66960.118071 70347.95244419 74739.57052552
68031.13388938 71193.84183426 64969.63056405 68281.61137997
71552.91566558 67665.10082067]
Mean: 69052.46136345083
Standard deviation: 2731.6740017983466
###Markdown
train a Random Forests
###Code
from sklearn.ensemble import RandomForestRegressor
forest_reg = RandomForestRegressor()
forest_reg.fit(housing_prepared, housing_labels)
forest_scores = cross_val_score(forest_reg, housing_prepared, housing_labels, scoring="neg_mean_squared_error", cv=10)
forest_rmse_scores = np.sqrt(-forest_scores)
display_scores(forest_rmse_scores)
###Output
Scores: [53073.57731498 49962.80734452 52759.78493505 55399.45224316
51447.82563437 55953.1626352 50864.71587983 50990.66851181
56408.41135998 53314.06594996]
Mean: 53017.44718088476
Standard deviation: 2156.0416203198347
###Markdown
Fine-Tune Your Model
###Code
from sklearn.model_selection import GridSearchCV
param_grid = [
{'n_estimators': [3, 10, 30], 'max_features': [2, 4, 6, 8]},
{'bootstrap': [False], 'n_estimators': [3, 10], 'max_features': [2, 3, 4]},
]
forest_reg = RandomForestRegressor()
grid_search = GridSearchCV(forest_reg, param_grid, cv=5,
scoring='neg_mean_squared_error')
grid_search.fit(housing_prepared, housing_labels)
grid_search.best_params_
grid_search.best_estimator_
###Output
_____no_output_____
###Markdown
evaluation scores
###Code
cvres = grid_search.cv_results_
for mean_score, params in zip(cvres["mean_test_score"], cvres["params"]):
print(np.sqrt(-mean_score), params)
###Output
63874.55184826077 {'max_features': 2, 'n_estimators': 3}
55367.0631114183 {'max_features': 2, 'n_estimators': 10}
53069.765518605265 {'max_features': 2, 'n_estimators': 30}
60042.79293326827 {'max_features': 4, 'n_estimators': 3}
52646.24693263773 {'max_features': 4, 'n_estimators': 10}
50537.41874010012 {'max_features': 4, 'n_estimators': 30}
59262.55074431904 {'max_features': 6, 'n_estimators': 3}
52048.21439722854 {'max_features': 6, 'n_estimators': 10}
49831.09951390634 {'max_features': 6, 'n_estimators': 30}
58751.149380061135 {'max_features': 8, 'n_estimators': 3}
51795.58618618918 {'max_features': 8, 'n_estimators': 10}
49987.02713856517 {'max_features': 8, 'n_estimators': 30}
61694.91758626871 {'bootstrap': False, 'max_features': 2, 'n_estimators': 3}
54103.963872844164 {'bootstrap': False, 'max_features': 2, 'n_estimators': 10}
60449.68709120704 {'bootstrap': False, 'max_features': 3, 'n_estimators': 3}
52716.44869720821 {'bootstrap': False, 'max_features': 3, 'n_estimators': 10}
59535.36204899333 {'bootstrap': False, 'max_features': 4, 'n_estimators': 3}
51442.278894879084 {'bootstrap': False, 'max_features': 4, 'n_estimators': 10}
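###Markdown
 As an alternative to the exhaustive grid search above, a randomized search explores the hyperparameter space by sampling. The sketch below is illustrative only — the sampling distributions and `n_iter` value are assumptions, not settings taken from this notebook, and `scipy` is assumed to be available.
###Code
# Illustrative sketch: randomized hyperparameter search (distributions are assumptions).
from scipy.stats import randint
from sklearn.model_selection import RandomizedSearchCV

param_distribs = {
    'n_estimators': randint(low=1, high=200),
    'max_features': randint(low=1, high=8),
}

rnd_search = RandomizedSearchCV(RandomForestRegressor(), param_distributions=param_distribs,
                                n_iter=10, cv=5, scoring='neg_mean_squared_error',
                                random_state=42)
rnd_search.fit(housing_prepared, housing_labels)

cvres_rnd = rnd_search.cv_results_
for mean_score, params in zip(cvres_rnd["mean_test_score"], cvres_rnd["params"]):
    print(np.sqrt(-mean_score), params)
###Output
_____no_output_____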
###Markdown
Analyze the Best Models and Their Errors
###Code
feature_importances = grid_search.best_estimator_.feature_importances_
feature_importances
extra_attribs = ["rooms_per_hhold", "pop_per_hhold", "bedrooms_per_room"]
cat_one_hot_attribs = list(encoder.classes_)
attributes = num_attribs + extra_attribs + cat_one_hot_attribs
sorted(zip(feature_importances, attributes), reverse=True)
###Output
_____no_output_____
###Markdown
Evaluate Your System on the Test Set
###Code
final_model = grid_search.best_estimator_
X_test = strat_test_set.drop("median_house_value", axis=1)
y_test = strat_test_set["median_house_value"].copy()
X_test_prepared = full_pipeline.transform(X_test)
final_predictions = final_model.predict(X_test_prepared)
final_mse = mean_squared_error(y_test, final_predictions)
final_rmse = np.sqrt(final_mse) # => evaluates to 48,209.6
final_rmse
###Output
_____no_output_____
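###Markdown
 The test RMSE above is a single point estimate. A rough way to gauge its precision is a 95% confidence interval computed from the per-instance squared errors — the sketch below assumes `scipy` is available and is not part of the original notebook.
###Code
# Sketch: approximate 95% confidence interval for the generalization RMSE (scipy assumed).
from scipy import stats

confidence = 0.95
squared_errors = (final_predictions - y_test) ** 2
ci = np.sqrt(stats.t.interval(confidence, len(squared_errors) - 1,
                              loc=squared_errors.mean(),
                              scale=stats.sem(squared_errors)))
print(ci)
###Output
_____no_output_____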
|
sprite_sheets/lab.ipynb
|
###Markdown
 Lab - Correlation Analysis in Python Objectives- Part 1: The Dataset- Part 2: Scatterplot Graphs and Correlatable Variables- Part 3: Calculating Correlation with Python- Part 4: Visualizing Scenario/BackgroundCorrelation is an important statistical relationship that can indicate whether the variable values are linearly related.In this lab, you will learn how to use Python to calculate correlation. In Part 1, you will setup the dataset. In Part 2, you will learn how to identify if the variables in a given dataset are correlatable. Finally, in Part 3, you will use Python to calculate the correlation between two sets of variable. Required Resources* 1 PC with Internet access* Raspberry Pi version 2 or higher* Python libraries: pandas, numpy, matplotlib, seaborn* Datafiles: brainsize.txt Part 1: The Dataset You will use a dataset that contains a sample of 40 right-handed Anglo Introductory Psychology students at a large Southwestern university. Subjects took four subtests (Vocabulary, Similarities, Block Design, and Picture Completion) of the Wechsler (1981) Adult Intelligence Scale-Revised. The researchers used Magnetic Resonance Imaging (MRI) to determine the brain size of the subjects. Information about gender and body size (height and weight) are also included. The researchers withheld the weights of two subjects and the height of one subject for reasons of confidentiality.Two simple modifications were applied to the dataset:1. Replace the quesion marks used to represent the withheld data points described above by the 'NaN' string. The substitution was done because Pandas does not handle the question marks correctly.2. Replace all tab characters with commas, converting the dataset into a CSV dataset.The prepared dataset is saved as `brainsize.txt`. Step 1: Loading the Dataset From a File.Before the dataset can be used, it must be loaded onto memory.In the code below, The first line imports the `pandas` modules and defines `pd` as a descriptor that refers to the module.The second line loads the dataset CSV file into a variable called `brainFile`.The third line uses `read_csv()`, a `pandas` method, to convert the CSV dataset stored in `brainFile` into a dataframe. The dataframe is then stored in the `brainFrame` variable.Run the cell below to execute the described functions.
###Code
# Code cell 1
import pandas as pd
brainFile = './Data/brainsize.txt'
brainFrame = pd.read_csv(brainFile)
###Output
_____no_output_____
###Markdown
Step 2: Verifying the dataframe.To make sure the dataframe has been correctly loaded and created, use the `head()` method. Another Pandas method, `head()` displays the first five entries of a dataframe.
###Code
# Code cell 2
brainFrame.head()
###Output
_____no_output_____
###Markdown
Part 2: Scatterplot Graphs and Correlatable Variables Step 1: The pandas `describe()` method.The pandas module includes the `describe()` method which performs some common calculations against a given dataset. In addition to providing common results including count, mean, standard deviation, minimum, and maximum, `describe()` is also a great way to quickly test the validity of the values in the dataframe.Run the cell below to output the results computed by `describe()` against the `brainFrame` dataframe.
###Code
# Code cell 3
brainFrame.describe()
###Output
_____no_output_____
###Markdown
Step 2: Scatterplot graphsScatterplot graphs are important when working with correlations as they allow for a quick visual verification of the nature of the relationship between the variables. This lab uses the Pearson correlation coefficient, which is sensitive only to a linear relationship between two variables. Other more robust correlation methods exist but are out of the scope of this lab. a. Load the required modules.Before graphs can be plotted, it is necessary to import a few modules, namely `numpy` and `matplotlib`. Run the cell below to load these modules.
###Code
# Code cell 4
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
b. Separate the data.To ensure the results do not get skewed because of the differences in male and female bodies, the dataframe is split into two dataframes: one containing all male entries and another with only female instances. Running the cell below creates the two new dataframes, menDf and womenDf, each one containing the respective entries.
###Code
# Code cell 5
menDf = brainFrame[(brainFrame.Gender == 'Male')]
womenDf = brainFrame[(brainFrame.Gender == 'Female')]
###Output
_____no_output_____
###Markdown
c. Plot the graphs.Because the dataset includes three different measures of intelligence (PIQ, FSIQ, and VIQ), the first line below uses the Pandas `mean()` method to calculate the mean value between the three and store the result in the `menMeanSmarts` variable. Notice that the first line also refers to menDf, the filtered dataframe containing only male entries.The second line uses the `matplotlib` method `scatter()` to create a scatterplot graph between the `menMeanSmarts` variable and the `MRI_Count` attribute. The MRI_Count in this dataset can be thought of as a measure of the physical size of the subjects' brains.The third line simply displays the graph.The fourth line is used to ensure the graph will be displayed in this notebook.
###Code
# Code cell 6
menMeanSmarts = menDf[["PIQ", "FSIQ", "VIQ"]].mean(axis=1)
plt.scatter(menMeanSmarts, menDf["MRI_Count"])
plt.show()
%matplotlib inline
###Output
_____no_output_____
###Markdown
Similarly, the code below creates a scatterplot graph for the women-only filtered dataframe.
###Code
# Code cell 7
# Graph the women-only filtered dataframe
#womenMeanSmarts = ?
#plt.scatter(?, ?)
plt.show()
%matplotlib inline
###Output
_____no_output_____
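###Markdown
 One possible completion of the exercise above, mirroring the men-only cell, is sketched below; it is not part of the original lab handout.
###Code
# Possible solution sketch for Code cell 7 (not part of the original lab)
womenMeanSmarts = womenDf[["PIQ", "FSIQ", "VIQ"]].mean(axis=1)
plt.scatter(womenMeanSmarts, womenDf["MRI_Count"])
plt.show()
%matplotlib inline
###Output
_____no_output_____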
###Markdown
Part 3: Calculating Correlation with Python Step 1: Calculate correlation against brainFrame.The pandas `corr()` method provides an easy way to calculate correlation against a dataframe. By simply calling the method against a dataframe, one can get the correlation between all variables at the same time.
###Code
# Code cell 8
brainFrame.corr(method='pearson')
###Output
_____no_output_____
###Markdown
Notice the left-to-right diagonal in the correlation table generated above. Why is the diagonal filled with 1s? Is that a coincidence? Explain. Still looking at the correlation table above, notice that the values are mirrored; values below the 1 diagonal have a mirrored counterpart above the 1 diagonal. Is that a coincidence? Explain. Using the same `corr()` method, it is easy to calculate the correlation of the variables contained in the female-only dataframe:
###Code
# Code cell 9
womenDf.corr(method='pearson')
###Output
_____no_output_____
###Markdown
And the same can be done for the male-only dataframe:
###Code
# Code cell 10
# Use corr() for the male-only dataframe with the pearson method
#?.corr(?)
###Output
_____no_output_____
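###Markdown
 One possible completion of the exercise above, mirroring the female-only cell, is sketched below; it is not part of the original lab handout.
###Code
# Possible solution sketch for Code cell 10 (not part of the original lab)
menDf.corr(method='pearson')
###Output
_____no_output_____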
###Markdown
Part 4: Visualizing Step 1: Install Seaborn.To make it easier to visualize the data correlations, heatmap graphs can be used. Based on colored squares, heatmap graphs can help identify correlations at a glance.The Python module named `seaborn` makes it very easy to plot heatmap graphs.First, run the cell below to download and install the `seaborn` module.
###Code
# Code cell 11
!pip install seaborn
###Output
_____no_output_____
###Markdown
Step 2: Plot the correlation heatmap.Now that the dataframes are ready, the heatmaps can be plotted. Below is a breakdown of the code in the cell below:Line 1: Generates a correlation table based on the `womenDf` dataframe and stores it in `wcorr`.Line 2: Uses the `seaborn` `heatmap()` method to generate and plot the heatmap. Notice that `heatmap()` takes `wcorr` as a parameter.Line 3: Used to export and save the generated heatmap as a PNG image. While line 3 is not active (it has the comment `#` character preceding it, forcing the interpreter to ignore it), it was kept for informational purposes.
###Code
# Code cell 12
import seaborn as sns
wcorr = womenDf.corr()
sns.heatmap(wcorr)
#plt.savefig('attribute_correlations.png', tight_layout=True)
###Output
_____no_output_____
###Markdown
Similarly, the code below creates and plots a heatmap for the male-only dataframe.
###Code
# Code cell 14
mcorr = menDf.corr()
sns.heatmap(mcorr)
#plt.savefig('attribute_correlations.png', tight_layout=True)
###Output
_____no_output_____
|
src/Matching/.ipynb_checkpoints/Delete_Matching-checkpoint.ipynb
|
###Markdown
Reading in the datasets
###Code
dna = pd.read_csv("../data/working/validcompaniesdictionary.csv", index_col = [0])
fda = pd.read_excel("../data/original/fda_companies.xlsx")
ndc = pd.read_excel("../data/original/BI DSPG Company Datasets/NDC_Company_Dataset.xls")
###Output
_____no_output_____
###Markdown
Neil's code for cleaning
###Code
removeset=string.punctuation
removeset=removeset.replace("-","") #Don't remove dashes
removeset=removeset.replace("&","") #Don't remove ampersand
removeset=removeset.replace("_","") #Don't remove underscore
removeset=removeset.replace("%","") #Don't remove percent
removeset=removeset.replace("$","") #Don't remove dollar
print(removeset)
# remove all single characters (This step is done first, because later there are single chars we want to retain.)
#document = re.sub(r'\s+[a-zA-Z]\s+', ' ', str(X[sen]))
string = "hello i world"
string = re.sub(r'\s+[a-zA-Z]\s+', ' ', string)
print(string)
# remove all numbers
#document = re.sub(r'[0-9]','', document)
string2 = "h3ll0"
string2 = re.sub(r'[0-9]','', string2)
print(string2)
# Substituting multiple spaces with single space
#document = re.sub(r'\s+', ' ', document, flags=re.I)
string3 = "hello    world"
string3 = re.sub(r'\s+', ' ', string3, flags=re.I)
print(string3)
#Converting to lowercase
string4 = "HEllo WORLD"
string4 = string4.lower()
print(string4)
#Removing prefixed 'b'
#document = re.sub(r'^b\s+', '', document)
string5 = "b hello world"
string5 = re.sub(r'^b\s+', '', string5)
print(string5)
#Make dashes into combined words
#document = re.sub(r'\s-\s+', '-', document)
string6 = "hello - world"
string6 = re.sub(r'\s-\s+', '-', string6)
print(string6)
#Make ampersand into combined words
#document = re.sub(r'\s&\s+', '&', document)
string7 = "hel & lo & world"
string7 = re.sub(r'\s&\s+', '&', string7)
print(string7)
#Make underscore into combined words
#document = re.sub(r'\s_\s+', '_', document)
string8 = "hel _ lo wo _ rld"
string8 = re.sub(r'\s_\s+', '_', string8)
print(string8)
#removes all punctuation in string that is in removeset
document = "Johnson+;Johnson!"
for i in removeset:
document=re.sub(re.escape(i),"",document)
print(document)
###Output
JohnsonJohnson
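###Markdown
 The individual regex steps demonstrated above can be chained into a single helper. The function below is an illustrative sketch that simply combines them in the order shown; the name `clean_company_name` is hypothetical and the function is not part of the original notebook.
###Code
# Illustrative sketch: chain the cleaning steps demonstrated above into one helper.
# The helper name clean_company_name is hypothetical.
def clean_company_name(document):
    document = re.sub(r'\s+[a-zA-Z]\s+', ' ', document)  # remove single characters
    document = re.sub(r'[0-9]', '', document)            # remove numbers
    document = re.sub(r'\s+', ' ', document, flags=re.I)  # collapse multiple spaces
    document = document.lower()                           # lowercase
    document = re.sub(r'^b\s+', '', document)             # remove prefixed 'b'
    document = re.sub(r'\s-\s+', '-', document)           # join dash-separated words
    document = re.sub(r'\s&\s+', '&', document)           # join ampersand-separated words
    document = re.sub(r'\s_\s+', '_', document)           # join underscore-separated words
    for i in removeset:                                    # strip remaining punctuation
        document = re.sub(re.escape(i), "", document)
    return document

print(clean_company_name("Johnson & Johnson, Inc."))
###Output
_____no_output_____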
###Markdown
Daniel's Code for Cleaning NDC
###Code
def eraseFromColumn(inputColumn, eraseList):
"iteratively delete regex query matches from input list"
"""
inputColumn -- a column from a pandas dataframe, this will be the set of
target words/entries that deletions will be made from
eraseList -- a column containing strings (regex expressions) which will be
deleted from the inputColumn, in an iterative fashion
"""
eraseList['changeNum'] = 0
eraseList['changeIndexes'] = ''
inputColumn = inputColumn.replace(regex=True, to_replace = "\\\\", value='/')
for index, row in eraseList.iterrows():
curReplaceVal = row[0]
currentRegexExpression=re.compile(curReplaceVal)
CurrentBoolVec=inputColumn.str.contains(currentRegexExpression, na= False)
eraseList['changeIndexes'].iloc[index]=[i for i, x in enumerate(CurrentBoolVec) if x]
eraseList['changeNum'].iloc[index] = len(eraseList['changeIndexes'].iloc[index])
inputColumn.replace(regex=True, to_replace=currentRegexExpression,value='', inplace = True)
return inputColumn, eraseList
###Output
_____no_output_____
###Markdown
Cleaning NDC Removing the first 25 since they are just numbers
###Code
#Getting rid of the first 25 since those are just numbers
ndc = ndc.iloc[25:]
#renaming column
ndc = ndc.rename(columns = {'Row Labels':'company'})
###Output
_____no_output_____
###Markdown
Lowercase everything
###Code
#Converting to lower first
ndc.company = ndc.company.str.lower()
###Output
_____no_output_____
###Markdown
Get rid of () and {}
###Code
#Function that uses regex to remove parentheses and curly braces
def removeParenthesis(string):
return re.sub('[()\{}]', '', string)
#Here I am just making a new array that will hold the result of removing parentheses
#I will deleted the old row further down
companies = np.array([])
for row in ndc.itertuples():
companies = np.append(companies, removeParenthesis(row.company))
ndc['companies'] = companies
#Shows the changes
ndc[ndc.company.str.contains(r"[(){}]")]
del ndc['company']
###Output
_____no_output_____
###Markdown
Getting rid of rest of unnecessary punctuation
###Code
#Removing these from ndc dataset
removeset
#function that gets rid of unwanted punctuation
#This does get rid of ' within a string (ex. l'oreal becomes l oreal) so maybe recheck?
def removeUnwantedPunc(string):
return re.sub('[!"#\'()*+,./:;<=>?@[\]^`{|}~]', ' ', string)
company = np.array([])
for row in ndc.itertuples():
company = np.append(company, removeUnwantedPunc(row.companies))
ndc['company'] = company
del ndc['companies']
ndc.head()
###Output
_____no_output_____
###Markdown
Grabbing the list of legal entities from os github
###Code
legalEntities = pd.read_csv("https://raw.githubusercontent.com/DSPG-Young-Scholars-Program/dspg20oss/danBranch/ossPy/keyFiles/curatedLegalEntitesRaw.csv", header = None)
legalEntities.head()
###Output
_____no_output_____
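###Markdown
 A sketch of how `eraseFromColumn` might be applied as the next step — stripping the legal-entity patterns loaded above from the cleaned NDC company names. This usage is an assumption, not code from the original notebook.
###Code
# Sketch (assumed usage): remove legal-entity suffixes from the cleaned company names.
cleaned_companies, erase_report = eraseFromColumn(ndc['company'].copy(), legalEntities.copy())
ndc['company_clean'] = cleaned_companies
ndc[['company', 'company_clean']].head()
###Output
_____no_output_____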
|
nbs/tripletnet.ipynb
|
###Markdown
Implementation of a triplet network
###Code
#export
import torch
import torch.nn as nn
import torch.nn.functional as F
###Output
_____no_output_____
###Markdown
A triplet network takes in 3 inputs:- The query image- The positive image- The negative imageThese 3 images all pass through the same embedding extractor.The query image can be a cropped-out part of a parent image or an image very similar to the parent image in the embedding space. This parent image will be taken as the positive image because it is very similar to the query image in the embedding space. The negative image, however, is one that bears no embedding similarity to the positive image.NB: It is important to note, however, that there are many notions of similarity between items. Items may be similar in color or in brand or gender use etc. This raises the need for us to have conditions of similarity while building our architecture.We will be using a conditional similarity network to learn the different notions of similarity from the triplets while training
###Code
#export
class TripletNet(nn.Module):
def __init__(self, embeddingnet):
super(TripletNet, self).__init__()
        self.embeddingnet = embeddingnet
def forward(self, q, p, n, c):
"""
q: Query Image
p: Positive Image
n: Negative Image
c: Condition of similarity
"""
# it is important to normalize the embeddings while using triplets
        embedded_q, masknorm_norm_q, embed_norm_q, tot_embed_norm_q = self.embeddingnet(q, c)
        embedded_p, masknorm_norm_p, embed_norm_p, tot_embed_norm_p = self.embeddingnet(p, c)
        embedded_n, masknorm_norm_n, embed_norm_n, tot_embed_norm_n = self.embeddingnet(n, c)
mask_norm = (masknorm_norm_q + masknorm_norm_p + masknorm_norm_n) / 3
embed_norm = (embed_norm_q + embed_norm_p + embed_norm_n) / 3
mask_embed_norm = (tot_embed_norm_q + tot_embed_norm_p + tot_embed_norm_n) / 3
pos_dist = F.pairwise_distance(embedded_q, embedded_p, 2)
neg_dist = F.pairwise_distance(embedded_q, embedded_n, 2)
return pos_dist, neg_dist, mask_norm, embed_norm, mask_embed_norm
###Output
_____no_output_____
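###Markdown
 A sketch of how the distances returned by the forward pass could be turned into a training signal with a margin ranking criterion. The placeholder tensors below stand in for real `pos_dist` / `neg_dist` outputs, so treat this as an illustration rather than the project's actual training loop.
###Code
# Illustrative sketch: margin ranking loss on the distances returned by TripletNet.
criterion = nn.MarginRankingLoss(margin=0.2)

pos_dist = torch.tensor([0.3, 0.5, 0.2])  # placeholder query-positive distances
neg_dist = torch.tensor([0.9, 0.4, 0.8])  # placeholder query-negative distances

# target = -1 tells the criterion to push pos_dist below neg_dist by at least the margin
target = -torch.ones_like(pos_dist)
loss = criterion(pos_dist, neg_dist, target)
print(loss)
###Output
_____no_output_____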
|
transformer_chatbot_jp.ipynb
|
###Markdown
Transformer Chatbot
###Code
import tensorflow as tf
tf.random.set_seed(1234)
import tensorflow_datasets as tfds
import os
import re
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Prepare the dataset This tutorial uses the [Cornell Movie-Dialogs Corpus](https://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html), a dataset of conversations from movies and TV shows. It contains more than 220,000 conversational exchanges between more than 10,000 pairs of characters. `movie_conversations.txt` contains the list of conversation IDs and `movie_lines.txt` contains the text associated with each line ID. For more details about the dataset, see the README file included in the zip archive.
###Code
path_to_zip = tf.keras.utils.get_file(
'cornell_movie_dialogs.zip',
origin=
'http://www.cs.cornell.edu/~cristian/data/cornell_movie_dialogs_corpus.zip',
extract=True)
path_to_dataset = os.path.join(
os.path.dirname(path_to_zip), "cornell movie-dialogs corpus")
path_to_movie_lines = os.path.join(path_to_dataset, 'movie_lines.txt')
path_to_movie_conversations = os.path.join(path_to_dataset,
'movie_conversations.txt')
###Output
Downloading data from http://www.cs.cornell.edu/~cristian/data/cornell_movie_dialogs_corpus.zip
9920512/9916637 [==============================] - 2s 0us/step
###Markdown
Load and preprocess the data To keep this tutorial simple and fast, the maximum number of training samples is limited to `MAX_SAMPLES=50000` and the maximum sentence length to `MAX_LENGTH=40`. Preprocessing follows these steps:* Extract `MAX_SAMPLES` conversation pairs into two lists, `questions` and `answers`* Preprocess each sentence by removing special characters* Build the tokenizer (text to ID, ID to text) using [TensorFlow Datasets SubwordTextEncoder](https://www.tensorflow.org/datasets/api_docs/python/tfds/features/text/SubwordTextEncoder)* Tokenize each sentence and add `START_TOKEN` and `END_TOKEN` to mark the beginning and end of each sentence* Filter out sentences that contain more than `MAX_LENGTH` tokens* Pad the tokenized sentences to `MAX_LENGTH`
###Code
# サンプルの最大数を指定
MAX_SAMPLES = 50000
def preprocess_sentence(sentence):
sentence = sentence.lower().strip()
# 単語と句読点の間にスペースを入れる
# eg: "he is a boy." => "he is a boy ."
sentence = re.sub(r"([?.!,])", r" \1 ", sentence)
sentence = re.sub(r'[" "]+', " ", sentence)
# (a-z, A-Z, ".", "?", "!", ",")以外のマークを全部スペースに置き換わる
sentence = re.sub(r"[^a-zA-Z?.!,]+", " ", sentence)
sentence = sentence.strip()
return sentence
def load_conversations():
# 行idからtextにマッピングする辞書
id2line = {}
with open(path_to_movie_lines, errors='ignore') as file:
lines = file.readlines()
for line in lines:
parts = line.replace('\n', '').split(' +++$+++ ')
id2line[parts[0]] = parts[4]
inputs, outputs = [], []
with open(path_to_movie_conversations, 'r') as file:
# eg: u0 +++$+++ u2 +++$+++ m0 +++$+++ ['L194', 'L195', 'L196', 'L197']
lines = file.readlines()
for line in lines:
parts = line.replace('\n', '').split(' +++$+++ ')
# 会話を行idのリストのする
conversation = [line[1:-1] for line in parts[3][1:-1].split(', ')]
for i in range(len(conversation) - 1):
inputs.append(preprocess_sentence(id2line[conversation[i]]))
outputs.append(preprocess_sentence(id2line[conversation[i + 1]]))
if len(inputs) >= MAX_SAMPLES:
return inputs, outputs
return inputs, outputs
questions, answers = load_conversations()
print('Sample question: {}'.format(questions[20]))
print('Sample answer: {}'.format(answers[20]))
# tfdsを利用してquestionsとanswersのトークナイザーを構築する
tokenizer = tfds.features.text.SubwordTextEncoder.build_from_corpus(
questions + answers, target_vocab_size=2**13)
# 開始と終了を示すのstartとendトークンを定義する
START_TOKEN, END_TOKEN = [tokenizer.vocab_size], [tokenizer.vocab_size + 1]
# ボキャブラリーのサイズににstartとendトークンの数を足す
VOCAB_SIZE = tokenizer.vocab_size + 2
print('Tokenized sample question: {}'.format(tokenizer.encode(questions[20])))
# 文の最大長さ
MAX_LENGTH = 40
# 文を解析、フィルタリング、padする
def tokenize_and_filter(inputs, outputs):
tokenized_inputs, tokenized_outputs = [], []
for (sentence1, sentence2) in zip(inputs, outputs):
# 形態素解析
sentence1 = START_TOKEN + tokenizer.encode(sentence1) + END_TOKEN
sentence2 = START_TOKEN + tokenizer.encode(sentence2) + END_TOKEN
# 解析済みの文の長さをチェック
if len(sentence1) <= MAX_LENGTH and len(sentence2) <= MAX_LENGTH:
tokenized_inputs.append(sentence1)
tokenized_outputs.append(sentence2)
# 解析済みの文をpadする
tokenized_inputs = tf.keras.preprocessing.sequence.pad_sequences(
tokenized_inputs, maxlen=MAX_LENGTH, padding='post')
tokenized_outputs = tf.keras.preprocessing.sequence.pad_sequences(
tokenized_outputs, maxlen=MAX_LENGTH, padding='post')
return tokenized_inputs, tokenized_outputs
questions, answers = tokenize_and_filter(questions, answers)
print('Vocab size: {}'.format(VOCAB_SIZE))
print('Number of samples: {}'.format(len(questions)))
###Output
Vocab size: 8333
Number of samples: 44095
###Markdown
Create a tf.data.Dataset The input pipeline is built with the [tf.data.Dataset API](https://www.tensorflow.org/api_docs/python/tf/data), which provides features such as caching and prefetching to speed up the training process. The Transformer is an autoregressive model: it makes a prediction at each time step and uses that output to decide what to do at the next step. During training this tutorial uses teacher forcing, which passes the true output of the next time step to the model regardless of what the model predicts at the current step. The Transformer predicts every word, and self-attention allows it to look at the previous words of the input sequence (up to the current position) to better predict the next word. A look-ahead mask is used so that the model cannot peek at the expected output. The target is split into `decoder_inputs`, which is padded and fed to the decoder, and `cropped_targets`, which is used to compute the loss and accuracy.
###Code
BATCH_SIZE = 64
BUFFER_SIZE = 20000
# decoderの入力は最後のトークン以前のものを使う
# outputsからSTART_TOKENを削除する
dataset = tf.data.Dataset.from_tensor_slices((
{
'inputs': questions,
'dec_inputs': answers[:, :-1]
},
{
'outputs': answers[:, 1:]
},
))
dataset = dataset.cache()
dataset = dataset.shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)
print(dataset)
###Output
<PrefetchDataset shapes: ({inputs: (None, 40), dec_inputs: (None, 39)}, {outputs: (None, 39)}), types: ({inputs: tf.int32, dec_inputs: tf.int32}, {outputs: tf.int32})>
###Markdown
Attention Scaled dot product Attention The scaled dot-product attention function used in the Transformer takes three inputs: Q (query), K (key), V (value). The attention weights are computed with the following equation:$$Attention(Q, K, V)=softmax_k(\frac{QK^T}{\sqrt{d_k}})V$$The softmax normalization is applied along the `key` axis, so its values decide the amount of importance given to the `query`. The output of the equation is the multiplication of the attention weights and the `value` vectors, which guarantees that the words we want to focus on are kept and the irrelevant words are flushed out. The dot-product attention is scaled by the square root of the depth (the dimensionality of the Q, K, V vectors). This is done because for large depths the dot products grow very large in magnitude, pushing the softmax function into regions with extremely small gradients and producing a hard softmax. For example, assume `query` and `key` have mean 0 and variance 1. Their matrix multiplication then has mean 0 (dk * 0 = 0) and variance `dk` (dk * 1 = dk). In other words, the square root of `dk` is used for scaling so that the matrix multiplication of `key` and `query` keeps mean 0 and variance 1, giving a gentler softmax. The mask is multiplied by *-1e9* (close to negative infinity). This is done because the mask is summed with the scaled matrix multiplication of `query` and `key` and applied immediately before the softmax. The goal is to zero out the masked cells: large negative inputs to softmax produce outputs that are near zero.
###Code
def scaled_dot_product_attention(query, key, value, mask):
"""attention重みを計算する """
matmul_qk = tf.matmul(query, key, transpose_b=True)
# qkの積をスケールする
depth = tf.cast(tf.shape(key)[-1], tf.float32)
logits = matmul_qk / tf.math.sqrt(depth)
# maskを足してあげる
if mask is not None:
logits += (mask * -1e9)
# softmax標準化は最後の軸に行われる (seq_len_k)
attention_weights = tf.nn.softmax(logits, axis=-1)
output = tf.matmul(attention_weights, value)
return output
###Output
_____no_output_____
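###Markdown
 A small sanity check of the function above on toy tensors (an illustrative demo, not part of the original notebook): the query aligns with the second key, so the output should essentially return the second value row.
###Code
# Toy demo of scaled_dot_product_attention (illustrative only).
temp_k = tf.constant([[10, 0, 0],
                      [0, 10, 0],
                      [0, 0, 10],
                      [0, 0, 10]], dtype=tf.float32)  # (4, 3)
temp_v = tf.constant([[1, 0],
                      [10, 0],
                      [100, 5],
                      [1000, 6]], dtype=tf.float32)   # (4, 2)
temp_q = tf.constant([[0, 10, 0]], dtype=tf.float32)  # (1, 3) attends to the second key
print(scaled_dot_product_attention(temp_q, temp_k, temp_v, None))
###Output
_____no_output_____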
###Markdown
Multi-head attention Multi-head attention consists of four parts:* Linear layers and a split into heads* Scaled dot-product attention* Concatenation of the heads* A final linear layer. Each multi-head attention block takes three inputs: Q (query), K (key), V (value). These are put through linear layers and split into multiple heads. The `scaled_dot_product_attention` defined above is applied to each head (broadcast for efficiency). An appropriate mask must be used in the attention step. The attention output of each head is then concatenated (using tf.transpose and tf.reshape) and put through a final linear layer. Instead of one single attention head, `query`, `key`, and `value` are split into multiple heads, which allows the model to jointly attend to information at different positions from different representational spaces. After the split, each head has a reduced dimensionality, so the total computation cost is the same as single-head attention with full dimensionality.
###Code
class MultiHeadAttention(tf.keras.layers.Layer):
def __init__(self, d_model, num_heads, name="multi_head_attention"):
super(MultiHeadAttention, self).__init__(name=name)
self.num_heads = num_heads
self.d_model = d_model
assert d_model % self.num_heads == 0
# d_model = head数 * dk(QKVの次元数)
# 全ての行列をまとめて渡すので、次元数を計算する必要がある
self.depth = d_model // self.num_heads
self.query_dense = tf.keras.layers.Dense(units=d_model)
self.key_dense = tf.keras.layers.Dense(units=d_model)
self.value_dense = tf.keras.layers.Dense(units=d_model)
self.dense = tf.keras.layers.Dense(units=d_model)
def split_heads(self, inputs, batch_size):
inputs = tf.reshape(
inputs, shape=(batch_size, -1, self.num_heads, self.depth))
return tf.transpose(inputs, perm=[0, 2, 1, 3])
def call(self, inputs):
query, key, value, mask = inputs['query'], inputs['key'], inputs[
'value'], inputs['mask']
batch_size = tf.shape(query)[0]
# 最初の線形層
query = self.query_dense(query)
key = self.key_dense(key)
value = self.value_dense(value)
# headに分割する
query = self.split_heads(query, batch_size)
key = self.split_heads(key, batch_size)
value = self.split_heads(value, batch_size)
# scaled dot-product attention
scaled_attention = scaled_dot_product_attention(query, key, value, mask)
scaled_attention = tf.transpose(scaled_attention, perm=[0, 2, 1, 3])
# headの結合
concat_attention = tf.reshape(scaled_attention,
(batch_size, -1, self.d_model))
# 最後の線形層
outputs = self.dense(concat_attention)
return outputs
###Output
_____no_output_____
###Markdown
Transformer Masking `create_padding_mask` and `create_look_ahead_mask` are helper functions that create the masks for padded tokens. These helpers are used as `tf.keras.layers.Lambda` layers. All padded tokens in the batch (those with value 0) are masked so that the model does not treat padding as input.
###Code
def create_padding_mask(x):
mask = tf.cast(tf.math.equal(x, 0), tf.float32)
# (batch_size, 1, 1, sequence length)
return mask[:, tf.newaxis, tf.newaxis, :]
print(create_padding_mask(tf.constant([[1, 2, 0, 3, 0], [0, 0, 0, 4, 5]])))
def create_look_ahead_mask(x):
seq_len = tf.shape(x)[1]
# 行列の対角線右上の三角形を1にする
look_ahead_mask = 1 - tf.linalg.band_part(tf.ones((seq_len, seq_len)), -1, 0)
padding_mask = create_padding_mask(x)
return tf.maximum(look_ahead_mask, padding_mask)
print(create_look_ahead_mask(tf.constant([[1, 2, 0, 4, 5]])))
###Output
tf.Tensor(
[[[[0. 1. 1. 1. 1.]
[0. 0. 1. 1. 1.]
[0. 0. 1. 1. 1.]
[0. 0. 1. 0. 1.]
[0. 0. 1. 0. 0.]]]], shape=(1, 1, 5, 5), dtype=float32)
###Markdown
Positional encoding The Transformer contains no recurrence and no convolution, so positional encoding is added to give the model information about the relative position of the words in the sentence. The positional encoding vector is added to the embedding vector. Embeddings represent a token in a d-dimensional space where tokens with similar meaning are close to each other, but they do not encode the relative position of the words in a sentence. After adding the positional encoding, words will be closer to each other in the d-dimensional space based on both the similarity of their meaning and their position in the sentence. See the [positional encoding](https://github.com/kaitolucifer/transformer_chatbot_jp/blob/master/position_encoding_jp.ipynb) notebook for details. The formula for calculating the positional encoding is as follows:$$PE_{(pos, 2i)}=\sin(pos/10000^{2i/d_{model}})$$$$PE_{(pos, 2i+1)}=\cos(pos/10000^{2i/d_{model}})$$
###Code
class PositionalEncoding(tf.keras.layers.Layer):
def __init__(self, position, d_model):
super(PositionalEncoding, self).__init__()
self.pos_encoding = self.positional_encoding(position, d_model)
def get_angles(self, position, i, d_model):
angles = 1 / tf.pow(10000, (2 * (i // 2)) / tf.cast(d_model, tf.float32))
return position * angles
def positional_encoding(self, position, d_model):
angle_rads = self.get_angles(
position=tf.range(position, dtype=tf.float32)[:, tf.newaxis],
i=tf.range(d_model, dtype=tf.float32)[tf.newaxis, :],
d_model=d_model)
# sinを配列の偶数インデックスに適用する
sines = tf.math.sin(angle_rads[:, 0::2])
# cosを配列の偶数インデックスに適用する
cosines = tf.math.cos(angle_rads[:, 1::2])
pos_encoding = tf.concat([sines, cosines], axis=-1)
pos_encoding = pos_encoding[tf.newaxis, ...]
return tf.cast(pos_encoding, tf.float32)
def call(self, inputs):
return inputs + self.pos_encoding[:, :tf.shape(inputs)[1], :]
sample_pos_encoding = PositionalEncoding(50, 512)
plt.pcolormesh(sample_pos_encoding.pos_encoding.numpy()[0], cmap='RdBu')
plt.xlabel('Depth')
plt.xlim((0, 512))
plt.ylabel('Position')
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
Encoder Layer Each encoder layer consists of the following sublayers:1. Multi-head attention (with padding mask)2. Two dense layers followed by dropout. Each of these sublayers has a residual connection around it followed by layer normalization. The output of each sublayer can be written as LayerNorm(x + Sublayer(x)). The normalization is done on the `d_model` (last) axis.
###Code
def encoder_layer(units, d_model, num_heads, dropout, name="encoder_layer"):
inputs = tf.keras.Input(shape=(None, d_model), name="inputs")
padding_mask = tf.keras.Input(shape=(1, 1, None), name="padding_mask")
attention = MultiHeadAttention(
d_model, num_heads, name="attention")({
'query': inputs,
'key': inputs,
'value': inputs,
'mask': padding_mask
})
attention = tf.keras.layers.Dropout(rate=dropout)(attention)
attention = tf.keras.layers.LayerNormalization(
epsilon=1e-6)(inputs + attention)
outputs = tf.keras.layers.Dense(units=units, activation='relu')(attention)
outputs = tf.keras.layers.Dense(units=d_model)(outputs)
outputs = tf.keras.layers.Dropout(rate=dropout)(outputs)
outputs = tf.keras.layers.LayerNormalization(
epsilon=1e-6)(attention + outputs)
return tf.keras.Model(
inputs=[inputs, padding_mask], outputs=outputs, name=name)
sample_encoder_layer = encoder_layer(
units=512,
d_model=128,
num_heads=4,
dropout=0.3,
name="sample_encoder_layer")
tf.keras.utils.plot_model(
sample_encoder_layer, to_file='encoder_layer.png', show_shapes=True)
###Output
_____no_output_____
###Markdown
Encoder The encoder consists of:1. Input embedding2. Positional encoding3. `num_layers` encoder layers. The input is converted to an embedding and summed with the positional encoding; the result is the input to the encoder layers. The output of the encoder is the input to the decoder.
###Code
def encoder(vocab_size,
num_layers,
units,
d_model,
num_heads,
dropout,
name="encoder"):
inputs = tf.keras.Input(shape=(None,), name="inputs")
padding_mask = tf.keras.Input(shape=(1, 1, None), name="padding_mask")
embeddings = tf.keras.layers.Embedding(vocab_size, d_model)(inputs)
embeddings *= tf.math.sqrt(tf.cast(d_model, tf.float32))
embeddings = PositionalEncoding(vocab_size, d_model)(embeddings)
outputs = tf.keras.layers.Dropout(rate=dropout)(embeddings)
for i in range(num_layers):
outputs = encoder_layer(
units=units,
d_model=d_model,
num_heads=num_heads,
dropout=dropout,
name="encoder_layer_{}".format(i),
)([outputs, padding_mask])
return tf.keras.Model(
inputs=[inputs, padding_mask], outputs=outputs, name=name)
sample_encoder = encoder(
vocab_size=8192,
num_layers=2,
units=512,
d_model=128,
num_heads=4,
dropout=0.3,
name="sample_encoder")
tf.keras.utils.plot_model(
sample_encoder, to_file='encoder.png', show_shapes=True)
###Output
_____no_output_____
###Markdown
Decoder Layer Each decoder layer consists of the following sublayers:1. Masked multi-head attention (with look-ahead mask and padding mask)2. Multi-head attention (with padding mask); `value` and `key` receive the encoder output as inputs, while `query` receives the output of the masked multi-head attention sublayer3. Two dense layers followed by dropout. Each of these sublayers has a residual connection around it followed by layer normalization. The output of each sublayer can be written as LayerNorm(x + Sublayer(x)). The normalization is done on the `d_model` (last) axis. Because `query` receives the output of the decoder's first attention block and `key` receives the encoder output, the attention weights represent the importance given to the decoder's input based on the encoder's output. In other words, the decoder predicts the next word by looking at the encoder output and self-attending to its own output. See the demonstration in the scaled dot-product attention section above for details.
###Code
def decoder_layer(units, d_model, num_heads, dropout, name="decoder_layer"):
inputs = tf.keras.Input(shape=(None, d_model), name="inputs")
enc_outputs = tf.keras.Input(shape=(None, d_model), name="encoder_outputs")
look_ahead_mask = tf.keras.Input(
shape=(1, None, None), name="look_ahead_mask")
padding_mask = tf.keras.Input(shape=(1, 1, None), name='padding_mask')
attention1 = MultiHeadAttention(
d_model, num_heads, name="attention_1")(inputs={
'query': inputs,
'key': inputs,
'value': inputs,
'mask': look_ahead_mask
})
attention1 = tf.keras.layers.LayerNormalization(
epsilon=1e-6)(attention1 + inputs)
attention2 = MultiHeadAttention(
d_model, num_heads, name="attention_2")(inputs={
'query': attention1,
'key': enc_outputs,
'value': enc_outputs,
'mask': padding_mask
})
attention2 = tf.keras.layers.Dropout(rate=dropout)(attention2)
attention2 = tf.keras.layers.LayerNormalization(
epsilon=1e-6)(attention2 + attention1)
outputs = tf.keras.layers.Dense(units=units, activation='relu')(attention2)
outputs = tf.keras.layers.Dense(units=d_model)(outputs)
outputs = tf.keras.layers.Dropout(rate=dropout)(outputs)
outputs = tf.keras.layers.LayerNormalization(
epsilon=1e-6)(outputs + attention2)
return tf.keras.Model(
inputs=[inputs, enc_outputs, look_ahead_mask, padding_mask],
outputs=outputs,
name=name)
sample_decoder_layer = decoder_layer(
units=512,
d_model=128,
num_heads=4,
dropout=0.3,
name="sample_decoder_layer")
tf.keras.utils.plot_model(
sample_decoder_layer, to_file='decoder_layer.png', show_shapes=True)
###Output
_____no_output_____
###Markdown
Decoder The decoder consists of:1. Output embedding2. Positional encoding3. N decoder layers. The target is converted to an embedding and summed with the positional encoding; the result is the input to the decoder layers. The output of the decoder is the input to the final linear layer.
###Code
def decoder(vocab_size,
num_layers,
units,
d_model,
num_heads,
dropout,
name='decoder'):
inputs = tf.keras.Input(shape=(None,), name='inputs')
enc_outputs = tf.keras.Input(shape=(None, d_model), name='encoder_outputs')
look_ahead_mask = tf.keras.Input(
shape=(1, None, None), name='look_ahead_mask')
padding_mask = tf.keras.Input(shape=(1, 1, None), name='padding_mask')
embeddings = tf.keras.layers.Embedding(vocab_size, d_model)(inputs)
embeddings *= tf.math.sqrt(tf.cast(d_model, tf.float32))
embeddings = PositionalEncoding(vocab_size, d_model)(embeddings)
outputs = tf.keras.layers.Dropout(rate=dropout)(embeddings)
for i in range(num_layers):
outputs = decoder_layer(
units=units,
d_model=d_model,
num_heads=num_heads,
dropout=dropout,
name='decoder_layer_{}'.format(i),
)(inputs=[outputs, enc_outputs, look_ahead_mask, padding_mask])
return tf.keras.Model(
inputs=[inputs, enc_outputs, look_ahead_mask, padding_mask],
outputs=outputs,
name=name)
sample_decoder = decoder(
vocab_size=8192,
num_layers=2,
units=512,
d_model=128,
num_heads=4,
dropout=0.3,
name="sample_decoder")
tf.keras.utils.plot_model(
sample_decoder, to_file='decoder.png', show_shapes=True)
###Output
_____no_output_____
###Markdown
Transformer The Transformer consists of the encoder, the decoder, and a final linear layer. The output of the decoder is the input to the linear layer, and the linear layer's output is returned.
###Code
def transformer(vocab_size,
num_layers,
units,
d_model,
num_heads,
dropout,
name="transformer"):
inputs = tf.keras.Input(shape=(None,), name="inputs")
dec_inputs = tf.keras.Input(shape=(None,), name="dec_inputs")
enc_padding_mask = tf.keras.layers.Lambda(
create_padding_mask, output_shape=(1, 1, None),
name='enc_padding_mask')(inputs)
# decoderの最初のattention blockで、入力の未来時刻のトークンをmaskする
look_ahead_mask = tf.keras.layers.Lambda(
create_look_ahead_mask,
output_shape=(1, None, None),
name='look_ahead_mask')(dec_inputs)
# 2番目のattention blockでencoderの出力をmaskする
dec_padding_mask = tf.keras.layers.Lambda(
create_padding_mask, output_shape=(1, 1, None),
name='dec_padding_mask')(inputs)
enc_outputs = encoder(
vocab_size=vocab_size,
num_layers=num_layers,
units=units,
d_model=d_model,
num_heads=num_heads,
dropout=dropout,
)(inputs=[inputs, enc_padding_mask])
dec_outputs = decoder(
vocab_size=vocab_size,
num_layers=num_layers,
units=units,
d_model=d_model,
num_heads=num_heads,
dropout=dropout,
)(inputs=[dec_inputs, enc_outputs, look_ahead_mask, dec_padding_mask])
outputs = tf.keras.layers.Dense(units=vocab_size, name="outputs")(dec_outputs)
return tf.keras.Model(inputs=[inputs, dec_inputs], outputs=outputs, name=name)
sample_transformer = transformer(
vocab_size=8192,
num_layers=4,
units=512,
d_model=128,
num_heads=4,
dropout=0.3,
name="sample_transformer")
tf.keras.utils.plot_model(
sample_transformer, to_file='transformer.png', show_shapes=True)
###Output
_____no_output_____
###Markdown
Train the model Initialize the model To keep this tutorial simple and fast, the values of `num_layers`, `d_model`, and `units` have been reduced. See the [paper](https://arxiv.org/abs/1706.03762) for the original Transformer configuration.
###Code
tf.keras.backend.clear_session()
# ハイパーパラメータ
NUM_LAYERS = 2
D_MODEL = 256
NUM_HEADS = 8
UNITS = 512
DROPOUT = 0.1
model = transformer(
vocab_size=VOCAB_SIZE,
num_layers=NUM_LAYERS,
units=UNITS,
d_model=D_MODEL,
num_heads=NUM_HEADS,
dropout=DROPOUT)
###Output
_____no_output_____
###Markdown
Loss function Since the target sequences are padded, it is important to apply a padding mask when calculating the loss.
###Code
def loss_function(y_true, y_pred):
y_true = tf.reshape(y_true, shape=(-1, MAX_LENGTH - 1))
loss = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')(y_true, y_pred)
mask = tf.cast(tf.not_equal(y_true, 0), tf.float32)
loss = tf.multiply(loss, mask)
return tf.reduce_mean(loss)
###Output
_____no_output_____
###Markdown
Adaptive learning rate A custom learning rate scheduler is used with the Adam optimizer. The learning rate varies according to the formula in the [paper](https://arxiv.org/abs/1706.03762):$$lrate=d^{-0.5}_{model}*min(step\_num^{-0.5},step\_num*warmup\_steps^{-1.5})$$
###Code
class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
def __init__(self, d_model, warmup_steps=4000):
super(CustomSchedule, self).__init__()
self.d_model = d_model
self.d_model = tf.cast(self.d_model, tf.float32)
self.warmup_steps = warmup_steps
def __call__(self, step):
arg1 = tf.math.rsqrt(step)
arg2 = step * (self.warmup_steps**-1.5)
return tf.math.rsqrt(self.d_model) * tf.math.minimum(arg1, arg2)
sample_learning_rate = CustomSchedule(d_model=128)
plt.plot(sample_learning_rate(tf.range(200000, dtype=tf.float32)))
plt.ylabel("Learning Rate")
plt.xlabel("Train Step")
###Output
_____no_output_____
###Markdown
Compile the model
###Code
learning_rate = CustomSchedule(D_MODEL)
optimizer = tf.keras.optimizers.Adam(
learning_rate, beta_1=0.9, beta_2=0.98, epsilon=1e-9)
def accuracy(y_true, y_pred):
# ラベルのshapeを(batch_size, MAX_LENGTH - 1)に変換する
y_true = tf.reshape(y_true, shape=(-1, MAX_LENGTH - 1))
accuracy = tf.keras.metrics.sparse_categorical_accuracy(y_true, y_pred)
return accuracy
model.compile(optimizer=optimizer, loss=loss_function, metrics=[accuracy])
###Output
_____no_output_____
###Markdown
Train the model The Transformer can be trained by simply calling `model.fit()`.
###Code
EPOCHS = 20
model.fit(dataset, epochs=EPOCHS)
###Output
Epoch 1/20
689/689 [==============================] - 94s 137ms/step - loss: 2.1146 - accuracy: 0.0249
Epoch 2/20
689/689 [==============================] - 81s 117ms/step - loss: 1.5009 - accuracy: 0.0530
Epoch 3/20
689/689 [==============================] - 80s 116ms/step - loss: 1.3942 - accuracy: 0.0652
Epoch 4/20
689/689 [==============================] - 80s 117ms/step - loss: 1.3312 - accuracy: 0.0719
Epoch 5/20
689/689 [==============================] - 80s 117ms/step - loss: 1.2740 - accuracy: 0.0765
Epoch 6/20
689/689 [==============================] - 82s 118ms/step - loss: 1.2220 - accuracy: 0.0801
Epoch 7/20
689/689 [==============================] - 81s 117ms/step - loss: 1.1668 - accuracy: 0.0832
Epoch 8/20
689/689 [==============================] - 81s 117ms/step - loss: 1.1057 - accuracy: 0.0861
Epoch 9/20
689/689 [==============================] - 81s 117ms/step - loss: 1.0505 - accuracy: 0.0890
Epoch 10/20
689/689 [==============================] - 81s 118ms/step - loss: 1.0010 - accuracy: 0.0918
Epoch 11/20
689/689 [==============================] - 81s 117ms/step - loss: 0.9547 - accuracy: 0.0946
Epoch 12/20
689/689 [==============================] - 80s 117ms/step - loss: 0.9129 - accuracy: 0.0974
Epoch 13/20
689/689 [==============================] - 81s 117ms/step - loss: 0.8754 - accuracy: 0.1001
Epoch 14/20
689/689 [==============================] - 81s 117ms/step - loss: 0.8402 - accuracy: 0.1029
Epoch 15/20
689/689 [==============================] - 80s 116ms/step - loss: 0.8089 - accuracy: 0.1056
Epoch 16/20
689/689 [==============================] - 80s 117ms/step - loss: 0.7801 - accuracy: 0.1082
Epoch 17/20
689/689 [==============================] - 81s 117ms/step - loss: 0.7540 - accuracy: 0.1108
Epoch 18/20
689/689 [==============================] - 81s 117ms/step - loss: 0.7307 - accuracy: 0.1134
Epoch 19/20
689/689 [==============================] - 80s 116ms/step - loss: 0.7083 - accuracy: 0.1159
Epoch 20/20
689/689 [==============================] - 80s 117ms/step - loss: 0.6887 - accuracy: 0.1183
###Markdown
Evaluate and predict Evaluation follows these steps:* Apply the same preprocessing used when building the dataset* Tokenize the input sentence and add `START_TOKEN` and `END_TOKEN`* Compute the padding mask and the look-ahead mask* The decoder makes its prediction by looking at the encoder output and its own (self-attended) output* Select the last word and compute the argmax* Concatenate the predicted word to the decoder input and pass it back to the decoder* With this approach, the decoder predicts the next word based on the words it has predicted so far. Note: this model has a small capacity and was trained on only part of the full dataset, so its performance can still be improved.
###Code
def evaluate(sentence):
sentence = preprocess_sentence(sentence)
sentence = tf.expand_dims(
START_TOKEN + tokenizer.encode(sentence) + END_TOKEN, axis=0)
output = tf.expand_dims(START_TOKEN, 0)
for i in range(MAX_LENGTH):
predictions = model(inputs=[sentence, output], training=False)
# seq_len次元の最後の単語を選ぶ
predictions = predictions[:, -1:, :]
predicted_id = tf.cast(tf.argmax(predictions, axis=-1), tf.int32)
# predicted_idはend_tokenの時、終了して結果を返す
if tf.equal(predicted_id, END_TOKEN[0]):
break
# predicted_idを出力に結合し、decoderの入力として渡す
output = tf.concat([output, predicted_id], axis=-1)
return tf.squeeze(output, axis=0)
def predict(sentence):
prediction = evaluate(sentence)
predicted_sentence = tokenizer.decode(
[i for i in prediction if i < tokenizer.vocab_size])
print('Input: {}'.format(sentence))
print('Output: {}'.format(predicted_sentence))
return predicted_sentence
###Output
_____no_output_____
###Markdown
Let's test the model!
###Code
output = predict('Where have you been?')
output = predict("It's a trap")
# モデルに前回の出力をフィードする
sentence = 'I am not crazy, my mother had me tested.'
for _ in range(5):
sentence = predict(sentence)
print('')
###Output
Input: I am not crazy, my mother had me tested.
Output: what do you mean ?
Input: what do you mean ?
Output: i don t know . i just don t know what you re talkin about .
Input: i don t know . i just don t know what you re talkin about .
Output: yeah . what do you want , pick up sticks ?
Input: yeah . what do you want , pick up sticks ?
Output: i don t know . i m a paleontologist , not a foreign secretary .
Input: i don t know . i m a paleontologist , not a foreign secretary .
Output: you re nervous ?
|
Python_Stock/Technical_Indicators/CMF.ipynb
|
###Markdown
Chaikin Money Flow (CMF) https://stockcharts.com/school/doku.php?id=chart_school:technical_indicators:chaikin_money_flow_cmf
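 The indicator computed below follows the standard definition: Money Flow Multiplier = ((Close - Low) - (High - Close)) / (High - Low), Money Flow Volume = Money Flow Multiplier x Volume, and CMF = (20-period sum of Money Flow Volume) / (20-period sum of Volume). The code uses the adjusted close and the algebraically equivalent form (2*Adj Close - Low - High) / (High - Low) for the multiplier.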
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
# yfinance is used to fetch data
import yfinance as yf
yf.pdr_override()
# input
symbol = 'AAPL'
start = '2018-06-01'
end = '2019-01-01'
# Read data
df = yf.download(symbol,start,end)
# View Columns
df.head()
n = 20
df['MF_Multiplier'] = (2*df['Adj Close'] - df['Low'] - df['High'])/(df['High']-df['Low'])
df['MF_Volume'] = df['MF_Multiplier']*df['Volume']
df['CMF'] = df['MF_Volume'].rolling(n).sum()/df['Volume'].rolling(n).sum()
df = df.drop(['MF_Multiplier','MF_Volume'],axis=1)
df.head(30)
fig = plt.figure(figsize=(14,10))
ax1 = plt.subplot(3, 1, 1)
ax1.plot(df['Adj Close'])
ax1.set_title('Stock '+ symbol +' Closing Price')
ax1.set_ylabel('Price')
ax1.set_xlabel('Date')
ax1.legend(loc='best')
ax2 = plt.subplot(3, 1, 2)
ax2.plot(df['CMF'])
#df['Positive'] = df['CMF'] > 0
#ax2.bar(df.index, df['CMF'], color=df.Positive.map({True: 'g', False: 'r'}))
#ax2.axhline(y=0, color='red')
ax2.grid()
ax2.set_ylabel('Chaikin Money Flow')
ax3 = plt.subplot(3, 1, 3)
df['Positive'] = df['Open'] < df['Adj Close']
colors = df.Positive.map({True: 'g', False: 'r'})
ax3.bar(df.index, df['Volume'], color=colors, alpha=0.4)
ax3.set_ylabel('Volume')
ax3.grid(True)
###Output
_____no_output_____
###Markdown
Candlestick with CMF
###Code
from matplotlib import dates as mdates
import datetime as dt
dfc = df.copy()
dfc['VolumePositive'] = dfc['Open'] < dfc['Adj Close']
#dfc = dfc.dropna()
dfc = dfc.reset_index()
dfc['Date'] = mdates.date2num(dfc['Date'].astype(dt.date))
dfc.head()
from mpl_finance import candlestick_ohlc
fig = plt.figure(figsize=(14,10))
ax1 = plt.subplot(3, 1, 1)
candlestick_ohlc(ax1,dfc.values, width=0.5, colorup='g', colordown='r', alpha=1.0)
ax1.xaxis_date()
ax1.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))
ax1.grid(True, which='both')
ax1.minorticks_on()
ax1v = ax1.twinx()
colors = dfc.VolumePositive.map({True: 'g', False: 'r'})
ax1v.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4)
ax1v.axes.yaxis.set_ticklabels([])
ax1v.set_ylim(0, 3*df.Volume.max())
ax1.set_title('Stock '+ symbol +' Closing Price')
ax1.set_ylabel('Price')
ax1.set_xlabel('Date')
ax2 = plt.subplot(3, 1, 2)
ax2.plot(df['CMF'])
#df['Positive'] = df['CMF'] > 0
#ax2.bar(df.index, df['CMF'], color=df.Positive.map({True: 'g', False: 'r'}))
#ax2.axhline(y=0, color='red')
ax2.grid()
ax2.set_ylabel('Chaikin Money Flow')
ax3 = plt.subplot(3, 1, 3)
df['Positive'] = df['Open'] < df['Adj Close']
colors = df.Positive.map({True: 'g', False: 'r'})
ax3.bar(df.index, df['Volume'], color=colors, alpha=0.4)
ax3.set_ylabel('Volume')
ax3.grid(True)
###Output
_____no_output_____
|
python3/source/linear_algebra.ipynb
|
###Markdown
Learning the fundamentals of linear algebra through python
###Code
from matrix import Matrix
A = Matrix([[(i_out+1)*(i_in+1) for i_in in range(3)] for i_out in range(3)])
B = Matrix([[(j_in+j_out)**(j_out+1) for j_in in range(3)] for j_out in range(3)])
print('A')
print(A)
print('\n{}\n'.format('-'*40))
print('B')
print(B)
#addition and subtraction between two matrices
print('A + B')
print(A + B)
print('\n{}\n'.format('-'*40))
print('A - B')
print(A - B)
###Output
A + B
[[1.0, 3.0, 5.0]
[3.0, 8.0, 15.0]
[11.0, 33.0, 73.0]]
----------------------------------------
A - B
[[1.0, 1.0, 1.0]
[1.0, 0.0, -3.0]
[-5.0, -21.0, -55.0]]
###Markdown
addition and subtraction method for vectors and matrices addition between two vectors$$\boldsymbol{a} + \boldsymbol{b} = \left[\begin{array}{ccc} a_0 + b_0 \\ \vdots \\ a_n + b_n \\ \end{array}\right]$$ subtraction between two vectors$$\boldsymbol{a} - \boldsymbol{b} = \left[\begin{array}{ccc} a_0 - b_0 \\ \vdots \\ a_n - b_n \\ \end{array}\right]$$ addition between two matrices$$\boldsymbol{A} + \boldsymbol{B} = \left[\begin{array}{ccc} a_{0,0} + b_{0,0} & \cdots & a_{0,m} + b_{0,m} \\ \vdots & \ddots & \vdots \\ a_{n,0} + b_{n,0} & \cdots & a_{n,m} + b_{n,m} \\ \end{array}\right]$$ subtraction between two matrices$$\boldsymbol{A} - \boldsymbol{B} = \left[\begin{array}{ccc} a_{0,0} - b_{0,0} & \cdots & a_{0,m} - b_{0,m} \\ \vdots & \ddots & \vdots \\ a_{n,0} - b_{n,0} & \cdots & a_{n,m} - b_{n,m} \\ \end{array}\right]$$ ::::: Note :::::- When calculating the sum/difference of multiple vectors or matrices, the size of both vectors or matrices must match in order to calculate the addition/subtraction between the vectors or matrices.
###Code
#multiplication between a scalar and matrix
scalar = -0.1
print('-0.1 x (A + B)')
print(scalar * (A + B))
###Output
-0.1 x (A + B)
[[-0.1, -0.3, -0.5]
[-0.3, -0.8, -1.5]
[-1.1, -3.3, -7.3]]
###Markdown
Scalar multiplication for vectors and matrices multiplication between a scalar and vectors$$\boldsymbol{s} \times \boldsymbol{a} = \left[\begin{array}{ccc} s \times a_0 \\ \vdots \\ s \times a_n \\ \end{array}\right]$$ multiplication between a scalar and matrices$$\boldsymbol{s} \times \boldsymbol{A} = \left[\begin{array}{ccc} s \times a_{0,0} & \cdots & s \times a_{0,m} \\ \vdots & \ddots & \vdots \\ s \times a_{n,0} & \cdots & s \times a_{n,m} \\ \end{array}\right]$$
###Code
#transpose of a matrix
B_T = Matrix(B.matrix)
print('original matrix B')
print(B_T)
print('\n{}\n'.format('-'*40))
print('transposed matrix B')
B_T.T()
print(B_T)
###Output
original matrix B
[[0.0, 1.0, 2.0]
[1.0, 4.0, 9.0]
[8.0, 27.0, 64.0]]
----------------------------------------
transposed matrix B
[[0.0, 1.0, 8.0]
[1.0, 4.0, 27.0]
[2.0, 9.0, 64.0]]
###Markdown
Transpose of vectors and matrices transpose of a vector$$\boldsymbol{a^T} = \left[\begin{array}{ccc} a_0 \\ \vdots \\ a_n \\ \end{array}\right]^T = \left[\begin{array}{ccc} a_0 & \cdots & a_n \\ \end{array}\right]$$ transpose of a matrix$$\boldsymbol{A^T} = \left[\begin{array}{ccc} a_{0,0} & \cdots & a_{0,m} \\ \vdots & \ddots & \vdots \\ a_{n,0} & \cdots & a_{n,m} \\ \end{array}\right]^T = \left[\begin{array}{ccc} a_{0,0} & \cdots & a_{n,0} \\ \vdots & \ddots & \vdots \\ a_{0,m} & \cdots & a_{n,m} \\ \end{array}\right]$$
###Code
#multiplication between two matrices
print('A x B')
print(A * B)
print('\n{}\n'.format('-'*40))
print('B x A')
print(B * A)
###Output
A x B
[[26.0, 90.0, 212.0]
[52.0, 180.0, 424.0]
[78.0, 270.0, 636.0]]
----------------------------------------
B x A
[[8.0, 16.0, 24.0]
[36.0, 72.0, 108.0]
[254.0, 508.0, 762.0]]
###Markdown
Multiplication method for vectors and matrices multiplication between vectors$$\boldsymbol{a} = \left[\begin{array}{ccc} a_0 & \cdots & a_n \\ \end{array}\right], \boldsymbol{b} = \left[\begin{array}{ccc} b_0 \\ \vdots \\ b_n \\ \end{array}\right]$$$$\boldsymbol{a} \times \boldsymbol{b} = \sum^{n}_{i=0}a_i \times b_i$$$$\boldsymbol{b} \times \boldsymbol{a} = \left[\begin{array}{ccc} b_0 \times a_0 & \cdots & b_0 \times a_n \\ \vdots & \ddots & \vdots \\ b_n \times a_0 & \cdots & b_n \times a_n \\ \end{array}\right]$$ multiplication between matrices$$\boldsymbol{A} = \left[\begin{array}{ccc} a_{0,0} & \cdots & a_{0,p} \\ \vdots & \ddots & \vdots \\ a_{m,0} & \cdots & a_{m,p} \\ \end{array}\right], \boldsymbol{B} = \left[\begin{array}{ccc} b_{0,0} & \cdots & b_{0,n} \\ \vdots & \ddots & \vdots \\ b_{q,0} & \cdots & b_{q,n} \\ \end{array}\right]$$$$\boldsymbol{A} \times \boldsymbol{B} = \left[\begin{array}{ccc} \sum^{p}_{k=0}a_{0,k} \times b_{k,0} & \cdots & \sum^{p}_{k=0}a_{0,k} \times b_{k,n} \\ \vdots & \ddots & \vdots \\ \sum^{p}_{k=0}a_{m,k} \times b_{k,0} & \cdots & \sum^{p}_{k=0}a_{m,k} \times b_{k,n} \\ \end{array}\right](p=q)$$$$\boldsymbol{B} \times \boldsymbol{A} = \left[\begin{array}{ccc} \sum^{n}_{k=0}b_{0,k} \times a_{k,0} & \cdots & \sum^{n}_{k=0}b_{0,k} \times a_{k,p} \\ \vdots & \ddots & \vdots \\ \sum^{n}_{k=0}b_{q,k} \times a_{k,0} & \cdots & \sum^{n}_{k=0}b_{q,k} \times a_{k,p} \\ \end{array}\right](n=m)$$ ::::: Note :::::1. The number of columns of the first vector/matrix must match the number of rows of the second vector/matrix in order to compute their product.2. Unlike addition/subtraction, the order in which vectors/matrices are multiplied changes the result.
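As a quick cross-check of the products computed above, we can compare against NumPy (a sketch that assumes the `Matrix` class stores its rows in the `.matrix` attribute, as used in the transpose cell):
###Code
import numpy as np
A_np = np.array(A.matrix, dtype=float)
B_np = np.array(B.matrix, dtype=float)
print('A x B (NumPy)')
print(A_np @ B_np)   # should match the [[26, 90, 212], ...] result above
print('\n{}\n'.format('-'*40))
print('B x A (NumPy)')
print(B_np @ A_np)   # should match the [[8, 16, 24], ...] result above
###Output
_____no_output_____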
###Code
#determinant of the two matrices
print('det(A) = {}'.format(A.det()))
print('\n{}\n'.format('-'*40))
print('det(B) = {}'.format(B.det()))
###Output
det(A) = 0.0
----------------------------------------
det(B) = -2.0
###Markdown
Computation of the determinant of a matrix For matrices with the size 2x2$$\boldsymbol{A} = \left[\begin{array}{ccc} a_{0,0} & a_{0,1} \\ a_{1,0} & a_{1,1} \\ \end{array}\right]$$$$det(\boldsymbol{A}) = a_{0,0} \times a_{1,1} - a_{0,1} \times a_{1,0}$$ For matrices with the size above 2x2 Find the triangular matrix of the matrix$$\boldsymbol{A} = \left[\begin{array}{ccc} a_{0,0} & \cdots & a_{0,n} \\ \vdots & \ddots & \vdots \\ a_{n,0} & \cdots & a_{n,n} \end{array}\right],\boldsymbol{U} = \left[\begin{array}{cccc} u_{0,0} & u_{0,1} & \cdots & u_{0,n} \\ 0 & u_{1,1} & \cdots & u_{1,n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & u_{n,n} \\ \end{array}\right]$$- __A__ is the original matrix, and __U__ is an upper triangular matrix obtained from __A__ by row reduction (each row swap flips the sign of the determinant). Computation of the determinant After the triangular matrix of __A__ has been computed, the determinant is the product of the diagonal entries of __U__ (up to the sign from row swaps)$$det(\boldsymbol{A}) = \pm\prod^{n}_{i=0}u_{i,i}$$ ::::: Note :::::- To find the determinant of a matrix, the matrix must be a square matrix
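The reduction to triangular form can be sketched in plain NumPy (an illustration of the idea only; `Matrix.det()` above is the class's own implementation):
###Code
import numpy as np
def det_by_triangularization(rows):
    """Compute a determinant by reducing the matrix to upper triangular form."""
    U = np.array(rows, dtype=float)
    n = U.shape[0]
    sign = 1.0
    for i in range(n):
        if U[i, i] == 0:
            # swap in a row with a non-zero pivot (each swap flips the determinant's sign)
            nonzero = np.nonzero(U[i:, i])[0]
            if len(nonzero) == 0:
                return 0.0
            j = i + nonzero[0]
            U[[i, j]] = U[[j, i]]
            sign = -sign
        for k in range(i + 1, n):
            U[k] -= (U[k, i] / U[i, i]) * U[i]   # eliminate entries below the pivot
    return sign * np.prod(np.diag(U))
print(det_by_triangularization(A.matrix))   # expected 0.0
print(det_by_triangularization(B.matrix))   # expected -2.0
###Output
_____no_output_____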
###Code
#Inverse matrix of each matrix
print('1/A')
print(A.inv())
print('\n{}\n'.format('-'*40))
print('1/B')
print(B.inv())
###Output
1/A
This matrix does not have an inverse matrix.
None
----------------------------------------
1/B
Successfully inverted matrix!
[[-9.0, 9.0, -1.0]
[-9.0, 16.0, -2.0]
[5.0, -8.0, 1.0]]
|
lesson04/.ipynb_checkpoints/inClass_lesson04-checkpoint.ipynb
|
###Markdown
Starting on numerical solver libraries
###Code
# install our usual things
import numpy as np
import matplotlib.pyplot as plt
# for plotting
%matplotlib inline
M1 = 0.0009 # Mjupiter in Msun
M2 = 1.0 # Msun
rp = 1.0 # AU
vp = 35.0 # km/s
# create my initial vectors
r_0 = np.array([[rp, 0], [0, 0]])
v_0 = np.array([[0, vp], [0, 0]])
delta_t = 1e5 # seconds
n_steps = 5000
# importing a specific function from this library
from hermite_library import do_euler_2body
r_eu, v_eu, t_eu, E_eu = do_euler_2body(M1, M2, r_0, v_0, n_steps, delta_t)
r_eu
r_eu.shape
###Output
_____no_output_____
###Markdown
Let's make simple plots:
###Code
# we'll plot the first particle's trajectory
# r_eu[TIME STEPS, PARTICLE NUMBER, X or Y coord]
plt.plot(r_eu[:,0,0], r_eu[:,0,1])
plt.plot(r_eu[:,1,0], r_eu[:,1,1])
plt.show()
plt.plot(t_eu, E_eu)
plt.show()
###Output
_____no_output_____
###Markdown
Let's make a fancy plot!
###Code
# we are doing 1 row of plots with 2 columns of plots
# and making sure that our plot is 2X as wide as long
fig, ax = plt.subplots(1,2, figsize=(6*2, 6))
# r_eu[TIME STEPS, PARTICLE NUMBER, X or Y coord]
ax[0].plot(r_eu[:,0,0], r_eu[:,0,1], color='red')
ax[0].plot(r_eu[:,1,0], r_eu[:,1,1])
# making extra fancy with labels
ax[0].set_xlabel('x in AU')
ax[0].set_ylabel('y in AU')
# now I'll make energy as a function of time
ax[1].plot(t_eu, E_eu)
ax[1].set_xlabel('Time in Seconds')
ax[1].set_ylabel('Energy, Normalized')
plt.show()
###Output
_____no_output_____
###Markdown
Using higher order solver
###Code
from hermite_library import do_hermite
star_mass = 1.0 # Msun, M2 from Euler
planet_mass = np.array( [1.0] ) # Mjupiter masses, M2 = 0.0009 Msun
# array with each entry [x position, y position, z position]
# units are AU
# for now -> NO Z POSITION
planet_initial_position = np.array([ [rp, 0, 0] ])
# array with each entry [vx, vy, vz]
# units are km/s
# for now -> NO VZ velocity
planet_initial_velocity = np.array([ [0, vp, 0] ])
# this assumes that the star is at position (0, 0, 0)
# and has NO initial velocity, so its velocity is also (0,0,0)
r_h, v_h, t_h, E_h = do_hermite(star_mass,
planet_mass,
planet_initial_position,
planet_initial_velocity,
tfinal=delta_t*n_steps,
Nsteps=n_steps)
r_h
r_h.shape
# r_h has the following format for its indicies
# r_h[NUMBER OF PARTICLES, NUMBER COORDINATES (X,Y,Z), NUMBER OF TIMESTEPS]
# particle number 1, Hermite solution
plt.plot(r_h[0,0,:], r_h[0,1,:])
# particle number 1, Euler solution
plt.plot(r_eu[:,0,0], r_eu[:,0,1])
#particle number 2, Hermite solution
plt.plot(r_h[1, 0, :], r_h[1, 1, :])
# particle number 2, Euler Solution
plt.plot(r_eu[:,1,0], r_eu[:,1,1])
plt.show()
# we are doing 1 row of plots with 2 columns of plots
# and making sure that our plot is 2X as wide as long
fig, ax = plt.subplots(1,2, figsize=(6*2, 6))
# PARTICLE #1
# r_eu[TIME STEPS, PARTICLE NUMBER, X or Y coord]
ax[0].plot(r_eu[:,0,0], r_eu[:,0,1], color='red')
# r_h[NUMBER OF PARTICLES, NUMBER COORDINATES (X,Y,Z), NUMBER OF TIMESTEPS]
ax[0].plot(r_h[0,0,:], r_h[0,1,:]) # plot on axis #1, or indexed to zero
# PARTICLE #2
ax[0].plot(r_eu[:,1,0], r_eu[:,1,1]) # Euler's solution
ax[0].plot(r_h[1, 0, :], r_h[1, 1, :]) # Hermite
# making extra fancy with labels
ax[0].set_xlabel('x in AU')
ax[0].set_ylabel('y in AU')
# now I'll make energy as a function of time
ax[1].plot(t_eu, E_eu)
ax[1].plot(t_h, E_h)
ax[1].set_xlabel('Time in Seconds')
ax[1].set_ylabel('Energy, Normalized')
plt.show()
###Output
_____no_output_____
###Markdown
FINALLY - N-body
###Code
star_mass = 1.0 # Msun, M2 from Euler
planet_mass = np.array( [1.0, 0.5] ) # Mjupiter masses, M2 = 0.0009 Msun
# array with each entry [x position, y position, z position]
# units are AU
# for now -> NO Z POSITION
planet_initial_position = np.array([ [1.0, 0, 0],
[0, 2.0, 0]])
# array with each entry [vx, vy, vz]
# units are km/s
# for now -> NO VZ velocity
planet_initial_velocity = np.array([ [0, 35.0, 0],
[35.0, 0, 0]])
# this assumes that the star is at position (0, 0, 0)
# and has NO initial velocity, so its velocity is also (0,0,0)
r_h, v_h, t_h, E_h = do_hermite(star_mass,
planet_mass,
planet_initial_position,
planet_initial_velocity,
tfinal=delta_t*n_steps,
Nsteps=n_steps)
# we are doing 1 row of plots with 2 columns of plots
# and making sure that our plot is 2X as wide as long
fig, ax = plt.subplots(1,2, figsize=(6*2, 6))
# PARTICLE #1
# r_h[NUMBER OF PARTICLES, NUMBER COORDINATES (X,Y,Z), NUMBER OF TIMESTEPS]
#ax[0].plot(r_h[0,0,:], r_h[0,1,:]) # plot on axis #1, or indexed to zero
# PARTICLE #2
#ax[0].plot(r_h[1, 0, :], r_h[1, 1, :]) # Hermite
# PARTICLE #3
#ax[0].plot(r_h[2, 0, :], r_h[2, 1, :])
for i in range(r_h.shape[0]): # loop over number of planets+star
ax[0].plot(r_h[i, 0, :], r_h[i, 1, :])
# making extra fancy with labels
ax[0].set_xlabel('x in AU')
ax[0].set_ylabel('y in AU')
# also set a different-than-default size
ax[0].set_xlim(-5, 5)
ax[0].set_ylim(-5, 5)
# now I'll make energy as a function of time
ax[1].plot(t_h, E_h)
ax[1].set_xlabel('Time in Seconds')
ax[1].set_ylabel('Energy, Normalized')
plt.show()
###Output
_____no_output_____
|
notebooks/8-relu_softmax_early_stopping.ipynb
|
###Markdown
ReLU, softmax, early stopping Network 1
Construct a fully-connected network with two hidden layers of 50 neurons each with ReLU activation functions. Use a softmax output layer with cross-entropy cost function (these are MATLAB's default settings). The precise layout is as follows:
$\bullet$ imageInputLayer
$\bullet$ fullyConnectedLayer of 50 neurons (followed by reluLayer)
$\bullet$ fullyConnectedLayer of 50 neurons (followed by reluLayer)
$\bullet$ fullyConnectedLayer of 10 neurons followed by softmaxLayer and classificationLayer
Train the network using stochastic gradient descent with Momentum = 0.9 (SGDM). Train for at most 400 epochs, MiniBatchSize = 8192 and InitialLearningRate = 0.001. Shuffle the training set before each epoch. Use ValidationPatience = 3 and ValidationFrequency = 30 on the validation set as indicators for early stopping. Use the trained network for calculating the classification errors for the training, validation, and test sets separately.
Load data
###Code
import keras
from keras.datasets import cifar10
from keras.models import Sequential
from keras.layers import Dense, Activation, Flatten
from keras.optimizers import SGD
from matplotlib import pyplot as plt
from keras.utils import np_utils
from keras.callbacks import EarlyStopping, ModelCheckpoint
import numpy as np
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
###Output
_____no_output_____
###Markdown
Initialize network
###Code
#Network 1
batch_size = 8192
num_classes = 10
epochs = 400
initial_learning_rate = 0.001
momentum = 0.9
model_name = 'keras_cifar10_trained_model.h5'
input_shape = x_train.shape[1:]
# Normalize data.
y_train = np_utils.to_categorical(y_train, num_classes)
y_test = np_utils.to_categorical(y_test, num_classes)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
seed = 6
np.random.seed(seed)
model = Sequential()
model.add(Flatten(input_shape=input_shape))
model.add(Dense(50))
model.add(Activation('relu'))
model.add(Dense(50))
model.add(Activation('relu'))
model.add(Dense(num_classes))
model.add(Activation('softmax'))
sgd = SGD(lr=initial_learning_rate, decay=0, momentum=momentum, nesterov=False)  # lr = 0.001, momentum = 0.9 as defined above
#Compile the model
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['categorical_accuracy'])
model.summary()
###Output
Model: "sequential_9"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
flatten_9 (Flatten) (None, 3072) 0
_________________________________________________________________
dense_26 (Dense) (None, 50) 153650
_________________________________________________________________
activation_26 (Activation) (None, 50) 0
_________________________________________________________________
dense_27 (Dense) (None, 50) 2550
_________________________________________________________________
activation_27 (Activation) (None, 50) 0
_________________________________________________________________
dense_28 (Dense) (None, 10) 510
_________________________________________________________________
activation_28 (Activation) (None, 10) 0
=================================================================
Total params: 156,710
Trainable params: 156,710
Non-trainable params: 0
_________________________________________________________________
###Markdown
Fit the model
###Code
#Make a callback for early stopping
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=3,restore_best_weights=True)
#mc = ModelCheckpoint('best_model.h5', monitor='val_loss', mode='min', verbose=0)
#Fit the model (keep the History object separate so the trained model is still usable afterwards)
history = model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_data=(x_test, y_test), shuffle=True, callbacks=[es], verbose=1, validation_freq=3)
plt.figure(0)
plt.plot(history.history['categorical_accuracy'],'r')
m_accuracy = history.history['val_categorical_accuracy']
plt.plot(np.arange(len(m_accuracy))*3+3,m_accuracy,'g')
plt.rcParams['figure.figsize'] = (8, 6)
plt.xlabel("Num of Epochs")
plt.ylabel("Accuracy")
plt.title("Training Accuracy vs Validation Accuracy")
plt.legend(['train','validation'])
plt.figure(1)
plt.plot(history.history['loss'],'r')
m_loss = history.history['val_loss']
plt.plot(np.arange(len(m_loss))*3+3,m_loss,'g')
plt.rcParams['figure.figsize'] = (8, 6)
plt.xlabel("Num of Epochs")
plt.ylabel("Loss")
plt.title("Training Loss vs Validation Loss")
plt.legend(['train','validation'])
plt.show()
###Output
_____no_output_____
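###Markdown
The task above also asks for the classification errors on the training, validation, and test sets. A minimal sketch (note that this notebook passes the CIFAR-10 test set as the validation data in `model.fit`, so there is no third, separate split here):
###Code
train_loss, train_acc = model.evaluate(x_train, y_train, verbose=0)
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=0)
print('Training classification error:        {:.4f}'.format(1 - train_acc))
print('Validation/test classification error: {:.4f}'.format(1 - test_acc))
# The same two evaluate() calls can be reused after training Networks 2 and 3 below.
###Output
_____no_output_____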
###Markdown
Network 2
Construct a deep fully-connected network with three hidden layers of 50 neurons. Let the ReLU activation function (reluLayer) be applied to the output of the hidden layers. Use softmax outputs (softmaxLayer) and classify using a classificationLayer. Train the network on the training set for at most 400 epochs with Momentum = 0.9, MiniBatchSize = 8192, InitialLearningRate = 0.003, and ValidationPatience = 3, ValidationFrequency = 30. Shuffle the training set before each epoch. Calculate the classification errors obtained on the training, validation, and test sets separately.
Initialize network
###Code
#Network 2
batch_size = 8192
num_classes = 10
epochs = 400
initial_learning_rate = 0.003  # matches the InitialLearningRate specified in the description above
momentum = 0.9
model_name = 'keras_cifar10_trained_model.h5'
input_shape = x_train.shape[1:]
model = Sequential()
model.add(Flatten(input_shape=input_shape))
model.add(Dense(50))
model.add(Activation('relu'))
model.add(Dense(50))
model.add(Activation('relu'))
model.add(Dense(50))
model.add(Activation('relu'))
model.add(Dense(num_classes))
model.add(Activation('softmax'))
sgd = SGD(lr=initial_learning_rate, decay=0, momentum=momentum, nesterov=False)  # use the learning rate and momentum defined above
#Compile the model
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['categorical_accuracy'])
model.summary()
###Output
Model: "sequential_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
flatten_3 (Flatten) (None, 3072) 0
_________________________________________________________________
dense_7 (Dense) (None, 50) 153650
_________________________________________________________________
activation_7 (Activation) (None, 50) 0
_________________________________________________________________
dense_8 (Dense) (None, 50) 2550
_________________________________________________________________
activation_8 (Activation) (None, 50) 0
_________________________________________________________________
dense_9 (Dense) (None, 50) 2550
_________________________________________________________________
activation_9 (Activation) (None, 50) 0
_________________________________________________________________
dense_10 (Dense) (None, 10) 510
_________________________________________________________________
activation_10 (Activation) (None, 10) 0
=================================================================
Total params: 159,260
Trainable params: 159,260
Non-trainable params: 0
_________________________________________________________________
###Markdown
Fit the model
###Code
#Make a callback for early stopping
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=3,restore_best_weights=True)
#mc = ModelCheckpoint('best_model.h5', monitor='val_loss', mode='min', verbose=0)
#Fit the model (keep the History object separate so the trained model is still usable afterwards)
history = model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_data=(x_test, y_test), shuffle=True, callbacks=[es], verbose=1, validation_freq=3)
plt.figure(0)
plt.plot(history.history['categorical_accuracy'],'r')
m_accuracy = history.history['val_categorical_accuracy']
plt.plot(np.arange(len(m_accuracy))*3+3,m_accuracy,'g')
plt.rcParams['figure.figsize'] = (8, 6)
plt.xlabel("Num of Epochs")
plt.ylabel("Accuracy")
plt.title("Training Accuracy vs Validation Accuracy")
plt.legend(['train','validation'])
plt.figure(1)
plt.plot(history.history['loss'],'r')
m_loss = history.history['val_loss']
plt.plot(np.arange(len(m_loss))*3+3,m_loss,'g')
plt.rcParams['figure.figsize'] = (8, 6)
plt.xlabel("Num of Epochs")
plt.ylabel("Loss")
plt.title("Training Loss vs Validation Loss")
plt.legend(['train','validation'])
plt.show()
###Output
_____no_output_____
###Markdown
Take Network 1 and tune the $L_2$-regularization parameter (L2Regularization) to 0.2. Train the network on the training set for at most 400 epochs with MiniBatchSize = 8192 and InitialLearningRate = 0.001. Use ValidationPatience = 3 and ValidationFrequency = 30 on the validation set as indicator for early stopping. Shuffle the training data before each epoch. Calculate the classification errors obtained on the training, validation, and test sets separately. Initialize Network
###Code
from keras.regularizers import l2
#Network 3
batch_size = 8192
num_classes = 10
epochs = 400
initial_learning_rate = 0.001
momentum = 0.9
model_name = 'keras_cifar10_trained_model.h5'
input_shape = x_train.shape[1:]
# Data was already normalized and one-hot encoded for Network 1 above.
# Re-applying to_categorical and the /255 scaling here would corrupt the inputs, so we skip it.
seed = 6
np.random.seed(seed)
model = Sequential()
model.add(Flatten(input_shape=input_shape))
model.add(Dense(50, kernel_regularizer=l2(0.01)))
model.add(Activation('relu'))
model.add(Dense(50, kernel_regularizer=l2(0.01)))
model.add(Activation('relu'))
model.add(Dense(num_classes))
model.add(Activation('softmax'))
sgd = SGD(lr=initial_learning_rate, decay=0, momentum=momentum, nesterov=False)  # lr = 0.001, momentum = 0.9 as defined above
#Compile the model
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['categorical_accuracy'])
model.summary()
###Output
Model: "sequential_4"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
flatten_4 (Flatten) (None, 3072) 0
_________________________________________________________________
dense_7 (Dense) (None, 50) 153650
_________________________________________________________________
activation_7 (Activation) (None, 50) 0
_________________________________________________________________
dense_8 (Dense) (None, 50) 2550
_________________________________________________________________
activation_8 (Activation) (None, 50) 0
_________________________________________________________________
dense_9 (Dense) (None, 10) 510
_________________________________________________________________
activation_9 (Activation) (None, 10) 0
=================================================================
Total params: 156,710
Trainable params: 156,710
Non-trainable params: 0
_________________________________________________________________
###Markdown
Fit the model
###Code
#Make a callback for early stopping
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=3,restore_best_weights=True)
#mc = ModelCheckpoint('best_model.h5', monitor='val_loss', mode='min', verbose=0)
#Fit the model (keep the History object separate so the trained model is still usable afterwards)
history = model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_data=(x_test, y_test), shuffle=True, callbacks=[es], verbose=1, validation_freq=3)
plt.figure(0)
plt.plot(history.history['categorical_accuracy'],'r')
m_accuracy = history.history['val_categorical_accuracy']
plt.plot(np.arange(len(m_accuracy))*3+3,m_accuracy,'g')  # validations run every 3 epochs starting at epoch 3
plt.rcParams['figure.figsize'] = (8, 6)
plt.xlabel("Num of Epochs")
plt.ylabel("Accuracy")
plt.title("Training Accuracy vs Validation Accuracy")
plt.legend(['train','validation'])
plt.figure(1)
plt.plot(history.history['loss'],'r')
m_loss = history.history['val_loss']
plt.plot(np.arange(len(m_loss))*3+3,m_loss,'g')
plt.rcParams['figure.figsize'] = (8, 6)
plt.xlabel("Num of Epochs")
plt.ylabel("Loss")
plt.title("Training Loss vs Validation Loss")
plt.legend(['train','validation'])
plt.show()
###Output
_____no_output_____
|
Chapter09/CustomerLifetimeValue.ipynb
|
###Markdown
1. Load Data
###Code
# pandas (and matplotlib for the plots below) are presumably imported at the top of the notebook;
# they are imported here as well so this section runs standalone
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_excel('../data/Online Retail.xlsx', sheet_name='Online Retail', engine='openpyxl')
df.shape
df.head()
###Output
_____no_output_____
###Markdown
2. Data Clean-Up - Negative Quantity
###Code
df.loc[df['Quantity'] <= 0].shape
df.shape
df = df.loc[df['Quantity'] > 0]
df.shape
###Output
_____no_output_____
###Markdown
- Missing CustomerID
###Code
pd.isnull(df['CustomerID']).sum()
df.shape
df = df[pd.notnull(df['CustomerID'])]
df.shape
df.head()
###Output
_____no_output_____
###Markdown
- Excluding Incomplete Month
###Code
print('Date Range: %s ~ %s' % (df['InvoiceDate'].min(), df['InvoiceDate'].max()))
df.loc[df['InvoiceDate'] >= '2011-12-01'].shape
df.shape
df = df.loc[df['InvoiceDate'] < '2011-12-01']
df.shape
###Output
_____no_output_____
###Markdown
- Total Sales
###Code
df['Sales'] = df['Quantity'] * df['UnitPrice']
df.head()
###Output
_____no_output_____
###Markdown
- Per Order Data
###Code
orders_df = df.groupby(['CustomerID', 'InvoiceNo']).agg({
'Sales': sum,
'InvoiceDate': max
})
orders_df
###Output
_____no_output_____
###Markdown
3. Data Analysis
###Code
def groupby_mean(x):
return x.mean()
def groupby_count(x):
return x.count()
def purchase_duration(x):
return (x.max() - x.min()).days
def avg_frequency(x):
return (x.max() - x.min()).days/x.count()
groupby_mean.__name__ = 'avg'
groupby_count.__name__ = 'count'
purchase_duration.__name__ = 'purchase_duration'
avg_frequency.__name__ = 'purchase_frequency'
summary_df = orders_df.reset_index().groupby('CustomerID').agg({
'Sales': [min, max, sum, groupby_mean, groupby_count],
'InvoiceDate': [min, max, purchase_duration, avg_frequency]
})
summary_df
summary_df.columns = ['_'.join(col).lower() for col in summary_df.columns]
summary_df
summary_df.shape
summary_df = summary_df.loc[summary_df['invoicedate_purchase_duration'] > 0]
summary_df.shape
ax = summary_df.groupby('sales_count').count()['sales_avg'][:20].plot(
kind='bar',
color='skyblue',
figsize=(12,7),
grid=True
)
ax.set_ylabel('count')
plt.show()
summary_df['sales_count'].describe()
summary_df['sales_avg'].describe()
ax = summary_df['invoicedate_purchase_frequency'].hist(
bins=20,
color='skyblue',
rwidth=0.7,
figsize=(12,7)
)
ax.set_xlabel('avg. number of days between purchases')
ax.set_ylabel('count')
plt.show()
summary_df['invoicedate_purchase_frequency'].describe()
summary_df['invoicedate_purchase_duration'].describe()
###Output
_____no_output_____
###Markdown
4. Predicting 3-Month CLV 4.1. Data Preparation
###Code
clv_freq = '3M'
data_df = orders_df.reset_index().groupby([
'CustomerID',
pd.Grouper(key='InvoiceDate', freq=clv_freq)
]).agg({
'Sales': [sum, groupby_mean, groupby_count],
})
data_df.columns = ['_'.join(col).lower() for col in data_df.columns]
data_df = data_df.reset_index()
data_df.head(10)
date_month_map = {
str(x)[:10]: 'M_%s' % (i+1) for i, x in enumerate(
sorted(data_df.reset_index()['InvoiceDate'].unique(), reverse=True)
)
}
data_df['M'] = data_df['InvoiceDate'].apply(lambda x: date_month_map[str(x)[:10]])
date_month_map
data_df.head(10)
###Output
_____no_output_____
###Markdown
- Building Sample Set
###Code
features_df = pd.pivot_table(
data_df.loc[data_df['M'] != 'M_1'],
values=['sales_sum', 'sales_avg', 'sales_count'],
columns='M',
index='CustomerID'
)
features_df.columns = ['_'.join(col) for col in features_df.columns]
features_df.shape
features_df.head(10)
features_df = features_df.fillna(0)
features_df.head()
response_df = data_df.loc[
data_df['M'] == 'M_1',
['CustomerID', 'sales_sum']
]
response_df.columns = ['CustomerID', 'CLV_'+clv_freq]
response_df.shape
response_df.head(10)
sample_set_df = features_df.merge(
response_df,
left_index=True,
right_on='CustomerID',
how='left'
)
sample_set_df.shape
sample_set_df.head(10)
sample_set_df = sample_set_df.fillna(0)
sample_set_df.head()
sample_set_df['CLV_'+clv_freq].describe()
###Output
_____no_output_____
###Markdown
4.2. Regression Models
###Code
from sklearn.model_selection import train_test_split
target_var = 'CLV_'+clv_freq
all_features = [x for x in sample_set_df.columns if x not in ['CustomerID', target_var]]
x_train, x_test, y_train, y_test = train_test_split(
sample_set_df[all_features],
sample_set_df[target_var],
test_size=0.3
)
###Output
_____no_output_____
###Markdown
- Linear Regression Model
###Code
from sklearn.linear_model import LinearRegression
# Try these models as well
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
reg_fit = LinearRegression()
reg_fit.fit(x_train, y_train)
reg_fit.intercept_
coef = pd.DataFrame(list(zip(all_features, reg_fit.coef_)))
coef.columns = ['feature', 'coef']
coef
###Output
_____no_output_____
###Markdown
4.3. Evaluation
###Code
from sklearn.metrics import r2_score, median_absolute_error
train_preds = reg_fit.predict(x_train)
test_preds = reg_fit.predict(x_test)
###Output
_____no_output_____
###Markdown
- R-Squared
###Code
print('In-Sample R-Squared: %0.4f' % r2_score(y_true=y_train, y_pred=train_preds))
print('Out-of-Sample R-Squared: %0.4f' % r2_score(y_true=y_test, y_pred=test_preds))
###Output
In-Sample R-Squared: 0.6547
Out-of-Sample R-Squared: 0.5745
###Markdown
- Median Absolute Error
###Code
print('In-Sample MdAE: %0.4f' % median_absolute_error(y_true=y_train, y_pred=train_preds))
print('Out-of-Sample MdAE: %0.4f' % median_absolute_error(y_true=y_test, y_pred=test_preds))
###Output
In-Sample MdAE: 185.9164
Out-of-Sample MdAE: 178.9355
###Markdown
- Scatter Plot
###Code
plt.scatter(y_train, train_preds)
plt.plot([0, max(y_train)], [0, max(y_train)], color='gray', lw=1, linestyle='--')  # y = x reference (actual == predicted)
plt.xlabel('actual')
plt.ylabel('predicted')
plt.title('In-Sample Actual vs. Predicted')
plt.grid()
plt.show()
plt.scatter(y_test, test_preds)
plt.plot([0, max(y_test)], [0, max(y_test)], color='gray', lw=1, linestyle='--')  # y = x reference (actual == predicted)
plt.xlabel('actual')
plt.ylabel('predicted')
plt.title('Out-of-Sample Actual vs. Predicted')
plt.grid()
plt.show()
###Output
_____no_output_____
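###Markdown
The commented imports in the modeling section suggest trying `SVR` and `RandomForestRegressor` as well. A minimal sketch comparing their out-of-sample fit on the same split (the hyperparameters here are only illustrative defaults):
###Code
for name, candidate in [('SVR', SVR(kernel='rbf')),
                        ('Random Forest', RandomForestRegressor(n_estimators=100, random_state=42))]:
    candidate.fit(x_train, y_train)
    preds = candidate.predict(x_test)
    print('%s Out-of-Sample R-Squared: %0.4f' % (name, r2_score(y_true=y_test, y_pred=preds)))
    print('%s Out-of-Sample MdAE: %0.4f' % (name, median_absolute_error(y_true=y_test, y_pred=preds)))
###Output
_____no_output_____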
|
jupyter-notebooks/hudson-and-thames-quant/Open-Source-Soldier-of-Fortune/mirca-submission/hudson-thames-skill-challenge.ipynb
|
###Markdown
Hudson and Thames Skill Set Challenge Author: [@mirca](https://www.github.com/mirca) Introduction In this notebook, we are going to see how to programmatically access stock price data and construct a minimum spanning tree (MST) graph based on those data. The MST graph will reveal dependencies between stocks, which can be further leveraged in tasks such as hierarchical risk parity portfolio construction.
###Code
import numpy as np
import pandas_datareader as pdr
import requests
import bs4 as bs
import matplotlib.pyplot as plt
import networkx as nx
import seaborn as sns
import pandas as pd
from scipy.cluster.hierarchy import dendrogram, linkage
from core import compute_adjacency_mst_and_distances
###Output
_____no_output_____
###Markdown
Accessing financial data from Yahoo! finance In this step we are going to download stock price data from Yahoo! finance. For that, I chose to pick the 50 most representative stocks of the S&P 500 index. The following lines of code scrape the website https://www.slickcharts.com/sp500 and extract those 50 tickers:
###Code
resp = requests.get('https://www.slickcharts.com/sp500')
soup = bs.BeautifulSoup(resp.text, 'html.parser')
table = soup.find('table', {'class': 'table table-hover table-borderless table-sm'})
stocks = [fn['href'][len("/symbol/"):] for fn in table.find_all('a') if fn['href'].startswith('/symbol')]
stocks_ = [stocks[i] for i in range(len(stocks)) if i % 2 == 0][:50]
print(stocks_)
###Output
['MSFT', 'AAPL', 'AMZN', 'FB', 'BRK.B', 'GOOG', 'GOOGL', 'JNJ', 'JPM', 'V', 'PG', 'T', 'UNH', 'MA', 'HD', 'INTC', 'VZ', 'KO', 'BAC', 'XOM', 'MRK', 'DIS', 'PFE', 'PEP', 'CMCSA', 'CVX', 'ADBE', 'CSCO', 'NVDA', 'WMT', 'NFLX', 'CRM', 'WFC', 'MCD', 'ABT', 'BMY', 'COST', 'BA', 'C', 'PM', 'NEE', 'MDT', 'ABBV', 'PYPL', 'AMGN', 'TMO', 'LLY', 'HON', 'ACN', 'IBM']
###Markdown
Now that we have the ticker names, let's download the price data from Yahoo! finance:
###Code
prices = pdr.get_data_yahoo(stocks_, start = "2018-12-31", end = "2019-12-31")[['Adj Close']]
###Output
/Users/mirca/opt/miniconda3/lib/python3.7/site-packages/pandas_datareader/base.py:270: SymbolWarning: Failed to read symbol: 'BRK.B', replacing with NaN.
warnings.warn(msg.format(sym), SymbolWarning)
###Markdown
Let's drop any stocks that have missing values and let's also clean up the attributes of our data frame:
###Code
prices.dropna(axis='columns', inplace=True)
prices.columns = prices.columns.droplevel(0)
###Output
_____no_output_____
###Markdown
Now we can compute the log returns of the stocks:
###Code
log_returns = prices.apply(np.log).apply(np.diff)
log_returns.head()
###Output
_____no_output_____
###Markdown
In the next line we are going to compute the [adjacency matrix](https://en.wikipedia.org/wiki/Adjacency_matrix) of a minimum spanning tree graph and the corresponding distances between each pair of stocks:
###Code
distances, adjacency = compute_adjacency_mst_and_distances(log_returns)
adjacency_df = pd.DataFrame(adjacency, columns=prices.columns, index=prices.columns)
###Output
_____no_output_____
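###Markdown
The `core` module is not shown in this notebook; as a reference, here is a minimal sketch of what `compute_adjacency_mst_and_distances` might do, assuming the standard correlation-based distance $d_{ij} = \sqrt{2(1 - \rho_{ij})}$ and SciPy's minimum spanning tree (the actual implementation may differ):
###Code
from scipy.sparse.csgraph import minimum_spanning_tree
def compute_adjacency_mst_and_distances_sketch(log_returns):
    """Correlations -> distance matrix -> MST adjacency (illustrative only)."""
    corr = np.corrcoef(log_returns.values.T)       # pairwise correlations between stocks
    dist = np.sqrt(2.0 * (1.0 - corr))             # map correlation to a metric distance
    mst = minimum_spanning_tree(dist).toarray()    # sparse MST over the distance graph
    adj = ((mst + mst.T) > 0).astype(float)        # symmetric 0/1 adjacency matrix
    return dist, adj
###Output
_____no_output_____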
###Markdown
Visualizations of the MST (adjacency matrix and graph) Now let's visualize the adjacency matrix of our tree graph and the graph itself:
###Code
plt.figure(figsize=(10, 10))
g = sns.heatmap(adjacency_df, cbar=False)
plt.show()
###Output
_____no_output_____
###Markdown
The adjacency matrix tells us which stocks are conditionally dependent in the graph. In the plot above, the white squares represent pairs of stocks that are dependent and therefore have an edge between them. Now, let's plot the network itself so that we can recognize its tree structure:
###Code
labels = {v: k for v, k in enumerate(list(adjacency_df.columns))}
plt.figure(figsize=(10, 10))
G = nx.from_numpy_matrix(adjacency)
pos = nx.spring_layout(G)
nx.draw_networkx_nodes(G, pos, node_size=300, alpha=.3)
nx.draw_networkx_edges(G, pos, width=1.0)
_ = nx.draw_networkx_labels(G, pos, labels)
_ = plt.axis('off')
###Output
_____no_output_____
###Markdown
From the plot above it is easier to see some cluster structures, e.g., formed by the tech companies Facebook (FB), Netflix (NFLX), Amazon (AMZN), and Microsoft (MSFT), or pharmaceutical corporations Pfizer (PFE), Eli Lilly and Company (LLY), UnitedHealth Group (UNH), and Merck & Co (MRK). Let's also draw a dendrogram from the distance matrix:
###Code
from scipy.spatial.distance import squareform
# linkage expects a condensed distance vector, not a square (or upper-triangular) matrix
clusters = linkage(squareform(distances, checks=False))
plt.figure(figsize=(15, 5))
dendrogram(clusters, labels=adjacency_df.columns)
plt.xlabel('Tickers', fontsize=12)
plt.ylabel('Cluster Leaves Distances', fontsize=12)
plt.title('Hierarchical Clustering Dendrogram', fontsize=12)
plt.show()
###Output
_____no_output_____
|
notebook/numpy_csv.ipynb
|
###Markdown
NumPy CSV manipulator
###Code
import pandas as pd
import glob
pattern = 'F:\\NY_taxi\\2018\\*.csv'
dirs = glob.glob(pattern)
for path in dirs:
file = pd.read_csv(path)
table = file.loc[2:, [
'tpep_pickup_datetime',
'tpep_dropoff_datetime',
'PULocationID',
'DOLocationID',
'trip_distance',
'passenger_count',
'total_amount']]
table.to_csv(path[:-4]+'_projected.csv', index=False, columns=[
'tpep_pickup_datetime',
'tpep_dropoff_datetime',
'PULocationID',
'DOLocationID',
'trip_distance',
'passenger_count',
'total_amount'])
print(f'{path} finished..')
del table
del file
table = pd.read_csv(dirs[0])
table.columns
table.iloc[:2,:]
type(table.iloc[0,2])
type(table.iloc[0,3])
list(map(type, table.iloc[0, :]))
type(Out[11][0])
paths = glob.glob('tensor_dataset/full_year_2018_15min/tensors/*.pkl')
len(paths)
paths[:5]
ids = {}
for k, v in enumerate(paths):
ids[k] = v
ids[1]
len(ids)
35040 // 128 * 128
###Output
_____no_output_____
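###Markdown
The loop above loads each full CSV into memory before selecting columns; for files as large as the NYC taxi data, a chunked variant is usually safer. A sketch (assuming the same column names as above):
###Code
cols = ['tpep_pickup_datetime', 'tpep_dropoff_datetime', 'PULocationID',
        'DOLocationID', 'trip_distance', 'passenger_count', 'total_amount']
for path in dirs:
    out_path = path[:-4] + '_projected.csv'
    # stream the file in 1,000,000-row chunks instead of reading it all at once
    for i, chunk in enumerate(pd.read_csv(path, usecols=cols, chunksize=1_000_000)):
        chunk.to_csv(out_path, mode='w' if i == 0 else 'a', header=(i == 0), index=False)
    print(f'{path} finished..')
###Output
_____no_output_____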
|
nb_sci_maths/maths_algebra_solve_systems_of_linear_equations_fr.ipynb
|
###Markdown
Méthode par élimination (ou combinaison) Exemple 1
###Code
Ab = np.array([[ 1, 5, 7],
[-2, -7, -5]])
print(complete_matrix_to_latex(Ab))
plot_linear_system(Ab);
Ab = np.array([[1, 5, 7],
[0, 3, 9]])
print(complete_matrix_to_latex(Ab))
plot_linear_system(Ab);
Ab = np.array([[1, 5, 7],
[0, 1, 3]])
print(complete_matrix_to_latex(Ab))
plot_linear_system(Ab);
Ab = np.array([[1, 0, -8],
[0, 1, 3]])
print(complete_matrix_to_latex(Ab))
plot_linear_system(Ab);
###Output
_____no_output_____
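###Markdown
To verify the reduced system, a quick check with NumPy (assuming `np` is imported at the top of the notebook, as the `np.array` calls above suggest):
###Code
A = np.array([[1, 5], [-2, -7]], dtype=float)
b = np.array([7, -5], dtype=float)
print(np.linalg.solve(A, b))  # expected: [-8.  3.], matching the reduced matrix above
###Output
_____no_output_____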
|
notebooks/community/analytics-componetized-patterns/retail/recommendation-system/bqml-scann/tfx01_interactive.ipynb
|
###Markdown
Create an interactive TFX pipeline
This notebook is the first of two notebooks that guide you through automating the [Real-time Item-to-item Recommendation with BigQuery ML Matrix Factorization and ScaNN](https://github.com/GoogleCloudPlatform/analytics-componentized-patterns/tree/master/retail/recommendation-system/bqml-scann) solution with a pipeline.
Use this notebook to create and run a [TFX](https://www.tensorflow.org/tfx) pipeline that performs the following steps:
1. Compute PMI on item co-occurrence data by using a [custom Python function](https://www.tensorflow.org/tfx/guide/custom_function_component) component.
2. Train a BigQuery ML matrix factorization model on the PMI data to learn item embeddings by using a custom Python function component.
3. Extract the embeddings from the model to a BigQuery table by using a custom Python function component.
4. Export the embeddings in [TFRecord](https://www.tensorflow.org/tutorials/load_data/tfrecord) format by using the standard [BigQueryExampleGen](https://www.tensorflow.org/tfx/api_docs/python/tfx/extensions/google_cloud_big_query/example_gen/component/BigQueryExampleGen) component.
5. Import the schema for the embeddings by using the standard [ImporterNode](https://www.tensorflow.org/tfx/api_docs/python/tfx/components/ImporterNode) component.
6. Validate the embeddings against the imported schema by using the standard [StatisticsGen](https://www.tensorflow.org/tfx/guide/statsgen) and [ExampleValidator](https://www.tensorflow.org/tfx/guide/exampleval) components.
7. Create an embedding lookup SavedModel by using the standard [Trainer](https://www.tensorflow.org/tfx/api_docs/python/tfx/components/Trainer) component.
8. Push the embedding lookup model to a model registry directory by using the standard [Pusher](https://www.tensorflow.org/tfx/guide/pusher) component.
9. Build the ScaNN index by using the standard Trainer component.
10. Evaluate and validate the ScaNN index latency and recall by implementing a [TFX custom component](https://www.tensorflow.org/tfx/guide/custom_component).
11. Push the ScaNN index to a model registry directory by using the standard Pusher component.
The [tfx_pipeline](tfx_pipeline) directory contains the source code for the TFX pipeline implementation. Before starting this notebook, you must run the [00_prep_bq_procedures](00_prep_bq_procedures.ipynb) notebook to complete the solution prerequisites.
After completing this notebook, run the [tfx02_deploy_run](tfx02_deploy_run.ipynb) notebook to deploy the pipeline.
Setup
Import the required libraries, configure the environment variables, and authenticate your GCP account.
###Code
%load_ext autoreload
%autoreload 2
!pip install -U -q tfx
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import logging
import os
import numpy as np
import tensorflow as tf
import tensorflow_data_validation as tfdv
import tfx
from tensorflow_transform.tf_metadata import schema_utils
logging.getLogger().setLevel(logging.INFO)
print("Tensorflow Version:", tf.__version__)
print("TFX Version:", tfx.__version__)
###Output
_____no_output_____
###Markdown
Configure GCP environment settings
Update the following variables to reflect the values for your GCP environment:
+ `PROJECT_ID`: The ID of the Google Cloud project you are using to implement this solution.
+ `BUCKET`: The name of the Cloud Storage bucket you created to use with this solution. The `BUCKET` value should be just the bucket name, so `myBucket` rather than `gs://myBucket`.
###Code
PROJECT_ID = "yourProject" # Change to your project.
BUCKET = "yourBucket" # Change to the bucket you created.
BQ_DATASET_NAME = "recommendations"
ARTIFACT_STORE = f"gs://{BUCKET}/tfx_artifact_store"
LOCAL_MLMD_SQLLITE = "mlmd/mlmd.sqllite"
PIPELINE_NAME = "tfx_bqml_scann"
EMBEDDING_LOOKUP_MODEL_NAME = "embeddings_lookup"
SCANN_INDEX_MODEL_NAME = "embeddings_scann"
PIPELINE_ROOT = os.path.join(ARTIFACT_STORE, f"{PIPELINE_NAME}_interactive")
MODEL_REGISTRY_DIR = os.path.join(ARTIFACT_STORE, "model_registry_interactive")
!gcloud config set project $PROJECT_ID
###Output
_____no_output_____
###Markdown
Authenticate your GCP account
This is required if you run the notebook in Colab. If you use an AI Platform notebook, you should already be authenticated.
###Code
try:
from google.colab import auth
auth.authenticate_user()
print("Colab user is authenticated.")
except:
pass
###Output
_____no_output_____
###Markdown
Instantiate the interactive context
Instantiate an [interactive context](https://www.tensorflow.org/tfx/api_docs/python/tfx/orchestration/experimental/interactive/interactive_context/InteractiveContext) so that you can execute the TFX pipeline components interactively in the notebook. The interactive context creates a local SQLite database in the `LOCAL_MLMD_SQLLITE` directory to use as its [ML Metadata (MLMD)](https://github.com/google/ml-metadata) store.
###Code
CLEAN_ARTIFACTS = True
if CLEAN_ARTIFACTS:
if tf.io.gfile.exists(PIPELINE_ROOT):
print("Removing previous artifacts...")
tf.io.gfile.rmtree(PIPELINE_ROOT)
if tf.io.gfile.exists("mlmd"):
print("Removing local mlmd SQLite...")
tf.io.gfile.rmtree("mlmd")
if not tf.io.gfile.exists("mlmd"):
print("Creating mlmd directory...")
tf.io.gfile.mkdir("mlmd")
print(f"Pipeline artifacts directory: {PIPELINE_ROOT}")
print(f"Model registry directory: {MODEL_REGISTRY_DIR}")
print(f"Local metadata SQLlit path: {LOCAL_MLMD_SQLLITE}")
import ml_metadata as mlmd
from ml_metadata.proto import metadata_store_pb2
from tfx.orchestration.experimental.interactive.interactive_context import \
InteractiveContext
connection_config = metadata_store_pb2.ConnectionConfig()
connection_config.sqlite.filename_uri = LOCAL_MLMD_SQLLITE
connection_config.sqlite.connection_mode = 3 # READWRITE_OPENCREATE
mlmd_store = mlmd.metadata_store.MetadataStore(connection_config)
context = InteractiveContext(
pipeline_name=PIPELINE_NAME,
pipeline_root=PIPELINE_ROOT,
metadata_connection_config=connection_config,
)
###Output
_____no_output_____
###Markdown
Executing the pipeline steps
The components that implement the pipeline steps are in the [tfx_pipeline/bq_components.py](tfx_pipeline/bq_components.py) module.
###Code
from tfx_pipeline import bq_components
###Output
_____no_output_____
###Markdown
Step 1: Compute PMI
Run the `pmi_computer` step, which is an instance of the `compute_pmi` custom Python function component. This component executes the [sp_ComputePMI](sql_scripts/sp_ComputePMI.sql) stored procedure in BigQuery and returns the name of the resulting table as a custom property.
###Code
pmi_computer = bq_components.compute_pmi(
project_id=PROJECT_ID,
bq_dataset=BQ_DATASET_NAME,
min_item_frequency=15,
max_group_size=100,
)
context.run(pmi_computer)
pmi_computer.outputs.item_cooc.get()[0].get_string_custom_property("bq_result_table")
###Output
_____no_output_____
###Markdown
Step 2: Train the BigQuery ML matrix factorization model
Run the `bqml_trainer` step, which is an instance of the `train_item_matching_model` custom Python function component. This component executes the [sp_TrainItemMatchingModel](sql_scripts/sp_TrainItemMatchingModel.sql) stored procedure in BigQuery and returns the name of the resulting model as a custom property.
###Code
bqml_trainer = bq_components.train_item_matching_model(
project_id=PROJECT_ID,
bq_dataset=BQ_DATASET_NAME,
item_cooc=pmi_computer.outputs.item_cooc,
dimensions=50,
)
context.run(bqml_trainer)
bqml_trainer.outputs.bq_model.get()[0].get_string_custom_property("bq_model_name")
###Output
_____no_output_____
###Markdown
Step 3: Extract the trained embeddings
Run the `embeddings_extractor` step, which is an instance of the `extract_embeddings` custom Python function component. This component executes the [sp_ExractEmbeddings](sql_scripts/sp_ExractEmbeddings.sql) stored procedure in BigQuery and returns the name of the resulting table as a custom property.
###Code
embeddings_extractor = bq_components.extract_embeddings(
project_id=PROJECT_ID,
bq_dataset=BQ_DATASET_NAME,
bq_model=bqml_trainer.outputs.bq_model,
)
context.run(embeddings_extractor)
embeddings_extractor.outputs.item_embeddings.get()[0].get_string_custom_property(
"bq_result_table"
)
###Output
_____no_output_____
###Markdown
Step 4: Export the embeddings in TFRecord format
Run the `embeddings_exporter` step, which is an instance of the `BigQueryExampleGen` standard component. This component uses a SQL query to read the embedding records from BigQuery and produces an [Examples](https://www.tensorflow.org/tfx/api_docs/python/tfx/types/standard_artifacts/Examples) artifact containing training and evaluation datasets as an output. It then exports these datasets in TFRecord format by using a Beam pipeline. This pipeline can be run using the [DirectRunner or DataflowRunner](https://beam.apache.org/documentation/runners/capability-matrix/). Note that in this interactive context, the number of embedding records read is limited to 1,000, and the runner of the Beam pipeline is set to DirectRunner.
###Code
from tfx.extensions.google_cloud_big_query.example_gen.component import \
BigQueryExampleGen
from tfx.proto import example_gen_pb2
query = f"""
SELECT item_Id, embedding, bias,
FROM {BQ_DATASET_NAME}.item_embeddings
LIMIT 1000
"""
output_config = example_gen_pb2.Output(
split_config=example_gen_pb2.SplitConfig(
splits=[example_gen_pb2.SplitConfig.Split(name="train", hash_buckets=1)]
)
)
embeddings_exporter = BigQueryExampleGen(query=query, output_config=output_config)
beam_pipeline_args = [
"--runner=DirectRunner",
f"--project={PROJECT_ID}",
f"--temp_location=gs://{BUCKET}/bqml_scann/beam/temp",
]
context.run(embeddings_exporter, beam_pipeline_args=beam_pipeline_args)
###Output
_____no_output_____
###Markdown
Step 5: Import the schema for the embeddings
Run the `schema_importer` step, which is an instance of the `ImporterNode` standard component. This component reads the [schema.pbtxt](tfx_pipeline/schema/schema.pbtxt) file from the solution's `schema` directory, and produces a `Schema` artifact as an output. The schema is used to validate the embedding files exported from BigQuery, and to parse the embedding records in the TFRecord files when they are read in the training components.
###Code
schema_importer = tfx.components.ImporterNode(
source_uri="tfx_pipeline/schema",
artifact_type=tfx.types.standard_artifacts.Schema,
instance_name="SchemaImporter",
)
context.run(schema_importer)
context.show(schema_importer.outputs.result)
###Output
_____no_output_____
###Markdown
Read a sample embedding from the exported TFRecord files using the schema
###Code
schema_file = schema_importer.outputs.result.get()[0].uri + "/schema.pbtxt"
schema = tfdv.load_schema_text(schema_file)
feature_spec = schema_utils.schema_as_feature_spec(schema).feature_spec
data_uri = embeddings_exporter.outputs.examples.get()[0].uri + "/train/*"
def _gzip_reader_fn(filenames):
return tf.data.TFRecordDataset(filenames, compression_type="GZIP")
dataset = tf.data.experimental.make_batched_features_dataset(
data_uri,
batch_size=1,
num_epochs=1,
    features=feature_spec,
reader=_gzip_reader_fn,
shuffle=True,
)
counter = 0
for _ in dataset:
counter += 1
print(f"Number of records: {counter}")
print("")
for batch in dataset.take(1):
print(f'item: {batch["item_Id"].numpy()[0][0].decode()}')
print(f'embedding vector: {batch["embedding"].numpy()[0]}')
###Output
_____no_output_____
###Markdown
Step 6: Validate the embeddings against the imported schema
Runs the `stats_generator`, which is an instance of the `StatisticsGen` standard component. This component accepts the output `Examples` artifact from the `embeddings_exporter` step and computes descriptive statistics for these examples by using an Apache Beam pipeline. The component produces a `Statistics` artifact as an output.
###Code
stats_generator = tfx.components.StatisticsGen(
examples=embeddings_exporter.outputs.examples,
)
context.run(stats_generator)
###Output
_____no_output_____
###Markdown
Run the `stats_validator`, which is an instance of the `ExampleValidator` component. This component validates the output statistics against the schema. It accepts the `Statistics` artifact produced by the `stats_generator` step and the `Schema` artifact produced by the `schema_importer` step, and produces `Anomalies` artifacts as output if any anomalies are found.
###Code
stats_validator = tfx.components.ExampleValidator(
statistics=stats_generator.outputs.statistics,
schema=schema_importer.outputs.result,
)
context.run(stats_validator)
context.show(stats_validator.outputs.anomalies)
###Output
_____no_output_____
###Markdown
Step 7: Create an embedding lookup SavedModel
Runs the `embedding_lookup_creator` step, which is an instance of the `Trainer` standard component. This component accepts the `Schema` artifact from the `schema_importer` step and the`Examples` artifact from the `embeddings_exporter` step as inputs, executes the [lookup_creator.py](tfx_pipeline/lookup_creator.py) module, and produces an embedding lookup `Model` artifact as an output.
###Code
from tfx.components.base import executor_spec
from tfx.components.trainer import executor as trainer_executor
_module_file = "tfx_pipeline/lookup_creator.py"
embedding_lookup_creator = tfx.components.Trainer(
custom_executor_spec=executor_spec.ExecutorClassSpec(
trainer_executor.GenericExecutor
),
module_file=_module_file,
train_args={"splits": ["train"], "num_steps": 0},
eval_args={"splits": ["train"], "num_steps": 0},
schema=schema_importer.outputs.result,
examples=embeddings_exporter.outputs.examples,
)
context.run(embedding_lookup_creator)
###Output
_____no_output_____
###Markdown
Validate the lookup model
Use the [TFX InfraValidator](https://www.tensorflow.org/tfx/guide/infra_validator) to make sure the created model is mechanically fine and can be loaded successfully.
###Code
from tfx.proto import infra_validator_pb2
serving_config = infra_validator_pb2.ServingSpec(
tensorflow_serving=infra_validator_pb2.TensorFlowServing(tags=["latest"]),
local_docker=infra_validator_pb2.LocalDockerConfig(),
)
validation_config = infra_validator_pb2.ValidationSpec(
max_loading_time_seconds=60,
num_tries=3,
)
infra_validator = tfx.components.InfraValidator(
model=embedding_lookup_creator.outputs.model,
serving_spec=serving_config,
validation_spec=validation_config,
)
context.run(infra_validator)
tf.io.gfile.listdir(infra_validator.outputs.blessing.get()[0].uri)
###Output
_____no_output_____
###Markdown
Step 8: Push the embedding lookup model to the model registry
Run the `embedding_lookup_pusher` step, which is an instance of the `Pusher` standard component. This component accepts the embedding lookup `Model` artifact from the `embedding_lookup_creator` step, and stores the SavedModel in the location specified by the `MODEL_REGISTRY_DIR` variable.
###Code
embedding_lookup_pusher = tfx.components.Pusher(
model=embedding_lookup_creator.outputs.model,
infra_blessing=infra_validator.outputs.blessing,
push_destination=tfx.proto.pusher_pb2.PushDestination(
filesystem=tfx.proto.pusher_pb2.PushDestination.Filesystem(
base_directory=os.path.join(MODEL_REGISTRY_DIR, EMBEDDING_LOOKUP_MODEL_NAME)
)
),
)
context.run(embedding_lookup_pusher)
lookup_savedmodel_dir = embedding_lookup_pusher.outputs.pushed_model.get()[
0
].get_string_custom_property("pushed_destination")
!saved_model_cli show --dir {lookup_savedmodel_dir} --tag_set serve --signature_def serving_default
loaded_model = tf.saved_model.load(lookup_savedmodel_dir)
vocab = [
token.strip()
for token in tf.io.gfile.GFile(
loaded_model.vocabulary_file.asset_path.numpy().decode(), "r"
).readlines()
]
input_items = [vocab[0], " ".join([vocab[1], vocab[2]]), "abc123"]
print(input_items)
output = loaded_model(input_items)
print(f"Embeddings retrieved: {len(output)}")
for idx, embedding in enumerate(output):
print(f"{input_items[idx]}: {embedding[:5]}")
###Output
_____no_output_____
###Markdown
Step 9: Build the ScaNN index
Run the `scann_indexer` step, which is an instance of the `Trainer` standard component. This component accepts the `Schema` artifact from the `schema_importer` step and the `Examples` artifact from the `embeddings_exporter` step as inputs, executes the [scann_indexer.py](tfx_pipeline/scann_indexer.py) module, and produces the ScaNN index `Model` artifact as an output.
###Code
from tfx.components.base import executor_spec
from tfx.components.trainer import executor as trainer_executor
_module_file = "tfx_pipeline/scann_indexer.py"
scann_indexer = tfx.components.Trainer(
custom_executor_spec=executor_spec.ExecutorClassSpec(
trainer_executor.GenericExecutor
),
module_file=_module_file,
train_args={"splits": ["train"], "num_steps": 0},
eval_args={"splits": ["train"], "num_steps": 0},
schema=schema_importer.outputs.result,
examples=embeddings_exporter.outputs.examples,
)
context.run(scann_indexer)
###Output
_____no_output_____
###Markdown
Step 10: Evaluate and validate the ScaNN index
Runs the `index_evaluator` step, which is an instance of the `IndexEvaluator` custom TFX component. This component accepts the `Examples` artifact from the `embeddings_exporter` step, the `Schema` artifact from the `schema_importer` step, and the ScaNN index `Model` artifact from the `scann_indexer` step. The IndexEvaluator component completes the following tasks:
+ Uses the schema to parse the embedding records.
+ Evaluates the matching latency of the index.
+ Compares the recall of the produced matches with respect to the exact matches.
+ Validates the latency and recall against the `max_latency` and `min_recall` input parameters.
When it is finished, it produces a `ModelBlessing` artifact as output, which indicates whether the ScaNN index passed the validation criteria or not.
The IndexEvaluator custom component is implemented in the [tfx_pipeline/scann_evaluator.py](tfx_pipeline/scann_evaluator.py) module.
###Code
from tfx_pipeline import scann_evaluator
index_evaluator = scann_evaluator.IndexEvaluator(
examples=embeddings_exporter.outputs.examples,
model=scann_indexer.outputs.model,
schema=schema_importer.outputs.result,
min_recall=0.8,
max_latency=0.01,
)
context.run(index_evaluator)
###Output
_____no_output_____
###Markdown
Step 11: Push the ScaNN index to the model registry
Runs the `embedding_scann_pusher` step, which is an instance of the `Pusher` standard component. This component accepts the ScaNN index `Model` artifact from the `scann_indexer` step and the `ModelBlessing` artifact from the `index_evaluator` step, and stores the SavedModel in the location specified by the `MODEL_REGISTRY_DIR` variable.
###Code
embedding_scann_pusher = tfx.components.Pusher(
model=scann_indexer.outputs.model,
model_blessing=index_evaluator.outputs.blessing,
push_destination=tfx.proto.pusher_pb2.PushDestination(
filesystem=tfx.proto.pusher_pb2.PushDestination.Filesystem(
base_directory=os.path.join(MODEL_REGISTRY_DIR, SCANN_INDEX_MODEL_NAME)
)
),
)
context.run(embedding_scann_pusher)
from index_server.matching import ScaNNMatcher
scann_index_dir = embedding_scann_pusher.outputs.pushed_model.get()[
0
].get_string_custom_property("pushed_destination")
scann_matcher = ScaNNMatcher(scann_index_dir)
vector = np.random.rand(50)
scann_matcher.match(vector, 5)
###Output
_____no_output_____
###Markdown
Check the local MLMD store
###Code
mlmd_store.get_artifacts()
###Output
_____no_output_____
###Markdown
View the model registry directory
###Code
!gsutil ls {MODEL_REGISTRY_DIR}
###Output
_____no_output_____
|
examples/training_barlow_twins_iwang.ipynb
|
###Markdown
Barlow Twins Demo https://arxiv.org/pdf/2103.03230.pdf **Note:** This notebook demonstrates self-supervised training on a single GPU with a Barlow Twins callback, built in the same style as the library's `SimCLR` callback. For the distributed variant, `DistributedSimCLR`, check the library documentation. First import **fastai** for training and other helpers; you can choose not to use **wandb** by setting `WANDB=False`.
###Code
from fastai.vision.all import *
from fastai.callback.wandb import WandbCallback
import wandb
torch.backends.cudnn.benchmark = True
WANDB = False
###Output
_____no_output_____
###Markdown
Then import **self_supervised** `augmentations` module for creating augmentations pipeline, `layers` module for creating encoder and model, and finally `simclr` for self-supervised training.
###Code
from self_supervised.augmentations import *
from self_supervised.layers import *
# from self_supervised.vision.simclr import *
###Output
_____no_output_____
###Markdown
In this notebook we will take a look at the [ImageWang](https://github.com/fastai/imagenette#image%E7%BD%91) benchmark, how to train a self-supervised model using the Barlow Twins algorithm, and then how to use this pretrained model for fine-tuning on the given downstream task. Pretraining
###Code
def get_dls(size, bs, workers=None):
path = URLs.IMAGEWANG_160 if size <= 160 else URLs.IMAGEWANG
source = untar_data(path)
files = get_image_files(source)
tfms = [[PILImage.create, ToTensor, RandomResizedCrop(size, min_scale=1.)],
[parent_label, Categorize()]]
dsets = Datasets(files, tfms=tfms, splits=RandomSplitter(valid_pct=0.1)(files))
batch_tfms = [IntToFloatTensor]
dls = dsets.dataloaders(bs=bs, num_workers=workers, after_batch=batch_tfms)
return dls
###Output
_____no_output_____
###Markdown
ImageWang has several benchmarks for different image sizes; in this tutorial we will go for `size=224` and also demonstrate how to utilize GPU memory effectively. Define the batch size, the resize resolution before batching, and the size for random cropping during self-supervised training. It's always good to use a batch size as high as the GPU memory can fit.
###Code
# bs, resize, size = 256, 256, 224
###Output
_____no_output_____
###Markdown
Select the architecture to train on; remember, all **timm** and **fastai** models are available! We need to set `pretrained=False` here because using ImageNet weights for ImageWang data would be cheating.
###Code
# arch = "xresnet34"
# encoder = create_encoder(arch, pretrained=False, n_in=3)
# if WANDB:
# xtra_config = {"Arch":arch, "Resize":resize, "Size":size, "Algorithm":"Barlow-Twins"}
# wandb.init(project="self-supervised-imagewang", config=xtra_config);
###Output
_____no_output_____
###Markdown
Initialize the Dataloaders using the function above.
###Code
# dls = get_dls(resize, bs)
###Output
_____no_output_____
###Markdown
Create the Barlow Twins model. You can change the values of `hidden_size`, `projection_size`, and `n_layers`. For this problem, the defaults work just fine, so we don't make any changes.
###Code
#export
class BarlowTwinsModel(Module):
"An encoder followed by a projector"
def __init__(self,encoder,projector): self.encoder,self.projector = encoder,projector
def forward(self,x): return self.projector(self.encoder(x))
#export
def create_barlow_twins_model(encoder, hidden_size=256, projection_size=128):
"Create SimCLR model"
n_in = in_channels(encoder)
with torch.no_grad(): representation = encoder(torch.randn((2,n_in,128,128)))
projector = create_mlp_module(representation.size(1), hidden_size, projection_size, bn=True, nlayers=3)
apply_init(projector)
return BarlowTwinsModel(encoder, projector)
# model = create_barlow_twins_model(encoder, hidden_size=768, projection_size=768)
# model.projector
###Output
_____no_output_____
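###Markdown
As a quick sanity check, here is a minimal sketch that builds a small encoder, wraps it with `create_barlow_twins_model`, and runs a dummy forward pass to confirm the projector output size. The `"xresnet18"` architecture and the dummy batch shape are illustrative assumptions, not the values used in the benchmark run below.
###Code
# Minimal sketch: build an encoder, wrap it in a Barlow Twins model, and check the output shape.
# The architecture name and the dummy batch/image sizes are illustrative assumptions.
enc = create_encoder("xresnet18", pretrained=False, n_in=3)
bt = create_barlow_twins_model(enc, hidden_size=512, projection_size=512)
with torch.no_grad(): out = bt(torch.randn(4, 3, 128, 128))
print(out.shape) # expected: torch.Size([4, 512])
###Output
_____no_output_____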
###Markdown
Next step is perhaps the most critical step for achieving good results on a custom problem - data augmentation. For this, we will use the utility function `self_supervised.augmentations.get_multi_aug_pipelines`, but you can also use your own list of `Pipeline` augmentations. It should be enough for most cases, since under the hood it uses `self_supervised.augmentations.get_batch_augs`. You can do shift+tab and see all the arguments that can be passed to `get_multi_aug_pipelines`; you can simply pass anything that you could pass to `get_batch_augs`, including custom `xtra_tfms`. `get_multi_aug_pipelines` expects `n` (the number of views) and `size` for random resized cropping of the 2 views of a given image, and the rest of the arguments are forwarded to `get_batch_augs()`.
###Code
# aug_pipelines = get_multi_aug_pipelines(n=2, size=size, rotate=True,
# rotate_deg=10, jitter=True, bw=True, blur=False)
# aug_pipelines
###Output
_____no_output_____
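###Markdown
For reference, an uncommented minimal sketch of building the 2-view pipelines; the argument values simply mirror the commented cell above and should be tuned for your own data.
###Code
# Sketch: build two augmentation pipelines (one per view) for 224x224 random resized crops.
# Argument values mirror the commented cell above and are only an example.
aug_pipelines = get_multi_aug_pipelines(n=2, size=224, rotate=True, rotate_deg=10, jitter=True, bw=True, blur=False)
for p in aug_pipelines: print(p)
###Output
_____no_output_____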
###Markdown
Here, we will feed the augmentation pipelines and leave the `lmb` parameter at its default.
###Code
# pred = torch.randn(32,16)
# bs,nf = pred.size(0)//2,pred.size(1)
# I = torch.eye(nf)
# z1, z2 = pred[:bs],pred[bs:]
# z1norm = (z1 - z1.mean(0)) / z1.std(0, unbiased=False)
# z2norm = (z2 - z2.mean(0)) / z2.std(0, unbiased=False)
# C = (z1norm.T @ z2norm) / bs
# # cdiff = (C - I)**2
# # loss = (cdiff*I + cdiff*(1-I)*lmb).sum()
# C.max(), C.min()
#export
class BarlowTwins(Callback):
order,run_valid = 9,True
def __init__(self, aug_pipelines, lmb=5e-3, print_augs=False):
assert_aug_pipelines(aug_pipelines)
self.aug1, self.aug2 = aug_pipelines
if print_augs: print(self.aug1), print(self.aug2)
store_attr('lmb')
def before_fit(self):
self.learn.loss_func = self.lf
nf = self.learn.model.projector[-1].out_features
self.I = torch.eye(nf).to(self.dls.device)
def before_batch(self):
xi,xj = self.aug1(self.x), self.aug2(self.x)
self.learn.xb = (torch.cat([xi, xj]),)
def lf(self, pred, *yb):
bs,nf = pred.size(0)//2,pred.size(1)
z1, z2 = pred[:bs],pred[bs:]
z1norm = (z1 - z1.mean(0)) / z1.std(0, unbiased=False)
z2norm = (z2 - z2.mean(0)) / z2.std(0, unbiased=False)
C = (z1norm.T @ z2norm) / bs
cdiff = (C - self.I)**2
loss = (cdiff*self.I + cdiff*(1-self.I)*self.lmb).sum()
return loss
@torch.no_grad()
def show(self, n=1):
bs = self.learn.x.size(0)//2
x1,x2 = self.learn.x[:bs], self.learn.x[bs:]
idxs = np.random.choice(range(bs),n,False)
x1 = self.aug1.decode(x1[idxs].to('cpu').clone()).clamp(0,1)
x2 = self.aug2.decode(x2[idxs].to('cpu').clone()).clamp(0,1)
images = []
for i in range(n): images += [x1[i],x2[i]]
return show_batch(x1[0], None, images, max_n=len(images), nrows=n)
# cbs=[BarlowTwins(aug_pipelines, lmb=0.1)]
# if WANDB: cbs += [WandbCallback(log_preds=False,log_model=False)]
# learn = Learner(dls, model, cbs=cbs)
###Output
_____no_output_____
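###Markdown
To make the loss concrete, here is a standalone sketch that mirrors the `lf` method of the `BarlowTwins` callback on random embeddings; the tensor shape and `lmb` value are arbitrary example values.
###Code
# Standalone sketch of the Barlow Twins loss on random projected embeddings.
pred = torch.randn(64, 32)                    # a batch holding 2*bs embeddings of size nf
bs, nf, lmb = pred.size(0)//2, pred.size(1), 5e-3
I = torch.eye(nf)
z1, z2 = pred[:bs], pred[bs:]
z1norm = (z1 - z1.mean(0)) / z1.std(0, unbiased=False)
z2norm = (z2 - z2.mean(0)) / z2.std(0, unbiased=False)
C = (z1norm.T @ z2norm) / bs                  # nf x nf cross-correlation matrix
cdiff = (C - I)**2
loss = (cdiff*I + cdiff*(1-I)*lmb).sum()      # invariance term + lmb * redundancy-reduction term
print(loss)
###Output
_____no_output_____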
###Markdown
Before starting training, let's check whether our augmentations make sense or not. Since this step consumes GPU memory, once you are done with the inspection, restart the notebook and skip this step. We can see the 2 views of the same image side by side, and indeed the augmentations look pretty good. Now, it's time to restart the notebook and skip this step.
###Code
# b = dls.one_batch()
# learn._split(b)
# learn('before_batch')
# learn.sim_clr.show(n=5);
###Output
_____no_output_____
###Markdown
Use mixed precision with `to_fp16()` to save GPU memory, allow a larger batch size, and speed up training. We could also use the gradient checkpointing wrapper models from `self_supervised.layers` to save even more memory, e.g. `CheckpointSequential()`.
###Code
# learn.to_fp16();
# learn.lr_find()
###Output
_____no_output_____
###Markdown
Learning good representations via contrastive learning usually takes a lot of epochs. So here the number of epochs is set to 100. This might change depending on your data distribution and dataset size.
###Code
# lr,wd,epochs=1e-2,1e-2,100
# learn.unfreeze()
# learn.fit_flat_cos(epochs, lr, wd=wd, pct_start=0.5)
# if WANDB: wandb.finish()
###Output
_____no_output_____
###Markdown
Search Best Lambda
###Code
import gc
bs, resize, size = 128, 256, 224
lr,wd,epochs=1e-2, 1e-2, 100
WANDB = True
arch = "xresnet34"
lmb = 5e-3
nhidden = 1024
for bs in (64,128):
if WANDB:
xtra_config = {"Arch":arch, "Resize":resize, "Size":size, "Algorithm":"Barlow-Twins"}
wandb.init(project="self-supervised-imagewang", config=xtra_config)
dls = get_dls(resize, bs)
encoder = create_encoder(arch, pretrained=False, n_in=3)
model = create_barlow_twins_model(encoder, hidden_size=nhidden, projection_size=nhidden)
aug_pipelines = get_multi_aug_pipelines(n=2, size=size, rotate=True,
rotate_deg=10, jitter=True, bw=True,
blur=True, blur_s=(4,8), blur_p=0.2)
cbs=[BarlowTwins(aug_pipelines, lmb=lmb)]
if WANDB: cbs += [WandbCallback(log_preds=False,log_model=False)]
learn = Learner(dls, model, cbs=cbs)
learn.to_fp16()
learn.fit_flat_cos(epochs, lr, wd=wd, pct_start=0.25)
save_name = f'btwins_iwang_sz{size}_epc{epochs}_lmb{lmb}_bs{bs}'
learn.save(save_name)
torch.save(learn.model.encoder.state_dict(), learn.path/learn.model_dir/f'{save_name}_encoder.pth')
if WANDB: wandb.finish()
del dls, learn
gc.collect()
###Output
Failed to query for notebook name, you can set it manually with the WANDB_NOTEBOOK_NAME environment variable
[34m[1mwandb[0m: Currently logged in as: [33mkeremturgutlu[0m (use `wandb login --relogin` to force relogin)
[34m[1mwandb[0m: wandb version 0.10.22 is available! To upgrade, please run:
[34m[1mwandb[0m: $ pip install wandb --upgrade
###Markdown
Downstream Task
###Code
optdict = dict(sqr_mom=0.99,mom=0.95,beta=0.,eps=1e-4)
opt_func = partial(ranger, **optdict)
bs, resize, size = 128, 256, 224
bs, size
def get_dls(size, bs, workers=None):
path = URLs.IMAGEWANG_160 if size <= 160 else URLs.IMAGEWANG
source = untar_data(path)
files = get_image_files(source, folders=['train', 'val'])
splits = GrandparentSplitter(valid_name='val')(files)
item_aug = [RandomResizedCrop(size, min_scale=0.35), FlipItem(0.5)]
tfms = [[PILImage.create, ToTensor, *item_aug],
[parent_label, Categorize()]]
dsets = Datasets(files, tfms=tfms, splits=splits)
batch_tfms = [IntToFloatTensor, Normalize.from_stats(*imagenet_stats)]
dls = dsets.dataloaders(bs=bs, num_workers=workers, after_batch=batch_tfms)
return dls
def split_func(m): return L(m[0], m[1]).map(params)
def create_learner(size=size, arch='xresnet34', encoder_path=f'models/btwins_iwang_sz224_epc100_lmb0.005_encoder'):
dls = get_dls(size, bs=bs//2)
pretrained_encoder = torch.load(encoder_path)
encoder = create_encoder(arch, pretrained=False, n_in=3)
encoder.load_state_dict(pretrained_encoder)
nf = encoder(torch.randn(2,3,224,224)).size(-1)
classifier = create_cls_module(nf, dls.c)
model = nn.Sequential(encoder, classifier)
learn = Learner(dls, model, opt_func=opt_func, splitter=split_func,
metrics=[accuracy,top_k_accuracy], loss_func=LabelSmoothingCrossEntropy())
return learn
def finetune(size, epochs, arch, encoder_path, lr=1e-2, wd=1e-2):
learn = create_learner(size, arch, encoder_path)
learn.unfreeze()
learn.fit_flat_cos(epochs, lr, wd=wd)
final_acc = learn.recorder.values[-1][-2]
return final_acc
###Output
_____no_output_____
###Markdown
5 epochs
###Code
runs = 5
lmb = 0.005
for bs in (64,128):
print(lmb)
finetune(size, epochs=5, arch='xresnet34', encoder_path=f'models/btwins_iwang_sz224_epc100_lmb{lmb}_bs{bs}_encoder.pth')
acc = []
runs = 5
for i in range(runs): acc += [finetune(size, epochs=5, arch='xresnet34', encoder_path=f'models/btwins_iwang_sz224_epc100_lmb0.005_encoder.pth')]
np.mean(acc)
###Output
_____no_output_____
###Markdown
20 epochs
###Code
acc = []
runs = 3
for i in range(runs): acc += [finetune(size, epochs=20, arch='xresnet34', encoder_path=f'models/simclr_iwang_sz{size}_epc100_encoder.pth')]
np.mean(acc)
###Output
_____no_output_____
###Markdown
80 epochs
###Code
acc = []
runs = 1
for i in range(runs): acc += [finetune(size, epochs=80, arch='xresnet34',encoder_path=f'models/simclr_iwang_sz{size}_epc100_encoder.pth')]
np.mean(acc)
###Output
_____no_output_____
###Markdown
200 epochs
###Code
acc = []
runs = 1
for i in range(runs): acc += [finetune(size, epochs=200, arch='xresnet34', encoder_path=f'models/simclr_iwang_sz{size}_epc100_encoder.pth')]
np.mean(acc)
###Output
_____no_output_____
|
tutorials/ReducedDensityMatrices.ipynb
|
###Markdown
Reduced Density Matrices in TequilaThis notebook serves as a tutorial to the computation and usage of the one- and two-particle reduced density matrices.
###Code
import tequila as tq
import numpy
###Output
_____no_output_____
###Markdown
The 1- and 2-RDMFirst, look at the definition of the reduced density matrices (RDM) for some state $ |\psi\rangle$:1-RDM: $ \gamma^p_q \equiv \langle \psi | a^p a_q | \psi\rangle$2-RDM $ \gamma^{pq}_{rs} \equiv \langle \psi | a^p a^q a_s a_r | \psi\rangle$ (we mainly use the standard physics ordering for the second-quantized operators, i.e. $p,r$ go with particle 1 and $q,s$ with particle 2)The operators $ a^p = a_p^\dagger $ and $a_p$ denote the standard fermionic creation and annihilation operators.Since we work on a quantum computer, $|\psi\rangle$ is represented by some unitary transformation $U$: $|\psi\rangle = U |0\rangle^{\otimes N_q}$, using $N_q$ qubits. This corresponds to $N_q$ spin-orbitals in Jordan-Wigner encoding. Obtaining the RDMs from a quantum computer is most intuitive when using the Jordan-Wigner transformation, since the results directly correspond to the ones computed classically in second quantized form.It is worth mentioning that since we only consider real orbitals in chemistry applications, the implementation also expects only real-valued RDM's.The well-known anticommutation relations yield a series of symmetry properties for the reduced density matrices, which can be taken into consideration to reduce the computational cost:\begin{align} \gamma^p_q &= \gamma^q_p \\ \gamma^{pq}_{rs} &= -\gamma^{qp}_{rs} = -\gamma^{pq}_{sr} = \gamma^{qp}_{sr} = \gamma^{rs}_{pq}\end{align}In chemistry applications, solving the electronic structure problem involves the electronic Hamiltonian (here in Born-Oppenheimer approximation)$$ H_{el} = h_0 + \sum_{pq} h^q_p a^p_q + \frac{1}{2}\sum_{pqrs} h^{rs}_{pq} a^{pq}_{rs}$$with the one- and two-body integrals $h^q_p, h^{rs}_{pq}$ that turn out to be independent of spin.Therefore, we introduce the spin-free RDMs $\Gamma^P_Q$ and $\Gamma^{PQ}_{RS}$, obtained by spin-summation (we write molecular orbitals in uppercase letters $P,Q,\ldots\in\{1,\ldots,N_p\}$ in opposite to spin-orbitals $p,q,\ldots\in\{1,\ldots,N_q\}$):\begin{align} \Gamma^P_Q &= \sum_{\sigma \in \{\alpha, \beta\}} \gamma^{p\sigma}_{q\sigma} = \langle \psi |\sum_{\sigma} a^{p\sigma} a_{q\sigma} | \psi\rangle \\ \Gamma^{PQ}_{RS} &= \sum_{\sigma,\tau \in \{\alpha, \beta\}} \gamma^{p\sigma q\tau}_{r\sigma s\tau} = \langle \psi | \sum_{\sigma,\tau} a^{p\sigma} a^{q\tau} a_{s\tau} a_{r\sigma} | \psi \rangle. \end{align} Note, that by making use of linearity, we obtain the second equality in the two expressions above. Performing the summation before evaluating the expected value means less expected values and a considerable reduction in computational cost (only $N_p=\frac{N_q}{2}$ molecular orbitals vs. $N_q$ spin-orbitals).Due to the orthogonality of the spin states, the symmetries for the spin-free 2-RDM are slightly less than for the spin-orbital RDM:\begin{align} \Gamma^P_Q &= \Gamma^Q_P\\ \Gamma^{PQ}_{RS} &= \Gamma^{QP}_{SR} = \Gamma^{RS}_{PQ} \end{align}
###Code
# As an example, let's use the Helium atom in a minimal basis
mol = tq.chemistry.Molecule(geometry='He 0.0 0.0 0.0', basis_set='6-31g')
# We want to get the 1- and 2-RDM for the (approximate) ground state of Helium
# For that, we (i) need to set up a unitary transformation U(angles)
# (ii) determine a set of angles using VQE s.th. U(angles) |0> = |psi>, where H|psi> = E_0|psi>
# (iii) compute the RDMs using compute_rdms
# (i) Set up a circuit
# This can be done either using the make_uccsd-method (see Chemistry-tutorial) or by a hand-written circuit
# We use a hand-written circuit here
U = tq.gates.X(target=0)
U += tq.gates.X(target=1)
U += tq.gates.Ry(target=3, control=0, angle='a1')
U += tq.gates.X(target=0)
U += tq.gates.X(target=1, control=3)
U += tq.gates.Ry(target=2, control=1, angle='a2')
U += tq.gates.X(target=1)
U += tq.gates.Ry(target=2, control=1, angle='a3')
U += tq.gates.X(target=1)
U += tq.gates.X(target=2)
U += tq.gates.X(target=0, control=2)
U += tq.gates.X(target=2)
# (ii) Run VQE
H = mol.make_hamiltonian()
O = tq.objective.objective.ExpectationValue(H=H, U=U)
result = tq.minimize(objective=O, method='bfgs')
# (iii) Using the optimal parameters out of VQE, we now have a circuit U_opt |0> ~ U|0> = |psi>
mol.compute_rdms(U=U, variables=result.angles, spin_free=True, get_rdm1=True, get_rdm2=True)
rdm1_spinfree, rdm2_spinfree = mol.rdm1, mol.rdm2
print('\nThe spin-free matrices:')
print('1-RDM:\n' + str(rdm1_spinfree))
print('2-RDM:\n' + str(rdm2_spinfree))
# Let's also get the spin-orbital rdm2
# We can select to only determine one of either matrix, but if both are needed at some point, it is
# more efficient to compute both within one call of compute_rdms
print('\nThe spin-ful matrices:')
mol.compute_rdms(U=U, variables=result.angles, spin_free=False, get_rdm1=False, get_rdm2=True)
rdm1_spin, rdm2_spin = mol.rdm1, mol.rdm2
print('1-RDM is None now: ' + str(rdm1_spin))
print('2-RDM has been determined:\n' + str(rdm2_spin))
# We can compute the 1-rdm still at a later point
mol.compute_rdms(U=U, variables=result.angles, spin_free=False, get_rdm1=True, get_rdm2=False)
rdm1_spin = mol.rdm1
print('1-RDM is also here now:\n' + str(rdm1_spin))
# To check consistency with the spin-free rdms, we can do spin-summation afterwards
# (again, if only the spin-free version is of interest, it is cheaper to get it right from compute_rdms)
rdm1_spinsum, rdm2_spinsum = mol.rdm_spinsum(sum_rdm1=True, sum_rdm2=True)
print('\nConsistency of spin summation:')
print('1-RDM: ' + str(numpy.allclose(rdm1_spinsum, rdm1_spinfree, atol=1e-10)))
print('2-RDM: ' + str(numpy.allclose(rdm2_spinsum, rdm2_spinfree, atol=1e-10)))
# We can also compute the RDMs using the psi4-interface.
# Then, psi4 is called to perform a CI-calculation, while collecting the 1- and 2-RDM
# Let's use full CI here, but other CI flavors work as well
mol.compute_rdms(psi4_method='fci')
rdm1_psi4, rdm2_psi4 = mol.rdm1, mol.rdm2
print('\nPsi4-RDMs:')
print('1-RDM:\n' + str(rdm1_psi4))
print('2-RDM:\n' + str(rdm2_psi4))
# Comparing the results to the VQE-matrices, we observe a close resemblance,
# also suggested by the obtained energies
fci_energy = mol.logs['fci'].variables['FCI TOTAL ENERGY']
vqe_energy = result.energy
print('\nFCI energy: ' + str(fci_energy))
print('VQE-Energy: ' + str(vqe_energy))
###Output
Psi4-RDMs:
1-RDM:
[[ 1.9913455 -0.00385072]
[-0.00385072 0.0086545 ]]
2-RDM:
[[[[ 1.99133696e+00 -4.12235350e-03]
[-4.12235350e-03 -1.31213704e-01]]
[[-4.12235350e-03 8.53386376e-06]
[ 8.53386376e-06 2.71631211e-04]]]
[[[-4.12235350e-03 8.53386376e-06]
[ 8.53386376e-06 2.71631211e-04]]
[[-1.31213704e-01 2.71631211e-04]
[ 2.71631211e-04 8.64596819e-03]]]]
FCI energy: -2.870162138900821
VQE-Energy: -2.870162072561385
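###Markdown
Before the formal consistency checks below, a small sketch verifying the symmetry properties stated above directly on the computed spin-free RDMs; it assumes the index convention `rdm2[p,q,r,s]` for $\Gamma^{PQ}_{RS}$ (physics ordering, as used throughout this notebook).
###Code
# Sketch: check the stated RDM symmetries numerically on the spin-free matrices from above.
print(numpy.allclose(rdm1_spinfree, rdm1_spinfree.T))                      # 1-RDM: symmetric
print(numpy.allclose(rdm2_spinfree, rdm2_spinfree.transpose(1, 0, 3, 2)))  # 2-RDM: simultaneous particle exchange
print(numpy.allclose(rdm2_spinfree, rdm2_spinfree.transpose(2, 3, 0, 1)))  # 2-RDM: upper/lower index exchange
###Output
_____no_output_____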
###Markdown
Consistency checksAt this point, we can make a few consistency checks.We can validate the trace condition for the 1- and 2-RDM:\begin{align}\mathrm{tr}(\mathbf{\Gamma}_m)&=N!/(N-m)!\\ \mathrm{tr} (\mathbf{\Gamma}_1) &= \sum_P \Gamma^P_P = N \\ \mathrm{tr} (\mathbf{\Gamma}_2) &= \sum_{PQ} \Gamma^{PQ}_{PQ} = N(N-1), \end{align}$N$ describes the number of particles involved, i.e. in our case using a minimal basis this corresponds to $N_p$ above. For the Helium atom in Born-Oppenheimer approximation, $N_p=2$.In the literature, one can also find the $m$-particle reduced density matrices normalized by a factor $1/m!$, which in that case would be inherited by the trace conditions.Also, the (in our case, as we use the wavefunction from VQE, ground-state) energy can be computed by\begin{equation} E = \langle H_{el} \rangle = h_0 + \sum_{PQ} h^Q_P \Gamma^P_Q + \frac{1}{2}\sum_{PQRS} h^{RS}_{PQ} \Gamma^{PQ}_{RS}, \end{equation}where $h_0$ denotes the nuclear repulsion energy, which is 0 for Helium anyways.Note, that the expressions above also hold true for the spin-RDMs, given that the one- and two-body integrals are available in spin-orbital basis.
###Code
# Computation of consistency checks
#todo: normalization of rdm2 *= 1/2
# Trace
tr1_spin = numpy.einsum('pp', rdm1_spin, optimize='greedy')
tr1_spinfree = numpy.einsum('pp', rdm1_spinfree, optimize='greedy')
tr2_spin = numpy.einsum('pqpq', rdm2_spin, optimize='greedy')
tr2_spinfree = numpy.einsum('pqpq', rdm2_spinfree, optimize='greedy')
print("1-RDM: N_true = 2, N_spin = " + str(tr1_spin) + ", N_spinfree = " + str(tr1_spinfree)+".")
print("2-RDM: N*(N-1)_true = 2, spin = " + str(tr2_spin) + ", spinfree = " + str(tr2_spinfree)+".")
# Energy
# Get molecular integrals
h0 = mol.molecule.nuclear_repulsion
print("h0 is zero: " + str(h0))
h1 = mol.molecule.one_body_integrals
h2 = mol.molecule.two_body_integrals
# Reorder two-body-integrals according to physics convention
h2 = tq.chemistry.qc_base.NBodyTensor(elems=h2, ordering='openfermion')
h2.reorder(to='phys')
h2 = h2.elems
# Compute energy
rdm_energy = numpy.einsum('qp, pq', h1, rdm1_spinfree, optimize='greedy') + 1/2*numpy.einsum('rspq, pqrs', h2, rdm2_spinfree, optimize='greedy')
print('\nVQE-Energy is: ' + str(vqe_energy))
print('RDM-energy matches: ' + str(rdm_energy))
###Output
1-RDM: N_true = 2, N_spin = 2.0000000000000004, N_spinfree = 2.0000000000000004.
2-RDM: N*(N-1)_true = 2, spin = 2.0000000000000004, spinfree = 2.000000000000001.
h0 is zero: 0.0
VQE-Energy is: -2.870162072561385
RDM-energy matches: -2.870162072561384
###Markdown
Reduced Density Matrices in TequilaThis notebook serves as a tutorial to the computation and usage of the one- and two-particle reduced density matrices.
###Code
import tequila as tq
import numpy
###Output
_____no_output_____
###Markdown
The 1- and 2-RDMFirst, look at the definition of the reduced density matrices (RDM) for some state $ |\psi\rangle$:1-RDM: $ \gamma^p_q \equiv \langle \psi | a^p a_q | \psi\rangle$2-RDM $ \gamma^{pq}_{rs} \equiv \langle \psi | a^p a^q a_s a_r | \psi\rangle$ (we mainly use the standard physics ordering for the second-quantized operators, i.e. $p,r$ go with particle 1 and $q,s$ with particle 2)The operators $ a^p = a_p^\dagger $ and $a_p$ denote the standard fermionic creation and annihilation operators.Since we work on a quantum computer, $|\psi\rangle$ is represented by some unitary transformation $U$: $|\psi\rangle = U |0\rangle^{\otimes N_q}$, using $N_q$ qubits. This corresponds to $N_q$ spin-orbitals in Jordan-Wigner encoding. Obtaining the RDMs from a quantum computer is most intuitive when using the Jordan-Wigner transformation, since the results directly correspond to the ones computed classically in second quantized form.It is worth mentioning that since we only consider real orbitals in chemistry applications, the implementation also expects only real-valued RDM's.The well-known anticommutation relations yield a series of symmetry properties for the reduced density matrices, which can be taken into consideration to reduce the computational cost:\begin{align} \gamma^p_q &= \gamma^q_p \\ \gamma^{pq}_{rs} &= -\gamma^{qp}_{rs} = -\gamma^{pq}_{sr} = \gamma^{qp}_{sr} = \gamma^{rs}_{pq}\end{align}In chemistry applications, solving the electronic structure problem involves the electronic Hamiltonian (here in Born-Oppenheimer approximation)$$ H_{el} = h_0 + \sum_{pq} h^q_p a^p_q + \frac{1}{2}\sum_{pqrs} h^{rs}_{pq} a^{pq}_{rs}$$with the one- and two-body integrals $h^q_p, h^{rs}_{pq}$ that turn out to be independent of spin.Therefore, we introduce the spin-free RDMs $\Gamma^P_Q$ and $\Gamma^{PQ}_{RS}$, obtained by spin-summation (we write molecular orbitals in uppercase letters $P,Q,\ldots\in\{1,\ldots,N_p\}$ in opposite to spin-orbitals $p,q,\ldots\in\{1,\ldots,N_q\}$):\begin{align} \Gamma^P_Q &= \sum_{\sigma \in \{\alpha, \beta\}} \gamma^{p\sigma}_{q\sigma} = \langle \psi |\sum_{\sigma} a^{p\sigma} a_{q\sigma} | \psi\rangle \\ \Gamma^{PQ}_{RS} &= \sum_{\sigma,\tau \in \{\alpha, \beta\}} \gamma^{p\sigma q\tau}_{r\sigma s\tau} = \langle \psi | \sum_{\sigma,\tau} a^{p\sigma} a^{q\tau} a_{s\tau} a_{r\sigma} | \psi \rangle. \end{align} Note, that by making use of linearity, we obtain the second equality in the two expressions above. Performing the summation before evaluating the expected value means less expected values and a considerable reduction in computational cost (only $N_p=\frac{N_q}{2}$ molecular orbitals vs. $N_q$ spin-orbitals).Due to the orthogonality of the spin states, the symmetries for the spin-free 2-RDM are slightly less than for the spin-orbital RDM:\begin{align} \Gamma^P_Q &= \Gamma^Q_P\\ \Gamma^{PQ}_{RS} &= \Gamma^{QP}_{SR} = \Gamma^{RS}_{PQ} \end{align}
###Code
# As an example, let's use the Helium atom in a minimal basis
mol = tq.chemistry.Molecule(geometry='He 0.0 0.0 0.0', basis_set='6-31g')
# We want to get the 1- and 2-RDM for the (approximate) ground state of Helium
# For that, we (i) need to set up a unitary transformation U(angles)
# (ii) determine a set of angles using VQE s.th. U(angles) |0> = |psi>, where H|psi> = E_0|psi>
# (iii) compute the RDMs using compute_rdms
# (i) Set up a circuit
# This can be done either using the make_uccsd-method (see Chemistry-tutorial) or by a hand-written circuit
# We use a hand-written circuit here
U = tq.gates.X(target=0)
U += tq.gates.X(target=1)
U += tq.gates.Ry(target=3, control=0, angle='a1')
U += tq.gates.X(target=0)
U += tq.gates.X(target=1, control=3)
U += tq.gates.Ry(target=2, control=1, angle='a2')
U += tq.gates.X(target=1)
U += tq.gates.Ry(target=2, control=1, angle='a3')
U += tq.gates.X(target=1)
U += tq.gates.X(target=2)
U += tq.gates.X(target=0, control=2)
U += tq.gates.X(target=2)
# (ii) Run VQE
H = mol.make_hamiltonian()
O = tq.objective.objective.ExpectationValue(H=H, U=U)
result = tq.minimize(objective=O, method='bfgs')
# (iii) Using the optimal parameters out of VQE, we now have a circuit U_opt |0> ~ U|0> = |psi>
mol.compute_rdms(U=U, variables=result.angles, spin_free=True, get_rdm1=True, get_rdm2=True)
rdm1_spinfree, rdm2_spinfree = mol.rdm1, mol.rdm2
print('\nThe spin-free matrices:')
print('1-RDM:\n' + str(rdm1_spinfree))
print('2-RDM:\n' + str(rdm2_spinfree))
# Let's also get the spin-orbital rdm2
# We can select to only determine one of either matrix, but if both are needed at some point, it is
# more efficient to compute both within one call of compute_rdms
print('\nThe spin-ful matrices:')
mol.compute_rdms(U=U, variables=result.angles, spin_free=False, get_rdm1=False, get_rdm2=True)
rdm1_spin, rdm2_spin = mol.rdm1, mol.rdm2
print('1-RDM is None now: ' + str(rdm1_spin))
print('2-RDM has been determined:\n' + str(rdm2_spin))
# We can compute the 1-rdm still at a later point
mol.compute_rdms(U=U, variables=result.angles, spin_free=False, get_rdm1=True, get_rdm2=False)
rdm1_spin = mol.rdm1
print('1-RDM is also here now:\n' + str(rdm1_spin))
# To check consistency with the spin-free rdms, we can do spin-summation afterwards
# (again, if only the spin-free version is of interest, it is cheaper to get it right from compute_rdms)
rdm1_spinsum, rdm2_spinsum = mol.rdm_spinsum(sum_rdm1=True, sum_rdm2=True)
print('\nConsistency of spin summation:')
print('1-RDM: ' + str(numpy.allclose(rdm1_spinsum, rdm1_spinfree, atol=1e-10)))
print('2-RDM: ' + str(numpy.allclose(rdm2_spinsum, rdm2_spinfree, atol=1e-10)))
# We can also compute the RDMs using the psi4-interface.
# Then, psi4 is called to perform a CI-calculation, while collecting the 1- and 2-RDM
# Let's use full CI here, but other CI flavors work as well
mol.compute_rdms(psi4_method='fci')
rdm1_psi4, rdm2_psi4 = mol.rdm1, mol.rdm2
print('\nPsi4-RDMs:')
print('1-RDM:\n' + str(rdm1_psi4))
print('2-RDM:\n' + str(rdm2_psi4))
# Comparing the results to the VQE-matrices, we observe a close resemblance,
# also suggested by the obtained energies
fci_energy = mol.logs['fci'].variables['FCI TOTAL ENERGY']
vqe_energy = result.energy
print('\nFCI energy: ' + str(fci_energy))
print('VQE-Energy: ' + str(vqe_energy))
###Output
Psi4-RDMs:
1-RDM:
[[ 1.9913455 -0.00385072]
[-0.00385072 0.0086545 ]]
2-RDM:
[[[[ 1.99133696e+00 -4.12235350e-03]
[-4.12235350e-03 -1.31213704e-01]]
[[-4.12235350e-03 8.53386376e-06]
[ 8.53386376e-06 2.71631211e-04]]]
[[[-4.12235350e-03 8.53386376e-06]
[ 8.53386376e-06 2.71631211e-04]]
[[-1.31213704e-01 2.71631211e-04]
[ 2.71631211e-04 8.64596819e-03]]]]
FCI energy: -2.870162138900819
VQE-Energy: -2.870161198619411
###Markdown
Consistency checksAt this point, we can make a few consistency checks.We can validate the trace condition for the 1- and 2-RDM:\begin{align}\mathrm{tr}(\mathbf{\Gamma}_m)&=N!/(N-m)!\\ \mathrm{tr} (\mathbf{\Gamma}_1) &= \sum_P \Gamma^P_P = N \\ \mathrm{tr} (\mathbf{\Gamma}_2) &= \sum_{PQ} \Gamma^{PQ}_{PQ} = N(N-1), \end{align}$N$ describes the number of particles involved, i.e. in our case using a minimal basis this corresponds to $N_p$ above. For the Helium atom in Born-Oppenheimer approximation, $N_p=2$.In the literature, one can also find the $m$-particle reduced density matrices normalized by a factor $1/m!$, which in that case would be inherited by the trace conditions.Also, the (in our case, as we use the wavefunction from VQE, ground-state) energy can be computed by\begin{equation} E = \langle H_{el} \rangle = h_0 + \sum_{PQ} h^Q_P \Gamma^P_Q + \frac{1}{2}\sum_{PQRS} h^{RS}_{PQ} \Gamma^{PQ}_{RS}, \end{equation}where $h_0$ denotes the nuclear repulsion energy, which is 0 for Helium anyways.Note, that the expressions above also hold true for the spin-RDMs, given that the one- and two-body integrals are available in spin-orbital basis.
###Code
# Computation of consistency checks
#todo: normalization of rdm2 *= 1/2
# Trace
tr1_spin = numpy.einsum('pp', rdm1_spin, optimize='greedy')
tr1_spinfree = numpy.einsum('pp', rdm1_spinfree, optimize='greedy')
tr2_spin = numpy.einsum('pqpq', rdm2_spin, optimize='greedy')
tr2_spinfree = numpy.einsum('pqpq', rdm2_spinfree, optimize='greedy')
print("1-RDM: N_true = 2, N_spin = " + str(tr1_spin) + ", N_spinfree = " + str(tr1_spinfree)+".")
print("2-RDM: N*(N-1)_true = 2, spin = " + str(tr2_spin) + ", spinfree = " + str(tr2_spinfree)+".")
# Energy
# Get molecular integrals
h0 = mol.molecule.nuclear_repulsion
print("h0 is zero: " + str(h0))
h1 = mol.molecule.one_body_integrals
h2 = mol.molecule.two_body_integrals
# Reorder two-body-integrals according to physics convention
h2 = tq.chemistry.qc_base.NBodyTensor(elems=h2, scheme='openfermion')
h2.reorder(to='phys')
h2 = h2.elems
# Compute energy
rdm_energy = numpy.einsum('qp, pq', h1, rdm1_spinfree, optimize='greedy') + 1/2*numpy.einsum('rspq, pqrs', h2, rdm2_spinfree, optimize='greedy')
print('\nVQE-Energy is: ' + str(vqe_energy))
print('RDM-energy matches: ' + str(rdm_energy))
###Output
1-RDM: N_true = 2, N_spin = 2.0000000000000004, N_spinfree = 2.0000000000000004.
2-RDM: N*(N-1)_true = 2, spin = 2.0000000000000004, spinfree = 2.000000000000001.
h0 is zero: 0.0
VQE-Energy is: -2.870161198619411
RDM-energy matches: -2.870161198619413
|
Copy of Predicting Movie Reviews with BERT on TF Hub.ipynb
|
###Markdown
Predicting Movie Review Sentiment with BERT on TF Hub If you’ve been following Natural Language Processing over the past year, you’ve probably heard of BERT: Bidirectional Encoder Representations from Transformers. It’s a neural network architecture designed by Google researchers that’s totally transformed what’s state-of-the-art for NLP tasks, like text classification, translation, summarization, and question answering.Now that BERT's been added to [TF Hub](https://www.tensorflow.org/hub) as a loadable module, it's easy(ish) to add into existing Tensorflow text pipelines. In an existing pipeline, BERT can replace text embedding layers like ELMO and GloVE. Alternatively, [finetuning](http://wiki.fast.ai/index.php/Fine_tuning) BERT can provide both an accuracy boost and faster training time in many cases.Here, we'll train a model to predict whether an IMDB movie review is positive or negative using BERT in Tensorflow with tf hub. Some code was adapted from [this colab notebook](https://colab.sandbox.google.com/github/tensorflow/tpu/blob/master/tools/colab/bert_finetuning_with_cloud_tpus.ipynb). Let's get started!
###Code
# %tensorflow_version 1.x
from sklearn.model_selection import train_test_split
import pandas as pd
import tensorflow as tf
import tensorflow_hub as hub
from datetime import datetime
###Output
_____no_output_____
###Markdown
In addition to the standard libraries we imported above, we'll need to install BERT's python package.
###Code
!pip install bert-tensorflow
import bert
from bert import run_classifier
from bert import optimization
from bert import tokenization
###Output
WARNING:tensorflow:From /home/sahand/anaconda3/envs/tf-1/lib/python3.7/site-packages/bert/optimization.py:87: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.
###Markdown
Below, we'll set an output directory location to store our model output and checkpoints. This can be a local directory, in which case you'd set OUTPUT_DIR to the name of the directory you'd like to create. If you're running this code in Google's hosted Colab, the directory won't persist after the Colab session ends.Alternatively, if you're a GCP user, you can store output in a GCP bucket. To do that, set a directory name in OUTPUT_DIR and the name of the GCP bucket in the BUCKET field.Set DO_DELETE to rewrite the OUTPUT_DIR if it exists. Otherwise, Tensorflow will load existing model checkpoints from that directory (if they exist).
###Code
# Set the output directory for saving model file
# Optionally, set a GCP bucket location
OUTPUT_DIR = 'output_dir'#@param {type:"string"}
#@markdown Whether or not to clear/delete the directory and create a new one
DO_DELETE = False #@param {type:"boolean"}
#@markdown Set USE_BUCKET and BUCKET if you want to (optionally) store model output on GCP bucket.
USE_BUCKET = False #@param {type:"boolean"}
BUCKET = 'BUCKET_NAME' #@param {type:"string"}
if USE_BUCKET:
OUTPUT_DIR = 'gs://{}/{}'.format(BUCKET, OUTPUT_DIR)
from google.colab import auth
auth.authenticate_user()
if DO_DELETE:
try:
tf.gfile.DeleteRecursively(OUTPUT_DIR)
except:
# Doesn't matter if the directory didn't exist
pass
tf.gfile.MakeDirs(OUTPUT_DIR)
print('***** Model output directory: {} *****'.format(OUTPUT_DIR))
###Output
***** Model output directory: output_dir *****
###Markdown
Data First, let's download the dataset, hosted by Stanford. The code below, which downloads, extracts, and imports the IMDB Large Movie Review Dataset, is borrowed from [this Tensorflow tutorial](https://www.tensorflow.org/hub/tutorials/text_classification_with_tf_hub).
###Code
from tensorflow import keras
import os
import re
# Load all files from a directory in a DataFrame.
def load_directory_data(directory):
data = {}
data["sentence"] = []
data["sentiment"] = []
for file_path in os.listdir(directory):
with tf.gfile.GFile(os.path.join(directory, file_path), "r") as f:
data["sentence"].append(f.read())
data["sentiment"].append(re.match("\d+_(\d+)\.txt", file_path).group(1))
return pd.DataFrame.from_dict(data)
# Merge positive and negative examples, add a polarity column and shuffle.
def load_dataset(directory):
pos_df = load_directory_data(os.path.join(directory, "pos"))
neg_df = load_directory_data(os.path.join(directory, "neg"))
pos_df["polarity"] = 1
neg_df["polarity"] = 0
return pd.concat([pos_df, neg_df]).sample(frac=1).reset_index(drop=True)
# Download and process the dataset files.
def download_and_load_datasets(force_download=False):
dataset = tf.keras.utils.get_file(
fname="aclImdb.tar.gz",
origin="http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz",
extract=True)
train_df = load_dataset(os.path.join(os.path.dirname(dataset),
"aclImdb", "train"))
test_df = load_dataset(os.path.join(os.path.dirname(dataset),
"aclImdb", "test"))
return train_df, test_df
train, test = download_and_load_datasets()
train
###Output
_____no_output_____
###Markdown
To keep training fast, we'll take a sample of 5000 train and test examples, respectively.
###Code
train = train.sample(5000)
test = test.sample(5000)
###Output
_____no_output_____
###Markdown
For us, our input data is the 'sentence' column and our label is the 'polarity' column (0, 1 for negative and positive, respectively)
###Code
DATA_COLUMN = 'sentence'
LABEL_COLUMN = 'polarity'
# label_list is the list of labels, i.e. True, False or 0, 1 or 'dog', 'cat'
label_list = [0, 1]
###Output
_____no_output_____
###Markdown
Data PreprocessingWe'll need to transform our data into a format BERT understands. This involves two steps. First, we create `InputExample`'s using the constructor provided in the BERT library.- `text_a` is the text we want to classify, which in this case is the `sentence` column in our Dataframe. - `text_b` is used if we're training a model to understand the relationship between sentences (i.e. is `text_b` a translation of `text_a`? Is `text_b` an answer to the question asked by `text_a`?). This doesn't apply to our task, so we can leave `text_b` blank.- `label` is the label for our example, i.e. True, False
###Code
# Use the InputExample class from BERT's run_classifier code to create examples from the data
train_InputExamples = train.apply(lambda x: bert.run_classifier.InputExample(guid=None, # Globally unique ID for bookkeeping, unused in this example
text_a = x[DATA_COLUMN],
text_b = None,
label = x[LABEL_COLUMN]), axis = 1)
test_InputExamples = test.apply(lambda x: bert.run_classifier.InputExample(guid=None,
text_a = x[DATA_COLUMN],
text_b = None,
label = x[LABEL_COLUMN]), axis = 1)
###Output
_____no_output_____
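###Markdown
As a quick sanity check (just an illustrative snippet using the `train_InputExamples` Series created above), we can peek at one converted example.
###Code
# Sketch: inspect the first InputExample (text truncated for readability).
example = train_InputExamples.iloc[0]
print(example.text_a[:100], '...', example.label)
###Output
_____no_output_____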
###Markdown
Next, we need to preprocess our data so that it matches the data BERT was trained on. For this, we'll need to do a couple of things (but don't worry--this is also included in the Python library):1. Lowercase our text (if we're using a BERT lowercase model)2. Tokenize it (i.e. "sally says hi" -> ["sally", "says", "hi"])3. Break words into WordPieces (i.e. "calling" -> ["call", "ing"])4. Map our words to indexes using a vocab file that BERT provides5. Add special "CLS" and "SEP" tokens (see the [readme](https://github.com/google-research/bert))6. Append "index" and "segment" tokens to each input (see the [BERT paper](https://arxiv.org/pdf/1810.04805.pdf))Happily, we don't have to worry about most of these details. To start, we'll need to load a vocabulary file and lowercasing information directly from the BERT tf hub module:
###Code
# This is a path to an uncased (all lowercase) version of BERT
BERT_MODEL_HUB = "https://tfhub.dev/google/bert_uncased_L-12_H-768_A-12/1"
def create_tokenizer_from_hub_module():
"""Get the vocab file and casing info from the Hub module."""
with tf.Graph().as_default():
bert_module = hub.Module(BERT_MODEL_HUB)
tokenization_info = bert_module(signature="tokenization_info", as_dict=True)
with tf.Session() as sess:
vocab_file, do_lower_case = sess.run([tokenization_info["vocab_file"],
tokenization_info["do_lower_case"]])
return bert.tokenization.FullTokenizer(
vocab_file=vocab_file, do_lower_case=do_lower_case)
tokenizer = create_tokenizer_from_hub_module()
###Output
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
###Markdown
Great--we just learned that the BERT model we're using expects lowercase data (that's what's stored in tokenization_info["do_lower_case"]) and we also loaded BERT's vocab file. We also created a tokenizer, which breaks words into word pieces:
###Code
tokenizer.tokenize("This here's an example of using the BERT tokenizer")
###Output
_____no_output_____
###Markdown
Using our tokenizer, we'll call `run_classifier.convert_examples_to_features` on our InputExamples to convert them into features BERT understands.
###Code
# We'll set sequences to be at most 128 tokens long.
MAX_SEQ_LENGTH = 128
# Convert our train and test features to InputFeatures that BERT understands.
train_features = bert.run_classifier.convert_examples_to_features(train_InputExamples, label_list, MAX_SEQ_LENGTH, tokenizer)
test_features = bert.run_classifier.convert_examples_to_features(test_InputExamples, label_list, MAX_SEQ_LENGTH, tokenizer)
train_features
###Output
_____no_output_____
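###Markdown
To see exactly what BERT consumes, here is a small sketch inspecting the first converted feature; the attribute names follow `bert.run_classifier.InputFeatures`.
###Code
# Sketch: each InputFeatures holds padded token ids, an attention mask, segment ids and the label id.
f = train_features[0]
print(len(f.input_ids), f.input_ids[:10])
print(f.input_mask[:10], f.segment_ids[:10], f.label_id)
###Output
_____no_output_____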
###Markdown
Creating a modelNow that we've prepared our data, let's focus on building a model. `create_model` does just this below. First, it loads the BERT tf hub module again (this time to extract the computation graph). Next, it creates a single new layer that will be trained to adapt BERT to our sentiment task (i.e. classifying whether a movie review is positive or negative). This strategy of using a mostly trained model is called [fine-tuning](http://wiki.fast.ai/index.php/Fine_tuning).
###Code
def create_model(is_predicting, input_ids, input_mask, segment_ids, labels,
num_labels):
"""Creates a classification model."""
bert_module = hub.Module(
BERT_MODEL_HUB,
trainable=True)
bert_inputs = dict(
input_ids=input_ids,
input_mask=input_mask,
segment_ids=segment_ids)
bert_outputs = bert_module(
inputs=bert_inputs,
signature="tokens",
as_dict=True)
# Use "pooled_output" for classification tasks on an entire sentence.
# Use "sequence_outputs" for token-level output.
output_layer = bert_outputs["pooled_output"]
hidden_size = output_layer.shape[-1].value
# Create our own layer to tune for our sentiment data.
output_weights = tf.get_variable(
"output_weights", [num_labels, hidden_size],
initializer=tf.truncated_normal_initializer(stddev=0.02))
output_bias = tf.get_variable(
"output_bias", [num_labels], initializer=tf.zeros_initializer())
with tf.variable_scope("loss"):
# Dropout helps prevent overfitting
output_layer = tf.nn.dropout(output_layer, keep_prob=0.9)
logits = tf.matmul(output_layer, output_weights, transpose_b=True)
logits = tf.nn.bias_add(logits, output_bias)
log_probs = tf.nn.log_softmax(logits, axis=-1)
# Convert labels into one-hot encoding
one_hot_labels = tf.one_hot(labels, depth=num_labels, dtype=tf.float32)
predicted_labels = tf.squeeze(tf.argmax(log_probs, axis=-1, output_type=tf.int32))
# If we're predicting, we want predicted labels and the probabiltiies.
if is_predicting:
return (predicted_labels, log_probs)
# If we're train/eval, compute loss between predicted and actual label
per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1)
loss = tf.reduce_mean(per_example_loss)
return (loss, predicted_labels, log_probs)
###Output
_____no_output_____
###Markdown
Next we'll wrap our model function in a `model_fn_builder` function that adapts our model to work for training, evaluation, and prediction.
###Code
# model_fn_builder actually creates our model function
# using the passed parameters for num_labels, learning_rate, etc.
def model_fn_builder(num_labels, learning_rate, num_train_steps,
num_warmup_steps):
"""Returns `model_fn` closure for TPUEstimator."""
def model_fn(features, labels, mode, params): # pylint: disable=unused-argument
"""The `model_fn` for TPUEstimator."""
input_ids = features["input_ids"]
input_mask = features["input_mask"]
segment_ids = features["segment_ids"]
label_ids = features["label_ids"]
is_predicting = (mode == tf.estimator.ModeKeys.PREDICT)
# TRAIN and EVAL
if not is_predicting:
(loss, predicted_labels, log_probs) = create_model(
is_predicting, input_ids, input_mask, segment_ids, label_ids, num_labels)
train_op = bert.optimization.create_optimizer(
loss, learning_rate, num_train_steps, num_warmup_steps, use_tpu=False)
# Calculate evaluation metrics.
def metric_fn(label_ids, predicted_labels):
accuracy = tf.metrics.accuracy(label_ids, predicted_labels)
f1_score = tf.contrib.metrics.f1_score(
label_ids,
predicted_labels)
auc = tf.metrics.auc(
label_ids,
predicted_labels)
recall = tf.metrics.recall(
label_ids,
predicted_labels)
precision = tf.metrics.precision(
label_ids,
predicted_labels)
true_pos = tf.metrics.true_positives(
label_ids,
predicted_labels)
true_neg = tf.metrics.true_negatives(
label_ids,
predicted_labels)
false_pos = tf.metrics.false_positives(
label_ids,
predicted_labels)
false_neg = tf.metrics.false_negatives(
label_ids,
predicted_labels)
return {
"eval_accuracy": accuracy,
"f1_score": f1_score,
"auc": auc,
"precision": precision,
"recall": recall,
"true_positives": true_pos,
"true_negatives": true_neg,
"false_positives": false_pos,
"false_negatives": false_neg
}
eval_metrics = metric_fn(label_ids, predicted_labels)
if mode == tf.estimator.ModeKeys.TRAIN:
return tf.estimator.EstimatorSpec(mode=mode,
loss=loss,
train_op=train_op)
else:
return tf.estimator.EstimatorSpec(mode=mode,
loss=loss,
eval_metric_ops=eval_metrics)
else:
(predicted_labels, log_probs) = create_model(
is_predicting, input_ids, input_mask, segment_ids, label_ids, num_labels)
predictions = {
'probabilities': log_probs,
'labels': predicted_labels
}
return tf.estimator.EstimatorSpec(mode, predictions=predictions)
# Return the actual model function in the closure
return model_fn
# Compute train and warmup steps from batch size
# These hyperparameters are copied from this colab notebook (https://colab.sandbox.google.com/github/tensorflow/tpu/blob/master/tools/colab/bert_finetuning_with_cloud_tpus.ipynb)
BATCH_SIZE = 32
LEARNING_RATE = 2e-5
NUM_TRAIN_EPOCHS = 3.0
# Warmup is a period of time where the learning rate
# is small and gradually increases--usually helps training.
WARMUP_PROPORTION = 0.1
# Model configs
SAVE_CHECKPOINTS_STEPS = 500
SAVE_SUMMARY_STEPS = 100
# Compute # train and warmup steps from batch size
num_train_steps = int(len(train_features) / BATCH_SIZE * NUM_TRAIN_EPOCHS)
num_warmup_steps = int(num_train_steps * WARMUP_PROPORTION)
# Specify output directory and number of checkpoint steps to save
run_config = tf.estimator.RunConfig(
model_dir=OUTPUT_DIR,
save_summary_steps=SAVE_SUMMARY_STEPS,
save_checkpoints_steps=SAVE_CHECKPOINTS_STEPS)
model_fn = model_fn_builder(
num_labels=len(label_list),
learning_rate=LEARNING_RATE,
num_train_steps=num_train_steps,
num_warmup_steps=num_warmup_steps)
estimator = tf.estimator.Estimator(
model_fn=model_fn,
config=run_config,
params={"batch_size": BATCH_SIZE})
###Output
INFO:tensorflow:Using config: {'_model_dir': 'output_dir', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': 500, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true
graph_options {
rewrite_options {
meta_optimizer_iterations: ONE
}
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7fa3e033df90>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
###Markdown
Next we create an input builder function that takes our training feature set (`train_features`) and produces a generator. This is a pretty standard design pattern for working with Tensorflow [Estimators](https://www.tensorflow.org/guide/estimators).
###Code
# Create an input function for training. drop_remainder = True for using TPUs.
train_input_fn = bert.run_classifier.input_fn_builder(
features=train_features,
seq_length=MAX_SEQ_LENGTH,
is_training=True,
drop_remainder=False)
###Output
_____no_output_____
###Markdown
Now we train our model! For me, using a Colab notebook running on Google's GPUs, my training time was about 14 minutes.
###Code
print(f'Beginning Training!')
current_time = datetime.now()
estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
print("Training took time ", datetime.now() - current_time)
###Output
Beginning Training!
WARNING:tensorflow:From /home/sahand/anaconda3/envs/tf-1/lib/python3.7/site-packages/tensorflow_core/python/training/training_util.py:236: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.
###Markdown
Now let's use our test data to see how well our model did:
###Code
test_input_fn = run_classifier.input_fn_builder(
features=test_features,
seq_length=MAX_SEQ_LENGTH,
is_training=False,
drop_remainder=False)
estimator.evaluate(input_fn=test_input_fn, steps=None)
###Output
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
###Markdown
Now let's write code to make predictions on new sentences:
###Code
def getPrediction(in_sentences):
labels = ["Negative", "Positive"]
input_examples = [run_classifier.InputExample(guid="", text_a = x, text_b = None, label = 0) for x in in_sentences] # here, "" is just a dummy label
input_features = run_classifier.convert_examples_to_features(input_examples, label_list, MAX_SEQ_LENGTH, tokenizer)
predict_input_fn = run_classifier.input_fn_builder(features=input_features, seq_length=MAX_SEQ_LENGTH, is_training=False, drop_remainder=False)
predictions = estimator.predict(predict_input_fn)
return [(sentence, prediction['probabilities'], labels[prediction['labels']]) for sentence, prediction in zip(in_sentences, predictions)]
pred_sentences = [
"That movie was absolutely awful",
"The acting was a bit lacking",
"The film was creative and surprising",
"Absolutely fantastic!"
]
predictions = getPrediction(pred_sentences)
###Output
INFO:tensorflow:Writing example 0 of 4
INFO:tensorflow:*** Example ***
INFO:tensorflow:guid:
INFO:tensorflow:tokens: [CLS] that movie was absolutely awful [SEP]
INFO:tensorflow:input_ids: 101 2008 3185 2001 7078 9643 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:label: 0 (id = 0)
INFO:tensorflow:*** Example ***
INFO:tensorflow:guid:
INFO:tensorflow:tokens: [CLS] the acting was a bit lacking [SEP]
INFO:tensorflow:input_ids: 101 1996 3772 2001 1037 2978 11158 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:label: 0 (id = 0)
INFO:tensorflow:*** Example ***
INFO:tensorflow:guid:
INFO:tensorflow:tokens: [CLS] the film was creative and surprising [SEP]
INFO:tensorflow:input_ids: 101 1996 2143 2001 5541 1998 11341 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:label: 0 (id = 0)
INFO:tensorflow:*** Example ***
INFO:tensorflow:guid:
INFO:tensorflow:tokens: [CLS] absolutely fantastic ! [SEP]
INFO:tensorflow:input_ids: 101 7078 10392 999 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:input_mask: 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:label: 0 (id = 0)
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from gs://bert-tfhub/aclImdb_v1/model.ckpt-468
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
###Markdown
Voila! We have a sentiment classifier!
###Code
predictions
###Output
_____no_output_____
|
colabs/bigquery_storage.ipynb
|
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Storage To Table ParametersMove using bucket and path prefix. 1. Specify a bucket and path prefix, * suffix is NOT required. 1. Every time the job runs it will overwrite the table.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'bucket': '', # Google cloud bucket.
'auth_write': 'service', # Credentials used for writing data.
'path': '', # Path prefix to read from, no * required.
'dataset': '', # Existing BigQuery dataset.
'table': '', # Table to create from this query.
'schema': '[]', # Schema provided in JSON list format or empty list.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Storage To TableThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'bigquery': {
'auth': 'user',
'from': {
'bucket': {'field': {'name': 'bucket','kind': 'string','order': 1,'default': '','description': 'Google cloud bucket.'}},
'path': {'field': {'name': 'path','kind': 'string','order': 2,'default': '','description': 'Path prefix to read from, no * required.'}}
},
'to': {
'auth': 'user',
'dataset': {'field': {'name': 'dataset','kind': 'string','order': 3,'default': '','description': 'Existing BigQuery dataset.'}},
'table': {'field': {'name': 'table','kind': 'string','order': 4,'default': '','description': 'Table to create from this query.'}}
},
'schema': {'field': {'name': 'schema','kind': 'json','order': 5,'default': '[]','description': 'Schema provided in JSON list format or empty list.'}}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Storage To Table ParametersMove using bucket and path prefix. 1. Specify a bucket and path prefix, * suffix is NOT required. 1. Every time the job runs it will overwrite the table.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'bucket': '', # Google cloud bucket.
'auth_write': 'service', # Credentials used for writing data.
'path': '', # Path prefix to read from, no * required.
'dataset': '', # Existing BigQuery dataset.
'table': '', # Table to create from this query.
'schema': '[]', # Schema provided in JSON list format or empty list.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Storage To TableThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields, json_expand_includes
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'bigquery': {
'auth': 'user',
'from': {
'bucket': {'field': {'name': 'bucket','kind': 'string','order': 1,'default': '','description': 'Google cloud bucket.'}},
'path': {'field': {'name': 'path','kind': 'string','order': 2,'default': '','description': 'Path prefix to read from, no * required.'}}
},
'to': {
'auth': 'user',
'dataset': {'field': {'name': 'dataset','kind': 'string','order': 3,'default': '','description': 'Existing BigQuery dataset.'}},
'table': {'field': {'name': 'table','kind': 'string','order': 4,'default': '','description': 'Table to create from this query.'}}
},
'schema': {'field': {'name': 'schema','kind': 'json','order': 5,'default': '[]','description': 'Schema provided in JSON list format or empty list.'}}
}
}
]
json_set_fields(TASKS, FIELDS)
json_expand_includes(TASKS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True)
project.execute()
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CLIENT CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Storage To Table ParametersMove using bucket and path prefix. 1. Specify a bucket and path prefix, * suffix is NOT required. 1. Every time the job runs it will overwrite the table.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'bucket': '', # Google cloud bucket.
'auth_write': 'service', # Credentials used for writing data.
'path': '', # Path prefix to read from, no * required.
'dataset': '', # Existing BigQuery dataset.
'table': '', # Table to create from this query.
'schema': '[]', # Schema provided in JSON list format or empty list.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Storage To TableThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.configuration import Configuration
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'bigquery': {
'auth': 'user',
'from': {
'bucket': {'field': {'name': 'bucket','kind': 'string','order': 1,'default': '','description': 'Google cloud bucket.'}},
'path': {'field': {'name': 'path','kind': 'string','order': 2,'default': '','description': 'Path prefix to read from, no * required.'}}
},
'to': {
'auth': 'user',
'dataset': {'field': {'name': 'dataset','kind': 'string','order': 3,'default': '','description': 'Existing BigQuery dataset.'}},
'table': {'field': {'name': 'table','kind': 'string','order': 4,'default': '','description': 'Table to create from this query.'}}
},
'schema': {'field': {'name': 'schema','kind': 'json','order': 5,'default': '[]','description': 'Schema provided in JSON list format or empty list.'}}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(Configuration(project=CLOUD_PROJECT, client=CLIENT_CREDENTIALS, user=USER_CREDENTIALS, verbose=True), TASKS, force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Storage To Table ParametersMove using bucket and path prefix. 1. Specify a bucket and path prefix, * suffix is NOT required. 1. Every time the job runs it will overwrite the table.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'bucket': '', # Google cloud bucket.
'path': '', # Path prefix to read from, no * required.
'dataset': '', # Existing BigQuery dataset.
'table': '', # Table to create from this query.
'schema': '[]', # Schema provided in JSON list format or empty list.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Storage To TableThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'bigquery': {
'auth': 'user',
'from': {
'bucket': {'field': {'name': 'bucket','kind': 'string','order': 1,'default': '','description': 'Google cloud bucket.'}},
'path': {'field': {'name': 'path','kind': 'string','order': 2,'default': '','description': 'Path prefix to read from, no * required.'}}
},
'to': {
'auth': 'user',
'dataset': {'field': {'name': 'dataset','kind': 'string','order': 3,'default': '','description': 'Existing BigQuery dataset.'}},
'table': {'field': {'name': 'table','kind': 'string','order': 4,'default': '','description': 'Table to create from this query.'}}
},
'schema': {'field': {'name': 'schema','kind': 'json','order': 5,'default': '[]','description': 'Schema provided in JSON list format or empty list.'}}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True)
project.execute()
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Storage To Table ParametersMove using bucket and path prefix. 1. Specify a bucket and path prefix, * suffix is NOT required. 1. Every time the job runs it will overwrite the table.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'bucket': '', # Google cloud bucket.
'auth_write': 'service', # Credentials used for writing data.
'auth_read': 'user', # Credentials used for reading data.
'path': '', # Path prefix to read from, no * required.
'dataset': '', # Existing BigQuery dataset.
'table': '', # Table to create from this query.
'schema': '[]', # Schema provided in JSON list format or empty list.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Storage To TableThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'bigquery': {
'auth': 'user',
'from': {
'path': {'field': {'description': 'Path prefix to read from, no * required.','name': 'path','order': 2,'default': '','kind': 'string'}},
'bucket': {'field': {'description': 'Google cloud bucket.','name': 'bucket','order': 1,'default': '','kind': 'string'}}
},
'to': {
'auth': 'user',
'dataset': {'field': {'description': 'Existing BigQuery dataset.','name': 'dataset','order': 3,'default': '','kind': 'string'}},
'table': {'field': {'description': 'Table to create from this query.','name': 'table','order': 4,'default': '','kind': 'string'}}
},
'schema': {'field': {'description': 'Schema provided in JSON list format or empty list.','name': 'schema','order': 5,'default': '[]','kind': 'json'}}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Storage To Table ParametersMove using bucket and path prefix. 1. Specify a bucket and path prefix, * suffix is NOT required. 1. Every time the job runs it will overwrite the table.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'bucket': '', # Google cloud bucket.
'auth_write': 'service', # Credentials used for writing data.
'path': '', # Path prefix to read from, no * required.
'dataset': '', # Existing BigQuery dataset.
'table': '', # Table to create from this query.
'schema': '[]', # Schema provided in JSON list format or empty list.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Storage To TableThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'bigquery': {
'auth': 'user',
'from': {
'bucket': {'field': {'name': 'bucket','kind': 'string','order': 1,'default': '','description': 'Google cloud bucket.'}},
'path': {'field': {'name': 'path','kind': 'string','order': 2,'default': '','description': 'Path prefix to read from, no * required.'}}
},
'to': {
'auth': 'user',
'dataset': {'field': {'name': 'dataset','kind': 'string','order': 3,'default': '','description': 'Existing BigQuery dataset.'}},
'table': {'field': {'name': 'table','kind': 'string','order': 4,'default': '','description': 'Table to create from this query.'}}
},
'schema': {'field': {'name': 'schema','kind': 'json','order': 5,'default': '[]','description': 'Schema provided in JSON list format or empty list.'}}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Storage To Table ParametersMove using bucket and path prefix. 1. Specify a bucket and path prefix, * suffix is NOT required. 1. Every time the job runs it will overwrite the table.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'bucket': '', # Google cloud bucket.
'auth_write': 'service', # Credentials used for writing data.
'path': '', # Path prefix to read from, no * required.
'dataset': '', # Existing BigQuery dataset.
'table': '', # Table to create from this query.
'schema': '[]', # Schema provided in JSON list format or empty list.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Storage To TableThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.configuration import Configuration
from starthinker.util.configuration import commandline_parser
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'bigquery': {
'auth': 'user',
'from': {
'bucket': {'field': {'name': 'bucket','kind': 'string','order': 1,'default': '','description': 'Google cloud bucket.'}},
'path': {'field': {'name': 'path','kind': 'string','order': 2,'default': '','description': 'Path prefix to read from, no * required.'}}
},
'to': {
'auth': 'user',
'dataset': {'field': {'name': 'dataset','kind': 'string','order': 3,'default': '','description': 'Existing BigQuery dataset.'}},
'table': {'field': {'name': 'table','kind': 'string','order': 4,'default': '','description': 'Table to create from this query.'}}
},
'schema': {'field': {'name': 'schema','kind': 'json','order': 5,'default': '[]','description': 'Schema provided in JSON list format or empty list.'}}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(Configuration(project=CLOUD_PROJECT, client=CLIENT_CREDENTIALS, user=USER_CREDENTIALS, verbose=True), TASKS, force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Storage To Table ParametersMove using bucket and path prefix. 1. Specify a bucket and path prefix, * suffix is NOT required. 1. Every time the job runs it will overwrite the table.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'bucket': '', # Google cloud bucket.
'auth_write': 'service', # Credentials used for writing data.
'path': '', # Path prefix to read from, no * required.
'dataset': '', # Existing BigQuery dataset.
'table': '', # Table to create from this query.
'schema': '[]', # Schema provided in JSON list format or empty list.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Storage To TableThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'bigquery': {
'auth': 'user',
'from': {
'bucket': {'field': {'name': 'bucket','kind': 'string','order': 1,'default': '','description': 'Google cloud bucket.'}},
'path': {'field': {'name': 'path','kind': 'string','order': 2,'default': '','description': 'Path prefix to read from, no * required.'}}
},
'to': {
'auth': 'user',
'dataset': {'field': {'name': 'dataset','kind': 'string','order': 3,'default': '','description': 'Existing BigQuery dataset.'}},
'table': {'field': {'name': 'table','kind': 'string','order': 4,'default': '','description': 'Table to create from this query.'}}
},
'schema': {'field': {'name': 'schema','kind': 'json','order': 5,'default': '[]','description': 'Schema provided in JSON list format or empty list.'}}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
###Output
_____no_output_____
###Markdown
Storage To TableMove using bucket and path prefix. LicenseCopyright 2020 Google LLC,Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License. DisclaimerThis is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.This code generated (see starthinker/scripts for possible source): - **Command**: "python starthinker_ui/manage.py colab" - **Command**: "python starthinker/tools/colab.py [JSON RECIPE]" 1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Set ConfigurationThis code is required to initialize the project. Fill in required fields and press play.1. If the recipe uses a Google Cloud Project: - Set the configuration **project** value to the project identifier from [these instructions](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md).1. If the recipe has **auth** set to **user**: - If you have user credentials: - Set the configuration **user** value to your user credentials JSON. - If you DO NOT have user credentials: - Set the configuration **client** value to [downloaded client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md).1. If the recipe has **auth** set to **service**: - Set the configuration **service** value to [downloaded service credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_service.md).
###Code
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
###Output
_____no_output_____
###Markdown
3. Enter Storage To Table Recipe Parameters 1. Specify a bucket and path prefix, * suffix is NOT required. 1. Every time the job runs it will overwrite the table.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'bucket': '', # Google cloud bucket.
'auth_write': 'service', # Credentials used for writing data.
'path': '', # Path prefix to read from, no * required.
'dataset': '', # Existing BigQuery dataset.
'table': '', # Table to create from this query.
'schema': '[]', # Schema provided in JSON list format or empty list.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
4. Execute Storage To TableThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'bigquery': {
'auth': 'user',
'from': {
'bucket': {'field': {'name': 'bucket', 'kind': 'string', 'order': 1, 'default': '', 'description': 'Google cloud bucket.'}},
'path': {'field': {'name': 'path', 'kind': 'string', 'order': 2, 'default': '', 'description': 'Path prefix to read from, no * required.'}}
},
'to': {
'auth': 'user',
'dataset': {'field': {'name': 'dataset', 'kind': 'string', 'order': 3, 'default': '', 'description': 'Existing BigQuery dataset.'}},
'table': {'field': {'name': 'table', 'kind': 'string', 'order': 4, 'default': '', 'description': 'Table to create from this query.'}}
},
'schema': {'field': {'name': 'schema', 'kind': 'json', 'order': 5, 'default': '[]', 'description': 'Schema provided in JSON list format or empty list.'}}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
###Output
_____no_output_____
|
ISBN978-4-7981-6720-6/list5_3.ipynb
|
###Markdown
Chapter 5 Deep Learning Theory* 5.1 Mathematical Foundations. List 5.3: Computing natural logarithms with NumPy
###Code
import numpy as np
print(np.log(np.e))
print(np.log(np.exp(2)))
print(np.log(np.exp(12)))
###Output
1.0
2.0
12.0
|
hw1/NaiveBayes.ipynb
|
###Markdown
**Two Class Naive Bayes Classifier**
###Code
import numpy as np
from sklearn.metrics import accuracy_score

class TwoClassNaiveBayesClassifier():
def __init__(self, sample_dim):
self.sample_dim = sample_dim
def train(self, training_samples, flag, training_labels):
sample_dim = self.sample_dim
sample_num = training_samples.shape[0]
if sample_dim != training_samples.shape[1]:
raise Exception("Input samples are not compatible with this classifier!")
conti_feature = np.where(flag==1)
dis_feature = np.where(flag==0)
prob_positive = np.zeros((sample_dim, 3))
prob_negative = np.zeros((sample_dim, 3))
mean_list = np.zeros((sample_dim, 2))
std_list = np.zeros((sample_dim, 2))
class_prior = np.zeros((2))
class_prior[0] = np.count_nonzero(training_labels==-1) / sample_num
class_prior[1] = np.count_nonzero(training_labels==1) / sample_num
for dim in conti_feature[0]:
mean_list[dim, 0] = np.mean(training_samples[training_labels==-1, dim])
mean_list[dim, 1] = np.mean(training_samples[training_labels==1, dim])
std_list[dim, 0] = np.std(training_samples[training_labels==-1, dim])
std_list[dim, 1] = np.std(training_samples[training_labels==1, dim])
pos = np.where(training_labels==1)
neg = np.where(training_labels==-1)
num_positive = len(pos[0])
num_negative = len(neg[0])
for dim in dis_feature[0]:
feature = training_samples[training_labels==1][:,dim]
nums = np.unique(feature)
for i in range(len(nums)):
prob_positive[dim][i] = np.count_nonzero(feature==nums[i])/num_positive
feature = training_samples[training_labels==-1][:,dim]
nums = np.unique(feature)
for i in range(len(nums)):
prob_negative[dim][i] = np.count_nonzero(feature==nums[i])/num_negative
self.class_prior = class_prior
self.mean_list = mean_list
self.std_list = std_list
self.prob_positive = prob_positive
self.prob_negative = prob_negative
def test(self, testing_samples, flag, testing_labels=None):
sample_dim = self.sample_dim
sample_num = testing_samples.shape[0]
if sample_dim != testing_samples.shape[1]:
raise Exception("Input samples are not compatible with this classifier!")
predicted_labels = np.zeros((sample_num))
class_prior = self.class_prior
mean_list = self.mean_list
std_list = self.std_list
prob_positive = self.prob_positive
prob_negative = self.prob_negative
conti_feature = np.where(flag==1)
dis_feature = np.where(flag==0)
for i in range(sample_num):
xi = testing_samples[i]
xi_posterior_prob = [1, 1]
for dim in conti_feature[0]:
xi_prob1 = self.Gaussian(xi[dim], mean_list[dim, 0], std_list[dim, 0])
xi_posterior_prob[0] *= xi_prob1
xi_prob2 = self.Gaussian(xi[dim], mean_list[dim, 1], std_list[dim, 1])
xi_posterior_prob[1] *= xi_prob2
for dim in dis_feature[0]:
xi_prob1 = prob_negative[dim][int(xi[dim])-1]
xi_posterior_prob[0] *= xi_prob1
xi_prob2 = prob_positive[dim][int(xi[dim])-1]
xi_posterior_prob[1] *= xi_prob2
xi_posterior_prob[0] *= class_prior[0]
xi_posterior_prob[1] *= class_prior[1]
if xi_posterior_prob[0] > xi_posterior_prob[1]:
predicted_labels[i] = -1
else:
predicted_labels[i] = 1
if testing_labels is not None:
acc = accuracy_score(testing_labels, predicted_labels)
return predicted_labels
def Gaussian(self, x, mean, std):
        # Univariate Gaussian density; `std` is the standard deviation stored in train()
        return np.exp(-0.5 * ((x - mean) / std) ** 2) / (np.sqrt(2 * np.pi) * std)
def NBClassifier(training_samples, training_labels, testing_samples, testing_labels, flag):
sample_dim = len(flag)
NBC = TwoClassNaiveBayesClassifier(sample_dim)
NBC.train(training_samples, flag, training_labels)
pred = NBC.test(testing_samples, flag, testing_labels)
test_num = len(testing_labels)
correct_num = 0
for i in range(test_num):
if pred[i] == testing_labels[i]:
correct_num += 1
return [test_num, correct_num, correct_num / test_num]
###Output
_____no_output_____
###Markdown
**k-fold cross-validation**
###Code
def Cross_validation(samples, labels, flag, k=5):
batch_size = int(samples.shape[0] / k)
correct_classification = 0
total = 0
for i in range(0, k):
k_train_samples = np.vstack([samples[0 : i * batch_size], samples[(i + 1) * batch_size :]])
k_train_labels = np.hstack([labels[0 : i * batch_size], labels[(i + 1) * batch_size:]])
k_val_samples = samples[i * batch_size : (i + 1) * batch_size]
k_val_labels = labels[i * batch_size : (i + 1) * batch_size]
res = NBClassifier(k_train_samples, k_train_labels, k_val_samples, k_val_labels, flag)
correct_classification += res[1]
total += res[0]
print('ACC of %dth validation : %.3f' % (i, res[2]))
return correct_classification / total
###Output
_____no_output_____
###Markdown
**Shuffle the dataset**
###Code
idx = list(range(17))
np.random.shuffle(idx)
samples = samples[idx]
labels = labels[idx]
Cross_validation(samples, labels, flag, k=5)
###Output
ACC of 0th validation : 0.667
ACC of 1th validation : 0.667
ACC of 2th validation : 1.000
ACC of 3th validation : 0.000
ACC of 4th validation : 0.333
|
Slides/exc_11/class11.ipynb
|
###Markdown
PS6: Solving the Solow-model Introduction to Programming and Numerical Analysis *Oluf Kelkjær* **Today's Plan** 1. Dataproject1. Working with equations * Scipy's `linalg` * `Sympy`2. Let's work on PS6 DataprojectExpect feedback from me before the next exercise class! Remember to do peer-feedback - **deadline**: 24th april 23:59 Scipy's `linalg` Linalg is one of scipy's submodules. Can basically do anything with the realm of linear algebra: - Basic stuff: determinant, invert, norm- Matrix decompositions (LU, Cholesky etc.)- Solve a system of equations- Find eigenvalues An example:Let's solve for x$$Ax = b$$
###Code
import numpy as np
from scipy import linalg
np.random.seed(1900)
A = np.random.uniform(size=(5,5))
b = np.random.uniform(size=5)
print(f'Matrix A:\n{A}\n\nMatrix b:\n {b}')
# Solve using LU factorization ->
# Split A in a lower, upper triangular matrix and a permutation matrix -> Speed
# LU factorize A using linalg
LU,piv = linalg.lu_factor(A)
# Solve using linalg
x = linalg.lu_solve((LU,piv),b)
print(x)
# or you could use a regular solve
print(linalg.solve(A,b))
###Output
[-15.33189031 -24.00998148 40.02675108 15.24193293 4.89008792]
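###Markdown
A brief sketch (not part of the original slides) of the other operations listed above, applied to the same matrix `A`:
###Code
# Determinant, (Frobenius) norm, and eigenvalues with scipy.linalg
print(linalg.det(A))
print(linalg.norm(A))
print(linalg.eigvals(A))
###Output
_____no_output_____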
###Markdown
What do we use it for? In the first question of the exam 2020 you had to implement the OLS estimator using linear algebra. Recall that $$\hat{\beta}=(X^{'}X)^{-1}X^{'}y$$ Symbolic Python `SymPy` is a Python library for symbolic mathematics and lets you solve equations **analytically** (*like* WolframAlpha or Symbolab). Say that you want to implement the utility function of a standard OLG agent. We assume agents derive utility from consumption in both periods: $$U_t = u(c_{1t})+\frac{1}{1+\rho}u(c_{2t+1})$$ We assume log-preferences.
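As a quick aside on the OLS formula above, here is a minimal sketch of that estimator using `scipy.linalg` (the simulated `X` and `y` below are illustrative assumptions, not from the exam):
###Code
# Hedged sketch: OLS via the normal equations, beta_hat = (X'X)^{-1} X'y.
# The data is simulated purely for illustration.
np.random.seed(2020)
X = np.column_stack([np.ones(100), np.random.uniform(size=(100, 2))]) # constant + two regressors
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + np.random.normal(scale=0.1, size=100)
beta_hat = linalg.solve(X.T @ X, X.T @ y) # solve (X'X) b = X'y rather than inverting explicitly
print(beta_hat)
###Output
_____no_output_____
###Markdown
Back to the OLG example above: the next cell sets up the log-preference utility in SymPy.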
###Code
import sympy as sm
# Initialize variables in Sympy
c1,c2 = sm.symbols('c_1t'), sm.symbols('c_2t+1')
rho = sm.symbols('rho')
# Setup utility in sympy
uc1 = sm.ln(c1)
uc2 = sm.ln(c2)
U = uc1 + 1/(1+rho) * uc2
U
###Output
_____no_output_____
###Markdown
With `sympy` we are able to do many calculations. Say that we need the derivative of $U$ w.r.t. $c_{2t+1}$:
###Code
# We just use SymPy's .diff() method:
sm.diff(U,c2)
###Output
_____no_output_____
###Markdown
Another cool feature is that you can turn your SymPy equations into python functions. This can really tie your model projects together: * Solve model analytically with SymPy * Convert your solution to a python function e.g. the law-of-motion in OLG * Find steady state level of capital using an optimizer How is it done?
###Code
# Use SymPy's lambdify method which takes an iterable of arguments in our case the consumptions and rho
# and of course the function in our case U
util = sm.lambdify((c1,c2,rho),U)
# Compute some utility
util(7,8,0.1)
###Output
_____no_output_____
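###Markdown
As a rough sketch of the workflow in the bullets above (solve analytically, then lambdify), applied to a simplified Solow transition equation; the parameter values at the end are illustrative assumptions:
###Code
# Hedged sketch: solve a simplified Solow steady state k = (s*k**alpha + (1-delta)*k)/((1+g)*(1+n))
# analytically with SymPy, then turn the solution into a fast numerical function.
k, alpha, delta, s, g, n = sm.symbols('k alpha delta s g n')
ss = sm.Eq(k, (s*k**alpha + (1 - delta)*k) / ((1 + g)*(1 + n)))
kss = sm.solve(ss, k)[0] # analytical steady-state capital
ss_func = sm.lambdify((s, g, n, delta, alpha), kss)
ss_func(0.2, 0.02, 0.01, 0.1, 1/3) # illustrative parameter values
###Output
_____no_output_____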
|
notebooks/challenge_test.ipynb
|
###Markdown
Workshop challenge Package installing and data import
###Code
# standard library imports
import os
import sys
from collections import Counter
# pandas, seaborn etc.
import seaborn as sns
import sklearn
import matplotlib.pyplot as plt
%matplotlib inline
import pandas as pd
import numpy as np
# sklearn outlier models
from sklearn.neighbors import NearestNeighbors
# from sklearn.neighbors import LocalOutlierFactor
# from sklearn.ensemble import IsolationForest
from sklearn.cluster import DBSCAN
from sklearn.mixture import GaussianMixture
# other sklearn functions
from sklearn.decomposition import PCA
from sklearn.covariance import MinCovDet, EmpiricalCovariance
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import StandardScaler, RobustScaler, MinMaxScaler
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import scale as preproc_scale
from sklearn.manifold import TSNE
# pyod
import pyod
from pyod.models.auto_encoder import AutoEncoder
from pyod.models.knn import KNN
from pyod.models.lof import LOF
# from pyod.models.pca import PCA as pyod_PCA
from pyod.models.iforest import IForest
sys.path.append("..") #to enable importing from outlierutils
from outlierutils import plot_top_N, plot_outlier_scores, LabelSubmitter
url = "https://unsupervised-label-api-pg.herokuapp.com/" # Link to the API
###Output
_____no_output_____
###Markdown
Data Imports
###Code
data_path = '../data'
x_kdd = pd.read_pickle(os.path.join(data_path, 'x_kdd.pkl'))
x_kdd = x_kdd.drop_duplicates()
if x_kdd.index.max() > len(x_kdd):
    x_kdd = x_kdd.reset_index(drop=True)  # drop=True so the old index does not leak in as a feature column
print(f'Data set size: {x_kdd.shape}')
x_kdd.head()
###Output
_____no_output_____
###Markdown
Challenge DescriptionYou just imported a data set, `x_kdd`, with 48K rows. The dataset was collected by by MIT Lincoln Labs in 1999, by operating a LAN-network as usual, and additionally carrying out various attacks. This specific dataset (which is a subset of the original dataset) has "normal" traffic as inlier class, and several attacks (buffer_overflow, ftp_write, imap, ...) as outlier class. Although this data does not represent payment fraud, it is relevant because of the mixed data type. There are no labels available, there is therefore also no split in train and test. The target is to predict as many true positives as possible (each positive gets you a positive score), and as few false positives as possible (each false positive subtracts a small score). So only submit points that may likely be positives!!Be selective, just submitting all points, or random points, will not get you a good score :)- Each true positive found yields **500** points- Each false positive costs **25** points**Hints**- The fraction of positives is less than 1%. Random guessing to gather labels is therefore unlikely to pay off. - When sufficiently many positive labels are available, this information may be used to further tune unsupervised algorithms, or to train a supervised classifier First clean up the data: convert categorical columns to one-hot encoded, and MinMax-scale all features. Do not remove any rows!
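Given this scoring, a quick back-of-the-envelope check (a sketch, not part of the assignment) gives the break-even probability above which a point is worth submitting:
###Code
# Expected value of submitting a point: p*500 - (1 - p)*25 > 0  <=>  p > 25/525
p_break_even = 25 / (500 + 25)
print(f"Only submit a point if its estimated outlier probability exceeds roughly {p_break_even:.3f}")
###Output
_____no_output_____
###Markdown
The clean-up requested above: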
###Code
# clean-up code here
x_kdd_clean = pd.get_dummies(x_kdd)
x_kdd_clean.head()
scaler = StandardScaler()
x_kdd_clean = scaler.fit_transform(x_kdd_clean)
x_kdd_clean.shape
###Output
_____no_output_____
###Markdown
Outlier detection: your code!
###Code
x_kdd_clean.shape
samples = np.random.choice(range(0, len(x_kdd_clean)), 500)
MAX_N_TSNE = 4000 #Avoid overly long computation times with TSNE. Values < 4000 recommended
X_2D = TSNE(n_components=2).fit_transform(x_kdd_clean[samples]) # transform to 2-D space for plotting
fig, ax = plt.subplots(1, 1, figsize=(8, 8))
ax.scatter(X_2D[:, 0], X_2D[:, 1], marker='o', s=10)
plt.axis('off')
plt.show()
gmm = GaussianMixture(n_components=20, covariance_type='full', random_state=1) # try also spherical
gmm.fit(x_kdd_clean)
scores_gmm = gmm.score_samples(x_kdd_clean)  # per-sample log-likelihood; low values suggest outliers
clf = KNN(method='median', n_neighbors=20)
clf.fit(x_kdd_clean)
# get the prediction label and outlier scores of the training data
y_train_pred = clf.labels_ # binary labels (0: inliers, 1: outliers)
y_train_scores = clf.decision_scores_ # raw outlier scores (use these for scoring!)
lof = LOF(n_neighbors=30, contamination=0.01)
lof.fit(x_kdd_clean)
y_train_pred = lof.labels_ # binary labels (0: inliers, 1: outliers)
y_train_scores = lof.decision_scores_
indices_outliers_lov = get_top_N_indices(lof.decision_scores_)
def get_top_N_indices(scores, N=100):
""" Helper function. Returns the indices of the points with the top N highest outlier scores
"""
return np.argsort(scores)[::-1][:N]
###Output
_____no_output_____
###Markdown
API submissionSubmit your predictions to the API with a LabelSubmitter object. This object has a `.post_predictions()` method to submit predictions, and a `.get_labels()` method to retrieve the labels (positives and negatives) of all previous submissions. Use the parameter `endpoint='kdd'` option for this challenge.
###Code
username='CANBERRA'
password='iusHr'
if not ('ls' in locals() and ls.jwt_token): #only if no labelsubmitter with .jwt_token is available
ls = LabelSubmitter(username=username,
password=password,
url=url)
indices_outliers
indices_outliers_gmm = get_top_N_indices(-scores_gmm, 1000)
indices_outliers_knn = get_top_N_indices(clf.decision_scores_, 1000)
indices_outliers_lov = get_top_N_indices(lof.decision_scores_, 1000)
all_indices = np.concatenate([indices_outliers_knn, indices_outliers_lov, indices_outliers_gmm])
unique, counts = np.unique(all_indices, return_counts=True)
counts_dict = dict(zip(unique, counts))
top_indices = unique[counts >=2]
top_indices
top_indices.shape
indices_outliers_knn, indices_outliers_lov, indices_outliers_gmm
ls.post_predictions(idx=top_indices, endpoint='kdd')
6797, 16794, 20513, 27853
result = ls.get_labels(endpoint='kdd')
idxes = result[result == 1].index
-scores_gmm[idxes], clf.decision_scores_[idxes], lof.decision_scores_[idxes]
###Output
_____no_output_____
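###Markdown
Following the hint in the challenge description, once enough labels have been retrieved they could seed a supervised model. A rough sketch, assuming `result` maps point indices to 0/1 labels as used above:
###Code
# Hedged sketch: fit a classifier on the labelled points and rank all points by predicted outlier probability.
from sklearn.ensemble import RandomForestClassifier
labelled_idx = result.index.values
clf_sup = RandomForestClassifier(n_estimators=200, random_state=0)
clf_sup.fit(x_kdd_clean[labelled_idx], result.values)
probs = clf_sup.predict_proba(x_kdd_clean)[:, 1]  # estimated outlier probability for every point
candidate_idx = get_top_N_indices(probs, 100)  # strongest candidates for a next submission
###Output
_____no_output_____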
|
extras/tfhub-text/movie-classification.ipynb
|
###Markdown
Building a text classification model with TF HubIn this notebook, we'll walk you through building a model to predict the genres of a movie given its description. The emphasis here is not on accuracy, but instead how to use TF Hub layers in a text classification model.To start, import the necessary dependencies for this project.
###Code
import os
import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow_hub as hub
import json
import pickle
import urllib
from sklearn.preprocessing import MultiLabelBinarizer
print(tf.__version__)
###Output
_____no_output_____
###Markdown
The datasetWe need a lot of text inputs to train our model. For this model we'll use [this awesome movies dataset](https://www.kaggle.com/rounakbanik/the-movies-dataset) from Kaggle. To simplify things I've made the `movies_metadata.csv` file available in a public Cloud Storage bucket so we can download it with `wget`. I've preprocessed the dataset already to limit the number of genres we'll use for our model, but first let's take a look at the original data so we can see what we're working with.
###Code
# Download the data from GCS
!wget 'https://storage.googleapis.com/movies_data/movies_metadata.csv'
###Output
_____no_output_____
###Markdown
Next we'll convert the dataset to a Pandas dataframe and print the first 5 rows. For this model we're only using 2 of these columns: `genres` and `overview`.
###Code
data = pd.read_csv('movies_metadata.csv')
data.head()
###Output
_____no_output_____
###Markdown
Preparing the data for our modelI've done some preprocessing to limit the dataset to the top 9 genres, and I've saved the Pandas dataframes as public [Pickle](https://docs.python.org/3/library/pickle.html) files in GCS. Here we download those files. The resulting `descriptions` and `genres` variables are Pandas Series containing all descriptions and genres from our dataset respectively.
###Code
urllib.request.urlretrieve('https://storage.googleapis.com/bq-imports/descriptions.p', 'descriptions.p')
urllib.request.urlretrieve('https://storage.googleapis.com/bq-imports/genres.p', 'genres.p')
descriptions = pickle.load(open('descriptions.p', 'rb'))
genres = pickle.load(open('genres.p', 'rb'))
###Output
_____no_output_____
###Markdown
Splitting our dataWhen we train our model, we'll use 80% of the data for training and set aside 20% of the data to evaluate how our model performed.
###Code
train_size = int(len(descriptions) * .8)
train_descriptions = descriptions[:train_size].astype('str')
train_genres = genres[:train_size]
test_descriptions = descriptions[train_size:].astype('str')
test_genres = genres[train_size:]
###Output
_____no_output_____
###Markdown
Formatting our labelsWhen we train our model we'll provide the labels (in this case genres) associated with each movie. We can't pass the genres in as strings directly, so we'll transform them into multi-hot vectors. Since we have 9 genres, we'll have a 9-element vector for each movie with 0s and 1s indicating which genres are present in each description.
###Code
encoder = MultiLabelBinarizer()
encoder.fit_transform(train_genres)
train_encoded = encoder.transform(train_genres)
test_encoded = encoder.transform(test_genres)
num_classes = len(encoder.classes_)
# Print all possible genres and the labels for the first movie in our training dataset
print(encoder.classes_)
print(train_encoded[0])
###Output
_____no_output_____
###Markdown
Create our TF Hub embedding layer[TF Hub](https://tfhub.dev) provides a library of existing pre-trained model checkpoints for various kinds of models (images, text, and more). In this model we'll use the TF Hub `universal-sentence-encoder` module for our pre-trained word embeddings. We only need one line of code to instantiate the module. When we train our model, it'll convert our array of movie description strings to embeddings, and we'll use this as a feature column.
###Code
description_embeddings = hub.text_embedding_column("descriptions", module_spec="https://tfhub.dev/google/universal-sentence-encoder/2", trainable=False)
###Output
_____no_output_____
###Markdown
Instantiating our DNNEstimator ModelThe first parameter we pass to our DNNEstimator is called a head, and defines the type of labels our model should expect. Since we want our model to output multiple labels, we’ll use multi_label_head here. Then we'll convert our features and labels to numpy arrays and instantiate our Estimator. `batch_size` and `num_epochs` are hyperparameters - you should experiment with different values to see what works best on your dataset.
###Code
multi_label_head = tf.contrib.estimator.multi_label_head(
num_classes,
loss_reduction=tf.losses.Reduction.SUM_OVER_BATCH_SIZE
)
features = {
"descriptions": np.array(train_descriptions).astype(np.str)
}
labels = np.array(train_encoded).astype(np.int32)
train_input_fn = tf.estimator.inputs.numpy_input_fn(features, labels, shuffle=True, batch_size=32, num_epochs=25)
estimator = tf.contrib.estimator.DNNEstimator(
head=multi_label_head,
hidden_units=[64,10],
feature_columns=[description_embeddings])
###Output
_____no_output_____
###Markdown
Training and serving our model To train our model, we simply call `train()` passing it the input function we defined above. Once our model is trained, we'll define an evaluation input function similar to the one above and call `evaluate()`. When this completes we'll get a few metrics we can use to evaluate our model's accuracy.
###Code
estimator.train(input_fn=train_input_fn)
# Define our eval input_fn and run eval
eval_input_fn = tf.estimator.inputs.numpy_input_fn({"descriptions": np.array(test_descriptions).astype(np.str)}, test_encoded.astype(np.int32), shuffle=False)
estimator.evaluate(input_fn=eval_input_fn)
###Output
_____no_output_____
###Markdown
Generating predictions on new dataNow for the most fun part! Let's generate predictions on movie descriptions our model hasn't seen before. We'll define an array of 3 new description strings (the comments indicate the correct genres) and create a `predict_input_fn`. Then we'll display the top 2 genres along with their confidence percentages for each of the 3 movies.
###Code
# Test our model on some raw description data
raw_test = [
"An examination of our dietary choices and the food we put in our bodies. Based on Jonathan Safran Foer's memoir.", # Documentary
"After escaping an attack by what he claims was a 70-foot shark, Jonas Taylor must confront his fears to save those trapped in a sunken submersible.", # Action, Adventure
"A teenager tries to survive the last week of her disastrous eighth-grade year before leaving to start high school.", # Comedy
]
# Generate predictions
predict_input_fn = tf.estimator.inputs.numpy_input_fn({"descriptions": np.array(raw_test).astype(np.str)}, shuffle=False)
results = estimator.predict(predict_input_fn)
# Display predictions
for movie_genres in results:
top_2 = movie_genres['probabilities'].argsort()[-2:][::-1]
for genre in top_2:
text_genre = encoder.classes_[genre]
print(text_genre + ': ' + str(round(movie_genres['probabilities'][genre] * 100, 2)) + '%')
print('')
###Output
_____no_output_____
|
handson-data-science-python/DataScience-Python3/MeanMedianMode.ipynb
|
###Markdown
Mean, Median, Mode, and introducing NumPy Mean vs. Median Let's create some fake income data, centered around 27,000 with a normal distribution and standard deviation of 15,000, with 10,000 data points. (We'll discuss those terms more later, if you're not familiar with them.)Then, compute the mean (average) - it should be close to 27,000:
###Code
import numpy as np
incomes = np.random.normal(27000, 15000, 10000)
np.mean(incomes)
###Output
_____no_output_____
###Markdown
We can segment the income data into 50 buckets, and plot it as a histogram:
###Code
%matplotlib inline
%config InlineBackend.figure_format='retina'
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_context("paper")
sns.set_style("white")
sns.set()
# ----------------------------------------------
incomes = np.random.normal(27000, 15000, 10000)
print(np.mean(incomes))
plt.hist(incomes, 50)
plt.show()
###Output
27042.7636985
###Markdown
Now compute the median - since we have a nice, even distribution it too should be close to 27,000:
###Code
np.median(incomes)
###Output
_____no_output_____
###Markdown
Now we'll add Donald Trump into the mix. Darn income inequality!
###Code
incomes = np.append(incomes, [1000000000])
###Output
_____no_output_____
###Markdown
The median won't change much, but the mean does:
###Code
np.median(incomes)
np.mean(incomes)
###Output
_____no_output_____
###Markdown
Mode Next, let's generate some fake age data for 500 people:
###Code
ages = np.random.randint(18, high=90, size=500)
ages
from scipy import stats
stats.mode(ages)
###Output
_____no_output_____
|
HW11.ipynb
|
###Markdown
Chem 30324, Spring 2018, Homework 11 Due May 4 2018 Equilibrium constants from first principles. In 1996, Schneider and co-workers used quantum chemistry to compute the reaction pathway for unimolecular decomposition of trifluoromethanol, a reaction of relevance to the atmospheric degradation of hydrofluorocarbon refrigerants (*J. Phys. Chem.* **1996**, *100*, 6097- 6103, [doi:10.1021/jp952703m](https://pubs.acs.org/doi/abs/10.1021/jp952703m)): $$\mathrm{CF_3OH\rightarrow COF_2 + HF}$$ Following are some of the reported results, computed at 298 K:
###Code
from IPython.display import Image
img = 'HW11.png'
Image(url=img)
###Output
_____no_output_____
###Markdown
1. Estimate $\Delta S^{\circ}$(298 K), in J mol$^{-1}$ K$^{-1}$, assuming a 1 bar standard state. (*Hint:* What degrees of freedom will dominate the entropy?) The entropy can be taken as the products minus the reactants for each of the individual rotational, translational, vibrational, ZPE & electronic entropies. To calculate the translational $\Delta S^{\circ}$, we must first calculate the thermal de Broglie wavelength $\Lambda$.
###Code
import math
Nav = 6.02214*10**(23)
k = 1.38065*10**(-23)
h = 6.62607 * 10**(-34) # J-s
T = 298 # K
e = 1.609*10**(-19)
beta = 1/(k * T)
m = [20.01, 66.01, 86.01]
P = 100000 # Pa
R = 8.314 # J/mol-K
lamb = []
def lamb_func(m):
return h * (beta * Nav/(2 * math.pi * m/1000))**(1/2)
lamb_1 = lamb_func(m[0])
lamb_2 = lamb_func(m[1])
lamb_3 = lamb_func(m[2])
def S_trans(lambs):
return R * math.log(e**(5/2) * k * T/(P * lambs**3))
S_1 = S_trans(lamb_1)
S_2 = S_trans(lamb_2)
S_3 = S_trans(lamb_3)
S_trans = S_1 + S_2 - S_3
print(round(S_trans,2) ,'is the entropy in J/mol-K') # J/mol-K
###Output
-777.32 is the entropy in J/mol-K
###Markdown
2. Using the data provided, determine $\Delta U^{\circ}$(298 K) and $\Delta H^{\circ}$(298 K), in kJ mol$^{-1}$.
###Code
hartreetokjmol = 2625.5
delta_E_el = -100.31885 - 312.57028 + 412.90047
delta_E_el_kj = delta_E_el * hartreetokjmol
delta_ZPE = (0.00925 + 0.01422 - 0.02889) * hartreetokjmol
delta_u_trans = 3.7 + 3.7 - 3.7
delta_u_rot = 2.5 + 3.7 - 3.7
delta_u_vib = 0 + 1.2 - 4.3
delta_u = delta_E_el_kj + delta_u_trans + delta_u_rot + delta_u_vib + delta_ZPE
print(round(delta_u, 2), 'kJ/mol')# kJ/mol
###Output
18.64 kJ/mol
###Markdown
3. Using the data provided, determine $K_c$ (298 K), assuming a 1 mole/liter standard state.
###Code
import numpy as np
knew = 8.61734e-5
newbeta = knew * T
qt_a = 7.72e32
qr_a = 61830
qv_a = 2.33
qt_c = 1.59e32
qr_c = 679
qv_c = 1.16
qt_d = 8.65e31
qr_d = 9.59
qv_d = 1
qa = qt_a * qr_a * qv_a
qc = qt_c * qr_c * qv_c
qd = qt_d * qr_d * qv_d
kJtoeV = 96.485
k = (qc * qd/qa) * np.exp((-(delta_E_el_kj+ delta_ZPE) * kJtoeV * newbeta))
print(k)
###Output
17593028830681.9
###Markdown
4. 1 mole of CF$_3$OH is generated in a 20 L vessel at 298 K and left long enough to come to equilibrium with respect to its decomposition reaction. What is the composition of the gas (concentrations of all the components) at equilibrium (in mol/L)?
###Code
k = 20* (qc * qd/qa) * np.exp((-(delta_E_el_kj+ delta_ZPE) * kJtoeV * newbeta))/ 1000
import cmath
a = 1
b = -k
c = 0
# calculate the discriminant
d = (b**2) - (4*a*c)
sol2 = (-b+cmath.sqrt(d))/(2*a)
print('The solution is', round(abs(sol2)), 'Mol^-1')
###Output
The solution is 351860576614.0 Mol^-1
###Markdown
5. How, directionally, would your answer to Question 4 change if the vessel was at a higher temperature? Why, in statistical mechanical terms? If the temperature increases, the reaction is shifted toward the products. Since the value of internal energy is positive (the products are at higher energy than the reactants), raising the temperature blurs the energy difference and causes the endothermic reaction to push toward the products. 6. How, directionally, would your answer to Question 4 change if the vessel had a volume of 5 L? Why, in statistical mechanical terms? If the volume decreases from 20 L to 5 L, the equilibrium shifts toward the left. Entropy would favor having more products present, so reducing the volume requires a shift toward fewer products (the left). Chemical kinetics from first principles. While chemical equilibrium describes what can happen, chemical kinetics determines what *will* happen. The same paper reports results for the transition state for the unimolecular decomposition reaction, also shown in the table above. 7. Provide a rough sketch of what you expect the transition state to look like.
###Code
from IPython.display import Image
img = 'IMG_0433.jpg'
Image(url=img)
###Output
_____no_output_____
###Markdown
8. Based on the data in the table, sketch out an approximate potential energy surface for the unimolecular decomposition reaction. Indicate on the PES the location of the reactants, the products, and the transition state. Also indicate relevant zero point energies, the 0 K reaction energy, and the activation energy.
###Code
print(delta_E_el_kj+ delta_ZPE, 'is the 0 K reaction energy.')
print(0.00925, 'is ZPE2', 0.01422, 'is ZPE3', '& 0.02889 is ZPE1')
E = delta_u + 2 * kb * T * Nav/1000
print(E, 'is the activation energy.')
from IPython.display import Image
img = 'IMG_0432.jpg'
Image(url=img)
###Output
15.542959999898894 is the 0 K reaction energy.
0.00925 is ZPE2 0.01422 is ZPE3 & 0.02889 is ZPE1
23.598382684134894 is the activation energy.
###Markdown
9. Using data from the table and harmonic transition state theory, compute the first-order rate constant for CF$_3$OH decomposition at 298 K, in s$^{-1}$.
###Code
delta_ut = (-412.83771+412.90047 + 0.02313 - 0.02889) * 27.21139 # hartree to eV
kb = 1.38065*10**(-23)
qt = 7.22e32 * 68420 * 2.28
kc_1 = kb*T/h * qt/qa * np.exp(-delta_ut /(T * knew)) # Units of inverse seconds (s^-1)
###Output
_____no_output_____
###Markdown
10. Which factor in your rate constant dominates the temperature dependence? Estimate the change in temperature necessary to double the rate constant. The factor in the rate constant that dominates the temperature dependence is the exponential function.
###Code
kc = kb*301.5/h * qt/qa * np.exp(-delta_ut /(301.5 * knew)) # Units of inverse seconds (s^-1), found using WolframAlpha Nonlinear Solver
print(kc)
print(kc/kc_1)
###Output
7.529598119788413e-14
2.0397678894700544
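###Markdown
A sketch (not part of the original answer) that solves for the doubling temperature directly with a root finder, reusing the quantities defined above, instead of guessing trial temperatures:
###Code
# Hedged sketch: find T2 such that k(T2) = 2*k(298 K) for the harmonic TST rate expression used above.
from scipy.optimize import brentq
def tst_rate(T_):
    return kb*T_/h * qt/qa * np.exp(-delta_ut/(T_*knew))
T2 = brentq(lambda T_: tst_rate(T_) - 2*tst_rate(298), 298, 320)
print(T2 - 298, 'K increase doubles the rate constant')
###Output
_____no_output_____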
###Markdown
11. Based on your computed rate constant, what is the half-life of the CF$_3$OH in the vessel of Question 4?
###Code
half_life = math.log(2)/kc
print(half_life)
###Output
9205633149773.246
###Markdown
Use OMC to find the price
###Code
import numpy as np
def OMC(n):
sumvalue=0
for i in range(n):
b=np.random.normal(0,1)
if b < -2:
sumvalue+=1
return sumvalue/n
print(OMC(100000))
###Output
0.02252
###Markdown
Use IS to find the price
###Code
from scipy.stats import norm
import numpy as np
def IS(n,α):
sumvalue=0
for i in range(n):
y=np.random.uniform(0,1)
x=norm(-α,1).ppf(y)
if x<-2:
sumvalue+=np.exp(0.5*(α**2)+α*x)
return sumvalue/n
print(IS(100000,3))
###Output
0.022786303045805943
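###Markdown
As a quick sanity check (a sketch, not part of the original notebook), the target probability $P(Z<-2)$ under the standard normal has a closed form, so both estimators can be compared against it:
###Code
from scipy.stats import norm
print(norm.cdf(-2))  # exact value, roughly 0.0228
###Output
_____no_output_____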
###Markdown
Can you show your approach is optimal? (Taking too much time to run the code.)
###Code
err_list = []
for alpha in range(10):
M = 1000
error = []
for i in range (M):
error.append(abs(IS(1000,alpha) - 0.0227))
err_list.append(min(error))
min_err = min(err_list)
print(">>> The minimum error among different α is " + str(min_err) + ", where α = " + str(err_list.index(min_err)))
###Output
_____no_output_____
|
nbgrader/tests/apps/files/test-hidden-tests.ipynb
|
###Markdown
For this problem set, we'll be using the Jupyter notebook: --- Part A (2 points)Write a function that returns a list of numbers, such that $x_i=i^2$, for $1\leq i \leq n$. Make sure it handles the case where $n<1$ by raising a `ValueError`.
###Code
def squares(n):
"""Compute the squares of numbers from 1 to n, such that the
ith element of the returned list equals i^2.
"""
### BEGIN SOLUTION
### END SOLUTION
if n < 1:
raise ValueError("n must be greater than or equal to 1")
if n == 1:
return [1]
if n == 2:
return [1, 4]
if n == 10:
return [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
###Output
_____no_output_____
###Markdown
Your function should print `[1, 4, 9, 16, 25, 36, 49, 64, 81, 100]` for $n=10$. Check that it does:
###Code
squares(10)
"""Check that squares returns the correct output for several inputs"""
assert squares(1) == [1]
assert squares(2) == [1, 4]
### BEGIN HIDDEN TESTS
assert squares(10) == [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
assert squares(11) == [1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121]
### END HIDDEN TESTS
"""Check that squares raises an error for invalid inputs"""
try:
squares(0)
except ValueError:
pass
else:
raise AssertionError("did not raise")
try:
squares(-4)
except ValueError:
pass
else:
raise AssertionError("did not raise")
###Output
_____no_output_____
###Markdown
--- Part B (1 point)Using your `squares` function, write a function that computes the sum of the squares of the numbers from 1 to $n$. Your function should call the `squares` function -- it should NOT reimplement its functionality.
###Code
def sum_of_squares(n):
"""Compute the sum of the squares of numbers from 1 to n."""
### BEGIN SOLUTION
### END SOLUTION
squares(10)
if n == 1:
return 1
if n == 2:
return 5
if n == 10:
return 385
###Output
_____no_output_____
###Markdown
The sum of squares from 1 to 10 should be 385. Verify that this is the answer you get:
###Code
sum_of_squares(10)
"""Check that sum_of_squares returns the correct answer for various inputs."""
assert sum_of_squares(1) == 1
assert sum_of_squares(2) == 5
### BEGIN HIDDEN TESTS
assert sum_of_squares(10) == 385
assert sum_of_squares(11) == 506
### END HIDDEN TESTS
"""Check that sum_of_squares relies on squares."""
orig_squares = squares
del squares
try:
sum_of_squares(1)
except NameError:
pass
else:
raise AssertionError("sum_of_squares does not use squares")
finally:
squares = orig_squares
###Output
_____no_output_____
###Markdown
For this problem set, we'll be using the Jupyter notebook: --- Part A (2 points)Write a function that returns a list of numbers, such that $x_i=i^2$, for $1\leq i \leq n$. Make sure it handles the case where $n<1$ by raising a `ValueError`.
###Code
def squares(n):
"""Compute the squares of numbers from 1 to n, such that the
ith element of the returned list equals i^2.
"""
### BEGIN SOLUTION
### END SOLUTION
if n < 1:
raise ValueError("n must be greater than or equal to 1")
if n == 1:
return [1]
if n == 2:
return [1, 4]
if n == 10:
return [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
###Output
_____no_output_____
###Markdown
Your function should print `[1, 4, 9, 16, 25, 36, 49, 64, 81, 100]` for $n=10$. Check that it does:
###Code
squares(10)
"""Check that squares returns the correct output for several inputs"""
assert squares(1) == [1]
assert squares(2) == [1, 4]
### BEGIN HIDDEN TESTS
assert squares(10) == [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
assert squares(11) == [1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121]
### END HIDDEN TESTS
"""Check that squares raises an error for invalid inputs"""
try:
squares(0)
except ValueError:
pass
else:
raise AssertionError("did not raise")
try:
squares(-4)
except ValueError:
pass
else:
raise AssertionError("did not raise")
###Output
_____no_output_____
###Markdown
--- Part B (1 point)Using your `squares` function, write a function that computes the sum of the squares of the numbers from 1 to $n$. Your function should call the `squares` function -- it should NOT reimplement its functionality.
###Code
def sum_of_squares(n):
"""Compute the sum of the squares of numbers from 1 to n."""
### BEGIN SOLUTION
### END SOLUTION
    if n == 1:
        return 1
    if n == 2:
        return 5
    if n == 10:
        return 385
squares(10)
###Output
_____no_output_____
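A minimal sketch of the Part B solution region, assuming the `squares` function above, simply delegates to it:
```python
def sum_of_squares(n):
    """Sum of the squares of 1..n, computed via squares()."""
    return sum(squares(n))
```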
###Markdown
The sum of squares from 1 to 10 should be 385. Verify that this is the answer you get:
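As a quick sanity check, the closed form $\sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6}$ evaluates to $\frac{10\cdot 11\cdot 21}{6} = 385$ for $n=10$.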
###Code
sum_of_squares(10)
"""Check that sum_of_squares returns the correct answer for various inputs."""
assert sum_of_squares(1) == 1
assert sum_of_squares(2) == 5
### BEGIN HIDDEN TESTS
assert sum_of_squares(10) == 385
assert sum_of_squares(11) == 506
### END HIDDEN TESTS
"""Check that sum_of_squares relies on squares."""
orig_squares = squares
del squares
try:
sum_of_squares(1)
except NameError:
pass
else:
raise AssertionError("sum_of_squares does not use squares")
finally:
squares = orig_squares
###Output
_____no_output_____
###Markdown
For this problem set, we'll be using the Jupyter notebook. --- Part A (2 points)Write a function that returns a list of numbers, such that $x_i=i^2$, for $1\leq i \leq n$. Make sure it handles the case where $n<1$ by raising a `ValueError`.
###Code
def squares(n):
"""Compute the squares of numbers from 1 to n, such that the
ith element of the returned list equals i^2.
"""
### BEGIN SOLUTION
### END SOLUTION
if n < 1:
raise ValueError("n must be greater than or equal to 1")
if n == 1:
return [1]
if n == 2:
return [1, 4]
if n == 10:
return [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
###Output
_____no_output_____
###Markdown
Your function should print `[1, 4, 9, 16, 25, 36, 49, 64, 81, 100]` for $n=10$. Check that it does:
###Code
squares(10)
"""Check that squares returns the correct output for several inputs"""
assert squares(1) == [1]
assert squares(2) == [1, 4]
### BEGIN HIDDEN TESTS
assert squares(10) == [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
assert squares(11) == [1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121]
### END HIDDEN TESTS
"""Check that squares raises an error for invalid inputs"""
try:
squares(0)
except ValueError:
pass
else:
raise AssertionError("did not raise")
try:
squares(-4)
except ValueError:
pass
else:
raise AssertionError("did not raise")
###Output
_____no_output_____
###Markdown
--- Part B (1 point)Using your `squares` function, write a function that computes the sum of the squares of the numbers from 1 to $n$. Your function should call the `squares` function -- it should NOT reimplement its functionality.
###Code
def sum_of_squares(n):
"""Compute the sum of the squares of numbers from 1 to n."""
### BEGIN SOLUTION
### END SOLUTION
    if n == 1:
        return 1
    if n == 2:
        return 5
    if n == 10:
        return 385
squares(10)
###Output
_____no_output_____
###Markdown
The sum of squares from 1 to 10 should be 385. Verify that this is the answer you get:
###Code
sum_of_squares(10)
"""Check that sum_of_squares returns the correct answer for various inputs."""
assert sum_of_squares(1) == 1
assert sum_of_squares(2) == 5
### BEGIN HIDDEN TESTS
assert sum_of_squares(10) == 385
assert sum_of_squares(11) == 506
### END HIDDEN TESTS
"""Check that sum_of_squares relies on squares."""
orig_squares = squares
del squares
try:
sum_of_squares(1)
except NameError:
pass
else:
raise AssertionError("sum_of_squares does not use squares")
finally:
squares = orig_squares
###Output
_____no_output_____
|
notebooks/examples/Quickstart Guide.ipynb
|
###Markdown
Quickstart GuideThis guide demonstrates how to quickly:- Add users from a CSV file to your organization- Assign each user a Tracker for ArcGIS license- Create a track view that includes mobile users and track viewers- Generate a QR Code for quick sign in on the Android and iOS apps
###Code
from arcgis.gis import GIS
from arcgis.apps.tracker import TrackView
from arcgis.apps import build_tracker_url
import csv
import pandas as pd
import pyqrcode
admin_username = 'admin'
admin_password = 'password'
org = 'https://server.domain.com/webadapter'
users_csv = 'users.csv'
track_view_name = "Track View 1"
gis = GIS(org, admin_username, admin_password, verify_cert=False)
###Output
_____no_output_____
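As a purely illustrative variant, you can avoid hard-coding the admin password in the notebook by reading it from the environment or prompting for it (the environment variable names below are made up):
```python
import os
from getpass import getpass
from arcgis.gis import GIS

admin_username = os.getenv("AGOL_ADMIN_USERNAME", "admin")      # hypothetical variable name
admin_password = os.getenv("AGOL_ADMIN_PASSWORD") or getpass()  # prompt if the variable is not set
gis = GIS(org, admin_username, admin_password, verify_cert=False)
```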
###Markdown
First we'll read the CSV file using pandas
###Code
df = pd.read_csv(users_csv)
df
###Output
_____no_output_____
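For reference, a users.csv with the columns this guide reads might look like the sketch below; the row values, Role, and User Type strings are assumptions chosen to illustrate the layout, not prescribed values:
```python
import pandas as pd

# Hypothetical rows with the columns referenced later in this guide:
# Username, Password, First Name, Last Name, Email, Role, User Type, Track Viewer.
sample = pd.DataFrame([
    {"Username": "jdoe", "Password": "ChangeMe123!", "First Name": "Jane", "Last Name": "Doe",
     "Email": "jdoe@example.com", "Role": "org_user", "User Type": "creatorUT", "Track Viewer": "Yes"},
    {"Username": "asmith", "Password": "ChangeMe123!", "First Name": "Alex", "Last Name": "Smith",
     "Email": "asmith@example.com", "Role": "org_user", "User Type": "creatorUT", "Track Viewer": "No"},
])
sample.to_csv("users.csv", index=False)
```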
###Markdown
Create users if necessaryIf a user in the CSV file does not exist in the organization, we'll add them
###Code
users = []
for index, row in df.iterrows():
u = gis.users.get(row["Username"])
if u is None:
users.append(gis.users.create(
username=row["Username"],
password=row["Password"],
firstname=row["First Name"],
lastname=row["Last Name"],
email=row["Email"],
role=row["Role"],
user_type=row["User Type"]
))
else:
users.append(u)
###Output
_____no_output_____
###Markdown
Each user is then assigned a Tracker for ArcGIS license so that they can use the mobile app
###Code
tracker_license = gis.admin.license.get('Tracker for ArcGIS')
for user in users:
tracker_license.assign(username=user.username, entitlements=["tracker"])
###Output
_____no_output_____
###Markdown
Create a new track view and add mobile users
###Code
track_view = gis.admin.location_tracking.create_track_view(track_view_name)
track_view.mobile_users.add(users)
###Output
_____no_output_____
###Markdown
Create a Track Viewer role if necessaryIn order to view other users' tracks, track viewers need to have two specific privileges:- the ability to join a group- the ability to see other users' tracksIf a role titled "Track Viewer" does not exist, we'll create one.
###Code
for role in gis.users.roles.all():
if role.name.lower() == "Track Viewer".lower():
track_viewer_role = role
break
else:
track_viewer_role = gis.users.roles.create(
name='Track Viewer',
description="A user that can use the Track Viewer web app to see others tracks",
privileges=[
"portal:user:joinGroup",
"portal:user:viewTracks",
]
)
###Output
_____no_output_____
###Markdown
Add Track ViewersWe'll now add track viewers to the track view based on the "Track Viewer" column in the CSV file
###Code
for index, row in df[df['Track Viewer'] == "Yes"].iterrows():
user = gis.users.get(row["Username"])
if "portal:user:joinGroup" not in user.privileges or "portal:user:viewTracks" not in user.privileges:
user.update_role(track_viewer_role)
track_view.viewers.add(user)
###Output
_____no_output_____
###Markdown
Let's confirm everything is all set
###Code
track_view.viewers.list()
track_view.mobile_users.list()
###Output
_____no_output_____
###Markdown
Generate a QR code for quick sign inFinally, a QR code is generated which can be scanned by a mobile device to quickly sign into the Tracker app.
###Code
from IPython.core.display import Image
url = pyqrcode.create(build_tracker_url(org))
url.png("qr.png", scale=6)
Image(filename="qr.png")
###Output
_____no_output_____
|
how-to-use-azureml/training-with-deep-learning/tensorboard/tensorboard.ipynb
|
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Tensorboard Integration with Run History1. Run a Tensorflow job locally and view its TB output live.2. The same, for a DSVM.3. And once more, with an AmlCompute cluster.4. Finally, we'll collect all of these historical runs together into a single Tensorboard graph. Prerequisites* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning* Go through the [configuration notebook](../../../configuration.ipynb) notebook to: * install the AML SDK * create a workspace and its configuration file (`config.json`)
###Code
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
Install the Azure ML TensorBoard package.
###Code
!pip install azureml-contrib-tensorboard
###Output
_____no_output_____
###Markdown
DiagnosticsOpt-in diagnostics for better experience, quality, and security of future releases.
###Code
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics=True)
###Output
_____no_output_____
###Markdown
Initialize WorkspaceInitialize a workspace object from persisted configuration.
###Code
from azureml.core import Workspace
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep = '\n')
###Output
_____no_output_____
###Markdown
Set experiment name and create projectChoose a name for your run history container in the workspace, and create a folder for the project.
###Code
from os import path, makedirs
experiment_name = 'tensorboard-demo'
# experiment folder
exp_dir = './sample_projects/' + experiment_name
if not path.exists(exp_dir):
makedirs(exp_dir)
# runs we started in this session, for the finale
runs = []
###Output
_____no_output_____
###Markdown
Download Tensorflow Tensorboard demo codeTensorflow's repository has an MNIST demo with extensive Tensorboard instrumentation. We'll use it here for our purposes.Note that we don't need to make any code changes at all - the code works without modification from the Tensorflow repository.
###Code
import requests
import os
import tempfile
tf_code = requests.get("https://raw.githubusercontent.com/tensorflow/tensorflow/r1.8/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py")
with open(os.path.join(exp_dir, "mnist_with_summaries.py"), "w") as file:
file.write(tf_code.text)
###Output
_____no_output_____
###Markdown
Configure and run locallyWe'll start by running this locally. While it might not initially seem that useful to use this for a local run - why not just run TB against the files generated locally? - even in this case there is some value to using this feature. Your local run will be registered in the run history, and your Tensorboard logs will be uploaded to the artifact store associated with this run. Later, you'll be able to restore the logs from any run, regardless of where it happened.Note that for this run, you will need to install Tensorflow on your local machine by yourself. Further, the Tensorboard module (that is, the one included with Tensorflow) must be accessible to this notebook's kernel, as the local machine is what runs Tensorboard.
###Code
from azureml.core.runconfig import RunConfiguration
# Create a run configuration.
run_config = RunConfiguration()
run_config.environment.python.user_managed_dependencies = True
# You can choose a specific Python environment by pointing to a Python path
#run_config.environment.python.interpreter_path = '/home/ninghai/miniconda3/envs/sdk2/bin/python'
from azureml.core import Experiment, Run
from azureml.core.script_run_config import ScriptRunConfig
import tensorflow as tf
logs_dir = os.path.join(os.curdir, "logs")
data_dir = os.path.abspath(os.path.join(os.curdir, "mnist_data"))
if not path.exists(data_dir):
makedirs(data_dir)
os.environ["TEST_TMPDIR"] = data_dir
# Writing logs to ./logs results in their being uploaded to Artifact Service,
# and thus, made accessible to our Tensorboard instance.
arguments_list = ["--log_dir", logs_dir]
# Create an experiment
exp = Experiment(ws, experiment_name)
# If you would like the run to go for longer, add --max_steps 5000 to the arguments list:
# arguments_list += ["--max_steps", "5000"]
script = ScriptRunConfig(exp_dir,
script="mnist_with_summaries.py",
run_config=run_config,
arguments=arguments_list)
run = exp.submit(script)
# You can also wait for the run to complete
# run.wait_for_completion(show_output=True)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start TensorboardNow, while the run is in progress, we just need to start Tensorboard with the run as its target, and it will begin streaming logs.
###Code
from azureml.contrib.tensorboard import Tensorboard
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
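If you want to keep a handle on that URI (for example, to log it or open it programmatically), a minimal variant of the cell above, assuming the run submitted earlier, is:
```python
from azureml.contrib.tensorboard import Tensorboard

tb = Tensorboard([run])   # 'run' is the local run submitted above
uri = tb.start()          # start() returns the URI of the Tensorboard instance as a string
print("Tensorboard is available at:", uri)
# Call tb.stop() when you are done, just as in the cells below.
```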
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Now, with a DSVMTensorboard uploading works with all compute targets. Here we demonstrate it from a DSVM.Note that the Tensorboard instance itself will be run by the notebook kernel. Again, this means this notebook's kernel must have access to the Tensorboard module.If you are unfamiliar with DSVM configuration, check [04. Train in a remote VM](04.train-on-remote-vm.ipynb) for a more detailed breakdown.**Note**: To streamline the compute that Azure Machine Learning creates, we are making updates to support creating only single to multi-node `AmlCompute`. The `DSVMCompute` class will be deprecated in a later release, but the DSVM can be created using the below single line command and then attached(like any VM) using the sample code below. Also note, that we only support Linux VMs for remote execution from AML and the commands below will spin a Linux VM only.```shell create a DSVM in your resource group note you need to be at least a contributor to the resource group in order to execute this command successfully.(myenv) $ az vm create --resource-group --name --image microsoft-dsvm:linux-data-science-vm-ubuntu:linuxdsvmubuntu:latest --admin-username --admin-password --generate-ssh-keys --authentication-type password```You can also use [this url](https://portal.azure.com/create/microsoft-dsvm.linux-data-science-vm-ubuntulinuxdsvmubuntu) to create the VM using the Azure Portal.
###Code
from azureml.core.compute import RemoteCompute
from azureml.core.compute_target import ComputeTargetException
import os
username = os.getenv('AZUREML_DSVM_USERNAME', default='<my_username>')
address = os.getenv('AZUREML_DSVM_ADDRESS', default='<ip_address_or_fqdn>')
compute_target_name = 'cpudsvm'
# if you want to connect using SSH key instead of username/password you can provide parameters private_key_file and private_key_passphrase
try:
attached_dsvm_compute = RemoteCompute(workspace=ws, name=compute_target_name)
print('found existing:', attached_dsvm_compute.name)
except ComputeTargetException:
attached_dsvm_compute = RemoteCompute.attach(workspace=ws,
name=compute_target_name,
username=username,
address=address,
ssh_port=22,
private_key_file='./.ssh/id_rsa')
attached_dsvm_compute.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Submit run using TensorFlow estimatorInstead of manually configuring the DSVM environment, we can use the TensorFlow estimator and everything is set up automatically.
###Code
from azureml.train.dnn import TensorFlow
script_params = {"--log_dir": "./logs"}
# If you want the run to go longer, set --max-steps to a higher number.
# script_params["--max_steps"] = "5000"
tf_estimator = TensorFlow(source_directory=exp_dir,
compute_target=attached_dsvm_compute,
entry_script='mnist_with_summaries.py',
script_params=script_params)
run = exp.submit(tf_estimator)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start Tensorboard with this runJust like before.
###Code
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Once more, with an AmlCompute clusterJust to prove we can, let's create an AmlCompute CPU cluster, and run our demo there, as well.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# choose a name for your cluster
cluster_name = "cpucluster"
try:
compute_target = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing compute target.')
except ComputeTargetException:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=4)
# create the cluster
compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True, min_node_count=1, timeout_in_minutes=20)
# use get_status() to get a detailed status for the current cluster.
print(compute_target.get_status().serialize())
###Output
_____no_output_____
###Markdown
Submit run using TensorFlow estimatorAgain, we can use the TensorFlow estimator and everything is set up automatically.
###Code
script_params = {"--log_dir": "./logs"}
# If you want the run to go longer, set --max-steps to a higher number.
# script_params["--max_steps"] = "5000"
tf_estimator = TensorFlow(source_directory=exp_dir,
compute_target=compute_target,
entry_script='mnist_with_summaries.py',
script_params=script_params)
run = exp.submit(tf_estimator)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start Tensorboard with this runOnce more...
###Code
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
FinaleIf you've paid close attention, you'll have noticed that we've been saving the run objects in an array as we went along. We can start a Tensorboard instance that combines all of these run objects into a single process. This way, you can compare historical runs. You can even do this with live runs; if you made some of those previous runs longer via the `--max_steps` parameter, they might still be running, and you'll see them live in this instance as well.
###Code
# The Tensorboard constructor takes an array of runs...
# and it turns out that we have been building one of those all along.
tb = Tensorboard(runs)
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardAs you might already know, make sure to call the `stop()` method of the Tensorboard object, or it will stay running (until you kill the kernel associated with this notebook, at least).
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Tensorboard Integration with Run History1. Run a Tensorflow job locally and view its TB output live.2. The same, for a DSVM.3. And once more, with an AmlCompute cluster.4. Finally, we'll collect all of these historical runs together into a single Tensorboard graph. Prerequisites* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning* Go through the [00.configuration.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) notebook to: * install the AML SDK * create a workspace and its configuration file (`config.json`)
###Code
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
Install the Azure ML TensorBoard package.
###Code
!pip install azureml-contrib-tensorboard
###Output
_____no_output_____
###Markdown
DiagnosticsOpt-in diagnostics for better experience, quality, and security of future releases.
###Code
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics=True)
###Output
_____no_output_____
###Markdown
Initialize WorkspaceInitialize a workspace object from persisted configuration.
###Code
from azureml.core import Workspace
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep = '\n')
###Output
_____no_output_____
###Markdown
Set experiment name and create projectChoose a name for your run history container in the workspace, and create a folder for the project.
###Code
from os import path, makedirs
experiment_name = 'tensorboard-demo'
# experiment folder
exp_dir = './sample_projects/' + experiment_name
if not path.exists(exp_dir):
makedirs(exp_dir)
# runs we started in this session, for the finale
runs = []
###Output
_____no_output_____
###Markdown
Download Tensorflow Tensorboard demo codeTensorflow's repository has an MNIST demo with extensive Tensorboard instrumentation. We'll use it here for our purposes.Note that we don't need to make any code changes at all - the code works without modification from the Tensorflow repository.
###Code
import requests
import os
import tempfile
tf_code = requests.get("https://raw.githubusercontent.com/tensorflow/tensorflow/r1.8/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py")
with open(os.path.join(exp_dir, "mnist_with_summaries.py"), "w") as file:
file.write(tf_code.text)
###Output
_____no_output_____
###Markdown
Configure and run locallyWe'll start by running this locally. While it might not initially seem that useful to use this for a local run - why not just run TB against the files generated locally? - even in this case there is some value to using this feature. Your local run will be registered in the run history, and your Tensorboard logs will be uploaded to the artifact store associated with this run. Later, you'll be able to restore the logs from any run, regardless of where it happened.Note that for this run, you will need to install Tensorflow on your local machine by yourself. Further, the Tensorboard module (that is, the one included with Tensorflow) must be accessible to this notebook's kernel, as the local machine is what runs Tensorboard.
###Code
from azureml.core.runconfig import RunConfiguration
# Create a run configuration.
run_config = RunConfiguration()
run_config.environment.python.user_managed_dependencies = True
# You can choose a specific Python environment by pointing to a Python path
#run_config.environment.python.interpreter_path = '/home/ninghai/miniconda3/envs/sdk2/bin/python'
from azureml.core import Experiment, Run
from azureml.core.script_run_config import ScriptRunConfig
import tensorflow as tf
logs_dir = os.path.join(os.curdir, "logs")
data_dir = os.path.abspath(os.path.join(os.curdir, "mnist_data"))
if not path.exists(data_dir):
makedirs(data_dir)
os.environ["TEST_TMPDIR"] = data_dir
# Writing logs to ./logs results in their being uploaded to Artifact Service,
# and thus, made accessible to our Tensorboard instance.
arguments_list = ["--log_dir", logs_dir]
# Create an experiment
exp = Experiment(ws, experiment_name)
# If you would like the run to go for longer, add --max_steps 5000 to the arguments list:
# arguments_list += ["--max_steps", "5000"]
script = ScriptRunConfig(exp_dir,
script="mnist_with_summaries.py",
run_config=run_config,
arguments=arguments_list)
run = exp.submit(script)
# You can also wait for the run to complete
# run.wait_for_completion(show_output=True)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start TensorboardNow, while the run is in progress, we just need to start Tensorboard with the run as its target, and it will begin streaming logs.
###Code
from azureml.contrib.tensorboard import Tensorboard
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Now, with a DSVMTensorboard uploading works with all compute targets. Here we demonstrate it from a DSVM.Note that the Tensorboard instance itself will be run by the notebook kernel. Again, this means this notebook's kernel must have access to the Tensorboard module.If you are unfamiliar with DSVM configuration, check [04. Train in a remote VM](04.train-on-remote-vm.ipynb) for a more detailed breakdown.**Note**: To streamline the compute that Azure Machine Learning creates, we are making updates to support creating only single to multi-node AmlCompute. The `DSVMCompute` class will be deprecated in a later release, but the DSVM can be created using the below single line command and then attached(like any VM) using the sample code below. Also note that we only support Linux VMs and the commands below will spin a Linux VM only.```shell create a DSVM in your resource group note you need to be at least a contributor to the resource group in order to execute this command successfully.(myenv) $ az vm create --resource-group --name --image microsoft-dsvm:linux-data-science-vm-ubuntu:linuxdsvmubuntu:latest --admin-username --admin-password --generate-ssh-keys --authentication-type password```You can also use [this url](https://portal.azure.com/create/microsoft-dsvm.linux-data-science-vm-ubuntulinuxdsvmubuntu) to create the VM using the Azure Portal.
###Code
from azureml.core.compute import DsvmCompute
from azureml.core.compute_target import ComputeTargetException
compute_target_name = 'cpudsvm'
try:
compute_target = DsvmCompute(workspace=ws, name=compute_target_name)
print('found existing:', compute_target.name)
except ComputeTargetException:
print('creating new.')
dsvm_config = DsvmCompute.provisioning_configuration(vm_size="Standard_D2_v2")
compute_target = DsvmCompute.create(ws, name=compute_target_name, provisioning_configuration=dsvm_config)
compute_target.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Submit run using TensorFlow estimatorInstead of manually configuring the DSVM environment, we can use the TensorFlow estimator and everything is set up automatically.
###Code
from azureml.train.dnn import TensorFlow
script_params = {"--log_dir": "./logs"}
# If you want the run to go longer, set --max-steps to a higher number.
# script_params["--max_steps"] = "5000"
tf_estimator = TensorFlow(source_directory=exp_dir,
compute_target=compute_target,
entry_script='mnist_with_summaries.py',
script_params=script_params)
run = exp.submit(tf_estimator)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start Tensorboard with this runJust like before.
###Code
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Once more, with an AmlCompute clusterJust to prove we can, let's create an AmlCompute CPU cluster, and run our demo there, as well.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# choose a name for your cluster
cluster_name = "cpucluster"
try:
compute_target = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing compute target.')
except ComputeTargetException:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=4)
# create the cluster
compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True, min_node_count=1, timeout_in_minutes=20)
# Use the 'status' property to get a detailed status for the current cluster.
print(compute_target.status.serialize())
###Output
_____no_output_____
###Markdown
Submit run using TensorFlow estimatorAgain, we can use the TensorFlow estimator and everything is set up automatically.
###Code
script_params = {"--log_dir": "./logs"}
# If you want the run to go longer, set --max-steps to a higher number.
# script_params["--max_steps"] = "5000"
tf_estimator = TensorFlow(source_directory=exp_dir,
compute_target=compute_target,
entry_script='mnist_with_summaries.py',
script_params=script_params)
run = exp.submit(tf_estimator)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start Tensorboard with this runOnce more...
###Code
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
FinaleIf you've paid close attention, you'll have noticed that we've been saving the run objects in an array as we went along. We can start a Tensorboard instance that combines all of these run objects into a single process. This way, you can compare historical runs. You can even do this with live runs; if you made some of those previous runs longer via the `--max_steps` parameter, they might still be running, and you'll see them live in this instance as well.
###Code
# The Tensorboard constructor takes an array of runs...
# and it turns out that we have been building one of those all along.
tb = Tensorboard(runs)
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardAs you might already know, make sure to call the `stop()` method of the Tensorboard object, or it will stay running (until you kill the kernel associated with this notebook, at least).
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License.  Tensorboard Integration with Run History1. Run a Tensorflow job locally and view its TB output live.2. The same, for a DSVM.3. And once more, with an AmlCompute cluster.4. Finally, we'll collect all of these historical runs together into a single Tensorboard graph. Prerequisites* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning* If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) notebook to: * install the AML SDK * create a workspace and its configuration file (`config.json`)
###Code
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
DiagnosticsOpt-in diagnostics for better experience, quality, and security of future releases.
###Code
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics=True)
###Output
_____no_output_____
###Markdown
Initialize WorkspaceInitialize a workspace object from persisted configuration.
###Code
from azureml.core import Workspace
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep='\n')
###Output
_____no_output_____
###Markdown
Set experiment name and create projectChoose a name for your run history container in the workspace, and create a folder for the project.
###Code
from os import path, makedirs
experiment_name = 'tensorboard-demo'
# experiment folder
exp_dir = './sample_projects/' + experiment_name
if not path.exists(exp_dir):
makedirs(exp_dir)
# runs we started in this session, for the finale
runs = []
###Output
_____no_output_____
###Markdown
Download Tensorflow Tensorboard demo codeTensorflow's repository has an MNIST demo with extensive Tensorboard instrumentation. We'll use it here for our purposes.Note that we don't need to make any code changes at all - the code works without modification from the Tensorflow repository.
###Code
import requests
import os
tf_code = requests.get("https://raw.githubusercontent.com/tensorflow/tensorflow/r1.8/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py")
with open(os.path.join(exp_dir, "mnist_with_summaries.py"), "w") as file:
file.write(tf_code.text)
###Output
_____no_output_____
###Markdown
Configure and run locallyWe'll start by running this locally. While it might not initially seem that useful to use this for a local run - why not just run TB against the files generated locally? - even in this case there is some value to using this feature. Your local run will be registered in the run history, and your Tensorboard logs will be uploaded to the artifact store associated with this run. Later, you'll be able to restore the logs from any run, regardless of where it happened.Note that for this run, you will need to install Tensorflow on your local machine by yourself. Further, the Tensorboard module (that is, the one included with Tensorflow) must be accessible to this notebook's kernel, as the local machine is what runs Tensorboard.
###Code
from azureml.core.runconfig import RunConfiguration
# Create a run configuration.
run_config = RunConfiguration()
run_config.environment.python.user_managed_dependencies = True
# You can choose a specific Python environment by pointing to a Python path
#run_config.environment.python.interpreter_path = '/home/ninghai/miniconda3/envs/sdk2/bin/python'
from azureml.core import Experiment
from azureml.core.script_run_config import ScriptRunConfig
logs_dir = os.path.join(os.curdir, "logs")
data_dir = os.path.abspath(os.path.join(os.curdir, "mnist_data"))
if not path.exists(data_dir):
makedirs(data_dir)
os.environ["TEST_TMPDIR"] = data_dir
# Writing logs to ./logs results in their being uploaded to Artifact Service,
# and thus, made accessible to our Tensorboard instance.
arguments_list = ["--log_dir", logs_dir]
# Create an experiment
exp = Experiment(ws, experiment_name)
# If you would like the run to go for longer, add --max_steps 5000 to the arguments list:
# arguments_list += ["--max_steps", "5000"]
script = ScriptRunConfig(exp_dir,
script="mnist_with_summaries.py",
run_config=run_config,
arguments=arguments_list)
run = exp.submit(script)
# You can also wait for the run to complete
# run.wait_for_completion(show_output=True)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start TensorboardNow, while the run is in progress, we just need to start Tensorboard with the run as its target, and it will begin streaming logs.
###Code
from azureml.tensorboard import Tensorboard
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Now, with a DSVMTensorboard uploading works with all compute targets. Here we demonstrate it from a DSVM.Note that the Tensorboard instance itself will be run by the notebook kernel. Again, this means this notebook's kernel must have access to the Tensorboard module.If you are unfamiliar with DSVM configuration, check [Train in a remote VM](../../training/train-on-remote-vm/train-on-remote-vm.ipynb) for a more detailed breakdown.**Note**: To streamline the compute that Azure Machine Learning creates, we are making updates to support creating only single to multi-node `AmlCompute`. The `DSVMCompute` class will be deprecated in a later release, but the DSVM can be created using the below single line command and then attached(like any VM) using the sample code below. Also note, that we only support Linux VMs for remote execution from AML and the commands below will spin a Linux VM only.```shell create a DSVM in your resource group note you need to be at least a contributor to the resource group in order to execute this command successfully.(myenv) $ az vm create --resource-group --name --image microsoft-dsvm:linux-data-science-vm-ubuntu:linuxdsvmubuntu:latest --admin-username --admin-password --generate-ssh-keys --authentication-type password```You can also use [this url](https://portal.azure.com/create/microsoft-dsvm.linux-data-science-vm-ubuntulinuxdsvmubuntu) to create the VM using the Azure Portal.
###Code
from azureml.core.compute import ComputeTarget, RemoteCompute
from azureml.core.compute_target import ComputeTargetException
username = os.getenv('AZUREML_DSVM_USERNAME', default='<my_username>')
address = os.getenv('AZUREML_DSVM_ADDRESS', default='<ip_address_or_fqdn>')
compute_target_name = 'cpudsvm'
# if you want to connect using SSH key instead of username/password you can provide parameters private_key_file and private_key_passphrase
try:
attached_dsvm_compute = RemoteCompute(workspace=ws, name=compute_target_name)
print('found existing:', attached_dsvm_compute.name)
except ComputeTargetException:
config = RemoteCompute.attach_configuration(username=username,
address=address,
ssh_port=22,
private_key_file='./.ssh/id_rsa')
attached_dsvm_compute = ComputeTarget.attach(ws, compute_target_name, config)
attached_dsvm_compute.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Submit run using TensorFlow estimatorInstead of manually configuring the DSVM environment, we can use the TensorFlow estimator and everything is set up automatically.
###Code
from azureml.train.dnn import TensorFlow
script_params = {"--log_dir": "./logs"}
# If you want the run to go longer, set --max-steps to a higher number.
# script_params["--max_steps"] = "5000"
tf_estimator = TensorFlow(source_directory=exp_dir,
compute_target=attached_dsvm_compute,
entry_script='mnist_with_summaries.py',
script_params=script_params)
run = exp.submit(tf_estimator)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start Tensorboard with this runJust like before.
###Code
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Once more, with an AmlCompute clusterJust to prove we can, let's create an AmlCompute CPU cluster, and run our demo there, as well.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
# choose a name for your cluster
cluster_name = "cpucluster"
cts = ws.compute_targets
found = False
if cluster_name in cts and cts[cluster_name].type == 'AmlCompute':
found = True
print('Found existing compute target.')
compute_target = cts[cluster_name]
if not found:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=4)
# create the cluster
compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True, min_node_count=None)
# use get_status() to get a detailed status for the current cluster.
# print(compute_target.get_status().serialize())
###Output
_____no_output_____
###Markdown
Submit run using TensorFlow estimatorAgain, we can use the TensorFlow estimator and everything is set up automatically.
###Code
script_params = {"--log_dir": "./logs"}
# If you want the run to go longer, set --max-steps to a higher number.
# script_params["--max_steps"] = "5000"
tf_estimator = TensorFlow(source_directory=exp_dir,
compute_target=compute_target,
entry_script='mnist_with_summaries.py',
script_params=script_params)
run = exp.submit(tf_estimator)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start Tensorboard with this runOnce more...
###Code
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
FinaleIf you've paid close attention, you'll have noticed that we've been saving the run objects in an array as we went along. We can start a Tensorboard instance that combines all of these run objects into a single process. This way, you can compare historical runs. You can even do this with live runs; if you made some of those previous runs longer via the `--max_steps` parameter, they might still be running, and you'll see them live in this instance as well.
###Code
# The Tensorboard constructor takes an array of runs...
# and it turns out that we have been building one of those all along.
tb = Tensorboard(runs)
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardAs you might already know, make sure to call the `stop()` method of the Tensorboard object, or it will stay running (until you kill the kernel associated with this notebook, at least).
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License.  Tensorboard Integration with Run History1. Run a Tensorflow job locally and view its TB output live.2. The same, for a DSVM.3. And once more, with an AmlCompute cluster.4. Finally, we'll collect all of these historical runs together into a single Tensorboard graph. Prerequisites* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning* If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) notebook to: * install the AML SDK * create a workspace and its configuration file (`config.json`)
###Code
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
Install the Azure ML TensorBoard package.
###Code
!pip install azureml-tensorboard
###Output
_____no_output_____
###Markdown
DiagnosticsOpt-in diagnostics for better experience, quality, and security of future releases.
###Code
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics=True)
###Output
_____no_output_____
###Markdown
Initialize WorkspaceInitialize a workspace object from persisted configuration.
###Code
from azureml.core import Workspace
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep='\n')
###Output
_____no_output_____
###Markdown
Set experiment name and create projectChoose a name for your run history container in the workspace, and create a folder for the project.
###Code
from os import path, makedirs
experiment_name = 'tensorboard-demo'
# experiment folder
exp_dir = './sample_projects/' + experiment_name
if not path.exists(exp_dir):
makedirs(exp_dir)
# runs we started in this session, for the finale
runs = []
###Output
_____no_output_____
###Markdown
Download Tensorflow Tensorboard demo codeTensorflow's repository has an MNIST demo with extensive Tensorboard instrumentation. We'll use it here for our purposes.Note that we don't need to make any code changes at all - the code works without modification from the Tensorflow repository.
###Code
import requests
import os
tf_code = requests.get("https://raw.githubusercontent.com/tensorflow/tensorflow/r1.8/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py")
with open(os.path.join(exp_dir, "mnist_with_summaries.py"), "w") as file:
file.write(tf_code.text)
###Output
_____no_output_____
###Markdown
Configure and run locallyWe'll start by running this locally. While it might not initially seem that useful to use this for a local run - why not just run TB against the files generated locally? - even in this case there is some value to using this feature. Your local run will be registered in the run history, and your Tensorboard logs will be uploaded to the artifact store associated with this run. Later, you'll be able to restore the logs from any run, regardless of where it happened.Note that for this run, you will need to install Tensorflow on your local machine by yourself. Further, the Tensorboard module (that is, the one included with Tensorflow) must be accessible to this notebook's kernel, as the local machine is what runs Tensorboard.
###Code
from azureml.core.runconfig import RunConfiguration
# Create a run configuration.
run_config = RunConfiguration()
run_config.environment.python.user_managed_dependencies = True
# You can choose a specific Python environment by pointing to a Python path
#run_config.environment.python.interpreter_path = '/home/ninghai/miniconda3/envs/sdk2/bin/python'
from azureml.core import Experiment
from azureml.core.script_run_config import ScriptRunConfig
logs_dir = os.path.join(os.curdir, "logs")
data_dir = os.path.abspath(os.path.join(os.curdir, "mnist_data"))
if not path.exists(data_dir):
makedirs(data_dir)
os.environ["TEST_TMPDIR"] = data_dir
# Writing logs to ./logs results in their being uploaded to Artifact Service,
# and thus, made accessible to our Tensorboard instance.
arguments_list = ["--log_dir", logs_dir]
# Create an experiment
exp = Experiment(ws, experiment_name)
# If you would like the run to go for longer, add --max_steps 5000 to the arguments list:
# arguments_list += ["--max_steps", "5000"]
script = ScriptRunConfig(exp_dir,
script="mnist_with_summaries.py",
run_config=run_config,
arguments=arguments_list)
run = exp.submit(script)
# You can also wait for the run to complete
# run.wait_for_completion(show_output=True)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start TensorboardNow, while the run is in progress, we just need to start Tensorboard with the run as its target, and it will begin streaming logs.
###Code
from azureml.tensorboard import Tensorboard
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Now, with a DSVMTensorboard uploading works with all compute targets. Here we demonstrate it from a DSVM.Note that the Tensorboard instance itself will be run by the notebook kernel. Again, this means this notebook's kernel must have access to the Tensorboard module.If you are unfamiliar with DSVM configuration, check [Train in a remote VM](../../training/train-on-remote-vm/train-on-remote-vm.ipynb) for a more detailed breakdown.**Note**: To streamline the compute that Azure Machine Learning creates, we are making updates to support creating only single to multi-node `AmlCompute`. The `DSVMCompute` class will be deprecated in a later release, but the DSVM can be created using the below single line command and then attached(like any VM) using the sample code below. Also note, that we only support Linux VMs for remote execution from AML and the commands below will spin a Linux VM only.```shell create a DSVM in your resource group note you need to be at least a contributor to the resource group in order to execute this command successfully.(myenv) $ az vm create --resource-group --name --image microsoft-dsvm:linux-data-science-vm-ubuntu:linuxdsvmubuntu:latest --admin-username --admin-password --generate-ssh-keys --authentication-type password```You can also use [this url](https://portal.azure.com/create/microsoft-dsvm.linux-data-science-vm-ubuntulinuxdsvmubuntu) to create the VM using the Azure Portal.
###Code
from azureml.core.compute import ComputeTarget, RemoteCompute
from azureml.core.compute_target import ComputeTargetException
username = os.getenv('AZUREML_DSVM_USERNAME', default='<my_username>')
address = os.getenv('AZUREML_DSVM_ADDRESS', default='<ip_address_or_fqdn>')
compute_target_name = 'cpudsvm'
# if you want to connect using SSH key instead of username/password you can provide parameters private_key_file and private_key_passphrase
try:
attached_dsvm_compute = RemoteCompute(workspace=ws, name=compute_target_name)
print('found existing:', attached_dsvm_compute.name)
except ComputeTargetException:
config = RemoteCompute.attach_configuration(username=username,
address=address,
ssh_port=22,
private_key_file='./.ssh/id_rsa')
attached_dsvm_compute = ComputeTarget.attach(ws, compute_target_name, config)
attached_dsvm_compute.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Submit run using TensorFlow estimatorInstead of manually configuring the DSVM environment, we can use the TensorFlow estimator and everything is set up automatically.
###Code
from azureml.train.dnn import TensorFlow
script_params = {"--log_dir": "./logs"}
# If you want the run to go longer, set --max-steps to a higher number.
# script_params["--max_steps"] = "5000"
tf_estimator = TensorFlow(source_directory=exp_dir,
compute_target=attached_dsvm_compute,
entry_script='mnist_with_summaries.py',
script_params=script_params)
run = exp.submit(tf_estimator)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start Tensorboard with this runJust like before.
###Code
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Once more, with an AmlCompute clusterJust to prove we can, let's create an AmlCompute CPU cluster, and run our demo there, as well.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
# choose a name for your cluster
cluster_name = "cpucluster"
cts = ws.compute_targets
found = False
if cluster_name in cts and cts[cluster_name].type == 'AmlCompute':
found = True
print('Found existing compute target.')
compute_target = cts[cluster_name]
if not found:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=4)
# create the cluster
compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True, min_node_count=None)
# use get_status() to get a detailed status for the current cluster.
# print(compute_target.get_status().serialize())
###Output
_____no_output_____
###Markdown
Submit run using TensorFlow estimatorAgain, we can use the TensorFlow estimator and everything is set up automatically.
###Code
script_params = {"--log_dir": "./logs"}
# If you want the run to go longer, set --max-steps to a higher number.
# script_params["--max_steps"] = "5000"
tf_estimator = TensorFlow(source_directory=exp_dir,
compute_target=compute_target,
entry_script='mnist_with_summaries.py',
script_params=script_params)
run = exp.submit(tf_estimator)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start Tensorboard with this runOnce more...
###Code
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
FinaleIf you've paid close attention, you'll have noticed that we've been saving the run objects in an array as we went along. We can start a Tensorboard instance that combines all of these run objects into a single process. This way, you can compare historical runs. You can even do this with live runs; if you made some of those previous runs longer via the `--max_steps` parameter, they might still be running, and you'll see them live in this instance as well.
###Code
# The Tensorboard constructor takes an array of runs...
# and it turns out that we have been building one of those all along.
tb = Tensorboard(runs)
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardAs you might already know, make sure to call the `stop()` method of the Tensorboard object, or it will stay running (until you kill the kernel associated with this notebook, at least).
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Tensorboard Integration with Run History1. Run a Tensorflow job locally and view its TB output live.2. The same, for a DSVM.3. And once more, with an AmlCompute cluster.4. Finally, we'll collect all of these historical runs together into a single Tensorboard graph. Prerequisites* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning* Go through the [00.configuration.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) notebook to: * install the AML SDK * create a workspace and its configuration file (`config.json`)
###Code
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
Install the Azure ML TensorBoard package.
###Code
!pip install azureml-contrib-tensorboard
###Output
_____no_output_____
###Markdown
DiagnosticsOpt-in diagnostics for better experience, quality, and security of future releases.
###Code
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics=True)
###Output
_____no_output_____
###Markdown
Initialize WorkspaceInitialize a workspace object from persisted configuration.
###Code
from azureml.core import Workspace
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep = '\n')
###Output
_____no_output_____
###Markdown
Set experiment name and create projectChoose a name for your run history container in the workspace, and create a folder for the project.
###Code
from os import path, makedirs
experiment_name = 'tensorboard-demo'
# experiment folder
exp_dir = './sample_projects/' + experiment_name
if not path.exists(exp_dir):
makedirs(exp_dir)
# runs we started in this session, for the finale
runs = []
###Output
_____no_output_____
###Markdown
Download Tensorflow Tensorboard demo codeTensorflow's repository has an MNIST demo with extensive Tensorboard instrumentation. We'll use it here for our purposes.Note that we don't need to make any code changes at all - the code works without modification from the Tensorflow repository.
###Code
import requests
import os
import tempfile
tf_code = requests.get("https://raw.githubusercontent.com/tensorflow/tensorflow/r1.8/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py")
with open(os.path.join(exp_dir, "mnist_with_summaries.py"), "w") as file:
file.write(tf_code.text)
###Output
_____no_output_____
###Markdown
Configure and run locallyWe'll start by running this locally. While it might not initially seem that useful to use this for a local run - why not just run TB against the files generated locally? - even in this case there is some value to using this feature. Your local run will be registered in the run history, and your Tensorboard logs will be uploaded to the artifact store associated with this run. Later, you'll be able to restore the logs from any run, regardless of where it happened.Note that for this run, you will need to install Tensorflow on your local machine by yourself. Further, the Tensorboard module (that is, the one included with Tensorflow) must be accessible to this notebook's kernel, as the local machine is what runs Tensorboard.
###Code
from azureml.core.runconfig import RunConfiguration
# Create a run configuration.
run_config = RunConfiguration()
run_config.environment.python.user_managed_dependencies = True
# You can choose a specific Python environment by pointing to a Python path
#run_config.environment.python.interpreter_path = '/home/ninghai/miniconda3/envs/sdk2/bin/python'
from azureml.core import Experiment, Run
from azureml.core.script_run_config import ScriptRunConfig
import tensorflow as tf
logs_dir = os.path.join(os.curdir, "logs")
data_dir = os.path.abspath(os.path.join(os.curdir, "mnist_data"))
if not path.exists(data_dir):
makedirs(data_dir)
os.environ["TEST_TMPDIR"] = data_dir
# Writing logs to ./logs results in their being uploaded to Artifact Service,
# and thus, made accessible to our Tensorboard instance.
arguments_list = ["--log_dir", logs_dir]
# Create an experiment
exp = Experiment(ws, experiment_name)
# If you would like the run to go for longer, add --max_steps 5000 to the arguments list:
# arguments_list += ["--max_steps", "5000"]
script = ScriptRunConfig(exp_dir,
script="mnist_with_summaries.py",
run_config=run_config,
arguments=arguments_list)
run = exp.submit(script)
# You can also wait for the run to complete
# run.wait_for_completion(show_output=True)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start TensorboardNow, while the run is in progress, we just need to start Tensorboard with the run as its target, and it will begin streaming logs.
###Code
from azureml.contrib.tensorboard import Tensorboard
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Now, with a DSVMTensorboard uploading works with all compute targets. Here we demonstrate it from a DSVM.Note that the Tensorboard instance itself will be run by the notebook kernel. Again, this means this notebook's kernel must have access to the Tensorboard module.If you are unfamiliar with DSVM configuration, check [04. Train in a remote VM](04.train-on-remote-vm.ipynb) for a more detailed breakdown.**Note**: To streamline the compute that Azure Machine Learning creates, we are making updates to support creating only single to multi-node `AmlCompute`. The `DSVMCompute` class will be deprecated in a later release, but the DSVM can be created using the below single line command and then attached(like any VM) using the sample code below. Also note, that we only support Linux VMs for remote execution from AML and the commands below will spin a Linux VM only.```shell create a DSVM in your resource group note you need to be at least a contributor to the resource group in order to execute this command successfully.(myenv) $ az vm create --resource-group --name --image microsoft-dsvm:linux-data-science-vm-ubuntu:linuxdsvmubuntu:latest --admin-username --admin-password --generate-ssh-keys --authentication-type password```You can also use [this url](https://portal.azure.com/create/microsoft-dsvm.linux-data-science-vm-ubuntulinuxdsvmubuntu) to create the VM using the Azure Portal.
###Code
from azureml.core.compute import RemoteCompute
from azureml.core.compute_target import ComputeTargetException
import os
username = os.getenv('AZUREML_DSVM_USERNAME', default='<my_username>')
address = os.getenv('AZUREML_DSVM_ADDRESS', default='<ip_address_or_fqdn>')
compute_target_name = 'cpudsvm'
# if you want to connect using SSH key instead of username/password you can provide parameters private_key_file and private_key_passphrase
try:
attached_dsvm_compute = RemoteCompute(workspace=ws, name=compute_target_name)
print('found existing:', attached_dsvm_compute.name)
except ComputeTargetException:
attached_dsvm_compute = RemoteCompute.attach(workspace=ws,
name=compute_target_name,
username=username,
address=address,
ssh_port=22,
private_key_file='./.ssh/id_rsa')
attached_dsvm_compute.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Submit run using TensorFlow estimatorInstead of manually configuring the DSVM environment, we can use the TensorFlow estimator and everything is set up automatically.
###Code
from azureml.train.dnn import TensorFlow
script_params = {"--log_dir": "./logs"}
# If you want the run to go longer, set --max-steps to a higher number.
# script_params["--max_steps"] = "5000"
tf_estimator = TensorFlow(source_directory=exp_dir,
compute_target=attached_dsvm_compute,
entry_script='mnist_with_summaries.py',
script_params=script_params)
run = exp.submit(tf_estimator)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start Tensorboard with this runJust like before.
###Code
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Once more, with an AmlCompute clusterJust to prove we can, let's create an AmlCompute CPU cluster, and run our demo there, as well.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# choose a name for your cluster
cluster_name = "cpucluster"
try:
compute_target = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing compute target.')
except ComputeTargetException:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=4)
# create the cluster
compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True, min_node_count=1, timeout_in_minutes=20)
# Use the 'status' property to get a detailed status for the current cluster.
print(compute_target.status.serialize())
###Output
_____no_output_____
###Markdown
Submit run using TensorFlow estimatorAgain, we can use the TensorFlow estimator and everything is set up automatically.
###Code
script_params = {"--log_dir": "./logs"}
# If you want the run to go longer, set --max-steps to a higher number.
# script_params["--max_steps"] = "5000"
tf_estimator = TensorFlow(source_directory=exp_dir,
compute_target=compute_target,
entry_script='mnist_with_summaries.py',
script_params=script_params)
run = exp.submit(tf_estimator)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start Tensorboard with this runOnce more...
###Code
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
FinaleIf you've paid close attention, you'll have noticed that we've been saving the run objects in an array as we went along. We can start a Tensorboard instance that combines all of these run objects into a single process. This way, you can compare historical runs. You can even do this with live runs; if you made some of those previous runs longer via the `--max_steps` parameter, they might still be running, and you'll see them live in this instance as well.
###Code
# The Tensorboard constructor takes an array of runs...
# and it turns out that we have been building one of those all along.
tb = Tensorboard(runs)
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardAs you might already know, make sure to call the `stop()` method of the Tensorboard object, or it will stay running (until you kill the kernel associated with this notebook, at least).
###Code
tb.stop()
###Output
_____no_output_____
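###Markdown
Restore logs from run history. Because each run's Tensorboard logs were uploaded to its artifact store, you can also point Tensorboard at runs fetched back from run history, without keeping the run objects in memory. The cell below is a minimal sketch of that idea, assuming the experiment above already has a few completed runs; taking the three most recent runs is purely illustrative.
###Code
from azureml.core import Experiment
from azureml.contrib.tensorboard import Tensorboard

# Fetch historical runs from run history (newest first) and stream their logs
past_exp = Experiment(ws, experiment_name)
past_runs = list(past_exp.get_runs())[:3]

tb = Tensorboard(past_runs)
print(tb.start())
# ...browse the combined graphs, then shut the instance down as usual
tb.stop()
###Output
_____no_output_____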
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Tensorboard Integration with Run History1. Run a Tensorflow job locally and view its TB output live.2. The same, for a DSVM.3. And once more, with an AmlCompute cluster.4. Finally, we'll collect all of these historical runs together into a single Tensorboard graph. Prerequisites* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning* Go through the [configuration notebook](../../../configuration.ipynb) notebook to: * install the AML SDK * create a workspace and its configuration file (`config.json`)
###Code
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
Install the Azure ML TensorBoard package.
###Code
!pip install azureml-tensorboard
###Output
_____no_output_____
###Markdown
DiagnosticsOpt-in diagnostics for better experience, quality, and security of future releases.
###Code
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics=True)
###Output
_____no_output_____
###Markdown
Initialize WorkspaceInitialize a workspace object from persisted configuration.
###Code
from azureml.core import Workspace
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep='\n')
###Output
_____no_output_____
###Markdown
Set experiment name and create projectChoose a name for your run history container in the workspace, and create a folder for the project.
###Code
from os import path, makedirs
experiment_name = 'tensorboard-demo'
# experiment folder
exp_dir = './sample_projects/' + experiment_name
if not path.exists(exp_dir):
makedirs(exp_dir)
# runs we started in this session, for the finale
runs = []
###Output
_____no_output_____
###Markdown
Download Tensorflow Tensorboard demo codeTensorflow's repository has an MNIST demo with extensive Tensorboard instrumentation. We'll use it here for our purposes.Note that we don't need to make any code changes at all - the code works without modification from the Tensorflow repository.
###Code
import requests
import os
tf_code = requests.get("https://raw.githubusercontent.com/tensorflow/tensorflow/r1.8/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py")
with open(os.path.join(exp_dir, "mnist_with_summaries.py"), "w") as file:
file.write(tf_code.text)
###Output
_____no_output_____
###Markdown
Configure and run locallyWe'll start by running this locally. While it might not initially seem that useful to use this for a local run - why not just run TB against the files generated locally? - even in this case there is some value to using this feature. Your local run will be registered in the run history, and your Tensorboard logs will be uploaded to the artifact store associated with this run. Later, you'll be able to restore the logs from any run, regardless of where it happened.Note that for this run, you will need to install Tensorflow on your local machine by yourself. Further, the Tensorboard module (that is, the one included with Tensorflow) must be accessible to this notebook's kernel, as the local machine is what runs Tensorboard.
###Code
from azureml.core.runconfig import RunConfiguration
# Create a run configuration.
run_config = RunConfiguration()
run_config.environment.python.user_managed_dependencies = True
# You can choose a specific Python environment by pointing to a Python path
#run_config.environment.python.interpreter_path = '/home/ninghai/miniconda3/envs/sdk2/bin/python'
from azureml.core import Experiment
from azureml.core.script_run_config import ScriptRunConfig
logs_dir = os.path.join(os.curdir, "logs")
data_dir = os.path.abspath(os.path.join(os.curdir, "mnist_data"))
if not path.exists(data_dir):
makedirs(data_dir)
os.environ["TEST_TMPDIR"] = data_dir
# Writing logs to ./logs results in their being uploaded to Artifact Service,
# and thus, made accessible to our Tensorboard instance.
arguments_list = ["--log_dir", logs_dir]
# Create an experiment
exp = Experiment(ws, experiment_name)
# If you would like the run to go for longer, add --max_steps 5000 to the arguments list:
# arguments_list += ["--max_steps", "5000"]
script = ScriptRunConfig(exp_dir,
script="mnist_with_summaries.py",
run_config=run_config,
arguments=arguments_list)
run = exp.submit(script)
# You can also wait for the run to complete
# run.wait_for_completion(show_output=True)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start TensorboardNow, while the run is in progress, we just need to start Tensorboard with the run as its target, and it will begin streaming logs.
###Code
from azureml.tensorboard import Tensorboard
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Now, with a DSVMTensorboard uploading works with all compute targets. Here we demonstrate it from a DSVM.Note that the Tensorboard instance itself will be run by the notebook kernel. Again, this means this notebook's kernel must have access to the Tensorboard module.If you are unfamiliar with DSVM configuration, check [Train in a remote VM](../../training/train-on-remote-vm/train-on-remote-vm.ipynb) for a more detailed breakdown.**Note**: To streamline the compute that Azure Machine Learning creates, we are making updates to support creating only single to multi-node `AmlCompute`. The `DSVMCompute` class will be deprecated in a later release, but the DSVM can be created using the below single line command and then attached(like any VM) using the sample code below. Also note, that we only support Linux VMs for remote execution from AML and the commands below will spin a Linux VM only.```shell create a DSVM in your resource group note you need to be at least a contributor to the resource group in order to execute this command successfully.(myenv) $ az vm create --resource-group --name --image microsoft-dsvm:linux-data-science-vm-ubuntu:linuxdsvmubuntu:latest --admin-username --admin-password --generate-ssh-keys --authentication-type password```You can also use [this url](https://portal.azure.com/create/microsoft-dsvm.linux-data-science-vm-ubuntulinuxdsvmubuntu) to create the VM using the Azure Portal.
###Code
from azureml.core.compute import ComputeTarget, RemoteCompute
from azureml.core.compute_target import ComputeTargetException
username = os.getenv('AZUREML_DSVM_USERNAME', default='<my_username>')
address = os.getenv('AZUREML_DSVM_ADDRESS', default='<ip_address_or_fqdn>')
compute_target_name = 'cpudsvm'
# if you want to connect using SSH key instead of username/password you can provide parameters private_key_file and private_key_passphrase
try:
attached_dsvm_compute = RemoteCompute(workspace=ws, name=compute_target_name)
print('found existing:', attached_dsvm_compute.name)
except ComputeTargetException:
config = RemoteCompute.attach_configuration(username=username,
address=address,
ssh_port=22,
private_key_file='./.ssh/id_rsa')
attached_dsvm_compute = ComputeTarget.attach(ws, compute_target_name, config)
attached_dsvm_compute.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Submit run using TensorFlow estimatorInstead of manually configuring the DSVM environment, we can use the TensorFlow estimator and everything is set up automatically.
###Code
from azureml.train.dnn import TensorFlow
script_params = {"--log_dir": "./logs"}
# If you want the run to go longer, set --max-steps to a higher number.
# script_params["--max_steps"] = "5000"
tf_estimator = TensorFlow(source_directory=exp_dir,
compute_target=attached_dsvm_compute,
entry_script='mnist_with_summaries.py',
script_params=script_params)
run = exp.submit(tf_estimator)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start Tensorboard with this runJust like before.
###Code
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Once more, with an AmlCompute clusterJust to prove we can, let's create an AmlCompute CPU cluster, and run our demo there, as well.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
# choose a name for your cluster
cluster_name = "cpucluster"
cts = ws.compute_targets
found = False
if cluster_name in cts and cts[cluster_name].type == 'AmlCompute':
found = True
print('Found existing compute target.')
compute_target = cts[cluster_name]
if not found:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=4)
# create the cluster
compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True, min_node_count=None)
# use get_status() to get a detailed status for the current cluster.
# print(compute_target.get_status().serialize())
###Output
_____no_output_____
###Markdown
Submit run using TensorFlow estimatorAgain, we can use the TensorFlow estimator and everything is set up automatically.
###Code
script_params = {"--log_dir": "./logs"}
# If you want the run to go longer, set --max-steps to a higher number.
# script_params["--max_steps"] = "5000"
tf_estimator = TensorFlow(source_directory=exp_dir,
compute_target=compute_target,
entry_script='mnist_with_summaries.py',
script_params=script_params)
run = exp.submit(tf_estimator)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start Tensorboard with this runOnce more...
###Code
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
FinaleIf you've paid close attention, you'll have noticed that we've been saving the run objects in an array as we went along. We can start a Tensorboard instance that combines all of these run objects into a single process. This way, you can compare historical runs. You can even do this with live runs; if you made some of those previous runs longer via the `--max_steps` parameter, they might still be running, and you'll see them live in this instance as well.
###Code
# The Tensorboard constructor takes an array of runs...
# and it turns out that we have been building one of those all along.
tb = Tensorboard(runs)
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardAs you might already know, make sure to call the `stop()` method of the Tensorboard object, or it will stay running (until you kill the kernel associated with this notebook, at least).
###Code
tb.stop()
###Output
_____no_output_____
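###Markdown
Download logs from a run's artifacts. The uploaded `./logs` files also remain available as run artifacts, so they can be pulled down and served by a locally launched Tensorboard. Below is a minimal sketch, assuming the `runs` list built in this session and the `download_files` method of azureml-core `Run` objects; the `./downloaded_logs` folder name is illustrative.
###Code
# Download the uploaded ./logs artifacts of the most recent run to a local folder
download_dir = "./downloaded_logs"
runs[-1].download_files(prefix="logs", output_directory=download_dir)

# The downloaded event files can then be served by any local Tensorboard, e.g.:
#   tensorboard --logdir ./downloaded_logs
###Output
_____no_output_____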
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License.  Tensorboard Integration with Run History1. Run a Tensorflow job locally and view its TB output live.2. The same, for a DSVM.3. And once more, with an AmlCompute cluster.4. Finally, we'll collect all of these historical runs together into a single Tensorboard graph. Prerequisites* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning* If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) notebook to: * install the AML SDK * create a workspace and its configuration file (`config.json`)
###Code
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
DiagnosticsOpt-in diagnostics for better experience, quality, and security of future releases.
###Code
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics=True)
###Output
_____no_output_____
###Markdown
Initialize WorkspaceInitialize a workspace object from persisted configuration.
###Code
from azureml.core import Workspace
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep='\n')
###Output
_____no_output_____
###Markdown
Set experiment name and create projectChoose a name for your run history container in the workspace, and create a folder for the project.
###Code
from os import path, makedirs
experiment_name = 'tensorboard-demo'
# experiment folder
exp_dir = './sample_projects/' + experiment_name
if not path.exists(exp_dir):
makedirs(exp_dir)
# runs we started in this session, for the finale
runs = []
###Output
_____no_output_____
###Markdown
Download Tensorflow Tensorboard demo codeTensorflow's repository has an MNIST demo with extensive Tensorboard instrumentation. We'll use it here for our purposes.Note that we don't need to make any code changes at all - the code works without modification from the Tensorflow repository.
###Code
import requests
import os
tf_code = requests.get("https://raw.githubusercontent.com/tensorflow/tensorflow/r1.8/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py")
with open(os.path.join(exp_dir, "mnist_with_summaries.py"), "w") as file:
file.write(tf_code.text)
###Output
_____no_output_____
###Markdown
Configure and run locallyWe'll start by running this locally. While it might not initially seem that useful to use this for a local run - why not just run TB against the files generated locally? - even in this case there is some value to using this feature. Your local run will be registered in the run history, and your Tensorboard logs will be uploaded to the artifact store associated with this run. Later, you'll be able to restore the logs from any run, regardless of where it happened.Note that for this run, you will need to install Tensorflow on your local machine by yourself. Further, the Tensorboard module (that is, the one included with Tensorflow) must be accessible to this notebook's kernel, as the local machine is what runs Tensorboard.
###Code
from azureml.core.runconfig import RunConfiguration
# Create a run configuration.
run_config = RunConfiguration()
run_config.environment.python.user_managed_dependencies = True
# You can choose a specific Python environment by pointing to a Python path
#run_config.environment.python.interpreter_path = '/home/ninghai/miniconda3/envs/sdk2/bin/python'
from azureml.core import Experiment
from azureml.core.script_run_config import ScriptRunConfig
logs_dir = os.path.join(os.curdir, "logs")
data_dir = os.path.abspath(os.path.join(os.curdir, "mnist_data"))
if not path.exists(data_dir):
makedirs(data_dir)
os.environ["TEST_TMPDIR"] = data_dir
# Writing logs to ./logs results in their being uploaded to Artifact Service,
# and thus, made accessible to our Tensorboard instance.
arguments_list = ["--log_dir", logs_dir]
# Create an experiment
exp = Experiment(ws, experiment_name)
# If you would like the run to go for longer, add --max_steps 5000 to the arguments list:
# arguments_list += ["--max_steps", "5000"]
script = ScriptRunConfig(exp_dir,
script="mnist_with_summaries.py",
run_config=run_config,
arguments=arguments_list)
run = exp.submit(script)
# You can also wait for the run to complete
# run.wait_for_completion(show_output=True)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start TensorboardNow, while the run is in progress, we just need to start Tensorboard with the run as its target, and it will begin streaming logs.
###Code
from azureml.tensorboard import Tensorboard
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Now, with a DSVMTensorboard uploading works with all compute targets. Here we demonstrate it from a DSVM.Note that the Tensorboard instance itself will be run by the notebook kernel. Again, this means this notebook's kernel must have access to the Tensorboard module.If you are unfamiliar with DSVM configuration, check [Train in a remote VM](../../training/train-on-remote-vm/train-on-remote-vm.ipynb) for a more detailed breakdown.**Note**: To streamline the compute that Azure Machine Learning creates, we are making updates to support creating only single to multi-node `AmlCompute`. The `DSVMCompute` class will be deprecated in a later release, but the DSVM can be created using the below single line command and then attached(like any VM) using the sample code below. Also note, that we only support Linux VMs for remote execution from AML and the commands below will spin a Linux VM only.```shell create a DSVM in your resource group note you need to be at least a contributor to the resource group in order to execute this command successfully.(myenv) $ az vm create --resource-group --name --image microsoft-dsvm:linux-data-science-vm-ubuntu:linuxdsvmubuntu:latest --admin-username --admin-password --generate-ssh-keys --authentication-type password```You can also use [this url](https://portal.azure.com/create/microsoft-dsvm.linux-data-science-vm-ubuntulinuxdsvmubuntu) to create the VM using the Azure Portal.
###Code
from azureml.core.compute import ComputeTarget, RemoteCompute
from azureml.core.compute_target import ComputeTargetException
username = os.getenv('AZUREML_DSVM_USERNAME', default='<my_username>')
address = os.getenv('AZUREML_DSVM_ADDRESS', default='<ip_address_or_fqdn>')
compute_target_name = 'cpudsvm'
# if you want to connect using SSH key instead of username/password you can provide parameters private_key_file and private_key_passphrase
try:
attached_dsvm_compute = RemoteCompute(workspace=ws, name=compute_target_name)
print('found existing:', attached_dsvm_compute.name)
except ComputeTargetException:
config = RemoteCompute.attach_configuration(username=username,
address=address,
ssh_port=22,
private_key_file='./.ssh/id_rsa')
attached_dsvm_compute = ComputeTarget.attach(ws, compute_target_name, config)
attached_dsvm_compute.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Submit run using TensorFlow estimatorInstead of manually configuring the DSVM environment, we can use the TensorFlow estimator and everything is set up automatically.
###Code
from azureml.train.dnn import TensorFlow
script_params = {"--log_dir": "./logs"}
# If you want the run to go longer, set --max-steps to a higher number.
# script_params["--max_steps"] = "5000"
tf_estimator = TensorFlow(source_directory=exp_dir,
compute_target=attached_dsvm_compute,
entry_script='mnist_with_summaries.py',
script_params=script_params)
run = exp.submit(tf_estimator)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start Tensorboard with this runJust like before.
###Code
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Once more, with an AmlCompute clusterJust to prove we can, let's create an AmlCompute CPU cluster, and run our demo there, as well.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
# choose a name for your cluster
cluster_name = "cpucluster"
cts = ws.compute_targets
found = False
if cluster_name in cts and cts[cluster_name].type == 'AmlCompute':
found = True
print('Found existing compute target.')
compute_target = cts[cluster_name]
if not found:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=4)
# create the cluster
compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True, min_node_count=None)
# use get_status() to get a detailed status for the current cluster.
# print(compute_target.get_status().serialize())
###Output
_____no_output_____
###Markdown
Submit run using TensorFlow estimatorAgain, we can use the TensorFlow estimator and everything is set up automatically.
###Code
script_params = {"--log_dir": "./logs"}
# If you want the run to go longer, set --max-steps to a higher number.
# script_params["--max_steps"] = "5000"
tf_estimator = TensorFlow(source_directory=exp_dir,
compute_target=compute_target,
entry_script='mnist_with_summaries.py',
script_params=script_params)
run = exp.submit(tf_estimator)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start Tensorboard with this runOnce more...
###Code
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
FinaleIf you've paid close attention, you'll have noticed that we've been saving the run objects in an array as we went along. We can start a Tensorboard instance that combines all of these run objects into a single process. This way, you can compare historical runs. You can even do this with live runs; if you made some of those previous runs longer via the `--max_steps` parameter, they might still be running, and you'll see them live in this instance as well.
###Code
# The Tensorboard constructor takes an array of runs...
# and it turns out that we have been building one of those all along.
tb = Tensorboard(runs)
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardAs you might already know, make sure to call the `stop()` method of the Tensorboard object, or it will stay running (until you kill the kernel associated with this notebook, at least).
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Tensorboard Integration with Run History1. Run a Tensorflow job locally and view its TB output live.2. The same, for a DSVM.3. And once more, with an AmlCompute cluster.4. Finally, we'll collect all of these historical runs together into a single Tensorboard graph. Prerequisites* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning* Go through the [configuration notebook](../../../configuration.ipynb) notebook to: * install the AML SDK * create a workspace and its configuration file (`config.json`)
###Code
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
Install the Azure ML TensorBoard package.
###Code
!pip install azureml-contrib-tensorboard
###Output
_____no_output_____
###Markdown
DiagnosticsOpt-in diagnostics for better experience, quality, and security of future releases.
###Code
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics=True)
###Output
_____no_output_____
###Markdown
Initialize WorkspaceInitialize a workspace object from persisted configuration.
###Code
from azureml.core import Workspace
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep='\n')
###Output
_____no_output_____
###Markdown
Set experiment name and create projectChoose a name for your run history container in the workspace, and create a folder for the project.
###Code
from os import path, makedirs
experiment_name = 'tensorboard-demo'
# experiment folder
exp_dir = './sample_projects/' + experiment_name
if not path.exists(exp_dir):
makedirs(exp_dir)
# runs we started in this session, for the finale
runs = []
###Output
_____no_output_____
###Markdown
Download Tensorflow Tensorboard demo codeTensorflow's repository has an MNIST demo with extensive Tensorboard instrumentation. We'll use it here for our purposes.Note that we don't need to make any code changes at all - the code works without modification from the Tensorflow repository.
###Code
import requests
import os
tf_code = requests.get("https://raw.githubusercontent.com/tensorflow/tensorflow/r1.8/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py")
with open(os.path.join(exp_dir, "mnist_with_summaries.py"), "w") as file:
file.write(tf_code.text)
###Output
_____no_output_____
###Markdown
Configure and run locallyWe'll start by running this locally. While it might not initially seem that useful to use this for a local run - why not just run TB against the files generated locally? - even in this case there is some value to using this feature. Your local run will be registered in the run history, and your Tensorboard logs will be uploaded to the artifact store associated with this run. Later, you'll be able to restore the logs from any run, regardless of where it happened.Note that for this run, you will need to install Tensorflow on your local machine by yourself. Further, the Tensorboard module (that is, the one included with Tensorflow) must be accessible to this notebook's kernel, as the local machine is what runs Tensorboard.
###Code
from azureml.core.runconfig import RunConfiguration
# Create a run configuration.
run_config = RunConfiguration()
run_config.environment.python.user_managed_dependencies = True
# You can choose a specific Python environment by pointing to a Python path
#run_config.environment.python.interpreter_path = '/home/ninghai/miniconda3/envs/sdk2/bin/python'
from azureml.core import Experiment
from azureml.core.script_run_config import ScriptRunConfig
logs_dir = os.path.join(os.curdir, "logs")
data_dir = os.path.abspath(os.path.join(os.curdir, "mnist_data"))
if not path.exists(data_dir):
makedirs(data_dir)
os.environ["TEST_TMPDIR"] = data_dir
# Writing logs to ./logs results in their being uploaded to Artifact Service,
# and thus, made accessible to our Tensorboard instance.
arguments_list = ["--log_dir", logs_dir]
# Create an experiment
exp = Experiment(ws, experiment_name)
# If you would like the run to go for longer, add --max_steps 5000 to the arguments list:
# arguments_list += ["--max_steps", "5000"]
script = ScriptRunConfig(exp_dir,
script="mnist_with_summaries.py",
run_config=run_config,
arguments=arguments_list)
run = exp.submit(script)
# You can also wait for the run to complete
# run.wait_for_completion(show_output=True)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start TensorboardNow, while the run is in progress, we just need to start Tensorboard with the run as its target, and it will begin streaming logs.
###Code
from azureml.contrib.tensorboard import Tensorboard
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Now, with a DSVMTensorboard uploading works with all compute targets. Here we demonstrate it from a DSVM.Note that the Tensorboard instance itself will be run by the notebook kernel. Again, this means this notebook's kernel must have access to the Tensorboard module.If you are unfamiliar with DSVM configuration, check [Train in a remote VM](../../training/train-on-remote-vm/train-on-remote-vm.ipynb) for a more detailed breakdown.**Note**: To streamline the compute that Azure Machine Learning creates, we are making updates to support creating only single to multi-node `AmlCompute`. The `DSVMCompute` class will be deprecated in a later release, but the DSVM can be created using the below single line command and then attached(like any VM) using the sample code below. Also note, that we only support Linux VMs for remote execution from AML and the commands below will spin a Linux VM only.```shell create a DSVM in your resource group note you need to be at least a contributor to the resource group in order to execute this command successfully.(myenv) $ az vm create --resource-group --name --image microsoft-dsvm:linux-data-science-vm-ubuntu:linuxdsvmubuntu:latest --admin-username --admin-password --generate-ssh-keys --authentication-type password```You can also use [this url](https://portal.azure.com/create/microsoft-dsvm.linux-data-science-vm-ubuntulinuxdsvmubuntu) to create the VM using the Azure Portal.
###Code
from azureml.core.compute import RemoteCompute
from azureml.core.compute_target import ComputeTargetException
username = os.getenv('AZUREML_DSVM_USERNAME', default='<my_username>')
address = os.getenv('AZUREML_DSVM_ADDRESS', default='<ip_address_or_fqdn>')
compute_target_name = 'cpudsvm'
# if you want to connect using SSH key instead of username/password you can provide parameters private_key_file and private_key_passphrase
try:
attached_dsvm_compute = RemoteCompute(workspace=ws, name=compute_target_name)
print('found existing:', attached_dsvm_compute.name)
except ComputeTargetException:
attached_dsvm_compute = RemoteCompute.attach(workspace=ws,
name=compute_target_name,
username=username,
address=address,
ssh_port=22,
private_key_file='./.ssh/id_rsa')
attached_dsvm_compute.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Submit run using TensorFlow estimatorInstead of manually configuring the DSVM environment, we can use the TensorFlow estimator and everything is set up automatically.
###Code
from azureml.train.dnn import TensorFlow
script_params = {"--log_dir": "./logs"}
# If you want the run to go longer, set --max-steps to a higher number.
# script_params["--max_steps"] = "5000"
tf_estimator = TensorFlow(source_directory=exp_dir,
compute_target=attached_dsvm_compute,
entry_script='mnist_with_summaries.py',
script_params=script_params)
run = exp.submit(tf_estimator)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start Tensorboard with this runJust like before.
###Code
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Once more, with an AmlCompute clusterJust to prove we can, let's create an AmlCompute CPU cluster, and run our demo there, as well.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
# choose a name for your cluster
cluster_name = "cpucluster"
try:
compute_target = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing compute target.')
except ComputeTargetException:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=4)
# create the cluster
compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True, min_node_count=1, timeout_in_minutes=20)
# use get_status() to get a detailed status for the current cluster.
print(compute_target.get_status().serialize())
###Output
_____no_output_____
###Markdown
Submit run using TensorFlow estimatorAgain, we can use the TensorFlow estimator and everything is set up automatically.
###Code
script_params = {"--log_dir": "./logs"}
# If you want the run to go longer, set --max-steps to a higher number.
# script_params["--max_steps"] = "5000"
tf_estimator = TensorFlow(source_directory=exp_dir,
compute_target=compute_target,
entry_script='mnist_with_summaries.py',
script_params=script_params)
run = exp.submit(tf_estimator)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start Tensorboard with this runOnce more...
###Code
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
FinaleIf you've paid close attention, you'll have noticed that we've been saving the run objects in an array as we went along. We can start a Tensorboard instance that combines all of these run objects into a single process. This way, you can compare historical runs. You can even do this with live runs; if you made some of those previous runs longer via the `--max_steps` parameter, they might still be running, and you'll see them live in this instance as well.
###Code
# The Tensorboard constructor takes an array of runs...
# and it turns out that we have been building one of those all along.
tb = Tensorboard(runs)
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardAs you might already know, make sure to call the `stop()` method of the Tensorboard object, or it will stay running (until you kill the kernel associated with this notebook, at least).
###Code
tb.stop()
###Output
_____no_output_____
|
site/en-snapshot/guide/migrate/evaluator.ipynb
|
###Markdown
Copyright 2021 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Migrate evaluation View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook Evaluation is a critical part of measuring and benchmarking models.This guide demonstrates how to migrate evaluator tasks from TensorFlow 1 to TensorFlow 2. In Tensorflow 1 this functionality is implemented by `tf.estimator.train_and_evaluate`, when the API is running distributedly. In Tensorflow 2, you can use the built-in `tf.keras.experimental.SidecarEvaluator`, or a custom evaluation loop on the evaluator task.There are simple serial evaluation options in both TensorFlow 1 (`tf.estimator.Estimator.evaluate`) and TensorFlow 2 (`Model.fit(..., validation_data=(...))` or `Model.evaluate`). The evaluator task is preferable when you would like your workers not switching between training and evaluation, and built-in evaluation in `Model.fit` is preferable when you would like your evaluation to be distributed. Setup
###Code
import tensorflow.compat.v1 as tf1
import tensorflow as tf
import numpy as np
import tempfile
import time
import os
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
###Output
_____no_output_____
###Markdown
TensorFlow 1: Evaluating using tf.estimator.train_and_evaluateIn TensorFlow 1, you can configure a `tf.estimator` to evaluate the estimator using `tf.estimator.train_and_evaluate`.In this example, start by defining the `tf.estimator.Estimator` and specifying training and evaluation specifications:
###Code
feature_columns = [tf1.feature_column.numeric_column("x", shape=[28, 28])]
classifier = tf1.estimator.DNNClassifier(
feature_columns=feature_columns,
hidden_units=[256, 32],
optimizer=tf1.train.AdamOptimizer(0.001),
n_classes=10,
dropout=0.2
)
train_input_fn = tf1.estimator.inputs.numpy_input_fn(
x={"x": x_train},
y=y_train.astype(np.int32),
num_epochs=10,
batch_size=50,
shuffle=True,
)
test_input_fn = tf1.estimator.inputs.numpy_input_fn(
x={"x": x_test},
y=y_test.astype(np.int32),
num_epochs=10,
shuffle=False
)
train_spec = tf1.estimator.TrainSpec(input_fn=train_input_fn, max_steps=10)
eval_spec = tf1.estimator.EvalSpec(input_fn=test_input_fn,
steps=10,
throttle_secs=0)
###Output
_____no_output_____
###Markdown
Then, train and evaluate the model. Because this notebook is limited to a local run, the evaluation is interleaved with training, alternating between the two. However, if the estimator is used in a distributed setting, the evaluator will run as a dedicated evaluator task. For more information, check the [migration guide on distributed training](https://www.tensorflow.org/guide/migrate/multi_worker_cpu_gpu_training).
###Code
tf1.estimator.train_and_evaluate(estimator=classifier,
train_spec=train_spec,
eval_spec=eval_spec)
###Output
_____no_output_____
###Markdown
TensorFlow 2: Evaluating a Keras modelIn TensorFlow 2, if you use the Keras `Model.fit` API for training, you can evaluate the model with `tf.keras.experimental.SidecarEvaluator`. You can also visualize the evaluation metrics in Tensorboard which is not shown in this guide.To help demonstrate this, let's first start by defining and training the model:
###Code
def create_model():
return tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10)
])
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model = create_model()
model.compile(optimizer='adam',
loss=loss,
metrics=['accuracy'],
steps_per_execution=10,
run_eagerly=True)
log_dir = tempfile.mkdtemp()
model_checkpoint = tf.keras.callbacks.ModelCheckpoint(
filepath=os.path.join(log_dir, 'ckpt-{epoch}'),
save_weights_only=True)
model.fit(x=x_train,
y=y_train,
epochs=1,
callbacks=[model_checkpoint])
###Output
_____no_output_____
###Markdown
Then, evaluate the model using `tf.keras.experimental.SidecarEvaluator`. In real training, it's recommended to use a separate job to conduct the evaluation to free up worker resources for training.
###Code
data = tf.data.Dataset.from_tensor_slices((x_test, y_test))
data = data.batch(64)
tf.keras.experimental.SidecarEvaluator(
model=model,
data=data,
checkpoint_dir=log_dir,
max_evaluations=1
).start()
###Output
_____no_output_____
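###Markdown
For comparison, the simple serial option mentioned at the top of this guide needs no separate evaluator task at all. The cell below is a minimal sketch that reuses the model and test data defined above; the batch size of 64 is arbitrary.
###Code
# Serial evaluation of the in-memory Keras model with Model.evaluate
loss_value, accuracy = model.evaluate(x_test, y_test, batch_size=64)
print("serial evaluation - loss: {:.4f}, accuracy: {:.4f}".format(loss_value, accuracy))
###Output
_____no_output_____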
###Markdown
Copyright 2021 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Migrate evaluation View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook Evaluation is a critical part of measuring and benchmarking models.This guide demonstrates how to migrate evaluator tasks from TensorFlow 1 to TensorFlow 2. In Tensorflow 1 this functionality is implemented by `tf.estimator.train_and_evaluate`, when the API is running distributedly. In Tensorflow 2, you can use the built-in `tf.keras.utils.SidecarEvaluator`, or a custom evaluation loop on the evaluator task.There are simple serial evaluation options in both TensorFlow 1 (`tf.estimator.Estimator.evaluate`) and TensorFlow 2 (`Model.fit(..., validation_data=(...))` or `Model.evaluate`). The evaluator task is preferable when you would like your workers not switching between training and evaluation, and built-in evaluation in `Model.fit` is preferable when you would like your evaluation to be distributed. Setup
###Code
import tensorflow.compat.v1 as tf1
import tensorflow as tf
import numpy as np
import tempfile
import time
import os
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
###Output
_____no_output_____
###Markdown
TensorFlow 1: Evaluating using tf.estimator.train_and_evaluateIn TensorFlow 1, you can configure a `tf.estimator` to evaluate the estimator using `tf.estimator.train_and_evaluate`.In this example, start by defining the `tf.estimator.Estimator` and specifying training and evaluation specifications:
###Code
feature_columns = [tf1.feature_column.numeric_column("x", shape=[28, 28])]
classifier = tf1.estimator.DNNClassifier(
feature_columns=feature_columns,
hidden_units=[256, 32],
optimizer=tf1.train.AdamOptimizer(0.001),
n_classes=10,
dropout=0.2
)
train_input_fn = tf1.estimator.inputs.numpy_input_fn(
x={"x": x_train},
y=y_train.astype(np.int32),
num_epochs=10,
batch_size=50,
shuffle=True,
)
test_input_fn = tf1.estimator.inputs.numpy_input_fn(
x={"x": x_test},
y=y_test.astype(np.int32),
num_epochs=10,
shuffle=False
)
train_spec = tf1.estimator.TrainSpec(input_fn=train_input_fn, max_steps=10)
eval_spec = tf1.estimator.EvalSpec(input_fn=test_input_fn,
steps=10,
throttle_secs=0)
###Output
_____no_output_____
###Markdown
Then, train and evaluate the model. Because this notebook is limited to a local run, the evaluation is interleaved with training, alternating between the two. However, if the estimator is used in a distributed setting, the evaluator will run as a dedicated evaluator task. For more information, check the [migration guide on distributed training](https://www.tensorflow.org/guide/migrate/multi_worker_cpu_gpu_training).
###Code
tf1.estimator.train_and_evaluate(estimator=classifier,
train_spec=train_spec,
eval_spec=eval_spec)
###Output
_____no_output_____
###Markdown
TensorFlow 2: Evaluating a Keras modelIn TensorFlow 2, if you use the Keras `Model.fit` API for training, you can evaluate the model with `tf.keras.utils.SidecarEvaluator`. You can also visualize the evaluation metrics in Tensorboard which is not shown in this guide.To help demonstrate this, let's first start by defining and training the model:
###Code
def create_model():
return tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10)
])
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model = create_model()
model.compile(optimizer='adam',
loss=loss,
metrics=['accuracy'],
steps_per_execution=10,
run_eagerly=True)
log_dir = tempfile.mkdtemp()
model_checkpoint = tf.keras.callbacks.ModelCheckpoint(
filepath=os.path.join(log_dir, 'ckpt-{epoch}'),
save_weights_only=True)
model.fit(x=x_train,
y=y_train,
epochs=1,
callbacks=[model_checkpoint])
###Output
_____no_output_____
###Markdown
Then, evaluate the model using `tf.keras.utils.SidecarEvaluator`. In real training, it's recommended to use a separate job to conduct the evaluation to free up worker resources for training.
###Code
data = tf.data.Dataset.from_tensor_slices((x_test, y_test))
data = data.batch(64)
tf.keras.utils.SidecarEvaluator(
model=model,
data=data,
checkpoint_dir=log_dir,
max_evaluations=1
).start()
###Output
_____no_output_____
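###Markdown
The guide notes that evaluation metrics can also be visualized in TensorBoard. One way to do that is sketched below, assuming `SidecarEvaluator` accepts a `callbacks` argument (as in recent TensorFlow releases); the `eval_logs` directory name is illustrative.
###Code
# Write evaluation metrics as TensorBoard summaries during sidecar evaluation
eval_log_dir = os.path.join(log_dir, 'eval_logs')
tf.keras.utils.SidecarEvaluator(
    model=model,
    data=data,
    checkpoint_dir=log_dir,
    max_evaluations=1,
    callbacks=[tf.keras.callbacks.TensorBoard(log_dir=eval_log_dir)]
).start()
# Inspect afterwards with: tensorboard --logdir <eval_log_dir>
###Output
_____no_output_____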
|
.ipynb_checkpoints/Text-Pre-Processing-checkpoint.ipynb
|
###Markdown
WORD COUNT
###Code
# ALL WORDS
import collections
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# Read input file, note the encoding is specified here
# It may be different in your text file
file = open('tweet-output-count.txt', encoding="utf8")
a= file.read()
# Stopwords
stopwords = set(line.strip() for line in open('stopwords.txt'))
stopwords = stopwords.union(set(['mr','mrs','one','two','said']))
# Instantiate a dictionary, and for every word in the file,
# Add to the dictionary if it doesn't exist. If it does, increase the count.
wordcount = {}
# To avoid counting variants of the same word separately, lower-case each token and strip punctuation characters.
for word in a.lower().split():
word = word.replace(".","")
word = word.replace(",","")
word = word.replace(":","")
word = word.replace("\"","")
word = word.replace("!","")
word = word.replace("“","")
word = word.replace("]","")
word = word.replace("[","")
word = word.replace("‘","")
word = word.replace("*","")
if word not in stopwords:
if word not in wordcount:
wordcount[word] = 1
else:
wordcount[word] += 1
# Print the most common words
n_print = int(input("How many most common words to print: "))
print("\nOK. The {} most common words are as follows\n".format(n_print))
word_counter = collections.Counter(wordcount)
for word, count in word_counter.most_common(n_print):
print(word, ": ", count)
# Close the file
file.close()
# Create a data frame of the most common words
# Draw a bar chart
lst = word_counter.most_common(n_print)
df = pd.DataFrame(lst, columns = ['Word', 'Count'])
df.plot.bar(x='Word',y='Count')
# COUNT NEUTRAL WORDS
import collections
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# Read input file, note the encoding is specified here
# It may be different in your text file
file = open('count_tweet_netral.txt', encoding="utf8")
a= file.read()
# Stopwords
stopwords = set(line.strip() for line in open('stopwords.txt'))
stopwords = stopwords.union(set(['mr','mrs','one','two','said']))
# Instantiate a dictionary, and for every word in the file,
# Add to the dictionary if it doesn't exist. If it does, increase the count.
wordcount = {}
# To avoid counting variants of the same word separately, lower-case each token and strip punctuation characters.
for word in a.lower().split():
word = word.replace(".","")
word = word.replace(",","")
word = word.replace(":","")
word = word.replace("\"","")
word = word.replace("!","")
word = word.replace("“","")
word = word.replace("]","")
word = word.replace("[","")
word = word.replace("‘","")
word = word.replace("*","")
if word not in stopwords:
if word not in wordcount:
wordcount[word] = 1
else:
wordcount[word] += 1
# Print the most common words
n_print = int(input("How many most common words to print: "))
print("\nOK. The {} most common words are as follows\n".format(n_print))
word_counter = collections.Counter(wordcount)
for word, count in word_counter.most_common(n_print):
print(word, ": ", count)
# Close the file
file.close()
# Create a data frame of the most common words
# Draw a bar chart
lst = word_counter.most_common(n_print)
df = pd.DataFrame(lst, columns = ['Word', 'Count'])
df.plot.bar(x='Word',y='Count')
# COUNT POSITIVE WORDS
import collections
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# Read input file, note the encoding is specified here
# It may be different in your text file
file = open('count_tweet_positif.txt', encoding="utf8")
a= file.read()
# Stopwords
stopwords = set(line.strip() for line in open('stopwords.txt'))
stopwords = stopwords.union(set(['mr','mrs','one','two','said']))
# Instantiate a dictionary, and for every word in the file,
# Add to the dictionary if it doesn't exist. If it does, increase the count.
wordcount = {}
# Split on whitespace, strip punctuation, and lower-case each word so duplicates are counted together.
for word in a.lower().split():
word = word.replace(".","")
word = word.replace(",","")
word = word.replace(":","")
word = word.replace("\"","")
word = word.replace("!","")
word = word.replace("“","")
word = word.replace("]","")
word = word.replace("[","")
word = word.replace("‘","")
word = word.replace("*","")
if word not in stopwords:
if word not in wordcount:
wordcount[word] = 1
else:
wordcount[word] += 1
# Print most common word
n_print = int(input("How many most common words to print: "))
print("\nOK. The {} most common words are as follows\n".format(n_print))
word_counter = collections.Counter(wordcount)
for word, count in word_counter.most_common(n_print):
print(word, ": ", count)
# Close the file
file.close()
# Create a data frame of the most common words
# Draw a bar chart
lst = word_counter.most_common(n_print)
df = pd.DataFrame(lst, columns = ['Word', 'Count'])
df.plot.bar(x='Word',y='Count')
# COUNT NEGATIVE WORDS
import collections
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# Read input file, note the encoding is specified here
# It may be different in your text file
file = open('count_tweet_negatif.txt', encoding="utf8")
a= file.read()
# Stopwords
stopwords = set(line.strip() for line in open('stopwords.txt'))
stopwords = stopwords.union(set(['mr','mrs','one','two','said']))
# Instantiate a dictionary, and for every word in the file,
# Add to the dictionary if it doesn't exist. If it does, increase the count.
wordcount = {}
# Split on whitespace, strip punctuation, and lower-case each word so duplicates are counted together.
for word in a.lower().split():
word = word.replace(".","")
word = word.replace(",","")
word = word.replace(":","")
word = word.replace("\"","")
word = word.replace("!","")
word = word.replace("“","")
word = word.replace("]","")
word = word.replace("[","")
word = word.replace("‘","")
word = word.replace("*","")
if word not in stopwords:
if word not in wordcount:
wordcount[word] = 1
else:
wordcount[word] += 1
# Print most common word
n_print = int(input("How many most common words to print: "))
print("\nOK. The {} most common words are as follows\n".format(n_print))
word_counter = collections.Counter(wordcount)
for word, count in word_counter.most_common(n_print):
print(word, ": ", count)
# Close the file
file.close()
# Create a data frame of the most common words
# Draw a bar chart
lst = word_counter.most_common(n_print)
df = pd.DataFrame(lst, columns = ['Word', 'Count'])
df.plot.bar(x='Word',y='Count')
###Output
How many most common words to print: 100
OK. The 100 most common words are as follows
covid : 125
orang : 22
pandemi : 22
perintah : 21
tinggal : 18
rakyat : 16
negara : 15
indonesia : 15
virus : 14
sakit : 13
mati : 13
vaksin : 13
tangan : 12
dunia : 12
gue : 11
rumah : 11
lonjak : 11
darurat : 11
ppkm : 11
pasien : 10
wabah : 9
akibat : 9
oksigen : 9
corona : 9
delta : 9
varian : 9
kena : 9
kabar : 8
rs : 8
allah : 8
ganas : 8
negeri : 8
sehat : 8
moga : 7
manusia : 7
lawan : 7
jokowi : 7
papar : 6
mohon : 6
krn : 6
cari : 6
obat : 6
tular : 6
sebar : 6
kes : 6
gak : 6
ruang : 6
isolasi : 6
amah : 5
bantu : 5
laku : 5
nyata : 5
situasi : 5
jakarta : 5
masuk : 5
takut : 5
prokes : 5
warga : 5
india : 5
infeksi : 5
rekor : 5
korban : 5
jalan : 5
masyarakat : 5
pasal : 5
ekonomi : 5
sedih : 4
aja : 4
baik : 4
paksa : 4
samator : 4
medis : 4
video : 4
kayak : 4
tutup : 4
tingkat : 4
kali : 4
kaya : 4
duka : 4
dgn : 4
hadap : 4
krisis : 4
tp : 4
hilang : 4
korupsi : 4
pimpin : 4
dpt : 4
ledak : 4
selamat : 4
mandiri : 4
penuh : 4
lindung : 4
diy : 4
atas : 4
kejut : 4
cetak : 4
tinggi : 4
sejarah : 4
langsung : 4
gagal : 4
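###Markdown
The four blocks above strip punctuation with a long chain of `str.replace` calls. As an optional side note (not part of the original pipeline), the same cleaning can be written more compactly with a regular expression; the sketch below assumes `stopwords` has been loaded exactly as in the cells above.
###Code
import collections
import re

# Optional helper (not in the original notebook): compact cleaning with a regex
def count_words(text, stopwords):
    # Remove the same punctuation characters handled by the replace chain above
    cleaned = re.sub(r'[.,:"!“\[\]‘*]', '', text.lower())
    # Count every remaining token that is not a stopword
    return collections.Counter(w for w in cleaned.split() if w not in stopwords)

# Hypothetical re-run of the "ALL" block with the helper:
# with open('tweet-output-count.txt', encoding='utf8') as f:
#     word_counter = count_words(f.read(), stopwords)
# word_counter.most_common(10)
###Output
_____no_output_____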
###Markdown
TF-IDF
###Code
import pandas as pd
import numpy as np
TWEET_DATA = pd.read_csv("TweetClean_Label - no stopword.csv", usecols=["Label", "Tweet"])
TWEET_DATA.columns = ["label", "Tweet"]
TWEET_DATA.head()
###Output
_____no_output_____
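###Markdown
The cell above only loads the labelled tweets; the TF-IDF weighting itself is not shown in this excerpt. The following is a minimal sketch (an illustration, not the original author's implementation) of how it could be computed with scikit-learn's `TfidfVectorizer`; `get_feature_names_out` assumes scikit-learn >= 1.0.
###Code
from sklearn.feature_extraction.text import TfidfVectorizer

# Illustrative TF-IDF sketch (not the original author's code)
vectorizer = TfidfVectorizer()
tfidf_matrix = vectorizer.fit_transform(TWEET_DATA["Tweet"].astype(str))

# One row per tweet, one column per term
print(tfidf_matrix.shape)
print(vectorizer.get_feature_names_out()[:10])
###Output
_____no_output_____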
|
tutorials/text-as-data/exercises/pdf-generator/Generate markdown files.ipynb
|
###Markdown
Load the dataset ...
###Code
import pandas as pd
# TODO. Merge it.
review_df = pd.read_csv("../data/yelp_review.csv").sample(10000)
business_df = pd.read_csv("../data/yelp_business.csv")
df = review_df.merge(business_df, on='business_id', suffixes=('', '_business'))
df.head(2)
###Output
_____no_output_____
###Markdown
Define the settings
###Code
NUM_PDF = 100
FOLDER_NAME = "pdf_reviews"
FOLDER_FILEPATH = f"../data/{FOLDER_NAME}/"
# create the directory if it does not exist
from pathlib import Path
Path(FOLDER_FILEPATH).mkdir(parents=True, exist_ok=True)
FILENAME = "review_{}.md"
###Output
_____no_output_____
###Markdown
Define the file structure
###Code
file_structure = """
---
title: {name}
geometry: margin=2cm
output: pdf_document
fontsize: 12pt
---
_{address}_, _{state}_
*({latitude}, {longitude})*
- date: {date}
- stars:: {stars}
- categories: {categories}
Review:
{text}
"""
###Output
_____no_output_____
###Markdown
Generate the markdown files ...
###Code
for i in range(NUM_PDF):
review = df.iloc[i]
content = file_structure.format(name=review['name'],
address=review.address,
state=review.state,
latitude=review.latitude,
longitude=review.longitude,
date=review.date,
stars=review.stars,
categories=review.categories,
text=review.text)
doc_filepath = FOLDER_FILEPATH + FILENAME.format(i)
with open(doc_filepath, 'w') as f:
f.write(content)
###Output
_____no_output_____
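###Markdown
The contents of `to_pdf.sh` are not included in this notebook. A rough, assumed equivalent in Python (converting each generated markdown file with pandoc via `subprocess`) might look like the sketch below; the actual shell script may differ, and pandoc plus a LaTeX engine must be installed.
###Code
import subprocess
from pathlib import Path

# Assumed equivalent of to_pdf.sh (illustrative only): convert every generated .md file to a PDF with pandoc
for md_file in sorted(Path(FOLDER_FILEPATH).glob("*.md")):
    pdf_file = md_file.with_suffix(".pdf")
    subprocess.run(["pandoc", str(md_file), "-o", str(pdf_file)], check=True)
###Output
_____no_output_____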
###Markdown
Convert markdown files to pdfs
###Code
!./to_pdf.sh
###Output
_____no_output_____
###Markdown
Load text from the first review
###Code
from pdfminer.high_level import extract_text
review_0 = extract_text(FOLDER_FILEPATH + "review_0.pdf")
review_0
###Output
_____no_output_____
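###Markdown
To check the round trip for more than one file, the same extraction can be applied to every generated PDF. The short loop below is an optional extension, not part of the original exercise, and assumes all `NUM_PDF` files were converted successfully.
###Code
# Optional check (not in the original notebook): extract the text of every generated PDF
extracted = [extract_text(FOLDER_FILEPATH + f"review_{i}.pdf") for i in range(NUM_PDF)]
len(extracted), extracted[0][:80]
###Output
_____no_output_____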
|
examples/Debentures.ipynb
|
###Markdown
Debentures Import the 'finance' module to price debentures.
###Code
%run ../docs/fibra.py
###Output
_____no_output_____
###Markdown
**( ! )** NOTE: Debentures do not always follow a standard issuance pattern like government bonds, so it is important to read the indenture to understand the specific features of each security. This module can price most of the debentures available in the market, namely those remunerated by **percentage of DI**, **DI spread**, and **IPCA spread**. Global parameters:**DATA_REF** = Reference date of the calculation.**YIELD_CURVE_PATH** = Path to the file containing the vertices and rates of the interest rate futures contracts.
###Code
DATA_REF = "2021-03-18"
YIELD_CURVE_PATH = "../interest_rates_futures/20210318-DIFUT.csv"
###Output
_____no_output_____
###Markdown
DI Spread Debentures Grupo Natura - Code: **NATU27**- Series 2- 7th Issue- ISIN: **BRNATUDBS081**- ANBIMA: https://data.anbima.com.br/debentures/emissores/71673990000177/emissoes/7/series/NATU27/caracteristicas Fill in the security's data:- ***DATA***: Reference date for calculating the asset price.- ***VNE***: Nominal value at issuance.- ***VNA***: Nominal value updated to the reference date.- ***PU***: Unit price of the security at par (marked to the curve).- ***TAXA***: ANBIMA indicative (market) rate.- ***FREQ***: Payment frequency (1: annual, 2: semi-annual, 3: quarterly, etc.).
###Code
DATA_natura = DATA_REF
VNE_natura = 10000
VNA_natura = 10000
PU_natura = 10170.808970
TAXA_natura = 0.7883
FREQ_natura = 2
###Output
_____no_output_____
###Markdown
**( ! )** Provide the amortization schedule as a **percentage of the updated nominal value - VNA** (1 for 100%, 0.50 for 50%, 0.33 for 33.33%, etc.)
###Code
cal_natura = {"2021-09-27": 1}
natura = DebentureSpread(date=DATA_natura, maturity="2021-09-25", vne=VNE_natura, vna=VNA_natura, pu=PU_natura,
issue_spread=1.75, market_spread=TAXA_natura, freq=FREQ_natura, redemption=cal_natura,
yield_curve_file=YIELD_CURVE_PATH)
natura.des()
cf_natura = natura.cashflow()
cf_natura.head()
natura.get("price")
###Output
_____no_output_____
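###Markdown
A quick sanity check (an addition to the original example) is to compare the model price with the par price PU to see whether the security trades at a premium or a discount to the curve; this assumes `get("price")` returns a numeric value, as displayed in the previous cell.
###Code
# Illustrative comparison between the model price and the par (curve) price; not part of the original notebook
model_price = natura.get("price")
premium_pct = (model_price / PU_natura - 1) * 100
print(f"Model price: {model_price:.6f} | Par price: {PU_natura:.6f} | Premium/discount: {premium_pct:.4f}%")
###Output
_____no_output_____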
###Markdown
Natura Cosméticos- Code: NATUC0- Series 3- 10th Issue- ISIN: **BRNATUDBS0F6**- ANBIMA: https://data.anbima.com.br/debentures/emissores/71673990000177/emissoes/10/series/NATUC0/caracteristicas
###Code
DATA_natuco = DATA_REF
VNE_natuco = 10000
VNA_natuco = 10000
PU_natuco = 10016.822800
TAXA_natuco = 1.5847
FREQ_natuco = 2
###Output
_____no_output_____
###Markdown
**( ! )** Provide the amortization schedule as a **percentage of the updated nominal value - VNA** (1 for 100%, 0.50 for 50%, 0.33 for 33.33%, etc.)
###Code
cal_natuco = {"2024-08-26": 1}
natuco = DebentureSpread(date=DATA_natuco, maturity="2024-08-26", vne=VNE_natuco, vna=VNA_natuco, pu=PU_natuco,
issue_spread=1.1500, market_spread=TAXA_natuco, freq=FREQ_natuco, redemption=cal_natuco,
yield_curve_file=YIELD_CURVE_PATH)
natuco.des()
cf_natuco = natuco.cashflow()
cf_natuco.head(10)
natuco.get("price")
###Output
_____no_output_____
###Markdown
Percentage-of-DI Debentures An example of a debenture remunerated by a percentage of the DI rate. BR Malls- Code: **BRML17**- Series 1- 7th Issue- ISIN: **BRBRMLDBS076**- ANBIMA: https://data.anbima.com.br/debentures/emissores/06977745000191/emissoes/7/series/BRML17/caracteristicas
###Code
DATA_brm = DATA_REF
VNE_brm = 10000
VNA_brm = 10000
PU_brm = 10004.015200
TAXA_brm = 124.9368
FREQ_brm = 2
cal_brm = {"2025-03-11": 1}
###Output
_____no_output_____
###Markdown
For debentures remunerated by the interbank deposit rate (DI), the rates for the coupon payment dates need to be estimated. The function allows two options: 1) importing a csv/txt file (`yield_curve_file`) containing the DI vertices and rates, where the expected rates are obtained by cubic interpolation of the data - **Cubic Spline**; 2) manual input, as in the example below, where the vertices and their respective rates are defined by the user through the `vertices` and `exp_rates` parameters.
###Code
vertices = [123, 247, 374, 499, 624, 747, 876, 1000]
rates = [4.020, 5.190, 6.190, 6.890, 7.490, 7.840, 8.120, 8.340]
brm = DebentureDI(date=DATA_brm, maturity="2025-03-11", vne=VNE_brm, vna=VNA_brm, pu=PU_brm,
issue_rate=107.5000, market_rate=TAXA_brm, freq=FREQ_brm, redemption=cal_brm,
vertices=vertices, exp_rates=rates)
brm.des()
cf_brm = brm.cashflow()
cf_brm.head(10)
brm.get("price")
###Output
_____no_output_____
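###Markdown
To make the cubic-spline interpolation mentioned above concrete, the short sketch below (an illustration only, not the library's internal code) interpolates the manually supplied `vertices` and `rates` with `scipy.interpolate.CubicSpline` and evaluates the expected DI rate at an arbitrary number of days.
###Code
from scipy.interpolate import CubicSpline

# Illustrative cubic-spline interpolation of the DI curve defined above (not part of the original notebook)
curve = CubicSpline(vertices, rates)

# Expected rate (% p.a.) at, for example, 300 days
print(round(float(curve(300)), 4))
###Output
_____no_output_____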
###Markdown
IPCA Spread Debentures Equatorial Energia Pará - Celpa - Code: **CLPP13**- Series 1- 3rd Issue- ISIN: **BRCELPDBS018**- ANBIMA: https://data.anbima.com.br/debentures/emissores/04895728000180/emissoes/3/series/CLPP13/caracteristicas Fill in the security's data:- ***DATA***: Reference date for calculating the asset price.- ***VNE***: Nominal value at issuance.- ***VNA***: Nominal value updated to the reference date.- ***PU***: Unit price of the security at par (marked to the curve).- ***TAXA***: ANBIMA indicative (market) rate.- ***FREQ***: Payment frequency (1: annual, 2: semi-annual, 3: quarterly, etc.).
###Code
DATA_clpp = DATA_REF
VNE_clpp = 1000
VNA_clpp = 1181.115910
PU_clpp = 1200.412954
TAXA_clpp = 0.4221
FREQ_clpp = 1
###Output
_____no_output_____
###Markdown
**( ! )** Provide the amortization schedule as a **percentage of the updated nominal value - VNA** (1 for 100%, 0.50 for 50%, 0.33 for 33.33%, etc.)
###Code
cal_celpa = {"2021-12-15": 1}
celpa = DebentureIPCA(date=DATA_clpp, maturity="2021-12-15", vne=VNE_clpp, vna=VNA_clpp, pu=PU_clpp,
issue_rate=6.6971, market_rate=TAXA_clpp, freq=FREQ_clpp, redemption=cal_celpa)
celpa.des()
cf_celpa = celpa.cashflow()
cf_celpa.head()
celpa.get("price")
###Output
_____no_output_____
###Markdown
Algar Telecom - Code: **ALGA26**- Series 2- 6th Issue- ISIN: **BRALGTDBS050**- ANBIMA: https://data.anbima.com.br/debentures/emissores/71208516000174/emissoes/6/series/ALGA26/caracteristicas
###Code
DATA_alga = DATA_REF
VNE_alga = 1000
VNA_alga = 1167.191940
PU_alga = 1168.115982
TAXA_alga = 2.5468
FREQ_alga = 1
cal_alga = {"2023-03-15": 0.50, "2024-03-15": 1}
alga = DebentureIPCA(date=DATA_alga, maturity="2024-03-15", vne=VNE_alga, vna=VNA_alga, pu=PU_alga,
issue_rate=6.8734, market_rate=TAXA_alga, freq=FREQ_alga, redemption=cal_alga)
alga.des()
cf_alga = alga.cashflow()
cf_alga.head()
alga.get("price")
###Output
_____no_output_____
###Markdown
Ecovias - Code: **ECOV22**- Series 2- 2nd Issue - ISIN: **BRECOVDBS044**- ANBIMA: https://data.anbima.com.br/debentures/emissores/02509491000126/emissoes/2/series/ECOV22/caracteristicas
###Code
DATA_ecov = DATA_REF
VNE_ecov = 1000
VNA_ecov = 1532.817280
PU_ecov = 1592.849187
TAXA_ecov = 2.5695
FREQ_ecov = 1
###Output
_____no_output_____
###Markdown
**( ! )** The ***fixed*** payment type means the amortizations are fixed fractions of the VNA. E.g.: with 3 payments, each one corresponds to 1/3 of the VNA. By default the `payment` parameter is ***adjust***.
###Code
tipo_pagamento = "fixed"
cal_ecov = {"2022-04-18": 0.3333, "2023-04-17": 0.3333, "2024-04-15": 0.3334}
ecov = DebentureIPCA(date=DATA_ecov, maturity="2024-04-15", vne=VNE_ecov, vna=VNA_ecov, pu=PU_ecov,
issue_rate=4.2800, market_rate=TAXA_ecov, freq=FREQ_ecov, redemption=cal_ecov, payment="fixed")
ecov.des()
cf_ecov = ecov.cashflow()
cf_ecov.head()
ecov.get("price")
###Output
_____no_output_____
###Markdown
Ecorodovias - Code: **ERDV17**- Series 1- 7th Issue- ISIN: **BRERDVDBS0E8**- ANBIMA: https://data.anbima.com.br/debentures/emissores/08873873000110/emissoes/7/series/ERDV17/caracteristicas
###Code
DATA_ecor = DATA_REF
VNE_ecor = 1000
VNA_ecor = 1121.243100
PU_ecor = 1183.949760
TAXA_ecor = 4.3061
FREQ_ecor = 1
cal_ecor = {"2024-06-17": 0.50, "2025-06-16": 1}
ecor = DebentureIPCA(date=DATA_ecor, maturity="2025-06-15", vne=VNE_ecor, vna=VNA_ecor, pu=PU_ecor,
issue_rate=7.4438, market_rate=TAXA_ecor, freq=FREQ_ecor, redemption=cal_ecor)
ecor.des()
cf_ecor = ecor.cashflow()
cf_ecor.head()
ecor.get("price")
###Output
_____no_output_____
###Markdown
MRV Engenharia - Code: **MRVE39**- Series 3- 9th Issue- ISIN: **BRMRVEDBS0C1**- ANBIMA: https://data.anbima.com.br/debentures/emissores/08343492000120/emissoes/9/series/MRVE39/caracteristicas
###Code
DATA_mrv = DATA_REF
VNE_mrv = 10000
VNA_mrv = 11716.677900
PU_mrv = 11794.337470
TAXA_mrv = 2.5794
FREQ_mrv = 2
cal_mrv = {"2022-02-15": 1}
mrv = DebentureIPCA(date=DATA_mrv, maturity="2022-02-15", vne=VNE_mrv, vna=VNA_mrv, pu=PU_mrv,
issue_rate=8.2502, market_rate=TAXA_mrv, freq=FREQ_mrv, redemption=cal_mrv)
mrv.des()
cf_mrv = mrv.cashflow()
cf_mrv.head()
mrv.get("price")
###Output
_____no_output_____
###Markdown
Arteris - Code: **ARTR19**
###Code
DATA_art = DATA_REF
VNE_art = 1000
VNA_art = 1035.550370
PU_art = 1036.133122
TAXA_art = 4.4851
FREQ_art = 2
cal_art = {"2026-09-15": 0.50, "2027-09-15": 1}
art = DebentureIPCA(date=DATA_art, maturity="2027-09-15", vne=VNE_art, vna=VNA_art, pu=PU_art,
issue_rate=4.8392, market_rate=TAXA_art, freq=FREQ_art, redemption=cal_art)
art.des()
cf_art = art.cashflow()
cf_art.head()
###Output
_____no_output_____
###Markdown
ECHOENERGIA - Code: **ECHP11**
###Code
DATA_ech = DATA_REF
VNE_ech = 1000
VNA_ech = 1059.774480
PU_ech = 1114.748040
TAXA_ech = 5.4258
FREQ_ech = 1
cal_ech = {"2023-06-15": 0.0227, "2024-06-17": 0.036836, "2025-06-16": 0.136938, "2026-06-15": 0.164697,
"2027-06-15": 0.264220, "2028-06-16": 0.341478, "2029-06-15": 0.496046, "2030-06-17": 1}
ech = DebentureIPCA(date=DATA_ech, maturity="2030-06-15", vne=VNE_ech, vna=VNA_ech, pu=PU_ech,
issue_rate=6.9, market_rate=TAXA_ech, freq=FREQ_ech, redemption=cal_ech)
ech.des()
cf_ech = ech.cashflow()
cf_ech.head()
###Output
_____no_output_____
###Markdown
MANAUS AMBIENTAL - AEGEA - Code: **MNAU13**
###Code
DATA_mna = DATA_REF
VNE_mna = 1000
VNA_mna = 1057.759110
PU_mna = 1073.912775
TAXA_mna = 4.7006
FREQ_mna = 2
cal_mna = {"2025-06-16": 1}
mna = DebentureIPCA(date=DATA_mna, maturity="2025-06-16", vne=VNE_mna, vna=VNA_mna, pu=PU_mna,
issue_rate=6.25, market_rate=TAXA_mna, freq=FREQ_mna, redemption=cal_mna)
mna.des()
cf_mna = mna.cashflow()
cf_mna.head()
###Output
_____no_output_____
###Markdown
OMEGA ENERGIA - Code: **OMGE31**
###Code
DATA_omg = DATA_REF
VNE_omg = 1000
VNA_omg = 1080.650430
PU_omg = 1131.094958
TAXA_omg = 4.7234
FREQ_omg = 1
cal_omg = {"2025-05-15": 0.40, "2026-05-15": 0.60}
omg = DebentureIPCA(date=DATA_omg, maturity="2026-05-15", vne=VNE_omg, vna=VNA_omg, pu=PU_omg,
issue_rate=5.6, market_rate=TAXA_omg, freq=FREQ_omg, redemption=cal_omg, payment="fixed")
omg.des()
cf_omg = omg.cashflow()
cf_omg.head(10)
###Output
_____no_output_____
###Markdown
USINA TERMELETRICA PAMPA SUL - Code: **UTPS12**
###Code
DATA_utp = DATA_REF
VNE_utp = 1000
VNA_utp = 1035.128180
PU_utp = 1050.427852
TAXA_utp = 4.6940
FREQ_utp = 2
cal_utp = {"2021-10-15": 0.025, "2022-04-18": 0.028115, "2022-10-17": 0.028928, "2023-04-17": 0.043617,
"2023-10-16": 0.045606, "2024-04-15": 0.092433, "2024-10-15": 0.101846, "2025-04-15": 0.112301,
"2025-10-15": 0.126508, "2026-04-15": 0.154051, "2026-10-15": 0.182105, "2027-04-15": 0.216640,
"2027-10-15": 0.276552, "2028-04-17": 1}
utp = DebentureIPCA(date=DATA_utp, maturity="2028-04-15", vne=VNE_utp, vna=VNA_utp, pu=PU_utp,
issue_rate=4.5, market_rate=TAXA_utp, freq=FREQ_utp, redemption=cal_utp)
utp.des()
cf_utp = utp.cashflow()
cf_utp.head(20)
###Output
_____no_output_____
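###Markdown
As an optional roll-up (not part of the original notebook), the model prices of the instruments priced above can be collected into a single DataFrame for easier comparison; this assumes `get("price")` works for each object exactly as shown in the earlier cells.
###Code
import pandas as pd

# Illustrative summary table of the model prices computed above (not part of the original notebook)
bonds = {"NATU27": natura, "NATUC0": natuco, "BRML17": brm, "CLPP13": celpa,
         "ALGA26": alga, "ECOV22": ecov, "ERDV17": ecor, "MRVE39": mrv}
summary = pd.DataFrame({code: {"price": bond.get("price")} for code, bond in bonds.items()}).T
summary
###Output
_____no_output_____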
|
Ch1/SciPy_NumPy/Ch_1_3_3_MoreElaborateArrays_Hy.ipynb
|
###Markdown
1.3.3.1. More data types Casting "Bigger" type wins in mixed-type operations:
###Code
(import [numpy :as np])
(import [helpers [*]])
(require [helpers [*]])
(+ (np.array [1 2 3]) 1.5)
###Output
_____no_output_____
###Markdown
Assignment never changes the type!
###Code
(setv a (np.array [1 2 3]))
a.dtype
(setv #s(a 0) 8.9) ; <-- float is truncated to integer
a
###Output
_____no_output_____
###Markdown
Forced casts:
###Code
(setv a (np.array [1.7 1.2 1.6]))
(setv b (a.astype int)) ; <-- cast to integer via truncation
b
###Output
_____no_output_____
###Markdown
Rounding:
###Code
(setv a (np.array [1.2 1.5 1.6 2.5 3.5 4.5]))
(setv b (np.around a))
b ; still floating-point
(setv c ((. (np.around a) astype) int))
c
###Output
_____no_output_____
###Markdown
Different data type sizesIntegers (signed):|Type |Definition || ------- |:--------------------------------------:||**int8** |8 bits ||**int16**|16 bits ||**int32**|32 bits (same as int on 32-bit platform)||**int64**|64 bits (same as int on 64-bit platform)|
###Code
(. (np.array [1] :dtype int) dtype)
(. (np.iinfo np.int32) max)
(- (** 2 31) 1)
###Output
_____no_output_____
###Markdown
Unsigned integers:|Type |Definition|| -------- |:--------:||**uint8** |8 bits ||**uint16**|16 bits ||**uint32**|32 bits ||**uint64**|64 bits |
###Code
(. (np.iinfo np.uint32) max)
(- (** 2 32) 1)
###Output
_____no_output_____
###Markdown
Floating-point numbers:|Type |Description || ---------- |:------------------------------------------------------:||**float16** |16 bits ||**float32** |32 bits ||**float64** |64 bits (same as float) ||**float96** |96 bits, platform-dependent (same as **np.longdouble**) ||**float128**|128 bits, platform-dependent (same as **np.longdouble**)|
###Code
(. (np.finfo np.float32) eps)
(. (np.finfo np.float64) eps)
(= (+ (np.float32 1e-8) (np.float32 1) 1))
(= (+ (np.float64 1e-8) (np.float64 1)) 1)
###Output
_____no_output_____
###Markdown
Long integers Python 2 has a specific type for `long` integers that cannot overflow, which are represented with an `L` immediately after the number (no space). In Python 3, however, all integers are long and, thus, cannot overflow.
###Code
(. (np.iinfo np.int64) max)
(- (** 2 63) 1)
###Output
_____no_output_____
###Markdown
Complex floating-point numbers:|Type |Description || ------------ |:------------------------------------:||**complex64** |two 32-bit floats ||**complex128**|two 64-bit floats ||**complex192**|two 96-bit floats, platform-dependent ||**complex256**|two 128-bit floats, platform-dependent| Smaller data typesIf you don't know you need special data types, then you probably don't.Comparison on using `float32` instead of `float64`:- Half the size in memory on disk- Half the memory bandwidth required (may be a bit faster in some operations)
###Code
(setv a (np.zeros [ 10000000 ] :dtype np.float64))
(setv b (np.zeros [ 10000000 ] :dtype np.float32))
(do (import [time [time]])
(setv time0 (time))
(* a a)
(setv time1 (time))
(- time1 time0))
(do (import [time [time]])
(setv time2 (time))
(* b b)
(setv time3 (time))
(- time3 time2))
###Output
_____no_output_____
###Markdown
- **But**: bigger rounding errors - sometimes in surprising places (i.e., don't use them unless you really need them) 1.3.3.2 Structured data types|Type |Description || ----------- |:------------------:||`sensor_code`|(4-character string)||`position` |(float) ||`value` |(float) |
###Code
(setv samples (np.zeros [6] :dtype [(, "sensor_code" "S4")
(, "position" float)
(, "value" float)]))
samples
samples.ndim
samples.shape
samples.dtype.names
(setv #s(samples :) [(, "ALFA" 1 0.37)
(, "BETA" 1 0.11)
(, "TAU" 1 0.13)
(, "ALFA" 1.5 0.37)
(, "ALFA" 3 0.11)
(, "TAU" 1.2 0.13)])
samples
###Output
_____no_output_____
###Markdown
Field access works by indexing with field names:
###Code
(get samples "sensor_code")
(get samples "value")
#s(samples 0)
(get #s(samples 0) "sensor_code")
(setv (get #s(samples 0) "sensor_code") "TAU")
#s(samples 0)
###Output
_____no_output_____
###Markdown
Multiple fields at once:
###Code
(get samples ["position" "value"])
###Output
_____no_output_____
###Markdown
Fancy indexing works, as usual:
###Code
#s(samples (= (get samples "sensor_code") b"ALFA"))
###Output
_____no_output_____
###Markdown
**Note:** There are a bunch of other syntaxes for constructing structured arrays, see [here](http://docs.scipy.org/doc/numpy/user/basics.rec.html) and [here](http://docs.scipy.org/doc/numpy/reference/arrays.dtypes.htmlspecifying-and-constructing-data-types) 1.3.3.3. `maskedarray`: dealing with (propagation of) missing data - For floats, one could use `NaN`s, but masks work for all types:
###Code
(setv x (np.ma.array [1 2 3 4] :mask [0 1 0 1]))
x
(setv y (np.ma.array [1 2 3 4] :mask [0 1 1 1]))
y
(+ x y)
###Output
_____no_output_____
###Markdown
- Masking versions of common functions:
###Code
(np.ma.sqrt [1 -1 2 -2])
###Output
_____no_output_____
|
SAR_Training/English/Ecosystems/Exercise1-ExploreSARTimeSeries_Example.ipynb
|
###Markdown
Exploring SAR Data and SAR Time Series Analysis with Supplied Data Franz J Meyer; University of Alaska Fairbanks & Josef Kellndorfer, Earth Big Data, LLC This notebook introduces you to the analysis of deep multi-temporal SAR image data stacks in the framework of *Jupyter Notebooks*. The Jupyter Notebook environment is easy to launch in any web browser for interactive data exploration with provided or new training data. Notebooks are composed of text written in a combination of executable python code and markdown formatting including latex style mathematical equations. Another advantage of Jupyter Notebooks is that they can easily be expanded, changed, and shared with new data sets or newly available time series steps. Therefore, they provide an excellent basis for collaborative and repeatable data analysis. This notebook covers the following data analysis concepts:- How to load time series stacks into Jupyter Notebooks and how to explore image content using basic functions such as mean value calculation and histogram analysis.- How to apply calibration constants to convert initial digital number (DN) data into calibrated radar cross section information.- How to subset images and create time series information of calibrated SAR amplitude values.- How to explore the time-series information in SAR data stacks for environmental analysis. Important Notes about JupyterHub Your JupyterHub server will automatically shut down when left idle for more than 1 hour. Your notebooks will not be lost but you will have to restart their kernels and re-run them from the beginning. You will not be able to seamlessly continue running a partially run notebook.
###Code
%%javascript
var kernel = Jupyter.notebook.kernel;
var command = ["notebookUrl = ",
"'", window.location, "'" ].join('')
// alert(command)
kernel.execute(command)
from IPython.display import Markdown
from IPython.display import display
env = !echo $CONDA_PREFIX
if env[0] == '':
env[0] = 'Python 3 (base)'
if env[0] != '/home/jovyan/.local/envs/rtc_analysis':
display(Markdown(f'<text style=color:red><strong>WARNING:</strong></text>'))
display(Markdown(f'<text style=color:red>This notebook should be run using the "rtc_analysis" conda environment.</text>'))
display(Markdown(f'<text style=color:red>It is currently using the "{env[0].split("/")[-1]}" environment.</text>'))
display(Markdown(f'<text style=color:red>Select the "rtc_analysis" from the "Change Kernel" submenu of the "Kernel" menu.</text>'))
display(Markdown(f'<text style=color:red>If the "rtc_analysis" environment is not present, use <a href="{notebookUrl.split("/user")[0]}/user/{user[0]}/notebooks/conda_environments/Create_OSL_Conda_Environments.ipynb"> Create_OSL_Conda_Environments.ipynb </a> to create it.</text>'))
display(Markdown(f'<text style=color:red>Note that you must restart your server after creating a new environment before it is usable by notebooks.</text>'))
###Output
_____no_output_____
###Markdown
0. Importing Relevant Python Packages In this notebook we will use the following scientific libraries: Pandas is a Python library that provides high-level data structures and a vast variety of tools for analysis. The great feature of this package is the ability to translate rather complex operations with data into one or two commands. Pandas contains many built-in methods for filtering and combining data, as well as the time-series functionality. GDAL is a software library for reading and writing raster and vector geospatial data formats. It includes a collection of programs tailored for geospatial data processing. Most modern GIS systems (such as ArcGIS or QGIS) use GDAL in the background. NumPy is one of the principal packages for scientific applications of Python. It is intended for processing large multidimensional arrays and matrices, and an extensive collection of high-level mathematical functions and implemented methods makes it possible to perform various operations with these objects. Matplotlib is a low-level library for creating two-dimensional diagrams and graphs. With its help, you can build diverse charts, from histograms and scatterplots to non-Cartesian coordinates graphs. Moreover, many popular plotting libraries are designed to work in conjunction with matplotlib.
###Code
%%capture
import os # for chdir, getcwd, path.basename, path.exists
import pandas as pd # for DatetimeIndex
from osgeo import gdal # for GetRasterBand, Open, ReadAsArray
import numpy as np #for log10, mean, percentile, power
%matplotlib inline
import matplotlib.pylab as plb # for add_patch, add_subplot, figure, hist, imshow, set_title, xaxis,_label, text
import matplotlib.pyplot as plt # for add_subplot, axis, figure, imshow, legend, plot, set_axis_off, set_data,
# set_title, set_xlabel, set_ylabel, set_ylim, subplots, title, twinx
import matplotlib.patches as patches # for Rectangle
import matplotlib.animation as an # for FuncAnimation
from matplotlib import rc
import asf_notebook as asfn
asfn.jupytertheme_matplotlib_format()
from IPython.display import HTML
plt.rcParams.update({'font.size': 12})
###Output
_____no_output_____
###Markdown
1. Load Data Stack This notebook will be using a 70-image deep C-band SAR data stack over Nepal for a first experience with time series processing. The C-band data were acquired by the Sentinel-1 sensor and are available to us through the services of the Alaska Satellite Facility. Nepal is an interesting site for this analysis due to the significant seasonality of precipitation that is characteristic for this region. Nepal is said to have five seasons: spring, summer, monsoon, autumn and winter. Precipitation is low in the winter (November - March) and peaks dramatically in the summer, with top rain rates in July, August, and September (see figure to the right). As SAR is sensitive to changes in soil moisture, and vegetation structure, these weather patterns have a noticeable impact on the Radar Cross Section ($\sigma$) time series information. We will analyze the variation of $\sigma$ values over time and will interpret them in the context of the weather information shown in the figure to the right. Before we get started, let's first create a working directory for this analysis and change into it:
###Code
path = "/home/jovyan/notebooks/SAR_Training/English/Ecosystems/data_time_series_example"
asfn.new_directory(path)
os.chdir(path)
print(f"Current working directory: {os.getcwd()}")
###Output
_____no_output_____
###Markdown
We will retrieve the relevant data from an Amazon Web Service (AWS) cloud storage bucket using the following command:
###Code
s3_path = 's3://asf-jupyter-data/time_series.zip'
time_series_path = os.path.basename(s3_path)
!aws --region=us-east-1 --no-sign-request s3 cp $s3_path $time_series_path
###Output
_____no_output_____
###Markdown
Now, let's unzip the file (overwriting previous extractions) and clean up after ourselves:
###Code
if asfn.path_exists(time_series_path):
asfn.asf_unzip(os.getcwd(), time_series_path)
os.remove(time_series_path)
###Output
_____no_output_____
###Markdown
The following lines set path variables needed for data processing. This step is not necessary but it saves a lot of extra typing later. Define variables for the main data directory as well as for the files containing data and image information:
###Code
datadirectory = 'time_series/S32644X696260Y3052060sS1-EBD'
datefile = 'S32644X696260Y3052060sS1_D_vv_0092_mtfil.dates'
imagefile = 'S32644X696260Y3052060sS1_D_vv_0092_mtfil.vrt'
imagefile_cross = 'S32644X696260Y3052060sS1_D_vh_0092_mtfil.vrt'
###Output
_____no_output_____
###Markdown
2. Switch to the Data Directory: We now move into the data directory:
###Code
if asfn.path_exists(datadirectory):
os.chdir(datadirectory)
print(f"current directory: {os.getcwd()}")
#!ls *.vrt #Uncomment this line to see a List of the files
###Output
_____no_output_____
###Markdown
3. Assess Image Acquisition Dates Before we start analyzing the available image data, we want to examine the content of our data stack. First, we read the image acquisition dates for all files in the time series and create a *pandas* date index.
###Code
if asfn.path_exists(datefile):
with open(datefile, 'r') as f:
dates = f.readlines()
tindex = pd.DatetimeIndex(dates)
###Output
_____no_output_____
###Markdown
From the date index, we print the band numbers and dates:
###Code
if asfn.path_exists(imagefile):
print('Bands and dates for', imagefile)
for i, d in enumerate(tindex):
print("{:4d} {}".format(i+1, d.date()),end=' ')
if (i+1)%5 == 1: print()
###Output
_____no_output_____
###Markdown
4. Explore the Available Image Data To open an image file, use the gdal.Open() function. This returns a variable (img) that can be used for further interactions with the file:
###Code
if asfn.path_exists(imagefile):
img = gdal.Open(imagefile)
###Output
_____no_output_____
###Markdown
To explore the image (number of bands, pixels, lines), you can use several functions associated with the image object (img) created in the last code cell:
###Code
print(img.RasterCount) # Number of Bands
print(img.RasterXSize) # Number of Pixels
print(img.RasterYSize) # Number of Lines
###Output
_____no_output_____
###Markdown
4.1 Reading Data from an Image Band To access any band in the image, use GDAL's *GetRasterBand(x)* function. Replace the band_num value with the number of the band you wish to access.
###Code
band_num = 70
band = img.GetRasterBand(band_num)
###Output
_____no_output_____
###Markdown
Once a band is selected, several functions associated with the band are available for further processing, e.g., band.ReadAsArray(xoff=0,yoff=0,xsize=None,ysize=None) Let's read the entire raster layer for the band:
###Code
raster = band.ReadAsArray()
###Output
_____no_output_____
###Markdown
4.2 Extracting Subsets from a Larger Image Frame Because of the potentially large data volume when dealing with time series data stacks, it may be prudent to read only a subset of data. Using GDAL's ReadAsArray() function, subsets can be requested by defining pixel offsets and subset size:**img.ReadAsArray(xoff=0, yoff=0, xsize=None, ysize=None)**- xoff, yoff are the offsets from the upper left corner in pixel/line coordinates. - xsize, ysize specify the size of the subset in x-direction (left to right) and y-direction (top to bottom).For example, we can read only a subset of 5x5 pixels with an offset of 5 pixels and 20 lines:
###Code
raster_sub = band.ReadAsArray(5, 20, 50, 50)
###Output
_____no_output_____
###Markdown
The result is a two-dimensional numpy array in the datatype the data were stored in. **We can inspect these data in python by typing the array name on the command line**:
###Code
raster_sub
###Output
_____no_output_____
###Markdown
4.3 Displaying Bands in the Time Series of SAR Data From the lookup table we know that bands 20 and 27 in the Nepal data stack are from mid-February and late August. **Let's take a look at these images**.
###Code
raster_1 = img.GetRasterBand(20).ReadAsArray()
raster_2 = img.GetRasterBand(27).ReadAsArray()
###Output
_____no_output_____
###Markdown
4.3.1 Write a Plotting Function Matplotlib's plotting functions allow for powerful options to display imagery. We are following some standard approaches for setting up figures. First we are looking at a **raster band** and its associated **histogram**. Our function, *show_image_histogram()*, takes several parameters: - raster = a numpy two-dimensional array - tindex = a pandas index array for dates- bandnbr = the band number that corresponds to the raster - vmin = minimum value to display - vmax = maximum value to display- output_filename = name of output file, if saving the plot Preconditions: matplotlib.pylab must be imported as plb and matplotlib.pyplot must be imported as plt. Note: By default, data will be linearly stretched between vmin and vmax. We won't use this function in this notebook but it is a useful utility method, which can be copied and pasted for use in other analyses.
###Code
def show_image_histogram(raster, tindex, band_nbr, vmin=None, vmax=None, output_filename=None):
assert 'plb' in globals(), 'Error: matplotlib.pylab must be imported as "plb"'
assert 'plt' in globals(), 'Error: matplotlib.pyplot must be imported as "plt"'
fig = plb.figure(figsize=(16, 8))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
plt.rcParams.update({'font.size': 14})
# plot image
ax1.imshow(raster, cmap='gray', vmin=vmin, vmax=vmax)
ax1.set_title(f'Image Band {band_nbr} {tindex[band_nbr-1].date()}')
vmin = np.percentile(raster, 2) if vmin==None else vmin
vmax = np.percentile(raster, 98) if vmax==None else vmax
ax1.xaxis.set_label_text(f'Linear stretch Min={vmin} Max={vmax}')
#plot histogram
h = ax2.hist(raster.flatten(), bins=200, range=(0, 10000))
ax2.xaxis.set_label_text('Amplitude (Uncalibrated DN Values)')
ax2.set_title(f'Histogram Band {band_nbr} {tindex[band_nbr-1].date()}')
if output_filename:
plt.savefig(output_filename, dpi=300, transparent='true')
###Output
_____no_output_____
###Markdown
We won't be calling our new function elsewhere in this notebook, so test it now:
###Code
show_image_histogram(raster_1, tindex, 20, vmin=2000, vmax=10000)
###Output
_____no_output_____
###Markdown
5. SAR Time Series Visualization, Animation, and Analysis This section introduces you to the handling and analysis of SAR time series stacks. A focus will be put on time series visualization, which allow us to inspect time series in more depth. Note that html animations are not exported into the pdf file, but will display interactively. 5.1 Reading the SAR Time Series Subset Let's read an image subset (offset 400, 400 / size 600, 600) of the entire time series data stack. The data are linearly scaled amplitudes represented as unsigned 16 bit integer.We use the GDAL *ReadAsArray(xoff,yoff,xsize,ysize)* function where *xoff* is the offset in pixels from upper left; *yoff* is the offset in lines from upper left; *xsize* is the number of pixels and *ysize* is the number of lines of the subset.If *ReadAsArray()* is called without any parameters, the entire image data stack is read. Let's first define a subset and make sure it is in the right geographic location.
###Code
# Open the image and read the first raster band
band = img.GetRasterBand(1)
# Define the subset
subset = (400, 400, 600, 600)
###Output
_____no_output_____
###Markdown
Now we are ready to extract this subset from all slices of the data stack.
###Code
# Plot one band together with the outline of the selected subset to verify its geographic location.
raster = band.ReadAsArray()
vmin = np.percentile(raster.flatten(), 5)
vmax = np.percentile(raster.flatten(), 95)
fig = plb.figure(figsize=(10, 10))
ax = fig.add_subplot(111)
ax.imshow(raster, cmap='gray', vmin=vmin, vmax=vmax)
# plot the subset as rectangle
_ = ax.add_patch(patches.Rectangle((subset[0], subset[1]), subset[2], subset[3], fill=False, edgecolor='red'))
raster0 = band.ReadAsArray(*subset)
bandnbr = 0 # Needed for updates
rasterstack = img.ReadAsArray(*subset)
###Output
_____no_output_____
###Markdown
Close img, as it is no longer needed in the notebook:
###Code
img = None
###Output
_____no_output_____
###Markdown
5.2 Calibration and Data Conversion between dB and Power Scales Focused SAR image data natively come in uncalibrated digital numbers (DN) and need to be calibrated to correspond to proper radar cross section information. Calibration coefficients for SAR data are often defined in the decibel (dB) scale due to the high dynamic range of the imaging system. For the C-band Sentinel-1 data at hand, the conversion from uncalibrated DN values to calibrated radar cross section values in dB scale is performed by applying a standard **calibration factor of -83 dB**. $\gamma^0_{dB} = 20 \cdot \log_{10}(DN) - 83$ The data at hand are radiometrically terrain corrected images, which are often expressed as terrain flattened $\gamma^0$ backscattering coefficients. For forest and land cover monitoring applications $\gamma^o$ is the preferred metric. Let's apply the calibration constant for our data and export it in *dB* scale:
###Code
caldB = 20*np.log10(rasterstack) - 83
###Output
_____no_output_____
###Markdown
While **dB**-scaled images are often "visually pleasing", they are often not a good basis for mathematical operations on data. For instance, when we compute the mean of observations, it makes a difference whether we do that in power or dB scale. Since dB scale is a logarithmic scale, we cannot simply average data in that scale. Please note that the **correct scale** in which operations need to be performed **is the power scale.** This is critical, e.g. when speckle filters are applied, spatial operations like block averaging are performed, or time series are analyzed.To **convert from dB to power**, apply: $\gamma^o_{pwr} = 10^{\frac{\gamma^o_{dB}}{10}}$
###Code
calPwr = np.power(10., caldB/10.)
calAmp = np.sqrt(calPwr)
###Output
_____no_output_____
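###Markdown
As a quick numeric illustration of the point above (this check is an addition, not part of the original exercise), compare the band mean computed in power scale and converted to dB with the mean taken directly on the dB values; the dB-domain mean comes out systematically lower.
###Code
# Illustrative check (not in the original exercise): averaging in power scale vs. averaging dB values
mean_via_power = 10 * np.log10(np.mean(calPwr[bandnbr]))
# Mean computed (incorrectly) directly on the dB values
mean_of_dB = np.mean(caldB[bandnbr])
print(f"Mean via power scale: {mean_via_power:.2f} dB")
print(f"Mean of dB values: {mean_of_dB:.2f} dB")
###Output
_____no_output_____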
###Markdown
5.3 Explore the Image Bands of individual Time Steps Let's explore how a band looks in the various image scales. Let's choose a band number and find the associated imaging date:
###Code
bandnbr = 20
tindex[bandnbr-1]
###Output
_____no_output_____
###Markdown
Below is the python code to create a four-part figure comparing the effect of the representation of the backscatter values in the DN, amplitude, power and dB scale.
###Code
fig = plt.figure(figsize=(16,16))
ax1 = fig.add_subplot(221)
ax2 = fig.add_subplot(222)
ax3 = fig.add_subplot(223)
ax4 = fig.add_subplot(224)
ax1.imshow(rasterstack[bandnbr],cmap='gray',
vmin=np.percentile(rasterstack,10),
vmax=np.percentile(rasterstack,90))
ax2.imshow(calAmp[bandnbr],cmap='gray',
vmin=np.percentile(calAmp,10),
vmax=np.percentile(calAmp,90))
ax3.imshow(calPwr[bandnbr],cmap='gray',
vmin=np.percentile(calPwr,10),
vmax=np.percentile(calPwr,90))
ax4.imshow(caldB[bandnbr],cmap='gray',
vmin=np.percentile(caldB,10),
vmax=np.percentile(caldB,90))
ax1.set_title('DN Scaled (Uncalibrated)')
ax2.set_title('Calibrated (Amplitude Scaled)')
ax3.set_title('Calibrated (Power Scaled)')
_ = ax4.set_title('Calibrated (dB Scaled)')
###Output
_____no_output_____
###Markdown
5.4 Comparing Histograms of the Amplitude, Power, and dB-Scaled Data The following code cell calculates the histograms for the differently scaled data. You should see significant differences in the data distributions.
###Code
# Setup for three part figure
fig = plt.figure(figsize=(16,4))
fig.suptitle('Comparison of Histograms of SAR Backscatter in Different Scales',fontsize=14)
ax1 = fig.add_subplot(131)
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
# Important to "flatten" the 2D raster image to produce a historgram
ax1.hist(calAmp[bandnbr].flatten(),bins=100,range=(0.,0.7))
ax2.hist(calPwr[bandnbr].flatten(),bins=100,range=(0.,0.5))
ax3.hist(caldB[bandnbr].flatten(),bins=100,range=(-25,0))
# Means, medians and stddev
amp_mean = calAmp[bandnbr].mean()
amp_std = calAmp[bandnbr].std()
pwr_mean = calPwr[bandnbr].mean()
pwr_std = calPwr[bandnbr].std()
dB_mean = caldB[bandnbr].mean()
dB_std = caldB[bandnbr].std()
# Some lines for mean and median
ax1.axvline(amp_mean,color='red')
ax1.axvline(np.median(calAmp[bandnbr]),color='blue')
ax2.axvline(pwr_mean,color='red',label='Mean')
ax2.axvline(np.median(calPwr[bandnbr]),color='blue',label='Median')
ax3.axvline(dB_mean,color='red')
ax3.axvline(np.median(caldB[bandnbr]),color='blue')
# Lines for 1 stddev
ax1.axvline(amp_mean-amp_std,color='gray')
ax1.axvline(amp_mean+amp_std,color='gray')
ax2.axvline(pwr_mean-pwr_std,color='gray',label='1 $\sigma$')
ax2.axvline(pwr_mean+pwr_std,color='gray')
ax3.axvline(dB_mean-dB_std,color='gray')
ax3.axvline(dB_mean+dB_std,color='gray')
ax1.set_title('Amplitude Scaled')
ax2.set_title('Power Scaled')
ax3.set_title('dB Scaled')
_ = ax2.legend()
###Output
_____no_output_____
###Markdown
5.5 Exploring Polarization Differences We will look at the backscatter characteristics in co-polarized (same transmit and receive polarization, hh or vv) and cross-polarized (vh or hv polarization) SAR data. For this, we read a timestep in both polarizations, plot the histograms, and display the images in dB scale. First, we open the images, pick the bands from the same acquisition date, read the raster bands and convert them to dB scale.
###Code
# Open the Images
img_like = gdal.Open(imagefile)
img_cross = gdal.Open(imagefile_cross)
# Pick the bands, read rasters and convert to dB
bandnbr_like = 20
bandnbr_cross = 20
rl = img_like.GetRasterBand(bandnbr_like).ReadAsArray()
rc2 = img_cross.GetRasterBand(bandnbr_cross).ReadAsArray()
rl_dB = 20.*np.log10(rl)-83
rc_dB = 20.*np.log10(rc2)-83
###Output
_____no_output_____
###Markdown
Now, we explore the differences in the polarizations by plotting the images with their histograms. We look at the dB ranges over which the histograms spread, and can adjust the linear scaling in the image display accordingly to enhance contrast. In the case below- C-VV like polarized data are mostly spread from -17.5 to -5 dB- C-VH cross polarized data are mostly spread from -25 to -10 dB Thus, we note that the cross-polarized data exhibit a dynamic range that is about 2.5 dB larger.
###Code
fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(16, 16))
fig.suptitle('Comparison of Like- and Cross-Polarized Sentinel-1 C-band Data',
fontsize=14)
ax[0][0].set_title('C-VV Image')
ax[0][1].set_title('C-VH Image')
ax[1][0].set_title('C-VV Histogram')
ax[1][1].set_title('C-VH Histogram')
ax[0][0].axis('off')
ax[0][1].axis('off')
ax[0][0].imshow(rl_dB, vmin=-25, vmax=-5, cmap='gray')
ax[0][1].imshow(rc_dB, vmin=-25, vmax=-5, cmap='gray')
ax[1][0].hist(rl_dB.flatten(), range=(-25, 0), bins=100)
ax[1][1].hist(rc_dB.flatten(), range=(-25, 0), bins=100)
fig.tight_layout() # Use the tight layout to make the figure more compact
###Output
_____no_output_____
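###Markdown
A common follow-on (added here as an optional illustration, not part of the original exercise) is to look at the co-/cross-polarization ratio, which in dB scale is simply the difference of the two calibrated images.
###Code
# Optional illustration (not in the original exercise): co-/cross-polarization ratio in dB
ratio_dB = rl_dB - rc_dB

fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(16, 8))
ax[0].imshow(ratio_dB, vmin=0, vmax=15, cmap='gray')
ax[0].set_title('C-VV / C-VH Ratio [dB]')
ax[0].axis('off')
ax[1].hist(ratio_dB.flatten(), range=(0, 15), bins=100)
_ = ax[1].set_title('Histogram of the Polarization Ratio [dB]')
###Output
_____no_output_____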
###Markdown
6 Create a Time Series Animation First, create a directory in which to store our plots and move into it:
###Code
os.chdir(path)
product_path = 'plots_and_animations'
asfn.new_directory(product_path)
if asfn.path_exists(product_path) and os.getcwd() != f"{path}/{product_path}":
os.chdir(product_path)
print(f"Current working directory: {os.getcwd()}")
###Output
_____no_output_____
###Markdown
Now we are ready to create a time series animation from the calibrated SAR data.
###Code
%%capture
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111)
ax.axis('off')
vmin = np.percentile(caldB.flatten(), 5)
vmax = np.percentile(caldB.flatten(), 95)
r0dB = 20*np.log10(raster0) - 83
im = ax.imshow(r0dB,cmap='gray', vmin=vmin, vmax=vmax)
ax.set_title("{}".format(tindex[0].date()))
def animate(i):
ax.set_title("{}".format(tindex[i].date()))
im.set_data(caldB[i])
# Interval is given in milliseconds
ani = an.FuncAnimation(fig, animate, frames=caldB.shape[0], interval=400)
###Output
_____no_output_____
###Markdown
Configure matplotlib's RC settings for the animation:
###Code
rc('animation', embed_limit=40971520.0)  # Increase the embed limit so the entire animation can be displayed inline
###Output
_____no_output_____
###Markdown
Create a javascript animation of the time-series running inline in the notebook:
###Code
HTML(ani.to_jshtml())
###Output
_____no_output_____
###Markdown
Save the animation (animation.gif):
###Code
ani.save('animation.gif', writer='pillow', fps=2)
###Output
_____no_output_____
###Markdown
5.3 Plot the Time Series of Means Calculated Across the Subset To create the time series of means, we will go through the following steps:1. Compute means using the data in **power scale** ($\gamma^o_{pwr}$). 2. Convert the resulting mean values into dB scale for visualization. 3. Plot time series of means. Compute the means:
###Code
rs_means_pwr = np.mean(calPwr,axis=(1, 2))
###Output
_____no_output_____
###Markdown
Convert the resulting mean value time-series to dB scale for visualization and check that we got the means over time:
###Code
rs_means_dB = 10.*np.log10(rs_means_pwr)
rs_means_pwr.shape
###Output
_____no_output_____
###Markdown
Plot and save the time series of means (time_series_means.png):
###Code
# 3. Now let's plot the time series of means
fig = plt.figure(figsize=(16, 4))
ax1 = fig.add_subplot(111)
ax1.plot(tindex, rs_means_pwr)
ax1.set_xlabel('Date')
ax1.set_ylabel('$\overline{\gamma^o}$ [power]')
ax2 = ax1.twinx()
ax2.plot(tindex, rs_means_dB, color='red')
ax2.set_ylabel('$\overline{\gamma^o}$ [dB]')
fig.legend(['power', 'dB'], loc=1)
plt.title('Time series profile of average band backscatter $\gamma^o$ ')
plt.savefig('time_series_means', dpi=72, transparent='true')
###Output
_____no_output_____
###Markdown
5.4 Create Two-Panel Figure with Animated Global Mean $\mu_{\gamma^0_{dB}}$ We use a few Matplotlib functions to create a side-by-side animation of the dB-scaled imagery and the respective global means $\mu_{\gamma^0_{dB}}$.
###Code
%%capture
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 4), gridspec_kw={'width_ratios':[1, 3]})
vmin = np.percentile(rasterstack.flatten(), 5)
vmax = np.percentile(rasterstack.flatten(), 95)
im = ax1.imshow(raster0, cmap='gray', vmin=vmin, vmax=vmax)
ax1.set_title("{}".format(tindex[0].date()))
ax1.set_axis_off()
ax2.axis([tindex[0].date(), tindex[-1].date(), rs_means_dB.min(), rs_means_dB.max()])
ax2.set_ylabel('$\overline{\gamma^o}$ [dB]')
ax2.set_xlabel('Date')
ax2.set_ylim((-10, -5))
l, = ax2.plot([], [])
def animate(i):
ax1.set_title("{}".format(tindex[i].date()))
im.set_data(rasterstack[i])
ax2.set_title("{}".format(tindex[i].date()))
l.set_data(tindex[:(i+1)], rs_means_dB[:(i+1)])
# Interval is given in milliseconds
ani = an.FuncAnimation(fig, animate, frames=rasterstack.shape[0], interval=400)
###Output
_____no_output_____
###Markdown
Create a javascript animation of the time-series running inline in the notebook:
###Code
HTML(ani.to_jshtml())
###Output
_____no_output_____
###Markdown
Save the animated time-series and histogram (animation_histogram.gif):
###Code
ani.save('animation_histogram.gif', writer='pillow', fps=2)
###Output
_____no_output_____
###Markdown
Exploring SAR Data and SAR Time Series Analysis with Supplied Data Franz J Meyer; University of Alaska Fairbanks & Josef Kellndorfer, Earth Big Data, LLC This notebook introduces you to the analysis of deep multi-temporal SAR image data stacks in the framework of *Jupyter Notebooks*. The Jupyter Notebook environment is easy to launch in any web browser for interactive data exploration with provided or new training data. Notebooks are composed of text written in a combination of executable python code and markdown formatting including latex style mathematical equations. Another advantage of Jupyter Notebooks is that they can easily be expanded, changed, and shared with new data sets or newly available time series steps. Therefore, they provide an excellent basis for collaborative and repeatable data analysis. This notebook covers the following data analysis concepts:- How to load time series stacks into Jupyter Notebooks and how to explore image content using basic functions such as mean value calculation and histogram analysis.- How to apply calibration constants to convert initial digital number (DN) data into calibrated radar cross section information.- How to subset images and create time series information of calibrated SAR amplitude values.- How to explore the time-series information in SAR data stacks for environmental analysis. Important Notes about JupyterHub Your JupyterHub server will automatically shut down when left idle for more than 1 hour. Your notebooks will not be lost but you will have to restart their kernels and re-run them from the beginning. You will not be able to seamlessly continue running a partially run notebook.
###Code
%%javascript
var kernel = Jupyter.notebook.kernel;
var command = ["notebookUrl = ",
"'", window.location, "'" ].join('')
// alert(command)
kernel.execute(command)
from IPython.display import Markdown
from IPython.display import display
env = !echo $CONDA_PREFIX
if env[0] == '':
env[0] = 'Python 3 (base)'
if env[0] != '/home/jovyan/.local/envs/rtc_analysis':
display(Markdown(f'<text style=color:red><strong>WARNING:</strong></text>'))
display(Markdown(f'<text style=color:red>This notebook should be run using the "rtc_analysis" conda environment.</text>'))
display(Markdown(f'<text style=color:red>It is currently using the "{env[0].split("/")[-1]}" environment.</text>'))
display(Markdown(f'<text style=color:red>Select the "rtc_analysis" from the "Change Kernel" submenu of the "Kernel" menu.</text>'))
display(Markdown(f'<text style=color:red>If the "rtc_analysis" environment is not present, use <a href="{notebookUrl.split("/user")[0]}/user/{user[0]}/notebooks/conda_environments/Create_OSL_Conda_Environments.ipynb"> Create_OSL_Conda_Environments.ipynb </a> to create it.</text>'))
display(Markdown(f'<text style=color:red>Note that you must restart your server after creating a new environment before it is usable by notebooks.</text>'))
###Output
_____no_output_____
###Markdown
0. Importing Relevant Python Packages In this notebook we will use the following scientific libraries: Pandas is a Python library that provides high-level data structures and a vast variety of tools for analysis. The great feature of this package is the ability to translate rather complex operations with data into one or two commands. Pandas contains many built-in methods for filtering and combining data, as well as the time-series functionality. GDAL is a software library for reading and writing raster and vector geospatial data formats. It includes a collection of programs tailored for geospatial data processing. Most modern GIS systems (such as ArcGIS or QGIS) use GDAL in the background. NumPy is one of the principal packages for scientific applications of Python. It is intended for processing large multidimensional arrays and matrices, and an extensive collection of high-level mathematical functions and implemented methods makes it possible to perform various operations with these objects. Matplotlib is a low-level library for creating two-dimensional diagrams and graphs. With its help, you can build diverse charts, from histograms and scatterplots to non-Cartesian coordinates graphs. Moreover, many popular plotting libraries are designed to work in conjunction with matplotlib.
###Code
%%capture
from pathlib import Path
import pandas as pd # for DatetimeIndex
from osgeo import gdal # for GetRasterBand, Open, ReadAsArray
import numpy as np #for log10, mean, percentile, power
%matplotlib inline
import matplotlib.pylab as plb # for add_patch, add_subplot, figure, hist, imshow, set_title, xaxis,_label, text
import matplotlib.pyplot as plt # for add_subplot, axis, figure, imshow, legend, plot, set_axis_off, set_data,
# set_title, set_xlabel, set_ylabel, set_ylim, subplots, title, twinx
import matplotlib.patches as patches # for Rectangle
import matplotlib.animation as an # for FuncAnimation
from matplotlib import rc
import asf_notebook as asfn
asfn.jupytertheme_matplotlib_format()
from IPython.display import HTML
plt.rcParams.update({'font.size': 12})
###Output
_____no_output_____
###Markdown
1. Load Data Stack This notebook will be using a 70-image deep C-band SAR data stack over Nepal for a first experience with time series processing. The C-band data were acquired by the Sentinel-1 sensor and are available to us through the services of the Alaska Satellite Facility. Nepal is an interesting site for this analysis due to the significant seasonality of precipitation that is characteristic for this region. Nepal is said to have five seasons: spring, summer, monsoon, autumn and winter. Precipitation is low in the winter (November - March) and peaks dramatically in the summer, with top rain rates in July, August, and September (see figure to the right). As SAR is sensitive to changes in soil moisture, and vegetation structure, these weather patterns have a noticeable impact on the Radar Cross Section ($\sigma$) time series information. We will analyze the variation of $\sigma$ values over time and will interpret them in the context of the weather information shown in the figure to the right. Before we get started, let's first create a working directory for this analysis and change into it:
###Code
path = Path("/home/jovyan/notebooks/SAR_Training/English/Ecosystems/data_time_series_example")
if not path.exists():
path.mkdir()
###Output
_____no_output_____
###Markdown
We will retrieve the relevant data from an Amazon Web Service (AWS) cloud storage bucket using the following command:
###Code
s3_path = 's3://asf-jupyter-data/time_series.zip'
time_series_path = Path(s3_path).name
!aws --region=us-east-1 --no-sign-request s3 cp $s3_path $time_series_path
###Output
_____no_output_____
###Markdown
Now, let's unzip the file (overwriting previous extractions) and clean up after ourselves:
###Code
if Path(time_series_path).exists():
asfn.asf_unzip(str(path), time_series_path)
Path(time_series_path).unlink()
###Output
_____no_output_____
###Markdown
The following lines set path variables needed for data processing. This step is not necessary but it saves a lot of extra typing later. Define variables for the main data directory as well as for the files containing data and image information:
###Code
datadirectory = path/'time_series/S32644X696260Y3052060sS1-EBD'
datefile = datadirectory/'S32644X696260Y3052060sS1_D_vv_0092_mtfil.dates'
imagefile = datadirectory/'S32644X696260Y3052060sS1_D_vv_0092_mtfil.vrt'
imagefile_cross = datadirectory/'S32644X696260Y3052060sS1_D_vh_0092_mtfil.vrt'
###Output
_____no_output_____
###Markdown
2. Switch to the Data Directory: We now move into the data directory:
###Code
# !ls *.vrt #Uncomment this line to see a List of the files
###Output
_____no_output_____
###Markdown
3. Assess Image Acquisition Dates Before we start analyzing the available image data, we want to examine the content of our data stack. First, we read the image acquisition dates for all files in the time series and create a *pandas* date index.
###Code
if datefile.exists():
with open(str(datefile), 'r') as f:
dates = f.readlines()
tindex = pd.DatetimeIndex(dates)
###Output
_____no_output_____
###Markdown
From the date index, we print the band numbers and dates:
###Code
if imagefile.exists():
print('Bands and dates for', imagefile)
for i, d in enumerate(tindex):
print("{:4d} {}".format(i+1, d.date()),end=' ')
if (i+1)%5 == 1: print()
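    # A small addition (not in the original notebook): summarize the stack extent,
    # assuming the dates file was read into tindex above.
    print(f'\n\nTime series contains {len(tindex)} acquisitions, '
          f'from {tindex.min().date()} to {tindex.max().date()}.')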
###Output
_____no_output_____
###Markdown
4. Explore the Available Image Data Open an image file using the gdal.Open() function. This returns a variable (img) that can be used for further interactions with the file:
###Code
if imagefile.exists():
img = gdal.Open(str(imagefile))
###Output
_____no_output_____
###Markdown
To explore the image (number of bands, pixels, lines), you can use several functions associated with the image object (img) created in the last code cell:
###Code
print(img.RasterCount) # Number of Bands
print(img.RasterXSize) # Number of Pixels
print(img.RasterYSize) # Number of Lines
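# Additional metadata checks (a sketch, not in the original notebook): GDAL also
# exposes georeferencing information on the dataset object.
print(img.GetGeoTransform())    # (origin_x, pixel_width, row_rotation, origin_y, col_rotation, -pixel_height)
print(img.GetProjection()[:80]) # first characters of the WKT projection string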
###Output
_____no_output_____
###Markdown
4.1 Reading Data from an Image Band To access any band in the image, use GDAL's *GetRasterBand(x)* function. Replace the band_num value with the number of the band you wish to access.
###Code
band_num = 70
band = img.GetRasterBand(band_num)
###Output
_____no_output_____
###Markdown
Once a band is selected, several functions associated with the band are available for further processing, e.g., band.ReadAsArray(xoff=0, yoff=0, xsize=None, ysize=None). Let's read the entire raster layer for the band:
###Code
raster = band.ReadAsArray()
###Output
_____no_output_____
###Markdown
4.2 Extracting Subsets from a Larger Image Frame Because of the potentially large data volume when dealing with time series data stacks, it may be prudent to read only a subset of data. Using GDAL's ReadAsArray() function, subsets can be requested by defining pixel offsets and subset size:**img.ReadAsArray(xoff=0, yoff=0, xsize=None, ysize=None)**- xoff, yoff are the offsets from the upper left corner in pixel/line coordinates. - xsize, ysize specify the size of the subset in x-direction (left to right) and y-direction (top to bottom).For example, we can read a 50x50 pixel subset with an offset of 5 pixels and 20 lines:
###Code
raster_sub = band.ReadAsArray(5, 20, 50, 50)
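# A quick check (added for illustration): the returned object is a 2D NumPy array
# whose shape matches the requested subset and whose dtype reflects how the data
# were stored on disk.
print(raster_sub.shape, raster_sub.dtype)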
###Output
_____no_output_____
###Markdown
The result is a two dimensional numpy array in the datatype the data were stored in. **We can inspect these data in python by typing the array name on the command line**:
###Code
raster_sub
###Output
_____no_output_____
###Markdown
4.3 Displaying Bands in the Time Series of SAR Data From the lookup table we know that bands 20 and 27 in the Nepal data stack are from mid February and late August. **Let's take a look at these images**.
###Code
raster_1 = img.GetRasterBand(20).ReadAsArray()
raster_2 = img.GetRasterBand(27).ReadAsArray()
###Output
_____no_output_____
###Markdown
4.3.1 Write a Plotting Function Matplotlib's plotting functions allow for powerful options to display imagery. We are following some standard approaches for setting up figures. First we are looking at a **raster band** and its associated **histogram**. Our function, *show_image_histogram()*, takes several parameters: - raster = a numpy two dimensional array - tindex = a pandas date index array - band_nbr = the band number that corresponds to the raster - vmin = minimum value to display - vmax = maximum value to display - output_filename = name of output file, if saving the plot. Preconditions: matplotlib.pylab must be imported as plb and matplotlib.pyplot must be imported as plt. Note: By default, data will be linearly stretched between vmin and vmax. We won't use this function elsewhere in this notebook, but it is a useful utility method that can be copied and pasted for use in other analyses.
###Code
def show_image_histogram(raster, tindex, band_nbr, vmin=None, vmax=None, output_filename=None):
assert 'plb' in globals(), 'Error: matplotlib.pylab must be imported as "plb"'
assert 'plt' in globals(), 'Error: matplotlib.pyplot must be imported as "plt"'
fig = plb.figure(figsize=(16, 8))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
plt.rcParams.update({'font.size': 14})
# plot image
    # Default to a 2-98 percentile stretch if no explicit limits were passed,
    # so the displayed stretch and the axis label stay consistent:
    vmin = np.percentile(raster, 2) if vmin is None else vmin
    vmax = np.percentile(raster, 98) if vmax is None else vmax
    ax1.imshow(raster, cmap='gray', vmin=vmin, vmax=vmax)
    ax1.set_title(f'Image Band {band_nbr} {tindex[band_nbr-1].date()}')
    ax1.xaxis.set_label_text(f'Linear stretch Min={vmin} Max={vmax}')
#plot histogram
h = ax2.hist(raster.flatten(), bins=200, range=(0, 10000))
ax2.xaxis.set_label_text('Amplitude (Uncalibrated DN Values)')
ax2.set_title(f'Histogram Band {band_nbr} {tindex[band_nbr-1].date()}')
if output_filename:
plt.savefig(output_filename, dpi=300, transparent='true')
###Output
_____no_output_____
###Markdown
We won't be calling our new function elsewhere in this notebook, so test it now:
###Code
show_image_histogram(raster_1, tindex, 20, vmin=2000, vmax=10000)
###Output
_____no_output_____
###Markdown
5. SAR Time Series Visualization, Animation, and Analysis This section introduces you to the handling and analysis of SAR time series stacks. A focus will be put on time series visualization, which allows us to inspect time series in more depth. Note that HTML animations are not exported into the pdf file, but will display interactively. 5.1 Reading the SAR Time Series Subset Let's read an image subset (offset 400, 400 / size 600, 600) of the entire time series data stack. The data are linearly scaled amplitudes represented as unsigned 16-bit integers. We use the GDAL *ReadAsArray(xoff, yoff, xsize, ysize)* function where *xoff* is the offset in pixels from upper left; *yoff* is the offset in lines from upper left; *xsize* is the number of pixels and *ysize* is the number of lines of the subset. If *ReadAsArray()* is called without any parameters, the entire image data stack is read. Let's first define a subset and make sure it is in the right geographic location.
###Code
# Open the image and read the first raster band
band = img.GetRasterBand(1)
# Define the subset
subset = (400, 400, 600, 600)
###Output
_____no_output_____
###Markdown
Now we are ready to extract this subset from all slices of the data stack.
###Code
# Plot one band together with the outline of the selected subset to verify its geographic location.
raster = band.ReadAsArray()
vmin = np.percentile(raster.flatten(), 5)
vmax = np.percentile(raster.flatten(), 95)
fig = plb.figure(figsize=(10, 10))
ax = fig.add_subplot(111)
ax.imshow(raster, cmap='gray', vmin=vmin, vmax=vmax)
# plot the subset as rectangle
_ = ax.add_patch(patches.Rectangle((subset[0], subset[1]), subset[2], subset[3], fill=False, edgecolor='red'))
raster0 = band.ReadAsArray(*subset)
bandnbr = 0 # Needed for updates
rasterstack = img.ReadAsArray(*subset)
###Output
_____no_output_____
###Markdown
Close img, as it is no longer needed in the notebook:
###Code
img = None
###Output
_____no_output_____
###Markdown
5.2 Calibration and Data Conversion between dB and Power Scales Focused SAR image data natively come in uncalibrated digital numbers (DN) and need to be calibrated to correspond to proper radar cross section information. Calibration coefficients for SAR data are often defined in the decibel (dB) scale due to the high dynamic range of the imaging system. For the C-band Sentinel-1 data at hand, the conversion from uncalibrated DN values to calibrated radar cross section values in dB scale is performed by applying a standard **calibration factor of -83 dB**: $\gamma^0_{dB} = 20 \cdot \log_{10}(DN) - 83$. The data at hand are radiometrically terrain corrected images, which are often expressed as terrain flattened $\gamma^0$ backscattering coefficients. For forest and land cover monitoring applications $\gamma^o$ is the preferred metric. Let's apply the calibration constant to our data and express it in *dB* scale:
###Code
caldB = 20*np.log10(rasterstack) - 83
###Output
_____no_output_____
###Markdown
While **dB**-scaled images are often "visually pleasing", they are often not a good basis for mathematical operations on data. For instance, when we compute the mean of observations, it makes a difference whether we do that in power or dB scale. Since dB scale is a logarithmic scale, we cannot simply average data in that scale. Please note that the **correct scale** in which operations need to be performed **is the power scale.** This is critical, e.g. when speckle filters are applied, spatial operations like block averaging are performed, or time series are analyzed.To **convert from dB to power**, apply: $\gamma^o_{pwr} = 10^{\frac{\gamma^o_{dB}}{10}}$
###Code
calPwr = np.power(10., caldB/10.)
calAmp = np.sqrt(calPwr)
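# A short illustration (added, not part of the original workflow) of why averaging
# must happen in power scale: averaging -20 dB and 0 dB directly in dB gives -10 dB,
# while averaging the corresponding powers (0.01 and 1.0) and converting back gives
# 10*log10(0.505), i.e. about -3 dB.
example_dB = np.array([-20., 0.])
mean_in_dB = example_dB.mean()
mean_in_power_as_dB = 10.*np.log10(np.power(10., example_dB/10.).mean())
print(f'Mean taken in dB scale: {mean_in_dB:.2f} dB | mean taken in power scale: {mean_in_power_as_dB:.2f} dB')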
###Output
_____no_output_____
###Markdown
5.3 Explore the Image Bands of individual Time Steps Let's explore how a band looks in the various image scales. Let's choose a band number and find the associated imaging date:
###Code
bandnbr = 20
tindex[bandnbr-1]
###Output
_____no_output_____
###Markdown
Below is the python code to create a four-part figure comparing the effect of the representation of the backscatter values in the DN, amplitude, power and dB scale.
###Code
fig = plt.figure(figsize=(16,16))
ax1 = fig.add_subplot(221)
ax2 = fig.add_subplot(222)
ax3 = fig.add_subplot(223)
ax4 = fig.add_subplot(224)
ax1.imshow(rasterstack[bandnbr],cmap='gray',
vmin=np.percentile(rasterstack,10),
vmax=np.percentile(rasterstack,90))
ax2.imshow(calAmp[bandnbr],cmap='gray',
vmin=np.percentile(calAmp,10),
vmax=np.percentile(calAmp,90))
ax3.imshow(calPwr[bandnbr],cmap='gray',
vmin=np.percentile(calPwr,10),
vmax=np.percentile(calPwr,90))
ax4.imshow(caldB[bandnbr],cmap='gray',
vmin=np.percentile(caldB,10),
vmax=np.percentile(caldB,90))
ax1.set_title('DN Scaled (Uncalibrated)')
ax2.set_title('Calibrated (Amplitude Scaled)')
ax3.set_title('Calibrated (Power Scaled)')
_ = ax4.set_title('Calibrated (dB Scaled)')
###Output
_____no_output_____
###Markdown
5.4 Comparing Histograms of the Amplitude, Power, and dB-Scaled Data The following code cell calculates the histograms for the differently scaled data. You should see significant differences in the data distributions.
###Code
# Setup for three part figure
fig = plt.figure(figsize=(16,4))
fig.suptitle('Comparison of Histograms of SAR Backscatter in Different Scales',fontsize=14)
ax1 = fig.add_subplot(131)
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
# Important to "flatten" the 2D raster image to produce a histogram
ax1.hist(calAmp[bandnbr].flatten(),bins=100,range=(0.,0.7))
ax2.hist(calPwr[bandnbr].flatten(),bins=100,range=(0.,0.5))
ax3.hist(caldB[bandnbr].flatten(),bins=100,range=(-25,0))
# Means, medians and stddev
amp_mean = calAmp[bandnbr].mean()
amp_std = calAmp[bandnbr].std()
pwr_mean = calPwr[bandnbr].mean()
pwr_std = calPwr[bandnbr].std()
dB_mean = caldB[bandnbr].mean()
dB_std = caldB[bandnbr].std()
# Some lines for mean and median
ax1.axvline(amp_mean,color='red')
ax1.axvline(np.median(calAmp[bandnbr]),color='blue')
ax2.axvline(pwr_mean,color='red',label='Mean')
ax2.axvline(np.median(calPwr[bandnbr]),color='blue',label='Median')
ax3.axvline(dB_mean,color='red')
ax3.axvline(np.median(caldB[bandnbr]),color='blue')
# Lines for 1 stddev
ax1.axvline(amp_mean-amp_std,color='gray')
ax1.axvline(amp_mean+amp_std,color='gray')
ax2.axvline(pwr_mean-pwr_std,color='gray',label='1 $\sigma$')
ax2.axvline(pwr_mean+pwr_std,color='gray')
ax3.axvline(dB_mean-dB_std,color='gray')
ax3.axvline(dB_mean+dB_std,color='gray')
ax1.set_title('Amplitude Scaled')
ax2.set_title('Power Scaled')
ax3.set_title('dB Scaled')
_ = ax2.legend()
###Output
_____no_output_____
###Markdown
5.5 Exploring Polarization Differences We will look at the backscatter characteristics in co-polarized (same transmit and receive polarization, hh or vv) and cross-polarized (vh or hv polarization) SAR data. For this, we read a timestep in both polarizations, plot the histograms, and display the images in dB scale. First, we open the images, pick the bands from the same acquisition date, read the raster bands and convert them to dB scale.
###Code
# Open the Images
img_like = gdal.Open(str(imagefile))
img_cross = gdal.Open(str(imagefile_cross))
# Pick the bands, read rasters and convert to dB
bandnbr_like = 20
bandnbr_cross = 20
rl = img_like.GetRasterBand(bandnbr_like).ReadAsArray()
rc2 = img_cross.GetRasterBand(bandnbr_cross).ReadAsArray()
rl_dB = 20.*np.log10(rl)-83
rc_dB = 20.*np.log10(rc2)-83
###Output
_____no_output_____
###Markdown
Now, we explore the differences in the polarizations by plotting the images with their histograms. We look at the dB ranges over which the histograms spread, and can adjust the linear scaling in the image display accordingly to enhance contrast. In the case below: C-VV like-polarized data mostly spread from -17.5 to -5 dB, while C-VH cross-polarized data mostly spread from -25 to -10 dB. Thus, the cross-polarized data exhibit a dynamic range that is about 2.5 dB larger.
###Code
fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(16, 16))
fig.suptitle('Comparison of Like- and Cross-Polarized Sentinel-1 C-band Data',
fontsize=14)
ax[0][0].set_title('C-VV Image')
ax[0][1].set_title('C-VH Image')
ax[1][0].set_title('C-VV Histogram')
ax[1][1].set_title('C-VH Histogram')
ax[0][0].axis('off')
ax[0][1].axis('off')
ax[0][0].imshow(rl_dB, vmin=-25, vmax=-5, cmap='gray')
ax[0][1].imshow(rc_dB, vmin=-25, vmax=-5, cmap='gray')
ax[1][0].hist(rl_dB.flatten(), range=(-25, 0), bins=100)
ax[1][1].hist(rc_dB.flatten(), range=(-25, 0), bins=100)
fig.tight_layout() # Use the tight layout to make the figure more compact
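# As an alternative to hard-coded limits (a sketch, added here), the display range can
# be derived from the data itself, e.g. from the 2nd and 98th percentiles of the finite dB values:
vv_range = np.percentile(rl_dB[np.isfinite(rl_dB)], (2, 98))
vh_range = np.percentile(rc_dB[np.isfinite(rc_dB)], (2, 98))
print(f'C-VV 2-98% range: {vv_range[0]:.1f} to {vv_range[1]:.1f} dB')
print(f'C-VH 2-98% range: {vh_range[0]:.1f} to {vh_range[1]:.1f} dB')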
###Output
_____no_output_____
###Markdown
6. Create a Time Series Animation First, create a directory in which to store our plots:
###Code
product_path = path/'plots_and_animations'
if not product_path.exists():
product_path.mkdir()
###Output
_____no_output_____
###Markdown
Now we are ready to create a time series animation from the calibrated SAR data.
###Code
%%capture
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111)
ax.axis('off')
vmin = np.percentile(caldB.flatten(), 5)
vmax = np.percentile(caldB.flatten(), 95)
r0dB = 20*np.log10(raster0) - 83
im = ax.imshow(r0dB,cmap='gray', vmin=vmin, vmax=vmax)
ax.set_title("{}".format(tindex[0].date()))
def animate(i):
ax.set_title("{}".format(tindex[i].date()))
im.set_data(caldB[i])
# Interval is given in milliseconds
ani = an.FuncAnimation(fig, animate, frames=caldB.shape[0], interval=400)
###Output
_____no_output_____
###Markdown
Configure matplotlib's RC settings for the animation:
###Code
rc('animation', embed_limit=40971520.0)  # Increase the embedded-animation size limit so the entire animation can be displayed
###Output
_____no_output_____
###Markdown
Create a javascript animation of the time-series running inline in the notebook:
###Code
HTML(ani.to_jshtml())
###Output
_____no_output_____
###Markdown
Save the animation (animation.gif):
###Code
ani.save(product_path/'animation.gif', writer='pillow', fps=2)
###Output
_____no_output_____
###Markdown
7. Plot the Time Series of Means Calculated Across the Subset To create the time series of means, we will go through the following steps: 1. Compute the means using the data in **power scale** ($\gamma^o_{pwr}$). 2. Convert the resulting mean values into dB scale for visualization. 3. Plot the time series of means. Compute the means:
###Code
rs_means_pwr = np.mean(calPwr,axis=(1, 2))
###Output
_____no_output_____
###Markdown
Convert the resulting mean value time-series to dB scale for visualization and check that we got the means over time:
###Code
rs_means_dB = 10.*np.log10(rs_means_pwr)
rs_means_pwr.shape
###Output
_____no_output_____
###Markdown
Plot and save the time series of means (time_series_means.png):
###Code
# 3. Now let's plot the time series of means
fig = plt.figure(figsize=(16, 4))
ax1 = fig.add_subplot(111)
ax1.plot(tindex, rs_means_pwr)
ax1.set_xlabel('Date')
ax1.set_ylabel('$\overline{\gamma^o}$ [power]')
ax2 = ax1.twinx()
ax2.plot(tindex, rs_means_dB, color='red')
ax2.set_ylabel('$\overline{\gamma^o}$ [dB]')
fig.legend(['power', 'dB'], loc=1)
plt.title('Time series profile of average band backscatter $\gamma^o$ ')
plt.savefig(product_path/'time_series_means', dpi=72, transparent='true')
###Output
_____no_output_____
###Markdown
8. Create Two-Panel Figure with Animated Global Mean $\mu_{\gamma^0_{dB}}$ We use a few Matplotlib functions to create a side-by-side animation of the dB-scaled imagery and the respective global means $\mu_{\gamma^0_{dB}}$.
###Code
%%capture
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 4), gridspec_kw={'width_ratios':[1, 3]})
vmin = np.percentile(rasterstack.flatten(), 5)
vmax = np.percentile(rasterstack.flatten(), 95)
im = ax1.imshow(raster0, cmap='gray', vmin=vmin, vmax=vmax)
ax1.set_title("{}".format(tindex[0].date()))
ax1.set_axis_off()
ax2.axis([tindex[0].date(), tindex[-1].date(), rs_means_dB.min(), rs_means_dB.max()])
ax2.set_ylabel('$\overline{\gamma^o}$ [dB]')
ax2.set_xlabel('Date')
ax2.set_ylim((-10, -5))
l, = ax2.plot([], [])
def animate(i):
ax1.set_title("{}".format(tindex[i].date()))
im.set_data(rasterstack[i])
ax2.set_title("{}".format(tindex[i].date()))
l.set_data(tindex[:(i+1)], rs_means_dB[:(i+1)])
# Interval is given in milliseconds
ani = an.FuncAnimation(fig, animate, frames=rasterstack.shape[0], interval=400)
###Output
_____no_output_____
###Markdown
Create a javascript animation of the time-series running inline in the notebook:
###Code
HTML(ani.to_jshtml())
###Output
_____no_output_____
###Markdown
Save the animated time series with its running global mean (animation_histogram.gif):
###Code
ani.save(product_path/'animation_histogram.gif', writer='pillow', fps=2)
###Output
_____no_output_____
|
getting_started/notebooks/5_inference_scheduling.ipynb
|
###Markdown
**Amazon Lookout for Equipment** - Demonstration on an anonymized expander dataset*Part 5: Scheduling regular inference calls*
###Code
BUCKET = '<YOUR_BUCKET_NAME_HERE>'
PREFIX = 'data/scheduled_inference'
###Output
_____no_output_____
###Markdown
Initialization---In this notebook, we will update the repository structure to add an inference directory in the data folder:```/lookout-equipment-demo|+-- data/| || +-- inference/| | || | |-- input/| | || | \-- output/| || +-- labelled-data/| | \-- labels.csv| || \-- training-data/| \-- expander/| |-- subsystem-01| | \-- subsystem-01.csv| || |-- subsystem-02| | \-- subsystem-02.csv| || |-- ...| || \-- subsystem-24| \-- subsystem-24.csv|+-- dataset/| |-- labels.csv| |-- tags_description.csv| |-- timeranges.txt| \-- timeseries.zip|+-- notebooks/| |-- 1_data_preparation.ipynb| |-- 2_dataset_creation.ipynb| |-- 3_model_training.ipynb| |-- 4_model_evaluation.ipynb| \-- 5_inference_scheduling.ipynb <<< This notebook <<<|+-- utils/ \-- lookout_equipment_utils.py``` Notebook configuration updateAmazon Lookout for Equipment being a very recent service, we need to make sure that we have access to the latest version of the AWS Python packages. If you see a `pip` dependency error, check that the `boto3` version is ok: if it's greater than 1.17.48 (the first version that includes the `lookoutequipment` API), you can discard this error and move forward with the next cell:
###Code
!pip install --quiet --upgrade boto3 tqdm sagemaker
import boto3
print(f'boto3 version: {boto3.__version__} (should be >= 1.17.48 to include Lookout for Equipment API)')
# Restart the current notebook to ensure we take into account the previous updates:
from IPython.core.display import HTML
HTML("<script>Jupyter.notebook.kernel.restart()</script>")
###Output
_____no_output_____
###Markdown
Imports
###Code
import boto3
import datetime
import os
import pandas as pd
import pprint
import pyarrow as pa
import pyarrow.parquet as pq
import sagemaker
import s3fs
import sys
import time
import uuid
import warnings
# Helper functions for managing Lookout for Equipment API calls:
sys.path.append('../utils')
import lookout_equipment_utils as lookout
###Output
_____no_output_____
###Markdown
Parameters
###Code
warnings.filterwarnings('ignore')
DATA = os.path.join('..', 'data')
RAW_DATA = os.path.join('..', 'dataset')
INFER_DATA = os.path.join(DATA, 'inference')
os.makedirs(os.path.join(INFER_DATA, 'input'), exist_ok=True)
os.makedirs(os.path.join(INFER_DATA, 'output'), exist_ok=True)
ROLE_ARN = sagemaker.get_execution_role()
REGION_NAME = boto3.session.Session().region_name
###Output
_____no_output_____
###Markdown
Create an inference scheduler---While navigating to the model details part of the console, you will see that you have no inference scheduled yet: Scheduler configurationLet's create a new inference scheduler: some parameters are mandatory, while others offer some added flexibility. Parameters* Set `DATA_UPLOAD_FREQUENCY` at which the data will be uploaded for inference. Allowed values are `PT5M`, `PT10M`, `PT15M`, `PT30M` and `PT1H`. * This is both the frequency of the inference scheduler and how often data are uploaded to the source bucket. * **Note**: ***the upload frequency must be compatible with the sampling rate selected at training time.*** *For example, if a model was trained with a 30-minute resampling, asking for 5 minutes won't work and you need to select either PT30M or PT1H for this parameter at inference time.** Set `INFERENCE_DATA_SOURCE_BUCKET` to the S3 bucket of your inference data* Set `INFERENCE_DATA_SOURCE_PREFIX` to the S3 prefix of your inference data* Set `INFERENCE_DATA_OUTPUT_BUCKET` to the S3 bucket where you want inference results* Set `INFERENCE_DATA_OUTPUT_PREFIX` to the S3 prefix where you want inference results* Set `ROLE_ARN_FOR_INFERENCE` to the role to be used to **read** data to infer on and **write** inference output
###Code
# Name of the model on which you want to create this inference scheduler
MODEL_NAME_FOR_CREATING_INFERENCE_SCHEDULER = 'lookout-demo-model-v1'
# Name of the inference scheduler you want to create
INFERENCE_SCHEDULER_NAME = f'{MODEL_NAME_FOR_CREATING_INFERENCE_SCHEDULER}-scheduler'
# Mandatory parameters:
INFERENCE_DATA_SOURCE_BUCKET = BUCKET
INFERENCE_DATA_SOURCE_PREFIX = f'{PREFIX}/input/'
INFERENCE_DATA_OUTPUT_BUCKET = BUCKET
INFERENCE_DATA_OUTPUT_PREFIX = f'{PREFIX}/output/'
ROLE_ARN_FOR_INFERENCE = ROLE_ARN
DATA_UPLOAD_FREQUENCY = 'PT5M'
###Output
_____no_output_____
###Markdown
Optional parameters* Set `DATA_DELAY_OFFSET_IN_MINUTES` to the number of minutes you expect the data upload to be delayed; it's a time buffer for uploading data.* Set ``INPUT_TIMEZONE_OFFSET``. The allowed values are: +00:00, +00:30, -01:00, ... +11:30, +12:00, -00:00, -00:30, -01:00, ... -11:30, -12:00* Set `TIMESTAMP_FORMAT`. The allowed values are `EPOCH`, `yyyy-MM-dd-HH-mm-ss` or `yyyyMMddHHmmss`. This is the format of the timestamp that is used as the suffix of the input data file name. It is used by Lookout for Equipment to understand which files to run inference on (so that you don't need to remove previous files to let the scheduler find which ones to run on).* Set `COMPONENT_TIMESTAMP_DELIMITER`. The allowed values are `-`, `_` or ` `. This is the delimiter character that is used to separate the component from the timestamp in the input filename.
###Code
DATA_DELAY_OFFSET_IN_MINUTES = None
INPUT_TIMEZONE_OFFSET = '+00:00'
COMPONENT_TIMESTAMP_DELIMITER = '_'
TIMESTAMP_FORMAT = 'yyyyMMddHHmmss'
###Output
_____no_output_____
###Markdown
Create the inference schedulerThe CreateInferenceScheduler API creates a scheduler **and** starts it: this means that this starts costing you right away. However, you can stop and start an existing scheduler at will (see at the end of this notebook):
###Code
scheduler = lookout.LookoutEquipmentScheduler(
scheduler_name=INFERENCE_SCHEDULER_NAME,
model_name=MODEL_NAME_FOR_CREATING_INFERENCE_SCHEDULER,
region_name=REGION_NAME
)
scheduler_params = {
'input_bucket': INFERENCE_DATA_SOURCE_BUCKET,
'input_prefix': INFERENCE_DATA_SOURCE_PREFIX,
'output_bucket': INFERENCE_DATA_OUTPUT_BUCKET,
'output_prefix': INFERENCE_DATA_OUTPUT_PREFIX,
'role_arn': ROLE_ARN_FOR_INFERENCE,
'upload_frequency': DATA_UPLOAD_FREQUENCY,
'delay_offset': DATA_DELAY_OFFSET_IN_MINUTES,
'timezone_offset': INPUT_TIMEZONE_OFFSET,
'component_delimiter': COMPONENT_TIMESTAMP_DELIMITER,
'timestamp_format': TIMESTAMP_FORMAT
}
scheduler.set_parameters(**scheduler_params)
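# Optional sanity check (added): review the assembled parameters before creating the scheduler.
pprint.pprint(scheduler_params)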
###Output
_____no_output_____
###Markdown
Prepare the inference data---Let's prepare and send some data in the S3 input location our scheduler will monitor:
###Code
# Let's load all our original signals:
all_tags_fname = os.path.join(DATA, 'training-data', 'expander.parquet')
table = pq.read_table(all_tags_fname)
all_tags_df = table.to_pandas()
del table
all_tags_df.head()
###Output
_____no_output_____
###Markdown
Let's load the tags description: this dataset comes with a tag description file including:* `Tag`: the tag name as it is recorded by the customer in his historian system (for instance the [Honeywell process history database](https://www.honeywellprocess.com/en-US/explore/products/advanced-applications/uniformance/Pages/uniformance-phd.aspx))* `UOM`: the unit of measure for the recorded signal* `Subsystem`: an ID linked to the part of the asset this sensor is attached toFrom there, we can collect the list of components (subsystem column):
###Code
tags_description_fname = os.path.join(RAW_DATA, 'tags_description.csv')
tags_description_df = pd.read_csv(tags_description_fname)
components = tags_description_df['Subsystem'].unique()
tags_description_df.head()
###Output
_____no_output_____
###Markdown
To build our sample inference dataset, we will extract a sample from the original evaluation period of the time series where we know something odd is happening (around **November 21st, 2015 at 4am**):
###Code
# How many sequences do we want to extract:
num_sequences = 3
# The scheduling frequency in minutes: this **MUST** match the
# resampling rate used to train the model:
frequency = 5
# Loops through each sequence:
start = pd.to_datetime('2015-11-21 04:00:00')
for i in range(num_sequences):
end = start + datetime.timedelta(minutes=+frequency - 1)
# Rounding time to the previous 5 minutes:
tm = datetime.datetime.now()
tm = tm - datetime.timedelta(
minutes=tm.minute % frequency,
seconds=tm.second,
microseconds=tm.microsecond
)
tm = tm + datetime.timedelta(minutes=+frequency * (i))
current_timestamp = (tm).strftime(format='%Y%m%d%H%M%S')
# For each sequence, we need to loop through all components:
print(f'Extracting data from {start} to {end}')
new_index = None
for component in components:
# Extracting the dataframe for this component and this particular time range:
signals = list(tags_description_df.loc[(tags_description_df['Subsystem'] == component), 'Tag'])
signals_df = all_tags_df.loc[start:end, signals]
# We need to reset the index to match the time
# at which the scheduler will run inference:
if new_index is None:
new_index = pd.date_range(
start=tm,
periods=signals_df.shape[0],
freq='1min'
)
signals_df.index = new_index
signals_df.index.name = 'Timestamp'
signals_df = signals_df.reset_index()
# Export this file in CSV format:
component_fname = os.path.join(INFER_DATA, 'input', f'{component}_{current_timestamp}.csv')
signals_df.to_csv(component_fname, index=None)
start = start + datetime.timedelta(minutes=+frequency)
# Upload the whole folder to S3, in the input location:
INFERENCE_INPUT = os.path.join(INFER_DATA, 'input')
!aws s3 cp --recursive --quiet $INFERENCE_INPUT s3://$BUCKET/$PREFIX/input
# Now that we've prepared the data, create the scheduler by running:
create_scheduler_response = scheduler.create()
###Output
_____no_output_____
###Markdown
Our scheduler is now running and its inference history is currently empty: Get inference results--- List inference executions **Let's now wait for 5-15 minutes to give some time to the scheduler to run its first inferences.** Once the wait is over, we can use the ListInferenceExecution API for our current inference scheduler. The only mandatory parameter is the scheduler name.You can also choose a time period for which you want to query inference executions for. If you don't specify it, then all executions for an inference scheduler will be listed. If you want to specify the time range, you can do this:```pythonSTART_TIME_FOR_INFERENCE_EXECUTIONS = datetime.datetime(2010,1,3,0,0,0)END_TIME_FOR_INFERENCE_EXECUTIONS = datetime.datetime(2010,1,5,0,0,0)```Which means the executions after `2010-01-03 00:00:00` and before `2010-01-05 00:00:00` will be listed.You can also choose to query for executions in particular status, the allowed status are `IN_PROGRESS`, `SUCCESS` and `FAILED`.
###Code
START_TIME_FOR_INFERENCE_EXECUTIONS = None
END_TIME_FOR_INFERENCE_EXECUTIONS = None
EXECUTION_STATUS = None
execution_summaries = []
while len(execution_summaries) == 0:
execution_summaries = scheduler.list_inference_executions(
start_time=START_TIME_FOR_INFERENCE_EXECUTIONS,
end_time=END_TIME_FOR_INFERENCE_EXECUTIONS,
execution_status=EXECUTION_STATUS
)
if len(execution_summaries) == 0:
print('WAITING FOR THE FIRST INFERENCE EXECUTION')
time.sleep(60)
else:
print('FIRST INFERENCE EXECUTED\n')
break
execution_summaries
###Output
_____no_output_____
###Markdown
We have configured this scheduler to run every five minutes. After a few minutes we can also see the history in the console populated with its first executions: if you leave the scheduler to run beyond 15 minutes, any execution after the 3rd one will fail as we only generated 3 sequences above. When the scheduler starts (for example `datetime.datetime(2021, 1, 27, 9, 15)`, it looks for **a single** CSV file located in the input location with a filename that contains a timestamp set to the previous step. For example, a file named:* subsystem-01_2021012709**10**00.csv will be found and ingested* subsystem-01_2021012708**15**00.csv will **not be** ingested (it will be ingested at the next inference execution)In addition, when opening the file `subsystem-01_20210127091000.csv`, it will look for any row with a date that is between the DataStartTime and the DataEndTime of the inference execution. **If it doesn't find such a row, an internal exception will be thrown.** Download inference resultsLet's have a look at the content now available in the scheduler output location: each inference execution creates a subfolder in the output directory. The subfolder name is the timestamp (GMT) at which the inference was executed and it contains a single [JSON lines](https://jsonlines.org/) file named `results.jsonl`:Each execution summary is a JSON document that has the following format:
###Code
execution_summaries[0]
###Output
_____no_output_____
###Markdown
When the `Status` key from the previous JSON result is set to `SUCCESS`, you can collect the results location in the `CustomerResultObject` field. We are now going to loop through each execution result and download each JSON lines files generated by the scheduler. Then we will insert their results into an overall dataframe for further analysis:
###Code
# Fetch the list of execution summaries in case all executions were not captured yet:
_ = scheduler.list_inference_executions()
# Loops through the executions summaries:
results_json = []
for execution_summary in scheduler.execution_summaries:
print('.', end='')
    # We only get an output if the inference execution is a success:
status = execution_summary['Status']
if status == 'SUCCESS':
# Download the JSON-line file locally:
bucket = execution_summary['CustomerResultObject']['Bucket']
key = execution_summary['CustomerResultObject']['Key']
current_timestamp = key.split('/')[-2]
local_fname = os.path.join(INFER_DATA, 'output', f'centrifugal-pump_{current_timestamp}.jsonl')
s3_fname = f's3://{bucket}/{key}'
!aws s3 cp --quiet $s3_fname $local_fname
# Opens the file and concatenate the results into a dataframe:
with open(local_fname, 'r') as f:
content = [eval(line) for line in f.readlines()]
results_json = results_json + content
# Build the final dataframes with all the results:
results_df = pd.DataFrame(results_json)
results_df['timestamp'] = pd.to_datetime(results_df['timestamp'])
results_df = results_df.set_index('timestamp')
results_df = results_df.sort_index()
results_df.head()
###Output
_____no_output_____
###Markdown
The content of each JSON lines file follows this format: ```json[ { 'timestamp': '2021-04-17T13:25:00.000000', 'prediction': 1, 'diagnostics': [ {'name': 'subsystem-19\\signal-067', 'value': 0.12}, {'name': 'subsystem-18\\signal-099', 'value': 0.0}, {'name': 'subsystem-09\\signal-016', 'value': 0.0}, . . . {'name': 'subsystem-06\\signal-119', 'value': 0.08}, {'name': 'subsystem-10\\signal-071', 'value': 0.02}, {'name': 'subsystem-20\\signal-076', 'value': 0.02} ] } ...]```Each timestamp found in the file is associated with a prediction: 1 when an anomaly is detected and 0 otherwise. When the `prediction` field is 1 (an anomaly is detected), the `diagnostics` field contains each sensor (with the format `component`\\`tag`) and an associated percentage. This percentage corresponds to the magnitude of impact of a given sensor on the detected anomaly. For instance, in the example above, the tag `signal-067` located on the `subsystem-19` component has an estimated 12% magnitude of impact on the anomaly detected at 1:25pm on April 17th, 2021. This dataset has 122 sensors: if each sensor contributed the same way to this event, the impact of each of them would be `100 / 122 = 0.82%`, so 12% is indeed statistically significant. Visualizing the inference results
###Code
import matplotlib.pyplot as plt
import matplotlib.ticker as mtick
import numpy as np
%matplotlib inline
plt.style.use('Solarize_Light2')
plt.rcParams['lines.linewidth'] = 0.5
###Output
_____no_output_____
###Markdown
Single inference analysisLet's first expand the results to expose the content of the **diagnostics** column above into different dataframe columns:
###Code
expanded_results = []
for index, row in results_df.iterrows():
new_row = dict()
new_row.update({'timestamp': index})
new_row.update({'prediction': row['prediction']})
if row['prediction'] == 1:
diagnostics = pd.DataFrame(row['diagnostics'])
diagnostics = dict(zip(diagnostics['name'], diagnostics['value']))
new_row = {**new_row, **diagnostics}
expanded_results.append(new_row)
expanded_results = pd.DataFrame(expanded_results)
expanded_results['timestamp'] = pd.to_datetime(expanded_results['timestamp'])
expanded_results = expanded_results.set_index('timestamp')
expanded_results.head()
###Output
_____no_output_____
###Markdown
Each detected event has detailed diagnostics attached to it. Let's unpack the details for the first event and plot a bar chart similar to the one the console provides when it evaluates a trained model:
###Code
event_details = pd.DataFrame(expanded_results.iloc[0, 1:]).reset_index()
event_details.columns = ['name', 'value']
event_details = event_details.sort_values(by='value')
event_details_limited = event_details.tail(10)
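# Quick check (added for illustration): with N monitored sensors, a uniform split of the
# event would give each sensor roughly 1/N of the contribution; sensors well above that
# baseline are the ones worth investigating first.
baseline = 1.0 / event_details.shape[0]
n_above = int((event_details['value'] > baseline).sum())
print(f'Uniform-contribution baseline: {baseline*100:.2f}% -- {n_above} sensors exceed it')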
# We can then plot a horizontal bar chart:
y_pos = np.arange(event_details_limited.shape[0])
values = list(event_details_limited['value'])
fig = plt.figure(figsize=(12,5))
ax = plt.subplot(1,1,1)
ax.barh(y_pos, event_details_limited['value'], align='center')
ax.set_yticks(y_pos)
ax.set_yticklabels(event_details_limited['name'])
ax.xaxis.set_major_formatter(mtick.PercentFormatter(1.0))
# Add the values in each bar:
for i, v in enumerate(values):
if v == 0:
ax.text(0.0005, i, f'{v*100:.2f}%', color='#000000', verticalalignment='center')
else:
ax.text(0.0005, i, f'{v*100:.2f}%', color='#FFFFFF', fontweight='bold', verticalalignment='center')
plt.title(f'Event detected at {expanded_results.index[0]}', fontsize=12, fontweight='bold')
plt.show()
###Output
_____no_output_____
###Markdown
As we did in the previous notebook, the above bar chart is already of great help to pinpoint what might be going wrong with your asset. Let's load the initial tags description file we prepared in the first notebook and match the sensors with our initial components to group sensors by component:
###Code
# Aggregate event diagnostics at the component level:
event_details[['asset', 'sensor']] = event_details['name'].str.split('\\', expand=True)
component_diagnostics = event_details.groupby(by='asset').sum().sort_values(by='value')
component_diagnostics = component_diagnostics[component_diagnostics['value'] > 0.0]
# Prepare Y position and values for bar chart:
y_pos = np.arange(component_diagnostics.shape[0])
values = list(component_diagnostics['value'])
# Plot the bar chart:
fig = plt.figure(figsize=(12,6))
ax = plt.subplot(1,1,1)
ax.barh(y_pos, component_diagnostics['value'], align='center')
ax.set_yticks(y_pos)
ax.set_yticklabels(list(component_diagnostics.index))
ax.xaxis.set_major_formatter(mtick.PercentFormatter(1.0))
# Add the values in each bar:
for i, v in enumerate(values):
ax.text(0.005, i, f'{v*100:.2f}%', color='#FFFFFF', fontweight='bold', verticalalignment='center')
# Show the final plot:
plt.show()
###Output
_____no_output_____
###Markdown
Inference scheduler operations--- Stop inference scheduler**Be frugal**, running the scheduler is the main cost driver of Amazon Lookout for Equipment. Use the following API to stop an already running inference scheduler. This will stop the periodic inference executions:
###Code
scheduler.stop()
###Output
_____no_output_____
###Markdown
Start an inference schedulerYou can restart any `STOPPED` inference scheduler using this API:
###Code
scheduler.start()
###Output
_____no_output_____
###Markdown
Delete an inference schedulerYou can delete a **stopped** scheduler you have no more use of: you can only have one scheduler per model.
###Code
scheduler.stop()
scheduler.delete()
###Output
_____no_output_____
###Markdown
**Amazon Lookout for Equipment** - Demonstration on an anonymized expander dataset*Part 5: Scheduling regular inference calls*
###Code
BUCKET = '<YOUR_BUCKET_NAME_HERE>'
PREFIX = 'data/scheduled_inference'
###Output
_____no_output_____
###Markdown
Initialization---In this notebook, we will update the repository structure to add an inference directory in the data folder:```/lookout-equipment-demo|+-- data/| || +-- inference/| | || | |-- input/| | || | \-- output/| || +-- labelled-data/| | \-- labels.csv| || \-- training-data/| \-- expander/| |-- subsystem-01| | \-- subsystem-01.csv| || |-- subsystem-02| | \-- subsystem-02.csv| || |-- ...| || \-- subsystem-24| \-- subsystem-24.csv|+-- dataset/| |-- labels.csv| |-- tags_description.csv| |-- timeranges.txt| \-- timeseries.zip|+-- notebooks/| |-- 1_data_preparation.ipynb| |-- 2_dataset_creation.ipynb| |-- 3_model_training.ipynb| |-- 4_model_evaluation.ipynb| \-- 5_inference_scheduling.ipynb <<< This notebook <<<|+-- utils/ |-- lookout_equipment_utils.py \-- lookoutequipment.json``` Imports
###Code
%%sh
pip -q install --upgrade pip
pip -q install --upgrade awscli boto3 sagemaker
aws configure add-model --service-model file://../utils/lookoutequipment.json --service-name lookoutequipment
from IPython.core.display import HTML
HTML("<script>Jupyter.notebook.kernel.restart()</script>")
import boto3
import datetime
import os
import pandas as pd
import pprint
import pyarrow as pa
import pyarrow.parquet as pq
import sagemaker
import s3fs
import sys
import time
import uuid
import warnings
# Helper functions for managing Lookout for Equipment API calls:
sys.path.append('../utils')
import lookout_equipment_utils as lookout
###Output
_____no_output_____
###Markdown
Parameters
###Code
warnings.filterwarnings('ignore')
DATA = os.path.join('..', 'data')
RAW_DATA = os.path.join('..', 'dataset')
INFER_DATA = os.path.join(DATA, 'inference')
os.makedirs(os.path.join(INFER_DATA, 'input'), exist_ok=True)
os.makedirs(os.path.join(INFER_DATA, 'output'), exist_ok=True)
ROLE_ARN = sagemaker.get_execution_role()
REGION_NAME = boto3.session.Session().region_name
###Output
_____no_output_____
###Markdown
Create an inference scheduler---While navigating to the model details part of the console, you will see that you have no inference scheduled yet: Scheduler configurationLet's create a new inference scheduler: some parameters are mandatory, while others offer some added flexibility. Parameters* Set `DATA_UPLOAD_FREQUENCY` at which the data will be uploaded for inference. Allowed values are `PT5M`, `PT10M`, `PT15M`, `PT30M` and `PT1H`. * This is both the frequency of the inference scheduler and how often data are uploaded to the source bucket. * **Note**: ***the upload frequency must be compatible with the sampling rate selected at training time.*** *For example, if a model was trained with a 30-minute resampling, asking for 5 minutes won't work and you need to select either PT30M or PT1H for this parameter at inference time.** Set `INFERENCE_DATA_SOURCE_BUCKET` to the S3 bucket of your inference data* Set `INFERENCE_DATA_SOURCE_PREFIX` to the S3 prefix of your inference data* Set `INFERENCE_DATA_OUTPUT_BUCKET` to the S3 bucket where you want inference results* Set `INFERENCE_DATA_OUTPUT_PREFIX` to the S3 prefix where you want inference results* Set `ROLE_ARN_FOR_INFERENCE` to the role to be used to **read** data to infer on and **write** inference output
###Code
# Name of the inference scheduler you want to create
INFERENCE_SCHEDULER_NAME = 'lookout-demo-model-v1-scheduler'
# Name of the model on which you want to create this inference scheduler
MODEL_NAME_FOR_CREATING_INFERENCE_SCHEDULER = 'lookout-demo-model-v1'
# Mandatory parameters:
INFERENCE_DATA_SOURCE_BUCKET = BUCKET
INFERENCE_DATA_SOURCE_PREFIX = f'{PREFIX}/input/'
INFERENCE_DATA_OUTPUT_BUCKET = BUCKET
INFERENCE_DATA_OUTPUT_PREFIX = f'{PREFIX}/output/'
ROLE_ARN_FOR_INFERENCE = ROLE_ARN
DATA_UPLOAD_FREQUENCY = 'PT5M'
###Output
_____no_output_____
###Markdown
Optional parameters* Set `DATA_DELAY_OFFSET_IN_MINUTES` to the number of minutes you expect the data upload to be delayed; it's a time buffer for uploading data.* Set ``INPUT_TIMEZONE_OFFSET``. The allowed values are: +00:00, +00:30, -01:00, ... +11:30, +12:00, -00:00, -00:30, -01:00, ... -11:30, -12:00* Set `TIMESTAMP_FORMAT`. The allowed values are `EPOCH`, `yyyy-MM-dd-HH-mm-ss` or `yyyyMMddHHmmss`. This is the format of the timestamp that is used as the suffix of the input data file name. It is used by Lookout for Equipment to understand which files to run inference on (so that you don't need to remove previous files to let the scheduler find which ones to run on).* Set `COMPONENT_TIMESTAMP_DELIMITER`. The allowed values are `-`, `_` or ` `. This is the delimiter character that is used to separate the component from the timestamp in the input filename.
###Code
DATA_DELAY_OFFSET_IN_MINUTES = None
INPUT_TIMEZONE_OFFSET = '+00:00'
COMPONENT_TIMESTAMP_DELIMITER = '_'
TIMESTAMP_FORMAT = 'yyyyMMddHHmmss'
###Output
_____no_output_____
###Markdown
Create the inference schedulerThe CreateInferenceScheduler API creates a scheduler **and** starts it: this means that this starts costing you right away. However, you can stop and start an existing scheduler at will (see at the end of this notebook):
###Code
scheduler = lookout.LookoutEquipmentScheduler(
scheduler_name=INFERENCE_SCHEDULER_NAME,
model_name=MODEL_NAME_FOR_CREATING_INFERENCE_SCHEDULER,
region_name=REGION_NAME
)
scheduler_params = {
'input_bucket': INFERENCE_DATA_SOURCE_BUCKET,
'input_prefix': INFERENCE_DATA_SOURCE_PREFIX,
'output_bucket': INFERENCE_DATA_OUTPUT_BUCKET,
'output_prefix': INFERENCE_DATA_OUTPUT_PREFIX,
'role_arn': ROLE_ARN_FOR_INFERENCE,
'upload_frequency': DATA_UPLOAD_FREQUENCY,
'delay_offset': DATA_DELAY_OFFSET_IN_MINUTES,
'timezone_offset': INPUT_TIMEZONE_OFFSET,
'component_delimiter': COMPONENT_TIMESTAMP_DELIMITER,
'timestamp_format': TIMESTAMP_FORMAT
}
scheduler.set_parameters(**scheduler_params)
###Output
_____no_output_____
###Markdown
Prepare the inference data---Let's prepare and send some data in the S3 input location our scheduler will monitor:
###Code
# Let's load all our original signals:
all_tags_fname = os.path.join(DATA, 'training-data', 'expander.parquet')
table = pq.read_table(all_tags_fname)
all_tags_df = table.to_pandas()
del table
all_tags_df.head()
###Output
_____no_output_____
###Markdown
Let's load the tags description: this dataset comes with a tag description file including:* `Tag`: the tag name as it is recorded by the customer in his historian system (for instance the [Honeywell process history database](https://www.honeywellprocess.com/en-US/explore/products/advanced-applications/uniformance/Pages/uniformance-phd.aspx))* `UOM`: the unit of measure for the recorded signal* `Subsystem`: an ID linked to the part of the asset this sensor is attached toFrom there, we can collect the list of components (subsystem column):
###Code
tags_description_fname = os.path.join(RAW_DATA, 'tags_description.csv')
tags_description_df = pd.read_csv(tags_description_fname)
components = tags_description_df['Subsystem'].unique()
tags_description_df.head()
###Output
_____no_output_____
###Markdown
To build our sample inference dataset, we will extract the last few minutes of the evaluation period of the original time series:
###Code
# How many sequences do we want to extract:
num_sequences = 3
# The scheduling frequency in minutes: this **MUST** match the
# resampling rate used to train the model:
frequency = 5
# Loops through each sequence:
start = all_tags_df.index.max() + datetime.timedelta(minutes=-frequency * (num_sequences) + 1)
for i in range(num_sequences):
end = start + datetime.timedelta(minutes=+frequency - 1)
# Rounding time to the previous 5 minutes:
tm = datetime.datetime.now()
tm = tm - datetime.timedelta(
minutes=tm.minute % frequency,
seconds=tm.second,
microseconds=tm.microsecond
)
tm = tm + datetime.timedelta(minutes=+frequency * (i))
current_timestamp = (tm).strftime(format='%Y%m%d%H%M%S')
# For each sequence, we need to loop through all components:
print(f'Extracting data from {start} to {end}:')
new_index = None
for component in components:
# Extracting the dataframe for this component and this particular time range:
signals = list(tags_description_df.loc[(tags_description_df['Subsystem'] == component), 'Tag'])
signals_df = all_tags_df.loc[start:end, signals]
# We need to reset the index to match the time
# at which the scheduler will run inference:
if new_index is None:
new_index = pd.date_range(
start=tm,
periods=signals_df.shape[0],
freq='1min'
)
signals_df.index = new_index
signals_df.index.name = 'Timestamp'
signals_df = signals_df.reset_index()
signals_df['Timestamp'] = signals_df['Timestamp'].dt.strftime('%Y-%m-%dT%H:%M:%S.%f')
# Export this file in CSV format:
component_fname = os.path.join(INFER_DATA, 'input', f'{component}_{current_timestamp}.csv')
signals_df.to_csv(component_fname, index=None)
start = start + datetime.timedelta(minutes=+frequency)
# Upload the whole folder to S3, in the input location:
INFERENCE_INPUT = os.path.join(INFER_DATA, 'input')
!aws s3 cp --recursive --quiet $INFERENCE_INPUT s3://$BUCKET/$PREFIX/input
# Now that we've prepared the data, create the scheduler by running:
create_scheduler_response = scheduler.create()
###Output
_____no_output_____
###Markdown
Our scheduler is now running and its inference history is currently empty: Get inference results--- List inference executions **Let's now wait for 5-15 minutes to give some time to the scheduler to run its first inferences.** Once the wait is over, we can use the ListInferenceExecution API for our current inference scheduler. The only mandatory parameter is the scheduler name.You can also choose a time period for which you want to query inference executions for. If you don't specify it, then all executions for an inference scheduler will be listed. If you want to specify the time range, you can do this:```pythonSTART_TIME_FOR_INFERENCE_EXECUTIONS = datetime.datetime(2010,1,3,0,0,0)END_TIME_FOR_INFERENCE_EXECUTIONS = datetime.datetime(2010,1,5,0,0,0)```Which means the executions after `2010-01-03 00:00:00` and before `2010-01-05 00:00:00` will be listed.You can also choose to query for executions in particular status, the allowed status are `IN_PROGRESS`, `SUCCESS` and `FAILED`.
###Code
START_TIME_FOR_INFERENCE_EXECUTIONS = None
END_TIME_FOR_INFERENCE_EXECUTIONS = None
EXECUTION_STATUS = None
execution_summaries = []
while len(execution_summaries) == 0:
execution_summaries = scheduler.list_inference_executions(
start_time=START_TIME_FOR_INFERENCE_EXECUTIONS,
end_time=END_TIME_FOR_INFERENCE_EXECUTIONS,
execution_status=EXECUTION_STATUS
)
if len(execution_summaries) == 0:
print('WAITING FOR THE FIRST INFERENCE EXECUTION')
time.sleep(60)
else:
print('FIRST INFERENCE EXECUTED\n')
break
execution_summaries
###Output
_____no_output_____
###Markdown
We have configured this scheduler to run every five minutes. After a few minutes we can also see the history in the console populated with its first execution: When the scheduler starts (for example `datetime.datetime(2021, 1, 27, 9, 15)`), it looks for **a single** CSV file located in the input location with a filename that contains a timestamp set to the previous step. For example, a file named:* subsystem-01_2021012709**10**00.csv will be found and ingested* subsystem-01_2021012708**15**00.csv will **not be** ingested (it will be ingested at the next inference execution)In addition, when opening the file `subsystem-01_20210127091000.csv`, it will look for any row with a date that is between the DataStartTime and the DataEndTime of the inference execution. If it doesn't find such a row, an internal exception will be thrown. Get actual prediction results After each successful inference, a CSV file is positioned in the output location of your bucket. Each inference creates a new folder with a single `results.csv` file in it. Let's read these files and display their content here:
###Code
results_df = scheduler.get_predictions()
results_df.to_csv(os.path.join(INFER_DATA, 'output', 'results.csv'))
results_df
###Output
_____no_output_____
###Markdown
Inference scheduler operations--- Stop inference scheduler**Be frugal**, running the scheduler is the main cost driver of Amazon Lookout for Equipment. Use the following API to stop an already running inference scheduler. This will stop the periodic inference executions:
###Code
scheduler.stop()
###Output
_____no_output_____
###Markdown
Start an inference schedulerYou can restart any `STOPPED` inference scheduler using this API:
###Code
scheduler.start()
###Output
_____no_output_____
###Markdown
Delete an inference schedulerYou can delete a **stopped** scheduler you have no more use of: you can only have one scheduler per model.
###Code
scheduler.stop()
scheduler.delete()
###Output
_____no_output_____
###Markdown
**Amazon Lookout for Equipment** - Demonstration on an anonymized compressor dataset*Part 5: Scheduling regular inference calls* Initialization---In this notebook, we will update the repository structure to add an inference directory in the data folder:```/lookout-equipment-demo/getting_started/|├── data/| || ├── inference-data/| | ├── input/| | └── output/| || ├── labelled-data/| | └── labels.csv| || └── training-data/| └── expander/| ├── subsystem-01| | └── subsystem-01.csv| || ├── subsystem-02| | └── subsystem-02.csv| || ├── ...| || └── subsystem-24| └── subsystem-24.csv|├── dataset/ <<< Original dataset <<<| ├── labels.csv| ├── tags_description.csv| ├── timeranges.txt| └── timeseries.zip|├── notebooks/| ├── 1_data_preparation.ipynb| ├── 2_dataset_creation.ipynb| ├── 3_model_training.ipynb| ├── 4_model_evaluation.ipynb| ├── 5_inference_scheduling.ipynb <<< This notebook <<<| └── config.py|└── utils/ ├── aws_matplotlib_light.py └── lookout_equipment_utils.py```
###Code
!pip install --quiet --upgrade tqdm sagemaker
import boto3
import config
import datetime
import os
import pandas as pd
import pprint
import pyarrow as pa
import pyarrow.parquet as pq
import sagemaker
import s3fs
import sys
import time
import uuid
# Helper functions for managing Lookout for Equipment API calls:
sys.path.append('../utils')
import lookout_equipment_utils as lookout
###Output
_____no_output_____
###Markdown
Parameters
###Code
DATA = os.path.join('..', 'data')
RAW_DATA = os.path.join('..', 'dataset')
INFER_DATA = os.path.join(DATA, 'inference-data')
os.makedirs(os.path.join(INFER_DATA, 'input'), exist_ok=True)
os.makedirs(os.path.join(INFER_DATA, 'output'), exist_ok=True)
ROLE_ARN = sagemaker.get_execution_role()
REGION_NAME = boto3.session.Session().region_name
DATASET_NAME = config.DATASET_NAME
BUCKET = config.BUCKET
PREFIX_TRAINING = config.PREFIX_TRAINING
PREFIX_LABEL = config.PREFIX_LABEL
PREFIX_INFERENCE = config.PREFIX_INFERENCE
MODEL_NAME = config.MODEL_NAME
###Output
_____no_output_____
###Markdown
Scheduler configuration
###Code
# Name of the model on which you want to create this inference scheduler
MODEL_NAME_FOR_CREATING_INFERENCE_SCHEDULER = MODEL_NAME
# Name of the inference scheduler you want to create
INFERENCE_SCHEDULER_NAME = config.INFERENCE_SCHEDULER_NAME
# Mandatory parameters:
INFERENCE_DATA_SOURCE_BUCKET = BUCKET
INFERENCE_DATA_SOURCE_PREFIX = f'{PREFIX_INFERENCE}/input/'
INFERENCE_DATA_OUTPUT_BUCKET = BUCKET
INFERENCE_DATA_OUTPUT_PREFIX = f'{PREFIX_INFERENCE}/output/'
ROLE_ARN_FOR_INFERENCE = ROLE_ARN
DATA_UPLOAD_FREQUENCY = 'PT5M'
###Output
_____no_output_____
###Markdown
Optional parameters
###Code
DATA_DELAY_OFFSET_IN_MINUTES = None
INPUT_TIMEZONE_OFFSET = '+00:00'
COMPONENT_TIMESTAMP_DELIMITER = '_'
TIMESTAMP_FORMAT = 'yyyyMMddHHmmss'
###Output
_____no_output_____
###Markdown
Create the inference scheduler
###Code
scheduler = lookout.LookoutEquipmentScheduler(
scheduler_name=INFERENCE_SCHEDULER_NAME,
model_name=MODEL_NAME_FOR_CREATING_INFERENCE_SCHEDULER,
region_name=REGION_NAME
)
scheduler_params = {
'input_bucket': INFERENCE_DATA_SOURCE_BUCKET,
'input_prefix': INFERENCE_DATA_SOURCE_PREFIX,
'output_bucket': INFERENCE_DATA_OUTPUT_BUCKET,
'output_prefix': INFERENCE_DATA_OUTPUT_PREFIX,
'role_arn': ROLE_ARN_FOR_INFERENCE,
'upload_frequency': DATA_UPLOAD_FREQUENCY,
'delay_offset': DATA_DELAY_OFFSET_IN_MINUTES,
'timezone_offset': INPUT_TIMEZONE_OFFSET,
'component_delimiter': COMPONENT_TIMESTAMP_DELIMITER,
'timestamp_format': TIMESTAMP_FORMAT
}
scheduler.set_parameters(**scheduler_params)
###Output
_____no_output_____
###Markdown
Prepare the inference data
###Code
# Let's load all our original signals:
all_tags_fname = os.path.join(DATA, 'training-data', 'expander.parquet')
table = pq.read_table(all_tags_fname)
all_tags_df = table.to_pandas()
del table
all_tags_df.head()
tags_description_fname = os.path.join(RAW_DATA, 'tags_description.csv')
tags_description_df = pd.read_csv(tags_description_fname)
components = tags_description_df['Subsystem'].unique()
tags_description_df.head()
start = pd.to_datetime('2015-11-21 04:00:00')
# How many sequences do we want to extract:
num_sequences = 12
# The scheduling frequency in minutes: this **MUST** match the
# resampling rate used to train the model:
frequency = 5
# Loops through each sequence:
for i in range(num_sequences):
end = start + datetime.timedelta(minutes=+frequency - 1)
# Rounding time to the previous 5 minutes:
tm = datetime.datetime.now()
tm = tm - datetime.timedelta(
minutes=tm.minute % frequency,
seconds=tm.second,
microseconds=tm.microsecond
)
tm = tm + datetime.timedelta(minutes=+frequency * (i))
current_timestamp = (tm).strftime(format='%Y%m%d%H%M%S')
# For each sequence, we need to loop through all components:
print(f'Extracting data from {start} to {end}')
new_index = None
for component in components:
# Extracting the dataframe for this component and this particular time range:
signals = list(tags_description_df.loc[(tags_description_df['Subsystem'] == component), 'Tag'])
signals_df = all_tags_df.loc[start:end, signals]
# We need to reset the index to match the time
# at which the scheduler will run inference:
if new_index is None:
new_index = pd.date_range(
start=tm,
periods=signals_df.shape[0],
freq='1min'
)
signals_df.index = new_index
signals_df.index.name = 'Timestamp'
signals_df = signals_df.reset_index()
# Export this file in CSV format:
component_fname = os.path.join(INFER_DATA, 'input', f'{component}_{current_timestamp}.csv')
signals_df.to_csv(component_fname, index=None)
start = start + datetime.timedelta(minutes=+frequency)
# Upload the whole folder to S3, in the input location:
INFERENCE_INPUT = os.path.join(INFER_DATA, 'input')
!aws s3 cp --recursive --quiet $INFERENCE_INPUT s3://$BUCKET/$PREFIX_INFERENCE/input
# Now that we've prepared the data, create the scheduler by running:
create_scheduler_response = scheduler.create()
###Output
_____no_output_____
###Markdown
Get inference results--- List inference executions
###Code
START_TIME_FOR_INFERENCE_EXECUTIONS = None
END_TIME_FOR_INFERENCE_EXECUTIONS = None
EXECUTION_STATUS = None
execution_summaries = []
while len(execution_summaries) == 0:
execution_summaries = scheduler.list_inference_executions(
start_time=START_TIME_FOR_INFERENCE_EXECUTIONS,
end_time=END_TIME_FOR_INFERENCE_EXECUTIONS,
execution_status=EXECUTION_STATUS
)
if len(execution_summaries) == 0:
print('WAITING FOR THE FIRST INFERENCE EXECUTION')
time.sleep(60)
else:
print('FIRST INFERENCE EXECUTED\n')
break
execution_summaries
###Output
_____no_output_____
###Markdown
Download inference resultsLet's have a look at the content now available in the scheduler output location: each inference execution creates a subfolder in the output directory. The subfolder name is the timestamp (GMT) at which the inference was executed and it contains a single [JSON lines](https://jsonlines.org/) file named `results.jsonl`:
###Code
# Fetch the list of execution summaries in case all executions were not captured yet:
execution_summaries = scheduler.list_inference_executions()
execution_summaries[0]
# Loops through the execution summaries:
results_json = []
for execution_summary in scheduler.execution_summaries:
print('.', end='')
# We only get an output if the inference execution is a success:
status = execution_summary['Status']
if status == 'SUCCESS':
# Download the JSON-line file locally:
bucket = execution_summary['CustomerResultObject']['Bucket']
key = execution_summary['CustomerResultObject']['Key']
current_timestamp = key.split('/')[-2]
local_fname = os.path.join(INFER_DATA, 'output', f'centrifugal-pump_{current_timestamp}.jsonl')
s3_fname = f's3://{bucket}/{key}'
!aws s3 cp --quiet $s3_fname $local_fname
# Opens the file and concatenates the results into a dataframe:
with open(local_fname, 'r') as f:
content = [eval(line) for line in f.readlines()]
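# Note: each line of results.jsonl is valid JSON, so json.loads(line) (after an
# `import json`) would be a safer way to parse it than eval()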
results_json = results_json + content
# Build the final dataframes with all the results:
results_df = pd.DataFrame(results_json)
results_df['timestamp'] = pd.to_datetime(results_df['timestamp'])
results_df = results_df.set_index('timestamp')
results_df = results_df.sort_index()
results_df.head()
###Output
_____no_output_____
###Markdown
The content of each JSON lines file follows this format: ```json[ { 'timestamp': '2021-04-17T13:25:00.000000', 'prediction': 1, 'prediction_reason': 'ANOMALY_DETECTED', 'diagnostics': [ {'name': 'subsystem-19\\signal-067', 'value': 0.12}, {'name': 'subsystem-18\\signal-099', 'value': 0.0}, {'name': 'subsystem-09\\signal-016', 'value': 0.0}, . . . {'name': 'subsystem-06\\signal-119', 'value': 0.08}, {'name': 'subsystem-10\\signal-071', 'value': 0.02}, {'name': 'subsystem-20\\signal-076', 'value': 0.02} ] } ...]``` Visualizing the inference results
###Code
import matplotlib.pyplot as plt
import matplotlib.ticker as mtick
import numpy as np
%matplotlib inline
# Load style sheet:
plt.style.use('../utils/aws_matplotlib_light.py')
# Get colors from custom AWS palette:
prop_cycle = plt.rcParams['axes.prop_cycle']
colors = prop_cycle.by_key()['color']
###Output
_____no_output_____
###Markdown
Single inference analysisLet's first expand the results to expose the content of the **diagnostics** column above into different dataframe columns:
###Code
expanded_results = []
for index, row in results_df.iterrows():
new_row = dict()
new_row.update({'timestamp': index})
new_row.update({'prediction': row['prediction']})
if row['prediction'] == 1:
diagnostics = pd.DataFrame(row['diagnostics'])
diagnostics = dict(zip(diagnostics['name'], diagnostics['value']))
new_row = {**new_row, **diagnostics}
expanded_results.append(new_row)
expanded_results = pd.DataFrame(expanded_results)
expanded_results['timestamp'] = pd.to_datetime(expanded_results['timestamp'])
expanded_results = expanded_results.set_index('timestamp')
expanded_results.head()
###Output
_____no_output_____
###Markdown
Each detected event has detailed diagnostics attached. Let's unpack the details for the first event and plot a bar chart similar to the one the console provides when it evaluates a trained model:
###Code
event_details = pd.DataFrame(expanded_results.iloc[0, 1:]).reset_index()
event_details.columns = ['name', 'value']
event_details = event_details.sort_values(by='value')
event_details_limited = event_details.tail(10)
# We can then plot a horizontal bar chart:
y_pos = np.arange(event_details_limited.shape[0])
values = list(event_details_limited['value'])
fig = plt.figure(figsize=(12,5))
ax = plt.subplot(1,1,1)
ax.barh(y_pos, event_details_limited['value'], align='center')
ax.set_yticks(y_pos)
ax.set_yticklabels(event_details_limited['name'])
ax.xaxis.set_major_formatter(mtick.PercentFormatter(1.0))
# Add the values in each bar:
for i, v in enumerate(values):
if v == 0:
ax.text(0.0005, i, f'{v*100:.2f}%', color='#000000', verticalalignment='center')
else:
ax.text(0.0005, i, f'{v*100:.2f}%', color='#FFFFFF', fontweight='bold', verticalalignment='center')
plt.title(f'Event detected at {expanded_results.index[0]}', fontsize=12, fontweight='bold')
plt.show()
###Output
_____no_output_____
###Markdown
As we did in the previous notebook, the bar chart above is already a great help in pinpointing what might be going wrong with your asset. Let's load the tags description file we prepared in the first notebook and match the sensors to our initial components, so we can group the diagnostics by component:
###Code
# Aggregate event diagnostics at the component level:
event_details[['asset', 'sensor']] = event_details['name'].str.split('\\', expand=True)
component_diagnostics = event_details.groupby(by='asset').sum().sort_values(by='value')
component_diagnostics = component_diagnostics[component_diagnostics['value'] > 0.0]
# Prepare Y position and values for bar chart:
y_pos = np.arange(component_diagnostics.shape[0])
values = list(component_diagnostics['value'])
# Plot the bar chart:
fig = plt.figure(figsize=(12,6))
ax = plt.subplot(1,1,1)
ax.barh(y_pos, component_diagnostics['value'], align='center')
ax.set_yticks(y_pos)
ax.set_yticklabels(list(component_diagnostics.index))
ax.xaxis.set_major_formatter(mtick.PercentFormatter(1.0))
# Add the values in each bar:
for i, v in enumerate(values):
ax.text(0.005, i, f'{v*100:.2f}%', color='#FFFFFF', fontweight='bold', verticalalignment='center')
# Show the final plot:
plt.show()
###Output
_____no_output_____
###Markdown
Inference scheduler operations--- Stop inference scheduler**Be frugal**: running the scheduler is the main cost driver of Amazon Lookout for Equipment. Use the following API to stop an already running inference scheduler; this will stop the periodic inference executions:
###Code
scheduler.stop()
###Output
_____no_output_____
###Markdown
Start an inference schedulerYou can restart any `STOPPED` inference scheduler using this API:
###Code
scheduler.start()
###Output
_____no_output_____
###Markdown
Delete an inference schedulerYou can delete a **stopped** scheduler you have no more use of: you can only have one scheduler per model.
###Code
scheduler.stop()
scheduler.delete()
###Output
_____no_output_____
|
Notebooks/Lec01_2_scripting.ipynb
|
###Markdown
**Lec1_2: Basic Programming (Python)** **1. The logical operator ^**
###Code
# Print True ^ False
print(True ^ False)
###Output
True
###Markdown
1-1. Bit-by-bit operations (binary arithmetic)
###Code
# Print 10 ^ 4
print(10^4)
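# ^ is the bitwise XOR operator: 10 = 0b1010 and 4 = 0b0100, so 10 ^ 4 = 0b1110 = 14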
###Output
14
###Markdown
2. Write and run introduction.py so that it prints a screen like the one below.  Add the result so that both the Notepad editor and the cmd window appear together. 3. Variables 3-1. Let's assign a variable.
###Code
x = 100 # assign the value 100 to x
print(x)
###Output
_____no_output_____
###Markdown
3-2. Calculations using variables.
###Code
x = 100
y = 200
avg = (x+y)/2.
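# avg is 150.0 here; the trailing dot makes 2. a float, so the result stays a float even under Python 2's integer division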
###Output
_____no_output_____
###Markdown
3-3. Strings can also be stored in variables.
###Code
name = '내이름'
addr = '부산'
print(name + '은(는)'+addr+'에 살아요')
###Output
_____no_output_____
###Markdown
3-4. Assigning a variable does not mean 'is equal to'.
###Code
x = 4
x = x+1
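# The right-hand side is evaluated first with the current value of x (4), and the result (5) is re-bound to x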
###Output
_____no_output_____
###Markdown
3-5. Why does the following code raise an error? Fix the error and run it correctly.
###Code
sum = 100
list_a = [10,20,30]
total = sum(list_a)
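# This raises TypeError: 'int' object is not callable, because `sum = 100` shadows the built-in sum().
# Renaming the variable (e.g. total_value = 100) or running `del sum` restores the built-in.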
###Output
_____no_output_____
###Markdown
4. Using the input() function 4-1. Let's print the '내이름은 부산에 살아요' message from above again, this time using input().
###Code
name = input('본인의 이름을 입력하세요:') # assign your own name to the variable name
addr = input('사는 도시 이름을 입력하세요:') # assign the city you live in to the variable addr
print(name + '은(는)' + addr + '에 살아요')
###Output
_____no_output_____
###Markdown
4-2. Let's write code that can compute the area of circles with various radii. Call this code radius.py.
###Code
# radius.py
# Write code here that reads the radius using the input() function.
# Write code here that computes the area of the circle from that radius.
# Write code here that prints 'A circle with radius (radius value) has an area of (area value)'.
# radius.py
radius = int(input('반지름을 입력하세요:')) # read the radius from the user and assign it to radius
area = radius**2 * 3.14 # compute the area of the circle from the radius stored in radius
print('반지름이 '+str(radius)+'인 원의 넓이는 '+ str(area)+'입니다.') # print the radius and the area together
###Output
_____no_output_____
|
final/ipynb/test_bm25_ir.ipynb
|
###Markdown
A BM25-based retrieval method---Core idea: retrieval based on n-gram features
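As a rough, self-contained illustration of what "n-gram features" means here (the actual helpers in `jddc.bm25`, such as `n_grams` and `jieba_tokenize`, may well be implemented differently), a character n-gram tokenizer could look like this:
```python
def char_ngrams(text, n=3):
    # Return the overlapping character n-grams of `text`, or the text itself if it is shorter than n
    if len(text) < n:
        return [text]
    return [text[i:i + n] for i in range(len(text) - n + 1)]

char_ngrams("什么时候发货", n=3)  # ['什么时', '么时候', '时候发', '候发货']
```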
###Code
import os
import re
import time
from tqdm import tqdm
import random
import sys
import pprint
sys.path.insert(0, "/home/team55/notespace/zengbin")
random.seed(1234)
from jddc.config import BM25Config
from jddc.utils import write_file, read_file, save_to_pkl, read_from_pkl, create_logger
from jddc.utils import insure_folder_exists
from jddc.bm25 import *
print(str(conf))
###Output
_____no_output_____
###Markdown
1 - Prepare the data---
###Code
all_sessions = read_from_pkl(conf.pkl_sessions)
# sessions = random.sample(all_sessions, 1000000)
# sessions = all_sessions
left, right = random.sample(all_sessions[:400000], 100000), random.sample(all_sessions[400000:], 200000)
data_q = []
data_a = []
# Single-turn dataset
for sess in left:
q, a = sess.qaqaq_a
if 2 < len(a) < 100:
q = re.sub("(\t)| ", "", q)
q_split = q.split("<s>")
if len(q_split) == 5:
q_merged = "".join([q_split[0], q_split[2], q_split[4]])
data_q.append(q_merged.replace("\t", ""))
data_a.append(a.replace("\t", " "))
# Multi-turn dataset
for sess in right:
qas = sess.qas_merged
if qas[0][0] == "1":
qas = qas[1:]
if qas[-1][0] == "0":
qas = qas[:-1]
assert len(qas) % 2 == 0
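# After the two trims above the turn sequence starts with a '0' turn and ends with a '1' turn,
# so consecutive pairs (qas[i], qas[i + 1]) can be read off as (question, answer)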
for i in range(0, len(qas), 2):
# Randomly decide whether to add this QA pair
if random.choice([True, False]):
data_q.append(qas[i][1].replace("\t", ""))
data_a.append(qas[i + 1][1].replace("\t", " "))
questions, answers = data_q, data_a
questions, answers = create_dataset_for_bm25_01(sessions)
len(questions)
###Output
_____no_output_____
###Markdown
2 - Build / load the BM25 model---
###Code
questions_tokens = []
for q in tqdm(questions, desc="cut"):
# q_tokens = n_grams(q, conf.n)
q_tokens = jieba_tokenize(q, for_search=False)
questions_tokens.append(q_tokens)
data = read_file(os.path.join(conf.base_path, "seq2seq/train.tsv"))
questions_tokens = [x.split("\t")[0].split(" ") for x in data]
answers = [x.split("\t")[1].strip("\n").replace(" ", "") for x in data]
answers[0]
model = bm25.BM25(questions_tokens)
average_idf = sum(float(val) for val in
model.idf.values()) / len(model.idf)
data = [model, answers, average_idf]
save_to_pkl(file=conf.pkl_bm25, data=data)
model, answers, average_idf = create_bm25_model(questions, answers)
model, answers, average_idf = load_bm25_model()
###Output
_____no_output_____
###Markdown
3 - Evaluate the model---- At prediction time, candidate questions are first filtered by their tokenized sentence length, keeping only questions of similar length, and a similarity score is then computed for each remaining candidate.
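`run_prediction` hides this retrieval loop; a minimal sketch of the "filter by length, then score" idea (this is not the actual implementation in `jddc.bm25`, and it uses a simple token-overlap ratio as a stand-in for the BM25 score) might look like:
```python
def retrieve(query_tokens, questions_tokens, answers, max_len_diff=3):
    # Keep only candidates whose tokenized length is close to the query's
    candidates = [i for i, toks in enumerate(questions_tokens)
                  if abs(len(toks) - len(query_tokens)) <= max_len_diff]
    if not candidates:
        return None
    # Score each remaining candidate (token overlap as a stand-in for BM25)
    def overlap(i):
        q, c = set(query_tokens), set(questions_tokens[i])
        return len(q & c) / (len(q | c) or 1)
    return answers[max(candidates, key=overlap)]
```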
###Code
run_prediction(conf.file_test_q, "/home/team55/notespace/zengbin/answers/bm25_answers008.txt")
###Output
<BM25Config | model:/home/team55/notespace/data/BM25/bm25_3.model.small.pkl; n:3; top:5; random_num:1234>
|
MachineLearning_9/08_xgboost_lightgbm/.ipynb_checkpoints/lightgbm_cheatsheet-checkpoint.ipynb
|
###Markdown
LightGBM usage cheat sheet by 寒小阳 1. Load CSV data and build a model with specified parameters **by 寒小阳**
###Code
# coding: utf-8
import json
import lightgbm as lgb
import pandas as pd
from sklearn.metrics import mean_squared_error
# Load the datasets
print('Load data...')
df_train = pd.read_csv('./data/regression.train.txt', header=None, sep='\t')
df_test = pd.read_csv('./data/regression.test.txt', header=None, sep='\t')
# Set up the training and test sets
y_train = df_train[0].values
y_test = df_test[0].values
X_train = df_train.drop(0, axis=1).values
X_test = df_test.drop(0, axis=1).values
# Build LightGBM Dataset objects
lgb_train = lgb.Dataset(X_train, y_train)
lgb_eval = lgb.Dataset(X_test, y_test, reference=lgb_train)
# Settle on a set of parameters
params = {
'task': 'train',
'boosting_type': 'gbdt',
'objective': 'regression',
'metric': {'l2', 'auc'},
'num_leaves': 31,
'learning_rate': 0.05,
'feature_fraction': 0.9,
'bagging_fraction': 0.8,
'bagging_freq': 5,
'verbose': 0
}
print('开始训练...')
# Train
gbm = lgb.train(params,
lgb_train,
num_boost_round=20,
valid_sets=lgb_eval,
early_stopping_rounds=5)
# Save the model
print('保存模型...')
# Save the model to a file
gbm.save_model('model.txt')
print('开始预测...')
# Predict
y_pred = gbm.predict(X_test, num_iteration=gbm.best_iteration)
# Evaluate
print('预估结果的rmse为:')
print(mean_squared_error(y_test, y_pred) ** 0.5)
###Output
Load data...
开始训练...
[1] valid_0's l2: 0.24288 valid_0's auc: 0.764496
Training until validation scores don't improve for 5 rounds.
[2] valid_0's l2: 0.239307 valid_0's auc: 0.766173
[3] valid_0's l2: 0.235559 valid_0's auc: 0.785547
[4] valid_0's l2: 0.230771 valid_0's auc: 0.797786
[5] valid_0's l2: 0.226297 valid_0's auc: 0.805155
[6] valid_0's l2: 0.223692 valid_0's auc: 0.800979
[7] valid_0's l2: 0.220941 valid_0's auc: 0.806566
[8] valid_0's l2: 0.217982 valid_0's auc: 0.808566
[9] valid_0's l2: 0.215351 valid_0's auc: 0.809041
[10] valid_0's l2: 0.213064 valid_0's auc: 0.805953
[11] valid_0's l2: 0.211053 valid_0's auc: 0.804631
[12] valid_0's l2: 0.209336 valid_0's auc: 0.802922
[13] valid_0's l2: 0.207492 valid_0's auc: 0.802011
[14] valid_0's l2: 0.206016 valid_0's auc: 0.80193
Early stopping, best iteration is:
[9] valid_0's l2: 0.215351 valid_0's auc: 0.809041
保存模型...
开始预测...
预估结果的rmse为:
0.4640593794679212
###Markdown
2. Train with sample weights **by 寒小阳**
###Code
# coding: utf-8
import json
import lightgbm as lgb
import pandas as pd
import numpy as np
from sklearn.metrics import mean_squared_error
import warnings
warnings.filterwarnings("ignore")
# Load the datasets
print('加载数据...')
df_train = pd.read_csv('./data/binary.train', header=None, sep='\t')
df_test = pd.read_csv('./data/binary.test', header=None, sep='\t')
W_train = pd.read_csv('./data/binary.train.weight', header=None)[0]
W_test = pd.read_csv('./data/binary.test.weight', header=None)[0]
y_train = df_train[0].values
y_test = df_test[0].values
X_train = df_train.drop(0, axis=1).values
X_test = df_test.drop(0, axis=1).values
num_train, num_feature = X_train.shape
# Load the sample weights together with the data
lgb_train = lgb.Dataset(X_train, y_train,
weight=W_train, free_raw_data=False)
lgb_eval = lgb.Dataset(X_test, y_test, reference=lgb_train,
weight=W_test, free_raw_data=False)
# Set parameters
params = {
'boosting_type': 'gbdt',
'objective': 'binary',
'metric': 'binary_logloss',
'num_leaves': 31,
'learning_rate': 0.05,
'feature_fraction': 0.9,
'bagging_fraction': 0.8,
'bagging_freq': 5,
'verbose': 0
}
# Generate feature names
feature_name = ['feature_' + str(col) for col in range(num_feature)]
print('开始训练...')
gbm = lgb.train(params,
lgb_train,
num_boost_round=10,
valid_sets=lgb_train, # evaluate on the training set
feature_name=feature_name,
categorical_feature=[21])
###Output
加载数据...
开始训练...
[1] training's binary_logloss: 0.68205
[2] training's binary_logloss: 0.673618
[3] training's binary_logloss: 0.665891
[4] training's binary_logloss: 0.656874
[5] training's binary_logloss: 0.648523
[6] training's binary_logloss: 0.641874
[7] training's binary_logloss: 0.636029
[8] training's binary_logloss: 0.629427
[9] training's binary_logloss: 0.623354
[10] training's binary_logloss: 0.617593
###Markdown
3. Load the model and predict **by 寒小阳**
###Code
# Inspect feature names
print('完成10轮训练...')
print('第7个特征为:')
print(repr(lgb_train.feature_name[6]))
# Save the model
gbm.save_model('./model/lgb_model.txt')
# Feature names
print('特征名称:')
print(gbm.feature_name())
# Feature importances
print('特征重要度:')
print(list(gbm.feature_importance()))
# Load the model
print('加载模型用于预测')
bst = lgb.Booster(model_file='./model/lgb_model.txt')
# Predict
y_pred = bst.predict(X_test)
# Evaluate on the test set
print('在测试集上的rmse为:')
print(mean_squared_error(y_test, y_pred) ** 0.5)
###Output
完成10轮训练...
第7个特征为:
'feature_6'
特征名称:
[u'feature_0', u'feature_1', u'feature_2', u'feature_3', u'feature_4', u'feature_5', u'feature_6', u'feature_7', u'feature_8', u'feature_9', u'feature_10', u'feature_11', u'feature_12', u'feature_13', u'feature_14', u'feature_15', u'feature_16', u'feature_17', u'feature_18', u'feature_19', u'feature_20', u'feature_21', u'feature_22', u'feature_23', u'feature_24', u'feature_25', u'feature_26', u'feature_27']
特征重要度:
[8, 5, 1, 19, 7, 33, 2, 0, 2, 10, 5, 2, 0, 9, 3, 3, 0, 2, 2, 5, 1, 0, 36, 3, 33, 45, 29, 35]
加载模型用于预测
在测试集上的rmse为:
0.4629245607636925
###Markdown
4. Continue training from the previous model **by 寒小阳**
###Code
# Continue training
# Initialize from the model stored in ./model/lgb_model.txt
gbm = lgb.train(params,
lgb_train,
num_boost_round=10,
init_model='./model/lgb_model.txt',
valid_sets=lgb_eval)
print('以旧模型为初始化,完成第 10-20 轮训练...')
# Adjust hyperparameters during training
# For example, here we adjust the learning rate
gbm = lgb.train(params,
lgb_train,
num_boost_round=10,
init_model=gbm,
learning_rates=lambda iter: 0.05 * (0.99 ** iter),
valid_sets=lgb_eval)
print('逐步调整学习率完成第 20-30 轮训练...')
# Adjust other hyperparameters
gbm = lgb.train(params,
lgb_train,
num_boost_round=10,
init_model=gbm,
valid_sets=lgb_eval,
callbacks=[lgb.reset_parameter(bagging_fraction=[0.7] * 5 + [0.6] * 5)])
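# lgb.reset_parameter returns a callback that changes a parameter at every iteration:
# here bagging_fraction is 0.7 for the first five of these ten rounds and 0.6 for the last five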
print('逐步调整bagging比率完成第 30-40 轮训练...')
###Output
[11] valid_0's binary_logloss: 0.616177
[12] valid_0's binary_logloss: 0.611792
[13] valid_0's binary_logloss: 0.607043
[14] valid_0's binary_logloss: 0.602314
[15] valid_0's binary_logloss: 0.598433
[16] valid_0's binary_logloss: 0.595238
[17] valid_0's binary_logloss: 0.592047
[18] valid_0's binary_logloss: 0.588673
[19] valid_0's binary_logloss: 0.586084
[20] valid_0's binary_logloss: 0.584033
以旧模型为初始化,完成第 10-20 轮训练...
[21] valid_0's binary_logloss: 0.616177
[22] valid_0's binary_logloss: 0.611834
[23] valid_0's binary_logloss: 0.607177
[24] valid_0's binary_logloss: 0.602577
[25] valid_0's binary_logloss: 0.59831
[26] valid_0's binary_logloss: 0.595259
[27] valid_0's binary_logloss: 0.592201
[28] valid_0's binary_logloss: 0.589017
[29] valid_0's binary_logloss: 0.586597
[30] valid_0's binary_logloss: 0.584454
逐步调整学习率完成第 20-30 轮训练...
[31] valid_0's binary_logloss: 0.616053
[32] valid_0's binary_logloss: 0.612291
[33] valid_0's binary_logloss: 0.60856
[34] valid_0's binary_logloss: 0.605387
[35] valid_0's binary_logloss: 0.601744
[36] valid_0's binary_logloss: 0.598556
[37] valid_0's binary_logloss: 0.595585
[38] valid_0's binary_logloss: 0.593228
[39] valid_0's binary_logloss: 0.59018
[40] valid_0's binary_logloss: 0.588391
逐步调整bagging比率完成第 30-40 轮训练...
###Markdown
5. Custom loss function **by 寒小阳**
###Code
# Similar in form to a custom objective in xgboost
# A custom objective function needs to return the gradient and the hessian of the loss
def loglikelood(preds, train_data):
labels = train_data.get_label()
preds = 1. / (1. + np.exp(-preds))
grad = preds - labels
hess = preds * (1. - preds)
return grad, hess
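# The grad and hess returned above are the first and second derivatives of the binary log loss
# with respect to the raw score: grad = sigmoid(score) - label, hess = sigmoid(score) * (1 - sigmoid(score))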
# Custom evaluation function
def binary_error(preds, train_data):
labels = train_data.get_label()
return 'error', np.mean(labels != (preds > 0.5)), False
gbm = lgb.train(params,
lgb_train,
num_boost_round=10,
init_model=gbm,
fobj=loglikelood,
feval=binary_error,
valid_sets=lgb_eval)
print('用自定义的损失函数与评估标准完成第40-50轮...')
###Output
[41] valid_0's binary_logloss: 0.614429 valid_0's error: 0.268
[42] valid_0's binary_logloss: 0.610689 valid_0's error: 0.26
[43] valid_0's binary_logloss: 0.606267 valid_0's error: 0.264
[44] valid_0's binary_logloss: 0.601949 valid_0's error: 0.258
[45] valid_0's binary_logloss: 0.597271 valid_0's error: 0.266
[46] valid_0's binary_logloss: 0.593971 valid_0's error: 0.276
[47] valid_0's binary_logloss: 0.591427 valid_0's error: 0.278
[48] valid_0's binary_logloss: 0.588301 valid_0's error: 0.284
[49] valid_0's binary_logloss: 0.586562 valid_0's error: 0.288
[50] valid_0's binary_logloss: 0.584056 valid_0's error: 0.288
用自定义的损失函数与评估标准完成第40-50轮...
###Markdown
Using LightGBM together with sklearn 1. Build the model with LightGBM, evaluate with sklearn **by 寒小阳**
###Code
# coding: utf-8
import lightgbm as lgb
import pandas as pd
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV
# Load data
print('加载数据...')
df_train = pd.read_csv('./data/regression.train.txt', header=None, sep='\t')
df_test = pd.read_csv('./data/regression.test.txt', header=None, sep='\t')
# Extract features and labels
y_train = df_train[0].values
y_test = df_test[0].values
X_train = df_train.drop(0, axis=1).values
X_test = df_test.drop(0, axis=1).values
print('开始训练...')
# Initialize an LGBMRegressor directly
# This LightGBM regressor behaves essentially like any other sklearn regressor
gbm = lgb.LGBMRegressor(objective='regression',
num_leaves=31,
learning_rate=0.05,
n_estimators=20)
# Fit the model with the fit() method
gbm.fit(X_train, y_train,
eval_set=[(X_test, y_test)],
eval_metric='l1',
early_stopping_rounds=5)
# Predict
print('开始预测...')
y_pred = gbm.predict(X_test, num_iteration=gbm.best_iteration_)
# Evaluate the predictions
print('预测结果的rmse是:')
print(mean_squared_error(y_test, y_pred) ** 0.5)
###Output
加载数据...
开始训练...
[1] valid_0's l1: 0.491735
Training until validation scores don't improve for 5 rounds.
[2] valid_0's l1: 0.486563
[3] valid_0's l1: 0.481489
[4] valid_0's l1: 0.476848
[5] valid_0's l1: 0.47305
[6] valid_0's l1: 0.469049
[7] valid_0's l1: 0.465556
[8] valid_0's l1: 0.462208
[9] valid_0's l1: 0.458676
[10] valid_0's l1: 0.454998
[11] valid_0's l1: 0.452047
[12] valid_0's l1: 0.449158
[13] valid_0's l1: 0.44608
[14] valid_0's l1: 0.443554
[15] valid_0's l1: 0.440643
[16] valid_0's l1: 0.437687
[17] valid_0's l1: 0.435454
[18] valid_0's l1: 0.433288
[19] valid_0's l1: 0.431297
[20] valid_0's l1: 0.428946
Did not meet early stopping. Best iteration is:
[20] valid_0's l1: 0.428946
开始预测...
预测结果的rmse是:
0.4441153344254208
###Markdown
2. Grid search for the best hyperparameters **by 寒小阳**
###Code
# Use scikit-learn grid search cross-validation to select the best hyperparameters
estimator = lgb.LGBMRegressor(num_leaves=31)
param_grid = {
'learning_rate': [0.01, 0.1, 1],
'n_estimators': [20, 40]
}
gbm = GridSearchCV(estimator, param_grid)
gbm.fit(X_train, y_train)
print('用网格搜索找到的最优超参数为:')
print(gbm.best_params_)
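# gbm.best_score_ holds the corresponding cross-validated score (by default the estimator's own score, i.e. R^2 for a regressor)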
###Output
用网格搜索找到的最优超参数为:
{'n_estimators': 40, 'learning_rate': 0.1}
###Markdown
3. Plotting and interpretation **by 寒小阳**
###Code
# coding: utf-8
import lightgbm as lgb
import pandas as pd
try:
import matplotlib.pyplot as plt
except ImportError:
raise ImportError('You need to install matplotlib for plotting.')
# Load the datasets
print('加载数据...')
df_train = pd.read_csv('./data/regression.train.txt', header=None, sep='\t')
df_test = pd.read_csv('./data/regression.test.txt', header=None, sep='\t')
# Extract features and labels
y_train = df_train[0].values
y_test = df_test[0].values
X_train = df_train.drop(0, axis=1).values
X_test = df_test.drop(0, axis=1).values
# Build LightGBM Dataset objects
lgb_train = lgb.Dataset(X_train, y_train)
lgb_test = lgb.Dataset(X_test, y_test, reference=lgb_train)
# Set parameters
params = {
'num_leaves': 5,
'metric': ('l1', 'l2'),
'verbose': 0
}
evals_result = {} # to record eval results for plotting
print('开始训练...')
# Train
gbm = lgb.train(params,
lgb_train,
num_boost_round=100,
valid_sets=[lgb_train, lgb_test],
feature_name=['f' + str(i + 1) for i in range(28)],
categorical_feature=[21],
evals_result=evals_result,
verbose_eval=10)
print('在训练过程中绘图...')
ax = lgb.plot_metric(evals_result, metric='l1')
plt.show()
print('画出特征重要度...')
ax = lgb.plot_importance(gbm, max_num_features=10)
plt.show()
print('画出第84颗树...')
ax = lgb.plot_tree(gbm, tree_index=83, figsize=(20, 8), show_info=['split_gain'])
plt.show()
#print('用graphviz画出第84颗树...')
#graph = lgb.create_tree_digraph(gbm, tree_index=83, name='Tree84')
#graph.render(view=True)
###Output
加载数据...
开始训练...
[10] training's l2: 0.217995 training's l1: 0.457448 valid_1's l2: 0.21641 valid_1's l1: 0.456464
[20] training's l2: 0.205099 training's l1: 0.436869 valid_1's l2: 0.201616 valid_1's l1: 0.434057
[30] training's l2: 0.197421 training's l1: 0.421302 valid_1's l2: 0.192514 valid_1's l1: 0.417019
[40] training's l2: 0.192856 training's l1: 0.411107 valid_1's l2: 0.187258 valid_1's l1: 0.406303
[50] training's l2: 0.189593 training's l1: 0.403695 valid_1's l2: 0.183688 valid_1's l1: 0.398997
[60] training's l2: 0.187043 training's l1: 0.398704 valid_1's l2: 0.181009 valid_1's l1: 0.393977
[70] training's l2: 0.184982 training's l1: 0.394876 valid_1's l2: 0.178803 valid_1's l1: 0.389805
[80] training's l2: 0.1828 training's l1: 0.391147 valid_1's l2: 0.176799 valid_1's l1: 0.386476
[90] training's l2: 0.180817 training's l1: 0.388101 valid_1's l2: 0.175775 valid_1's l1: 0.384404
[100] training's l2: 0.179171 training's l1: 0.385174 valid_1's l2: 0.175321 valid_1's l1: 0.382929
在训练过程中绘图...
|
notebooks/Jan2018/pre_tutorial/CH6 Advanced Plot.ipynb
|
###Markdown
Files You can actually get the data directly from the web using the urllib package:
###Code
import urllib
urllib.urlretrieve("http://www.einstein-online.info/spotlights/binding_energy-data_file/index.txt/at_download/file", "index.txt")
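# Note: this is Python 2 syntax; in Python 3 the equivalent call is urllib.request.urlretrieve(url, filename)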
###Output
_____no_output_____
###Markdown
Read in the file with the open() function
###Code
for row in open('index.txt').readlines():
print row
###Output
# Mass number, number of protons, name of isotope, mass [MeV/c^2],
# binding energy [MeV] and binding energy per nucleus [MeV]
# for different atomic nuclei
#
# Calculated for Einstein Online
# http://www.einstein-online.info,
# using data from the Atomic Mass Data Center,
# http://www.nndc.bnl.gov/amdc/web/nubase_en.html
#
# July 2005
#
# (Addendum November 2014: NUBASE apparently no longer accessible under
# that URL - use http://amdc.in2p3.fr/web/nubase_en.html
# instead; November 2014)
#
001 001 1H 938.272 0 0
002 001 2H 1875.613 2.225 1.112
003 001 3H 2808.921 8.482 2.827
003 002 3He 2808.391 7.718 2.573
004 001 4H 3751.365 5.603 1.401
004 002 4He 3727.379 28.296 7.074
004 003 4Li 3749.763 4.618 1.155
005 001 5H 4689.849 6.684 1.337
005 002 5He 4667.838 27.402 5.48
005 003 5Li 4667.617 26.33 5.266
006 001 6H 5630.313 5.786 0.964
006 002 6He 5605.537 29.268 4.878
006 003 6Li 5601.518 31.994 5.332
006 004 6Be 5605.295 26.924 4.487
007 002 7He 6545.537 28.834 4.119
007 003 7Li 6533.833 39.244 5.606
007 004 7Be 6534.184 37.6 5.371
007 005 7B 6545.773 24.718 3.531
008 002 8He 7482.528 31.408 3.926
008 003 8Li 7471.366 41.277 5.16
008 004 8Be 7454.85 56.5 7.062
008 005 8B 7472.319 37.737 4.717
008 006 8C 7483.98 24.783 3.098
009 002 9He 8423.363 30.138 3.349
009 003 9Li 8406.867 45.341 5.038
009 004 9Be 8392.75 58.165 6.463
009 005 9B 8393.307 56.314 6.257
009 006 9C 8409.291 39.037 4.337
010 002 10He 9362.728 30.339 3.034
010 003 10Li 9346.458 45.315 4.532
010 004 10Be 9325.503 64.977 6.498
010 005 10B 9324.436 64.751 6.475
010 006 10C 9327.573 60.32 6.032
010 007 10N 9350.163 36.437 3.644
011 003 11Li 10285.698 45.64 4.149
011 004 11Be 10264.564 65.481 5.953
011 005 11B 10252.547 76.205 6.928
011 006 11C 10254.018 73.44 6.676
011 007 11N 10267.157 59.008 5.364
012 004 12Be 11200.961 68.649 5.721
012 005 12B 11188.742 79.575 6.631
012 006 12C 11174.862 92.162 7.68
012 007 12N 11191.689 74.041 6.17
012 008 12O 11205.888 58.549 4.879
013 004 13Be 12140.628 68.548 5.273
013 005 13B 12123.429 84.453 6.496
013 006 13C 12109.481 97.108 7.47
013 007 13N 12111.191 94.105 7.239
013 008 13O 12128.446 75.556 5.812
014 004 14Be 13078.822 69.919 4.994
014 005 14B 13062.025 85.423 6.102
014 006 14C 13040.87 105.285 7.52
014 007 14N 13040.203 104.659 7.476
014 008 14O 13044.836 98.732 7.052
015 005 15B 13998.827 88.186 5.879
015 006 15C 13979.217 106.503 7.1
015 007 15N 13968.935 115.492 7.699
015 008 15O 13971.178 111.955 7.464
015 009 15F 13984.591 97.249 6.483
016 005 16B 14938.429 88.149 5.509
016 006 16C 14914.532 110.753 6.922
016 007 16N 14906.011 117.981 7.374
016 008 16O 14895.079 127.619 7.976
016 009 16F 14909.985 111.42 6.964
016 010 16Ne 14922.79 97.322 6.083
017 005 17B 15876.613 89.531 5.267
017 006 17C 15853.371 111.479 6.558
017 007 17N 15839.692 123.865 7.286
017 008 17O 15830.501 131.763 7.751
017 009 17F 15832.751 128.22 7.542
017 010 17Ne 15846.749 112.928 6.643
018 006 18C 16788.756 115.66 6.426
018 007 18N 16776.429 126.693 7.039
018 008 18O 16762.023 139.807 7.767
018 009 18F 16763.167 137.369 7.632
018 010 18Ne 16767.099 132.143 7.341
018 011 18Na 16785.461 112.488 6.249
019 006 19C 17727.74 116.241 6.118
019 007 19N 17710.671 132.017 6.948
019 008 19O 17697.633 143.761 7.566
019 009 19F 17692.3 147.801 7.779
019 010 19Ne 17695.028 143.78 7.567
019 011 19Na 17705.692 131.822 6.938
019 012 19Mg 17725.294 110.927 5.838
020 006 20C 18664.374 119.172 5.959
020 007 20N 18648.073 134.18 6.709
020 008 20O 18629.59 151.37 7.569
020 009 20F 18625.264 154.403 7.72
020 010 20Ne 18617.728 160.645 8.032
020 011 20Na 18631.107 145.973 7.299
020 012 20Mg 18641.318 134.468 6.723
021 007 21N 19583.047 138.771 6.608
021 008 21O 19565.349 155.176 7.389
021 009 21F 19556.728 162.504 7.738
021 010 21Ne 19550.533 167.406 7.972
021 011 21Na 19553.569 163.076 7.766
021 012 21Mg 19566.153 149.199 7.105
022 007 22N 20521.331 140.053 6.366
022 008 22O 20498.06 162.03 7.365
022 009 22F 20491.062 167.735 7.624
022 010 22Ne 20479.734 177.77 8.08
022 011 22Na 20482.065 174.146 7.916
022 012 22Mg 20486.339 168.578 7.663
023 008 23O 21434.884 164.772 7.164
023 009 23F 21423.093 175.269 7.62
023 010 23Ne 21414.098 182.971 7.955
023 011 23Na 21409.211 186.564 8.111
023 012 23Mg 21412.757 181.726 7.901
023 013 23Al 21424.489 168.7 7.335
024 008 24O 22370.838 168.383 7.016
024 009 24F 22358.817 179.111 7.463
024 010 24Ne 22344.795 191.84 7.993
024 011 24Na 22341.817 193.524 8.064
024 012 24Mg 22335.791 198.257 8.261
024 013 24Al 22349.156 183.598 7.65
024 014 24Si 22359.457 172.004 7.167
025 009 25F 23294.021 183.472 7.339
025 010 25Ne 23280.132 196.068 7.843
025 011 25Na 23272.372 202.535 8.101
025 012 25Mg 23268.026 205.588 8.224
025 013 25Al 23271.791 200.529 8.021
025 014 25Si 23284.02 187.006 7.48
026 009 26F 24232.515 184.543 7.098
026 010 26Ne 24214.164 201.601 7.754
026 011 26Na 24206.361 208.111 8.004
026 012 26Mg 24196.498 216.681 8.334
026 013 26Al 24199.991 211.894 8.15
026 014 26Si 24204.545 206.047 7.925
027 009 27F 25170.669 185.955 6.887
027 010 27Ne 25152.298 203.032 7.52
027 011 27Na 25139.2 214.837 7.957
027 012 27Mg 25129.62 223.124 8.264
027 013 27Al 25126.499 224.952 8.332
027 014 27Si 25130.8 219.357 8.124
027 015 27P 25141.956 206.908 7.663
028 010 28Ne 26087.962 206.934 7.39
028 011 28Na 26075.222 218.38 7.799
028 012 28Mg 26060.682 231.627 8.272
028 013 28Al 26058.339 232.677 8.31
028 014 28Si 26053.186 236.537 8.448
028 015 28P 26067.008 221.421 7.908
028 016 28S 26077.726 209.41 7.479
029 010 29Ne 27026.276 208.185 7.179
029 011 29Na 27010.37 222.798 7.683
029 012 29Mg 26996.575 235.299 8.114
029 013 29Al 26988.468 242.113 8.349
029 014 29Si 26984.277 245.011 8.449
029 015 29P 26988.709 239.286 8.251
029 016 29S 27001.99 224.711 7.749
030 010 30Ne 27962.81 211.216 7.041
030 011 30Na 27947.56 225.173 7.506
030 012 30Mg 27929.777 241.663 8.055
030 013 30Al 27922.305 247.841 8.261
030 014 30Si 27913.233 255.62 8.521
030 015 30P 27916.955 250.605 8.354
030 016 30S 27922.581 243.685 8.123
031 011 31Na 28883.343 228.955 7.386
031 012 31Mg 28866.965 244.04 7.872
031 013 31Al 28854.717 254.994 8.226
031 014 31Si 28846.211 262.207 8.458
031 015 31P 28844.209 262.917 8.481
031 016 31S 28849.094 256.738 8.282
031 017 31Cl 28860.557 243.981 7.87
032 011 32Na 29821.247 230.616 7.207
032 012 32Mg 29800.721 249.849 7.808
032 013 32Al 29790.105 259.172 8.099
032 014 32Si 29776.574 271.41 8.482
032 015 32P 29775.838 270.852 8.464
032 016 32S 29773.617 271.781 8.493
032 017 32Cl 29785.791 258.312 8.072
032 018 32Ar 29796.41 246.4 7.7
033 011 33Na 30758.571 232.858 7.056
033 012 33Mg 30738.064 252.071 7.639
033 013 33Al 30724.129 264.713 8.022
033 014 33Si 30711.655 275.894 8.36
033 015 33P 30705.3 280.956 8.514
033 016 33S 30704.54 280.422 8.498
033 017 33Cl 30709.612 274.057 8.305
033 018 33Ar 30720.72 261.656 7.929
034 012 34Mg 31673.474 256.227 7.536
034 013 34Al 31661.223 267.184 7.858
034 014 34Si 31643.685 283.429 8.336
034 015 34P 31638.573 287.248 8.448
034 016 34S 31632.689 291.839 8.584
034 017 34Cl 31637.67 285.565 8.399
034 018 34Ar 31643.221 278.72 8.198
035 013 35Al 32595.517 272.456 7.784
035 014 35Si 32580.776 285.903 8.169
035 015 35P 32569.768 295.619 8.446
035 016 35S 32565.268 298.825 8.538
035 017 35Cl 32564.59 298.21 8.52
035 018 35Ar 32570.045 291.461 8.327
035 019 35K 32581.412 278.801 7.966
036 013 36Al 33532.921 274.617 7.628
036 014 36Si 33514.15 292.095 8.114
036 015 36P 33505.868 299.083 8.308
036 016 36S 33494.944 308.714 8.575
036 017 36Cl 33495.576 306.79 8.522
036 018 36Ar 33494.355 306.717 8.52
036 019 36K 33506.649 293.129 8.142
036 020 36Ca 33517.124 281.361 7.816
037 013 37Al 34468.585 278.518 7.528
037 014 37Si 34451.544 294.266 7.953
037 015 37P 34438.623 305.894 8.267
037 016 37S 34430.206 313.018 8.46
037 017 37Cl 34424.83 317.101 8.57
037 018 37Ar 34425.133 315.504 8.527
037 019 37K 34430.769 308.575 8.34
037 020 37Ca 34441.897 296.154 8.004
038 013 38Al 35406.18 280.49 7.381
038 014 38Si 35385.549 299.827 7.89
038 015 38P 35374.348 309.735 8.151
038 016 38S 35361.736 321.054 8.449
038 017 38Cl 35358.287 323.208 8.505
038 018 38Ar 35352.86 327.343 8.614
038 019 38K 35358.263 320.646 8.438
038 020 38Ca 35364.494 313.122 8.24
039 013 39Al 36343.024 283.211 7.262
039 014 39Si 36323.043 301.899 7.741
039 015 39P 36307.732 315.916 8.1
039 016 39S 36296.931 325.424 8.344
039 017 39Cl 36289.779 331.282 8.494
039 018 39Ar 36285.827 333.941 8.563
039 019 39K 36284.751 333.724 8.557
039 020 39Ca 36290.772 326.409 8.369
039 021 39Sc 36303.368 312.52 8.013
040 014 40Si 37258.077 306.43 7.661
040 015 40P 37243.986 319.228 7.981
040 016 40S 37228.715 333.205 8.33
040 017 40Cl 37223.514 337.113 8.428
040 018 40Ar 37215.523 343.811 8.595
040 019 40K 37216.516 341.524 8.538
040 020 40Ca 37214.694 342.052 8.551
040 021 40Sc 37228.506 326.947 8.174
040 022 40Ti 37239.669 314.491 7.862
041 014 41Si 38197.661 306.411 7.473
041 015 41P 38178.31 324.469 7.914
041 016 41S 38164.059 337.427 8.23
041 017 41Cl 38155.258 344.934 8.413
041 018 41Ar 38148.989 349.91 8.534
041 019 41K 38145.986 351.619 8.576
041 020 41Ca 38145.897 350.415 8.547
041 021 41Sc 38151.881 343.137 8.369
042 015 42P 39116.024 326.32 7.77
042 016 42S 39096.893 344.158 8.194
042 017 42Cl 39089.152 350.606 8.348
042 018 42Ar 39079.128 359.336 8.556
042 019 42K 39078.018 359.153 8.551
042 020 42Ca 39073.981 361.896 8.617
042 021 42Sc 39079.896 354.688 8.445
042 022 42Ti 39086.385 346.906 8.26
043 015 43P 40052.348 329.562 7.664
043 016 43S 40034.097 346.519 8.059
043 017 43Cl 40021.386 357.937 8.324
043 018 43Ar 40013.035 364.995 8.488
043 019 43K 40007.941 368.795 8.577
043 020 43Ca 40005.614 369.829 8.601
043 021 43Sc 40007.324 366.826 8.531
043 022 43Ti 40013.68 359.176 8.353
044 016 44S 40968.441 351.741 7.994
044 017 44Cl 40956.82 362.068 8.229
044 018 44Ar 40943.865 373.729 8.494
044 019 44K 40940.218 376.084 8.547
044 020 44Ca 40934.048 380.96 8.658
044 021 44Sc 40937.189 376.525 8.557
044 022 44Ti 40936.946 375.475 8.534
044 023 44V 40949.864 361.264 8.211
045 016 45S 41905.805 353.942 7.865
045 017 45Cl 41890.184 368.27 8.184
045 018 45Ar 41878.262 378.898 8.42
045 019 45K 41870.914 384.953 8.555
045 020 45Ca 41866.199 388.375 8.631
045 021 45Sc 41865.432 387.848 8.619
045 022 45Ti 41866.983 385.004 8.556
045 023 45V 41873.598 377.096 8.38
045 024 45Cr 41885.997 363.403 8.076
046 017 46Cl 42825.328 372.691 8.102
046 018 46Ar 42809.807 386.919 8.411
046 019 46K 42803.598 391.834 8.518
046 020 46Ca 42795.37 398.769 8.669
046 021 46Sc 42796.237 396.609 8.622
046 022 46Ti 42793.359 398.193 8.656
046 023 46V 42799.899 390.36 8.486
046 024 46Cr 42806.987 381.979 8.304
047 018 47Ar 43745.111 391.18 8.323
047 019 47K 43734.814 400.184 8.515
047 020 47Ca 43727.659 406.045 8.639
047 021 47Sc 43725.156 407.255 8.665
047 022 47Ti 43724.044 407.073 8.661
047 023 47V 43726.464 403.36 8.582
047 024 47Cr 43733.397 395.134 8.407
048 019 48K 44669.88 404.683 8.431
048 020 48Ca 44657.279 415.991 8.666
048 021 48Sc 44656.486 415.49 8.656
048 022 48Ti 44651.983 418.7 8.723
048 023 48V 44655.484 413.905 8.623
048 024 48Cr 44656.63 411.466 8.572
048 025 48Mn 44669.618 397.185 8.275
049 019 49K 45603.178 410.95 8.387
049 020 49Ca 45591.698 421.137 8.595
049 021 49Sc 45585.924 425.618 8.686
049 022 49Ti 45583.406 426.842 8.711
049 023 49V 45583.497 425.458 8.683
049 024 49Cr 45585.612 422.049 8.613
049 025 49Mn 45592.816 413.552 8.44
050 019 50K 46539.642 414.052 8.281
050 020 50Ca 46524.91 427.49 8.55
050 021 50Sc 46519.433 431.674 8.633
050 022 50Ti 46512.032 437.781 8.756
050 023 50V 46513.726 434.794 8.696
050 024 50Cr 46512.177 435.049 8.701
050 025 50Mn 46519.299 426.634 8.533
050 026 50Fe 46526.935 417.705 8.354
051 020 51Ca 47460.115 431.851 8.468
051 021 51Sc 47452.246 438.426 8.597
051 022 51Ti 47445.225 444.154 8.709
051 023 51V 47442.24 445.845 8.742
051 024 51Cr 47442.482 444.31 8.712
051 025 51Mn 47445.178 440.32 8.634
051 026 51Fe 47452.687 431.519 8.461
052 020 52Ca 48394.959 436.572 8.396
052 021 52Sc 48386.598 443.639 8.532
052 022 52Ti 48376.982 451.962 8.692
052 023 52V 48374.494 453.156 8.715
052 024 52Cr 48370.008 456.349 8.776
052 025 52Mn 48374.208 450.856 8.67
052 026 52Fe 48376.071 447.7 8.61
053 022 53Ti 49311.111 457.398 8.63
053 023 53V 49305.581 461.635 8.71
053 024 53Cr 49301.634 464.289 8.76
053 025 53Mn 49301.72 462.909 8.734
053 026 53Fe 49304.951 458.384 8.649
053 027 53Co 49312.741 449.302 8.477
054 021 54Sc 50255.726 453.642 8.401
054 022 54Ti 50243.845 464.23 8.597
054 023 54V 50239.033 467.748 8.662
054 024 54Cr 50231.48 474.008 8.778
054 025 54Mn 50232.346 471.848 8.738
054 026 54Fe 50231.138 471.763 8.736
054 027 54Co 50238.87 462.738 8.569
054 028 54Ni 50247.159 453.156 8.392
055 021 55Sc 51191.86 457.073 8.31
055 022 55Ti 51179.259 468.381 8.516
055 023 55V 51171.268 475.079 8.638
055 024 55Cr 51164.799 480.254 8.732
055 025 55Mn 51161.685 482.075 8.765
055 026 55Fe 51161.405 481.061 8.747
055 027 55Co 51164.346 476.827 8.67
055 028 55Ni 51172.527 467.353 8.497
056 022 56Ti 52113.483 473.722 8.459
056 023 56V 52105.832 480.08 8.573
056 024 56Cr 52096.12 488.499 8.723
056 025 56Mn 52093.98 489.345 8.738
056 026 56Fe 52089.773 492.258 8.79
056 027 56Co 52093.828 486.91 8.695
056 028 56Ni 52095.453 483.992 8.643
057 022 57Ti 53050.377 476.394 8.358
057 023 57V 53039.216 486.261 8.531
057 024 57Cr 53030.371 493.813 8.663
057 025 57Mn 53024.897 497.994 8.737
057 026 57Fe 53021.693 499.905 8.77
057 027 57Co 53022.018 498.286 8.742
057 028 57Ni 53024.769 494.242 8.671
057 029 57Cu 53033.03 484.687 8.503
058 023 58V 53974.69 490.353 8.454
058 024 58Cr 53962.559 501.19 8.641
058 025 58Mn 53957.968 504.488 8.698
058 026 58Fe 53951.213 509.949 8.792
058 027 58Co 53953.01 506.859 8.739
058 028 58Ni 53952.117 506.459 8.732
058 029 58Cu 53960.172 497.111 8.571
058 030 58Zn 53969.023 486.966 8.396
059 023 59V 54909.324 495.284 8.395
059 024 59Cr 54897.993 505.322 8.565
059 025 59Mn 54889.892 512.129 8.68
059 026 59Fe 54884.198 516.53 8.755
059 027 59Co 54882.121 517.313 8.768
059 028 59Ni 54882.683 515.458 8.737
059 029 59Cu 54886.971 509.877 8.642
059 030 59Zn 54895.557 499.998 8.475
060 023 60V 55845.308 498.865 8.314
060 024 60Cr 55830.877 512.003 8.533
060 025 60Mn 55823.686 517.901 8.632
060 026 60Fe 55814.943 525.35 8.756
060 027 60Co 55814.195 524.805 8.747
060 028 60Ni 55810.861 526.846 8.781
060 029 60Cu 55816.478 519.935 8.666
060 030 60Zn 55820.123 514.997 8.583
061 024 61Cr 56766.691 515.754 8.455
061 025 61Mn 56756.8 524.352 8.596
061 026 61Fe 56748.928 530.931 8.704
061 027 61Co 56744.439 534.126 8.756
061 028 61Ni 56742.606 534.666 8.765
061 029 61Cu 56744.332 531.646 8.716
061 030 61Zn 56749.46 525.225 8.61
061 031 61Ga 56758.204 515.188 8.446
062 024 62Cr 57699.955 522.056 8.42
062 025 62Mn 57691.814 528.903 8.531
062 026 62Fe 57680.442 538.982 8.693
062 027 62Co 57677.4 540.731 8.721
062 028 62Ni 57671.575 545.262 8.795
062 029 62Cu 57675.012 540.532 8.718
062 030 62Zn 57676.128 538.123 8.679
062 031 62Ga 57684.788 528.169 8.519
063 025 63Mn 58624.998 535.285 8.497
063 026 63Fe 58615.287 543.702 8.63
063 027 63Co 58608.486 549.21 8.718
063 028 63Ni 58604.302 552.1 8.763
063 029 63Cu 58603.724 551.385 8.752
063 030 63Zn 58606.58 547.236 8.686
063 031 63Ga 58611.735 540.788 8.584
064 025 64Mn 59560.222 539.626 8.432
064 026 64Fe 59547.561 550.994 8.609
064 027 64Co 59542.027 555.234 8.676
064 028 64Ni 59534.21 561.758 8.777
064 029 64Cu 59535.374 559.301 8.739
064 030 64Zn 59534.283 559.098 8.736
064 031 64Ga 59540.942 551.146 8.612
064 032 64Ge 59544.915 545.88 8.529
065 025 65Mn 60493.666 545.747 8.396
065 026 65Fe 60482.945 555.175 8.541
065 027 65Co 60474.144 562.683 8.657
065 028 65Ni 60467.677 567.856 8.736
065 029 65Cu 60465.028 569.212 8.757
065 030 65Zn 60465.869 567.077 8.724
065 031 65Ga 60468.613 563.04 8.662
065 032 65Ge 60474.349 556.011 8.554
066 026 66Fe 61415.749 561.936 8.514
066 027 66Co 61408.698 567.694 8.601
066 028 66Ni 61398.291 576.808 8.74
066 029 66Cu 61397.528 576.278 8.731
066 030 66Zn 61394.375 578.136 8.76
066 031 66Ga 61399.04 572.179 8.669
066 032 66Ge 61400.633 569.292 8.626
066 033 66As 61410.242 558.39 8.46
067 026 67Fe 62351.123 566.128 8.45
067 027 67Co 62341.242 574.715 8.578
067 028 67Ni 62332.048 582.616 8.696
067 029 67Cu 62327.961 585.409 8.737
067 030 67Zn 62326.889 585.189 8.734
067 031 67Ga 62327.378 583.406 8.708
067 032 67Ge 62331.089 578.402 8.633
067 033 67As 62336.586 571.611 8.532
068 026 68Fe 63285.177 571.639 8.406
068 027 68Co 63276.446 579.077 8.516
068 028 68Ni 63263.821 590.408 8.682
068 029 68Cu 63261.207 591.729 8.702
068 030 68Zn 63256.256 595.387 8.756
068 031 68Ga 63258.666 591.683 8.701
068 032 68Ge 63258.261 590.795 8.688
068 033 68As 63265.83 581.933 8.558
068 034 68Se 63270.009 576.46 8.477
069 027 69Co 64209.29 585.798 8.49
069 028 69Ni 64198.8 594.995 8.623
069 029 69Cu 64192.532 599.969 8.695
069 030 69Zn 64189.339 601.869 8.723
069 031 69Ga 64187.918 601.996 8.725
069 032 69Ge 64189.634 598.987 8.681
069 033 69As 64193.134 594.194 8.612
069 034 69Se 64199.413 586.622 8.502
070 027 70Co 65145.144 589.509 8.422
070 028 70Ni 65131.123 602.237 8.603
070 029 70Cu 65126.786 605.281 8.647
070 030 70Zn 65119.686 611.087 8.73
070 031 70Ga 65119.83 609.65 8.709
070 032 70Ge 65117.666 610.521 8.722
070 033 70As 65123.378 603.515 8.622
070 034 70Se 65125.157 600.443 8.578
071 027 71Co 66078.408 595.811 8.392
071 028 71Ni 66066.567 606.358 8.54
071 029 71Cu 66058.545 613.087 8.635
071 030 71Zn 66053.418 616.921 8.689
071 031 71Ga 66050.094 618.951 8.718
071 032 71Ge 66049.815 617.937 8.703
071 033 71As 66051.318 615.141 8.664
071 034 71Se 66055.581 609.584 8.586
071 035 71Br 66061.13 602.742 8.489
071 036 71Kr 66070.759 591.82 8.335
072 028 72Ni 66999.321 613.169 8.516
072 029 72Cu 66992.967 618.23 8.587
072 030 72Zn 66984.108 625.796 8.692
072 031 72Ga 66983.139 625.472 8.687
072 032 72Ge 66978.631 628.686 8.732
072 033 72As 66982.476 623.548 8.66
072 034 72Se 66982.301 622.429 8.645
072 035 72Br 66990.664 612.773 8.511
072 036 72Kr 66995.232 606.912 8.429
073 029 73Cu 67925.257 625.505 8.569
073 030 73Zn 67918.323 631.146 8.646
073 031 73Ga 67913.523 634.653 8.694
073 032 73Ge 67911.413 635.469 8.705
073 033 73As 67911.243 634.346 8.69
073 034 73Se 67913.471 630.825 8.641
073 035 73Br 67917.548 625.454 8.568
073 036 73Kr 67924.115 617.594 8.46
074 029 74Cu 68859.732 630.596 8.522
074 030 74Zn 68849.517 639.517 8.642
074 031 74Ga 68846.666 641.075 8.663
074 032 74Ge 68840.783 645.665 8.725
074 033 74As 68842.834 642.32 8.68
074 034 74Se 68840.97 642.891 8.688
074 035 74Br 68847.366 635.202 8.584
074 036 74Kr 68849.83 631.445 8.533
074 037 74Rb 68859.733 620.248 8.382
075 029 75Cu 69793.112 636.781 8.49
075 030 75Zn 69784.251 644.349 8.591
075 031 75Ga 69777.745 649.561 8.661
075 032 75Ge 69773.843 652.171 8.696
075 033 75As 69772.156 652.564 8.701
075 034 75Se 69772.508 650.918 8.679
075 035 75Br 69775.027 647.106 8.628
075 036 75Kr 69779.331 641.509 8.553
075 037 75Rb 69785.922 633.624 8.448
075 038 75Sr 69796.013 622.24 8.297
076 029 76Cu 70727.75 641.708 8.444
076 030 76Zn 70716.075 652.09 8.58
076 031 76Ga 70711.407 655.464 8.625
076 032 76Ge 70703.98 661.598 8.705
076 033 76As 70704.393 659.893 8.683
076 034 76Se 70700.919 662.073 8.711
076 035 76Br 70705.371 656.327 8.636
076 036 76Kr 70706.135 654.27 8.609
076 037 76Rb 70714.158 644.954 8.486
076 038 76Sr 70719.887 637.931 8.394
077 030 77Zn 71650.989 656.741 8.529
077 031 77Ga 71643.206 663.231 8.613
077 032 77Ge 71637.473 667.671 8.671
077 033 77As 71634.259 669.591 8.696
077 034 77Se 71633.065 669.492 8.695
077 035 77Br 71633.919 667.345 8.667
077 036 77Kr 71636.474 663.497 8.617
077 037 77Rb 71641.307 657.37 8.537
077 038 77Sr 71647.817 649.567 8.436
078 030 78Zn 72583.863 663.433 8.506
078 031 78Ga 72576.985 669.017 8.577
078 032 78Ge 72568.319 676.39 8.672
078 033 78As 72566.853 676.563 8.674
078 034 78Se 72562.133 679.99 8.718
078 035 78Br 72565.196 675.633 8.662
078 036 78Kr 72563.957 675.578 8.661
078 037 78Rb 72570.69 667.552 8.558
078 038 78Sr 72573.941 663.008 8.5
079 031 79Ga 73509.676 675.892 8.556
079 032 79Ge 73502.185 682.089 8.634
079 033 79As 73497.527 685.454 8.677
079 034 79Se 73494.735 686.952 8.696
079 035 79Br 73494.074 686.321 8.688
079 036 79Kr 73495.188 683.913 8.657
079 037 79Rb 73498.317 679.491 8.601
079 038 79Sr 73503.132 673.382 8.524
079 039 79Y 73509.738 665.483 8.424
080 030 80Zn 74452.351 674.075 8.426
080 031 80Ga 74444.54 680.593 8.507
080 032 80Ge 74433.654 690.186 8.627
080 033 80As 74430.499 692.047 8.651
080 034 80Se 74424.387 696.866 8.711
080 035 80Br 74425.747 694.213 8.678
080 036 80Kr 74423.233 695.434 8.693
080 037 80Rb 74428.441 688.932 8.612
080 038 80Sr 74429.795 686.285 8.579
080 039 80Y 74438.372 676.414 8.455
080 040 80Zr 74443.561 669.932 8.374
081 031 81Ga 75377.194 687.504 8.488
081 032 81Ge 75368.363 695.042 8.581
081 033 81As 75361.619 700.493 8.648
081 034 81Se 75357.252 703.567 8.686
081 035 81Br 75355.155 704.37 8.696
081 036 81Kr 75354.925 703.307 8.683
081 037 81Rb 75356.653 700.285 8.645
081 038 81Sr 75360.069 695.576 8.587
081 039 81Y 75365.066 689.286 8.51
081 040 81Zr 75372.085 680.973 8.407
082 032 82Ge 76300.537 702.433 8.566
082 033 82As 76295.326 706.351 8.614
082 034 82Se 76287.541 712.843 8.693
082 035 82Br 76287.128 711.963 8.682
082 036 82Kr 76283.524 714.274 8.711
082 037 82Rb 76287.414 709.09 8.647
082 038 82Sr 76287.083 708.127 8.636
082 039 82Y 76294.39 699.527 8.531
083 033 83As 77227.26 713.982 8.602
083 034 83Se 77221.288 718.661 8.659
083 035 83Br 77217.109 721.547 8.693
083 036 83Kr 77215.625 721.737 8.696
083 037 83Rb 77216.021 720.048 8.675
083 038 83Sr 77217.79 716.986 8.638
083 039 83Y 77221.744 711.738 8.575
083 040 83Zr 77227.103 705.086 8.495
083 041 83Nb 77234.092 696.804 8.395
084 034 84Se 78152.171 727.343 8.659
084 035 84Br 78149.813 728.408 8.672
084 036 84Kr 78144.67 732.258 8.717
084 037 84Rb 78146.84 728.794 8.676
084 038 84Sr 78145.435 728.906 8.677
084 039 84Y 78151.408 721.64 8.591
085 034 85Se 79087.189 731.891 8.61
085 035 85Br 79080.496 737.29 8.674
085 036 85Kr 79077.115 739.378 8.699
085 037 85Rb 79075.917 739.283 8.697
085 038 85Sr 79076.471 737.436 8.676
085 039 85Y 79079.22 733.393 8.628
085 040 85Zr 79083.401 727.919 8.564
085 041 85Nb 79088.89 721.136 8.484
086 034 86Se 80020.57 738.075 8.582
086 035 86Br 80014.96 742.392 8.632
086 036 86Kr 80006.824 749.235 8.712
086 037 86Rb 80006.831 747.934 8.697
086 038 86Sr 80004.544 748.928 8.708
086 039 86Y 80009.272 742.906 8.638
086 040 86Zr 80010.245 740.64 8.612
086 041 86Nb 80017.704 731.888 8.51
086 042 86Mo 80022.463 725.835 8.44
087 034 87Se 80956.025 742.185 8.531
087 035 87Br 80948.237 748.68 8.606
087 036 87Kr 80940.874 754.75 8.675
087 037 87Rb 80936.474 757.856 8.711
087 038 87Sr 80935.681 757.356 8.705
087 039 87Y 80937.031 754.712 8.675
087 040 87Zr 80940.191 750.259 8.624
087 041 87Nb 80944.848 744.309 8.555
087 042 87Mo 80950.827 737.037 8.472
088 034 88Se 81890.219 747.557 8.495
088 035 88Br 81882.858 753.624 8.564
088 036 88Kr 81873.385 761.804 8.657
088 037 88Rb 81869.957 763.939 8.681
088 038 88Sr 81864.133 768.469 8.733
088 039 88Y 81867.245 764.064 8.683
088 040 88Zr 81867.41 762.606 8.666
088 041 88Nb 81874.452 754.27 8.571
088 042 88Mo 81877.311 750.118 8.524
089 035 89Br 82816.512 759.536 8.534
089 036 89Kr 82807.841 766.913 8.617
089 037 89Rb 82802.347 771.114 8.664
089 038 89Sr 82797.34 774.828 8.706
089 039 89Y 82795.336 775.538 8.714
089 040 89Zr 82797.658 771.923 8.673
089 041 89Nb 82801.366 766.922 8.617
089 042 89Mo 82806.501 760.493 8.545
090 035 90Br 83751.956 763.657 8.485
090 036 90Kr 83741.095 773.225 8.591
090 037 90Rb 83736.192 776.834 8.631
090 038 90Sr 83729.102 782.631 8.696
090 039 90Y 83728.045 782.395 8.693
090 040 90Zr 83725.254 783.893 8.71
090 041 90Nb 83730.854 776.999 8.633
090 042 90Mo 83732.832 773.728 8.597
090 043 90Tc 83741.278 763.988 8.489
091 035 91Br 84686.56 768.618 8.446
091 036 91Kr 84676.249 777.636 8.545
091 037 91Rb 84669.303 783.289 8.608
091 038 91Sr 84662.892 788.406 8.664
091 039 91Y 84659.681 790.324 8.685
091 040 91Zr 84657.625 791.087 8.693
091 041 91Nb 84658.372 789.046 8.671
091 042 91Mo 84662.289 783.836 8.614
091 043 91Tc 84668.002 776.83 8.537
092 035 92Br 85622.984 771.76 8.389
092 036 92Kr 85610.268 783.182 8.513
092 037 92Rb 85603.77 788.387 8.569
092 038 92Sr 85595.163 795.701 8.649
092 039 92Y 85592.707 796.863 8.662
092 040 92Zr 85588.555 799.722 8.693
092 041 92Nb 85590.05 796.934 8.662
092 042 92Mo 85589.182 796.508 8.658
092 043 92Tc 85596.541 787.856 8.564
093 036 93Kr 86546.527 786.488 8.457
093 037 93Rb 86537.418 794.304 8.541
093 038 93Sr 86529.44 800.989 8.613
093 039 93Y 86524.791 804.344 8.649
093 040 93Zr 86521.386 806.456 8.672
093 041 93Nb 86520.784 805.765 8.664
093 042 93Mo 86520.678 804.577 8.651
093 043 93Tc 86523.367 800.595 8.609
093 044 93Ru 86529.189 793.48 8.532
094 037 94Rb 87472.977 798.31 8.493
094 038 94Sr 87462.179 807.815 8.594
094 039 94Y 87458.16 810.541 8.623
094 040 94Zr 87452.73 814.677 8.667
094 041 94Nb 87453.122 812.993 8.649
094 042 94Mo 87450.566 814.256 8.662
094 043 94Tc 87454.31 809.217 8.609
094 044 94Ru 87455.385 806.849 8.584
095 037 95Rb 88407.17 803.683 8.46
095 038 95Sr 88397.396 812.163 8.549
095 039 95Y 88390.795 817.471 8.605
095 040 95Zr 88385.833 821.14 8.644
095 041 95Nb 88384.198 821.481 8.647
095 042 95Mo 88382.762 821.625 8.649
095 043 95Tc 88383.941 819.152 8.623
095 044 95Ru 88385.997 815.802 8.587
095 045 95Rh 88390.596 809.91 8.525
096 037 96Rb 89343.293 807.125 8.408
096 038 96Sr 89331.068 818.057 8.521
096 039 96Y 89325.149 822.682 8.57
096 040 96Zr 89317.542 828.996 8.635
096 041 96Nb 89316.87 828.375 8.629
096 042 96Mo 89313.173 830.779 8.654
096 043 96Tc 89315.635 827.023 8.615
096 044 96Ru 89314.869 826.496 8.609
096 045 96Rh 89320.751 819.32 8.535
096 046 96Pd 89323.689 815.089 8.491
097 037 97Rb 90277.652 812.331 8.375
097 038 97Sr 90266.713 821.977 8.474
097 039 97Y 90258.732 828.665 8.543
097 040 97Zr 90251.533 834.571 8.604
097 041 97Nb 90248.363 836.448 8.623
097 042 97Mo 90245.917 837.6 8.635
097 043 97Tc 90245.726 836.497 8.624
097 044 97Ru 90246.323 834.607 8.604
097 045 97Rh 90249.334 830.303 8.56
097 046 97Pd 90253.613 824.73 8.502
097 047 97Ag 90260.082 816.968 8.422
098 037 98Rb 91213.286 816.263 8.329
098 038 98Sr 91200.349 827.906 8.448
098 039 98Y 91194.017 832.945 8.499
098 040 98Zr 91184.686 840.983 8.581
098 041 98Nb 91181.933 842.442 8.596
098 042 98Mo 91176.84 846.243 8.635
098 043 98Tc 91178.012 843.777 8.61
098 044 98Ru 91175.705 844.79 8.62
098 045 98Rh 91180.243 838.959 8.561
098 046 98Pd 91181.607 836.302 8.534
098 047 98Ag 91189.336 827.279 8.442
098 048 98Cd 91194.255 821.067 8.378
099 037 99Rb 92148.12 820.994 8.293
099 038 99Sr 92136.299 831.522 8.399
099 039 99Y 92127.777 838.75 8.472
099 040 99Zr 92119.699 845.535 8.541
099 041 99Nb 92114.629 849.312 8.579
099 042 99Mo 92110.48 852.168 8.608
099 043 99Tc 92108.611 852.743 8.614
099 044 99Ru 92107.806 852.255 8.609
099 045 99Rh 92109.338 849.429 8.58
099 046 99Pd 92112.213 845.261 8.538
099 047 99Ag 92117.13 839.051 8.475
100 038 100Sr 93069.763 837.623 8.376
100 039 100Y 93062.182 843.911 8.439
100 040 100Zr 93052.361 852.438 8.524
100 041 100Nb 93048.511 854.995 8.55
100 042 100Mo 93041.755 860.458 8.605
100 043 100Tc 93041.412 859.508 8.595
100 044 100Ru 93037.698 861.928 8.619
100 045 100Rh 93040.822 857.511 8.575
100 046 100Pd 93040.669 856.37 8.564
100 047 100Ag 93047.234 848.512 8.485
100 048 100Cd 93050.623 843.83 8.438
100 049 100In 93060.192 832.967 8.33
100 050 100Sn 93067.071 824.795 8.248
101 037 101Rb 94018.388 829.857 8.216
101 038 101Sr 94006.067 840.884 8.326
101 039 101Y 93996.056 849.602 8.412
101 040 101Zr 93986.995 857.37 8.489
101 041 101Nb 93981.002 862.069 8.535
101 042 101Mo 93975.922 865.856 8.573
101 043 101Tc 93972.586 867.899 8.593
101 044 101Ru 93970.462 868.73 8.601
101 045 101Rh 93970.492 867.406 8.588
101 046 101Pd 93971.961 864.644 8.561
101 047 101Ag 93975.658 859.653 8.511
101 048 101Cd 93980.617 853.401 8.45
102 038 102Sr 94939.891 846.626 8.3
102 039 102Y 94930.57 854.653 8.379
102 040 102Zr 94920.209 863.721 8.468
102 041 102Nb 94915.088 867.549 8.505
102 042 102Mo 94907.37 873.973 8.568
102 043 102Tc 94905.85 874.2 8.571
102 044 102Ru 94900.807 877.95 8.607
102 045 102Rh 94902.619 874.844 8.577
102 046 102Pd 94900.958 875.212 8.581
102 047 102Ag 94906.107 868.77 8.517
102 048 102Cd 94908.183 865.4 8.484
102 049 102In 94916.64 855.65 8.389
102 050 102Sn 94921.909 849.088 8.324
103 040 103Zr 95855.073 868.422 8.431
103 041 103Nb 95847.612 874.59 8.491
103 042 103Mo 95841.571 879.338 8.537
103 043 103Tc 95837.313 882.302 8.566
103 044 103Ru 95834.141 884.182 8.584
103 045 103Rh 95832.866 884.163 8.584
103 046 103Pd 95832.898 882.837 8.571
103 047 103Ag 95835.075 879.367 8.538
103 048 103Cd 95838.706 874.443 8.49
103 049 103In 95844.245 867.61 8.423
104 041 104Nb 96782.206 879.561 8.457
104 042 104Mo 96773.585 886.889 8.528
104 043 104Tc 96770.914 888.267 8.541
104 044 104Ru 96764.804 893.083 8.587
104 045 104Rh 96765.433 891.162 8.569
104 046 104Pd 96762.481 892.82 8.585
104 047 104Ag 96766.249 887.758 8.536
104 048 104Cd 96766.874 885.84 8.518
104 049 104In 96774.228 877.193 8.435
104 050 104Sn 96778.237 871.89 8.384
105 041 105Nb 97715.07 886.263 8.441
105 042 105Mo 97708.069 891.97 8.495
105 043 105Tc 97702.608 896.138 8.535
105 044 105Ru 97698.459 898.994 8.562
105 045 105Rh 97696.03 900.129 8.573
105 046 105Pd 97694.952 899.914 8.571
105 047 105Ag 97695.786 897.787 8.55
105 048 105Cd 97698.013 894.266 8.517
105 049 105In 97702.351 888.635 8.463
105 050 105Sn 97708.061 881.632 8.396
105 051 105Sb 97716.99 871.409 8.299
106 042 106Mo 98640.648 898.957 8.481
106 043 106Tc 98636.617 901.694 8.507
106 044 106Ru 98629.559 907.459 8.561
106 045 106Rh 98629.009 906.716 8.554
106 046 106Pd 98624.957 909.474 8.58
106 047 106Ag 98627.411 905.727 8.545
106 048 106Cd 98626.705 905.14 8.539
106 049 106In 98632.72 897.831 8.47
106 050 106Sn 98635.385 893.873 8.433
106 052 106Te 98653.583 873.088 8.237
107 042 107Mo 99575.457 903.713 8.446
107 043 107Tc 99568.786 909.091 8.496
107 044 107Ru 99563.455 913.128 8.534
107 045 107Rh 99560.001 915.289 8.554
107 046 107Pd 99557.985 916.012 8.561
107 047 107Ag 99557.44 915.263 8.554
107 048 107Cd 99558.346 913.064 8.533
107 049 107In 99561.26 908.857 8.494
107 050 107Sn 99565.729 903.094 8.44
108 043 108Tc 100503.43 914.012 8.463
108 044 108Ru 100495.199 920.95 8.527
108 045 108Rh 100493.338 921.517 8.533
108 046 108Pd 100488.323 925.239 8.567
108 047 108Ag 100489.734 922.535 8.542
108 048 108Cd 100487.573 923.402 8.55
108 049 108In 100492.198 917.484 8.495
108 050 108Sn 100493.762 914.627 8.469
108 052 108Te 100509.061 896.741 8.303
109 043 109Tc 101436.334 920.673 8.447
109 044 109Ru 101429.513 926.201 8.497
109 045 109Rh 101424.841 929.58 8.528
109 046 109Pd 101421.734 931.393 8.545
109 047 109Ag 101420.108 931.727 8.548
109 048 109Cd 101419.811 930.73 8.539
109 049 109In 101421.319 927.928 8.513
109 050 109Sn 101424.658 923.296 8.471
109 051 109Sb 101430.527 916.134 8.405
109 052 109Te 101438.665 906.702 8.318
109 053 109I 101448.154 895.92 8.219
110 043 110Tc 102371.408 925.165 8.411
110 044 110Ru 102361.877 933.402 8.485
110 045 110Rh 102358.566 935.42 8.504
110 046 110Pd 102352.486 940.207 8.547
110 047 110Ag 102352.864 938.536 8.532
110 048 110Cd 102349.46 940.646 8.551
110 049 110In 102352.827 935.986 8.509
110 050 110Sn 102352.947 934.572 8.496
110 052 110Te 102365.489 919.444 8.359
110 054 110Xe 102384.847 897.499 8.159
111 043 111Tc 103304.642 931.496 8.392
111 044 111Ru 103296.681 938.164 8.452
111 045 111Rh 103290.483 943.068 8.496
111 046 111Pd 103286.325 945.933 8.522
111 047 111Ag 103283.597 947.368 8.535
111 048 111Cd 103282.05 947.622 8.537
111 049 111In 103282.4 945.978 8.522
111 050 111Sn 103284.34 942.745 8.493
111 051 111Sb 103288.886 936.905 8.441
111 052 111Te 103295.784 928.715 8.367
112 043 112Tc 104239.357 936.347 8.36
112 044 112Ru 104229.366 945.045 8.438
112 045 112Rh 104224.595 948.523 8.469
112 046 112Pd 104217.488 954.336 8.521
112 047 112Ag 104216.689 953.842 8.516
112 048 112Cd 104212.221 957.016 8.545
112 049 112In 104214.295 953.649 8.515
112 050 112Sn 104213.119 953.532 8.514
112 051 112Sb 104219.668 945.69 8.444
112 052 112Te 104223.458 940.606 8.398
112 054 112Xe 104239.766 921.712 8.23
113 044 113Ru 105164.14 949.836 8.406
113 045 113Rh 105157.149 955.534 8.456
113 046 113Pd 105151.628 959.761 8.493
113 047 113Ag 105147.774 962.322 8.516
113 048 113Cd 105145.246 963.556 8.527
113 049 113In 105144.415 963.094 8.523
113 050 113Sn 105144.941 961.275 8.507
113 051 113Sb 105148.343 956.58 8.465
113 052 113Te 105153.905 949.724 8.405
113 053 113I 105160.611 941.725 8.334
113 054 113Xe 105169.14 931.903 8.247
113 055 113Cs 105179.019 920.731 8.148
114 045 114Rh 106091.693 960.555 8.426
114 046 114Pd 106083.315 967.64 8.488
114 047 114Ag 106081.352 968.309 8.494
114 048 114Cd 106075.769 972.599 8.532
114 049 114In 106076.707 970.368 8.512
114 050 114Sn 106074.207 971.574 8.523
114 051 114Sb 106079.742 964.746 8.463
114 052 114Te 106081.857 961.338 8.433
114 054 114Xe 106095.638 944.97 8.289
114 056 114Ba 106115.752 922.269 8.09
115 044 115Ru 107032.898 960.209 8.35
115 045 115Rh 107024.607 967.206 8.41
115 046 115Pd 107017.906 972.614 8.458
115 047 115Ag 107012.805 976.422 8.491
115 048 115Cd 107009.193 978.74 8.511
115 049 115In 107007.236 979.404 8.517
115 050 115Sn 107006.226 979.121 8.514
115 051 115Sb 107008.748 975.305 8.481
115 052 115Te 107013.177 969.583 8.431
115 053 115I 107018.391 963.076 8.375
115 054 115Xe 107025.561 954.612 8.301
116 045 116Rh 107959.571 971.808 8.378
116 046 116Pd 107949.84 980.245 8.45
116 047 116Ag 107946.719 982.073 8.466
116 048 116Cd 107940.059 987.44 8.512
116 049 116In 107940.017 986.188 8.502
116 050 116Sn 107936.227 988.684 8.523
116 051 116Sb 107940.424 983.195 8.476
116 052 116Te 107941.465 980.86 8.456
116 053 116I 107948.733 972.299 8.382
116 054 116Xe 107952.665 967.074 8.337
117 046 117Pd 108884.764 984.887 8.418
117 047 117Ag 108878.513 989.844 8.46
117 048 117Cd 108873.847 993.217 8.489
117 049 117In 108870.816 994.955 8.504
117 050 117Sn 108868.85 995.627 8.51
117 051 117Sb 108870.094 993.09 8.488
117 052 117Te 108873.131 988.76 8.451
117 053 117I 108877.282 983.315 8.404
117 054 117Xe 108883.021 976.283 8.344
117 055 117Cs 108890.255 967.756 8.271
118 046 118Pd 109817.318 991.898 8.406
118 047 118Ag 109812.707 995.216 8.434
118 048 118Cd 109805.057 1001.572 8.488
118 049 118In 109804.025 1001.311 8.486
118 050 118Sn 109799.087 1004.955 8.517
118 051 118Sb 109802.234 1000.515 8.479
118 052 118Te 109802.001 999.455 8.47
118 053 118I 109808.24 991.923 8.406
118 054 118Xe 109810.621 988.248 8.375
118 055 118Cs 109819.78 977.796 8.286
119 047 119Ag 110745.211 1002.277 8.422
119 048 119Cd 110739.35 1006.845 8.461
119 049 119In 110735.045 1009.856 8.486
119 050 119Sn 110732.169 1011.438 8.499
119 051 119Sb 110732.25 1010.065 8.488
119 052 119Te 110734.032 1006.989 8.462
119 053 119I 110736.939 1002.789 8.427
119 054 119Xe 110741.4 997.035 8.378
119 055 119Cs 110747.378 989.763 8.317
119 056 119Ba 110754.582 981.266 8.246
120 046 120Pd 111685.626 1002.721 8.356
120 047 120Ag 111679.615 1007.438 8.395
120 048 120Cd 111670.78 1014.98 8.458
120 049 120In 111668.503 1015.964 8.466
120 050 120Sn 111662.627 1020.546 8.505
120 051 120Sb 111664.797 1017.083 8.476
120 052 120Te 111663.305 1017.282 8.477
120 053 120I 111668.409 1010.884 8.424
120 054 120Xe 111669.516 1008.484 8.404
120 055 120Cs 111677.288 999.419 8.328
120 056 120Ba 111681.776 993.637 8.28
121 047 121Ag 112612.099 1014.52 8.384
121 048 121Cd 112605.188 1020.137 8.431
121 049 121In 112599.896 1024.136 8.464
121 050 121Sn 112596.022 1026.717 8.485
121 051 121Sb 112595.12 1026.325 8.482
121 052 121Te 112595.653 1024.499 8.467
121 053 121I 112597.406 1021.453 8.442
121 054 121Xe 112600.709 1016.856 8.404
121 055 121Cs 112605.571 1010.701 8.353
121 056 121Ba 112611.42 1003.559 8.294
122 048 122Cd 113537.012 1027.879 8.425
122 049 122In 113533.651 1029.946 8.442
122 050 122Sn 113526.774 1035.53 8.488
122 051 122Sb 113527.878 1033.132 8.468
122 052 122Te 113525.384 1034.333 8.478
122 053 122I 113529.107 1029.317 8.437
122 054 122Xe 113529.321 1027.81 8.425
122 055 122Cs 113536.025 1019.812 8.359
122 056 122Ba 113539.045 1015.499 8.324
123 048 123Cd 114471.926 1032.53 8.395
123 049 123In 114465.299 1037.864 8.438
123 050 123Sn 114460.393 1041.476 8.467
123 051 123Sb 114458.479 1042.097 8.472
123 052 123Te 114458.02 1041.263 8.466
123 053 123I 114458.738 1039.251 8.449
123 054 123Xe 114460.921 1035.775 8.421
123 055 123Cs 114464.615 1030.788 8.38
123 056 123Ba 114469.493 1024.616 8.33
124 048 124Cd 115404.02 1040.001 8.387
124 049 124In 115399.339 1043.389 8.414
124 050 124Sn 115391.471 1049.963 8.467
124 051 124Sb 115391.576 1048.565 8.456
124 052 124Te 115388.161 1050.686 8.473
124 053 124I 115390.81 1046.745 8.441
124 054 124Xe 115390.004 1046.257 8.438
124 055 124Cs 115395.422 1039.546 8.383
124 056 124Ba 115397.552 1036.123 8.356
124 057 124La 115405.871 1026.51 8.278
125 048 125Cd 116338.864 1044.723 8.358
125 049 125In 116331.233 1051.06 8.408
125 050 125Sn 116325.303 1055.696 8.446
125 051 125Sb 116322.435 1057.271 8.458
125 052 125Te 116321.157 1057.256 8.458
125 053 125I 116320.832 1056.287 8.45
125 054 125Xe 116321.966 1053.861 8.431
125 055 125Cs 116324.559 1049.974 8.4
125 056 125Ba 116328.468 1044.772 8.358
125 057 125La 116333.866 1038.081 8.305
126 048 126Cd 117271.388 1051.764 8.347
126 049 126In 117265.397 1056.462 8.385
126 050 126Sn 117256.676 1063.889 8.444
126 051 126Sb 117255.785 1063.487 8.44
126 052 126Te 117251.609 1066.369 8.463
126 053 126I 117253.252 1063.433 8.44
126 054 126Xe 117251.483 1063.909 8.444
126 055 126Cs 117255.796 1058.303 8.399
126 056 126Ba 117256.96 1055.845 8.38
126 057 126La 117264.149 1047.363 8.312
126 058 126Ce 117267.787 1042.432 8.273
127 048 127Cd 118206.692 1056.025 8.315
127 049 127In 118197.711 1063.713 8.376
127 050 127Sn 118190.691 1069.44 8.421
127 051 127Sb 118186.979 1071.858 8.44
127 052 127Te 118184.887 1072.657 8.446
127 053 127I 118183.674 1072.577 8.445
127 054 127Xe 118183.825 1071.132 8.434
127 055 127Cs 118185.395 1068.269 8.412
127 056 127Ba 118188.308 1064.063 8.378
127 057 127La 118192.717 1058.36 8.334
127 058 127Ce 118198.122 1051.662 8.281
128 048 128Cd 119139.416 1062.867 8.304
128 049 128In 119131.835 1069.154 8.353
128 050 128Sn 119122.349 1077.347 8.417
128 051 128Sb 119120.564 1077.839 8.421
128 052 128Te 119115.67 1081.439 8.449
128 053 128I 119116.413 1079.403 8.433
128 054 128Xe 119113.78 1080.743 8.443
128 055 128Cs 119117.198 1076.031 8.406
128 056 128Ba 119117.216 1074.72 8.396
128 057 128La 119123.477 1067.166 8.337
128 058 128Ce 119126.062 1063.287 8.307
128 059 128Pr 119134.754 1053.302 8.229
129 049 129In 120064.749 1075.806 8.34
129 050 129Sn 120056.584 1082.677 8.393
129 051 129Sb 120052.039 1085.929 8.418
129 052 129Te 120049.153 1087.522 8.43
129 053 129I 120047.142 1088.239 8.436
129 054 129Xe 120046.436 1087.651 8.431
129 055 129Cs 120047.123 1085.672 8.416
129 056 129Ba 120049.047 1082.454 8.391
129 057 129La 120052.275 1077.933 8.356
129 058 129Ce 120056.803 1072.112 8.311
129 059 129Pr 120062.805 1064.816 8.254
130 048 130Cd 121008.124 1073.289 8.256
130 049 130In 120999.293 1080.827 8.314
130 050 130Sn 120988.533 1090.294 8.387
130 051 130Sb 120985.869 1091.664 8.397
130 052 130Te 120980.298 1095.941 8.43
130 053 130I 120980.207 1094.74 8.421
130 054 130Xe 120976.746 1096.907 8.438
130 055 130Cs 120979.217 1093.143 8.409
130 056 130Ba 120978.344 1092.722 8.406
130 057 130La 120983.467 1086.306 8.356
130 058 130Ce 120985.161 1083.319 8.333
130 059 130Pr 120992.893 1074.294 8.264
130 060 130Nd 120996.966 1068.927 8.223
131 049 131In 121932.54 1087.145 8.299
131 050 131Sn 121922.852 1095.54 8.363
131 051 131Sb 121917.667 1099.432 8.393
131 052 131Te 121913.934 1101.871 8.411
131 053 131I 121911.188 1103.323 8.422
131 054 131Xe 121909.707 1103.512 8.424
131 055 131Cs 121909.551 1102.374 8.415
131 056 131Ba 121910.416 1100.216 8.399
131 057 131La 121912.82 1096.519 8.37
131 058 131Ce 121916.358 1091.687 8.333
131 059 131Pr 121921.287 1085.465 8.286
131 060 131Nd 121927.287 1078.172 8.23
132 049 132In 122869.751 1089.5 8.254
132 050 132Sn 122855.106 1102.851 8.355
132 051 132Sb 122851.475 1105.189 8.373
132 052 132Te 122845.456 1109.915 8.408
132 053 132I 122844.427 1109.65 8.406
132 054 132Xe 122840.335 1112.448 8.428
132 055 132Cs 122841.949 1109.541 8.406
132 056 132Ba 122840.159 1110.038 8.409
132 057 132La 122844.343 1104.561 8.368
132 058 132Ce 122845.098 1102.513 8.352
132 059 132Pr 122851.851 1094.466 8.291
132 060 132Nd 122855.124 1089.9 8.257
133 050 133Sn 123792.204 1105.319 8.311
133 051 133Sb 123783.7 1112.529 8.365
133 052 133Te 123779.187 1115.749 8.389
133 053 133I 123775.734 1117.909 8.405
133 054 133Xe 123773.466 1118.883 8.413
133 055 133Cs 123772.528 1118.528 8.41
133 056 133Ba 123772.534 1117.228 8.4
133 057 133La 123774.083 1114.386 8.379
133 058 133Ce 123776.643 1110.533 8.35
133 059 133Pr 123780.617 1105.266 8.31
133 060 133Nd 123785.714 1098.875 8.262
133 061 133Pm 123792.123 1091.173 8.204
134 050 134Sn 124727.848 1109.24 8.278
134 051 134Sb 124719.967 1115.827 8.327
134 052 134Te 124711.067 1123.434 8.384
134 053 134I 124709.043 1124.165 8.389
134 054 134Xe 124704.479 1127.435 8.414
134 055 134Cs 124705.202 1125.419 8.399
134 056 134Ba 124702.632 1126.696 8.408
134 057 134La 124705.852 1122.182 8.374
134 058 134Ce 124705.724 1121.017 8.366
134 059 134Pr 124711.539 1113.909 8.313
134 060 134Nd 124713.892 1110.262 8.286
134 061 134Pm 124722.287 1100.574 8.213
135 051 135Sb 125655.921 1119.439 8.292
135 052 135Te 125647.29 1126.776 8.346
135 053 135I 125640.819 1131.954 8.385
135 054 135Xe 125637.681 1133.799 8.399
135 055 135Cs 125636.005 1134.181 8.401
135 056 135Ba 125635.225 1133.668 8.398
135 057 135La 125635.914 1131.686 8.383
135 058 135Ce 125637.429 1128.877 8.362
135 059 135Pr 125640.607 1124.406 8.329
135 060 135Nd 125644.818 1118.902 8.288
135 061 135Pm 125650.541 1111.885 8.236
135 062 135Sm 125657.15 1103.983 8.178
136 052 136Te 126582.184 1131.448 8.319
136 053 136I 126576.603 1135.735 8.351
136 054 136Xe 126569.167 1141.878 8.396
136 055 136Cs 126568.742 1141.009 8.39
136 056 136Ba 126565.683 1142.775 8.403
136 057 136La 126568.019 1139.146 8.376
136 058 136Ce 126567.08 1138.792 8.373
136 059 136Pr 126571.71 1132.868 8.33
136 060 136Nd 126573.327 1129.958 8.309
136 061 136Pm 126580.815 1121.177 8.244
136 062 136Sm 126584.693 1116.005 8.206
137 052 137Te 127518.548 1134.649 8.282
137 053 137I 127511.094 1140.81 8.327
137 054 137Xe 127504.707 1145.903 8.364
137 055 137Cs 127500.029 1149.288 8.389
137 056 137Ba 127498.343 1149.681 8.392
137 057 137La 127498.452 1148.278 8.382
137 058 137Ce 127499.163 1146.274 8.367
137 059 137Pr 127501.354 1142.79 8.342
137 060 137Nd 127504.44 1138.41 8.31
137 061 137Pm 127509.436 1132.121 8.264
137 062 137Sm 127514.968 1125.296 8.214
138 053 138I 128446.761 1144.708 8.295
138 054 138Xe 128438.43 1151.746 8.346
138 055 138Cs 128435.182 1153.7 8.36
138 056 138Ba 128429.296 1158.293 8.393
138 057 138La 128430.522 1155.774 8.375
138 058 138Ce 128428.967 1156.035 8.377
138 059 138Pr 128432.893 1150.816 8.339
138 060 138Nd 128433.496 1148.92 8.326
138 061 138Pm 128440.063 1141.059 8.269
138 062 138Sm 128442.994 1136.835 8.238
138 063 138Eu 128452.231 1126.305 8.162
139 053 139I 129381.745 1149.289 8.268
139 054 139Xe 129374.43 1155.311 8.312
139 055 139Cs 129368.862 1159.586 8.342
139 056 139Ba 129364.138 1163.016 8.367
139 057 139La 129361.309 1164.551 8.378
139 058 139Ce 129361.078 1163.49 8.37
139 059 139Pr 129362.696 1160.578 8.349
139 060 139Nd 129365.016 1156.965 8.323
139 061 139Pm 129369.001 1151.687 8.286
139 062 139Sm 129373.606 1145.788 8.243
139 063 139Eu 129380.077 1138.024 8.187
140 054 140Xe 130308.578 1160.728 8.291
140 055 140Cs 130304.006 1164.007 8.314
140 056 140Ba 130297.275 1169.445 8.353
140 057 140La 130295.714 1169.712 8.355
140 058 140Ce 130291.441 1172.692 8.376
140 059 140Pr 130294.318 1168.522 8.347
140 060 140Nd 130294.25 1167.296 8.338
140 061 140Pm 130299.781 1160.472 8.289
140 062 140Sm 130302.024 1156.936 8.264
140 063 140Eu 130309.979 1147.687 8.198
140 064 140Gd 130314.676 1141.697 8.155
140 065 140Tb 130325.467 1129.613 8.069
141 054 141Xe 131244.732 1164.14 8.256
141 055 141Cs 131238.074 1169.504 8.294
141 056 141Ba 131232.314 1173.971 8.326
141 057 141La 131228.591 1176.401 8.343
141 058 141Ce 131225.578 1178.12 8.355
141 059 141Pr 131224.486 1177.919 8.354
141 060 141Nd 131225.798 1175.314 8.336
141 061 141Pm 131228.962 1170.856 8.304
141 062 141Sm 131233.035 1165.49 8.266
141 063 141Eu 131238.536 1158.696 8.218
141 064 141Gd 131244.728 1151.21 8.165
141 065 141Tb 131252.901 1141.744 8.097
142 054 142Xe 132179.076 1169.361 8.235
142 055 142Cs 132173.53 1173.614 8.265
142 056 142Ba 132165.711 1180.139 8.311
142 057 142La 132162.988 1181.569 8.321
142 058 142Ce 132157.973 1185.29 8.347
142 059 142Pr 132158.208 1183.762 8.336
142 060 142Nd 132155.535 1185.142 8.346
142 061 142Pm 132159.822 1179.562 8.307
142 062 142Sm 132161.475 1176.615 8.286
142 063 142Eu 132168.637 1168.16 8.226
142 064 142Gd 132172.486 1163.018 8.19
143 055 143Cs 133107.868 1178.841 8.244
143 056 143Ba 133101.092 1184.324 8.282
143 057 143La 133096.33 1187.792 8.306
143 058 143Ce 133092.394 1190.435 8.325
143 059 143Pr 133090.421 1191.114 8.329
143 060 143Nd 133088.977 1191.266 8.331
143 061 143Pm 133089.507 1189.442 8.318
143 062 143Sm 133092.439 1185.217 8.288
143 063 143Eu 133097.209 1179.153 8.246
143 064 143Gd 133102.71 1172.359 8.198
143 065 143Tb 133109.999 1163.777 8.138
144 055 144Cs 134043.763 1182.511 8.212
144 056 144Ba 134034.753 1190.228 8.265
144 057 144La 134031.121 1192.567 8.282
144 058 144Ce 134025.063 1197.331 8.315
144 059 144Pr 134024.233 1196.868 8.312
144 060 144Nd 134020.725 1199.083 8.327
144 061 144Pm 134022.546 1195.968 8.305
144 062 144Sm 134021.484 1195.737 8.304
144 063 144Eu 134027.323 1188.605 8.254
144 064 144Gd 134030.674 1183.96 8.222
144 065 144Tb 134039.555 1173.786 8.151
144 066 144Dy 134044.832 1167.216 8.106
145 055 145Cs 134978.47 1187.37 8.189
145 056 145Ba 134970.606 1193.94 8.234
145 057 145La 134964.515 1198.738 8.267
145 058 145Ce 134959.894 1202.066 8.29
145 059 145Pr 134956.851 1203.815 8.302
145 060 145Nd 134954.535 1204.838 8.309
145 061 145Pm 134954.187 1203.893 8.303
145 062 145Sm 134954.292 1202.494 8.293
145 063 145Eu 134956.441 1199.052 8.269
145 064 145Gd 134961.001 1193.199 8.229
145 065 145Tb 134967.537 1185.369 8.175
145 066 145Dy 134974.616 1176.997 8.117
146 055 146Cs 135914.401 1191.004 8.158
146 056 146Ba 135904.51 1199.602 8.216
146 057 146La 135899.879 1202.939 8.239
146 058 146Ce 135892.808 1208.717 8.279
146 059 146Pr 135891.267 1208.965 8.281
146 060 146Nd 135886.535 1212.403 8.304
146 061 146Pm 135887.495 1210.15 8.289
146 062 146Sm 135885.442 1210.91 8.294
146 063 146Eu 135888.811 1206.247 8.262
146 064 146Gd 135889.329 1204.436 8.25
146 065 146Tb 135897.141 1195.331 8.187
146 066 146Dy 135901.846 1189.332 8.146
147 055 147Cs 136849.495 1195.475 8.132
147 057 147La 136833.643 1208.741 8.223
147 058 147Ce 136827.952 1213.138 8.253
147 059 147Pr 136824.016 1215.781 8.271
147 060 147Nd 136820.808 1217.696 8.284
147 061 147Pm 136819.401 1217.809 8.284
147 062 147Sm 136818.666 1217.251 8.281
147 063 147Eu 136819.877 1214.747 8.264
147 064 147Gd 136821.553 1211.777 8.243
147 065 147Tb 136825.653 1206.384 8.207
147 066 147Dy 136831.706 1199.038 8.157
147 067 147Ho 136839.546 1189.904 8.095
148 055 148Cs 137785.709 1198.827 8.1
148 056 148Ba 137774.488 1208.754 8.167
148 057 148La 137768.857 1213.092 8.197
148 058 148Ce 137761.085 1219.571 8.24
148 059 148Pr 137758.434 1220.928 8.25
148 060 148Nd 137753.041 1225.028 8.277
148 061 148Pm 137753.071 1223.705 8.268
148 062 148Sm 137750.09 1225.392 8.28
148 063 148Eu 137752.619 1221.57 8.254
148 064 148Gd 137752.134 1220.761 8.248
148 065 148Tb 137757.359 1214.243 8.204
148 066 148Dy 137759.529 1210.78 8.181
148 067 148Ho 137768.857 1200.159 8.109
149 058 149Ce 138696.27 1223.951 8.214
149 059 149Pr 138691.399 1227.529 8.238
149 060 149Nd 138687.567 1230.067 8.255
149 061 149Pm 138685.366 1230.975 8.262
149 062 149Sm 138683.784 1231.263 8.264
149 063 149Eu 138683.968 1229.786 8.254
149 064 149Gd 138684.771 1227.69 8.24
149 065 149Tb 138687.897 1223.271 8.21
149 066 149Dy 138691.167 1218.707 8.179
149 067 149Ho 138696.683 1211.898 8.134
149 068 149Er 138704.118 1203.17 8.075
150 058 150Ce 139629.644 1230.142 8.201
150 059 150Pr 139625.649 1232.844 8.219
150 060 150Nd 139619.752 1237.448 8.25
150 061 150Pm 139619.328 1236.578 8.244
150 062 150Sm 139615.363 1239.25 8.262
150 063 150Eu 139617.112 1236.208 8.241
150 064 150Gd 139615.629 1236.397 8.243
150 065 150Tb 139619.776 1230.957 8.206
150 066 150Dy 139621.059 1228.381 8.189
150 067 150Ho 139627.917 1220.229 8.135
150 068 150Er 139631.521 1215.332 8.102
151 058 151Ce 140564.458 1234.894 8.178
151 059 151Pr 140558.676 1239.382 8.208
151 060 151Nd 140553.983 1242.782 8.23
151 061 151Pm 140551.03 1244.442 8.241
151 062 151Sm 140549.332 1244.847 8.244
151 063 151Eu 140548.744 1244.141 8.239
151 064 151Gd 140548.697 1242.895 8.231
151 065 151Tb 140550.751 1239.547 8.209
151 066 151Dy 140553.111 1235.894 8.185
151 067 151Ho 140557.727 1229.985 8.146
151 068 151Er 140562.582 1223.836 8.105
151 069 151Tm 140569.555 1215.57 8.05
151 070 151Yb 140578.286 1205.546 7.984
152 059 152Pr 141493.131 1244.493 8.187
152 060 152Nd 141486.272 1250.058 8.224
152 061 152Pm 141484.657 1250.38 8.226
152 062 152Sm 141480.639 1253.104 8.244
152 063 152Eu 141482.003 1250.448 8.227
152 064 152Gd 141479.672 1251.485 8.233
152 065 152Tb 141483.155 1246.709 8.202
152 066 152Dy 141483.24 1245.33 8.193
152 067 152Ho 141489.245 1238.032 8.145
152 068 152Er 141491.842 1234.142 8.119
152 069 152Tm 141500.061 1224.629 8.057
152 070 152Yb 141505.01 1218.387 8.016
153 059 153Pr 142426.805 1250.384 8.172
153 060 153Nd 142420.575 1255.321 8.205
153 061 153Pm 142416.728 1257.874 8.221
153 062 153Sm 142414.336 1258.973 8.229
153 063 153Eu 142413.018 1258.998 8.229
153 064 153Gd 142412.99 1257.732 8.22
153 065 153Tb 142414.049 1255.38 8.205
153 066 153Dy 142415.708 1252.428 8.186
153 067 153Ho 142419.328 1247.514 8.154
153 068 153Er 142423.348 1242.201 8.119
153 069 153Tm 142429.31 1234.946 8.072
153 071 153Lu 142443.893 1217.776 7.959
154 059 154Pr 143361.729 1255.025 8.15
154 060 154Nd 143353.728 1261.733 8.193
154 061 154Pm 143350.407 1263.76 8.206
154 062 154Sm 143345.934 1266.94 8.227
154 063 154Eu 143346.141 1265.44 8.217
154 064 154Gd 143343.661 1266.627 8.225
154 065 154Tb 143346.703 1262.291 8.197
154 066 154Dy 143345.954 1261.747 8.193
154 067 154Ho 143351.197 1255.211 8.151
154 068 154Er 143352.718 1252.396 8.132
154 069 154Tm 143360.39 1243.431 8.074
154 070 154Yb 143364.374 1238.154 8.04
155 061 155Pm 144283.431 1270.302 8.195
155 062 155Sm 144279.693 1272.747 8.211
155 063 155Eu 144277.555 1273.592 8.217
155 064 155Gd 144276.791 1273.062 8.213
155 065 155Tb 144277.103 1271.456 8.203
155 066 155Dy 144278.686 1268.58 8.184
155 067 155Ho 144281.295 1264.678 8.159
155 068 155Er 144284.609 1260.07 8.129
155 069 155Tm 144289.678 1253.708 8.088
155 070 155Yb 144295.299 1246.794 8.044
155 071 155Lu 144302.737 1238.062 7.987
156 060 156Nd 145221.876 1272.715 8.158
156 061 156Pm 145217.675 1275.623 8.177
156 062 156Sm 145212.014 1279.991 8.205
156 063 156Eu 145210.78 1279.931 8.205
156 064 156Gd 145207.82 1281.598 8.215
156 065 156Tb 145209.753 1278.372 8.195
156 066 156Dy 145208.81 1278.021 8.192
156 067 156Ho 145213.479 1272.059 8.154
156 068 156Er 145214.105 1270.14 8.142
156 069 156Tm 145220.967 1261.984 8.09
156 070 156Yb 145224.032 1257.626 8.062
156 071 156Lu 145233.035 1247.33 7.996
156 072 156Hf 145238.424 1240.647 7.953
157 061 157Pm 146151.019 1281.844 8.165
157 062 157Sm 146146.148 1285.422 8.187
157 063 157Eu 146142.9 1287.377 8.2
157 064 157Gd 146141.025 1287.958 8.204
157 065 157Tb 146140.575 1287.116 8.198
157 066 157Dy 146141.406 1284.991 8.185
157 067 157Ho 146143.494 1281.609 8.163
157 068 157Er 146146.392 1277.418 8.136
157 069 157Tm 146150.592 1271.925 8.101
157 070 157Yb 146155.348 1265.875 8.063
157 071 157Lu 146161.796 1258.134 8.014
157 073 157Ta 146177.627 1239.716 7.896
158 061 158Pm 147085.793 1286.636 8.143
158 062 158Sm 147079.162 1291.973 8.177
158 063 158Eu 147076.651 1293.191 8.185
158 064 158Gd 147072.653 1295.896 8.202
158 065 158Tb 147073.362 1293.894 8.189
158 066 158Dy 147071.916 1294.046 8.19
158 067 158Ho 147075.626 1289.043 8.158
158 068 158Er 147076.002 1287.373 8.148
158 069 158Tm 147082.092 1279.99 8.101
158 070 158Yb 147084.269 1276.52 8.079
158 071 158Lu 147092.559 1266.936 8.019
158 072 158Hf 147097.158 1261.044 7.981
159 062 159Sm 148013.656 1297.045 8.158
159 063 159Eu 148009.302 1300.105 8.177
159 064 159Gd 148006.276 1301.839 8.188
159 065 159Tb 148004.794 1302.027 8.189
159 066 159Dy 148004.649 1300.879 8.182
159 067 159Ho 148005.975 1298.259 8.165
159 068 159Er 148008.233 1294.708 8.143
159 069 159Tm 148011.719 1289.928 8.113
159 070 159Yb 148015.935 1284.419 8.078
159 071 159Lu 148021.557 1277.504 8.035
159 072 159Hf 148027.902 1269.865 7.987
159 073 159Ta 148035.797 1260.677 7.929
160 064 160Gd 148938.39 1309.29 8.183
160 065 160Tb 148937.984 1308.402 8.178
160 066 160Dy 148935.638 1309.455 8.184
160 067 160Ho 148938.417 1305.382 8.159
160 068 160Er 148938.236 1304.27 8.152
160 069 160Tm 148943.483 1297.73 8.111
160 070 160Yb 148945.102 1294.817 8.093
160 071 160Lu 148952.491 1286.135 8.038
160 072 160Hf 148956.313 1281.02 8.006
160 073 160Ta 148965.859 1270.18 7.939
160 074 160W 148971.868 1262.878 7.893
161 064 161Gd 149872.319 1314.925 8.167
161 065 161Tb 149869.853 1316.099 8.175
161 066 161Dy 149868.749 1315.909 8.173
161 067 161Ho 149869.096 1314.269 8.163
161 068 161Er 149870.579 1311.492 8.146
161 069 161Tm 149873.378 1307.4 8.12
161 070 161Yb 149876.922 1302.563 8.09
161 071 161Lu 149881.693 1296.498 8.053
161 072 161Hf 149887.425 1289.473 8.009
161 075 161Re 149911.331 1261.687 7.837
162 064 162Gd 150805.039 1321.771 8.159
162 065 162Tb 150803.135 1322.382 8.163
162 066 162Dy 150800.117 1324.106 8.173
162 067 162Ho 150801.746 1321.184 8.155
162 068 162Er 150800.939 1320.698 8.152
162 069 162Tm 150805.287 1315.056 8.118
162 070 162Yb 150806.428 1312.622 8.103
162 071 162Lu 150812.909 1304.848 8.055
162 072 162Hf 150816.065 1300.398 8.027
162 073 162Ta 150824.947 1290.223 7.964
162 074 162W 150830.214 1283.663 7.924
163 065 163Tb 151735.708 1329.374 8.156
163 066 163Dy 151733.412 1330.377 8.162
163 067 163Ho 151732.903 1329.592 8.157
163 068 163Er 151733.602 1327.6 8.145
163 069 163Tm 151735.53 1324.379 8.125
163 070 163Yb 151738.45 1320.165 8.099
163 071 163Lu 151742.452 1314.87 8.067
163 072 163Hf 151747.446 1308.583 8.028
163 073 163Ta 151753.681 1301.054 7.982
163 074 163W 151760.8 1292.642 7.93
163 075 163Re 151769.192 1282.957 7.871
164 065 164Tb 152669.723 1334.924 8.14
164 066 164Dy 152665.319 1338.035 8.159
164 067 164Ho 152665.794 1336.267 8.148
164 068 164Er 152664.32 1336.447 8.149
164 069 164Tm 152667.871 1331.603 8.12
164 070 164Yb 152668.225 1329.956 8.109
164 071 164Lu 152674.095 1322.792 8.066
164 072 164Hf 152676.404 1319.19 8.044
164 073 164Ta 152684.432 1309.869 7.987
164 074 164W 152688.97 1304.037 7.951
164 076 164Os 152705.722 1284.699 7.834
165 066 165Dy 153599.168 1343.751 8.144
165 067 165Ho 153597.371 1344.256 8.147
165 068 165Er 153597.236 1343.097 8.14
165 069 165Tm 153598.317 1340.722 8.126
165 070 165Yb 153600.455 1337.291 8.105
165 071 165Lu 153603.789 1332.664 8.077
165 072 165Hf 153608.084 1327.075 8.043
165 073 165Ta 153613.354 1320.512 8.003
165 074 165W 153619.836 1312.737 7.956
165 075 165Re 153627.53 1303.749 7.902
166 065 166Tb 154537.031 1346.747 8.113
166 066 166Dy 154531.69 1350.795 8.137
166 067 166Ho 154530.692 1350.499 8.136
166 068 166Er 154528.327 1351.572 8.142
166 069 166Tm 154530.853 1347.752 8.119
166 070 166Yb 154530.648 1346.663 8.112
166 071 166Lu 154535.704 1340.314 8.074
166 072 166Hf 154537.355 1337.37 8.056
166 073 166Ta 154544.605 1328.826 8.005
166 074 166W 154548.3 1323.838 7.975
166 076 166Os 154563.732 1305.819 7.866
167 066 167Dy 155465.834 1356.216 8.121
167 067 167Ho 155462.976 1357.781 8.13
167 068 167Er 155461.456 1358.008 8.132
167 069 167Tm 155461.693 1356.477 8.123
167 070 167Yb 155463.136 1353.741 8.106
167 071 167Lu 155465.719 1349.864 8.083
167 072 167Hf 155469.24 1345.05 8.054
167 073 167Ta 155473.846 1339.151 8.019
167 074 167W 155479.597 1332.106 7.977
167 076 167Os 155494.164 1314.953 7.874
167 077 167Ir 155503.074 1304.749 7.813
168 066 168Dy 156398.708 1362.907 8.113
168 067 168Ho 156396.687 1363.635 8.117
168 068 168Er 156393.25 1365.779 8.13
168 069 168Tm 156394.418 1363.318 8.115
168 070 168Yb 156393.649 1362.793 8.112
168 071 168Lu 156397.653 1357.496 8.08
168 072 168Hf 156398.841 1355.014 8.066
168 073 168Ta 156405.297 1347.265 8.019
168 074 168W 156408.29 1342.979 7.994
168 075 168Re 156416.879 1333.096 7.935
168 076 168Os 156422.167 1326.515 7.896
168 078 168Pt 156440.096 1305.999 7.774
169 066 169Dy 157333.162 1368.019 8.095
169 067 169Ho 157329.448 1370.439 8.109
169 068 169Er 157326.812 1371.783 8.117
169 069 169Tm 157325.949 1371.352 8.115
169 070 169Yb 157326.348 1369.659 8.104
169 071 169Lu 157328.13 1366.584 8.086
169 072 169Hf 157330.979 1362.442 8.062
169 073 169Ta 157334.895 1357.232 8.031
169 074 169W 157339.756 1351.078 7.995
169 075 169Re 157345.777 1343.764 7.951
169 076 169Os 157352.931 1335.316 7.901
169 077 169Ir 157361.06 1325.894 7.846
170 067 170Ho 158263.505 1375.948 8.094
170 068 170Er 158259.12 1379.04 8.112
170 069 170Tm 158258.923 1377.944 8.106
170 070 170Yb 158257.443 1378.13 8.107
170 071 170Lu 158260.391 1373.888 8.082
170 072 170Hf 158260.936 1372.05 8.071
170 073 170Ta 158266.541 1365.152 8.03
170 074 170W 158268.875 1361.524 8.009
170 075 170Re 158276.739 1352.367 7.955
170 076 170Os 158281.218 1346.595 7.921
170 078 170Pt 158297.818 1327.408 7.808
171 067 171Ho 159196.719 1382.299 8.084
171 068 171Er 159193.003 1384.721 8.098
171 069 171Tm 159191.002 1385.43 8.102
171 070 171Yb 159190.394 1384.744 8.098
171 071 171Lu 159191.362 1382.483 8.085
171 072 171Hf 159193.253 1379.298 8.066
171 073 171Ta 159196.453 1374.805 8.04
171 074 171W 159200.576 1369.389 8.008
171 075 171Re 159205.901 1362.77 7.969
171 076 171Os 159212.347 1355.031 7.924
171 077 171Ir 159219.699 1346.386 7.874
171 078 171Pt 159228.148 1336.643 7.817
171 079 171Au 159237.542 1325.956 7.754
172 068 172Er 160125.733 1391.557 8.09
172 069 172Tm 160124.331 1391.666 8.091
172 070 172Yb 160121.94 1392.764 8.097
172 071 172Lu 160123.948 1389.462 8.078
172 072 172Hf 160123.774 1388.343 8.072
172 073 172Ta 160128.337 1382.486 8.038
172 074 172W 160130.059 1379.471 8.02
172 075 172Re 160137.125 1371.112 7.972
172 076 172Os 160140.896 1366.047 7.942
172 078 172Pt 160156.011 1348.346 7.839
172 080 172Hg 160175 1326.77 7.714
173 069 173Tm 161056.946 1398.616 8.084
173 070 173Yb 161055.138 1399.131 8.087
173 071 173Lu 161055.298 1397.678 8.079
173 072 173Hf 161056.26 1395.422 8.066
173 073 173Ta 161058.764 1391.625 8.044
173 074 173W 161061.923 1387.172 8.018
173 075 173Re 161066.585 1381.217 7.984
173 076 173Os 161072.19 1374.319 7.944
173 077 173Ir 161078.845 1366.37 7.898
173 078 173Pt 161086.666 1357.256 7.845
173 079 173Au 161095.275 1347.354 7.788
174 069 174Tm 161990.829 1404.298 8.071
174 070 174Yb 161987.239 1406.595 8.084
174 071 174Lu 161988.102 1404.439 8.071
174 072 174Hf 161987.32 1403.928 8.069
174 073 174Ta 161990.914 1399.04 8.04
174 074 174W 161991.917 1396.744 8.027
174 075 174Re 161997.96 1389.407 7.985
174 076 174Os 162001.126 1384.948 7.959
174 077 174Ir 162009.742 1375.039 7.903
174 078 174Pt 162014.781 1368.706 7.866
174 080 174Hg 162032.431 1348.47 7.75
175 069 175Tm 162923.873 1410.819 8.062
175 070 175Yb 162920.982 1412.418 8.071
175 071 175Lu 162920.001 1412.106 8.069
175 072 175Hf 162920.177 1410.636 8.061
175 073 175Ta 162921.74 1407.779 8.044
175 074 175W 162924.005 1404.221 8.024
175 075 175Re 162927.839 1399.093 7.995
175 076 175Os 162932.511 1393.128 7.961
175 077 175Ir 162938.676 1385.67 7.918
175 078 175Pt 162945.904 1377.148 7.869
175 079 175Au 162953.643 1368.116 7.818
175 080 175Hg 162962.582 1357.884 7.759
176 069 176Tm 163858.317 1415.941 8.045
176 070 176Yb 163853.682 1419.283 8.064
176 071 176Lu 163853.278 1418.394 8.059
176 072 176Hf 163851.577 1418.801 8.061
176 073 176Ta 163854.273 1414.811 8.039
176 074 176W 163854.49 1413.301 8.03
176 075 176Re 163859.558 1406.94 7.994
176 076 176Os 163862.012 1403.192 7.973
176 077 176Ir 163869.738 1394.173 7.921
176 078 176Pt 163874.16 1388.458 7.889
176 080 176Hg 163890.287 1369.744 7.783
177 070 177Yb 164787.681 1424.849 8.05
177 071 177Lu 164785.77 1425.466 8.053
177 072 177Hf 164784.759 1425.185 8.052
177 073 177Ta 164785.413 1423.237 8.041
177 074 177W 164786.924 1420.432 8.025
177 075 177Re 164789.846 1416.217 8.001
177 076 177Os 164793.654 1411.116 7.972
177 077 177Ir 164799.046 1404.43 7.935
177 078 177Pt 164805.212 1396.971 7.892
177 079 177Au 164812.521 1388.369 7.844
177 080 177Hg 164820.78 1378.816 7.79
177 081 177Tl 164829.721 1368.582 7.732
178 070 178Yb 165720.466 1431.629 8.043
178 071 178Lu 165719.31 1431.492 8.042
178 072 178Hf 165716.698 1432.811 8.049
178 073 178Ta 165718.124 1430.091 8.034
178 074 178W 165717.704 1429.218 8.029
178 075 178Re 165721.956 1423.672 7.998
178 076 178Os 165723.552 1420.783 7.982
178 077 178Ir 165730.335 1412.707 7.937
178 078 178Pt 165734.078 1407.67 7.908
178 079 178Au 165743.235 1397.22 7.85
178 080 178Hg 165748.737 1390.425 7.811
178 082 178Pb 165767.6 1368.975 7.691
179 071 179Lu 166652.083 1438.284 8.035
179 072 179Hf 166650.165 1438.91 8.039
179 073 179Ta 166649.759 1438.022 8.034
179 074 179W 166650.31 1436.177 8.023
179 075 179Re 166652.517 1432.677 8.004
179 076 179Os 166655.572 1428.328 7.979
179 077 179Ir 166660.004 1422.603 7.948
179 078 179Pt 166665.306 1416.008 7.911
179 079 179Au 166672.107 1407.913 7.865
179 080 179Hg 166679.626 1399.101 7.816
179 081 179Tl 166687.737 1389.697 7.764
180 071 180Lu 167585.951 1443.981 8.022
180 072 180Hf 167582.342 1446.297 8.035
180 073 180Ta 167582.683 1444.663 8.026
180 074 180W 167581.464 1444.588 8.025
180 075 180Re 167584.757 1440.002 8
180 076 180Os 167585.727 1437.739 7.987
180 077 180Ir 167591.597 1430.575 7.948
180 078 180Pt 167594.628 1426.251 7.924
180 079 180Au 167602.957 1416.629 7.87
180 080 180Hg 167607.797 1410.495 7.836
180 082 180Pb 167625.081 1390.625 7.726
181 072 181Hf 168516.213 1451.992 8.022
181 073 181Ta 168514.672 1452.24 8.023
181 074 181W 168514.348 1451.27 8.018
181 075 181Re 168515.58 1448.744 8.004
181 076 181Os 168518.03 1445.001 7.983
181 077 181Ir 168521.597 1440.141 7.957
181 078 181Pt 168526.183 1434.261 7.924
181 079 181Au 168532.176 1426.975 7.884
181 080 181Hg 168538.875 1418.983 7.84
181 081 181Tl 168546.224 1410.34 7.792
181 082 181Pb 168555.374 1399.897 7.734
182 072 182Hf 169449.059 1458.711 8.015
182 073 182Ta 169448.174 1458.303 8.013
182 074 182W 169445.849 1459.335 8.018
182 075 182Re 169448.135 1455.755 7.999
182 076 182Os 169448.465 1454.131 7.99
182 077 182Ir 169453.511 1447.792 7.955
182 078 182Pt 169455.883 1444.127 7.935
182 079 182Au 169463.24 1435.476 7.887
182 080 182Hg 169467.454 1429.969 7.857
182 081 182Tl 169477.169 1418.961 7.796
182 082 182Pb 169483.182 1411.654 7.756
183 072 183Hf 170383.322 1464.013 8
183 073 183Ta 170380.805 1465.237 8.007
183 074 183W 170379.223 1465.525 8.008
183 075 183Re 170379.268 1464.187 8.001
183 076 183Os 170380.908 1461.254 7.985
183 077 183Ir 170383.86 1457.008 7.962
183 078 183Pt 170387.774 1451.801 7.933
183 079 183Au 170392.848 1445.434 7.899
183 080 183Hg 170398.724 1438.264 7.859
183 081 183Tl 170405.426 1430.269 7.816
183 082 183Pb 170413.933 1420.469 7.762
184 072 184Hf 171316.606 1470.294 7.991
184 073 184Ta 171314.754 1470.853 7.994
184 074 184W 171311.377 1472.937 8.005
184 075 184Re 171312.346 1470.674 7.993
184 076 184Os 171311.806 1469.921 7.989
184 077 184Ir 171315.94 1464.494 7.959
184 078 184Pt 171317.708 1461.432 7.943
184 079 184Au 171324.21 1453.637 7.9
184 080 184Hg 171327.669 1448.885 7.874
184 081 184Tl 171336.617 1438.643 7.819
184 082 184Pb 171341.951 1432.016 7.783
185 073 185Ta 172247.693 1477.479 7.986
185 074 185W 172245.189 1478.691 7.993
185 075 185Re 172244.245 1478.341 7.991
185 076 185Os 172244.747 1476.546 7.981
185 077 185Ir 172246.709 1473.29 7.964
185 078 185Pt 172249.854 1468.852 7.94
185 079 185Au 172254.156 1463.256 7.909
185 080 185Hg 172259.336 1456.783 7.875
185 081 185Tl 172265.241 1449.585 7.836
185 082 185Pb 172272.949 1440.583 7.787
186 073 186Ta 173181.973 1482.765 7.972
186 074 186W 173177.563 1485.882 7.989
186 075 186Re 173177.631 1484.52 7.981
186 076 186Os 173176.051 1484.807 7.983
186 077 186Ir 173179.367 1480.198 7.958
186 078 186Pt 173180.165 1478.107 7.947
186 079 186Au 173185.803 1471.176 7.91
186 080 186Hg 173188.468 1467.217 7.888
186 081 186Tl 173196.306 1458.086 7.839
186 082 186Pb 173201.304 1451.795 7.805
186 083 186Bi 173212.304 1439.501 7.739
187 074 187W 174111.662 1491.348 7.975
187 075 187Re 174109.84 1491.877 7.978
187 076 187Os 174109.326 1491.097 7.974
187 077 187Ir 174110.318 1488.813 7.962
187 078 187Pt 174112.81 1485.027 7.941
187 079 187Au 174116.007 1480.537 7.917
187 080 187Hg 174120.383 1474.868 7.887
187 081 187Tl 174125.546 1468.411 7.852
187 082 187Pb 174132.499 1460.165 7.808
187 083 187Bi 174140.595 1450.776 7.758
188 074 188W 175044.394 1498.182 7.969
188 075 188Re 175043.533 1497.749 7.967
188 076 188Os 175040.902 1499.087 7.974
188 077 188Ir 175043.2 1495.496 7.955
188 078 188Pt 175043.194 1494.209 7.948
188 079 188Au 175048.205 1487.904 7.914
188 080 188Hg 175049.793 1485.023 7.899
188 081 188Tl 175057.134 1476.389 7.853
188 082 188Pb 175061.158 1471.071 7.825
188 083 188Bi 175071.262 1459.674 7.764
188 084 188Po 175077.413 1452.23 7.725
189 074 189W 175979.075 1503.066 7.953
189 075 189Re 175976.066 1504.782 7.962
189 076 189Os 175974.547 1505.007 7.963
189 077 189Ir 175974.569 1503.692 7.956
189 078 189Pt 175976.028 1500.94 7.941
189 079 189Au 175978.418 1497.257 7.922
189 080 189Hg 175981.859 1492.522 7.897
189 081 189Tl 175986.376 1486.712 7.866
189 082 189Pb 175992.587 1479.208 7.826
189 083 189Bi 175999.896 1470.605 7.781
189 084 189Po 176008.03 1461.178 7.731
190 074 190W 176911.749 1509.958 7.947
190 075 190Re 176909.968 1510.445 7.95
190 076 190Os 176906.32 1512.799 7.962
190 077 190Ir 176907.764 1510.062 7.948
190 078 190Pt 176906.682 1509.851 7.947
190 079 190Au 176910.613 1504.627 7.919
190 080 190Hg 176911.613 1502.334 7.907
190 081 190Tl 176918.142 1494.511 7.866
190 082 190Pb 176921.544 1489.816 7.841
190 083 190Bi 176930.55 1479.517 7.787
190 084 190Po 176936.376 1472.397 7.749
191 075 191Re 177842.683 1517.296 7.944
191 076 191Os 177840.127 1518.558 7.951
191 077 191Ir 177839.303 1518.088 7.948
191 078 191Pt 177839.801 1516.298 7.939
191 079 191Au 177841.178 1513.627 7.925
191 080 191Hg 177843.884 1509.628 7.904
191 081 191Tl 177847.685 1504.534 7.877
191 082 191Pb 177853.205 1497.72 7.841
191 083 191Bi 177859.704 1489.928 7.801
191 084 191Po 177867.379 1480.96 7.754
192 076 192Os 178772.134 1526.116 7.949
192 077 192Ir 178772.67 1524.286 7.939
192 078 192Pt 178770.7 1524.964 7.943
192 079 192Au 178773.705 1520.666 7.92
192 080 192Hg 178773.96 1519.117 7.912
192 081 192Tl 178779.59 1512.194 7.876
192 082 192Pb 178782.393 1508.098 7.855
192 083 192Bi 178790.888 1498.309 7.804
192 084 192Po 178795.856 1492.048 7.771
193 076 193Os 179706.116 1531.699 7.936
193 077 193Ir 179704.464 1532.058 7.938
193 078 193Pt 179704.01 1531.219 7.934
193 079 193Au 179704.582 1529.354 7.924
193 080 193Hg 179706.414 1526.229 7.908
193 081 193Tl 179709.634 1521.715 7.885
193 082 193Pb 179714.253 1515.803 7.854
193 083 193Bi 179720.059 1508.704 7.817
193 084 193Po 179727.061 1500.408 7.774
193 085 193At 179734.76 1491.416 7.728
194 076 194Os 180638.57 1538.811 7.932
194 077 194Ir 180637.962 1538.125 7.928
194 078 194Pt 180635.218 1539.577 7.936
194 079 194Au 180637.208 1536.293 7.919
194 080 194Hg 180636.766 1535.442 7.915
194 081 194Tl 180641.618 1529.297 7.883
194 082 194Pb 180643.729 1525.892 7.865
194 083 194Bi 180651.436 1516.892 7.819
194 084 194Po 180655.91 1511.125 7.789
194 085 194At 180665.214 1500.527 7.735
195 076 195Os 181572.807 1544.139 7.919
195 077 195Ir 181570.296 1545.357 7.925
195 078 195Pt 181568.678 1545.682 7.927
195 079 195Au 181568.394 1544.673 7.921
195 080 195Hg 181569.453 1542.32 7.909
195 081 195Tl 181571.787 1538.693 7.891
195 082 195Pb 181575.717 1533.47 7.864
195 083 195Bi 181580.896 1526.997 7.831
195 084 195Po 181587.339 1519.261 7.791
195 085 195At 181594.422 1510.885 7.748
195 086 195Rn 181602.457 1501.556 7.7
196 076 196Os 182505.711 1550.801 7.912
196 077 196Ir 182504.04 1551.178 7.914
196 078 196Pt 182500.321 1553.604 7.927
196 079 196Au 182501.318 1551.314 7.915
196 080 196Hg 182500.12 1551.218 7.914
196 081 196Tl 182503.939 1546.106 7.888
196 082 196Pb 182505.564 1543.188 7.873
196 083 196Bi 182512.405 1535.053 7.832
196 084 196Po 182516.429 1529.736 7.805
196 085 196At 182525.472 1519.4 7.752
196 086 196Rn 182530.851 1512.727 7.718
197 077 197Ir 183436.706 1558.078 7.909
197 078 197Pt 183434.04 1559.45 7.916
197 079 197Au 183432.811 1559.386 7.916
197 080 197Hg 183432.9 1558.004 7.909
197 081 197Tl 183434.589 1555.021 7.894
197 082 197Pb 183437.67 1550.647 7.871
197 083 197Bi 183442.22 1544.804 7.842
197 084 197Po 183448.037 1537.693 7.806
197 085 197At 183454.546 1529.891 7.766
197 086 197Rn 183461.855 1521.289 7.722
198 078 198Pt 184366.049 1567.007 7.914
198 079 198Au 184365.864 1565.899 7.909
198 080 198Hg 184363.98 1566.489 7.912
198 081 198Tl 184366.934 1562.242 7.89
198 082 198Pb 184367.863 1560.019 7.879
198 083 198Bi 184374.033 1552.556 7.841
198 084 198Po 184377.418 1547.878 7.818
198 085 198At 184385.71 1538.292 7.769
198 086 198Rn 184390.638 1532.071 7.738
199 077 199Ir 185303.562 1570.352 7.891
199 078 199Pt 185300.059 1572.562 7.902
199 079 199Au 185297.845 1573.483 7.907
199 080 199Hg 185296.882 1573.153 7.905
199 081 199Tl 185297.859 1570.882 7.894
199 082 199Pb 185300.179 1567.269 7.876
199 083 199Bi 185304.098 1562.056 7.85
199 084 199Po 185309.17 1555.691 7.818
199 085 199At 185315.054 1548.514 7.781
199 086 199Rn 185321.843 1540.431 7.741
199 087 199Fr 185329.612 1531.369 7.695
200 078 200Pt 186232.342 1579.844 7.899
200 079 200Au 186231.164 1579.729 7.899
200 080 200Hg 186228.419 1581.181 7.906
200 081 200Tl 186230.364 1577.942 7.89
200 082 200Pb 186230.658 1576.355 7.882
200 083 200Bi 186236.02 1569.7 7.848
200 084 200Po 186238.925 1565.501 7.828
200 085 200At 186246.38 1556.753 7.784
200 086 200Rn 186250.851 1550.989 7.755
200 087 200Fr 186260.466 1540.08 7.7
201 078 201Pt 187166.699 1585.053 7.886
201 079 201Au 187163.527 1586.931 7.895
201 080 201Hg 187161.753 1587.411 7.898
201 081 201Tl 187161.724 1586.148 7.891
201 082 201Pb 187163.137 1583.441 7.878
201 083 201Bi 187166.468 1578.817 7.855
201 084 201Po 187170.848 1573.144 7.827
201 085 201At 187176.073 1566.625 7.794
201 086 201Rn 187182.281 1559.124 7.757
201 087 201Fr 187189.44 1550.672 7.715
202 079 202Au 188097.022 1593.002 7.886
202 080 202Hg 188093.565 1595.165 7.897
202 081 202Tl 188094.417 1593.02 7.886
202 082 202Pb 188093.955 1592.189 7.882
202 083 202Bi 188098.645 1586.205 7.853
202 084 202Po 188100.943 1582.614 7.835
202 085 202At 188107.765 1574.499 7.795
202 086 202Rn 188111.57 1569.4 7.769
202 087 202Fr 188120.474 1559.203 7.719
202 088 202Ra 188126.033 1552.351 7.685
203 079 203Au 189029.773 1599.816 7.881
203 080 203Hg 189027.136 1601.16 7.887
203 081 203Tl 189026.133 1600.87 7.886
203 082 203Pb 189026.596 1599.113 7.877
203 083 203Bi 189029.332 1595.084 7.858
203 084 203Po 189033.054 1590.068 7.833
203 085 203At 189037.687 1584.142 7.804
203 086 203Rn 189043.179 1577.357 7.77
203 087 203Fr 189049.689 1569.553 7.732
203 088 203Ra 189056.957 1560.992 7.69
204 080 204Hg 189959.209 1608.652 7.886
204 081 204Tl 189959.042 1607.526 7.88
204 082 204Pb 189957.767 1607.507 7.88
204 083 204Bi 189961.699 1602.282 7.854
204 084 204Po 189963.521 1599.167 7.839
204 085 204At 189969.469 1591.925 7.804
204 086 204Rn 189972.849 1587.252 7.781
204 087 204Fr 189980.93 1577.878 7.735
204 088 204Ra 189985.865 1571.649 7.704
205 080 205Hg 190893.106 1614.32 7.875
205 081 205Tl 190891.061 1615.072 7.878
205 082 205Pb 190890.601 1614.239 7.874
205 083 205Bi 190892.798 1610.748 7.857
205 084 205Po 190895.84 1606.413 7.836
205 085 205At 190899.866 1601.094 7.81
205 086 205Rn 190904.617 1595.049 7.781
205 087 205Fr 190910.506 1587.867 7.746
205 088 205Ra 190917.145 1579.935 7.707
206 080 206Hg 191825.941 1621.051 7.869
206 081 206Tl 191824.123 1621.575 7.872
206 082 206Pb 191822.079 1622.325 7.875
206 083 206Bi 191825.326 1617.786 7.853
206 084 206Po 191826.661 1615.157 7.841
206 085 206At 191831.912 1608.613 7.809
206 086 206Rn 191834.705 1604.527 7.789
206 087 206Fr 191842.067 1595.871 7.747
206 088 206Ra 191846.364 1590.281 7.72
206 089 206Ac 191855.798 1579.554 7.668
207 080 207Hg 192762.161 1624.396 7.847
207 081 207Tl 192756.836 1628.428 7.867
207 082 207Pb 192754.907 1629.063 7.87
207 083 207Bi 192756.793 1625.883 7.855
207 084 207Po 192759.191 1622.193 7.837
207 085 207At 192762.583 1617.507 7.814
207 086 207Rn 192766.684 1612.113 7.788
207 087 207Fr 192771.964 1605.54 7.756
207 088 207Ra 192777.833 1598.377 7.722
207 089 207Ac 192784.912 1590.005 7.681
208 081 208Tl 193692.614 1632.214 7.847
208 082 208Pb 193687.104 1636.431 7.867
208 083 208Bi 193689.472 1632.77 7.85
208 084 208Po 193690.361 1630.587 7.839
208 085 208At 193694.829 1624.827 7.812
208 086 208Rn 193697.161 1621.201 7.794
208 087 208Fr 193703.628 1613.441 7.757
208 088 208Ra 193707.501 1608.275 7.732
208 089 208Ac 193716.036 1598.446 7.685
209 081 209Tl 194627.22 1637.174 7.833
209 082 209Pb 194622.732 1640.368 7.849
209 083 209Bi 194621.577 1640.23 7.848
209 084 209Po 194622.959 1637.555 7.835
209 085 209At 194625.934 1633.287 7.815
209 086 209Rn 194629.374 1628.554 7.792
209 087 209Fr 194634.023 1622.611 7.764
209 088 209Ra 194639.131 1616.21 7.733
209 089 209Ac 194645.61 1608.438 7.696
209 090 209Th 194652.759 1599.995 7.655
210 081 210Tl 195563.106 1640.854 7.814
210 082 210Pb 195557.113 1645.554 7.836
210 083 210Bi 195556.538 1644.835 7.833
210 084 210Po 195554.866 1645.214 7.834
210 085 210At 195558.336 1640.45 7.812
210 086 210Rn 195560.199 1637.294 7.797
210 087 210Fr 195565.94 1630.26 7.763
210 088 210Ra 195569.236 1625.67 7.741
210 089 210Ac 195577.054 1616.559 7.698
210 090 210Th 195581.796 1610.524 7.669
211 082 211Pb 196492.843 1649.388 7.817
211 083 211Bi 196490.966 1649.972 7.82
211 084 211Po 196489.88 1649.764 7.819
211 085 211At 196490.155 1648.197 7.811
211 086 211Rn 196492.535 1644.523 7.794
211 087 211Fr 196496.622 1639.143 7.768
211 088 211Ra 196501.105 1633.367 7.741
211 089 211Ac 196506.958 1626.22 7.707
211 090 211Th 196513.157 1618.728 7.672
212 082 212Pb 197427.281 1654.515 7.804
212 083 212Bi 197426.201 1654.303 7.803
212 084 212Po 197423.437 1655.773 7.81
212 085 212At 197424.675 1653.242 7.798
212 086 212Rn 197424.125 1652.499 7.795
212 087 212Fr 197428.736 1646.594 7.767
212 088 212Ra 197431.572 1642.465 7.747
212 089 212Ac 197438.532 1634.212 7.709
212 090 212Th 197442.832 1628.618 7.682
212 091 212Pa 197451.84 1618.317 7.634
213 082 213Pb 198363.139 1658.223 7.785
213 083 213Bi 198360.581 1659.488 7.791
213 084 213Po 198358.648 1660.128 7.794
213 085 213At 198358.211 1659.271 7.79
213 086 213Rn 198358.581 1657.608 7.782
213 087 213Fr 198360.218 1654.678 7.768
213 088 213Ra 198363.615 1649.987 7.746
213 089 213Ac 198368.896 1643.413 7.716
213 090 213Th 198374.355 1636.661 7.684
213 091 213Pa 198381.384 1628.338 7.645
214 082 214Pb 199297.636 1663.292 7.772
214 083 214Bi 199296.106 1663.528 7.773
214 084 214Po 199292.325 1666.016 7.785
214 085 214At 199292.904 1664.144 7.776
214 086 214Rn 199291.453 1664.301 7.777
214 087 214Fr 199294.304 1660.157 7.758
214 088 214Ra 199294.852 1658.316 7.749
214 089 214Ac 199300.669 1651.205 7.716
214 090 214Th 199304.441 1646.14 7.692
214 091 214Pa 199312.708 1636.58 7.648
215 083 215Bi 200230.449 1668.751 7.762
215 084 215Po 200227.749 1670.157 7.768
215 085 215At 200226.523 1670.09 7.768
215 086 215Rn 200226.098 1669.222 7.764
215 087 215Fr 200227.074 1666.952 7.753
215 088 215Ra 200228.779 1663.954 7.739
215 089 215Ac 200231.746 1659.694 7.72
215 090 215Th 200236.15 1653.996 7.693
215 091 215Pa 200242.582 1646.271 7.657
216 083 216Bi 201166.168 1672.597 7.744
216 084 216Po 201161.567 1675.905 7.759
216 085 216At 201161.529 1674.649 7.753
216 086 216Rn 201159.017 1675.868 7.759
216 087 216Fr 201161.229 1672.362 7.742
216 088 216Ra 201161.03 1671.268 7.737
216 089 216Ac 201165.351 1665.654 7.711
216 090 216Th 201167.021 1662.69 7.698
216 091 216Pa 201174.006 1654.412 7.659
217 084 217Po 202097.178 1679.859 7.741
217 085 217At 202095.162 1680.581 7.745
217 086 217Rn 202093.914 1680.536 7.744
217 087 217Fr 202094.059 1679.098 7.738
217 088 217Ra 202095.12 1676.743 7.727
217 089 217Ac 202097.429 1673.141 7.71
217 090 217Th 202100.427 1668.85 7.691
217 091 217Pa 202104.77 1663.213 7.665
217 092 217U 202109.889 1656.801 7.635
218 084 218Po 203031.129 1685.473 7.732
218 085 218At 203030.359 1684.95 7.729
218 086 218Rn 203026.966 1687.049 7.739
218 087 218Fr 203028.297 1684.425 7.727
218 088 218Ra 203027.378 1684.051 7.725
218 089 218Ac 203031.056 1679.079 7.702
218 090 218Th 203032.079 1676.763 7.692
218 091 218Pa 203037.863 1669.686 7.659
218 092 218U 203040.603 1665.652 7.641
219 085 219At 203964.151 1690.723 7.72
219 086 219Rn 203962.074 1691.507 7.724
219 087 219Fr 203961.35 1690.937 7.721
219 088 219Ra 203961.615 1689.379 7.714
219 089 219Ac 203963.28 1686.421 7.701
219 090 219Th 203965.669 1682.738 7.684
219 091 219Pa 203969.208 1677.906 7.662
219 092 219U 203973.387 1672.434 7.637
220 085 220At 204899.598 1694.841 7.704
220 086 220Rn 204895.35 1697.796 7.717
220 087 220Fr 204895.709 1696.144 7.71
220 088 220Ra 204893.988 1696.571 7.712
220 089 220Ac 204896.956 1692.31 7.692
220 090 220Th 204897.362 1690.611 7.685
220 091 220Pa 204902.562 1684.117 7.655
221 086 221Rn 205830.703 1702.008 7.701
221 087 221Fr 205828.998 1702.42 7.703
221 088 221Ra 205828.173 1701.952 7.701
221 089 221Ac 205829.218 1699.613 7.691
221 090 221Th 205831.125 1696.413 7.676
221 091 221Pa 205834.056 1692.189 7.657
222 086 222Rn 206764.099 1708.178 7.694
222 087 222Fr 206763.563 1707.42 7.691
222 088 222Ra 206761.024 1708.666 7.697
222 089 222Ac 206762.813 1705.584 7.683
222 090 222Th 206762.884 1704.219 7.677
223 087 223Fr 207697.092 1713.457 7.684
223 088 223Ra 207695.432 1713.824 7.685
223 089 223Ac 207695.512 1712.45 7.679
223 090 223Th 207696.561 1710.108 7.669
223 091 223Pa 207698.984 1706.391 7.652
223 092 223U 207701.993 1702.089 7.633
224 087 224Fr 208631.862 1718.252 7.671
224 088 224Ra 208628.518 1720.302 7.68
224 089 224Ac 208629.415 1718.112 7.67
224 090 224Th 208628.665 1717.569 7.668
224 091 224Pa 208632.028 1712.913 7.647
224 092 224U 208633.361 1710.286 7.635
225 087 225Fr 209565.506 1724.173 7.663
225 088 225Ra 209563.179 1725.207 7.668
225 089 225Ac 209562.312 1724.781 7.666
225 090 225Th 209562.473 1723.326 7.659
225 091 225Pa 209563.992 1720.514 7.647
225 092 225U 209566.518 1716.695 7.63
225 093 225Np 209570.22 1711.699 7.608
226 087 226Fr 210500.56 1728.685 7.649
226 088 226Ra 210496.348 1731.603 7.662
226 089 226Ac 210496.478 1730.18 7.656
226 090 226Th 210494.854 1730.511 7.657
226 091 226Pa 210497.179 1726.892 7.641
226 092 226U 210497.964 1724.814 7.632
227 087 227Fr 211434.334 1734.476 7.641
227 088 227Ra 211431.352 1736.165 7.648
227 089 227Ac 211429.513 1736.71 7.651
227 090 227Th 211428.957 1735.973 7.647
227 091 227Pa 211429.472 1734.165 7.639
227 092 227U 211431.151 1731.192 7.626
227 093 227Np 211434.178 1726.872 7.607
228 088 228Ra 212364.609 1742.473 7.642
228 089 228Ac 212364.052 1741.737 7.639
228 090 228Th 212361.417 1743.078 7.645
228 091 228Pa 212363.058 1740.144 7.632
228 092 228U 212362.848 1739.061 7.627
228 094 228Pu 212368.691 1730.631 7.59
229 087 229Fr 213303.492 1744.449 7.618
229 088 229Ra 213299.724 1746.923 7.628
229 089 229Ac 213297.4 1747.954 7.633
229 090 229Th 213295.726 1748.335 7.635
229 091 229Pa 213295.526 1747.241 7.63
229 092 229U 213296.328 1745.146 7.621
229 093 229Np 213298.386 1741.795 7.606
229 094 229Pu 213301.495 1737.392 7.587
230 088 230Ra 214233.173 1753.04 7.622
230 089 230Ac 214231.954 1752.965 7.622
230 090 230Th 214228.497 1755.129 7.631
230 091 230Pa 214229.297 1753.036 7.622
230 092 230U 214228.226 1752.813 7.621
230 093 230Np 214231.34 1748.406 7.602
230 094 230Pu 214232.523 1745.93 7.591
231 089 231Ac 215165.558 1758.927 7.614
231 090 231Th 215162.944 1760.247 7.62
231 091 231Pa 215162.042 1759.856 7.618
231 092 231U 215161.912 1758.693 7.613
231 093 231Np 215163.224 1756.087 7.602
231 094 231Pu 215165.368 1752.65 7.587
232 089 232Ac 216100.282 1763.768 7.602
232 090 232Th 216096.069 1766.687 7.615
232 091 232Pa 216096.058 1765.405 7.61
232 092 232U 216094.21 1765.96 7.612
232 094 232Pu 216096.943 1760.64 7.589
233 090 233Th 217030.848 1771.474 7.603
233 091 233Pa 217029.094 1771.934 7.605
233 092 233U 217028.013 1771.722 7.604
233 093 233Np 217028.532 1769.91 7.596
233 094 233Pu 217030.121 1767.028 7.584
233 096 233Cm 217036.339 1758.223 7.546
234 090 234Th 217964.223 1777.664 7.597
234 091 234Pa 217963.439 1777.155 7.595
234 092 234U 217960.734 1778.567 7.601
234 093 234Np 217962.032 1775.975 7.59
234 094 234Pu 217961.915 1774.799 7.585
234 096 234Cm 217967.267 1766.86 7.551
235 090 235Th 218899.363 1782.09 7.583
235 091 235Pa 218896.922 1783.237 7.588
235 092 235U 218895.002 1783.864 7.591
235 093 235Np 218894.615 1782.958 7.587
235 094 235Pu 218895.243 1781.036 7.579
236 091 236Pa 219831.436 1788.289 7.577
236 092 236U 219828.021 1790.41 7.586
236 093 236Np 219828.444 1788.694 7.579
236 094 236Pu 219827.456 1788.389 7.578
237 091 237Pa 220765.22 1794.07 7.57
237 092 237U 220762.461 1795.536 7.576
237 093 237Np 220761.431 1795.272 7.575
237 094 237Pu 220761.14 1794.27 7.571
238 091 238Pa 221699.844 1799.011 7.559
238 092 238U 221695.872 1801.69 7.57
238 093 238Np 221695.508 1800.76 7.566
238 094 238Pu 221693.706 1801.269 7.568
238 095 238Am 221695.45 1798.232 7.556
238 096 238Cm 221695.919 1796.469 7.548
239 092 239U 222630.631 1806.496 7.559
239 093 239Np 222628.859 1806.975 7.561
239 094 239Pu 222627.625 1806.916 7.56
239 095 239Am 222627.916 1805.331 7.554
240 092 240U 223564.266 1812.426 7.552
240 093 240Np 223563.355 1812.044 7.55
240 094 240Pu 223560.656 1813.45 7.556
240 095 240Am 223561.53 1811.282 7.547
240 096 240Cm 223561.233 1810.287 7.543
241 093 241Np 224496.794 1818.17 7.544
241 094 241Pu 224494.98 1818.691 7.546
241 095 241Am 224494.448 1817.93 7.543
241 096 241Cm 224494.705 1816.38 7.537
242 093 242Np 225431.448 1823.082 7.533
242 094 242Pu 225428.236 1825.001 7.541
242 095 242Am 225428.476 1823.467 7.535
242 096 242Cm 225427.3 1823.35 7.535
242 098 242Cf 225430.813 1817.25 7.509
243 094 243Pu 226362.767 1830.035 7.531
243 095 243Am 226361.676 1829.832 7.53
243 096 243Cm 226361.173 1829.042 7.527
243 097 243Bk 226362.169 1826.753 7.518
244 094 244Pu 227296.311 1836.056 7.525
244 095 244Am 227295.875 1835.199 7.521
244 096 244Cm 227293.937 1835.844 7.524
244 097 244Bk 227295.688 1832.799 7.511
244 098 244Cf 227295.94 1831.254 7.505
245 094 245Pu 228231.105 1840.827 7.514
245 095 245Am 228229.388 1841.251 7.515
245 096 245Cm 228227.982 1841.364 7.516
245 097 245Bk 228228.282 1839.771 7.509
245 098 245Cf 228229.342 1837.417 7.5
246 094 246Pu 229164.888 1846.61 7.507
246 095 246Am 229163.977 1846.227 7.505
246 096 246Cm 229161.09 1847.822 7.511
246 097 246Bk 229161.93 1845.688 7.503
246 098 246Cf 229161.541 1844.784 7.499
246 100 246Fm 229166.567 1837.171 7.468
247 096 247Cm 230095.499 1852.977 7.502
247 097 247Bk 230094.945 1852.238 7.499
247 098 247Cf 230095.08 1850.81 7.493
248 096 248Cm 231028.851 1859.191 7.497
248 098 248Cf 231027.677 1857.778 7.491
248 100 248Fm 231031.321 1851.547 7.466
249 096 249Cm 231963.703 1863.904 7.486
249 097 249Bk 231962.292 1864.022 7.486
249 098 249Cf 231961.657 1863.364 7.483
250 096 250Cm 232897.436 1869.736 7.479
250 097 250Bk 232896.887 1868.992 7.476
250 098 250Cf 232894.597 1869.989 7.48
250 100 250Fm 232896.477 1865.522 7.462
251 096 251Cm 233832.589 1874.149 7.467
251 097 251Bk 233830.658 1874.786 7.469
251 098 251Cf 233829.054 1875.097 7.471
251 099 251Es 233828.92 1873.938 7.466
251 100 251Fm 233829.884 1871.68 7.457
252 098 252Cf 234762.447 1881.269 7.465
252 099 252Es 234763.192 1879.231 7.457
252 100 252Fm 234762.208 1878.922 7.456
252 102 252No 234767.25 1871.293 7.426
253 098 253Cf 235697.208 1886.074 7.455
253 099 253Es 235696.41 1885.579 7.453
253 100 253Fm 235696.235 1884.46 7.448
254 098 254Cf 236630.742 1892.105 7.449
254 099 254Es 236630.882 1890.672 7.444
254 100 254Fm 236629.284 1890.977 7.445
254 102 254No 236632.081 1885.593 7.424
255 099 255Es 237564.473 1896.646 7.438
255 100 255Fm 237563.672 1896.154 7.436
255 101 255Md 237564.205 1894.327 7.429
255 102 255No 237565.705 1891.534 7.418
256 100 256Fm 238496.853 1902.538 7.432
256 101 256Md 238498.476 1899.622 7.42
256 102 256No 238498.169 1898.635 7.417
256 104 256Rf 238503.559 1890.659 7.385
257 100 257Fm 239431.45 1907.506 7.422
257 101 257Md 239431.347 1906.317 7.418
257 102 257No 239432.08 1904.289 7.41
258 101 258Md 240365.532 1911.696 7.41
260 106 260Sg 242240.857 1909.035 7.342
261 104 261Rf 243168.109 1923.936 7.371
264 108 264Hs 245978.832 1926.736 7.298
265 106 265Sg 246904.568 1943.152 7.333
###Markdown
Filter out the comment lines (those beginning with `'#'`) using `startswith('#')`, which returns `True` if the string starts with that character.
###Code
# Skip comment lines (those starting with '#') and print only the data rows
for row in open('index.txt').readlines():
    if not row.startswith('#'):
        print(row.rstrip())
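
# A follow-up sketch (assumption: the file keeps the whitespace-separated
# layout shown above): parse each data row into mass number A, proton
# number Z, isotope symbol, nuclear mass (MeV), total binding energy (MeV)
# and binding energy per nucleon (MeV).
nuclei = []
for row in open('index.txt').readlines():
    if row.startswith('#') or not row.strip():
        continue
    a, z, name, mass, be, be_per_a = row.split()
    nuclei.append((int(a), int(z), name, float(mass), float(be), float(be_per_a)))
# e.g. max(nuclei, key=lambda n: n[5]) picks out the most tightly bound nucleus.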
###Output
001 001 1H 938.272 0 0
002 001 2H 1875.613 2.225 1.112
003 001 3H 2808.921 8.482 2.827
003 002 3He 2808.391 7.718 2.573
004 001 4H 3751.365 5.603 1.401
004 002 4He 3727.379 28.296 7.074
004 003 4Li 3749.763 4.618 1.155
005 001 5H 4689.849 6.684 1.337
005 002 5He 4667.838 27.402 5.48
005 003 5Li 4667.617 26.33 5.266
006 001 6H 5630.313 5.786 0.964
006 002 6He 5605.537 29.268 4.878
006 003 6Li 5601.518 31.994 5.332
006 004 6Be 5605.295 26.924 4.487
007 002 7He 6545.537 28.834 4.119
007 003 7Li 6533.833 39.244 5.606
007 004 7Be 6534.184 37.6 5.371
007 005 7B 6545.773 24.718 3.531
008 002 8He 7482.528 31.408 3.926
008 003 8Li 7471.366 41.277 5.16
008 004 8Be 7454.85 56.5 7.062
008 005 8B 7472.319 37.737 4.717
008 006 8C 7483.98 24.783 3.098
009 002 9He 8423.363 30.138 3.349
009 003 9Li 8406.867 45.341 5.038
009 004 9Be 8392.75 58.165 6.463
009 005 9B 8393.307 56.314 6.257
009 006 9C 8409.291 39.037 4.337
010 002 10He 9362.728 30.339 3.034
010 003 10Li 9346.458 45.315 4.532
010 004 10Be 9325.503 64.977 6.498
010 005 10B 9324.436 64.751 6.475
010 006 10C 9327.573 60.32 6.032
010 007 10N 9350.163 36.437 3.644
011 003 11Li 10285.698 45.64 4.149
011 004 11Be 10264.564 65.481 5.953
011 005 11B 10252.547 76.205 6.928
011 006 11C 10254.018 73.44 6.676
011 007 11N 10267.157 59.008 5.364
012 004 12Be 11200.961 68.649 5.721
012 005 12B 11188.742 79.575 6.631
012 006 12C 11174.862 92.162 7.68
012 007 12N 11191.689 74.041 6.17
012 008 12O 11205.888 58.549 4.879
013 004 13Be 12140.628 68.548 5.273
013 005 13B 12123.429 84.453 6.496
013 006 13C 12109.481 97.108 7.47
013 007 13N 12111.191 94.105 7.239
013 008 13O 12128.446 75.556 5.812
014 004 14Be 13078.822 69.919 4.994
014 005 14B 13062.025 85.423 6.102
014 006 14C 13040.87 105.285 7.52
014 007 14N 13040.203 104.659 7.476
014 008 14O 13044.836 98.732 7.052
015 005 15B 13998.827 88.186 5.879
015 006 15C 13979.217 106.503 7.1
015 007 15N 13968.935 115.492 7.699
015 008 15O 13971.178 111.955 7.464
015 009 15F 13984.591 97.249 6.483
016 005 16B 14938.429 88.149 5.509
016 006 16C 14914.532 110.753 6.922
016 007 16N 14906.011 117.981 7.374
016 008 16O 14895.079 127.619 7.976
016 009 16F 14909.985 111.42 6.964
016 010 16Ne 14922.79 97.322 6.083
017 005 17B 15876.613 89.531 5.267
017 006 17C 15853.371 111.479 6.558
017 007 17N 15839.692 123.865 7.286
017 008 17O 15830.501 131.763 7.751
017 009 17F 15832.751 128.22 7.542
017 010 17Ne 15846.749 112.928 6.643
018 006 18C 16788.756 115.66 6.426
018 007 18N 16776.429 126.693 7.039
018 008 18O 16762.023 139.807 7.767
018 009 18F 16763.167 137.369 7.632
018 010 18Ne 16767.099 132.143 7.341
018 011 18Na 16785.461 112.488 6.249
019 006 19C 17727.74 116.241 6.118
019 007 19N 17710.671 132.017 6.948
019 008 19O 17697.633 143.761 7.566
019 009 19F 17692.3 147.801 7.779
019 010 19Ne 17695.028 143.78 7.567
019 011 19Na 17705.692 131.822 6.938
019 012 19Mg 17725.294 110.927 5.838
020 006 20C 18664.374 119.172 5.959
020 007 20N 18648.073 134.18 6.709
020 008 20O 18629.59 151.37 7.569
020 009 20F 18625.264 154.403 7.72
020 010 20Ne 18617.728 160.645 8.032
020 011 20Na 18631.107 145.973 7.299
020 012 20Mg 18641.318 134.468 6.723
021 007 21N 19583.047 138.771 6.608
021 008 21O 19565.349 155.176 7.389
021 009 21F 19556.728 162.504 7.738
021 010 21Ne 19550.533 167.406 7.972
021 011 21Na 19553.569 163.076 7.766
021 012 21Mg 19566.153 149.199 7.105
022 007 22N 20521.331 140.053 6.366
022 008 22O 20498.06 162.03 7.365
022 009 22F 20491.062 167.735 7.624
022 010 22Ne 20479.734 177.77 8.08
022 011 22Na 20482.065 174.146 7.916
022 012 22Mg 20486.339 168.578 7.663
023 008 23O 21434.884 164.772 7.164
023 009 23F 21423.093 175.269 7.62
023 010 23Ne 21414.098 182.971 7.955
023 011 23Na 21409.211 186.564 8.111
023 012 23Mg 21412.757 181.726 7.901
023 013 23Al 21424.489 168.7 7.335
024 008 24O 22370.838 168.383 7.016
024 009 24F 22358.817 179.111 7.463
024 010 24Ne 22344.795 191.84 7.993
024 011 24Na 22341.817 193.524 8.064
024 012 24Mg 22335.791 198.257 8.261
024 013 24Al 22349.156 183.598 7.65
024 014 24Si 22359.457 172.004 7.167
025 009 25F 23294.021 183.472 7.339
025 010 25Ne 23280.132 196.068 7.843
025 011 25Na 23272.372 202.535 8.101
025 012 25Mg 23268.026 205.588 8.224
025 013 25Al 23271.791 200.529 8.021
025 014 25Si 23284.02 187.006 7.48
026 009 26F 24232.515 184.543 7.098
026 010 26Ne 24214.164 201.601 7.754
026 011 26Na 24206.361 208.111 8.004
026 012 26Mg 24196.498 216.681 8.334
026 013 26Al 24199.991 211.894 8.15
026 014 26Si 24204.545 206.047 7.925
027 009 27F 25170.669 185.955 6.887
027 010 27Ne 25152.298 203.032 7.52
027 011 27Na 25139.2 214.837 7.957
027 012 27Mg 25129.62 223.124 8.264
027 013 27Al 25126.499 224.952 8.332
027 014 27Si 25130.8 219.357 8.124
027 015 27P 25141.956 206.908 7.663
028 010 28Ne 26087.962 206.934 7.39
028 011 28Na 26075.222 218.38 7.799
028 012 28Mg 26060.682 231.627 8.272
028 013 28Al 26058.339 232.677 8.31
028 014 28Si 26053.186 236.537 8.448
028 015 28P 26067.008 221.421 7.908
028 016 28S 26077.726 209.41 7.479
029 010 29Ne 27026.276 208.185 7.179
029 011 29Na 27010.37 222.798 7.683
029 012 29Mg 26996.575 235.299 8.114
029 013 29Al 26988.468 242.113 8.349
029 014 29Si 26984.277 245.011 8.449
029 015 29P 26988.709 239.286 8.251
029 016 29S 27001.99 224.711 7.749
030 010 30Ne 27962.81 211.216 7.041
030 011 30Na 27947.56 225.173 7.506
030 012 30Mg 27929.777 241.663 8.055
030 013 30Al 27922.305 247.841 8.261
030 014 30Si 27913.233 255.62 8.521
030 015 30P 27916.955 250.605 8.354
030 016 30S 27922.581 243.685 8.123
031 011 31Na 28883.343 228.955 7.386
031 012 31Mg 28866.965 244.04 7.872
031 013 31Al 28854.717 254.994 8.226
031 014 31Si 28846.211 262.207 8.458
031 015 31P 28844.209 262.917 8.481
031 016 31S 28849.094 256.738 8.282
031 017 31Cl 28860.557 243.981 7.87
032 011 32Na 29821.247 230.616 7.207
032 012 32Mg 29800.721 249.849 7.808
032 013 32Al 29790.105 259.172 8.099
032 014 32Si 29776.574 271.41 8.482
032 015 32P 29775.838 270.852 8.464
032 016 32S 29773.617 271.781 8.493
032 017 32Cl 29785.791 258.312 8.072
032 018 32Ar 29796.41 246.4 7.7
033 011 33Na 30758.571 232.858 7.056
033 012 33Mg 30738.064 252.071 7.639
033 013 33Al 30724.129 264.713 8.022
033 014 33Si 30711.655 275.894 8.36
033 015 33P 30705.3 280.956 8.514
033 016 33S 30704.54 280.422 8.498
033 017 33Cl 30709.612 274.057 8.305
033 018 33Ar 30720.72 261.656 7.929
034 012 34Mg 31673.474 256.227 7.536
034 013 34Al 31661.223 267.184 7.858
034 014 34Si 31643.685 283.429 8.336
034 015 34P 31638.573 287.248 8.448
034 016 34S 31632.689 291.839 8.584
034 017 34Cl 31637.67 285.565 8.399
034 018 34Ar 31643.221 278.72 8.198
035 013 35Al 32595.517 272.456 7.784
035 014 35Si 32580.776 285.903 8.169
035 015 35P 32569.768 295.619 8.446
035 016 35S 32565.268 298.825 8.538
035 017 35Cl 32564.59 298.21 8.52
035 018 35Ar 32570.045 291.461 8.327
035 019 35K 32581.412 278.801 7.966
036 013 36Al 33532.921 274.617 7.628
036 014 36Si 33514.15 292.095 8.114
036 015 36P 33505.868 299.083 8.308
036 016 36S 33494.944 308.714 8.575
036 017 36Cl 33495.576 306.79 8.522
036 018 36Ar 33494.355 306.717 8.52
036 019 36K 33506.649 293.129 8.142
036 020 36Ca 33517.124 281.361 7.816
037 013 37Al 34468.585 278.518 7.528
037 014 37Si 34451.544 294.266 7.953
037 015 37P 34438.623 305.894 8.267
037 016 37S 34430.206 313.018 8.46
037 017 37Cl 34424.83 317.101 8.57
037 018 37Ar 34425.133 315.504 8.527
037 019 37K 34430.769 308.575 8.34
037 020 37Ca 34441.897 296.154 8.004
038 013 38Al 35406.18 280.49 7.381
038 014 38Si 35385.549 299.827 7.89
038 015 38P 35374.348 309.735 8.151
038 016 38S 35361.736 321.054 8.449
038 017 38Cl 35358.287 323.208 8.505
038 018 38Ar 35352.86 327.343 8.614
038 019 38K 35358.263 320.646 8.438
038 020 38Ca 35364.494 313.122 8.24
039 013 39Al 36343.024 283.211 7.262
039 014 39Si 36323.043 301.899 7.741
039 015 39P 36307.732 315.916 8.1
039 016 39S 36296.931 325.424 8.344
039 017 39Cl 36289.779 331.282 8.494
039 018 39Ar 36285.827 333.941 8.563
039 019 39K 36284.751 333.724 8.557
039 020 39Ca 36290.772 326.409 8.369
039 021 39Sc 36303.368 312.52 8.013
040 014 40Si 37258.077 306.43 7.661
040 015 40P 37243.986 319.228 7.981
040 016 40S 37228.715 333.205 8.33
040 017 40Cl 37223.514 337.113 8.428
040 018 40Ar 37215.523 343.811 8.595
040 019 40K 37216.516 341.524 8.538
040 020 40Ca 37214.694 342.052 8.551
040 021 40Sc 37228.506 326.947 8.174
040 022 40Ti 37239.669 314.491 7.862
041 014 41Si 38197.661 306.411 7.473
041 015 41P 38178.31 324.469 7.914
041 016 41S 38164.059 337.427 8.23
041 017 41Cl 38155.258 344.934 8.413
041 018 41Ar 38148.989 349.91 8.534
041 019 41K 38145.986 351.619 8.576
041 020 41Ca 38145.897 350.415 8.547
041 021 41Sc 38151.881 343.137 8.369
042 015 42P 39116.024 326.32 7.77
042 016 42S 39096.893 344.158 8.194
042 017 42Cl 39089.152 350.606 8.348
042 018 42Ar 39079.128 359.336 8.556
042 019 42K 39078.018 359.153 8.551
042 020 42Ca 39073.981 361.896 8.617
042 021 42Sc 39079.896 354.688 8.445
042 022 42Ti 39086.385 346.906 8.26
043 015 43P 40052.348 329.562 7.664
043 016 43S 40034.097 346.519 8.059
043 017 43Cl 40021.386 357.937 8.324
043 018 43Ar 40013.035 364.995 8.488
043 019 43K 40007.941 368.795 8.577
043 020 43Ca 40005.614 369.829 8.601
043 021 43Sc 40007.324 366.826 8.531
043 022 43Ti 40013.68 359.176 8.353
044 016 44S 40968.441 351.741 7.994
044 017 44Cl 40956.82 362.068 8.229
044 018 44Ar 40943.865 373.729 8.494
044 019 44K 40940.218 376.084 8.547
044 020 44Ca 40934.048 380.96 8.658
044 021 44Sc 40937.189 376.525 8.557
044 022 44Ti 40936.946 375.475 8.534
044 023 44V 40949.864 361.264 8.211
045 016 45S 41905.805 353.942 7.865
045 017 45Cl 41890.184 368.27 8.184
045 018 45Ar 41878.262 378.898 8.42
045 019 45K 41870.914 384.953 8.555
045 020 45Ca 41866.199 388.375 8.631
045 021 45Sc 41865.432 387.848 8.619
045 022 45Ti 41866.983 385.004 8.556
045 023 45V 41873.598 377.096 8.38
045 024 45Cr 41885.997 363.403 8.076
046 017 46Cl 42825.328 372.691 8.102
046 018 46Ar 42809.807 386.919 8.411
046 019 46K 42803.598 391.834 8.518
046 020 46Ca 42795.37 398.769 8.669
046 021 46Sc 42796.237 396.609 8.622
046 022 46Ti 42793.359 398.193 8.656
046 023 46V 42799.899 390.36 8.486
046 024 46Cr 42806.987 381.979 8.304
047 018 47Ar 43745.111 391.18 8.323
047 019 47K 43734.814 400.184 8.515
047 020 47Ca 43727.659 406.045 8.639
047 021 47Sc 43725.156 407.255 8.665
047 022 47Ti 43724.044 407.073 8.661
047 023 47V 43726.464 403.36 8.582
047 024 47Cr 43733.397 395.134 8.407
048 019 48K 44669.88 404.683 8.431
048 020 48Ca 44657.279 415.991 8.666
048 021 48Sc 44656.486 415.49 8.656
048 022 48Ti 44651.983 418.7 8.723
048 023 48V 44655.484 413.905 8.623
048 024 48Cr 44656.63 411.466 8.572
048 025 48Mn 44669.618 397.185 8.275
049 019 49K 45603.178 410.95 8.387
049 020 49Ca 45591.698 421.137 8.595
049 021 49Sc 45585.924 425.618 8.686
049 022 49Ti 45583.406 426.842 8.711
049 023 49V 45583.497 425.458 8.683
049 024 49Cr 45585.612 422.049 8.613
049 025 49Mn 45592.816 413.552 8.44
050 019 50K 46539.642 414.052 8.281
050 020 50Ca 46524.91 427.49 8.55
050 021 50Sc 46519.433 431.674 8.633
050 022 50Ti 46512.032 437.781 8.756
050 023 50V 46513.726 434.794 8.696
050 024 50Cr 46512.177 435.049 8.701
050 025 50Mn 46519.299 426.634 8.533
050 026 50Fe 46526.935 417.705 8.354
051 020 51Ca 47460.115 431.851 8.468
051 021 51Sc 47452.246 438.426 8.597
051 022 51Ti 47445.225 444.154 8.709
051 023 51V 47442.24 445.845 8.742
051 024 51Cr 47442.482 444.31 8.712
051 025 51Mn 47445.178 440.32 8.634
051 026 51Fe 47452.687 431.519 8.461
052 020 52Ca 48394.959 436.572 8.396
052 021 52Sc 48386.598 443.639 8.532
052 022 52Ti 48376.982 451.962 8.692
052 023 52V 48374.494 453.156 8.715
052 024 52Cr 48370.008 456.349 8.776
052 025 52Mn 48374.208 450.856 8.67
052 026 52Fe 48376.071 447.7 8.61
053 022 53Ti 49311.111 457.398 8.63
053 023 53V 49305.581 461.635 8.71
053 024 53Cr 49301.634 464.289 8.76
053 025 53Mn 49301.72 462.909 8.734
053 026 53Fe 49304.951 458.384 8.649
053 027 53Co 49312.741 449.302 8.477
054 021 54Sc 50255.726 453.642 8.401
054 022 54Ti 50243.845 464.23 8.597
054 023 54V 50239.033 467.748 8.662
054 024 54Cr 50231.48 474.008 8.778
054 025 54Mn 50232.346 471.848 8.738
054 026 54Fe 50231.138 471.763 8.736
054 027 54Co 50238.87 462.738 8.569
054 028 54Ni 50247.159 453.156 8.392
055 021 55Sc 51191.86 457.073 8.31
055 022 55Ti 51179.259 468.381 8.516
055 023 55V 51171.268 475.079 8.638
055 024 55Cr 51164.799 480.254 8.732
055 025 55Mn 51161.685 482.075 8.765
055 026 55Fe 51161.405 481.061 8.747
055 027 55Co 51164.346 476.827 8.67
055 028 55Ni 51172.527 467.353 8.497
056 022 56Ti 52113.483 473.722 8.459
056 023 56V 52105.832 480.08 8.573
056 024 56Cr 52096.12 488.499 8.723
056 025 56Mn 52093.98 489.345 8.738
056 026 56Fe 52089.773 492.258 8.79
056 027 56Co 52093.828 486.91 8.695
056 028 56Ni 52095.453 483.992 8.643
057 022 57Ti 53050.377 476.394 8.358
057 023 57V 53039.216 486.261 8.531
057 024 57Cr 53030.371 493.813 8.663
057 025 57Mn 53024.897 497.994 8.737
057 026 57Fe 53021.693 499.905 8.77
057 027 57Co 53022.018 498.286 8.742
057 028 57Ni 53024.769 494.242 8.671
057 029 57Cu 53033.03 484.687 8.503
058 023 58V 53974.69 490.353 8.454
058 024 58Cr 53962.559 501.19 8.641
058 025 58Mn 53957.968 504.488 8.698
058 026 58Fe 53951.213 509.949 8.792
058 027 58Co 53953.01 506.859 8.739
058 028 58Ni 53952.117 506.459 8.732
058 029 58Cu 53960.172 497.111 8.571
058 030 58Zn 53969.023 486.966 8.396
059 023 59V 54909.324 495.284 8.395
059 024 59Cr 54897.993 505.322 8.565
059 025 59Mn 54889.892 512.129 8.68
059 026 59Fe 54884.198 516.53 8.755
059 027 59Co 54882.121 517.313 8.768
059 028 59Ni 54882.683 515.458 8.737
059 029 59Cu 54886.971 509.877 8.642
059 030 59Zn 54895.557 499.998 8.475
060 023 60V 55845.308 498.865 8.314
060 024 60Cr 55830.877 512.003 8.533
060 025 60Mn 55823.686 517.901 8.632
060 026 60Fe 55814.943 525.35 8.756
060 027 60Co 55814.195 524.805 8.747
060 028 60Ni 55810.861 526.846 8.781
060 029 60Cu 55816.478 519.935 8.666
060 030 60Zn 55820.123 514.997 8.583
061 024 61Cr 56766.691 515.754 8.455
061 025 61Mn 56756.8 524.352 8.596
061 026 61Fe 56748.928 530.931 8.704
061 027 61Co 56744.439 534.126 8.756
061 028 61Ni 56742.606 534.666 8.765
061 029 61Cu 56744.332 531.646 8.716
061 030 61Zn 56749.46 525.225 8.61
061 031 61Ga 56758.204 515.188 8.446
062 024 62Cr 57699.955 522.056 8.42
062 025 62Mn 57691.814 528.903 8.531
062 026 62Fe 57680.442 538.982 8.693
062 027 62Co 57677.4 540.731 8.721
062 028 62Ni 57671.575 545.262 8.795
062 029 62Cu 57675.012 540.532 8.718
062 030 62Zn 57676.128 538.123 8.679
062 031 62Ga 57684.788 528.169 8.519
063 025 63Mn 58624.998 535.285 8.497
063 026 63Fe 58615.287 543.702 8.63
063 027 63Co 58608.486 549.21 8.718
063 028 63Ni 58604.302 552.1 8.763
063 029 63Cu 58603.724 551.385 8.752
063 030 63Zn 58606.58 547.236 8.686
063 031 63Ga 58611.735 540.788 8.584
064 025 64Mn 59560.222 539.626 8.432
064 026 64Fe 59547.561 550.994 8.609
064 027 64Co 59542.027 555.234 8.676
064 028 64Ni 59534.21 561.758 8.777
064 029 64Cu 59535.374 559.301 8.739
064 030 64Zn 59534.283 559.098 8.736
064 031 64Ga 59540.942 551.146 8.612
064 032 64Ge 59544.915 545.88 8.529
065 025 65Mn 60493.666 545.747 8.396
065 026 65Fe 60482.945 555.175 8.541
065 027 65Co 60474.144 562.683 8.657
065 028 65Ni 60467.677 567.856 8.736
065 029 65Cu 60465.028 569.212 8.757
065 030 65Zn 60465.869 567.077 8.724
065 031 65Ga 60468.613 563.04 8.662
065 032 65Ge 60474.349 556.011 8.554
066 026 66Fe 61415.749 561.936 8.514
066 027 66Co 61408.698 567.694 8.601
066 028 66Ni 61398.291 576.808 8.74
066 029 66Cu 61397.528 576.278 8.731
066 030 66Zn 61394.375 578.136 8.76
066 031 66Ga 61399.04 572.179 8.669
066 032 66Ge 61400.633 569.292 8.626
066 033 66As 61410.242 558.39 8.46
067 026 67Fe 62351.123 566.128 8.45
067 027 67Co 62341.242 574.715 8.578
067 028 67Ni 62332.048 582.616 8.696
067 029 67Cu 62327.961 585.409 8.737
067 030 67Zn 62326.889 585.189 8.734
067 031 67Ga 62327.378 583.406 8.708
067 032 67Ge 62331.089 578.402 8.633
067 033 67As 62336.586 571.611 8.532
068 026 68Fe 63285.177 571.639 8.406
068 027 68Co 63276.446 579.077 8.516
068 028 68Ni 63263.821 590.408 8.682
068 029 68Cu 63261.207 591.729 8.702
068 030 68Zn 63256.256 595.387 8.756
068 031 68Ga 63258.666 591.683 8.701
068 032 68Ge 63258.261 590.795 8.688
068 033 68As 63265.83 581.933 8.558
068 034 68Se 63270.009 576.46 8.477
069 027 69Co 64209.29 585.798 8.49
069 028 69Ni 64198.8 594.995 8.623
069 029 69Cu 64192.532 599.969 8.695
069 030 69Zn 64189.339 601.869 8.723
069 031 69Ga 64187.918 601.996 8.725
069 032 69Ge 64189.634 598.987 8.681
069 033 69As 64193.134 594.194 8.612
069 034 69Se 64199.413 586.622 8.502
070 027 70Co 65145.144 589.509 8.422
070 028 70Ni 65131.123 602.237 8.603
070 029 70Cu 65126.786 605.281 8.647
070 030 70Zn 65119.686 611.087 8.73
070 031 70Ga 65119.83 609.65 8.709
070 032 70Ge 65117.666 610.521 8.722
070 033 70As 65123.378 603.515 8.622
070 034 70Se 65125.157 600.443 8.578
071 027 71Co 66078.408 595.811 8.392
071 028 71Ni 66066.567 606.358 8.54
071 029 71Cu 66058.545 613.087 8.635
071 030 71Zn 66053.418 616.921 8.689
071 031 71Ga 66050.094 618.951 8.718
071 032 71Ge 66049.815 617.937 8.703
071 033 71As 66051.318 615.141 8.664
071 034 71Se 66055.581 609.584 8.586
071 035 71Br 66061.13 602.742 8.489
071 036 71Kr 66070.759 591.82 8.335
072 028 72Ni 66999.321 613.169 8.516
072 029 72Cu 66992.967 618.23 8.587
072 030 72Zn 66984.108 625.796 8.692
072 031 72Ga 66983.139 625.472 8.687
072 032 72Ge 66978.631 628.686 8.732
072 033 72As 66982.476 623.548 8.66
072 034 72Se 66982.301 622.429 8.645
072 035 72Br 66990.664 612.773 8.511
072 036 72Kr 66995.232 606.912 8.429
073 029 73Cu 67925.257 625.505 8.569
073 030 73Zn 67918.323 631.146 8.646
073 031 73Ga 67913.523 634.653 8.694
073 032 73Ge 67911.413 635.469 8.705
073 033 73As 67911.243 634.346 8.69
073 034 73Se 67913.471 630.825 8.641
073 035 73Br 67917.548 625.454 8.568
073 036 73Kr 67924.115 617.594 8.46
074 029 74Cu 68859.732 630.596 8.522
074 030 74Zn 68849.517 639.517 8.642
074 031 74Ga 68846.666 641.075 8.663
074 032 74Ge 68840.783 645.665 8.725
074 033 74As 68842.834 642.32 8.68
074 034 74Se 68840.97 642.891 8.688
074 035 74Br 68847.366 635.202 8.584
074 036 74Kr 68849.83 631.445 8.533
074 037 74Rb 68859.733 620.248 8.382
075 029 75Cu 69793.112 636.781 8.49
075 030 75Zn 69784.251 644.349 8.591
075 031 75Ga 69777.745 649.561 8.661
075 032 75Ge 69773.843 652.171 8.696
075 033 75As 69772.156 652.564 8.701
075 034 75Se 69772.508 650.918 8.679
075 035 75Br 69775.027 647.106 8.628
075 036 75Kr 69779.331 641.509 8.553
075 037 75Rb 69785.922 633.624 8.448
075 038 75Sr 69796.013 622.24 8.297
076 029 76Cu 70727.75 641.708 8.444
076 030 76Zn 70716.075 652.09 8.58
076 031 76Ga 70711.407 655.464 8.625
076 032 76Ge 70703.98 661.598 8.705
076 033 76As 70704.393 659.893 8.683
076 034 76Se 70700.919 662.073 8.711
076 035 76Br 70705.371 656.327 8.636
076 036 76Kr 70706.135 654.27 8.609
076 037 76Rb 70714.158 644.954 8.486
076 038 76Sr 70719.887 637.931 8.394
077 030 77Zn 71650.989 656.741 8.529
077 031 77Ga 71643.206 663.231 8.613
077 032 77Ge 71637.473 667.671 8.671
077 033 77As 71634.259 669.591 8.696
077 034 77Se 71633.065 669.492 8.695
077 035 77Br 71633.919 667.345 8.667
077 036 77Kr 71636.474 663.497 8.617
077 037 77Rb 71641.307 657.37 8.537
077 038 77Sr 71647.817 649.567 8.436
078 030 78Zn 72583.863 663.433 8.506
078 031 78Ga 72576.985 669.017 8.577
078 032 78Ge 72568.319 676.39 8.672
078 033 78As 72566.853 676.563 8.674
078 034 78Se 72562.133 679.99 8.718
078 035 78Br 72565.196 675.633 8.662
078 036 78Kr 72563.957 675.578 8.661
078 037 78Rb 72570.69 667.552 8.558
078 038 78Sr 72573.941 663.008 8.5
079 031 79Ga 73509.676 675.892 8.556
079 032 79Ge 73502.185 682.089 8.634
079 033 79As 73497.527 685.454 8.677
079 034 79Se 73494.735 686.952 8.696
079 035 79Br 73494.074 686.321 8.688
079 036 79Kr 73495.188 683.913 8.657
079 037 79Rb 73498.317 679.491 8.601
079 038 79Sr 73503.132 673.382 8.524
079 039 79Y 73509.738 665.483 8.424
080 030 80Zn 74452.351 674.075 8.426
080 031 80Ga 74444.54 680.593 8.507
080 032 80Ge 74433.654 690.186 8.627
080 033 80As 74430.499 692.047 8.651
080 034 80Se 74424.387 696.866 8.711
080 035 80Br 74425.747 694.213 8.678
080 036 80Kr 74423.233 695.434 8.693
080 037 80Rb 74428.441 688.932 8.612
080 038 80Sr 74429.795 686.285 8.579
080 039 80Y 74438.372 676.414 8.455
080 040 80Zr 74443.561 669.932 8.374
081 031 81Ga 75377.194 687.504 8.488
081 032 81Ge 75368.363 695.042 8.581
081 033 81As 75361.619 700.493 8.648
081 034 81Se 75357.252 703.567 8.686
081 035 81Br 75355.155 704.37 8.696
081 036 81Kr 75354.925 703.307 8.683
081 037 81Rb 75356.653 700.285 8.645
081 038 81Sr 75360.069 695.576 8.587
081 039 81Y 75365.066 689.286 8.51
081 040 81Zr 75372.085 680.973 8.407
082 032 82Ge 76300.537 702.433 8.566
082 033 82As 76295.326 706.351 8.614
082 034 82Se 76287.541 712.843 8.693
082 035 82Br 76287.128 711.963 8.682
082 036 82Kr 76283.524 714.274 8.711
082 037 82Rb 76287.414 709.09 8.647
082 038 82Sr 76287.083 708.127 8.636
082 039 82Y 76294.39 699.527 8.531
083 033 83As 77227.26 713.982 8.602
083 034 83Se 77221.288 718.661 8.659
083 035 83Br 77217.109 721.547 8.693
083 036 83Kr 77215.625 721.737 8.696
083 037 83Rb 77216.021 720.048 8.675
083 038 83Sr 77217.79 716.986 8.638
083 039 83Y 77221.744 711.738 8.575
083 040 83Zr 77227.103 705.086 8.495
083 041 83Nb 77234.092 696.804 8.395
084 034 84Se 78152.171 727.343 8.659
084 035 84Br 78149.813 728.408 8.672
084 036 84Kr 78144.67 732.258 8.717
084 037 84Rb 78146.84 728.794 8.676
084 038 84Sr 78145.435 728.906 8.677
084 039 84Y 78151.408 721.64 8.591
085 034 85Se 79087.189 731.891 8.61
085 035 85Br 79080.496 737.29 8.674
085 036 85Kr 79077.115 739.378 8.699
085 037 85Rb 79075.917 739.283 8.697
085 038 85Sr 79076.471 737.436 8.676
085 039 85Y 79079.22 733.393 8.628
085 040 85Zr 79083.401 727.919 8.564
085 041 85Nb 79088.89 721.136 8.484
086 034 86Se 80020.57 738.075 8.582
086 035 86Br 80014.96 742.392 8.632
086 036 86Kr 80006.824 749.235 8.712
086 037 86Rb 80006.831 747.934 8.697
086 038 86Sr 80004.544 748.928 8.708
086 039 86Y 80009.272 742.906 8.638
086 040 86Zr 80010.245 740.64 8.612
086 041 86Nb 80017.704 731.888 8.51
086 042 86Mo 80022.463 725.835 8.44
087 034 87Se 80956.025 742.185 8.531
087 035 87Br 80948.237 748.68 8.606
087 036 87Kr 80940.874 754.75 8.675
087 037 87Rb 80936.474 757.856 8.711
087 038 87Sr 80935.681 757.356 8.705
087 039 87Y 80937.031 754.712 8.675
087 040 87Zr 80940.191 750.259 8.624
087 041 87Nb 80944.848 744.309 8.555
087 042 87Mo 80950.827 737.037 8.472
088 034 88Se 81890.219 747.557 8.495
088 035 88Br 81882.858 753.624 8.564
088 036 88Kr 81873.385 761.804 8.657
088 037 88Rb 81869.957 763.939 8.681
088 038 88Sr 81864.133 768.469 8.733
088 039 88Y 81867.245 764.064 8.683
088 040 88Zr 81867.41 762.606 8.666
088 041 88Nb 81874.452 754.27 8.571
088 042 88Mo 81877.311 750.118 8.524
089 035 89Br 82816.512 759.536 8.534
089 036 89Kr 82807.841 766.913 8.617
089 037 89Rb 82802.347 771.114 8.664
089 038 89Sr 82797.34 774.828 8.706
089 039 89Y 82795.336 775.538 8.714
089 040 89Zr 82797.658 771.923 8.673
089 041 89Nb 82801.366 766.922 8.617
089 042 89Mo 82806.501 760.493 8.545
090 035 90Br 83751.956 763.657 8.485
090 036 90Kr 83741.095 773.225 8.591
090 037 90Rb 83736.192 776.834 8.631
090 038 90Sr 83729.102 782.631 8.696
090 039 90Y 83728.045 782.395 8.693
090 040 90Zr 83725.254 783.893 8.71
090 041 90Nb 83730.854 776.999 8.633
090 042 90Mo 83732.832 773.728 8.597
090 043 90Tc 83741.278 763.988 8.489
091 035 91Br 84686.56 768.618 8.446
091 036 91Kr 84676.249 777.636 8.545
091 037 91Rb 84669.303 783.289 8.608
091 038 91Sr 84662.892 788.406 8.664
091 039 91Y 84659.681 790.324 8.685
091 040 91Zr 84657.625 791.087 8.693
091 041 91Nb 84658.372 789.046 8.671
091 042 91Mo 84662.289 783.836 8.614
091 043 91Tc 84668.002 776.83 8.537
092 035 92Br 85622.984 771.76 8.389
092 036 92Kr 85610.268 783.182 8.513
092 037 92Rb 85603.77 788.387 8.569
092 038 92Sr 85595.163 795.701 8.649
092 039 92Y 85592.707 796.863 8.662
092 040 92Zr 85588.555 799.722 8.693
092 041 92Nb 85590.05 796.934 8.662
092 042 92Mo 85589.182 796.508 8.658
092 043 92Tc 85596.541 787.856 8.564
093 036 93Kr 86546.527 786.488 8.457
093 037 93Rb 86537.418 794.304 8.541
093 038 93Sr 86529.44 800.989 8.613
093 039 93Y 86524.791 804.344 8.649
093 040 93Zr 86521.386 806.456 8.672
093 041 93Nb 86520.784 805.765 8.664
093 042 93Mo 86520.678 804.577 8.651
093 043 93Tc 86523.367 800.595 8.609
093 044 93Ru 86529.189 793.48 8.532
094 037 94Rb 87472.977 798.31 8.493
094 038 94Sr 87462.179 807.815 8.594
094 039 94Y 87458.16 810.541 8.623
094 040 94Zr 87452.73 814.677 8.667
094 041 94Nb 87453.122 812.993 8.649
094 042 94Mo 87450.566 814.256 8.662
094 043 94Tc 87454.31 809.217 8.609
094 044 94Ru 87455.385 806.849 8.584
095 037 95Rb 88407.17 803.683 8.46
095 038 95Sr 88397.396 812.163 8.549
095 039 95Y 88390.795 817.471 8.605
095 040 95Zr 88385.833 821.14 8.644
095 041 95Nb 88384.198 821.481 8.647
095 042 95Mo 88382.762 821.625 8.649
095 043 95Tc 88383.941 819.152 8.623
095 044 95Ru 88385.997 815.802 8.587
095 045 95Rh 88390.596 809.91 8.525
096 037 96Rb 89343.293 807.125 8.408
096 038 96Sr 89331.068 818.057 8.521
096 039 96Y 89325.149 822.682 8.57
096 040 96Zr 89317.542 828.996 8.635
096 041 96Nb 89316.87 828.375 8.629
096 042 96Mo 89313.173 830.779 8.654
096 043 96Tc 89315.635 827.023 8.615
096 044 96Ru 89314.869 826.496 8.609
096 045 96Rh 89320.751 819.32 8.535
096 046 96Pd 89323.689 815.089 8.491
097 037 97Rb 90277.652 812.331 8.375
097 038 97Sr 90266.713 821.977 8.474
097 039 97Y 90258.732 828.665 8.543
097 040 97Zr 90251.533 834.571 8.604
097 041 97Nb 90248.363 836.448 8.623
097 042 97Mo 90245.917 837.6 8.635
097 043 97Tc 90245.726 836.497 8.624
097 044 97Ru 90246.323 834.607 8.604
097 045 97Rh 90249.334 830.303 8.56
097 046 97Pd 90253.613 824.73 8.502
097 047 97Ag 90260.082 816.968 8.422
098 037 98Rb 91213.286 816.263 8.329
098 038 98Sr 91200.349 827.906 8.448
098 039 98Y 91194.017 832.945 8.499
098 040 98Zr 91184.686 840.983 8.581
098 041 98Nb 91181.933 842.442 8.596
098 042 98Mo 91176.84 846.243 8.635
098 043 98Tc 91178.012 843.777 8.61
098 044 98Ru 91175.705 844.79 8.62
098 045 98Rh 91180.243 838.959 8.561
098 046 98Pd 91181.607 836.302 8.534
098 047 98Ag 91189.336 827.279 8.442
098 048 98Cd 91194.255 821.067 8.378
099 037 99Rb 92148.12 820.994 8.293
099 038 99Sr 92136.299 831.522 8.399
099 039 99Y 92127.777 838.75 8.472
099 040 99Zr 92119.699 845.535 8.541
099 041 99Nb 92114.629 849.312 8.579
099 042 99Mo 92110.48 852.168 8.608
099 043 99Tc 92108.611 852.743 8.614
099 044 99Ru 92107.806 852.255 8.609
099 045 99Rh 92109.338 849.429 8.58
099 046 99Pd 92112.213 845.261 8.538
099 047 99Ag 92117.13 839.051 8.475
100 038 100Sr 93069.763 837.623 8.376
100 039 100Y 93062.182 843.911 8.439
100 040 100Zr 93052.361 852.438 8.524
100 041 100Nb 93048.511 854.995 8.55
100 042 100Mo 93041.755 860.458 8.605
100 043 100Tc 93041.412 859.508 8.595
100 044 100Ru 93037.698 861.928 8.619
100 045 100Rh 93040.822 857.511 8.575
100 046 100Pd 93040.669 856.37 8.564
100 047 100Ag 93047.234 848.512 8.485
100 048 100Cd 93050.623 843.83 8.438
100 049 100In 93060.192 832.967 8.33
100 050 100Sn 93067.071 824.795 8.248
101 037 101Rb 94018.388 829.857 8.216
101 038 101Sr 94006.067 840.884 8.326
101 039 101Y 93996.056 849.602 8.412
101 040 101Zr 93986.995 857.37 8.489
101 041 101Nb 93981.002 862.069 8.535
101 042 101Mo 93975.922 865.856 8.573
101 043 101Tc 93972.586 867.899 8.593
101 044 101Ru 93970.462 868.73 8.601
101 045 101Rh 93970.492 867.406 8.588
101 046 101Pd 93971.961 864.644 8.561
101 047 101Ag 93975.658 859.653 8.511
101 048 101Cd 93980.617 853.401 8.45
102 038 102Sr 94939.891 846.626 8.3
102 039 102Y 94930.57 854.653 8.379
102 040 102Zr 94920.209 863.721 8.468
102 041 102Nb 94915.088 867.549 8.505
102 042 102Mo 94907.37 873.973 8.568
102 043 102Tc 94905.85 874.2 8.571
102 044 102Ru 94900.807 877.95 8.607
102 045 102Rh 94902.619 874.844 8.577
102 046 102Pd 94900.958 875.212 8.581
102 047 102Ag 94906.107 868.77 8.517
102 048 102Cd 94908.183 865.4 8.484
102 049 102In 94916.64 855.65 8.389
102 050 102Sn 94921.909 849.088 8.324
103 040 103Zr 95855.073 868.422 8.431
103 041 103Nb 95847.612 874.59 8.491
103 042 103Mo 95841.571 879.338 8.537
103 043 103Tc 95837.313 882.302 8.566
103 044 103Ru 95834.141 884.182 8.584
103 045 103Rh 95832.866 884.163 8.584
103 046 103Pd 95832.898 882.837 8.571
103 047 103Ag 95835.075 879.367 8.538
103 048 103Cd 95838.706 874.443 8.49
103 049 103In 95844.245 867.61 8.423
104 041 104Nb 96782.206 879.561 8.457
104 042 104Mo 96773.585 886.889 8.528
104 043 104Tc 96770.914 888.267 8.541
104 044 104Ru 96764.804 893.083 8.587
104 045 104Rh 96765.433 891.162 8.569
104 046 104Pd 96762.481 892.82 8.585
104 047 104Ag 96766.249 887.758 8.536
104 048 104Cd 96766.874 885.84 8.518
104 049 104In 96774.228 877.193 8.435
104 050 104Sn 96778.237 871.89 8.384
105 041 105Nb 97715.07 886.263 8.441
105 042 105Mo 97708.069 891.97 8.495
105 043 105Tc 97702.608 896.138 8.535
105 044 105Ru 97698.459 898.994 8.562
105 045 105Rh 97696.03 900.129 8.573
105 046 105Pd 97694.952 899.914 8.571
105 047 105Ag 97695.786 897.787 8.55
105 048 105Cd 97698.013 894.266 8.517
105 049 105In 97702.351 888.635 8.463
105 050 105Sn 97708.061 881.632 8.396
105 051 105Sb 97716.99 871.409 8.299
106 042 106Mo 98640.648 898.957 8.481
106 043 106Tc 98636.617 901.694 8.507
106 044 106Ru 98629.559 907.459 8.561
106 045 106Rh 98629.009 906.716 8.554
106 046 106Pd 98624.957 909.474 8.58
106 047 106Ag 98627.411 905.727 8.545
106 048 106Cd 98626.705 905.14 8.539
106 049 106In 98632.72 897.831 8.47
106 050 106Sn 98635.385 893.873 8.433
106 052 106Te 98653.583 873.088 8.237
107 042 107Mo 99575.457 903.713 8.446
107 043 107Tc 99568.786 909.091 8.496
107 044 107Ru 99563.455 913.128 8.534
107 045 107Rh 99560.001 915.289 8.554
107 046 107Pd 99557.985 916.012 8.561
107 047 107Ag 99557.44 915.263 8.554
107 048 107Cd 99558.346 913.064 8.533
107 049 107In 99561.26 908.857 8.494
107 050 107Sn 99565.729 903.094 8.44
108 043 108Tc 100503.43 914.012 8.463
108 044 108Ru 100495.199 920.95 8.527
108 045 108Rh 100493.338 921.517 8.533
108 046 108Pd 100488.323 925.239 8.567
108 047 108Ag 100489.734 922.535 8.542
108 048 108Cd 100487.573 923.402 8.55
108 049 108In 100492.198 917.484 8.495
108 050 108Sn 100493.762 914.627 8.469
108 052 108Te 100509.061 896.741 8.303
109 043 109Tc 101436.334 920.673 8.447
109 044 109Ru 101429.513 926.201 8.497
109 045 109Rh 101424.841 929.58 8.528
109 046 109Pd 101421.734 931.393 8.545
109 047 109Ag 101420.108 931.727 8.548
109 048 109Cd 101419.811 930.73 8.539
109 049 109In 101421.319 927.928 8.513
109 050 109Sn 101424.658 923.296 8.471
109 051 109Sb 101430.527 916.134 8.405
109 052 109Te 101438.665 906.702 8.318
109 053 109I 101448.154 895.92 8.219
110 043 110Tc 102371.408 925.165 8.411
110 044 110Ru 102361.877 933.402 8.485
110 045 110Rh 102358.566 935.42 8.504
110 046 110Pd 102352.486 940.207 8.547
110 047 110Ag 102352.864 938.536 8.532
110 048 110Cd 102349.46 940.646 8.551
110 049 110In 102352.827 935.986 8.509
110 050 110Sn 102352.947 934.572 8.496
110 052 110Te 102365.489 919.444 8.359
110 054 110Xe 102384.847 897.499 8.159
111 043 111Tc 103304.642 931.496 8.392
111 044 111Ru 103296.681 938.164 8.452
111 045 111Rh 103290.483 943.068 8.496
111 046 111Pd 103286.325 945.933 8.522
111 047 111Ag 103283.597 947.368 8.535
111 048 111Cd 103282.05 947.622 8.537
111 049 111In 103282.4 945.978 8.522
111 050 111Sn 103284.34 942.745 8.493
111 051 111Sb 103288.886 936.905 8.441
111 052 111Te 103295.784 928.715 8.367
112 043 112Tc 104239.357 936.347 8.36
112 044 112Ru 104229.366 945.045 8.438
112 045 112Rh 104224.595 948.523 8.469
112 046 112Pd 104217.488 954.336 8.521
112 047 112Ag 104216.689 953.842 8.516
112 048 112Cd 104212.221 957.016 8.545
112 049 112In 104214.295 953.649 8.515
112 050 112Sn 104213.119 953.532 8.514
112 051 112Sb 104219.668 945.69 8.444
112 052 112Te 104223.458 940.606 8.398
112 054 112Xe 104239.766 921.712 8.23
113 044 113Ru 105164.14 949.836 8.406
113 045 113Rh 105157.149 955.534 8.456
113 046 113Pd 105151.628 959.761 8.493
113 047 113Ag 105147.774 962.322 8.516
113 048 113Cd 105145.246 963.556 8.527
113 049 113In 105144.415 963.094 8.523
113 050 113Sn 105144.941 961.275 8.507
113 051 113Sb 105148.343 956.58 8.465
113 052 113Te 105153.905 949.724 8.405
113 053 113I 105160.611 941.725 8.334
113 054 113Xe 105169.14 931.903 8.247
113 055 113Cs 105179.019 920.731 8.148
114 045 114Rh 106091.693 960.555 8.426
114 046 114Pd 106083.315 967.64 8.488
114 047 114Ag 106081.352 968.309 8.494
114 048 114Cd 106075.769 972.599 8.532
114 049 114In 106076.707 970.368 8.512
114 050 114Sn 106074.207 971.574 8.523
114 051 114Sb 106079.742 964.746 8.463
114 052 114Te 106081.857 961.338 8.433
114 054 114Xe 106095.638 944.97 8.289
114 056 114Ba 106115.752 922.269 8.09
115 044 115Ru 107032.898 960.209 8.35
115 045 115Rh 107024.607 967.206 8.41
115 046 115Pd 107017.906 972.614 8.458
115 047 115Ag 107012.805 976.422 8.491
115 048 115Cd 107009.193 978.74 8.511
115 049 115In 107007.236 979.404 8.517
115 050 115Sn 107006.226 979.121 8.514
115 051 115Sb 107008.748 975.305 8.481
115 052 115Te 107013.177 969.583 8.431
115 053 115I 107018.391 963.076 8.375
115 054 115Xe 107025.561 954.612 8.301
116 045 116Rh 107959.571 971.808 8.378
116 046 116Pd 107949.84 980.245 8.45
116 047 116Ag 107946.719 982.073 8.466
116 048 116Cd 107940.059 987.44 8.512
116 049 116In 107940.017 986.188 8.502
116 050 116Sn 107936.227 988.684 8.523
116 051 116Sb 107940.424 983.195 8.476
116 052 116Te 107941.465 980.86 8.456
116 053 116I 107948.733 972.299 8.382
116 054 116Xe 107952.665 967.074 8.337
117 046 117Pd 108884.764 984.887 8.418
117 047 117Ag 108878.513 989.844 8.46
117 048 117Cd 108873.847 993.217 8.489
117 049 117In 108870.816 994.955 8.504
117 050 117Sn 108868.85 995.627 8.51
117 051 117Sb 108870.094 993.09 8.488
117 052 117Te 108873.131 988.76 8.451
117 053 117I 108877.282 983.315 8.404
117 054 117Xe 108883.021 976.283 8.344
117 055 117Cs 108890.255 967.756 8.271
118 046 118Pd 109817.318 991.898 8.406
118 047 118Ag 109812.707 995.216 8.434
118 048 118Cd 109805.057 1001.572 8.488
118 049 118In 109804.025 1001.311 8.486
118 050 118Sn 109799.087 1004.955 8.517
118 051 118Sb 109802.234 1000.515 8.479
118 052 118Te 109802.001 999.455 8.47
118 053 118I 109808.24 991.923 8.406
118 054 118Xe 109810.621 988.248 8.375
118 055 118Cs 109819.78 977.796 8.286
119 047 119Ag 110745.211 1002.277 8.422
119 048 119Cd 110739.35 1006.845 8.461
119 049 119In 110735.045 1009.856 8.486
119 050 119Sn 110732.169 1011.438 8.499
119 051 119Sb 110732.25 1010.065 8.488
119 052 119Te 110734.032 1006.989 8.462
119 053 119I 110736.939 1002.789 8.427
119 054 119Xe 110741.4 997.035 8.378
119 055 119Cs 110747.378 989.763 8.317
119 056 119Ba 110754.582 981.266 8.246
120 046 120Pd 111685.626 1002.721 8.356
120 047 120Ag 111679.615 1007.438 8.395
120 048 120Cd 111670.78 1014.98 8.458
120 049 120In 111668.503 1015.964 8.466
120 050 120Sn 111662.627 1020.546 8.505
120 051 120Sb 111664.797 1017.083 8.476
120 052 120Te 111663.305 1017.282 8.477
120 053 120I 111668.409 1010.884 8.424
120 054 120Xe 111669.516 1008.484 8.404
120 055 120Cs 111677.288 999.419 8.328
120 056 120Ba 111681.776 993.637 8.28
121 047 121Ag 112612.099 1014.52 8.384
121 048 121Cd 112605.188 1020.137 8.431
121 049 121In 112599.896 1024.136 8.464
121 050 121Sn 112596.022 1026.717 8.485
121 051 121Sb 112595.12 1026.325 8.482
121 052 121Te 112595.653 1024.499 8.467
121 053 121I 112597.406 1021.453 8.442
121 054 121Xe 112600.709 1016.856 8.404
121 055 121Cs 112605.571 1010.701 8.353
121 056 121Ba 112611.42 1003.559 8.294
122 048 122Cd 113537.012 1027.879 8.425
122 049 122In 113533.651 1029.946 8.442
122 050 122Sn 113526.774 1035.53 8.488
122 051 122Sb 113527.878 1033.132 8.468
122 052 122Te 113525.384 1034.333 8.478
122 053 122I 113529.107 1029.317 8.437
122 054 122Xe 113529.321 1027.81 8.425
122 055 122Cs 113536.025 1019.812 8.359
122 056 122Ba 113539.045 1015.499 8.324
123 048 123Cd 114471.926 1032.53 8.395
123 049 123In 114465.299 1037.864 8.438
123 050 123Sn 114460.393 1041.476 8.467
123 051 123Sb 114458.479 1042.097 8.472
123 052 123Te 114458.02 1041.263 8.466
123 053 123I 114458.738 1039.251 8.449
123 054 123Xe 114460.921 1035.775 8.421
123 055 123Cs 114464.615 1030.788 8.38
123 056 123Ba 114469.493 1024.616 8.33
124 048 124Cd 115404.02 1040.001 8.387
124 049 124In 115399.339 1043.389 8.414
124 050 124Sn 115391.471 1049.963 8.467
124 051 124Sb 115391.576 1048.565 8.456
124 052 124Te 115388.161 1050.686 8.473
124 053 124I 115390.81 1046.745 8.441
124 054 124Xe 115390.004 1046.257 8.438
124 055 124Cs 115395.422 1039.546 8.383
124 056 124Ba 115397.552 1036.123 8.356
124 057 124La 115405.871 1026.51 8.278
125 048 125Cd 116338.864 1044.723 8.358
125 049 125In 116331.233 1051.06 8.408
125 050 125Sn 116325.303 1055.696 8.446
125 051 125Sb 116322.435 1057.271 8.458
125 052 125Te 116321.157 1057.256 8.458
125 053 125I 116320.832 1056.287 8.45
125 054 125Xe 116321.966 1053.861 8.431
125 055 125Cs 116324.559 1049.974 8.4
125 056 125Ba 116328.468 1044.772 8.358
125 057 125La 116333.866 1038.081 8.305
126 048 126Cd 117271.388 1051.764 8.347
126 049 126In 117265.397 1056.462 8.385
126 050 126Sn 117256.676 1063.889 8.444
126 051 126Sb 117255.785 1063.487 8.44
126 052 126Te 117251.609 1066.369 8.463
126 053 126I 117253.252 1063.433 8.44
126 054 126Xe 117251.483 1063.909 8.444
126 055 126Cs 117255.796 1058.303 8.399
126 056 126Ba 117256.96 1055.845 8.38
126 057 126La 117264.149 1047.363 8.312
126 058 126Ce 117267.787 1042.432 8.273
127 048 127Cd 118206.692 1056.025 8.315
127 049 127In 118197.711 1063.713 8.376
127 050 127Sn 118190.691 1069.44 8.421
127 051 127Sb 118186.979 1071.858 8.44
127 052 127Te 118184.887 1072.657 8.446
127 053 127I 118183.674 1072.577 8.445
127 054 127Xe 118183.825 1071.132 8.434
127 055 127Cs 118185.395 1068.269 8.412
127 056 127Ba 118188.308 1064.063 8.378
127 057 127La 118192.717 1058.36 8.334
127 058 127Ce 118198.122 1051.662 8.281
128 048 128Cd 119139.416 1062.867 8.304
128 049 128In 119131.835 1069.154 8.353
128 050 128Sn 119122.349 1077.347 8.417
128 051 128Sb 119120.564 1077.839 8.421
128 052 128Te 119115.67 1081.439 8.449
128 053 128I 119116.413 1079.403 8.433
128 054 128Xe 119113.78 1080.743 8.443
128 055 128Cs 119117.198 1076.031 8.406
128 056 128Ba 119117.216 1074.72 8.396
128 057 128La 119123.477 1067.166 8.337
128 058 128Ce 119126.062 1063.287 8.307
128 059 128Pr 119134.754 1053.302 8.229
129 049 129In 120064.749 1075.806 8.34
129 050 129Sn 120056.584 1082.677 8.393
129 051 129Sb 120052.039 1085.929 8.418
129 052 129Te 120049.153 1087.522 8.43
129 053 129I 120047.142 1088.239 8.436
129 054 129Xe 120046.436 1087.651 8.431
129 055 129Cs 120047.123 1085.672 8.416
129 056 129Ba 120049.047 1082.454 8.391
129 057 129La 120052.275 1077.933 8.356
129 058 129Ce 120056.803 1072.112 8.311
129 059 129Pr 120062.805 1064.816 8.254
130 048 130Cd 121008.124 1073.289 8.256
130 049 130In 120999.293 1080.827 8.314
130 050 130Sn 120988.533 1090.294 8.387
130 051 130Sb 120985.869 1091.664 8.397
130 052 130Te 120980.298 1095.941 8.43
130 053 130I 120980.207 1094.74 8.421
130 054 130Xe 120976.746 1096.907 8.438
130 055 130Cs 120979.217 1093.143 8.409
130 056 130Ba 120978.344 1092.722 8.406
130 057 130La 120983.467 1086.306 8.356
130 058 130Ce 120985.161 1083.319 8.333
130 059 130Pr 120992.893 1074.294 8.264
130 060 130Nd 120996.966 1068.927 8.223
131 049 131In 121932.54 1087.145 8.299
131 050 131Sn 121922.852 1095.54 8.363
131 051 131Sb 121917.667 1099.432 8.393
131 052 131Te 121913.934 1101.871 8.411
131 053 131I 121911.188 1103.323 8.422
131 054 131Xe 121909.707 1103.512 8.424
131 055 131Cs 121909.551 1102.374 8.415
131 056 131Ba 121910.416 1100.216 8.399
131 057 131La 121912.82 1096.519 8.37
131 058 131Ce 121916.358 1091.687 8.333
131 059 131Pr 121921.287 1085.465 8.286
131 060 131Nd 121927.287 1078.172 8.23
132 049 132In 122869.751 1089.5 8.254
132 050 132Sn 122855.106 1102.851 8.355
132 051 132Sb 122851.475 1105.189 8.373
132 052 132Te 122845.456 1109.915 8.408
132 053 132I 122844.427 1109.65 8.406
132 054 132Xe 122840.335 1112.448 8.428
132 055 132Cs 122841.949 1109.541 8.406
132 056 132Ba 122840.159 1110.038 8.409
132 057 132La 122844.343 1104.561 8.368
132 058 132Ce 122845.098 1102.513 8.352
132 059 132Pr 122851.851 1094.466 8.291
132 060 132Nd 122855.124 1089.9 8.257
133 050 133Sn 123792.204 1105.319 8.311
133 051 133Sb 123783.7 1112.529 8.365
133 052 133Te 123779.187 1115.749 8.389
133 053 133I 123775.734 1117.909 8.405
133 054 133Xe 123773.466 1118.883 8.413
133 055 133Cs 123772.528 1118.528 8.41
133 056 133Ba 123772.534 1117.228 8.4
133 057 133La 123774.083 1114.386 8.379
133 058 133Ce 123776.643 1110.533 8.35
133 059 133Pr 123780.617 1105.266 8.31
133 060 133Nd 123785.714 1098.875 8.262
133 061 133Pm 123792.123 1091.173 8.204
134 050 134Sn 124727.848 1109.24 8.278
134 051 134Sb 124719.967 1115.827 8.327
134 052 134Te 124711.067 1123.434 8.384
134 053 134I 124709.043 1124.165 8.389
134 054 134Xe 124704.479 1127.435 8.414
134 055 134Cs 124705.202 1125.419 8.399
134 056 134Ba 124702.632 1126.696 8.408
134 057 134La 124705.852 1122.182 8.374
134 058 134Ce 124705.724 1121.017 8.366
134 059 134Pr 124711.539 1113.909 8.313
134 060 134Nd 124713.892 1110.262 8.286
134 061 134Pm 124722.287 1100.574 8.213
135 051 135Sb 125655.921 1119.439 8.292
135 052 135Te 125647.29 1126.776 8.346
135 053 135I 125640.819 1131.954 8.385
135 054 135Xe 125637.681 1133.799 8.399
135 055 135Cs 125636.005 1134.181 8.401
135 056 135Ba 125635.225 1133.668 8.398
135 057 135La 125635.914 1131.686 8.383
135 058 135Ce 125637.429 1128.877 8.362
135 059 135Pr 125640.607 1124.406 8.329
135 060 135Nd 125644.818 1118.902 8.288
135 061 135Pm 125650.541 1111.885 8.236
135 062 135Sm 125657.15 1103.983 8.178
136 052 136Te 126582.184 1131.448 8.319
136 053 136I 126576.603 1135.735 8.351
136 054 136Xe 126569.167 1141.878 8.396
136 055 136Cs 126568.742 1141.009 8.39
136 056 136Ba 126565.683 1142.775 8.403
136 057 136La 126568.019 1139.146 8.376
136 058 136Ce 126567.08 1138.792 8.373
136 059 136Pr 126571.71 1132.868 8.33
136 060 136Nd 126573.327 1129.958 8.309
136 061 136Pm 126580.815 1121.177 8.244
136 062 136Sm 126584.693 1116.005 8.206
137 052 137Te 127518.548 1134.649 8.282
137 053 137I 127511.094 1140.81 8.327
137 054 137Xe 127504.707 1145.903 8.364
137 055 137Cs 127500.029 1149.288 8.389
137 056 137Ba 127498.343 1149.681 8.392
137 057 137La 127498.452 1148.278 8.382
137 058 137Ce 127499.163 1146.274 8.367
137 059 137Pr 127501.354 1142.79 8.342
137 060 137Nd 127504.44 1138.41 8.31
137 061 137Pm 127509.436 1132.121 8.264
137 062 137Sm 127514.968 1125.296 8.214
138 053 138I 128446.761 1144.708 8.295
138 054 138Xe 128438.43 1151.746 8.346
138 055 138Cs 128435.182 1153.7 8.36
138 056 138Ba 128429.296 1158.293 8.393
138 057 138La 128430.522 1155.774 8.375
138 058 138Ce 128428.967 1156.035 8.377
138 059 138Pr 128432.893 1150.816 8.339
138 060 138Nd 128433.496 1148.92 8.326
138 061 138Pm 128440.063 1141.059 8.269
138 062 138Sm 128442.994 1136.835 8.238
138 063 138Eu 128452.231 1126.305 8.162
139 053 139I 129381.745 1149.289 8.268
139 054 139Xe 129374.43 1155.311 8.312
139 055 139Cs 129368.862 1159.586 8.342
139 056 139Ba 129364.138 1163.016 8.367
139 057 139La 129361.309 1164.551 8.378
139 058 139Ce 129361.078 1163.49 8.37
139 059 139Pr 129362.696 1160.578 8.349
139 060 139Nd 129365.016 1156.965 8.323
139 061 139Pm 129369.001 1151.687 8.286
139 062 139Sm 129373.606 1145.788 8.243
139 063 139Eu 129380.077 1138.024 8.187
140 054 140Xe 130308.578 1160.728 8.291
140 055 140Cs 130304.006 1164.007 8.314
140 056 140Ba 130297.275 1169.445 8.353
140 057 140La 130295.714 1169.712 8.355
140 058 140Ce 130291.441 1172.692 8.376
140 059 140Pr 130294.318 1168.522 8.347
140 060 140Nd 130294.25 1167.296 8.338
140 061 140Pm 130299.781 1160.472 8.289
140 062 140Sm 130302.024 1156.936 8.264
140 063 140Eu 130309.979 1147.687 8.198
140 064 140Gd 130314.676 1141.697 8.155
140 065 140Tb 130325.467 1129.613 8.069
141 054 141Xe 131244.732 1164.14 8.256
141 055 141Cs 131238.074 1169.504 8.294
141 056 141Ba 131232.314 1173.971 8.326
141 057 141La 131228.591 1176.401 8.343
141 058 141Ce 131225.578 1178.12 8.355
141 059 141Pr 131224.486 1177.919 8.354
141 060 141Nd 131225.798 1175.314 8.336
141 061 141Pm 131228.962 1170.856 8.304
141 062 141Sm 131233.035 1165.49 8.266
141 063 141Eu 131238.536 1158.696 8.218
141 064 141Gd 131244.728 1151.21 8.165
141 065 141Tb 131252.901 1141.744 8.097
142 054 142Xe 132179.076 1169.361 8.235
142 055 142Cs 132173.53 1173.614 8.265
142 056 142Ba 132165.711 1180.139 8.311
142 057 142La 132162.988 1181.569 8.321
142 058 142Ce 132157.973 1185.29 8.347
142 059 142Pr 132158.208 1183.762 8.336
142 060 142Nd 132155.535 1185.142 8.346
142 061 142Pm 132159.822 1179.562 8.307
142 062 142Sm 132161.475 1176.615 8.286
142 063 142Eu 132168.637 1168.16 8.226
142 064 142Gd 132172.486 1163.018 8.19
143 055 143Cs 133107.868 1178.841 8.244
143 056 143Ba 133101.092 1184.324 8.282
143 057 143La 133096.33 1187.792 8.306
143 058 143Ce 133092.394 1190.435 8.325
143 059 143Pr 133090.421 1191.114 8.329
143 060 143Nd 133088.977 1191.266 8.331
143 061 143Pm 133089.507 1189.442 8.318
143 062 143Sm 133092.439 1185.217 8.288
143 063 143Eu 133097.209 1179.153 8.246
143 064 143Gd 133102.71 1172.359 8.198
143 065 143Tb 133109.999 1163.777 8.138
144 055 144Cs 134043.763 1182.511 8.212
144 056 144Ba 134034.753 1190.228 8.265
144 057 144La 134031.121 1192.567 8.282
144 058 144Ce 134025.063 1197.331 8.315
144 059 144Pr 134024.233 1196.868 8.312
144 060 144Nd 134020.725 1199.083 8.327
144 061 144Pm 134022.546 1195.968 8.305
144 062 144Sm 134021.484 1195.737 8.304
144 063 144Eu 134027.323 1188.605 8.254
144 064 144Gd 134030.674 1183.96 8.222
144 065 144Tb 134039.555 1173.786 8.151
144 066 144Dy 134044.832 1167.216 8.106
145 055 145Cs 134978.47 1187.37 8.189
145 056 145Ba 134970.606 1193.94 8.234
145 057 145La 134964.515 1198.738 8.267
145 058 145Ce 134959.894 1202.066 8.29
145 059 145Pr 134956.851 1203.815 8.302
145 060 145Nd 134954.535 1204.838 8.309
145 061 145Pm 134954.187 1203.893 8.303
145 062 145Sm 134954.292 1202.494 8.293
145 063 145Eu 134956.441 1199.052 8.269
145 064 145Gd 134961.001 1193.199 8.229
145 065 145Tb 134967.537 1185.369 8.175
145 066 145Dy 134974.616 1176.997 8.117
146 055 146Cs 135914.401 1191.004 8.158
146 056 146Ba 135904.51 1199.602 8.216
146 057 146La 135899.879 1202.939 8.239
146 058 146Ce 135892.808 1208.717 8.279
146 059 146Pr 135891.267 1208.965 8.281
146 060 146Nd 135886.535 1212.403 8.304
146 061 146Pm 135887.495 1210.15 8.289
146 062 146Sm 135885.442 1210.91 8.294
146 063 146Eu 135888.811 1206.247 8.262
146 064 146Gd 135889.329 1204.436 8.25
146 065 146Tb 135897.141 1195.331 8.187
146 066 146Dy 135901.846 1189.332 8.146
147 055 147Cs 136849.495 1195.475 8.132
147 057 147La 136833.643 1208.741 8.223
147 058 147Ce 136827.952 1213.138 8.253
147 059 147Pr 136824.016 1215.781 8.271
147 060 147Nd 136820.808 1217.696 8.284
147 061 147Pm 136819.401 1217.809 8.284
147 062 147Sm 136818.666 1217.251 8.281
147 063 147Eu 136819.877 1214.747 8.264
147 064 147Gd 136821.553 1211.777 8.243
147 065 147Tb 136825.653 1206.384 8.207
147 066 147Dy 136831.706 1199.038 8.157
147 067 147Ho 136839.546 1189.904 8.095
148 055 148Cs 137785.709 1198.827 8.1
148 056 148Ba 137774.488 1208.754 8.167
148 057 148La 137768.857 1213.092 8.197
148 058 148Ce 137761.085 1219.571 8.24
148 059 148Pr 137758.434 1220.928 8.25
148 060 148Nd 137753.041 1225.028 8.277
148 061 148Pm 137753.071 1223.705 8.268
148 062 148Sm 137750.09 1225.392 8.28
148 063 148Eu 137752.619 1221.57 8.254
148 064 148Gd 137752.134 1220.761 8.248
148 065 148Tb 137757.359 1214.243 8.204
148 066 148Dy 137759.529 1210.78 8.181
148 067 148Ho 137768.857 1200.159 8.109
149 058 149Ce 138696.27 1223.951 8.214
149 059 149Pr 138691.399 1227.529 8.238
149 060 149Nd 138687.567 1230.067 8.255
149 061 149Pm 138685.366 1230.975 8.262
149 062 149Sm 138683.784 1231.263 8.264
149 063 149Eu 138683.968 1229.786 8.254
149 064 149Gd 138684.771 1227.69 8.24
149 065 149Tb 138687.897 1223.271 8.21
149 066 149Dy 138691.167 1218.707 8.179
149 067 149Ho 138696.683 1211.898 8.134
149 068 149Er 138704.118 1203.17 8.075
150 058 150Ce 139629.644 1230.142 8.201
150 059 150Pr 139625.649 1232.844 8.219
150 060 150Nd 139619.752 1237.448 8.25
150 061 150Pm 139619.328 1236.578 8.244
150 062 150Sm 139615.363 1239.25 8.262
150 063 150Eu 139617.112 1236.208 8.241
150 064 150Gd 139615.629 1236.397 8.243
150 065 150Tb 139619.776 1230.957 8.206
150 066 150Dy 139621.059 1228.381 8.189
150 067 150Ho 139627.917 1220.229 8.135
150 068 150Er 139631.521 1215.332 8.102
151 058 151Ce 140564.458 1234.894 8.178
151 059 151Pr 140558.676 1239.382 8.208
151 060 151Nd 140553.983 1242.782 8.23
151 061 151Pm 140551.03 1244.442 8.241
151 062 151Sm 140549.332 1244.847 8.244
151 063 151Eu 140548.744 1244.141 8.239
151 064 151Gd 140548.697 1242.895 8.231
151 065 151Tb 140550.751 1239.547 8.209
151 066 151Dy 140553.111 1235.894 8.185
151 067 151Ho 140557.727 1229.985 8.146
151 068 151Er 140562.582 1223.836 8.105
151 069 151Tm 140569.555 1215.57 8.05
151 070 151Yb 140578.286 1205.546 7.984
152 059 152Pr 141493.131 1244.493 8.187
152 060 152Nd 141486.272 1250.058 8.224
152 061 152Pm 141484.657 1250.38 8.226
152 062 152Sm 141480.639 1253.104 8.244
152 063 152Eu 141482.003 1250.448 8.227
152 064 152Gd 141479.672 1251.485 8.233
152 065 152Tb 141483.155 1246.709 8.202
152 066 152Dy 141483.24 1245.33 8.193
152 067 152Ho 141489.245 1238.032 8.145
152 068 152Er 141491.842 1234.142 8.119
152 069 152Tm 141500.061 1224.629 8.057
152 070 152Yb 141505.01 1218.387 8.016
153 059 153Pr 142426.805 1250.384 8.172
153 060 153Nd 142420.575 1255.321 8.205
153 061 153Pm 142416.728 1257.874 8.221
153 062 153Sm 142414.336 1258.973 8.229
153 063 153Eu 142413.018 1258.998 8.229
153 064 153Gd 142412.99 1257.732 8.22
153 065 153Tb 142414.049 1255.38 8.205
153 066 153Dy 142415.708 1252.428 8.186
153 067 153Ho 142419.328 1247.514 8.154
153 068 153Er 142423.348 1242.201 8.119
153 069 153Tm 142429.31 1234.946 8.072
153 071 153Lu 142443.893 1217.776 7.959
154 059 154Pr 143361.729 1255.025 8.15
154 060 154Nd 143353.728 1261.733 8.193
154 061 154Pm 143350.407 1263.76 8.206
154 062 154Sm 143345.934 1266.94 8.227
154 063 154Eu 143346.141 1265.44 8.217
154 064 154Gd 143343.661 1266.627 8.225
154 065 154Tb 143346.703 1262.291 8.197
154 066 154Dy 143345.954 1261.747 8.193
154 067 154Ho 143351.197 1255.211 8.151
154 068 154Er 143352.718 1252.396 8.132
154 069 154Tm 143360.39 1243.431 8.074
154 070 154Yb 143364.374 1238.154 8.04
155 061 155Pm 144283.431 1270.302 8.195
155 062 155Sm 144279.693 1272.747 8.211
155 063 155Eu 144277.555 1273.592 8.217
155 064 155Gd 144276.791 1273.062 8.213
155 065 155Tb 144277.103 1271.456 8.203
155 066 155Dy 144278.686 1268.58 8.184
155 067 155Ho 144281.295 1264.678 8.159
155 068 155Er 144284.609 1260.07 8.129
155 069 155Tm 144289.678 1253.708 8.088
155 070 155Yb 144295.299 1246.794 8.044
155 071 155Lu 144302.737 1238.062 7.987
156 060 156Nd 145221.876 1272.715 8.158
156 061 156Pm 145217.675 1275.623 8.177
156 062 156Sm 145212.014 1279.991 8.205
156 063 156Eu 145210.78 1279.931 8.205
156 064 156Gd 145207.82 1281.598 8.215
156 065 156Tb 145209.753 1278.372 8.195
156 066 156Dy 145208.81 1278.021 8.192
156 067 156Ho 145213.479 1272.059 8.154
156 068 156Er 145214.105 1270.14 8.142
156 069 156Tm 145220.967 1261.984 8.09
156 070 156Yb 145224.032 1257.626 8.062
156 071 156Lu 145233.035 1247.33 7.996
156 072 156Hf 145238.424 1240.647 7.953
157 061 157Pm 146151.019 1281.844 8.165
157 062 157Sm 146146.148 1285.422 8.187
157 063 157Eu 146142.9 1287.377 8.2
157 064 157Gd 146141.025 1287.958 8.204
157 065 157Tb 146140.575 1287.116 8.198
157 066 157Dy 146141.406 1284.991 8.185
157 067 157Ho 146143.494 1281.609 8.163
157 068 157Er 146146.392 1277.418 8.136
157 069 157Tm 146150.592 1271.925 8.101
157 070 157Yb 146155.348 1265.875 8.063
157 071 157Lu 146161.796 1258.134 8.014
157 073 157Ta 146177.627 1239.716 7.896
158 061 158Pm 147085.793 1286.636 8.143
158 062 158Sm 147079.162 1291.973 8.177
158 063 158Eu 147076.651 1293.191 8.185
158 064 158Gd 147072.653 1295.896 8.202
158 065 158Tb 147073.362 1293.894 8.189
158 066 158Dy 147071.916 1294.046 8.19
158 067 158Ho 147075.626 1289.043 8.158
158 068 158Er 147076.002 1287.373 8.148
158 069 158Tm 147082.092 1279.99 8.101
158 070 158Yb 147084.269 1276.52 8.079
158 071 158Lu 147092.559 1266.936 8.019
158 072 158Hf 147097.158 1261.044 7.981
159 062 159Sm 148013.656 1297.045 8.158
159 063 159Eu 148009.302 1300.105 8.177
159 064 159Gd 148006.276 1301.839 8.188
159 065 159Tb 148004.794 1302.027 8.189
159 066 159Dy 148004.649 1300.879 8.182
159 067 159Ho 148005.975 1298.259 8.165
159 068 159Er 148008.233 1294.708 8.143
159 069 159Tm 148011.719 1289.928 8.113
159 070 159Yb 148015.935 1284.419 8.078
159 071 159Lu 148021.557 1277.504 8.035
159 072 159Hf 148027.902 1269.865 7.987
159 073 159Ta 148035.797 1260.677 7.929
160 064 160Gd 148938.39 1309.29 8.183
160 065 160Tb 148937.984 1308.402 8.178
160 066 160Dy 148935.638 1309.455 8.184
160 067 160Ho 148938.417 1305.382 8.159
160 068 160Er 148938.236 1304.27 8.152
160 069 160Tm 148943.483 1297.73 8.111
160 070 160Yb 148945.102 1294.817 8.093
160 071 160Lu 148952.491 1286.135 8.038
160 072 160Hf 148956.313 1281.02 8.006
160 073 160Ta 148965.859 1270.18 7.939
160 074 160W 148971.868 1262.878 7.893
161 064 161Gd 149872.319 1314.925 8.167
161 065 161Tb 149869.853 1316.099 8.175
161 066 161Dy 149868.749 1315.909 8.173
161 067 161Ho 149869.096 1314.269 8.163
161 068 161Er 149870.579 1311.492 8.146
161 069 161Tm 149873.378 1307.4 8.12
161 070 161Yb 149876.922 1302.563 8.09
161 071 161Lu 149881.693 1296.498 8.053
161 072 161Hf 149887.425 1289.473 8.009
161 075 161Re 149911.331 1261.687 7.837
162 064 162Gd 150805.039 1321.771 8.159
162 065 162Tb 150803.135 1322.382 8.163
162 066 162Dy 150800.117 1324.106 8.173
162 067 162Ho 150801.746 1321.184 8.155
162 068 162Er 150800.939 1320.698 8.152
162 069 162Tm 150805.287 1315.056 8.118
162 070 162Yb 150806.428 1312.622 8.103
162 071 162Lu 150812.909 1304.848 8.055
162 072 162Hf 150816.065 1300.398 8.027
162 073 162Ta 150824.947 1290.223 7.964
162 074 162W 150830.214 1283.663 7.924
163 065 163Tb 151735.708 1329.374 8.156
163 066 163Dy 151733.412 1330.377 8.162
163 067 163Ho 151732.903 1329.592 8.157
163 068 163Er 151733.602 1327.6 8.145
163 069 163Tm 151735.53 1324.379 8.125
163 070 163Yb 151738.45 1320.165 8.099
163 071 163Lu 151742.452 1314.87 8.067
163 072 163Hf 151747.446 1308.583 8.028
163 073 163Ta 151753.681 1301.054 7.982
163 074 163W 151760.8 1292.642 7.93
163 075 163Re 151769.192 1282.957 7.871
164 065 164Tb 152669.723 1334.924 8.14
164 066 164Dy 152665.319 1338.035 8.159
164 067 164Ho 152665.794 1336.267 8.148
164 068 164Er 152664.32 1336.447 8.149
164 069 164Tm 152667.871 1331.603 8.12
164 070 164Yb 152668.225 1329.956 8.109
164 071 164Lu 152674.095 1322.792 8.066
164 072 164Hf 152676.404 1319.19 8.044
164 073 164Ta 152684.432 1309.869 7.987
164 074 164W 152688.97 1304.037 7.951
164 076 164Os 152705.722 1284.699 7.834
165 066 165Dy 153599.168 1343.751 8.144
165 067 165Ho 153597.371 1344.256 8.147
165 068 165Er 153597.236 1343.097 8.14
165 069 165Tm 153598.317 1340.722 8.126
165 070 165Yb 153600.455 1337.291 8.105
165 071 165Lu 153603.789 1332.664 8.077
165 072 165Hf 153608.084 1327.075 8.043
165 073 165Ta 153613.354 1320.512 8.003
165 074 165W 153619.836 1312.737 7.956
165 075 165Re 153627.53 1303.749 7.902
166 065 166Tb 154537.031 1346.747 8.113
166 066 166Dy 154531.69 1350.795 8.137
166 067 166Ho 154530.692 1350.499 8.136
166 068 166Er 154528.327 1351.572 8.142
166 069 166Tm 154530.853 1347.752 8.119
166 070 166Yb 154530.648 1346.663 8.112
166 071 166Lu 154535.704 1340.314 8.074
166 072 166Hf 154537.355 1337.37 8.056
166 073 166Ta 154544.605 1328.826 8.005
166 074 166W 154548.3 1323.838 7.975
166 076 166Os 154563.732 1305.819 7.866
167 066 167Dy 155465.834 1356.216 8.121
167 067 167Ho 155462.976 1357.781 8.13
167 068 167Er 155461.456 1358.008 8.132
167 069 167Tm 155461.693 1356.477 8.123
167 070 167Yb 155463.136 1353.741 8.106
167 071 167Lu 155465.719 1349.864 8.083
167 072 167Hf 155469.24 1345.05 8.054
167 073 167Ta 155473.846 1339.151 8.019
167 074 167W 155479.597 1332.106 7.977
167 076 167Os 155494.164 1314.953 7.874
167 077 167Ir 155503.074 1304.749 7.813
168 066 168Dy 156398.708 1362.907 8.113
168 067 168Ho 156396.687 1363.635 8.117
168 068 168Er 156393.25 1365.779 8.13
168 069 168Tm 156394.418 1363.318 8.115
168 070 168Yb 156393.649 1362.793 8.112
168 071 168Lu 156397.653 1357.496 8.08
168 072 168Hf 156398.841 1355.014 8.066
168 073 168Ta 156405.297 1347.265 8.019
168 074 168W 156408.29 1342.979 7.994
168 075 168Re 156416.879 1333.096 7.935
168 076 168Os 156422.167 1326.515 7.896
168 078 168Pt 156440.096 1305.999 7.774
169 066 169Dy 157333.162 1368.019 8.095
169 067 169Ho 157329.448 1370.439 8.109
169 068 169Er 157326.812 1371.783 8.117
169 069 169Tm 157325.949 1371.352 8.115
169 070 169Yb 157326.348 1369.659 8.104
169 071 169Lu 157328.13 1366.584 8.086
169 072 169Hf 157330.979 1362.442 8.062
169 073 169Ta 157334.895 1357.232 8.031
169 074 169W 157339.756 1351.078 7.995
169 075 169Re 157345.777 1343.764 7.951
169 076 169Os 157352.931 1335.316 7.901
169 077 169Ir 157361.06 1325.894 7.846
170 067 170Ho 158263.505 1375.948 8.094
170 068 170Er 158259.12 1379.04 8.112
170 069 170Tm 158258.923 1377.944 8.106
170 070 170Yb 158257.443 1378.13 8.107
170 071 170Lu 158260.391 1373.888 8.082
170 072 170Hf 158260.936 1372.05 8.071
170 073 170Ta 158266.541 1365.152 8.03
170 074 170W 158268.875 1361.524 8.009
170 075 170Re 158276.739 1352.367 7.955
170 076 170Os 158281.218 1346.595 7.921
170 078 170Pt 158297.818 1327.408 7.808
171 067 171Ho 159196.719 1382.299 8.084
171 068 171Er 159193.003 1384.721 8.098
171 069 171Tm 159191.002 1385.43 8.102
171 070 171Yb 159190.394 1384.744 8.098
171 071 171Lu 159191.362 1382.483 8.085
171 072 171Hf 159193.253 1379.298 8.066
171 073 171Ta 159196.453 1374.805 8.04
171 074 171W 159200.576 1369.389 8.008
171 075 171Re 159205.901 1362.77 7.969
171 076 171Os 159212.347 1355.031 7.924
171 077 171Ir 159219.699 1346.386 7.874
171 078 171Pt 159228.148 1336.643 7.817
171 079 171Au 159237.542 1325.956 7.754
172 068 172Er 160125.733 1391.557 8.09
172 069 172Tm 160124.331 1391.666 8.091
172 070 172Yb 160121.94 1392.764 8.097
172 071 172Lu 160123.948 1389.462 8.078
172 072 172Hf 160123.774 1388.343 8.072
172 073 172Ta 160128.337 1382.486 8.038
172 074 172W 160130.059 1379.471 8.02
172 075 172Re 160137.125 1371.112 7.972
172 076 172Os 160140.896 1366.047 7.942
172 078 172Pt 160156.011 1348.346 7.839
172 080 172Hg 160175 1326.77 7.714
173 069 173Tm 161056.946 1398.616 8.084
173 070 173Yb 161055.138 1399.131 8.087
173 071 173Lu 161055.298 1397.678 8.079
173 072 173Hf 161056.26 1395.422 8.066
173 073 173Ta 161058.764 1391.625 8.044
173 074 173W 161061.923 1387.172 8.018
173 075 173Re 161066.585 1381.217 7.984
173 076 173Os 161072.19 1374.319 7.944
173 077 173Ir 161078.845 1366.37 7.898
173 078 173Pt 161086.666 1357.256 7.845
173 079 173Au 161095.275 1347.354 7.788
174 069 174Tm 161990.829 1404.298 8.071
174 070 174Yb 161987.239 1406.595 8.084
174 071 174Lu 161988.102 1404.439 8.071
174 072 174Hf 161987.32 1403.928 8.069
174 073 174Ta 161990.914 1399.04 8.04
174 074 174W 161991.917 1396.744 8.027
174 075 174Re 161997.96 1389.407 7.985
174 076 174Os 162001.126 1384.948 7.959
174 077 174Ir 162009.742 1375.039 7.903
174 078 174Pt 162014.781 1368.706 7.866
174 080 174Hg 162032.431 1348.47 7.75
175 069 175Tm 162923.873 1410.819 8.062
175 070 175Yb 162920.982 1412.418 8.071
175 071 175Lu 162920.001 1412.106 8.069
175 072 175Hf 162920.177 1410.636 8.061
175 073 175Ta 162921.74 1407.779 8.044
175 074 175W 162924.005 1404.221 8.024
175 075 175Re 162927.839 1399.093 7.995
175 076 175Os 162932.511 1393.128 7.961
175 077 175Ir 162938.676 1385.67 7.918
175 078 175Pt 162945.904 1377.148 7.869
175 079 175Au 162953.643 1368.116 7.818
175 080 175Hg 162962.582 1357.884 7.759
176 069 176Tm 163858.317 1415.941 8.045
176 070 176Yb 163853.682 1419.283 8.064
176 071 176Lu 163853.278 1418.394 8.059
176 072 176Hf 163851.577 1418.801 8.061
176 073 176Ta 163854.273 1414.811 8.039
176 074 176W 163854.49 1413.301 8.03
176 075 176Re 163859.558 1406.94 7.994
176 076 176Os 163862.012 1403.192 7.973
176 077 176Ir 163869.738 1394.173 7.921
176 078 176Pt 163874.16 1388.458 7.889
176 080 176Hg 163890.287 1369.744 7.783
177 070 177Yb 164787.681 1424.849 8.05
177 071 177Lu 164785.77 1425.466 8.053
177 072 177Hf 164784.759 1425.185 8.052
177 073 177Ta 164785.413 1423.237 8.041
177 074 177W 164786.924 1420.432 8.025
177 075 177Re 164789.846 1416.217 8.001
177 076 177Os 164793.654 1411.116 7.972
177 077 177Ir 164799.046 1404.43 7.935
177 078 177Pt 164805.212 1396.971 7.892
177 079 177Au 164812.521 1388.369 7.844
177 080 177Hg 164820.78 1378.816 7.79
177 081 177Tl 164829.721 1368.582 7.732
178 070 178Yb 165720.466 1431.629 8.043
178 071 178Lu 165719.31 1431.492 8.042
178 072 178Hf 165716.698 1432.811 8.049
178 073 178Ta 165718.124 1430.091 8.034
178 074 178W 165717.704 1429.218 8.029
178 075 178Re 165721.956 1423.672 7.998
178 076 178Os 165723.552 1420.783 7.982
178 077 178Ir 165730.335 1412.707 7.937
178 078 178Pt 165734.078 1407.67 7.908
178 079 178Au 165743.235 1397.22 7.85
178 080 178Hg 165748.737 1390.425 7.811
178 082 178Pb 165767.6 1368.975 7.691
179 071 179Lu 166652.083 1438.284 8.035
179 072 179Hf 166650.165 1438.91 8.039
179 073 179Ta 166649.759 1438.022 8.034
179 074 179W 166650.31 1436.177 8.023
179 075 179Re 166652.517 1432.677 8.004
179 076 179Os 166655.572 1428.328 7.979
179 077 179Ir 166660.004 1422.603 7.948
179 078 179Pt 166665.306 1416.008 7.911
179 079 179Au 166672.107 1407.913 7.865
179 080 179Hg 166679.626 1399.101 7.816
179 081 179Tl 166687.737 1389.697 7.764
180 071 180Lu 167585.951 1443.981 8.022
180 072 180Hf 167582.342 1446.297 8.035
180 073 180Ta 167582.683 1444.663 8.026
180 074 180W 167581.464 1444.588 8.025
180 075 180Re 167584.757 1440.002 8
180 076 180Os 167585.727 1437.739 7.987
180 077 180Ir 167591.597 1430.575 7.948
180 078 180Pt 167594.628 1426.251 7.924
180 079 180Au 167602.957 1416.629 7.87
180 080 180Hg 167607.797 1410.495 7.836
180 082 180Pb 167625.081 1390.625 7.726
181 072 181Hf 168516.213 1451.992 8.022
181 073 181Ta 168514.672 1452.24 8.023
181 074 181W 168514.348 1451.27 8.018
181 075 181Re 168515.58 1448.744 8.004
181 076 181Os 168518.03 1445.001 7.983
181 077 181Ir 168521.597 1440.141 7.957
181 078 181Pt 168526.183 1434.261 7.924
181 079 181Au 168532.176 1426.975 7.884
181 080 181Hg 168538.875 1418.983 7.84
181 081 181Tl 168546.224 1410.34 7.792
181 082 181Pb 168555.374 1399.897 7.734
182 072 182Hf 169449.059 1458.711 8.015
182 073 182Ta 169448.174 1458.303 8.013
182 074 182W 169445.849 1459.335 8.018
182 075 182Re 169448.135 1455.755 7.999
182 076 182Os 169448.465 1454.131 7.99
182 077 182Ir 169453.511 1447.792 7.955
182 078 182Pt 169455.883 1444.127 7.935
182 079 182Au 169463.24 1435.476 7.887
182 080 182Hg 169467.454 1429.969 7.857
182 081 182Tl 169477.169 1418.961 7.796
182 082 182Pb 169483.182 1411.654 7.756
183 072 183Hf 170383.322 1464.013 8
183 073 183Ta 170380.805 1465.237 8.007
183 074 183W 170379.223 1465.525 8.008
183 075 183Re 170379.268 1464.187 8.001
183 076 183Os 170380.908 1461.254 7.985
183 077 183Ir 170383.86 1457.008 7.962
183 078 183Pt 170387.774 1451.801 7.933
183 079 183Au 170392.848 1445.434 7.899
183 080 183Hg 170398.724 1438.264 7.859
183 081 183Tl 170405.426 1430.269 7.816
183 082 183Pb 170413.933 1420.469 7.762
184 072 184Hf 171316.606 1470.294 7.991
184 073 184Ta 171314.754 1470.853 7.994
184 074 184W 171311.377 1472.937 8.005
184 075 184Re 171312.346 1470.674 7.993
184 076 184Os 171311.806 1469.921 7.989
184 077 184Ir 171315.94 1464.494 7.959
184 078 184Pt 171317.708 1461.432 7.943
184 079 184Au 171324.21 1453.637 7.9
184 080 184Hg 171327.669 1448.885 7.874
184 081 184Tl 171336.617 1438.643 7.819
184 082 184Pb 171341.951 1432.016 7.783
185 073 185Ta 172247.693 1477.479 7.986
185 074 185W 172245.189 1478.691 7.993
185 075 185Re 172244.245 1478.341 7.991
185 076 185Os 172244.747 1476.546 7.981
185 077 185Ir 172246.709 1473.29 7.964
185 078 185Pt 172249.854 1468.852 7.94
185 079 185Au 172254.156 1463.256 7.909
185 080 185Hg 172259.336 1456.783 7.875
185 081 185Tl 172265.241 1449.585 7.836
185 082 185Pb 172272.949 1440.583 7.787
186 073 186Ta 173181.973 1482.765 7.972
186 074 186W 173177.563 1485.882 7.989
186 075 186Re 173177.631 1484.52 7.981
186 076 186Os 173176.051 1484.807 7.983
186 077 186Ir 173179.367 1480.198 7.958
186 078 186Pt 173180.165 1478.107 7.947
186 079 186Au 173185.803 1471.176 7.91
186 080 186Hg 173188.468 1467.217 7.888
186 081 186Tl 173196.306 1458.086 7.839
186 082 186Pb 173201.304 1451.795 7.805
186 083 186Bi 173212.304 1439.501 7.739
187 074 187W 174111.662 1491.348 7.975
187 075 187Re 174109.84 1491.877 7.978
187 076 187Os 174109.326 1491.097 7.974
187 077 187Ir 174110.318 1488.813 7.962
187 078 187Pt 174112.81 1485.027 7.941
187 079 187Au 174116.007 1480.537 7.917
187 080 187Hg 174120.383 1474.868 7.887
187 081 187Tl 174125.546 1468.411 7.852
187 082 187Pb 174132.499 1460.165 7.808
187 083 187Bi 174140.595 1450.776 7.758
188 074 188W 175044.394 1498.182 7.969
188 075 188Re 175043.533 1497.749 7.967
188 076 188Os 175040.902 1499.087 7.974
188 077 188Ir 175043.2 1495.496 7.955
188 078 188Pt 175043.194 1494.209 7.948
188 079 188Au 175048.205 1487.904 7.914
188 080 188Hg 175049.793 1485.023 7.899
188 081 188Tl 175057.134 1476.389 7.853
188 082 188Pb 175061.158 1471.071 7.825
188 083 188Bi 175071.262 1459.674 7.764
188 084 188Po 175077.413 1452.23 7.725
189 074 189W 175979.075 1503.066 7.953
189 075 189Re 175976.066 1504.782 7.962
189 076 189Os 175974.547 1505.007 7.963
189 077 189Ir 175974.569 1503.692 7.956
189 078 189Pt 175976.028 1500.94 7.941
189 079 189Au 175978.418 1497.257 7.922
189 080 189Hg 175981.859 1492.522 7.897
189 081 189Tl 175986.376 1486.712 7.866
189 082 189Pb 175992.587 1479.208 7.826
189 083 189Bi 175999.896 1470.605 7.781
189 084 189Po 176008.03 1461.178 7.731
190 074 190W 176911.749 1509.958 7.947
190 075 190Re 176909.968 1510.445 7.95
190 076 190Os 176906.32 1512.799 7.962
190 077 190Ir 176907.764 1510.062 7.948
190 078 190Pt 176906.682 1509.851 7.947
190 079 190Au 176910.613 1504.627 7.919
190 080 190Hg 176911.613 1502.334 7.907
190 081 190Tl 176918.142 1494.511 7.866
190 082 190Pb 176921.544 1489.816 7.841
190 083 190Bi 176930.55 1479.517 7.787
190 084 190Po 176936.376 1472.397 7.749
191 075 191Re 177842.683 1517.296 7.944
191 076 191Os 177840.127 1518.558 7.951
191 077 191Ir 177839.303 1518.088 7.948
191 078 191Pt 177839.801 1516.298 7.939
191 079 191Au 177841.178 1513.627 7.925
191 080 191Hg 177843.884 1509.628 7.904
191 081 191Tl 177847.685 1504.534 7.877
191 082 191Pb 177853.205 1497.72 7.841
191 083 191Bi 177859.704 1489.928 7.801
191 084 191Po 177867.379 1480.96 7.754
192 076 192Os 178772.134 1526.116 7.949
192 077 192Ir 178772.67 1524.286 7.939
192 078 192Pt 178770.7 1524.964 7.943
192 079 192Au 178773.705 1520.666 7.92
192 080 192Hg 178773.96 1519.117 7.912
192 081 192Tl 178779.59 1512.194 7.876
192 082 192Pb 178782.393 1508.098 7.855
192 083 192Bi 178790.888 1498.309 7.804
192 084 192Po 178795.856 1492.048 7.771
193 076 193Os 179706.116 1531.699 7.936
193 077 193Ir 179704.464 1532.058 7.938
193 078 193Pt 179704.01 1531.219 7.934
193 079 193Au 179704.582 1529.354 7.924
193 080 193Hg 179706.414 1526.229 7.908
193 081 193Tl 179709.634 1521.715 7.885
193 082 193Pb 179714.253 1515.803 7.854
193 083 193Bi 179720.059 1508.704 7.817
193 084 193Po 179727.061 1500.408 7.774
193 085 193At 179734.76 1491.416 7.728
194 076 194Os 180638.57 1538.811 7.932
194 077 194Ir 180637.962 1538.125 7.928
194 078 194Pt 180635.218 1539.577 7.936
194 079 194Au 180637.208 1536.293 7.919
194 080 194Hg 180636.766 1535.442 7.915
194 081 194Tl 180641.618 1529.297 7.883
194 082 194Pb 180643.729 1525.892 7.865
194 083 194Bi 180651.436 1516.892 7.819
194 084 194Po 180655.91 1511.125 7.789
194 085 194At 180665.214 1500.527 7.735
195 076 195Os 181572.807 1544.139 7.919
195 077 195Ir 181570.296 1545.357 7.925
195 078 195Pt 181568.678 1545.682 7.927
195 079 195Au 181568.394 1544.673 7.921
195 080 195Hg 181569.453 1542.32 7.909
195 081 195Tl 181571.787 1538.693 7.891
195 082 195Pb 181575.717 1533.47 7.864
195 083 195Bi 181580.896 1526.997 7.831
195 084 195Po 181587.339 1519.261 7.791
195 085 195At 181594.422 1510.885 7.748
195 086 195Rn 181602.457 1501.556 7.7
196 076 196Os 182505.711 1550.801 7.912
196 077 196Ir 182504.04 1551.178 7.914
196 078 196Pt 182500.321 1553.604 7.927
196 079 196Au 182501.318 1551.314 7.915
196 080 196Hg 182500.12 1551.218 7.914
196 081 196Tl 182503.939 1546.106 7.888
196 082 196Pb 182505.564 1543.188 7.873
196 083 196Bi 182512.405 1535.053 7.832
196 084 196Po 182516.429 1529.736 7.805
196 085 196At 182525.472 1519.4 7.752
196 086 196Rn 182530.851 1512.727 7.718
197 077 197Ir 183436.706 1558.078 7.909
197 078 197Pt 183434.04 1559.45 7.916
197 079 197Au 183432.811 1559.386 7.916
197 080 197Hg 183432.9 1558.004 7.909
197 081 197Tl 183434.589 1555.021 7.894
197 082 197Pb 183437.67 1550.647 7.871
197 083 197Bi 183442.22 1544.804 7.842
197 084 197Po 183448.037 1537.693 7.806
197 085 197At 183454.546 1529.891 7.766
197 086 197Rn 183461.855 1521.289 7.722
198 078 198Pt 184366.049 1567.007 7.914
198 079 198Au 184365.864 1565.899 7.909
198 080 198Hg 184363.98 1566.489 7.912
198 081 198Tl 184366.934 1562.242 7.89
198 082 198Pb 184367.863 1560.019 7.879
198 083 198Bi 184374.033 1552.556 7.841
198 084 198Po 184377.418 1547.878 7.818
198 085 198At 184385.71 1538.292 7.769
198 086 198Rn 184390.638 1532.071 7.738
199 077 199Ir 185303.562 1570.352 7.891
199 078 199Pt 185300.059 1572.562 7.902
199 079 199Au 185297.845 1573.483 7.907
199 080 199Hg 185296.882 1573.153 7.905
199 081 199Tl 185297.859 1570.882 7.894
199 082 199Pb 185300.179 1567.269 7.876
199 083 199Bi 185304.098 1562.056 7.85
199 084 199Po 185309.17 1555.691 7.818
199 085 199At 185315.054 1548.514 7.781
199 086 199Rn 185321.843 1540.431 7.741
199 087 199Fr 185329.612 1531.369 7.695
200 078 200Pt 186232.342 1579.844 7.899
200 079 200Au 186231.164 1579.729 7.899
200 080 200Hg 186228.419 1581.181 7.906
200 081 200Tl 186230.364 1577.942 7.89
200 082 200Pb 186230.658 1576.355 7.882
200 083 200Bi 186236.02 1569.7 7.848
200 084 200Po 186238.925 1565.501 7.828
200 085 200At 186246.38 1556.753 7.784
200 086 200Rn 186250.851 1550.989 7.755
200 087 200Fr 186260.466 1540.08 7.7
201 078 201Pt 187166.699 1585.053 7.886
201 079 201Au 187163.527 1586.931 7.895
201 080 201Hg 187161.753 1587.411 7.898
201 081 201Tl 187161.724 1586.148 7.891
201 082 201Pb 187163.137 1583.441 7.878
201 083 201Bi 187166.468 1578.817 7.855
201 084 201Po 187170.848 1573.144 7.827
201 085 201At 187176.073 1566.625 7.794
201 086 201Rn 187182.281 1559.124 7.757
201 087 201Fr 187189.44 1550.672 7.715
202 079 202Au 188097.022 1593.002 7.886
202 080 202Hg 188093.565 1595.165 7.897
202 081 202Tl 188094.417 1593.02 7.886
202 082 202Pb 188093.955 1592.189 7.882
202 083 202Bi 188098.645 1586.205 7.853
202 084 202Po 188100.943 1582.614 7.835
202 085 202At 188107.765 1574.499 7.795
202 086 202Rn 188111.57 1569.4 7.769
202 087 202Fr 188120.474 1559.203 7.719
202 088 202Ra 188126.033 1552.351 7.685
203 079 203Au 189029.773 1599.816 7.881
203 080 203Hg 189027.136 1601.16 7.887
203 081 203Tl 189026.133 1600.87 7.886
203 082 203Pb 189026.596 1599.113 7.877
203 083 203Bi 189029.332 1595.084 7.858
203 084 203Po 189033.054 1590.068 7.833
203 085 203At 189037.687 1584.142 7.804
203 086 203Rn 189043.179 1577.357 7.77
203 087 203Fr 189049.689 1569.553 7.732
203 088 203Ra 189056.957 1560.992 7.69
204 080 204Hg 189959.209 1608.652 7.886
204 081 204Tl 189959.042 1607.526 7.88
204 082 204Pb 189957.767 1607.507 7.88
204 083 204Bi 189961.699 1602.282 7.854
204 084 204Po 189963.521 1599.167 7.839
204 085 204At 189969.469 1591.925 7.804
204 086 204Rn 189972.849 1587.252 7.781
204 087 204Fr 189980.93 1577.878 7.735
204 088 204Ra 189985.865 1571.649 7.704
205 080 205Hg 190893.106 1614.32 7.875
205 081 205Tl 190891.061 1615.072 7.878
205 082 205Pb 190890.601 1614.239 7.874
205 083 205Bi 190892.798 1610.748 7.857
205 084 205Po 190895.84 1606.413 7.836
205 085 205At 190899.866 1601.094 7.81
205 086 205Rn 190904.617 1595.049 7.781
205 087 205Fr 190910.506 1587.867 7.746
205 088 205Ra 190917.145 1579.935 7.707
206 080 206Hg 191825.941 1621.051 7.869
206 081 206Tl 191824.123 1621.575 7.872
206 082 206Pb 191822.079 1622.325 7.875
206 083 206Bi 191825.326 1617.786 7.853
206 084 206Po 191826.661 1615.157 7.841
206 085 206At 191831.912 1608.613 7.809
206 086 206Rn 191834.705 1604.527 7.789
206 087 206Fr 191842.067 1595.871 7.747
206 088 206Ra 191846.364 1590.281 7.72
206 089 206Ac 191855.798 1579.554 7.668
207 080 207Hg 192762.161 1624.396 7.847
207 081 207Tl 192756.836 1628.428 7.867
207 082 207Pb 192754.907 1629.063 7.87
207 083 207Bi 192756.793 1625.883 7.855
207 084 207Po 192759.191 1622.193 7.837
207 085 207At 192762.583 1617.507 7.814
207 086 207Rn 192766.684 1612.113 7.788
207 087 207Fr 192771.964 1605.54 7.756
207 088 207Ra 192777.833 1598.377 7.722
207 089 207Ac 192784.912 1590.005 7.681
208 081 208Tl 193692.614 1632.214 7.847
208 082 208Pb 193687.104 1636.431 7.867
208 083 208Bi 193689.472 1632.77 7.85
208 084 208Po 193690.361 1630.587 7.839
208 085 208At 193694.829 1624.827 7.812
208 086 208Rn 193697.161 1621.201 7.794
208 087 208Fr 193703.628 1613.441 7.757
208 088 208Ra 193707.501 1608.275 7.732
208 089 208Ac 193716.036 1598.446 7.685
209 081 209Tl 194627.22 1637.174 7.833
209 082 209Pb 194622.732 1640.368 7.849
209 083 209Bi 194621.577 1640.23 7.848
209 084 209Po 194622.959 1637.555 7.835
209 085 209At 194625.934 1633.287 7.815
209 086 209Rn 194629.374 1628.554 7.792
209 087 209Fr 194634.023 1622.611 7.764
209 088 209Ra 194639.131 1616.21 7.733
209 089 209Ac 194645.61 1608.438 7.696
209 090 209Th 194652.759 1599.995 7.655
210 081 210Tl 195563.106 1640.854 7.814
210 082 210Pb 195557.113 1645.554 7.836
210 083 210Bi 195556.538 1644.835 7.833
210 084 210Po 195554.866 1645.214 7.834
210 085 210At 195558.336 1640.45 7.812
210 086 210Rn 195560.199 1637.294 7.797
210 087 210Fr 195565.94 1630.26 7.763
210 088 210Ra 195569.236 1625.67 7.741
210 089 210Ac 195577.054 1616.559 7.698
210 090 210Th 195581.796 1610.524 7.669
211 082 211Pb 196492.843 1649.388 7.817
211 083 211Bi 196490.966 1649.972 7.82
211 084 211Po 196489.88 1649.764 7.819
211 085 211At 196490.155 1648.197 7.811
211 086 211Rn 196492.535 1644.523 7.794
211 087 211Fr 196496.622 1639.143 7.768
211 088 211Ra 196501.105 1633.367 7.741
211 089 211Ac 196506.958 1626.22 7.707
211 090 211Th 196513.157 1618.728 7.672
212 082 212Pb 197427.281 1654.515 7.804
212 083 212Bi 197426.201 1654.303 7.803
212 084 212Po 197423.437 1655.773 7.81
212 085 212At 197424.675 1653.242 7.798
212 086 212Rn 197424.125 1652.499 7.795
212 087 212Fr 197428.736 1646.594 7.767
212 088 212Ra 197431.572 1642.465 7.747
212 089 212Ac 197438.532 1634.212 7.709
212 090 212Th 197442.832 1628.618 7.682
212 091 212Pa 197451.84 1618.317 7.634
213 082 213Pb 198363.139 1658.223 7.785
213 083 213Bi 198360.581 1659.488 7.791
213 084 213Po 198358.648 1660.128 7.794
213 085 213At 198358.211 1659.271 7.79
213 086 213Rn 198358.581 1657.608 7.782
213 087 213Fr 198360.218 1654.678 7.768
213 088 213Ra 198363.615 1649.987 7.746
213 089 213Ac 198368.896 1643.413 7.716
213 090 213Th 198374.355 1636.661 7.684
213 091 213Pa 198381.384 1628.338 7.645
214 082 214Pb 199297.636 1663.292 7.772
214 083 214Bi 199296.106 1663.528 7.773
214 084 214Po 199292.325 1666.016 7.785
214 085 214At 199292.904 1664.144 7.776
214 086 214Rn 199291.453 1664.301 7.777
214 087 214Fr 199294.304 1660.157 7.758
214 088 214Ra 199294.852 1658.316 7.749
214 089 214Ac 199300.669 1651.205 7.716
214 090 214Th 199304.441 1646.14 7.692
214 091 214Pa 199312.708 1636.58 7.648
215 083 215Bi 200230.449 1668.751 7.762
215 084 215Po 200227.749 1670.157 7.768
215 085 215At 200226.523 1670.09 7.768
215 086 215Rn 200226.098 1669.222 7.764
215 087 215Fr 200227.074 1666.952 7.753
215 088 215Ra 200228.779 1663.954 7.739
215 089 215Ac 200231.746 1659.694 7.72
215 090 215Th 200236.15 1653.996 7.693
215 091 215Pa 200242.582 1646.271 7.657
216 083 216Bi 201166.168 1672.597 7.744
216 084 216Po 201161.567 1675.905 7.759
216 085 216At 201161.529 1674.649 7.753
216 086 216Rn 201159.017 1675.868 7.759
216 087 216Fr 201161.229 1672.362 7.742
216 088 216Ra 201161.03 1671.268 7.737
216 089 216Ac 201165.351 1665.654 7.711
216 090 216Th 201167.021 1662.69 7.698
216 091 216Pa 201174.006 1654.412 7.659
217 084 217Po 202097.178 1679.859 7.741
217 085 217At 202095.162 1680.581 7.745
217 086 217Rn 202093.914 1680.536 7.744
217 087 217Fr 202094.059 1679.098 7.738
217 088 217Ra 202095.12 1676.743 7.727
217 089 217Ac 202097.429 1673.141 7.71
217 090 217Th 202100.427 1668.85 7.691
217 091 217Pa 202104.77 1663.213 7.665
217 092 217U 202109.889 1656.801 7.635
218 084 218Po 203031.129 1685.473 7.732
218 085 218At 203030.359 1684.95 7.729
218 086 218Rn 203026.966 1687.049 7.739
218 087 218Fr 203028.297 1684.425 7.727
218 088 218Ra 203027.378 1684.051 7.725
218 089 218Ac 203031.056 1679.079 7.702
218 090 218Th 203032.079 1676.763 7.692
218 091 218Pa 203037.863 1669.686 7.659
218 092 218U 203040.603 1665.652 7.641
219 085 219At 203964.151 1690.723 7.72
219 086 219Rn 203962.074 1691.507 7.724
219 087 219Fr 203961.35 1690.937 7.721
219 088 219Ra 203961.615 1689.379 7.714
219 089 219Ac 203963.28 1686.421 7.701
219 090 219Th 203965.669 1682.738 7.684
219 091 219Pa 203969.208 1677.906 7.662
219 092 219U 203973.387 1672.434 7.637
220 085 220At 204899.598 1694.841 7.704
220 086 220Rn 204895.35 1697.796 7.717
220 087 220Fr 204895.709 1696.144 7.71
220 088 220Ra 204893.988 1696.571 7.712
220 089 220Ac 204896.956 1692.31 7.692
220 090 220Th 204897.362 1690.611 7.685
220 091 220Pa 204902.562 1684.117 7.655
221 086 221Rn 205830.703 1702.008 7.701
221 087 221Fr 205828.998 1702.42 7.703
221 088 221Ra 205828.173 1701.952 7.701
221 089 221Ac 205829.218 1699.613 7.691
221 090 221Th 205831.125 1696.413 7.676
221 091 221Pa 205834.056 1692.189 7.657
222 086 222Rn 206764.099 1708.178 7.694
222 087 222Fr 206763.563 1707.42 7.691
222 088 222Ra 206761.024 1708.666 7.697
222 089 222Ac 206762.813 1705.584 7.683
222 090 222Th 206762.884 1704.219 7.677
223 087 223Fr 207697.092 1713.457 7.684
223 088 223Ra 207695.432 1713.824 7.685
223 089 223Ac 207695.512 1712.45 7.679
223 090 223Th 207696.561 1710.108 7.669
223 091 223Pa 207698.984 1706.391 7.652
223 092 223U 207701.993 1702.089 7.633
224 087 224Fr 208631.862 1718.252 7.671
224 088 224Ra 208628.518 1720.302 7.68
224 089 224Ac 208629.415 1718.112 7.67
224 090 224Th 208628.665 1717.569 7.668
224 091 224Pa 208632.028 1712.913 7.647
224 092 224U 208633.361 1710.286 7.635
225 087 225Fr 209565.506 1724.173 7.663
225 088 225Ra 209563.179 1725.207 7.668
225 089 225Ac 209562.312 1724.781 7.666
225 090 225Th 209562.473 1723.326 7.659
225 091 225Pa 209563.992 1720.514 7.647
225 092 225U 209566.518 1716.695 7.63
225 093 225Np 209570.22 1711.699 7.608
226 087 226Fr 210500.56 1728.685 7.649
226 088 226Ra 210496.348 1731.603 7.662
226 089 226Ac 210496.478 1730.18 7.656
226 090 226Th 210494.854 1730.511 7.657
226 091 226Pa 210497.179 1726.892 7.641
226 092 226U 210497.964 1724.814 7.632
227 087 227Fr 211434.334 1734.476 7.641
227 088 227Ra 211431.352 1736.165 7.648
227 089 227Ac 211429.513 1736.71 7.651
227 090 227Th 211428.957 1735.973 7.647
227 091 227Pa 211429.472 1734.165 7.639
227 092 227U 211431.151 1731.192 7.626
227 093 227Np 211434.178 1726.872 7.607
228 088 228Ra 212364.609 1742.473 7.642
228 089 228Ac 212364.052 1741.737 7.639
228 090 228Th 212361.417 1743.078 7.645
228 091 228Pa 212363.058 1740.144 7.632
228 092 228U 212362.848 1739.061 7.627
228 094 228Pu 212368.691 1730.631 7.59
229 087 229Fr 213303.492 1744.449 7.618
229 088 229Ra 213299.724 1746.923 7.628
229 089 229Ac 213297.4 1747.954 7.633
229 090 229Th 213295.726 1748.335 7.635
229 091 229Pa 213295.526 1747.241 7.63
229 092 229U 213296.328 1745.146 7.621
229 093 229Np 213298.386 1741.795 7.606
229 094 229Pu 213301.495 1737.392 7.587
230 088 230Ra 214233.173 1753.04 7.622
230 089 230Ac 214231.954 1752.965 7.622
230 090 230Th 214228.497 1755.129 7.631
230 091 230Pa 214229.297 1753.036 7.622
230 092 230U 214228.226 1752.813 7.621
230 093 230Np 214231.34 1748.406 7.602
230 094 230Pu 214232.523 1745.93 7.591
231 089 231Ac 215165.558 1758.927 7.614
231 090 231Th 215162.944 1760.247 7.62
231 091 231Pa 215162.042 1759.856 7.618
231 092 231U 215161.912 1758.693 7.613
231 093 231Np 215163.224 1756.087 7.602
231 094 231Pu 215165.368 1752.65 7.587
232 089 232Ac 216100.282 1763.768 7.602
232 090 232Th 216096.069 1766.687 7.615
232 091 232Pa 216096.058 1765.405 7.61
232 092 232U 216094.21 1765.96 7.612
232 094 232Pu 216096.943 1760.64 7.589
233 090 233Th 217030.848 1771.474 7.603
233 091 233Pa 217029.094 1771.934 7.605
233 092 233U 217028.013 1771.722 7.604
233 093 233Np 217028.532 1769.91 7.596
233 094 233Pu 217030.121 1767.028 7.584
233 096 233Cm 217036.339 1758.223 7.546
234 090 234Th 217964.223 1777.664 7.597
234 091 234Pa 217963.439 1777.155 7.595
234 092 234U 217960.734 1778.567 7.601
234 093 234Np 217962.032 1775.975 7.59
234 094 234Pu 217961.915 1774.799 7.585
234 096 234Cm 217967.267 1766.86 7.551
235 090 235Th 218899.363 1782.09 7.583
235 091 235Pa 218896.922 1783.237 7.588
235 092 235U 218895.002 1783.864 7.591
235 093 235Np 218894.615 1782.958 7.587
235 094 235Pu 218895.243 1781.036 7.579
236 091 236Pa 219831.436 1788.289 7.577
236 092 236U 219828.021 1790.41 7.586
236 093 236Np 219828.444 1788.694 7.579
236 094 236Pu 219827.456 1788.389 7.578
237 091 237Pa 220765.22 1794.07 7.57
237 092 237U 220762.461 1795.536 7.576
237 093 237Np 220761.431 1795.272 7.575
237 094 237Pu 220761.14 1794.27 7.571
238 091 238Pa 221699.844 1799.011 7.559
238 092 238U 221695.872 1801.69 7.57
238 093 238Np 221695.508 1800.76 7.566
238 094 238Pu 221693.706 1801.269 7.568
238 095 238Am 221695.45 1798.232 7.556
238 096 238Cm 221695.919 1796.469 7.548
239 092 239U 222630.631 1806.496 7.559
239 093 239Np 222628.859 1806.975 7.561
239 094 239Pu 222627.625 1806.916 7.56
239 095 239Am 222627.916 1805.331 7.554
240 092 240U 223564.266 1812.426 7.552
240 093 240Np 223563.355 1812.044 7.55
240 094 240Pu 223560.656 1813.45 7.556
240 095 240Am 223561.53 1811.282 7.547
240 096 240Cm 223561.233 1810.287 7.543
241 093 241Np 224496.794 1818.17 7.544
241 094 241Pu 224494.98 1818.691 7.546
241 095 241Am 224494.448 1817.93 7.543
241 096 241Cm 224494.705 1816.38 7.537
242 093 242Np 225431.448 1823.082 7.533
242 094 242Pu 225428.236 1825.001 7.541
242 095 242Am 225428.476 1823.467 7.535
242 096 242Cm 225427.3 1823.35 7.535
242 098 242Cf 225430.813 1817.25 7.509
243 094 243Pu 226362.767 1830.035 7.531
243 095 243Am 226361.676 1829.832 7.53
243 096 243Cm 226361.173 1829.042 7.527
243 097 243Bk 226362.169 1826.753 7.518
244 094 244Pu 227296.311 1836.056 7.525
244 095 244Am 227295.875 1835.199 7.521
244 096 244Cm 227293.937 1835.844 7.524
244 097 244Bk 227295.688 1832.799 7.511
244 098 244Cf 227295.94 1831.254 7.505
245 094 245Pu 228231.105 1840.827 7.514
245 095 245Am 228229.388 1841.251 7.515
245 096 245Cm 228227.982 1841.364 7.516
245 097 245Bk 228228.282 1839.771 7.509
245 098 245Cf 228229.342 1837.417 7.5
246 094 246Pu 229164.888 1846.61 7.507
246 095 246Am 229163.977 1846.227 7.505
246 096 246Cm 229161.09 1847.822 7.511
246 097 246Bk 229161.93 1845.688 7.503
246 098 246Cf 229161.541 1844.784 7.499
246 100 246Fm 229166.567 1837.171 7.468
247 096 247Cm 230095.499 1852.977 7.502
247 097 247Bk 230094.945 1852.238 7.499
247 098 247Cf 230095.08 1850.81 7.493
248 096 248Cm 231028.851 1859.191 7.497
248 098 248Cf 231027.677 1857.778 7.491
248 100 248Fm 231031.321 1851.547 7.466
249 096 249Cm 231963.703 1863.904 7.486
249 097 249Bk 231962.292 1864.022 7.486
249 098 249Cf 231961.657 1863.364 7.483
250 096 250Cm 232897.436 1869.736 7.479
250 097 250Bk 232896.887 1868.992 7.476
250 098 250Cf 232894.597 1869.989 7.48
250 100 250Fm 232896.477 1865.522 7.462
251 096 251Cm 233832.589 1874.149 7.467
251 097 251Bk 233830.658 1874.786 7.469
251 098 251Cf 233829.054 1875.097 7.471
251 099 251Es 233828.92 1873.938 7.466
251 100 251Fm 233829.884 1871.68 7.457
252 098 252Cf 234762.447 1881.269 7.465
252 099 252Es 234763.192 1879.231 7.457
252 100 252Fm 234762.208 1878.922 7.456
252 102 252No 234767.25 1871.293 7.426
253 098 253Cf 235697.208 1886.074 7.455
253 099 253Es 235696.41 1885.579 7.453
253 100 253Fm 235696.235 1884.46 7.448
254 098 254Cf 236630.742 1892.105 7.449
254 099 254Es 236630.882 1890.672 7.444
254 100 254Fm 236629.284 1890.977 7.445
254 102 254No 236632.081 1885.593 7.424
255 099 255Es 237564.473 1896.646 7.438
255 100 255Fm 237563.672 1896.154 7.436
255 101 255Md 237564.205 1894.327 7.429
255 102 255No 237565.705 1891.534 7.418
256 100 256Fm 238496.853 1902.538 7.432
256 101 256Md 238498.476 1899.622 7.42
256 102 256No 238498.169 1898.635 7.417
256 104 256Rf 238503.559 1890.659 7.385
257 100 257Fm 239431.45 1907.506 7.422
257 101 257Md 239431.347 1906.317 7.418
257 102 257No 239432.08 1904.289 7.41
258 101 258Md 240365.532 1911.696 7.41
260 106 260Sg 242240.857 1909.035 7.342
261 104 261Rf 243168.109 1923.936 7.371
264 108 264Hs 245978.832 1926.736 7.298
265 106 265Sg 246904.568 1943.152 7.333
###Markdown
Remove the trailing newline character from each line by calling .strip()
###Code
# Read the data file, skip comment lines, and print each row without its trailing newline
for row in open('index.txt').readlines():
    if not row.startswith('#'):
        print(row.strip())
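
# --- Optional follow-up sketch (an assumption, not part of the original cell) ---
# The same stripped rows could also be split into fields and converted to numbers
# for later analysis. The names A, Z, name, mass, binding, binding_per_A are
# illustrative guesses about the columns, not definitions from the original notebook.
nuclides = []
for row in open('index.txt').readlines():
    if not row.startswith('#'):
        A, Z, name, mass, binding, binding_per_A = row.strip().split()
        nuclides.append((int(A), int(Z), name, float(mass), float(binding), float(binding_per_A)))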
###Output
001 001 1H 938.272 0 0
002 001 2H 1875.613 2.225 1.112
003 001 3H 2808.921 8.482 2.827
003 002 3He 2808.391 7.718 2.573
004 001 4H 3751.365 5.603 1.401
004 002 4He 3727.379 28.296 7.074
004 003 4Li 3749.763 4.618 1.155
005 001 5H 4689.849 6.684 1.337
005 002 5He 4667.838 27.402 5.48
005 003 5Li 4667.617 26.33 5.266
006 001 6H 5630.313 5.786 0.964
006 002 6He 5605.537 29.268 4.878
006 003 6Li 5601.518 31.994 5.332
006 004 6Be 5605.295 26.924 4.487
007 002 7He 6545.537 28.834 4.119
007 003 7Li 6533.833 39.244 5.606
007 004 7Be 6534.184 37.6 5.371
007 005 7B 6545.773 24.718 3.531
008 002 8He 7482.528 31.408 3.926
008 003 8Li 7471.366 41.277 5.16
008 004 8Be 7454.85 56.5 7.062
008 005 8B 7472.319 37.737 4.717
008 006 8C 7483.98 24.783 3.098
009 002 9He 8423.363 30.138 3.349
009 003 9Li 8406.867 45.341 5.038
009 004 9Be 8392.75 58.165 6.463
009 005 9B 8393.307 56.314 6.257
009 006 9C 8409.291 39.037 4.337
010 002 10He 9362.728 30.339 3.034
010 003 10Li 9346.458 45.315 4.532
010 004 10Be 9325.503 64.977 6.498
010 005 10B 9324.436 64.751 6.475
010 006 10C 9327.573 60.32 6.032
010 007 10N 9350.163 36.437 3.644
011 003 11Li 10285.698 45.64 4.149
011 004 11Be 10264.564 65.481 5.953
011 005 11B 10252.547 76.205 6.928
011 006 11C 10254.018 73.44 6.676
011 007 11N 10267.157 59.008 5.364
012 004 12Be 11200.961 68.649 5.721
012 005 12B 11188.742 79.575 6.631
012 006 12C 11174.862 92.162 7.68
012 007 12N 11191.689 74.041 6.17
012 008 12O 11205.888 58.549 4.879
013 004 13Be 12140.628 68.548 5.273
013 005 13B 12123.429 84.453 6.496
013 006 13C 12109.481 97.108 7.47
013 007 13N 12111.191 94.105 7.239
013 008 13O 12128.446 75.556 5.812
014 004 14Be 13078.822 69.919 4.994
014 005 14B 13062.025 85.423 6.102
014 006 14C 13040.87 105.285 7.52
014 007 14N 13040.203 104.659 7.476
014 008 14O 13044.836 98.732 7.052
015 005 15B 13998.827 88.186 5.879
015 006 15C 13979.217 106.503 7.1
015 007 15N 13968.935 115.492 7.699
015 008 15O 13971.178 111.955 7.464
015 009 15F 13984.591 97.249 6.483
016 005 16B 14938.429 88.149 5.509
016 006 16C 14914.532 110.753 6.922
016 007 16N 14906.011 117.981 7.374
016 008 16O 14895.079 127.619 7.976
016 009 16F 14909.985 111.42 6.964
016 010 16Ne 14922.79 97.322 6.083
017 005 17B 15876.613 89.531 5.267
017 006 17C 15853.371 111.479 6.558
017 007 17N 15839.692 123.865 7.286
017 008 17O 15830.501 131.763 7.751
017 009 17F 15832.751 128.22 7.542
017 010 17Ne 15846.749 112.928 6.643
018 006 18C 16788.756 115.66 6.426
018 007 18N 16776.429 126.693 7.039
018 008 18O 16762.023 139.807 7.767
018 009 18F 16763.167 137.369 7.632
018 010 18Ne 16767.099 132.143 7.341
018 011 18Na 16785.461 112.488 6.249
019 006 19C 17727.74 116.241 6.118
019 007 19N 17710.671 132.017 6.948
019 008 19O 17697.633 143.761 7.566
019 009 19F 17692.3 147.801 7.779
019 010 19Ne 17695.028 143.78 7.567
019 011 19Na 17705.692 131.822 6.938
019 012 19Mg 17725.294 110.927 5.838
020 006 20C 18664.374 119.172 5.959
020 007 20N 18648.073 134.18 6.709
020 008 20O 18629.59 151.37 7.569
020 009 20F 18625.264 154.403 7.72
020 010 20Ne 18617.728 160.645 8.032
020 011 20Na 18631.107 145.973 7.299
020 012 20Mg 18641.318 134.468 6.723
021 007 21N 19583.047 138.771 6.608
021 008 21O 19565.349 155.176 7.389
021 009 21F 19556.728 162.504 7.738
021 010 21Ne 19550.533 167.406 7.972
021 011 21Na 19553.569 163.076 7.766
021 012 21Mg 19566.153 149.199 7.105
022 007 22N 20521.331 140.053 6.366
022 008 22O 20498.06 162.03 7.365
022 009 22F 20491.062 167.735 7.624
022 010 22Ne 20479.734 177.77 8.08
022 011 22Na 20482.065 174.146 7.916
022 012 22Mg 20486.339 168.578 7.663
023 008 23O 21434.884 164.772 7.164
023 009 23F 21423.093 175.269 7.62
023 010 23Ne 21414.098 182.971 7.955
023 011 23Na 21409.211 186.564 8.111
023 012 23Mg 21412.757 181.726 7.901
023 013 23Al 21424.489 168.7 7.335
024 008 24O 22370.838 168.383 7.016
024 009 24F 22358.817 179.111 7.463
024 010 24Ne 22344.795 191.84 7.993
024 011 24Na 22341.817 193.524 8.064
024 012 24Mg 22335.791 198.257 8.261
024 013 24Al 22349.156 183.598 7.65
024 014 24Si 22359.457 172.004 7.167
025 009 25F 23294.021 183.472 7.339
025 010 25Ne 23280.132 196.068 7.843
025 011 25Na 23272.372 202.535 8.101
025 012 25Mg 23268.026 205.588 8.224
025 013 25Al 23271.791 200.529 8.021
025 014 25Si 23284.02 187.006 7.48
026 009 26F 24232.515 184.543 7.098
026 010 26Ne 24214.164 201.601 7.754
026 011 26Na 24206.361 208.111 8.004
026 012 26Mg 24196.498 216.681 8.334
026 013 26Al 24199.991 211.894 8.15
026 014 26Si 24204.545 206.047 7.925
027 009 27F 25170.669 185.955 6.887
027 010 27Ne 25152.298 203.032 7.52
027 011 27Na 25139.2 214.837 7.957
027 012 27Mg 25129.62 223.124 8.264
027 013 27Al 25126.499 224.952 8.332
027 014 27Si 25130.8 219.357 8.124
027 015 27P 25141.956 206.908 7.663
028 010 28Ne 26087.962 206.934 7.39
028 011 28Na 26075.222 218.38 7.799
028 012 28Mg 26060.682 231.627 8.272
028 013 28Al 26058.339 232.677 8.31
028 014 28Si 26053.186 236.537 8.448
028 015 28P 26067.008 221.421 7.908
028 016 28S 26077.726 209.41 7.479
029 010 29Ne 27026.276 208.185 7.179
029 011 29Na 27010.37 222.798 7.683
029 012 29Mg 26996.575 235.299 8.114
029 013 29Al 26988.468 242.113 8.349
029 014 29Si 26984.277 245.011 8.449
029 015 29P 26988.709 239.286 8.251
029 016 29S 27001.99 224.711 7.749
030 010 30Ne 27962.81 211.216 7.041
030 011 30Na 27947.56 225.173 7.506
030 012 30Mg 27929.777 241.663 8.055
030 013 30Al 27922.305 247.841 8.261
030 014 30Si 27913.233 255.62 8.521
030 015 30P 27916.955 250.605 8.354
030 016 30S 27922.581 243.685 8.123
031 011 31Na 28883.343 228.955 7.386
031 012 31Mg 28866.965 244.04 7.872
031 013 31Al 28854.717 254.994 8.226
031 014 31Si 28846.211 262.207 8.458
031 015 31P 28844.209 262.917 8.481
031 016 31S 28849.094 256.738 8.282
031 017 31Cl 28860.557 243.981 7.87
032 011 32Na 29821.247 230.616 7.207
032 012 32Mg 29800.721 249.849 7.808
032 013 32Al 29790.105 259.172 8.099
032 014 32Si 29776.574 271.41 8.482
032 015 32P 29775.838 270.852 8.464
032 016 32S 29773.617 271.781 8.493
032 017 32Cl 29785.791 258.312 8.072
032 018 32Ar 29796.41 246.4 7.7
033 011 33Na 30758.571 232.858 7.056
033 012 33Mg 30738.064 252.071 7.639
033 013 33Al 30724.129 264.713 8.022
033 014 33Si 30711.655 275.894 8.36
033 015 33P 30705.3 280.956 8.514
033 016 33S 30704.54 280.422 8.498
033 017 33Cl 30709.612 274.057 8.305
033 018 33Ar 30720.72 261.656 7.929
034 012 34Mg 31673.474 256.227 7.536
034 013 34Al 31661.223 267.184 7.858
034 014 34Si 31643.685 283.429 8.336
034 015 34P 31638.573 287.248 8.448
034 016 34S 31632.689 291.839 8.584
034 017 34Cl 31637.67 285.565 8.399
034 018 34Ar 31643.221 278.72 8.198
035 013 35Al 32595.517 272.456 7.784
035 014 35Si 32580.776 285.903 8.169
035 015 35P 32569.768 295.619 8.446
035 016 35S 32565.268 298.825 8.538
035 017 35Cl 32564.59 298.21 8.52
035 018 35Ar 32570.045 291.461 8.327
035 019 35K 32581.412 278.801 7.966
036 013 36Al 33532.921 274.617 7.628
036 014 36Si 33514.15 292.095 8.114
036 015 36P 33505.868 299.083 8.308
036 016 36S 33494.944 308.714 8.575
036 017 36Cl 33495.576 306.79 8.522
036 018 36Ar 33494.355 306.717 8.52
036 019 36K 33506.649 293.129 8.142
036 020 36Ca 33517.124 281.361 7.816
037 013 37Al 34468.585 278.518 7.528
037 014 37Si 34451.544 294.266 7.953
037 015 37P 34438.623 305.894 8.267
037 016 37S 34430.206 313.018 8.46
037 017 37Cl 34424.83 317.101 8.57
037 018 37Ar 34425.133 315.504 8.527
037 019 37K 34430.769 308.575 8.34
037 020 37Ca 34441.897 296.154 8.004
038 013 38Al 35406.18 280.49 7.381
038 014 38Si 35385.549 299.827 7.89
038 015 38P 35374.348 309.735 8.151
038 016 38S 35361.736 321.054 8.449
038 017 38Cl 35358.287 323.208 8.505
038 018 38Ar 35352.86 327.343 8.614
038 019 38K 35358.263 320.646 8.438
038 020 38Ca 35364.494 313.122 8.24
039 013 39Al 36343.024 283.211 7.262
039 014 39Si 36323.043 301.899 7.741
039 015 39P 36307.732 315.916 8.1
039 016 39S 36296.931 325.424 8.344
039 017 39Cl 36289.779 331.282 8.494
039 018 39Ar 36285.827 333.941 8.563
039 019 39K 36284.751 333.724 8.557
039 020 39Ca 36290.772 326.409 8.369
039 021 39Sc 36303.368 312.52 8.013
040 014 40Si 37258.077 306.43 7.661
040 015 40P 37243.986 319.228 7.981
040 016 40S 37228.715 333.205 8.33
040 017 40Cl 37223.514 337.113 8.428
040 018 40Ar 37215.523 343.811 8.595
040 019 40K 37216.516 341.524 8.538
040 020 40Ca 37214.694 342.052 8.551
040 021 40Sc 37228.506 326.947 8.174
040 022 40Ti 37239.669 314.491 7.862
041 014 41Si 38197.661 306.411 7.473
041 015 41P 38178.31 324.469 7.914
041 016 41S 38164.059 337.427 8.23
041 017 41Cl 38155.258 344.934 8.413
041 018 41Ar 38148.989 349.91 8.534
041 019 41K 38145.986 351.619 8.576
041 020 41Ca 38145.897 350.415 8.547
041 021 41Sc 38151.881 343.137 8.369
042 015 42P 39116.024 326.32 7.77
042 016 42S 39096.893 344.158 8.194
042 017 42Cl 39089.152 350.606 8.348
042 018 42Ar 39079.128 359.336 8.556
042 019 42K 39078.018 359.153 8.551
042 020 42Ca 39073.981 361.896 8.617
042 021 42Sc 39079.896 354.688 8.445
042 022 42Ti 39086.385 346.906 8.26
043 015 43P 40052.348 329.562 7.664
043 016 43S 40034.097 346.519 8.059
043 017 43Cl 40021.386 357.937 8.324
043 018 43Ar 40013.035 364.995 8.488
043 019 43K 40007.941 368.795 8.577
043 020 43Ca 40005.614 369.829 8.601
043 021 43Sc 40007.324 366.826 8.531
043 022 43Ti 40013.68 359.176 8.353
044 016 44S 40968.441 351.741 7.994
044 017 44Cl 40956.82 362.068 8.229
044 018 44Ar 40943.865 373.729 8.494
044 019 44K 40940.218 376.084 8.547
044 020 44Ca 40934.048 380.96 8.658
044 021 44Sc 40937.189 376.525 8.557
044 022 44Ti 40936.946 375.475 8.534
044 023 44V 40949.864 361.264 8.211
045 016 45S 41905.805 353.942 7.865
045 017 45Cl 41890.184 368.27 8.184
045 018 45Ar 41878.262 378.898 8.42
045 019 45K 41870.914 384.953 8.555
045 020 45Ca 41866.199 388.375 8.631
045 021 45Sc 41865.432 387.848 8.619
045 022 45Ti 41866.983 385.004 8.556
045 023 45V 41873.598 377.096 8.38
045 024 45Cr 41885.997 363.403 8.076
046 017 46Cl 42825.328 372.691 8.102
046 018 46Ar 42809.807 386.919 8.411
046 019 46K 42803.598 391.834 8.518
046 020 46Ca 42795.37 398.769 8.669
046 021 46Sc 42796.237 396.609 8.622
046 022 46Ti 42793.359 398.193 8.656
046 023 46V 42799.899 390.36 8.486
046 024 46Cr 42806.987 381.979 8.304
047 018 47Ar 43745.111 391.18 8.323
047 019 47K 43734.814 400.184 8.515
047 020 47Ca 43727.659 406.045 8.639
047 021 47Sc 43725.156 407.255 8.665
047 022 47Ti 43724.044 407.073 8.661
047 023 47V 43726.464 403.36 8.582
047 024 47Cr 43733.397 395.134 8.407
048 019 48K 44669.88 404.683 8.431
048 020 48Ca 44657.279 415.991 8.666
048 021 48Sc 44656.486 415.49 8.656
048 022 48Ti 44651.983 418.7 8.723
048 023 48V 44655.484 413.905 8.623
048 024 48Cr 44656.63 411.466 8.572
048 025 48Mn 44669.618 397.185 8.275
049 019 49K 45603.178 410.95 8.387
049 020 49Ca 45591.698 421.137 8.595
049 021 49Sc 45585.924 425.618 8.686
049 022 49Ti 45583.406 426.842 8.711
049 023 49V 45583.497 425.458 8.683
049 024 49Cr 45585.612 422.049 8.613
049 025 49Mn 45592.816 413.552 8.44
050 019 50K 46539.642 414.052 8.281
050 020 50Ca 46524.91 427.49 8.55
050 021 50Sc 46519.433 431.674 8.633
050 022 50Ti 46512.032 437.781 8.756
050 023 50V 46513.726 434.794 8.696
050 024 50Cr 46512.177 435.049 8.701
050 025 50Mn 46519.299 426.634 8.533
050 026 50Fe 46526.935 417.705 8.354
051 020 51Ca 47460.115 431.851 8.468
051 021 51Sc 47452.246 438.426 8.597
051 022 51Ti 47445.225 444.154 8.709
051 023 51V 47442.24 445.845 8.742
051 024 51Cr 47442.482 444.31 8.712
051 025 51Mn 47445.178 440.32 8.634
051 026 51Fe 47452.687 431.519 8.461
052 020 52Ca 48394.959 436.572 8.396
052 021 52Sc 48386.598 443.639 8.532
052 022 52Ti 48376.982 451.962 8.692
052 023 52V 48374.494 453.156 8.715
052 024 52Cr 48370.008 456.349 8.776
052 025 52Mn 48374.208 450.856 8.67
052 026 52Fe 48376.071 447.7 8.61
053 022 53Ti 49311.111 457.398 8.63
053 023 53V 49305.581 461.635 8.71
053 024 53Cr 49301.634 464.289 8.76
053 025 53Mn 49301.72 462.909 8.734
053 026 53Fe 49304.951 458.384 8.649
053 027 53Co 49312.741 449.302 8.477
054 021 54Sc 50255.726 453.642 8.401
054 022 54Ti 50243.845 464.23 8.597
054 023 54V 50239.033 467.748 8.662
054 024 54Cr 50231.48 474.008 8.778
054 025 54Mn 50232.346 471.848 8.738
054 026 54Fe 50231.138 471.763 8.736
054 027 54Co 50238.87 462.738 8.569
054 028 54Ni 50247.159 453.156 8.392
055 021 55Sc 51191.86 457.073 8.31
055 022 55Ti 51179.259 468.381 8.516
055 023 55V 51171.268 475.079 8.638
055 024 55Cr 51164.799 480.254 8.732
055 025 55Mn 51161.685 482.075 8.765
055 026 55Fe 51161.405 481.061 8.747
055 027 55Co 51164.346 476.827 8.67
055 028 55Ni 51172.527 467.353 8.497
056 022 56Ti 52113.483 473.722 8.459
056 023 56V 52105.832 480.08 8.573
056 024 56Cr 52096.12 488.499 8.723
056 025 56Mn 52093.98 489.345 8.738
056 026 56Fe 52089.773 492.258 8.79
056 027 56Co 52093.828 486.91 8.695
056 028 56Ni 52095.453 483.992 8.643
057 022 57Ti 53050.377 476.394 8.358
057 023 57V 53039.216 486.261 8.531
057 024 57Cr 53030.371 493.813 8.663
057 025 57Mn 53024.897 497.994 8.737
057 026 57Fe 53021.693 499.905 8.77
057 027 57Co 53022.018 498.286 8.742
057 028 57Ni 53024.769 494.242 8.671
057 029 57Cu 53033.03 484.687 8.503
058 023 58V 53974.69 490.353 8.454
058 024 58Cr 53962.559 501.19 8.641
058 025 58Mn 53957.968 504.488 8.698
058 026 58Fe 53951.213 509.949 8.792
058 027 58Co 53953.01 506.859 8.739
058 028 58Ni 53952.117 506.459 8.732
058 029 58Cu 53960.172 497.111 8.571
058 030 58Zn 53969.023 486.966 8.396
059 023 59V 54909.324 495.284 8.395
059 024 59Cr 54897.993 505.322 8.565
059 025 59Mn 54889.892 512.129 8.68
059 026 59Fe 54884.198 516.53 8.755
059 027 59Co 54882.121 517.313 8.768
059 028 59Ni 54882.683 515.458 8.737
059 029 59Cu 54886.971 509.877 8.642
059 030 59Zn 54895.557 499.998 8.475
060 023 60V 55845.308 498.865 8.314
060 024 60Cr 55830.877 512.003 8.533
060 025 60Mn 55823.686 517.901 8.632
060 026 60Fe 55814.943 525.35 8.756
060 027 60Co 55814.195 524.805 8.747
060 028 60Ni 55810.861 526.846 8.781
060 029 60Cu 55816.478 519.935 8.666
060 030 60Zn 55820.123 514.997 8.583
061 024 61Cr 56766.691 515.754 8.455
061 025 61Mn 56756.8 524.352 8.596
061 026 61Fe 56748.928 530.931 8.704
061 027 61Co 56744.439 534.126 8.756
061 028 61Ni 56742.606 534.666 8.765
061 029 61Cu 56744.332 531.646 8.716
061 030 61Zn 56749.46 525.225 8.61
061 031 61Ga 56758.204 515.188 8.446
062 024 62Cr 57699.955 522.056 8.42
062 025 62Mn 57691.814 528.903 8.531
062 026 62Fe 57680.442 538.982 8.693
062 027 62Co 57677.4 540.731 8.721
062 028 62Ni 57671.575 545.262 8.795
062 029 62Cu 57675.012 540.532 8.718
062 030 62Zn 57676.128 538.123 8.679
062 031 62Ga 57684.788 528.169 8.519
063 025 63Mn 58624.998 535.285 8.497
063 026 63Fe 58615.287 543.702 8.63
063 027 63Co 58608.486 549.21 8.718
063 028 63Ni 58604.302 552.1 8.763
063 029 63Cu 58603.724 551.385 8.752
063 030 63Zn 58606.58 547.236 8.686
063 031 63Ga 58611.735 540.788 8.584
064 025 64Mn 59560.222 539.626 8.432
064 026 64Fe 59547.561 550.994 8.609
064 027 64Co 59542.027 555.234 8.676
064 028 64Ni 59534.21 561.758 8.777
064 029 64Cu 59535.374 559.301 8.739
064 030 64Zn 59534.283 559.098 8.736
064 031 64Ga 59540.942 551.146 8.612
064 032 64Ge 59544.915 545.88 8.529
065 025 65Mn 60493.666 545.747 8.396
065 026 65Fe 60482.945 555.175 8.541
065 027 65Co 60474.144 562.683 8.657
065 028 65Ni 60467.677 567.856 8.736
065 029 65Cu 60465.028 569.212 8.757
065 030 65Zn 60465.869 567.077 8.724
065 031 65Ga 60468.613 563.04 8.662
065 032 65Ge 60474.349 556.011 8.554
066 026 66Fe 61415.749 561.936 8.514
066 027 66Co 61408.698 567.694 8.601
066 028 66Ni 61398.291 576.808 8.74
066 029 66Cu 61397.528 576.278 8.731
066 030 66Zn 61394.375 578.136 8.76
066 031 66Ga 61399.04 572.179 8.669
066 032 66Ge 61400.633 569.292 8.626
066 033 66As 61410.242 558.39 8.46
067 026 67Fe 62351.123 566.128 8.45
067 027 67Co 62341.242 574.715 8.578
067 028 67Ni 62332.048 582.616 8.696
067 029 67Cu 62327.961 585.409 8.737
067 030 67Zn 62326.889 585.189 8.734
067 031 67Ga 62327.378 583.406 8.708
067 032 67Ge 62331.089 578.402 8.633
067 033 67As 62336.586 571.611 8.532
068 026 68Fe 63285.177 571.639 8.406
068 027 68Co 63276.446 579.077 8.516
068 028 68Ni 63263.821 590.408 8.682
068 029 68Cu 63261.207 591.729 8.702
068 030 68Zn 63256.256 595.387 8.756
068 031 68Ga 63258.666 591.683 8.701
068 032 68Ge 63258.261 590.795 8.688
068 033 68As 63265.83 581.933 8.558
068 034 68Se 63270.009 576.46 8.477
069 027 69Co 64209.29 585.798 8.49
069 028 69Ni 64198.8 594.995 8.623
069 029 69Cu 64192.532 599.969 8.695
069 030 69Zn 64189.339 601.869 8.723
069 031 69Ga 64187.918 601.996 8.725
069 032 69Ge 64189.634 598.987 8.681
069 033 69As 64193.134 594.194 8.612
069 034 69Se 64199.413 586.622 8.502
070 027 70Co 65145.144 589.509 8.422
070 028 70Ni 65131.123 602.237 8.603
070 029 70Cu 65126.786 605.281 8.647
070 030 70Zn 65119.686 611.087 8.73
070 031 70Ga 65119.83 609.65 8.709
070 032 70Ge 65117.666 610.521 8.722
070 033 70As 65123.378 603.515 8.622
070 034 70Se 65125.157 600.443 8.578
071 027 71Co 66078.408 595.811 8.392
071 028 71Ni 66066.567 606.358 8.54
071 029 71Cu 66058.545 613.087 8.635
071 030 71Zn 66053.418 616.921 8.689
071 031 71Ga 66050.094 618.951 8.718
071 032 71Ge 66049.815 617.937 8.703
071 033 71As 66051.318 615.141 8.664
071 034 71Se 66055.581 609.584 8.586
071 035 71Br 66061.13 602.742 8.489
071 036 71Kr 66070.759 591.82 8.335
072 028 72Ni 66999.321 613.169 8.516
072 029 72Cu 66992.967 618.23 8.587
072 030 72Zn 66984.108 625.796 8.692
072 031 72Ga 66983.139 625.472 8.687
072 032 72Ge 66978.631 628.686 8.732
072 033 72As 66982.476 623.548 8.66
072 034 72Se 66982.301 622.429 8.645
072 035 72Br 66990.664 612.773 8.511
072 036 72Kr 66995.232 606.912 8.429
073 029 73Cu 67925.257 625.505 8.569
073 030 73Zn 67918.323 631.146 8.646
073 031 73Ga 67913.523 634.653 8.694
073 032 73Ge 67911.413 635.469 8.705
073 033 73As 67911.243 634.346 8.69
073 034 73Se 67913.471 630.825 8.641
073 035 73Br 67917.548 625.454 8.568
073 036 73Kr 67924.115 617.594 8.46
074 029 74Cu 68859.732 630.596 8.522
074 030 74Zn 68849.517 639.517 8.642
074 031 74Ga 68846.666 641.075 8.663
074 032 74Ge 68840.783 645.665 8.725
074 033 74As 68842.834 642.32 8.68
074 034 74Se 68840.97 642.891 8.688
074 035 74Br 68847.366 635.202 8.584
074 036 74Kr 68849.83 631.445 8.533
074 037 74Rb 68859.733 620.248 8.382
075 029 75Cu 69793.112 636.781 8.49
075 030 75Zn 69784.251 644.349 8.591
075 031 75Ga 69777.745 649.561 8.661
075 032 75Ge 69773.843 652.171 8.696
075 033 75As 69772.156 652.564 8.701
075 034 75Se 69772.508 650.918 8.679
075 035 75Br 69775.027 647.106 8.628
075 036 75Kr 69779.331 641.509 8.553
075 037 75Rb 69785.922 633.624 8.448
075 038 75Sr 69796.013 622.24 8.297
076 029 76Cu 70727.75 641.708 8.444
076 030 76Zn 70716.075 652.09 8.58
076 031 76Ga 70711.407 655.464 8.625
076 032 76Ge 70703.98 661.598 8.705
076 033 76As 70704.393 659.893 8.683
076 034 76Se 70700.919 662.073 8.711
076 035 76Br 70705.371 656.327 8.636
076 036 76Kr 70706.135 654.27 8.609
076 037 76Rb 70714.158 644.954 8.486
076 038 76Sr 70719.887 637.931 8.394
077 030 77Zn 71650.989 656.741 8.529
077 031 77Ga 71643.206 663.231 8.613
077 032 77Ge 71637.473 667.671 8.671
077 033 77As 71634.259 669.591 8.696
077 034 77Se 71633.065 669.492 8.695
077 035 77Br 71633.919 667.345 8.667
077 036 77Kr 71636.474 663.497 8.617
077 037 77Rb 71641.307 657.37 8.537
077 038 77Sr 71647.817 649.567 8.436
078 030 78Zn 72583.863 663.433 8.506
078 031 78Ga 72576.985 669.017 8.577
078 032 78Ge 72568.319 676.39 8.672
078 033 78As 72566.853 676.563 8.674
078 034 78Se 72562.133 679.99 8.718
078 035 78Br 72565.196 675.633 8.662
078 036 78Kr 72563.957 675.578 8.661
078 037 78Rb 72570.69 667.552 8.558
078 038 78Sr 72573.941 663.008 8.5
079 031 79Ga 73509.676 675.892 8.556
079 032 79Ge 73502.185 682.089 8.634
079 033 79As 73497.527 685.454 8.677
079 034 79Se 73494.735 686.952 8.696
079 035 79Br 73494.074 686.321 8.688
079 036 79Kr 73495.188 683.913 8.657
079 037 79Rb 73498.317 679.491 8.601
079 038 79Sr 73503.132 673.382 8.524
079 039 79Y 73509.738 665.483 8.424
080 030 80Zn 74452.351 674.075 8.426
080 031 80Ga 74444.54 680.593 8.507
080 032 80Ge 74433.654 690.186 8.627
080 033 80As 74430.499 692.047 8.651
080 034 80Se 74424.387 696.866 8.711
080 035 80Br 74425.747 694.213 8.678
080 036 80Kr 74423.233 695.434 8.693
080 037 80Rb 74428.441 688.932 8.612
080 038 80Sr 74429.795 686.285 8.579
080 039 80Y 74438.372 676.414 8.455
080 040 80Zr 74443.561 669.932 8.374
081 031 81Ga 75377.194 687.504 8.488
081 032 81Ge 75368.363 695.042 8.581
081 033 81As 75361.619 700.493 8.648
081 034 81Se 75357.252 703.567 8.686
081 035 81Br 75355.155 704.37 8.696
081 036 81Kr 75354.925 703.307 8.683
081 037 81Rb 75356.653 700.285 8.645
081 038 81Sr 75360.069 695.576 8.587
081 039 81Y 75365.066 689.286 8.51
081 040 81Zr 75372.085 680.973 8.407
082 032 82Ge 76300.537 702.433 8.566
082 033 82As 76295.326 706.351 8.614
082 034 82Se 76287.541 712.843 8.693
082 035 82Br 76287.128 711.963 8.682
082 036 82Kr 76283.524 714.274 8.711
082 037 82Rb 76287.414 709.09 8.647
082 038 82Sr 76287.083 708.127 8.636
082 039 82Y 76294.39 699.527 8.531
083 033 83As 77227.26 713.982 8.602
083 034 83Se 77221.288 718.661 8.659
083 035 83Br 77217.109 721.547 8.693
083 036 83Kr 77215.625 721.737 8.696
083 037 83Rb 77216.021 720.048 8.675
083 038 83Sr 77217.79 716.986 8.638
083 039 83Y 77221.744 711.738 8.575
083 040 83Zr 77227.103 705.086 8.495
083 041 83Nb 77234.092 696.804 8.395
084 034 84Se 78152.171 727.343 8.659
084 035 84Br 78149.813 728.408 8.672
084 036 84Kr 78144.67 732.258 8.717
084 037 84Rb 78146.84 728.794 8.676
084 038 84Sr 78145.435 728.906 8.677
084 039 84Y 78151.408 721.64 8.591
085 034 85Se 79087.189 731.891 8.61
085 035 85Br 79080.496 737.29 8.674
085 036 85Kr 79077.115 739.378 8.699
085 037 85Rb 79075.917 739.283 8.697
085 038 85Sr 79076.471 737.436 8.676
085 039 85Y 79079.22 733.393 8.628
085 040 85Zr 79083.401 727.919 8.564
085 041 85Nb 79088.89 721.136 8.484
086 034 86Se 80020.57 738.075 8.582
086 035 86Br 80014.96 742.392 8.632
086 036 86Kr 80006.824 749.235 8.712
086 037 86Rb 80006.831 747.934 8.697
086 038 86Sr 80004.544 748.928 8.708
086 039 86Y 80009.272 742.906 8.638
086 040 86Zr 80010.245 740.64 8.612
086 041 86Nb 80017.704 731.888 8.51
086 042 86Mo 80022.463 725.835 8.44
087 034 87Se 80956.025 742.185 8.531
087 035 87Br 80948.237 748.68 8.606
087 036 87Kr 80940.874 754.75 8.675
087 037 87Rb 80936.474 757.856 8.711
087 038 87Sr 80935.681 757.356 8.705
087 039 87Y 80937.031 754.712 8.675
087 040 87Zr 80940.191 750.259 8.624
087 041 87Nb 80944.848 744.309 8.555
087 042 87Mo 80950.827 737.037 8.472
088 034 88Se 81890.219 747.557 8.495
088 035 88Br 81882.858 753.624 8.564
088 036 88Kr 81873.385 761.804 8.657
088 037 88Rb 81869.957 763.939 8.681
088 038 88Sr 81864.133 768.469 8.733
088 039 88Y 81867.245 764.064 8.683
088 040 88Zr 81867.41 762.606 8.666
088 041 88Nb 81874.452 754.27 8.571
088 042 88Mo 81877.311 750.118 8.524
089 035 89Br 82816.512 759.536 8.534
089 036 89Kr 82807.841 766.913 8.617
089 037 89Rb 82802.347 771.114 8.664
089 038 89Sr 82797.34 774.828 8.706
089 039 89Y 82795.336 775.538 8.714
089 040 89Zr 82797.658 771.923 8.673
089 041 89Nb 82801.366 766.922 8.617
089 042 89Mo 82806.501 760.493 8.545
090 035 90Br 83751.956 763.657 8.485
090 036 90Kr 83741.095 773.225 8.591
090 037 90Rb 83736.192 776.834 8.631
090 038 90Sr 83729.102 782.631 8.696
090 039 90Y 83728.045 782.395 8.693
090 040 90Zr 83725.254 783.893 8.71
090 041 90Nb 83730.854 776.999 8.633
090 042 90Mo 83732.832 773.728 8.597
090 043 90Tc 83741.278 763.988 8.489
091 035 91Br 84686.56 768.618 8.446
091 036 91Kr 84676.249 777.636 8.545
091 037 91Rb 84669.303 783.289 8.608
091 038 91Sr 84662.892 788.406 8.664
091 039 91Y 84659.681 790.324 8.685
091 040 91Zr 84657.625 791.087 8.693
091 041 91Nb 84658.372 789.046 8.671
091 042 91Mo 84662.289 783.836 8.614
091 043 91Tc 84668.002 776.83 8.537
092 035 92Br 85622.984 771.76 8.389
092 036 92Kr 85610.268 783.182 8.513
092 037 92Rb 85603.77 788.387 8.569
092 038 92Sr 85595.163 795.701 8.649
092 039 92Y 85592.707 796.863 8.662
092 040 92Zr 85588.555 799.722 8.693
092 041 92Nb 85590.05 796.934 8.662
092 042 92Mo 85589.182 796.508 8.658
092 043 92Tc 85596.541 787.856 8.564
093 036 93Kr 86546.527 786.488 8.457
093 037 93Rb 86537.418 794.304 8.541
093 038 93Sr 86529.44 800.989 8.613
093 039 93Y 86524.791 804.344 8.649
093 040 93Zr 86521.386 806.456 8.672
093 041 93Nb 86520.784 805.765 8.664
093 042 93Mo 86520.678 804.577 8.651
093 043 93Tc 86523.367 800.595 8.609
093 044 93Ru 86529.189 793.48 8.532
094 037 94Rb 87472.977 798.31 8.493
094 038 94Sr 87462.179 807.815 8.594
094 039 94Y 87458.16 810.541 8.623
094 040 94Zr 87452.73 814.677 8.667
094 041 94Nb 87453.122 812.993 8.649
094 042 94Mo 87450.566 814.256 8.662
094 043 94Tc 87454.31 809.217 8.609
094 044 94Ru 87455.385 806.849 8.584
095 037 95Rb 88407.17 803.683 8.46
095 038 95Sr 88397.396 812.163 8.549
095 039 95Y 88390.795 817.471 8.605
095 040 95Zr 88385.833 821.14 8.644
095 041 95Nb 88384.198 821.481 8.647
095 042 95Mo 88382.762 821.625 8.649
095 043 95Tc 88383.941 819.152 8.623
095 044 95Ru 88385.997 815.802 8.587
095 045 95Rh 88390.596 809.91 8.525
096 037 96Rb 89343.293 807.125 8.408
096 038 96Sr 89331.068 818.057 8.521
096 039 96Y 89325.149 822.682 8.57
096 040 96Zr 89317.542 828.996 8.635
096 041 96Nb 89316.87 828.375 8.629
096 042 96Mo 89313.173 830.779 8.654
096 043 96Tc 89315.635 827.023 8.615
096 044 96Ru 89314.869 826.496 8.609
096 045 96Rh 89320.751 819.32 8.535
096 046 96Pd 89323.689 815.089 8.491
097 037 97Rb 90277.652 812.331 8.375
097 038 97Sr 90266.713 821.977 8.474
097 039 97Y 90258.732 828.665 8.543
097 040 97Zr 90251.533 834.571 8.604
097 041 97Nb 90248.363 836.448 8.623
097 042 97Mo 90245.917 837.6 8.635
097 043 97Tc 90245.726 836.497 8.624
097 044 97Ru 90246.323 834.607 8.604
097 045 97Rh 90249.334 830.303 8.56
097 046 97Pd 90253.613 824.73 8.502
097 047 97Ag 90260.082 816.968 8.422
098 037 98Rb 91213.286 816.263 8.329
098 038 98Sr 91200.349 827.906 8.448
098 039 98Y 91194.017 832.945 8.499
098 040 98Zr 91184.686 840.983 8.581
098 041 98Nb 91181.933 842.442 8.596
098 042 98Mo 91176.84 846.243 8.635
098 043 98Tc 91178.012 843.777 8.61
098 044 98Ru 91175.705 844.79 8.62
098 045 98Rh 91180.243 838.959 8.561
098 046 98Pd 91181.607 836.302 8.534
098 047 98Ag 91189.336 827.279 8.442
098 048 98Cd 91194.255 821.067 8.378
099 037 99Rb 92148.12 820.994 8.293
099 038 99Sr 92136.299 831.522 8.399
099 039 99Y 92127.777 838.75 8.472
099 040 99Zr 92119.699 845.535 8.541
099 041 99Nb 92114.629 849.312 8.579
099 042 99Mo 92110.48 852.168 8.608
099 043 99Tc 92108.611 852.743 8.614
099 044 99Ru 92107.806 852.255 8.609
099 045 99Rh 92109.338 849.429 8.58
099 046 99Pd 92112.213 845.261 8.538
099 047 99Ag 92117.13 839.051 8.475
100 038 100Sr 93069.763 837.623 8.376
100 039 100Y 93062.182 843.911 8.439
100 040 100Zr 93052.361 852.438 8.524
100 041 100Nb 93048.511 854.995 8.55
100 042 100Mo 93041.755 860.458 8.605
100 043 100Tc 93041.412 859.508 8.595
100 044 100Ru 93037.698 861.928 8.619
100 045 100Rh 93040.822 857.511 8.575
100 046 100Pd 93040.669 856.37 8.564
100 047 100Ag 93047.234 848.512 8.485
100 048 100Cd 93050.623 843.83 8.438
100 049 100In 93060.192 832.967 8.33
100 050 100Sn 93067.071 824.795 8.248
101 037 101Rb 94018.388 829.857 8.216
101 038 101Sr 94006.067 840.884 8.326
101 039 101Y 93996.056 849.602 8.412
101 040 101Zr 93986.995 857.37 8.489
101 041 101Nb 93981.002 862.069 8.535
101 042 101Mo 93975.922 865.856 8.573
101 043 101Tc 93972.586 867.899 8.593
101 044 101Ru 93970.462 868.73 8.601
101 045 101Rh 93970.492 867.406 8.588
101 046 101Pd 93971.961 864.644 8.561
101 047 101Ag 93975.658 859.653 8.511
101 048 101Cd 93980.617 853.401 8.45
102 038 102Sr 94939.891 846.626 8.3
102 039 102Y 94930.57 854.653 8.379
102 040 102Zr 94920.209 863.721 8.468
102 041 102Nb 94915.088 867.549 8.505
102 042 102Mo 94907.37 873.973 8.568
102 043 102Tc 94905.85 874.2 8.571
102 044 102Ru 94900.807 877.95 8.607
102 045 102Rh 94902.619 874.844 8.577
102 046 102Pd 94900.958 875.212 8.581
102 047 102Ag 94906.107 868.77 8.517
102 048 102Cd 94908.183 865.4 8.484
102 049 102In 94916.64 855.65 8.389
102 050 102Sn 94921.909 849.088 8.324
103 040 103Zr 95855.073 868.422 8.431
103 041 103Nb 95847.612 874.59 8.491
103 042 103Mo 95841.571 879.338 8.537
103 043 103Tc 95837.313 882.302 8.566
103 044 103Ru 95834.141 884.182 8.584
103 045 103Rh 95832.866 884.163 8.584
103 046 103Pd 95832.898 882.837 8.571
103 047 103Ag 95835.075 879.367 8.538
103 048 103Cd 95838.706 874.443 8.49
103 049 103In 95844.245 867.61 8.423
104 041 104Nb 96782.206 879.561 8.457
104 042 104Mo 96773.585 886.889 8.528
104 043 104Tc 96770.914 888.267 8.541
104 044 104Ru 96764.804 893.083 8.587
104 045 104Rh 96765.433 891.162 8.569
104 046 104Pd 96762.481 892.82 8.585
104 047 104Ag 96766.249 887.758 8.536
104 048 104Cd 96766.874 885.84 8.518
104 049 104In 96774.228 877.193 8.435
104 050 104Sn 96778.237 871.89 8.384
105 041 105Nb 97715.07 886.263 8.441
105 042 105Mo 97708.069 891.97 8.495
105 043 105Tc 97702.608 896.138 8.535
105 044 105Ru 97698.459 898.994 8.562
105 045 105Rh 97696.03 900.129 8.573
105 046 105Pd 97694.952 899.914 8.571
105 047 105Ag 97695.786 897.787 8.55
105 048 105Cd 97698.013 894.266 8.517
105 049 105In 97702.351 888.635 8.463
105 050 105Sn 97708.061 881.632 8.396
105 051 105Sb 97716.99 871.409 8.299
106 042 106Mo 98640.648 898.957 8.481
106 043 106Tc 98636.617 901.694 8.507
106 044 106Ru 98629.559 907.459 8.561
106 045 106Rh 98629.009 906.716 8.554
106 046 106Pd 98624.957 909.474 8.58
106 047 106Ag 98627.411 905.727 8.545
106 048 106Cd 98626.705 905.14 8.539
106 049 106In 98632.72 897.831 8.47
106 050 106Sn 98635.385 893.873 8.433
106 052 106Te 98653.583 873.088 8.237
107 042 107Mo 99575.457 903.713 8.446
107 043 107Tc 99568.786 909.091 8.496
107 044 107Ru 99563.455 913.128 8.534
107 045 107Rh 99560.001 915.289 8.554
107 046 107Pd 99557.985 916.012 8.561
107 047 107Ag 99557.44 915.263 8.554
107 048 107Cd 99558.346 913.064 8.533
107 049 107In 99561.26 908.857 8.494
107 050 107Sn 99565.729 903.094 8.44
108 043 108Tc 100503.43 914.012 8.463
108 044 108Ru 100495.199 920.95 8.527
108 045 108Rh 100493.338 921.517 8.533
108 046 108Pd 100488.323 925.239 8.567
108 047 108Ag 100489.734 922.535 8.542
108 048 108Cd 100487.573 923.402 8.55
108 049 108In 100492.198 917.484 8.495
108 050 108Sn 100493.762 914.627 8.469
108 052 108Te 100509.061 896.741 8.303
109 043 109Tc 101436.334 920.673 8.447
109 044 109Ru 101429.513 926.201 8.497
109 045 109Rh 101424.841 929.58 8.528
109 046 109Pd 101421.734 931.393 8.545
109 047 109Ag 101420.108 931.727 8.548
109 048 109Cd 101419.811 930.73 8.539
109 049 109In 101421.319 927.928 8.513
109 050 109Sn 101424.658 923.296 8.471
109 051 109Sb 101430.527 916.134 8.405
109 052 109Te 101438.665 906.702 8.318
109 053 109I 101448.154 895.92 8.219
110 043 110Tc 102371.408 925.165 8.411
110 044 110Ru 102361.877 933.402 8.485
110 045 110Rh 102358.566 935.42 8.504
110 046 110Pd 102352.486 940.207 8.547
110 047 110Ag 102352.864 938.536 8.532
110 048 110Cd 102349.46 940.646 8.551
110 049 110In 102352.827 935.986 8.509
110 050 110Sn 102352.947 934.572 8.496
110 052 110Te 102365.489 919.444 8.359
110 054 110Xe 102384.847 897.499 8.159
111 043 111Tc 103304.642 931.496 8.392
111 044 111Ru 103296.681 938.164 8.452
111 045 111Rh 103290.483 943.068 8.496
111 046 111Pd 103286.325 945.933 8.522
111 047 111Ag 103283.597 947.368 8.535
111 048 111Cd 103282.05 947.622 8.537
111 049 111In 103282.4 945.978 8.522
111 050 111Sn 103284.34 942.745 8.493
111 051 111Sb 103288.886 936.905 8.441
111 052 111Te 103295.784 928.715 8.367
112 043 112Tc 104239.357 936.347 8.36
112 044 112Ru 104229.366 945.045 8.438
112 045 112Rh 104224.595 948.523 8.469
112 046 112Pd 104217.488 954.336 8.521
112 047 112Ag 104216.689 953.842 8.516
112 048 112Cd 104212.221 957.016 8.545
112 049 112In 104214.295 953.649 8.515
112 050 112Sn 104213.119 953.532 8.514
112 051 112Sb 104219.668 945.69 8.444
112 052 112Te 104223.458 940.606 8.398
112 054 112Xe 104239.766 921.712 8.23
113 044 113Ru 105164.14 949.836 8.406
113 045 113Rh 105157.149 955.534 8.456
113 046 113Pd 105151.628 959.761 8.493
113 047 113Ag 105147.774 962.322 8.516
113 048 113Cd 105145.246 963.556 8.527
113 049 113In 105144.415 963.094 8.523
113 050 113Sn 105144.941 961.275 8.507
113 051 113Sb 105148.343 956.58 8.465
113 052 113Te 105153.905 949.724 8.405
113 053 113I 105160.611 941.725 8.334
113 054 113Xe 105169.14 931.903 8.247
113 055 113Cs 105179.019 920.731 8.148
114 045 114Rh 106091.693 960.555 8.426
114 046 114Pd 106083.315 967.64 8.488
114 047 114Ag 106081.352 968.309 8.494
114 048 114Cd 106075.769 972.599 8.532
114 049 114In 106076.707 970.368 8.512
114 050 114Sn 106074.207 971.574 8.523
114 051 114Sb 106079.742 964.746 8.463
114 052 114Te 106081.857 961.338 8.433
114 054 114Xe 106095.638 944.97 8.289
114 056 114Ba 106115.752 922.269 8.09
115 044 115Ru 107032.898 960.209 8.35
115 045 115Rh 107024.607 967.206 8.41
115 046 115Pd 107017.906 972.614 8.458
115 047 115Ag 107012.805 976.422 8.491
115 048 115Cd 107009.193 978.74 8.511
115 049 115In 107007.236 979.404 8.517
115 050 115Sn 107006.226 979.121 8.514
115 051 115Sb 107008.748 975.305 8.481
115 052 115Te 107013.177 969.583 8.431
115 053 115I 107018.391 963.076 8.375
115 054 115Xe 107025.561 954.612 8.301
116 045 116Rh 107959.571 971.808 8.378
116 046 116Pd 107949.84 980.245 8.45
116 047 116Ag 107946.719 982.073 8.466
116 048 116Cd 107940.059 987.44 8.512
116 049 116In 107940.017 986.188 8.502
116 050 116Sn 107936.227 988.684 8.523
116 051 116Sb 107940.424 983.195 8.476
116 052 116Te 107941.465 980.86 8.456
116 053 116I 107948.733 972.299 8.382
116 054 116Xe 107952.665 967.074 8.337
117 046 117Pd 108884.764 984.887 8.418
117 047 117Ag 108878.513 989.844 8.46
117 048 117Cd 108873.847 993.217 8.489
117 049 117In 108870.816 994.955 8.504
117 050 117Sn 108868.85 995.627 8.51
117 051 117Sb 108870.094 993.09 8.488
117 052 117Te 108873.131 988.76 8.451
117 053 117I 108877.282 983.315 8.404
117 054 117Xe 108883.021 976.283 8.344
117 055 117Cs 108890.255 967.756 8.271
118 046 118Pd 109817.318 991.898 8.406
118 047 118Ag 109812.707 995.216 8.434
118 048 118Cd 109805.057 1001.572 8.488
118 049 118In 109804.025 1001.311 8.486
118 050 118Sn 109799.087 1004.955 8.517
118 051 118Sb 109802.234 1000.515 8.479
118 052 118Te 109802.001 999.455 8.47
118 053 118I 109808.24 991.923 8.406
118 054 118Xe 109810.621 988.248 8.375
118 055 118Cs 109819.78 977.796 8.286
119 047 119Ag 110745.211 1002.277 8.422
119 048 119Cd 110739.35 1006.845 8.461
119 049 119In 110735.045 1009.856 8.486
119 050 119Sn 110732.169 1011.438 8.499
119 051 119Sb 110732.25 1010.065 8.488
119 052 119Te 110734.032 1006.989 8.462
119 053 119I 110736.939 1002.789 8.427
119 054 119Xe 110741.4 997.035 8.378
119 055 119Cs 110747.378 989.763 8.317
119 056 119Ba 110754.582 981.266 8.246
120 046 120Pd 111685.626 1002.721 8.356
120 047 120Ag 111679.615 1007.438 8.395
120 048 120Cd 111670.78 1014.98 8.458
120 049 120In 111668.503 1015.964 8.466
120 050 120Sn 111662.627 1020.546 8.505
120 051 120Sb 111664.797 1017.083 8.476
120 052 120Te 111663.305 1017.282 8.477
120 053 120I 111668.409 1010.884 8.424
120 054 120Xe 111669.516 1008.484 8.404
120 055 120Cs 111677.288 999.419 8.328
120 056 120Ba 111681.776 993.637 8.28
121 047 121Ag 112612.099 1014.52 8.384
121 048 121Cd 112605.188 1020.137 8.431
121 049 121In 112599.896 1024.136 8.464
121 050 121Sn 112596.022 1026.717 8.485
121 051 121Sb 112595.12 1026.325 8.482
121 052 121Te 112595.653 1024.499 8.467
121 053 121I 112597.406 1021.453 8.442
121 054 121Xe 112600.709 1016.856 8.404
121 055 121Cs 112605.571 1010.701 8.353
121 056 121Ba 112611.42 1003.559 8.294
122 048 122Cd 113537.012 1027.879 8.425
122 049 122In 113533.651 1029.946 8.442
122 050 122Sn 113526.774 1035.53 8.488
122 051 122Sb 113527.878 1033.132 8.468
122 052 122Te 113525.384 1034.333 8.478
122 053 122I 113529.107 1029.317 8.437
122 054 122Xe 113529.321 1027.81 8.425
122 055 122Cs 113536.025 1019.812 8.359
122 056 122Ba 113539.045 1015.499 8.324
123 048 123Cd 114471.926 1032.53 8.395
123 049 123In 114465.299 1037.864 8.438
123 050 123Sn 114460.393 1041.476 8.467
123 051 123Sb 114458.479 1042.097 8.472
123 052 123Te 114458.02 1041.263 8.466
123 053 123I 114458.738 1039.251 8.449
123 054 123Xe 114460.921 1035.775 8.421
123 055 123Cs 114464.615 1030.788 8.38
123 056 123Ba 114469.493 1024.616 8.33
124 048 124Cd 115404.02 1040.001 8.387
124 049 124In 115399.339 1043.389 8.414
124 050 124Sn 115391.471 1049.963 8.467
124 051 124Sb 115391.576 1048.565 8.456
124 052 124Te 115388.161 1050.686 8.473
124 053 124I 115390.81 1046.745 8.441
124 054 124Xe 115390.004 1046.257 8.438
124 055 124Cs 115395.422 1039.546 8.383
124 056 124Ba 115397.552 1036.123 8.356
124 057 124La 115405.871 1026.51 8.278
125 048 125Cd 116338.864 1044.723 8.358
125 049 125In 116331.233 1051.06 8.408
125 050 125Sn 116325.303 1055.696 8.446
125 051 125Sb 116322.435 1057.271 8.458
125 052 125Te 116321.157 1057.256 8.458
125 053 125I 116320.832 1056.287 8.45
125 054 125Xe 116321.966 1053.861 8.431
125 055 125Cs 116324.559 1049.974 8.4
125 056 125Ba 116328.468 1044.772 8.358
125 057 125La 116333.866 1038.081 8.305
126 048 126Cd 117271.388 1051.764 8.347
126 049 126In 117265.397 1056.462 8.385
126 050 126Sn 117256.676 1063.889 8.444
126 051 126Sb 117255.785 1063.487 8.44
126 052 126Te 117251.609 1066.369 8.463
126 053 126I 117253.252 1063.433 8.44
126 054 126Xe 117251.483 1063.909 8.444
126 055 126Cs 117255.796 1058.303 8.399
126 056 126Ba 117256.96 1055.845 8.38
126 057 126La 117264.149 1047.363 8.312
126 058 126Ce 117267.787 1042.432 8.273
127 048 127Cd 118206.692 1056.025 8.315
127 049 127In 118197.711 1063.713 8.376
127 050 127Sn 118190.691 1069.44 8.421
127 051 127Sb 118186.979 1071.858 8.44
127 052 127Te 118184.887 1072.657 8.446
127 053 127I 118183.674 1072.577 8.445
127 054 127Xe 118183.825 1071.132 8.434
127 055 127Cs 118185.395 1068.269 8.412
127 056 127Ba 118188.308 1064.063 8.378
127 057 127La 118192.717 1058.36 8.334
127 058 127Ce 118198.122 1051.662 8.281
128 048 128Cd 119139.416 1062.867 8.304
128 049 128In 119131.835 1069.154 8.353
128 050 128Sn 119122.349 1077.347 8.417
128 051 128Sb 119120.564 1077.839 8.421
128 052 128Te 119115.67 1081.439 8.449
128 053 128I 119116.413 1079.403 8.433
128 054 128Xe 119113.78 1080.743 8.443
128 055 128Cs 119117.198 1076.031 8.406
128 056 128Ba 119117.216 1074.72 8.396
128 057 128La 119123.477 1067.166 8.337
128 058 128Ce 119126.062 1063.287 8.307
128 059 128Pr 119134.754 1053.302 8.229
129 049 129In 120064.749 1075.806 8.34
129 050 129Sn 120056.584 1082.677 8.393
129 051 129Sb 120052.039 1085.929 8.418
129 052 129Te 120049.153 1087.522 8.43
129 053 129I 120047.142 1088.239 8.436
129 054 129Xe 120046.436 1087.651 8.431
129 055 129Cs 120047.123 1085.672 8.416
129 056 129Ba 120049.047 1082.454 8.391
129 057 129La 120052.275 1077.933 8.356
129 058 129Ce 120056.803 1072.112 8.311
129 059 129Pr 120062.805 1064.816 8.254
130 048 130Cd 121008.124 1073.289 8.256
130 049 130In 120999.293 1080.827 8.314
130 050 130Sn 120988.533 1090.294 8.387
130 051 130Sb 120985.869 1091.664 8.397
130 052 130Te 120980.298 1095.941 8.43
130 053 130I 120980.207 1094.74 8.421
130 054 130Xe 120976.746 1096.907 8.438
130 055 130Cs 120979.217 1093.143 8.409
130 056 130Ba 120978.344 1092.722 8.406
130 057 130La 120983.467 1086.306 8.356
130 058 130Ce 120985.161 1083.319 8.333
130 059 130Pr 120992.893 1074.294 8.264
130 060 130Nd 120996.966 1068.927 8.223
131 049 131In 121932.54 1087.145 8.299
131 050 131Sn 121922.852 1095.54 8.363
131 051 131Sb 121917.667 1099.432 8.393
131 052 131Te 121913.934 1101.871 8.411
131 053 131I 121911.188 1103.323 8.422
131 054 131Xe 121909.707 1103.512 8.424
131 055 131Cs 121909.551 1102.374 8.415
131 056 131Ba 121910.416 1100.216 8.399
131 057 131La 121912.82 1096.519 8.37
131 058 131Ce 121916.358 1091.687 8.333
131 059 131Pr 121921.287 1085.465 8.286
131 060 131Nd 121927.287 1078.172 8.23
132 049 132In 122869.751 1089.5 8.254
132 050 132Sn 122855.106 1102.851 8.355
132 051 132Sb 122851.475 1105.189 8.373
132 052 132Te 122845.456 1109.915 8.408
132 053 132I 122844.427 1109.65 8.406
132 054 132Xe 122840.335 1112.448 8.428
132 055 132Cs 122841.949 1109.541 8.406
132 056 132Ba 122840.159 1110.038 8.409
132 057 132La 122844.343 1104.561 8.368
132 058 132Ce 122845.098 1102.513 8.352
132 059 132Pr 122851.851 1094.466 8.291
132 060 132Nd 122855.124 1089.9 8.257
133 050 133Sn 123792.204 1105.319 8.311
133 051 133Sb 123783.7 1112.529 8.365
133 052 133Te 123779.187 1115.749 8.389
133 053 133I 123775.734 1117.909 8.405
133 054 133Xe 123773.466 1118.883 8.413
133 055 133Cs 123772.528 1118.528 8.41
133 056 133Ba 123772.534 1117.228 8.4
133 057 133La 123774.083 1114.386 8.379
133 058 133Ce 123776.643 1110.533 8.35
133 059 133Pr 123780.617 1105.266 8.31
133 060 133Nd 123785.714 1098.875 8.262
133 061 133Pm 123792.123 1091.173 8.204
134 050 134Sn 124727.848 1109.24 8.278
134 051 134Sb 124719.967 1115.827 8.327
134 052 134Te 124711.067 1123.434 8.384
134 053 134I 124709.043 1124.165 8.389
134 054 134Xe 124704.479 1127.435 8.414
134 055 134Cs 124705.202 1125.419 8.399
134 056 134Ba 124702.632 1126.696 8.408
134 057 134La 124705.852 1122.182 8.374
134 058 134Ce 124705.724 1121.017 8.366
134 059 134Pr 124711.539 1113.909 8.313
134 060 134Nd 124713.892 1110.262 8.286
134 061 134Pm 124722.287 1100.574 8.213
135 051 135Sb 125655.921 1119.439 8.292
135 052 135Te 125647.29 1126.776 8.346
135 053 135I 125640.819 1131.954 8.385
135 054 135Xe 125637.681 1133.799 8.399
135 055 135Cs 125636.005 1134.181 8.401
135 056 135Ba 125635.225 1133.668 8.398
135 057 135La 125635.914 1131.686 8.383
135 058 135Ce 125637.429 1128.877 8.362
135 059 135Pr 125640.607 1124.406 8.329
135 060 135Nd 125644.818 1118.902 8.288
135 061 135Pm 125650.541 1111.885 8.236
135 062 135Sm 125657.15 1103.983 8.178
136 052 136Te 126582.184 1131.448 8.319
136 053 136I 126576.603 1135.735 8.351
136 054 136Xe 126569.167 1141.878 8.396
136 055 136Cs 126568.742 1141.009 8.39
136 056 136Ba 126565.683 1142.775 8.403
136 057 136La 126568.019 1139.146 8.376
136 058 136Ce 126567.08 1138.792 8.373
136 059 136Pr 126571.71 1132.868 8.33
136 060 136Nd 126573.327 1129.958 8.309
136 061 136Pm 126580.815 1121.177 8.244
136 062 136Sm 126584.693 1116.005 8.206
137 052 137Te 127518.548 1134.649 8.282
137 053 137I 127511.094 1140.81 8.327
137 054 137Xe 127504.707 1145.903 8.364
137 055 137Cs 127500.029 1149.288 8.389
137 056 137Ba 127498.343 1149.681 8.392
137 057 137La 127498.452 1148.278 8.382
137 058 137Ce 127499.163 1146.274 8.367
137 059 137Pr 127501.354 1142.79 8.342
137 060 137Nd 127504.44 1138.41 8.31
137 061 137Pm 127509.436 1132.121 8.264
137 062 137Sm 127514.968 1125.296 8.214
138 053 138I 128446.761 1144.708 8.295
138 054 138Xe 128438.43 1151.746 8.346
138 055 138Cs 128435.182 1153.7 8.36
138 056 138Ba 128429.296 1158.293 8.393
138 057 138La 128430.522 1155.774 8.375
138 058 138Ce 128428.967 1156.035 8.377
138 059 138Pr 128432.893 1150.816 8.339
138 060 138Nd 128433.496 1148.92 8.326
138 061 138Pm 128440.063 1141.059 8.269
138 062 138Sm 128442.994 1136.835 8.238
138 063 138Eu 128452.231 1126.305 8.162
139 053 139I 129381.745 1149.289 8.268
139 054 139Xe 129374.43 1155.311 8.312
139 055 139Cs 129368.862 1159.586 8.342
139 056 139Ba 129364.138 1163.016 8.367
139 057 139La 129361.309 1164.551 8.378
139 058 139Ce 129361.078 1163.49 8.37
139 059 139Pr 129362.696 1160.578 8.349
139 060 139Nd 129365.016 1156.965 8.323
139 061 139Pm 129369.001 1151.687 8.286
139 062 139Sm 129373.606 1145.788 8.243
139 063 139Eu 129380.077 1138.024 8.187
140 054 140Xe 130308.578 1160.728 8.291
140 055 140Cs 130304.006 1164.007 8.314
140 056 140Ba 130297.275 1169.445 8.353
140 057 140La 130295.714 1169.712 8.355
140 058 140Ce 130291.441 1172.692 8.376
140 059 140Pr 130294.318 1168.522 8.347
140 060 140Nd 130294.25 1167.296 8.338
140 061 140Pm 130299.781 1160.472 8.289
140 062 140Sm 130302.024 1156.936 8.264
140 063 140Eu 130309.979 1147.687 8.198
140 064 140Gd 130314.676 1141.697 8.155
140 065 140Tb 130325.467 1129.613 8.069
141 054 141Xe 131244.732 1164.14 8.256
141 055 141Cs 131238.074 1169.504 8.294
141 056 141Ba 131232.314 1173.971 8.326
141 057 141La 131228.591 1176.401 8.343
141 058 141Ce 131225.578 1178.12 8.355
141 059 141Pr 131224.486 1177.919 8.354
141 060 141Nd 131225.798 1175.314 8.336
141 061 141Pm 131228.962 1170.856 8.304
141 062 141Sm 131233.035 1165.49 8.266
141 063 141Eu 131238.536 1158.696 8.218
141 064 141Gd 131244.728 1151.21 8.165
141 065 141Tb 131252.901 1141.744 8.097
142 054 142Xe 132179.076 1169.361 8.235
142 055 142Cs 132173.53 1173.614 8.265
142 056 142Ba 132165.711 1180.139 8.311
142 057 142La 132162.988 1181.569 8.321
142 058 142Ce 132157.973 1185.29 8.347
142 059 142Pr 132158.208 1183.762 8.336
142 060 142Nd 132155.535 1185.142 8.346
142 061 142Pm 132159.822 1179.562 8.307
142 062 142Sm 132161.475 1176.615 8.286
142 063 142Eu 132168.637 1168.16 8.226
142 064 142Gd 132172.486 1163.018 8.19
143 055 143Cs 133107.868 1178.841 8.244
143 056 143Ba 133101.092 1184.324 8.282
143 057 143La 133096.33 1187.792 8.306
143 058 143Ce 133092.394 1190.435 8.325
143 059 143Pr 133090.421 1191.114 8.329
143 060 143Nd 133088.977 1191.266 8.331
143 061 143Pm 133089.507 1189.442 8.318
143 062 143Sm 133092.439 1185.217 8.288
143 063 143Eu 133097.209 1179.153 8.246
143 064 143Gd 133102.71 1172.359 8.198
143 065 143Tb 133109.999 1163.777 8.138
144 055 144Cs 134043.763 1182.511 8.212
144 056 144Ba 134034.753 1190.228 8.265
144 057 144La 134031.121 1192.567 8.282
144 058 144Ce 134025.063 1197.331 8.315
144 059 144Pr 134024.233 1196.868 8.312
144 060 144Nd 134020.725 1199.083 8.327
144 061 144Pm 134022.546 1195.968 8.305
144 062 144Sm 134021.484 1195.737 8.304
144 063 144Eu 134027.323 1188.605 8.254
144 064 144Gd 134030.674 1183.96 8.222
144 065 144Tb 134039.555 1173.786 8.151
144 066 144Dy 134044.832 1167.216 8.106
145 055 145Cs 134978.47 1187.37 8.189
145 056 145Ba 134970.606 1193.94 8.234
145 057 145La 134964.515 1198.738 8.267
145 058 145Ce 134959.894 1202.066 8.29
145 059 145Pr 134956.851 1203.815 8.302
145 060 145Nd 134954.535 1204.838 8.309
145 061 145Pm 134954.187 1203.893 8.303
145 062 145Sm 134954.292 1202.494 8.293
145 063 145Eu 134956.441 1199.052 8.269
145 064 145Gd 134961.001 1193.199 8.229
145 065 145Tb 134967.537 1185.369 8.175
145 066 145Dy 134974.616 1176.997 8.117
146 055 146Cs 135914.401 1191.004 8.158
146 056 146Ba 135904.51 1199.602 8.216
146 057 146La 135899.879 1202.939 8.239
146 058 146Ce 135892.808 1208.717 8.279
146 059 146Pr 135891.267 1208.965 8.281
146 060 146Nd 135886.535 1212.403 8.304
146 061 146Pm 135887.495 1210.15 8.289
146 062 146Sm 135885.442 1210.91 8.294
146 063 146Eu 135888.811 1206.247 8.262
146 064 146Gd 135889.329 1204.436 8.25
146 065 146Tb 135897.141 1195.331 8.187
146 066 146Dy 135901.846 1189.332 8.146
147 055 147Cs 136849.495 1195.475 8.132
147 057 147La 136833.643 1208.741 8.223
147 058 147Ce 136827.952 1213.138 8.253
147 059 147Pr 136824.016 1215.781 8.271
147 060 147Nd 136820.808 1217.696 8.284
147 061 147Pm 136819.401 1217.809 8.284
147 062 147Sm 136818.666 1217.251 8.281
147 063 147Eu 136819.877 1214.747 8.264
147 064 147Gd 136821.553 1211.777 8.243
147 065 147Tb 136825.653 1206.384 8.207
147 066 147Dy 136831.706 1199.038 8.157
147 067 147Ho 136839.546 1189.904 8.095
148 055 148Cs 137785.709 1198.827 8.1
148 056 148Ba 137774.488 1208.754 8.167
148 057 148La 137768.857 1213.092 8.197
148 058 148Ce 137761.085 1219.571 8.24
148 059 148Pr 137758.434 1220.928 8.25
148 060 148Nd 137753.041 1225.028 8.277
148 061 148Pm 137753.071 1223.705 8.268
148 062 148Sm 137750.09 1225.392 8.28
148 063 148Eu 137752.619 1221.57 8.254
148 064 148Gd 137752.134 1220.761 8.248
148 065 148Tb 137757.359 1214.243 8.204
148 066 148Dy 137759.529 1210.78 8.181
148 067 148Ho 137768.857 1200.159 8.109
149 058 149Ce 138696.27 1223.951 8.214
149 059 149Pr 138691.399 1227.529 8.238
149 060 149Nd 138687.567 1230.067 8.255
149 061 149Pm 138685.366 1230.975 8.262
149 062 149Sm 138683.784 1231.263 8.264
149 063 149Eu 138683.968 1229.786 8.254
149 064 149Gd 138684.771 1227.69 8.24
149 065 149Tb 138687.897 1223.271 8.21
149 066 149Dy 138691.167 1218.707 8.179
149 067 149Ho 138696.683 1211.898 8.134
149 068 149Er 138704.118 1203.17 8.075
150 058 150Ce 139629.644 1230.142 8.201
150 059 150Pr 139625.649 1232.844 8.219
150 060 150Nd 139619.752 1237.448 8.25
150 061 150Pm 139619.328 1236.578 8.244
150 062 150Sm 139615.363 1239.25 8.262
150 063 150Eu 139617.112 1236.208 8.241
150 064 150Gd 139615.629 1236.397 8.243
150 065 150Tb 139619.776 1230.957 8.206
150 066 150Dy 139621.059 1228.381 8.189
150 067 150Ho 139627.917 1220.229 8.135
150 068 150Er 139631.521 1215.332 8.102
151 058 151Ce 140564.458 1234.894 8.178
151 059 151Pr 140558.676 1239.382 8.208
151 060 151Nd 140553.983 1242.782 8.23
151 061 151Pm 140551.03 1244.442 8.241
151 062 151Sm 140549.332 1244.847 8.244
151 063 151Eu 140548.744 1244.141 8.239
151 064 151Gd 140548.697 1242.895 8.231
151 065 151Tb 140550.751 1239.547 8.209
151 066 151Dy 140553.111 1235.894 8.185
151 067 151Ho 140557.727 1229.985 8.146
151 068 151Er 140562.582 1223.836 8.105
151 069 151Tm 140569.555 1215.57 8.05
151 070 151Yb 140578.286 1205.546 7.984
152 059 152Pr 141493.131 1244.493 8.187
152 060 152Nd 141486.272 1250.058 8.224
152 061 152Pm 141484.657 1250.38 8.226
152 062 152Sm 141480.639 1253.104 8.244
152 063 152Eu 141482.003 1250.448 8.227
152 064 152Gd 141479.672 1251.485 8.233
152 065 152Tb 141483.155 1246.709 8.202
152 066 152Dy 141483.24 1245.33 8.193
152 067 152Ho 141489.245 1238.032 8.145
152 068 152Er 141491.842 1234.142 8.119
152 069 152Tm 141500.061 1224.629 8.057
152 070 152Yb 141505.01 1218.387 8.016
153 059 153Pr 142426.805 1250.384 8.172
153 060 153Nd 142420.575 1255.321 8.205
153 061 153Pm 142416.728 1257.874 8.221
153 062 153Sm 142414.336 1258.973 8.229
153 063 153Eu 142413.018 1258.998 8.229
153 064 153Gd 142412.99 1257.732 8.22
153 065 153Tb 142414.049 1255.38 8.205
153 066 153Dy 142415.708 1252.428 8.186
153 067 153Ho 142419.328 1247.514 8.154
153 068 153Er 142423.348 1242.201 8.119
153 069 153Tm 142429.31 1234.946 8.072
153 071 153Lu 142443.893 1217.776 7.959
154 059 154Pr 143361.729 1255.025 8.15
154 060 154Nd 143353.728 1261.733 8.193
154 061 154Pm 143350.407 1263.76 8.206
154 062 154Sm 143345.934 1266.94 8.227
154 063 154Eu 143346.141 1265.44 8.217
154 064 154Gd 143343.661 1266.627 8.225
154 065 154Tb 143346.703 1262.291 8.197
154 066 154Dy 143345.954 1261.747 8.193
154 067 154Ho 143351.197 1255.211 8.151
154 068 154Er 143352.718 1252.396 8.132
154 069 154Tm 143360.39 1243.431 8.074
154 070 154Yb 143364.374 1238.154 8.04
155 061 155Pm 144283.431 1270.302 8.195
155 062 155Sm 144279.693 1272.747 8.211
155 063 155Eu 144277.555 1273.592 8.217
155 064 155Gd 144276.791 1273.062 8.213
155 065 155Tb 144277.103 1271.456 8.203
155 066 155Dy 144278.686 1268.58 8.184
155 067 155Ho 144281.295 1264.678 8.159
155 068 155Er 144284.609 1260.07 8.129
155 069 155Tm 144289.678 1253.708 8.088
155 070 155Yb 144295.299 1246.794 8.044
155 071 155Lu 144302.737 1238.062 7.987
156 060 156Nd 145221.876 1272.715 8.158
156 061 156Pm 145217.675 1275.623 8.177
156 062 156Sm 145212.014 1279.991 8.205
156 063 156Eu 145210.78 1279.931 8.205
156 064 156Gd 145207.82 1281.598 8.215
156 065 156Tb 145209.753 1278.372 8.195
156 066 156Dy 145208.81 1278.021 8.192
156 067 156Ho 145213.479 1272.059 8.154
156 068 156Er 145214.105 1270.14 8.142
156 069 156Tm 145220.967 1261.984 8.09
156 070 156Yb 145224.032 1257.626 8.062
156 071 156Lu 145233.035 1247.33 7.996
156 072 156Hf 145238.424 1240.647 7.953
157 061 157Pm 146151.019 1281.844 8.165
157 062 157Sm 146146.148 1285.422 8.187
157 063 157Eu 146142.9 1287.377 8.2
157 064 157Gd 146141.025 1287.958 8.204
157 065 157Tb 146140.575 1287.116 8.198
157 066 157Dy 146141.406 1284.991 8.185
157 067 157Ho 146143.494 1281.609 8.163
157 068 157Er 146146.392 1277.418 8.136
157 069 157Tm 146150.592 1271.925 8.101
157 070 157Yb 146155.348 1265.875 8.063
157 071 157Lu 146161.796 1258.134 8.014
157 073 157Ta 146177.627 1239.716 7.896
158 061 158Pm 147085.793 1286.636 8.143
158 062 158Sm 147079.162 1291.973 8.177
158 063 158Eu 147076.651 1293.191 8.185
158 064 158Gd 147072.653 1295.896 8.202
158 065 158Tb 147073.362 1293.894 8.189
158 066 158Dy 147071.916 1294.046 8.19
158 067 158Ho 147075.626 1289.043 8.158
158 068 158Er 147076.002 1287.373 8.148
158 069 158Tm 147082.092 1279.99 8.101
158 070 158Yb 147084.269 1276.52 8.079
158 071 158Lu 147092.559 1266.936 8.019
158 072 158Hf 147097.158 1261.044 7.981
159 062 159Sm 148013.656 1297.045 8.158
159 063 159Eu 148009.302 1300.105 8.177
159 064 159Gd 148006.276 1301.839 8.188
159 065 159Tb 148004.794 1302.027 8.189
159 066 159Dy 148004.649 1300.879 8.182
159 067 159Ho 148005.975 1298.259 8.165
159 068 159Er 148008.233 1294.708 8.143
159 069 159Tm 148011.719 1289.928 8.113
159 070 159Yb 148015.935 1284.419 8.078
159 071 159Lu 148021.557 1277.504 8.035
159 072 159Hf 148027.902 1269.865 7.987
159 073 159Ta 148035.797 1260.677 7.929
160 064 160Gd 148938.39 1309.29 8.183
160 065 160Tb 148937.984 1308.402 8.178
160 066 160Dy 148935.638 1309.455 8.184
160 067 160Ho 148938.417 1305.382 8.159
160 068 160Er 148938.236 1304.27 8.152
160 069 160Tm 148943.483 1297.73 8.111
160 070 160Yb 148945.102 1294.817 8.093
160 071 160Lu 148952.491 1286.135 8.038
160 072 160Hf 148956.313 1281.02 8.006
160 073 160Ta 148965.859 1270.18 7.939
160 074 160W 148971.868 1262.878 7.893
161 064 161Gd 149872.319 1314.925 8.167
161 065 161Tb 149869.853 1316.099 8.175
161 066 161Dy 149868.749 1315.909 8.173
161 067 161Ho 149869.096 1314.269 8.163
161 068 161Er 149870.579 1311.492 8.146
161 069 161Tm 149873.378 1307.4 8.12
161 070 161Yb 149876.922 1302.563 8.09
161 071 161Lu 149881.693 1296.498 8.053
161 072 161Hf 149887.425 1289.473 8.009
161 075 161Re 149911.331 1261.687 7.837
162 064 162Gd 150805.039 1321.771 8.159
162 065 162Tb 150803.135 1322.382 8.163
162 066 162Dy 150800.117 1324.106 8.173
162 067 162Ho 150801.746 1321.184 8.155
162 068 162Er 150800.939 1320.698 8.152
162 069 162Tm 150805.287 1315.056 8.118
162 070 162Yb 150806.428 1312.622 8.103
162 071 162Lu 150812.909 1304.848 8.055
162 072 162Hf 150816.065 1300.398 8.027
162 073 162Ta 150824.947 1290.223 7.964
162 074 162W 150830.214 1283.663 7.924
163 065 163Tb 151735.708 1329.374 8.156
163 066 163Dy 151733.412 1330.377 8.162
163 067 163Ho 151732.903 1329.592 8.157
163 068 163Er 151733.602 1327.6 8.145
163 069 163Tm 151735.53 1324.379 8.125
163 070 163Yb 151738.45 1320.165 8.099
163 071 163Lu 151742.452 1314.87 8.067
163 072 163Hf 151747.446 1308.583 8.028
163 073 163Ta 151753.681 1301.054 7.982
163 074 163W 151760.8 1292.642 7.93
163 075 163Re 151769.192 1282.957 7.871
164 065 164Tb 152669.723 1334.924 8.14
164 066 164Dy 152665.319 1338.035 8.159
164 067 164Ho 152665.794 1336.267 8.148
164 068 164Er 152664.32 1336.447 8.149
164 069 164Tm 152667.871 1331.603 8.12
164 070 164Yb 152668.225 1329.956 8.109
164 071 164Lu 152674.095 1322.792 8.066
164 072 164Hf 152676.404 1319.19 8.044
164 073 164Ta 152684.432 1309.869 7.987
164 074 164W 152688.97 1304.037 7.951
164 076 164Os 152705.722 1284.699 7.834
165 066 165Dy 153599.168 1343.751 8.144
165 067 165Ho 153597.371 1344.256 8.147
165 068 165Er 153597.236 1343.097 8.14
165 069 165Tm 153598.317 1340.722 8.126
165 070 165Yb 153600.455 1337.291 8.105
165 071 165Lu 153603.789 1332.664 8.077
165 072 165Hf 153608.084 1327.075 8.043
165 073 165Ta 153613.354 1320.512 8.003
165 074 165W 153619.836 1312.737 7.956
165 075 165Re 153627.53 1303.749 7.902
166 065 166Tb 154537.031 1346.747 8.113
166 066 166Dy 154531.69 1350.795 8.137
166 067 166Ho 154530.692 1350.499 8.136
166 068 166Er 154528.327 1351.572 8.142
166 069 166Tm 154530.853 1347.752 8.119
166 070 166Yb 154530.648 1346.663 8.112
166 071 166Lu 154535.704 1340.314 8.074
166 072 166Hf 154537.355 1337.37 8.056
166 073 166Ta 154544.605 1328.826 8.005
166 074 166W 154548.3 1323.838 7.975
166 076 166Os 154563.732 1305.819 7.866
167 066 167Dy 155465.834 1356.216 8.121
167 067 167Ho 155462.976 1357.781 8.13
167 068 167Er 155461.456 1358.008 8.132
167 069 167Tm 155461.693 1356.477 8.123
167 070 167Yb 155463.136 1353.741 8.106
167 071 167Lu 155465.719 1349.864 8.083
167 072 167Hf 155469.24 1345.05 8.054
167 073 167Ta 155473.846 1339.151 8.019
167 074 167W 155479.597 1332.106 7.977
167 076 167Os 155494.164 1314.953 7.874
167 077 167Ir 155503.074 1304.749 7.813
168 066 168Dy 156398.708 1362.907 8.113
168 067 168Ho 156396.687 1363.635 8.117
168 068 168Er 156393.25 1365.779 8.13
168 069 168Tm 156394.418 1363.318 8.115
168 070 168Yb 156393.649 1362.793 8.112
168 071 168Lu 156397.653 1357.496 8.08
168 072 168Hf 156398.841 1355.014 8.066
168 073 168Ta 156405.297 1347.265 8.019
168 074 168W 156408.29 1342.979 7.994
168 075 168Re 156416.879 1333.096 7.935
168 076 168Os 156422.167 1326.515 7.896
168 078 168Pt 156440.096 1305.999 7.774
169 066 169Dy 157333.162 1368.019 8.095
169 067 169Ho 157329.448 1370.439 8.109
169 068 169Er 157326.812 1371.783 8.117
169 069 169Tm 157325.949 1371.352 8.115
169 070 169Yb 157326.348 1369.659 8.104
169 071 169Lu 157328.13 1366.584 8.086
169 072 169Hf 157330.979 1362.442 8.062
169 073 169Ta 157334.895 1357.232 8.031
169 074 169W 157339.756 1351.078 7.995
169 075 169Re 157345.777 1343.764 7.951
169 076 169Os 157352.931 1335.316 7.901
169 077 169Ir 157361.06 1325.894 7.846
170 067 170Ho 158263.505 1375.948 8.094
170 068 170Er 158259.12 1379.04 8.112
170 069 170Tm 158258.923 1377.944 8.106
170 070 170Yb 158257.443 1378.13 8.107
170 071 170Lu 158260.391 1373.888 8.082
170 072 170Hf 158260.936 1372.05 8.071
170 073 170Ta 158266.541 1365.152 8.03
170 074 170W 158268.875 1361.524 8.009
170 075 170Re 158276.739 1352.367 7.955
170 076 170Os 158281.218 1346.595 7.921
170 078 170Pt 158297.818 1327.408 7.808
171 067 171Ho 159196.719 1382.299 8.084
171 068 171Er 159193.003 1384.721 8.098
171 069 171Tm 159191.002 1385.43 8.102
171 070 171Yb 159190.394 1384.744 8.098
171 071 171Lu 159191.362 1382.483 8.085
171 072 171Hf 159193.253 1379.298 8.066
171 073 171Ta 159196.453 1374.805 8.04
171 074 171W 159200.576 1369.389 8.008
171 075 171Re 159205.901 1362.77 7.969
171 076 171Os 159212.347 1355.031 7.924
171 077 171Ir 159219.699 1346.386 7.874
171 078 171Pt 159228.148 1336.643 7.817
171 079 171Au 159237.542 1325.956 7.754
172 068 172Er 160125.733 1391.557 8.09
172 069 172Tm 160124.331 1391.666 8.091
172 070 172Yb 160121.94 1392.764 8.097
172 071 172Lu 160123.948 1389.462 8.078
172 072 172Hf 160123.774 1388.343 8.072
172 073 172Ta 160128.337 1382.486 8.038
172 074 172W 160130.059 1379.471 8.02
172 075 172Re 160137.125 1371.112 7.972
172 076 172Os 160140.896 1366.047 7.942
172 078 172Pt 160156.011 1348.346 7.839
172 080 172Hg 160175 1326.77 7.714
173 069 173Tm 161056.946 1398.616 8.084
173 070 173Yb 161055.138 1399.131 8.087
173 071 173Lu 161055.298 1397.678 8.079
173 072 173Hf 161056.26 1395.422 8.066
173 073 173Ta 161058.764 1391.625 8.044
173 074 173W 161061.923 1387.172 8.018
173 075 173Re 161066.585 1381.217 7.984
173 076 173Os 161072.19 1374.319 7.944
173 077 173Ir 161078.845 1366.37 7.898
173 078 173Pt 161086.666 1357.256 7.845
173 079 173Au 161095.275 1347.354 7.788
174 069 174Tm 161990.829 1404.298 8.071
174 070 174Yb 161987.239 1406.595 8.084
174 071 174Lu 161988.102 1404.439 8.071
174 072 174Hf 161987.32 1403.928 8.069
174 073 174Ta 161990.914 1399.04 8.04
174 074 174W 161991.917 1396.744 8.027
174 075 174Re 161997.96 1389.407 7.985
174 076 174Os 162001.126 1384.948 7.959
174 077 174Ir 162009.742 1375.039 7.903
174 078 174Pt 162014.781 1368.706 7.866
174 080 174Hg 162032.431 1348.47 7.75
175 069 175Tm 162923.873 1410.819 8.062
175 070 175Yb 162920.982 1412.418 8.071
175 071 175Lu 162920.001 1412.106 8.069
175 072 175Hf 162920.177 1410.636 8.061
175 073 175Ta 162921.74 1407.779 8.044
175 074 175W 162924.005 1404.221 8.024
175 075 175Re 162927.839 1399.093 7.995
175 076 175Os 162932.511 1393.128 7.961
175 077 175Ir 162938.676 1385.67 7.918
175 078 175Pt 162945.904 1377.148 7.869
175 079 175Au 162953.643 1368.116 7.818
175 080 175Hg 162962.582 1357.884 7.759
176 069 176Tm 163858.317 1415.941 8.045
176 070 176Yb 163853.682 1419.283 8.064
176 071 176Lu 163853.278 1418.394 8.059
176 072 176Hf 163851.577 1418.801 8.061
176 073 176Ta 163854.273 1414.811 8.039
176 074 176W 163854.49 1413.301 8.03
176 075 176Re 163859.558 1406.94 7.994
176 076 176Os 163862.012 1403.192 7.973
176 077 176Ir 163869.738 1394.173 7.921
176 078 176Pt 163874.16 1388.458 7.889
176 080 176Hg 163890.287 1369.744 7.783
177 070 177Yb 164787.681 1424.849 8.05
177 071 177Lu 164785.77 1425.466 8.053
177 072 177Hf 164784.759 1425.185 8.052
177 073 177Ta 164785.413 1423.237 8.041
177 074 177W 164786.924 1420.432 8.025
177 075 177Re 164789.846 1416.217 8.001
177 076 177Os 164793.654 1411.116 7.972
177 077 177Ir 164799.046 1404.43 7.935
177 078 177Pt 164805.212 1396.971 7.892
177 079 177Au 164812.521 1388.369 7.844
177 080 177Hg 164820.78 1378.816 7.79
177 081 177Tl 164829.721 1368.582 7.732
178 070 178Yb 165720.466 1431.629 8.043
178 071 178Lu 165719.31 1431.492 8.042
178 072 178Hf 165716.698 1432.811 8.049
178 073 178Ta 165718.124 1430.091 8.034
178 074 178W 165717.704 1429.218 8.029
178 075 178Re 165721.956 1423.672 7.998
178 076 178Os 165723.552 1420.783 7.982
178 077 178Ir 165730.335 1412.707 7.937
178 078 178Pt 165734.078 1407.67 7.908
178 079 178Au 165743.235 1397.22 7.85
178 080 178Hg 165748.737 1390.425 7.811
178 082 178Pb 165767.6 1368.975 7.691
179 071 179Lu 166652.083 1438.284 8.035
179 072 179Hf 166650.165 1438.91 8.039
179 073 179Ta 166649.759 1438.022 8.034
179 074 179W 166650.31 1436.177 8.023
179 075 179Re 166652.517 1432.677 8.004
179 076 179Os 166655.572 1428.328 7.979
179 077 179Ir 166660.004 1422.603 7.948
179 078 179Pt 166665.306 1416.008 7.911
179 079 179Au 166672.107 1407.913 7.865
179 080 179Hg 166679.626 1399.101 7.816
179 081 179Tl 166687.737 1389.697 7.764
180 071 180Lu 167585.951 1443.981 8.022
180 072 180Hf 167582.342 1446.297 8.035
180 073 180Ta 167582.683 1444.663 8.026
180 074 180W 167581.464 1444.588 8.025
180 075 180Re 167584.757 1440.002 8
180 076 180Os 167585.727 1437.739 7.987
180 077 180Ir 167591.597 1430.575 7.948
180 078 180Pt 167594.628 1426.251 7.924
180 079 180Au 167602.957 1416.629 7.87
180 080 180Hg 167607.797 1410.495 7.836
180 082 180Pb 167625.081 1390.625 7.726
181 072 181Hf 168516.213 1451.992 8.022
181 073 181Ta 168514.672 1452.24 8.023
181 074 181W 168514.348 1451.27 8.018
181 075 181Re 168515.58 1448.744 8.004
181 076 181Os 168518.03 1445.001 7.983
181 077 181Ir 168521.597 1440.141 7.957
181 078 181Pt 168526.183 1434.261 7.924
181 079 181Au 168532.176 1426.975 7.884
181 080 181Hg 168538.875 1418.983 7.84
181 081 181Tl 168546.224 1410.34 7.792
181 082 181Pb 168555.374 1399.897 7.734
182 072 182Hf 169449.059 1458.711 8.015
182 073 182Ta 169448.174 1458.303 8.013
182 074 182W 169445.849 1459.335 8.018
182 075 182Re 169448.135 1455.755 7.999
182 076 182Os 169448.465 1454.131 7.99
182 077 182Ir 169453.511 1447.792 7.955
182 078 182Pt 169455.883 1444.127 7.935
182 079 182Au 169463.24 1435.476 7.887
182 080 182Hg 169467.454 1429.969 7.857
182 081 182Tl 169477.169 1418.961 7.796
182 082 182Pb 169483.182 1411.654 7.756
183 072 183Hf 170383.322 1464.013 8
183 073 183Ta 170380.805 1465.237 8.007
183 074 183W 170379.223 1465.525 8.008
183 075 183Re 170379.268 1464.187 8.001
183 076 183Os 170380.908 1461.254 7.985
183 077 183Ir 170383.86 1457.008 7.962
183 078 183Pt 170387.774 1451.801 7.933
183 079 183Au 170392.848 1445.434 7.899
183 080 183Hg 170398.724 1438.264 7.859
183 081 183Tl 170405.426 1430.269 7.816
183 082 183Pb 170413.933 1420.469 7.762
184 072 184Hf 171316.606 1470.294 7.991
184 073 184Ta 171314.754 1470.853 7.994
184 074 184W 171311.377 1472.937 8.005
184 075 184Re 171312.346 1470.674 7.993
184 076 184Os 171311.806 1469.921 7.989
184 077 184Ir 171315.94 1464.494 7.959
184 078 184Pt 171317.708 1461.432 7.943
184 079 184Au 171324.21 1453.637 7.9
184 080 184Hg 171327.669 1448.885 7.874
184 081 184Tl 171336.617 1438.643 7.819
184 082 184Pb 171341.951 1432.016 7.783
185 073 185Ta 172247.693 1477.479 7.986
185 074 185W 172245.189 1478.691 7.993
185 075 185Re 172244.245 1478.341 7.991
185 076 185Os 172244.747 1476.546 7.981
185 077 185Ir 172246.709 1473.29 7.964
185 078 185Pt 172249.854 1468.852 7.94
185 079 185Au 172254.156 1463.256 7.909
185 080 185Hg 172259.336 1456.783 7.875
185 081 185Tl 172265.241 1449.585 7.836
185 082 185Pb 172272.949 1440.583 7.787
186 073 186Ta 173181.973 1482.765 7.972
186 074 186W 173177.563 1485.882 7.989
186 075 186Re 173177.631 1484.52 7.981
186 076 186Os 173176.051 1484.807 7.983
186 077 186Ir 173179.367 1480.198 7.958
186 078 186Pt 173180.165 1478.107 7.947
186 079 186Au 173185.803 1471.176 7.91
186 080 186Hg 173188.468 1467.217 7.888
186 081 186Tl 173196.306 1458.086 7.839
186 082 186Pb 173201.304 1451.795 7.805
186 083 186Bi 173212.304 1439.501 7.739
187 074 187W 174111.662 1491.348 7.975
187 075 187Re 174109.84 1491.877 7.978
187 076 187Os 174109.326 1491.097 7.974
187 077 187Ir 174110.318 1488.813 7.962
187 078 187Pt 174112.81 1485.027 7.941
187 079 187Au 174116.007 1480.537 7.917
187 080 187Hg 174120.383 1474.868 7.887
187 081 187Tl 174125.546 1468.411 7.852
187 082 187Pb 174132.499 1460.165 7.808
187 083 187Bi 174140.595 1450.776 7.758
188 074 188W 175044.394 1498.182 7.969
188 075 188Re 175043.533 1497.749 7.967
188 076 188Os 175040.902 1499.087 7.974
188 077 188Ir 175043.2 1495.496 7.955
188 078 188Pt 175043.194 1494.209 7.948
188 079 188Au 175048.205 1487.904 7.914
188 080 188Hg 175049.793 1485.023 7.899
188 081 188Tl 175057.134 1476.389 7.853
188 082 188Pb 175061.158 1471.071 7.825
188 083 188Bi 175071.262 1459.674 7.764
188 084 188Po 175077.413 1452.23 7.725
189 074 189W 175979.075 1503.066 7.953
189 075 189Re 175976.066 1504.782 7.962
189 076 189Os 175974.547 1505.007 7.963
189 077 189Ir 175974.569 1503.692 7.956
189 078 189Pt 175976.028 1500.94 7.941
189 079 189Au 175978.418 1497.257 7.922
189 080 189Hg 175981.859 1492.522 7.897
189 081 189Tl 175986.376 1486.712 7.866
189 082 189Pb 175992.587 1479.208 7.826
189 083 189Bi 175999.896 1470.605 7.781
189 084 189Po 176008.03 1461.178 7.731
190 074 190W 176911.749 1509.958 7.947
190 075 190Re 176909.968 1510.445 7.95
190 076 190Os 176906.32 1512.799 7.962
190 077 190Ir 176907.764 1510.062 7.948
190 078 190Pt 176906.682 1509.851 7.947
190 079 190Au 176910.613 1504.627 7.919
190 080 190Hg 176911.613 1502.334 7.907
190 081 190Tl 176918.142 1494.511 7.866
190 082 190Pb 176921.544 1489.816 7.841
190 083 190Bi 176930.55 1479.517 7.787
190 084 190Po 176936.376 1472.397 7.749
191 075 191Re 177842.683 1517.296 7.944
191 076 191Os 177840.127 1518.558 7.951
191 077 191Ir 177839.303 1518.088 7.948
191 078 191Pt 177839.801 1516.298 7.939
191 079 191Au 177841.178 1513.627 7.925
191 080 191Hg 177843.884 1509.628 7.904
191 081 191Tl 177847.685 1504.534 7.877
191 082 191Pb 177853.205 1497.72 7.841
191 083 191Bi 177859.704 1489.928 7.801
191 084 191Po 177867.379 1480.96 7.754
192 076 192Os 178772.134 1526.116 7.949
192 077 192Ir 178772.67 1524.286 7.939
192 078 192Pt 178770.7 1524.964 7.943
192 079 192Au 178773.705 1520.666 7.92
192 080 192Hg 178773.96 1519.117 7.912
192 081 192Tl 178779.59 1512.194 7.876
192 082 192Pb 178782.393 1508.098 7.855
192 083 192Bi 178790.888 1498.309 7.804
192 084 192Po 178795.856 1492.048 7.771
193 076 193Os 179706.116 1531.699 7.936
193 077 193Ir 179704.464 1532.058 7.938
193 078 193Pt 179704.01 1531.219 7.934
193 079 193Au 179704.582 1529.354 7.924
193 080 193Hg 179706.414 1526.229 7.908
193 081 193Tl 179709.634 1521.715 7.885
193 082 193Pb 179714.253 1515.803 7.854
193 083 193Bi 179720.059 1508.704 7.817
193 084 193Po 179727.061 1500.408 7.774
193 085 193At 179734.76 1491.416 7.728
194 076 194Os 180638.57 1538.811 7.932
194 077 194Ir 180637.962 1538.125 7.928
194 078 194Pt 180635.218 1539.577 7.936
194 079 194Au 180637.208 1536.293 7.919
194 080 194Hg 180636.766 1535.442 7.915
194 081 194Tl 180641.618 1529.297 7.883
194 082 194Pb 180643.729 1525.892 7.865
194 083 194Bi 180651.436 1516.892 7.819
194 084 194Po 180655.91 1511.125 7.789
194 085 194At 180665.214 1500.527 7.735
195 076 195Os 181572.807 1544.139 7.919
195 077 195Ir 181570.296 1545.357 7.925
195 078 195Pt 181568.678 1545.682 7.927
195 079 195Au 181568.394 1544.673 7.921
195 080 195Hg 181569.453 1542.32 7.909
195 081 195Tl 181571.787 1538.693 7.891
195 082 195Pb 181575.717 1533.47 7.864
195 083 195Bi 181580.896 1526.997 7.831
195 084 195Po 181587.339 1519.261 7.791
195 085 195At 181594.422 1510.885 7.748
195 086 195Rn 181602.457 1501.556 7.7
196 076 196Os 182505.711 1550.801 7.912
196 077 196Ir 182504.04 1551.178 7.914
196 078 196Pt 182500.321 1553.604 7.927
196 079 196Au 182501.318 1551.314 7.915
196 080 196Hg 182500.12 1551.218 7.914
196 081 196Tl 182503.939 1546.106 7.888
196 082 196Pb 182505.564 1543.188 7.873
196 083 196Bi 182512.405 1535.053 7.832
196 084 196Po 182516.429 1529.736 7.805
196 085 196At 182525.472 1519.4 7.752
196 086 196Rn 182530.851 1512.727 7.718
197 077 197Ir 183436.706 1558.078 7.909
197 078 197Pt 183434.04 1559.45 7.916
197 079 197Au 183432.811 1559.386 7.916
197 080 197Hg 183432.9 1558.004 7.909
197 081 197Tl 183434.589 1555.021 7.894
197 082 197Pb 183437.67 1550.647 7.871
197 083 197Bi 183442.22 1544.804 7.842
197 084 197Po 183448.037 1537.693 7.806
197 085 197At 183454.546 1529.891 7.766
197 086 197Rn 183461.855 1521.289 7.722
198 078 198Pt 184366.049 1567.007 7.914
198 079 198Au 184365.864 1565.899 7.909
198 080 198Hg 184363.98 1566.489 7.912
198 081 198Tl 184366.934 1562.242 7.89
198 082 198Pb 184367.863 1560.019 7.879
198 083 198Bi 184374.033 1552.556 7.841
198 084 198Po 184377.418 1547.878 7.818
198 085 198At 184385.71 1538.292 7.769
198 086 198Rn 184390.638 1532.071 7.738
199 077 199Ir 185303.562 1570.352 7.891
199 078 199Pt 185300.059 1572.562 7.902
199 079 199Au 185297.845 1573.483 7.907
199 080 199Hg 185296.882 1573.153 7.905
199 081 199Tl 185297.859 1570.882 7.894
199 082 199Pb 185300.179 1567.269 7.876
199 083 199Bi 185304.098 1562.056 7.85
199 084 199Po 185309.17 1555.691 7.818
199 085 199At 185315.054 1548.514 7.781
199 086 199Rn 185321.843 1540.431 7.741
199 087 199Fr 185329.612 1531.369 7.695
200 078 200Pt 186232.342 1579.844 7.899
200 079 200Au 186231.164 1579.729 7.899
200 080 200Hg 186228.419 1581.181 7.906
200 081 200Tl 186230.364 1577.942 7.89
200 082 200Pb 186230.658 1576.355 7.882
200 083 200Bi 186236.02 1569.7 7.848
200 084 200Po 186238.925 1565.501 7.828
200 085 200At 186246.38 1556.753 7.784
200 086 200Rn 186250.851 1550.989 7.755
200 087 200Fr 186260.466 1540.08 7.7
201 078 201Pt 187166.699 1585.053 7.886
201 079 201Au 187163.527 1586.931 7.895
201 080 201Hg 187161.753 1587.411 7.898
201 081 201Tl 187161.724 1586.148 7.891
201 082 201Pb 187163.137 1583.441 7.878
201 083 201Bi 187166.468 1578.817 7.855
201 084 201Po 187170.848 1573.144 7.827
201 085 201At 187176.073 1566.625 7.794
201 086 201Rn 187182.281 1559.124 7.757
201 087 201Fr 187189.44 1550.672 7.715
202 079 202Au 188097.022 1593.002 7.886
202 080 202Hg 188093.565 1595.165 7.897
202 081 202Tl 188094.417 1593.02 7.886
202 082 202Pb 188093.955 1592.189 7.882
202 083 202Bi 188098.645 1586.205 7.853
202 084 202Po 188100.943 1582.614 7.835
202 085 202At 188107.765 1574.499 7.795
202 086 202Rn 188111.57 1569.4 7.769
202 087 202Fr 188120.474 1559.203 7.719
202 088 202Ra 188126.033 1552.351 7.685
203 079 203Au 189029.773 1599.816 7.881
203 080 203Hg 189027.136 1601.16 7.887
203 081 203Tl 189026.133 1600.87 7.886
203 082 203Pb 189026.596 1599.113 7.877
203 083 203Bi 189029.332 1595.084 7.858
203 084 203Po 189033.054 1590.068 7.833
203 085 203At 189037.687 1584.142 7.804
203 086 203Rn 189043.179 1577.357 7.77
203 087 203Fr 189049.689 1569.553 7.732
203 088 203Ra 189056.957 1560.992 7.69
204 080 204Hg 189959.209 1608.652 7.886
204 081 204Tl 189959.042 1607.526 7.88
204 082 204Pb 189957.767 1607.507 7.88
204 083 204Bi 189961.699 1602.282 7.854
204 084 204Po 189963.521 1599.167 7.839
204 085 204At 189969.469 1591.925 7.804
204 086 204Rn 189972.849 1587.252 7.781
204 087 204Fr 189980.93 1577.878 7.735
204 088 204Ra 189985.865 1571.649 7.704
205 080 205Hg 190893.106 1614.32 7.875
205 081 205Tl 190891.061 1615.072 7.878
205 082 205Pb 190890.601 1614.239 7.874
205 083 205Bi 190892.798 1610.748 7.857
205 084 205Po 190895.84 1606.413 7.836
205 085 205At 190899.866 1601.094 7.81
205 086 205Rn 190904.617 1595.049 7.781
205 087 205Fr 190910.506 1587.867 7.746
205 088 205Ra 190917.145 1579.935 7.707
206 080 206Hg 191825.941 1621.051 7.869
206 081 206Tl 191824.123 1621.575 7.872
206 082 206Pb 191822.079 1622.325 7.875
206 083 206Bi 191825.326 1617.786 7.853
206 084 206Po 191826.661 1615.157 7.841
206 085 206At 191831.912 1608.613 7.809
206 086 206Rn 191834.705 1604.527 7.789
206 087 206Fr 191842.067 1595.871 7.747
206 088 206Ra 191846.364 1590.281 7.72
206 089 206Ac 191855.798 1579.554 7.668
207 080 207Hg 192762.161 1624.396 7.847
207 081 207Tl 192756.836 1628.428 7.867
207 082 207Pb 192754.907 1629.063 7.87
207 083 207Bi 192756.793 1625.883 7.855
207 084 207Po 192759.191 1622.193 7.837
207 085 207At 192762.583 1617.507 7.814
207 086 207Rn 192766.684 1612.113 7.788
207 087 207Fr 192771.964 1605.54 7.756
207 088 207Ra 192777.833 1598.377 7.722
207 089 207Ac 192784.912 1590.005 7.681
208 081 208Tl 193692.614 1632.214 7.847
208 082 208Pb 193687.104 1636.431 7.867
208 083 208Bi 193689.472 1632.77 7.85
208 084 208Po 193690.361 1630.587 7.839
208 085 208At 193694.829 1624.827 7.812
208 086 208Rn 193697.161 1621.201 7.794
208 087 208Fr 193703.628 1613.441 7.757
208 088 208Ra 193707.501 1608.275 7.732
208 089 208Ac 193716.036 1598.446 7.685
209 081 209Tl 194627.22 1637.174 7.833
209 082 209Pb 194622.732 1640.368 7.849
209 083 209Bi 194621.577 1640.23 7.848
209 084 209Po 194622.959 1637.555 7.835
209 085 209At 194625.934 1633.287 7.815
209 086 209Rn 194629.374 1628.554 7.792
209 087 209Fr 194634.023 1622.611 7.764
209 088 209Ra 194639.131 1616.21 7.733
209 089 209Ac 194645.61 1608.438 7.696
209 090 209Th 194652.759 1599.995 7.655
210 081 210Tl 195563.106 1640.854 7.814
210 082 210Pb 195557.113 1645.554 7.836
210 083 210Bi 195556.538 1644.835 7.833
210 084 210Po 195554.866 1645.214 7.834
210 085 210At 195558.336 1640.45 7.812
210 086 210Rn 195560.199 1637.294 7.797
210 087 210Fr 195565.94 1630.26 7.763
210 088 210Ra 195569.236 1625.67 7.741
210 089 210Ac 195577.054 1616.559 7.698
210 090 210Th 195581.796 1610.524 7.669
211 082 211Pb 196492.843 1649.388 7.817
211 083 211Bi 196490.966 1649.972 7.82
211 084 211Po 196489.88 1649.764 7.819
211 085 211At 196490.155 1648.197 7.811
211 086 211Rn 196492.535 1644.523 7.794
211 087 211Fr 196496.622 1639.143 7.768
211 088 211Ra 196501.105 1633.367 7.741
211 089 211Ac 196506.958 1626.22 7.707
211 090 211Th 196513.157 1618.728 7.672
212 082 212Pb 197427.281 1654.515 7.804
212 083 212Bi 197426.201 1654.303 7.803
212 084 212Po 197423.437 1655.773 7.81
212 085 212At 197424.675 1653.242 7.798
212 086 212Rn 197424.125 1652.499 7.795
212 087 212Fr 197428.736 1646.594 7.767
212 088 212Ra 197431.572 1642.465 7.747
212 089 212Ac 197438.532 1634.212 7.709
212 090 212Th 197442.832 1628.618 7.682
212 091 212Pa 197451.84 1618.317 7.634
213 082 213Pb 198363.139 1658.223 7.785
213 083 213Bi 198360.581 1659.488 7.791
213 084 213Po 198358.648 1660.128 7.794
213 085 213At 198358.211 1659.271 7.79
213 086 213Rn 198358.581 1657.608 7.782
213 087 213Fr 198360.218 1654.678 7.768
213 088 213Ra 198363.615 1649.987 7.746
213 089 213Ac 198368.896 1643.413 7.716
213 090 213Th 198374.355 1636.661 7.684
213 091 213Pa 198381.384 1628.338 7.645
214 082 214Pb 199297.636 1663.292 7.772
214 083 214Bi 199296.106 1663.528 7.773
214 084 214Po 199292.325 1666.016 7.785
214 085 214At 199292.904 1664.144 7.776
214 086 214Rn 199291.453 1664.301 7.777
214 087 214Fr 199294.304 1660.157 7.758
214 088 214Ra 199294.852 1658.316 7.749
214 089 214Ac 199300.669 1651.205 7.716
214 090 214Th 199304.441 1646.14 7.692
214 091 214Pa 199312.708 1636.58 7.648
215 083 215Bi 200230.449 1668.751 7.762
215 084 215Po 200227.749 1670.157 7.768
215 085 215At 200226.523 1670.09 7.768
215 086 215Rn 200226.098 1669.222 7.764
215 087 215Fr 200227.074 1666.952 7.753
215 088 215Ra 200228.779 1663.954 7.739
215 089 215Ac 200231.746 1659.694 7.72
215 090 215Th 200236.15 1653.996 7.693
215 091 215Pa 200242.582 1646.271 7.657
216 083 216Bi 201166.168 1672.597 7.744
216 084 216Po 201161.567 1675.905 7.759
216 085 216At 201161.529 1674.649 7.753
216 086 216Rn 201159.017 1675.868 7.759
216 087 216Fr 201161.229 1672.362 7.742
216 088 216Ra 201161.03 1671.268 7.737
216 089 216Ac 201165.351 1665.654 7.711
216 090 216Th 201167.021 1662.69 7.698
216 091 216Pa 201174.006 1654.412 7.659
217 084 217Po 202097.178 1679.859 7.741
217 085 217At 202095.162 1680.581 7.745
217 086 217Rn 202093.914 1680.536 7.744
217 087 217Fr 202094.059 1679.098 7.738
217 088 217Ra 202095.12 1676.743 7.727
217 089 217Ac 202097.429 1673.141 7.71
217 090 217Th 202100.427 1668.85 7.691
217 091 217Pa 202104.77 1663.213 7.665
217 092 217U 202109.889 1656.801 7.635
218 084 218Po 203031.129 1685.473 7.732
218 085 218At 203030.359 1684.95 7.729
218 086 218Rn 203026.966 1687.049 7.739
218 087 218Fr 203028.297 1684.425 7.727
218 088 218Ra 203027.378 1684.051 7.725
218 089 218Ac 203031.056 1679.079 7.702
218 090 218Th 203032.079 1676.763 7.692
218 091 218Pa 203037.863 1669.686 7.659
218 092 218U 203040.603 1665.652 7.641
219 085 219At 203964.151 1690.723 7.72
219 086 219Rn 203962.074 1691.507 7.724
219 087 219Fr 203961.35 1690.937 7.721
219 088 219Ra 203961.615 1689.379 7.714
219 089 219Ac 203963.28 1686.421 7.701
219 090 219Th 203965.669 1682.738 7.684
219 091 219Pa 203969.208 1677.906 7.662
219 092 219U 203973.387 1672.434 7.637
220 085 220At 204899.598 1694.841 7.704
220 086 220Rn 204895.35 1697.796 7.717
220 087 220Fr 204895.709 1696.144 7.71
220 088 220Ra 204893.988 1696.571 7.712
220 089 220Ac 204896.956 1692.31 7.692
220 090 220Th 204897.362 1690.611 7.685
220 091 220Pa 204902.562 1684.117 7.655
221 086 221Rn 205830.703 1702.008 7.701
221 087 221Fr 205828.998 1702.42 7.703
221 088 221Ra 205828.173 1701.952 7.701
221 089 221Ac 205829.218 1699.613 7.691
221 090 221Th 205831.125 1696.413 7.676
221 091 221Pa 205834.056 1692.189 7.657
222 086 222Rn 206764.099 1708.178 7.694
222 087 222Fr 206763.563 1707.42 7.691
222 088 222Ra 206761.024 1708.666 7.697
222 089 222Ac 206762.813 1705.584 7.683
222 090 222Th 206762.884 1704.219 7.677
223 087 223Fr 207697.092 1713.457 7.684
223 088 223Ra 207695.432 1713.824 7.685
223 089 223Ac 207695.512 1712.45 7.679
223 090 223Th 207696.561 1710.108 7.669
223 091 223Pa 207698.984 1706.391 7.652
223 092 223U 207701.993 1702.089 7.633
224 087 224Fr 208631.862 1718.252 7.671
224 088 224Ra 208628.518 1720.302 7.68
224 089 224Ac 208629.415 1718.112 7.67
224 090 224Th 208628.665 1717.569 7.668
224 091 224Pa 208632.028 1712.913 7.647
224 092 224U 208633.361 1710.286 7.635
225 087 225Fr 209565.506 1724.173 7.663
225 088 225Ra 209563.179 1725.207 7.668
225 089 225Ac 209562.312 1724.781 7.666
225 090 225Th 209562.473 1723.326 7.659
225 091 225Pa 209563.992 1720.514 7.647
225 092 225U 209566.518 1716.695 7.63
225 093 225Np 209570.22 1711.699 7.608
226 087 226Fr 210500.56 1728.685 7.649
226 088 226Ra 210496.348 1731.603 7.662
226 089 226Ac 210496.478 1730.18 7.656
226 090 226Th 210494.854 1730.511 7.657
226 091 226Pa 210497.179 1726.892 7.641
226 092 226U 210497.964 1724.814 7.632
227 087 227Fr 211434.334 1734.476 7.641
227 088 227Ra 211431.352 1736.165 7.648
227 089 227Ac 211429.513 1736.71 7.651
227 090 227Th 211428.957 1735.973 7.647
227 091 227Pa 211429.472 1734.165 7.639
227 092 227U 211431.151 1731.192 7.626
227 093 227Np 211434.178 1726.872 7.607
228 088 228Ra 212364.609 1742.473 7.642
228 089 228Ac 212364.052 1741.737 7.639
228 090 228Th 212361.417 1743.078 7.645
228 091 228Pa 212363.058 1740.144 7.632
228 092 228U 212362.848 1739.061 7.627
228 094 228Pu 212368.691 1730.631 7.59
229 087 229Fr 213303.492 1744.449 7.618
229 088 229Ra 213299.724 1746.923 7.628
229 089 229Ac 213297.4 1747.954 7.633
229 090 229Th 213295.726 1748.335 7.635
229 091 229Pa 213295.526 1747.241 7.63
229 092 229U 213296.328 1745.146 7.621
229 093 229Np 213298.386 1741.795 7.606
229 094 229Pu 213301.495 1737.392 7.587
230 088 230Ra 214233.173 1753.04 7.622
230 089 230Ac 214231.954 1752.965 7.622
230 090 230Th 214228.497 1755.129 7.631
230 091 230Pa 214229.297 1753.036 7.622
230 092 230U 214228.226 1752.813 7.621
230 093 230Np 214231.34 1748.406 7.602
230 094 230Pu 214232.523 1745.93 7.591
231 089 231Ac 215165.558 1758.927 7.614
231 090 231Th 215162.944 1760.247 7.62
231 091 231Pa 215162.042 1759.856 7.618
231 092 231U 215161.912 1758.693 7.613
231 093 231Np 215163.224 1756.087 7.602
231 094 231Pu 215165.368 1752.65 7.587
232 089 232Ac 216100.282 1763.768 7.602
232 090 232Th 216096.069 1766.687 7.615
232 091 232Pa 216096.058 1765.405 7.61
232 092 232U 216094.21 1765.96 7.612
232 094 232Pu 216096.943 1760.64 7.589
233 090 233Th 217030.848 1771.474 7.603
233 091 233Pa 217029.094 1771.934 7.605
233 092 233U 217028.013 1771.722 7.604
233 093 233Np 217028.532 1769.91 7.596
233 094 233Pu 217030.121 1767.028 7.584
233 096 233Cm 217036.339 1758.223 7.546
234 090 234Th 217964.223 1777.664 7.597
234 091 234Pa 217963.439 1777.155 7.595
234 092 234U 217960.734 1778.567 7.601
234 093 234Np 217962.032 1775.975 7.59
234 094 234Pu 217961.915 1774.799 7.585
234 096 234Cm 217967.267 1766.86 7.551
235 090 235Th 218899.363 1782.09 7.583
235 091 235Pa 218896.922 1783.237 7.588
235 092 235U 218895.002 1783.864 7.591
235 093 235Np 218894.615 1782.958 7.587
235 094 235Pu 218895.243 1781.036 7.579
236 091 236Pa 219831.436 1788.289 7.577
236 092 236U 219828.021 1790.41 7.586
236 093 236Np 219828.444 1788.694 7.579
236 094 236Pu 219827.456 1788.389 7.578
237 091 237Pa 220765.22 1794.07 7.57
237 092 237U 220762.461 1795.536 7.576
237 093 237Np 220761.431 1795.272 7.575
237 094 237Pu 220761.14 1794.27 7.571
238 091 238Pa 221699.844 1799.011 7.559
238 092 238U 221695.872 1801.69 7.57
238 093 238Np 221695.508 1800.76 7.566
238 094 238Pu 221693.706 1801.269 7.568
238 095 238Am 221695.45 1798.232 7.556
238 096 238Cm 221695.919 1796.469 7.548
239 092 239U 222630.631 1806.496 7.559
239 093 239Np 222628.859 1806.975 7.561
239 094 239Pu 222627.625 1806.916 7.56
239 095 239Am 222627.916 1805.331 7.554
240 092 240U 223564.266 1812.426 7.552
240 093 240Np 223563.355 1812.044 7.55
240 094 240Pu 223560.656 1813.45 7.556
240 095 240Am 223561.53 1811.282 7.547
240 096 240Cm 223561.233 1810.287 7.543
241 093 241Np 224496.794 1818.17 7.544
241 094 241Pu 224494.98 1818.691 7.546
241 095 241Am 224494.448 1817.93 7.543
241 096 241Cm 224494.705 1816.38 7.537
242 093 242Np 225431.448 1823.082 7.533
242 094 242Pu 225428.236 1825.001 7.541
242 095 242Am 225428.476 1823.467 7.535
242 096 242Cm 225427.3 1823.35 7.535
242 098 242Cf 225430.813 1817.25 7.509
243 094 243Pu 226362.767 1830.035 7.531
243 095 243Am 226361.676 1829.832 7.53
243 096 243Cm 226361.173 1829.042 7.527
243 097 243Bk 226362.169 1826.753 7.518
244 094 244Pu 227296.311 1836.056 7.525
244 095 244Am 227295.875 1835.199 7.521
244 096 244Cm 227293.937 1835.844 7.524
244 097 244Bk 227295.688 1832.799 7.511
244 098 244Cf 227295.94 1831.254 7.505
245 094 245Pu 228231.105 1840.827 7.514
245 095 245Am 228229.388 1841.251 7.515
245 096 245Cm 228227.982 1841.364 7.516
245 097 245Bk 228228.282 1839.771 7.509
245 098 245Cf 228229.342 1837.417 7.5
246 094 246Pu 229164.888 1846.61 7.507
246 095 246Am 229163.977 1846.227 7.505
246 096 246Cm 229161.09 1847.822 7.511
246 097 246Bk 229161.93 1845.688 7.503
246 098 246Cf 229161.541 1844.784 7.499
246 100 246Fm 229166.567 1837.171 7.468
247 096 247Cm 230095.499 1852.977 7.502
247 097 247Bk 230094.945 1852.238 7.499
247 098 247Cf 230095.08 1850.81 7.493
248 096 248Cm 231028.851 1859.191 7.497
248 098 248Cf 231027.677 1857.778 7.491
248 100 248Fm 231031.321 1851.547 7.466
249 096 249Cm 231963.703 1863.904 7.486
249 097 249Bk 231962.292 1864.022 7.486
249 098 249Cf 231961.657 1863.364 7.483
250 096 250Cm 232897.436 1869.736 7.479
250 097 250Bk 232896.887 1868.992 7.476
250 098 250Cf 232894.597 1869.989 7.48
250 100 250Fm 232896.477 1865.522 7.462
251 096 251Cm 233832.589 1874.149 7.467
251 097 251Bk 233830.658 1874.786 7.469
251 098 251Cf 233829.054 1875.097 7.471
251 099 251Es 233828.92 1873.938 7.466
251 100 251Fm 233829.884 1871.68 7.457
252 098 252Cf 234762.447 1881.269 7.465
252 099 252Es 234763.192 1879.231 7.457
252 100 252Fm 234762.208 1878.922 7.456
252 102 252No 234767.25 1871.293 7.426
253 098 253Cf 235697.208 1886.074 7.455
253 099 253Es 235696.41 1885.579 7.453
253 100 253Fm 235696.235 1884.46 7.448
254 098 254Cf 236630.742 1892.105 7.449
254 099 254Es 236630.882 1890.672 7.444
254 100 254Fm 236629.284 1890.977 7.445
254 102 254No 236632.081 1885.593 7.424
255 099 255Es 237564.473 1896.646 7.438
255 100 255Fm 237563.672 1896.154 7.436
255 101 255Md 237564.205 1894.327 7.429
255 102 255No 237565.705 1891.534 7.418
256 100 256Fm 238496.853 1902.538 7.432
256 101 256Md 238498.476 1899.622 7.42
256 102 256No 238498.169 1898.635 7.417
256 104 256Rf 238503.559 1890.659 7.385
257 100 257Fm 239431.45 1907.506 7.422
257 101 257Md 239431.347 1906.317 7.418
257 102 257No 239432.08 1904.289 7.41
258 101 258Md 240365.532 1911.696 7.41
260 106 260Sg 242240.857 1909.035 7.342
261 104 261Rf 243168.109 1923.936 7.371
264 108 264Hs 245978.832 1926.736 7.298
265 106 265Sg 246904.568 1943.152 7.333
###Markdown
Split each line into a list of fields with .split()!
###Code
for row in open('index.txt').readlines():
    if not row.startswith('#'):        # skip comment/header lines in the data file
        print(row.strip().split())     # each data line becomes a list of string fields
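
# A possible next step (a minimal sketch, not part of the original run): convert the
# string fields into numbers. The column meaning is inferred from the data printed
# above and is an assumption here: mass number A, proton number Z, nuclide name,
# mass (MeV), total binding energy (MeV), binding energy per nucleon (MeV).
nuclides = []
for row in open('index.txt').readlines():
    if row.startswith('#'):
        continue
    fields = row.strip().split()
    a = int(fields[0])                     # mass number A
    z = int(fields[1])                     # proton number Z
    name = fields[2]                       # nuclide symbol, e.g. '102Nb'
    mass = float(fields[3])                # mass in MeV (assumed unit)
    binding_energy = float(fields[4])      # total binding energy in MeV (assumed unit)
    be_per_nucleon = float(fields[5])      # binding energy per nucleon in MeV (assumed unit)
    nuclides.append((a, z, name, mass, binding_energy, be_per_nucleon))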
###Output
['001', '001', '1H', '938.272', '0', '0']
['002', '001', '2H', '1875.613', '2.225', '1.112']
['003', '001', '3H', '2808.921', '8.482', '2.827']
['003', '002', '3He', '2808.391', '7.718', '2.573']
['004', '001', '4H', '3751.365', '5.603', '1.401']
['004', '002', '4He', '3727.379', '28.296', '7.074']
['004', '003', '4Li', '3749.763', '4.618', '1.155']
['005', '001', '5H', '4689.849', '6.684', '1.337']
['005', '002', '5He', '4667.838', '27.402', '5.48']
['005', '003', '5Li', '4667.617', '26.33', '5.266']
['006', '001', '6H', '5630.313', '5.786', '0.964']
['006', '002', '6He', '5605.537', '29.268', '4.878']
['006', '003', '6Li', '5601.518', '31.994', '5.332']
['006', '004', '6Be', '5605.295', '26.924', '4.487']
['007', '002', '7He', '6545.537', '28.834', '4.119']
['007', '003', '7Li', '6533.833', '39.244', '5.606']
['007', '004', '7Be', '6534.184', '37.6', '5.371']
['007', '005', '7B', '6545.773', '24.718', '3.531']
['008', '002', '8He', '7482.528', '31.408', '3.926']
['008', '003', '8Li', '7471.366', '41.277', '5.16']
['008', '004', '8Be', '7454.85', '56.5', '7.062']
['008', '005', '8B', '7472.319', '37.737', '4.717']
['008', '006', '8C', '7483.98', '24.783', '3.098']
['009', '002', '9He', '8423.363', '30.138', '3.349']
['009', '003', '9Li', '8406.867', '45.341', '5.038']
['009', '004', '9Be', '8392.75', '58.165', '6.463']
['009', '005', '9B', '8393.307', '56.314', '6.257']
['009', '006', '9C', '8409.291', '39.037', '4.337']
['010', '002', '10He', '9362.728', '30.339', '3.034']
['010', '003', '10Li', '9346.458', '45.315', '4.532']
['010', '004', '10Be', '9325.503', '64.977', '6.498']
['010', '005', '10B', '9324.436', '64.751', '6.475']
['010', '006', '10C', '9327.573', '60.32', '6.032']
['010', '007', '10N', '9350.163', '36.437', '3.644']
['011', '003', '11Li', '10285.698', '45.64', '4.149']
['011', '004', '11Be', '10264.564', '65.481', '5.953']
['011', '005', '11B', '10252.547', '76.205', '6.928']
['011', '006', '11C', '10254.018', '73.44', '6.676']
['011', '007', '11N', '10267.157', '59.008', '5.364']
['012', '004', '12Be', '11200.961', '68.649', '5.721']
['012', '005', '12B', '11188.742', '79.575', '6.631']
['012', '006', '12C', '11174.862', '92.162', '7.68']
['012', '007', '12N', '11191.689', '74.041', '6.17']
['012', '008', '12O', '11205.888', '58.549', '4.879']
['013', '004', '13Be', '12140.628', '68.548', '5.273']
['013', '005', '13B', '12123.429', '84.453', '6.496']
['013', '006', '13C', '12109.481', '97.108', '7.47']
['013', '007', '13N', '12111.191', '94.105', '7.239']
['013', '008', '13O', '12128.446', '75.556', '5.812']
['014', '004', '14Be', '13078.822', '69.919', '4.994']
['014', '005', '14B', '13062.025', '85.423', '6.102']
['014', '006', '14C', '13040.87', '105.285', '7.52']
['014', '007', '14N', '13040.203', '104.659', '7.476']
['014', '008', '14O', '13044.836', '98.732', '7.052']
['015', '005', '15B', '13998.827', '88.186', '5.879']
['015', '006', '15C', '13979.217', '106.503', '7.1']
['015', '007', '15N', '13968.935', '115.492', '7.699']
['015', '008', '15O', '13971.178', '111.955', '7.464']
['015', '009', '15F', '13984.591', '97.249', '6.483']
['016', '005', '16B', '14938.429', '88.149', '5.509']
['016', '006', '16C', '14914.532', '110.753', '6.922']
['016', '007', '16N', '14906.011', '117.981', '7.374']
['016', '008', '16O', '14895.079', '127.619', '7.976']
['016', '009', '16F', '14909.985', '111.42', '6.964']
['016', '010', '16Ne', '14922.79', '97.322', '6.083']
['017', '005', '17B', '15876.613', '89.531', '5.267']
['017', '006', '17C', '15853.371', '111.479', '6.558']
['017', '007', '17N', '15839.692', '123.865', '7.286']
['017', '008', '17O', '15830.501', '131.763', '7.751']
['017', '009', '17F', '15832.751', '128.22', '7.542']
['017', '010', '17Ne', '15846.749', '112.928', '6.643']
['018', '006', '18C', '16788.756', '115.66', '6.426']
['018', '007', '18N', '16776.429', '126.693', '7.039']
['018', '008', '18O', '16762.023', '139.807', '7.767']
['018', '009', '18F', '16763.167', '137.369', '7.632']
['018', '010', '18Ne', '16767.099', '132.143', '7.341']
['018', '011', '18Na', '16785.461', '112.488', '6.249']
['019', '006', '19C', '17727.74', '116.241', '6.118']
['019', '007', '19N', '17710.671', '132.017', '6.948']
['019', '008', '19O', '17697.633', '143.761', '7.566']
['019', '009', '19F', '17692.3', '147.801', '7.779']
['019', '010', '19Ne', '17695.028', '143.78', '7.567']
['019', '011', '19Na', '17705.692', '131.822', '6.938']
['019', '012', '19Mg', '17725.294', '110.927', '5.838']
['020', '006', '20C', '18664.374', '119.172', '5.959']
['020', '007', '20N', '18648.073', '134.18', '6.709']
['020', '008', '20O', '18629.59', '151.37', '7.569']
['020', '009', '20F', '18625.264', '154.403', '7.72']
['020', '010', '20Ne', '18617.728', '160.645', '8.032']
['020', '011', '20Na', '18631.107', '145.973', '7.299']
['020', '012', '20Mg', '18641.318', '134.468', '6.723']
['021', '007', '21N', '19583.047', '138.771', '6.608']
['021', '008', '21O', '19565.349', '155.176', '7.389']
['021', '009', '21F', '19556.728', '162.504', '7.738']
['021', '010', '21Ne', '19550.533', '167.406', '7.972']
['021', '011', '21Na', '19553.569', '163.076', '7.766']
['021', '012', '21Mg', '19566.153', '149.199', '7.105']
['022', '007', '22N', '20521.331', '140.053', '6.366']
['022', '008', '22O', '20498.06', '162.03', '7.365']
['022', '009', '22F', '20491.062', '167.735', '7.624']
['022', '010', '22Ne', '20479.734', '177.77', '8.08']
['022', '011', '22Na', '20482.065', '174.146', '7.916']
['022', '012', '22Mg', '20486.339', '168.578', '7.663']
['023', '008', '23O', '21434.884', '164.772', '7.164']
['023', '009', '23F', '21423.093', '175.269', '7.62']
['023', '010', '23Ne', '21414.098', '182.971', '7.955']
['023', '011', '23Na', '21409.211', '186.564', '8.111']
['023', '012', '23Mg', '21412.757', '181.726', '7.901']
['023', '013', '23Al', '21424.489', '168.7', '7.335']
['024', '008', '24O', '22370.838', '168.383', '7.016']
['024', '009', '24F', '22358.817', '179.111', '7.463']
['024', '010', '24Ne', '22344.795', '191.84', '7.993']
['024', '011', '24Na', '22341.817', '193.524', '8.064']
['024', '012', '24Mg', '22335.791', '198.257', '8.261']
['024', '013', '24Al', '22349.156', '183.598', '7.65']
['024', '014', '24Si', '22359.457', '172.004', '7.167']
['025', '009', '25F', '23294.021', '183.472', '7.339']
['025', '010', '25Ne', '23280.132', '196.068', '7.843']
['025', '011', '25Na', '23272.372', '202.535', '8.101']
['025', '012', '25Mg', '23268.026', '205.588', '8.224']
['025', '013', '25Al', '23271.791', '200.529', '8.021']
['025', '014', '25Si', '23284.02', '187.006', '7.48']
['026', '009', '26F', '24232.515', '184.543', '7.098']
['026', '010', '26Ne', '24214.164', '201.601', '7.754']
['026', '011', '26Na', '24206.361', '208.111', '8.004']
['026', '012', '26Mg', '24196.498', '216.681', '8.334']
['026', '013', '26Al', '24199.991', '211.894', '8.15']
['026', '014', '26Si', '24204.545', '206.047', '7.925']
['027', '009', '27F', '25170.669', '185.955', '6.887']
['027', '010', '27Ne', '25152.298', '203.032', '7.52']
['027', '011', '27Na', '25139.2', '214.837', '7.957']
['027', '012', '27Mg', '25129.62', '223.124', '8.264']
['027', '013', '27Al', '25126.499', '224.952', '8.332']
['027', '014', '27Si', '25130.8', '219.357', '8.124']
['027', '015', '27P', '25141.956', '206.908', '7.663']
['028', '010', '28Ne', '26087.962', '206.934', '7.39']
['028', '011', '28Na', '26075.222', '218.38', '7.799']
['028', '012', '28Mg', '26060.682', '231.627', '8.272']
['028', '013', '28Al', '26058.339', '232.677', '8.31']
['028', '014', '28Si', '26053.186', '236.537', '8.448']
['028', '015', '28P', '26067.008', '221.421', '7.908']
['028', '016', '28S', '26077.726', '209.41', '7.479']
['029', '010', '29Ne', '27026.276', '208.185', '7.179']
['029', '011', '29Na', '27010.37', '222.798', '7.683']
['029', '012', '29Mg', '26996.575', '235.299', '8.114']
['029', '013', '29Al', '26988.468', '242.113', '8.349']
['029', '014', '29Si', '26984.277', '245.011', '8.449']
['029', '015', '29P', '26988.709', '239.286', '8.251']
['029', '016', '29S', '27001.99', '224.711', '7.749']
['030', '010', '30Ne', '27962.81', '211.216', '7.041']
['030', '011', '30Na', '27947.56', '225.173', '7.506']
['030', '012', '30Mg', '27929.777', '241.663', '8.055']
['030', '013', '30Al', '27922.305', '247.841', '8.261']
['030', '014', '30Si', '27913.233', '255.62', '8.521']
['030', '015', '30P', '27916.955', '250.605', '8.354']
['030', '016', '30S', '27922.581', '243.685', '8.123']
['031', '011', '31Na', '28883.343', '228.955', '7.386']
['031', '012', '31Mg', '28866.965', '244.04', '7.872']
['031', '013', '31Al', '28854.717', '254.994', '8.226']
['031', '014', '31Si', '28846.211', '262.207', '8.458']
['031', '015', '31P', '28844.209', '262.917', '8.481']
['031', '016', '31S', '28849.094', '256.738', '8.282']
['031', '017', '31Cl', '28860.557', '243.981', '7.87']
['032', '011', '32Na', '29821.247', '230.616', '7.207']
['032', '012', '32Mg', '29800.721', '249.849', '7.808']
['032', '013', '32Al', '29790.105', '259.172', '8.099']
['032', '014', '32Si', '29776.574', '271.41', '8.482']
['032', '015', '32P', '29775.838', '270.852', '8.464']
['032', '016', '32S', '29773.617', '271.781', '8.493']
['032', '017', '32Cl', '29785.791', '258.312', '8.072']
['032', '018', '32Ar', '29796.41', '246.4', '7.7']
['033', '011', '33Na', '30758.571', '232.858', '7.056']
['033', '012', '33Mg', '30738.064', '252.071', '7.639']
['033', '013', '33Al', '30724.129', '264.713', '8.022']
['033', '014', '33Si', '30711.655', '275.894', '8.36']
['033', '015', '33P', '30705.3', '280.956', '8.514']
['033', '016', '33S', '30704.54', '280.422', '8.498']
['033', '017', '33Cl', '30709.612', '274.057', '8.305']
['033', '018', '33Ar', '30720.72', '261.656', '7.929']
['034', '012', '34Mg', '31673.474', '256.227', '7.536']
['034', '013', '34Al', '31661.223', '267.184', '7.858']
['034', '014', '34Si', '31643.685', '283.429', '8.336']
['034', '015', '34P', '31638.573', '287.248', '8.448']
['034', '016', '34S', '31632.689', '291.839', '8.584']
['034', '017', '34Cl', '31637.67', '285.565', '8.399']
['034', '018', '34Ar', '31643.221', '278.72', '8.198']
['035', '013', '35Al', '32595.517', '272.456', '7.784']
['035', '014', '35Si', '32580.776', '285.903', '8.169']
['035', '015', '35P', '32569.768', '295.619', '8.446']
['035', '016', '35S', '32565.268', '298.825', '8.538']
['035', '017', '35Cl', '32564.59', '298.21', '8.52']
['035', '018', '35Ar', '32570.045', '291.461', '8.327']
['035', '019', '35K', '32581.412', '278.801', '7.966']
['036', '013', '36Al', '33532.921', '274.617', '7.628']
['036', '014', '36Si', '33514.15', '292.095', '8.114']
['036', '015', '36P', '33505.868', '299.083', '8.308']
['036', '016', '36S', '33494.944', '308.714', '8.575']
['036', '017', '36Cl', '33495.576', '306.79', '8.522']
['036', '018', '36Ar', '33494.355', '306.717', '8.52']
['036', '019', '36K', '33506.649', '293.129', '8.142']
['036', '020', '36Ca', '33517.124', '281.361', '7.816']
['037', '013', '37Al', '34468.585', '278.518', '7.528']
['037', '014', '37Si', '34451.544', '294.266', '7.953']
['037', '015', '37P', '34438.623', '305.894', '8.267']
['037', '016', '37S', '34430.206', '313.018', '8.46']
['037', '017', '37Cl', '34424.83', '317.101', '8.57']
['037', '018', '37Ar', '34425.133', '315.504', '8.527']
['037', '019', '37K', '34430.769', '308.575', '8.34']
['037', '020', '37Ca', '34441.897', '296.154', '8.004']
['038', '013', '38Al', '35406.18', '280.49', '7.381']
['038', '014', '38Si', '35385.549', '299.827', '7.89']
['038', '015', '38P', '35374.348', '309.735', '8.151']
['038', '016', '38S', '35361.736', '321.054', '8.449']
['038', '017', '38Cl', '35358.287', '323.208', '8.505']
['038', '018', '38Ar', '35352.86', '327.343', '8.614']
['038', '019', '38K', '35358.263', '320.646', '8.438']
['038', '020', '38Ca', '35364.494', '313.122', '8.24']
['039', '013', '39Al', '36343.024', '283.211', '7.262']
['039', '014', '39Si', '36323.043', '301.899', '7.741']
['039', '015', '39P', '36307.732', '315.916', '8.1']
['039', '016', '39S', '36296.931', '325.424', '8.344']
['039', '017', '39Cl', '36289.779', '331.282', '8.494']
['039', '018', '39Ar', '36285.827', '333.941', '8.563']
['039', '019', '39K', '36284.751', '333.724', '8.557']
['039', '020', '39Ca', '36290.772', '326.409', '8.369']
['039', '021', '39Sc', '36303.368', '312.52', '8.013']
['040', '014', '40Si', '37258.077', '306.43', '7.661']
['040', '015', '40P', '37243.986', '319.228', '7.981']
['040', '016', '40S', '37228.715', '333.205', '8.33']
['040', '017', '40Cl', '37223.514', '337.113', '8.428']
['040', '018', '40Ar', '37215.523', '343.811', '8.595']
['040', '019', '40K', '37216.516', '341.524', '8.538']
['040', '020', '40Ca', '37214.694', '342.052', '8.551']
['040', '021', '40Sc', '37228.506', '326.947', '8.174']
['040', '022', '40Ti', '37239.669', '314.491', '7.862']
['041', '014', '41Si', '38197.661', '306.411', '7.473']
['041', '015', '41P', '38178.31', '324.469', '7.914']
['041', '016', '41S', '38164.059', '337.427', '8.23']
['041', '017', '41Cl', '38155.258', '344.934', '8.413']
['041', '018', '41Ar', '38148.989', '349.91', '8.534']
['041', '019', '41K', '38145.986', '351.619', '8.576']
['041', '020', '41Ca', '38145.897', '350.415', '8.547']
['041', '021', '41Sc', '38151.881', '343.137', '8.369']
['042', '015', '42P', '39116.024', '326.32', '7.77']
['042', '016', '42S', '39096.893', '344.158', '8.194']
['042', '017', '42Cl', '39089.152', '350.606', '8.348']
['042', '018', '42Ar', '39079.128', '359.336', '8.556']
['042', '019', '42K', '39078.018', '359.153', '8.551']
['042', '020', '42Ca', '39073.981', '361.896', '8.617']
['042', '021', '42Sc', '39079.896', '354.688', '8.445']
['042', '022', '42Ti', '39086.385', '346.906', '8.26']
['043', '015', '43P', '40052.348', '329.562', '7.664']
['043', '016', '43S', '40034.097', '346.519', '8.059']
['043', '017', '43Cl', '40021.386', '357.937', '8.324']
['043', '018', '43Ar', '40013.035', '364.995', '8.488']
['043', '019', '43K', '40007.941', '368.795', '8.577']
['043', '020', '43Ca', '40005.614', '369.829', '8.601']
['043', '021', '43Sc', '40007.324', '366.826', '8.531']
['043', '022', '43Ti', '40013.68', '359.176', '8.353']
['044', '016', '44S', '40968.441', '351.741', '7.994']
['044', '017', '44Cl', '40956.82', '362.068', '8.229']
['044', '018', '44Ar', '40943.865', '373.729', '8.494']
['044', '019', '44K', '40940.218', '376.084', '8.547']
['044', '020', '44Ca', '40934.048', '380.96', '8.658']
['044', '021', '44Sc', '40937.189', '376.525', '8.557']
['044', '022', '44Ti', '40936.946', '375.475', '8.534']
['044', '023', '44V', '40949.864', '361.264', '8.211']
['045', '016', '45S', '41905.805', '353.942', '7.865']
['045', '017', '45Cl', '41890.184', '368.27', '8.184']
['045', '018', '45Ar', '41878.262', '378.898', '8.42']
['045', '019', '45K', '41870.914', '384.953', '8.555']
['045', '020', '45Ca', '41866.199', '388.375', '8.631']
['045', '021', '45Sc', '41865.432', '387.848', '8.619']
['045', '022', '45Ti', '41866.983', '385.004', '8.556']
['045', '023', '45V', '41873.598', '377.096', '8.38']
['045', '024', '45Cr', '41885.997', '363.403', '8.076']
['046', '017', '46Cl', '42825.328', '372.691', '8.102']
['046', '018', '46Ar', '42809.807', '386.919', '8.411']
['046', '019', '46K', '42803.598', '391.834', '8.518']
['046', '020', '46Ca', '42795.37', '398.769', '8.669']
['046', '021', '46Sc', '42796.237', '396.609', '8.622']
['046', '022', '46Ti', '42793.359', '398.193', '8.656']
['046', '023', '46V', '42799.899', '390.36', '8.486']
['046', '024', '46Cr', '42806.987', '381.979', '8.304']
['047', '018', '47Ar', '43745.111', '391.18', '8.323']
['047', '019', '47K', '43734.814', '400.184', '8.515']
['047', '020', '47Ca', '43727.659', '406.045', '8.639']
['047', '021', '47Sc', '43725.156', '407.255', '8.665']
['047', '022', '47Ti', '43724.044', '407.073', '8.661']
['047', '023', '47V', '43726.464', '403.36', '8.582']
['047', '024', '47Cr', '43733.397', '395.134', '8.407']
['048', '019', '48K', '44669.88', '404.683', '8.431']
['048', '020', '48Ca', '44657.279', '415.991', '8.666']
['048', '021', '48Sc', '44656.486', '415.49', '8.656']
['048', '022', '48Ti', '44651.983', '418.7', '8.723']
['048', '023', '48V', '44655.484', '413.905', '8.623']
['048', '024', '48Cr', '44656.63', '411.466', '8.572']
['048', '025', '48Mn', '44669.618', '397.185', '8.275']
['049', '019', '49K', '45603.178', '410.95', '8.387']
['049', '020', '49Ca', '45591.698', '421.137', '8.595']
['049', '021', '49Sc', '45585.924', '425.618', '8.686']
['049', '022', '49Ti', '45583.406', '426.842', '8.711']
['049', '023', '49V', '45583.497', '425.458', '8.683']
['049', '024', '49Cr', '45585.612', '422.049', '8.613']
['049', '025', '49Mn', '45592.816', '413.552', '8.44']
['050', '019', '50K', '46539.642', '414.052', '8.281']
['050', '020', '50Ca', '46524.91', '427.49', '8.55']
['050', '021', '50Sc', '46519.433', '431.674', '8.633']
['050', '022', '50Ti', '46512.032', '437.781', '8.756']
['050', '023', '50V', '46513.726', '434.794', '8.696']
['050', '024', '50Cr', '46512.177', '435.049', '8.701']
['050', '025', '50Mn', '46519.299', '426.634', '8.533']
['050', '026', '50Fe', '46526.935', '417.705', '8.354']
['051', '020', '51Ca', '47460.115', '431.851', '8.468']
['051', '021', '51Sc', '47452.246', '438.426', '8.597']
['051', '022', '51Ti', '47445.225', '444.154', '8.709']
['051', '023', '51V', '47442.24', '445.845', '8.742']
['051', '024', '51Cr', '47442.482', '444.31', '8.712']
['051', '025', '51Mn', '47445.178', '440.32', '8.634']
['051', '026', '51Fe', '47452.687', '431.519', '8.461']
['052', '020', '52Ca', '48394.959', '436.572', '8.396']
['052', '021', '52Sc', '48386.598', '443.639', '8.532']
['052', '022', '52Ti', '48376.982', '451.962', '8.692']
['052', '023', '52V', '48374.494', '453.156', '8.715']
['052', '024', '52Cr', '48370.008', '456.349', '8.776']
['052', '025', '52Mn', '48374.208', '450.856', '8.67']
['052', '026', '52Fe', '48376.071', '447.7', '8.61']
['053', '022', '53Ti', '49311.111', '457.398', '8.63']
['053', '023', '53V', '49305.581', '461.635', '8.71']
['053', '024', '53Cr', '49301.634', '464.289', '8.76']
['053', '025', '53Mn', '49301.72', '462.909', '8.734']
['053', '026', '53Fe', '49304.951', '458.384', '8.649']
['053', '027', '53Co', '49312.741', '449.302', '8.477']
['054', '021', '54Sc', '50255.726', '453.642', '8.401']
['054', '022', '54Ti', '50243.845', '464.23', '8.597']
['054', '023', '54V', '50239.033', '467.748', '8.662']
['054', '024', '54Cr', '50231.48', '474.008', '8.778']
['054', '025', '54Mn', '50232.346', '471.848', '8.738']
['054', '026', '54Fe', '50231.138', '471.763', '8.736']
['054', '027', '54Co', '50238.87', '462.738', '8.569']
['054', '028', '54Ni', '50247.159', '453.156', '8.392']
['055', '021', '55Sc', '51191.86', '457.073', '8.31']
['055', '022', '55Ti', '51179.259', '468.381', '8.516']
['055', '023', '55V', '51171.268', '475.079', '8.638']
['055', '024', '55Cr', '51164.799', '480.254', '8.732']
['055', '025', '55Mn', '51161.685', '482.075', '8.765']
['055', '026', '55Fe', '51161.405', '481.061', '8.747']
['055', '027', '55Co', '51164.346', '476.827', '8.67']
['055', '028', '55Ni', '51172.527', '467.353', '8.497']
['056', '022', '56Ti', '52113.483', '473.722', '8.459']
['056', '023', '56V', '52105.832', '480.08', '8.573']
['056', '024', '56Cr', '52096.12', '488.499', '8.723']
['056', '025', '56Mn', '52093.98', '489.345', '8.738']
['056', '026', '56Fe', '52089.773', '492.258', '8.79']
['056', '027', '56Co', '52093.828', '486.91', '8.695']
['056', '028', '56Ni', '52095.453', '483.992', '8.643']
['057', '022', '57Ti', '53050.377', '476.394', '8.358']
['057', '023', '57V', '53039.216', '486.261', '8.531']
['057', '024', '57Cr', '53030.371', '493.813', '8.663']
['057', '025', '57Mn', '53024.897', '497.994', '8.737']
['057', '026', '57Fe', '53021.693', '499.905', '8.77']
['057', '027', '57Co', '53022.018', '498.286', '8.742']
['057', '028', '57Ni', '53024.769', '494.242', '8.671']
['057', '029', '57Cu', '53033.03', '484.687', '8.503']
['058', '023', '58V', '53974.69', '490.353', '8.454']
['058', '024', '58Cr', '53962.559', '501.19', '8.641']
['058', '025', '58Mn', '53957.968', '504.488', '8.698']
['058', '026', '58Fe', '53951.213', '509.949', '8.792']
['058', '027', '58Co', '53953.01', '506.859', '8.739']
['058', '028', '58Ni', '53952.117', '506.459', '8.732']
['058', '029', '58Cu', '53960.172', '497.111', '8.571']
['058', '030', '58Zn', '53969.023', '486.966', '8.396']
['059', '023', '59V', '54909.324', '495.284', '8.395']
['059', '024', '59Cr', '54897.993', '505.322', '8.565']
['059', '025', '59Mn', '54889.892', '512.129', '8.68']
['059', '026', '59Fe', '54884.198', '516.53', '8.755']
['059', '027', '59Co', '54882.121', '517.313', '8.768']
['059', '028', '59Ni', '54882.683', '515.458', '8.737']
['059', '029', '59Cu', '54886.971', '509.877', '8.642']
['059', '030', '59Zn', '54895.557', '499.998', '8.475']
['060', '023', '60V', '55845.308', '498.865', '8.314']
['060', '024', '60Cr', '55830.877', '512.003', '8.533']
['060', '025', '60Mn', '55823.686', '517.901', '8.632']
['060', '026', '60Fe', '55814.943', '525.35', '8.756']
['060', '027', '60Co', '55814.195', '524.805', '8.747']
['060', '028', '60Ni', '55810.861', '526.846', '8.781']
['060', '029', '60Cu', '55816.478', '519.935', '8.666']
['060', '030', '60Zn', '55820.123', '514.997', '8.583']
['061', '024', '61Cr', '56766.691', '515.754', '8.455']
['061', '025', '61Mn', '56756.8', '524.352', '8.596']
['061', '026', '61Fe', '56748.928', '530.931', '8.704']
['061', '027', '61Co', '56744.439', '534.126', '8.756']
['061', '028', '61Ni', '56742.606', '534.666', '8.765']
['061', '029', '61Cu', '56744.332', '531.646', '8.716']
['061', '030', '61Zn', '56749.46', '525.225', '8.61']
['061', '031', '61Ga', '56758.204', '515.188', '8.446']
['062', '024', '62Cr', '57699.955', '522.056', '8.42']
['062', '025', '62Mn', '57691.814', '528.903', '8.531']
['062', '026', '62Fe', '57680.442', '538.982', '8.693']
['062', '027', '62Co', '57677.4', '540.731', '8.721']
['062', '028', '62Ni', '57671.575', '545.262', '8.795']
['062', '029', '62Cu', '57675.012', '540.532', '8.718']
['062', '030', '62Zn', '57676.128', '538.123', '8.679']
['062', '031', '62Ga', '57684.788', '528.169', '8.519']
['063', '025', '63Mn', '58624.998', '535.285', '8.497']
['063', '026', '63Fe', '58615.287', '543.702', '8.63']
['063', '027', '63Co', '58608.486', '549.21', '8.718']
['063', '028', '63Ni', '58604.302', '552.1', '8.763']
['063', '029', '63Cu', '58603.724', '551.385', '8.752']
['063', '030', '63Zn', '58606.58', '547.236', '8.686']
['063', '031', '63Ga', '58611.735', '540.788', '8.584']
['064', '025', '64Mn', '59560.222', '539.626', '8.432']
['064', '026', '64Fe', '59547.561', '550.994', '8.609']
['064', '027', '64Co', '59542.027', '555.234', '8.676']
['064', '028', '64Ni', '59534.21', '561.758', '8.777']
['064', '029', '64Cu', '59535.374', '559.301', '8.739']
['064', '030', '64Zn', '59534.283', '559.098', '8.736']
['064', '031', '64Ga', '59540.942', '551.146', '8.612']
['064', '032', '64Ge', '59544.915', '545.88', '8.529']
['065', '025', '65Mn', '60493.666', '545.747', '8.396']
['065', '026', '65Fe', '60482.945', '555.175', '8.541']
['065', '027', '65Co', '60474.144', '562.683', '8.657']
['065', '028', '65Ni', '60467.677', '567.856', '8.736']
['065', '029', '65Cu', '60465.028', '569.212', '8.757']
['065', '030', '65Zn', '60465.869', '567.077', '8.724']
['065', '031', '65Ga', '60468.613', '563.04', '8.662']
['065', '032', '65Ge', '60474.349', '556.011', '8.554']
['066', '026', '66Fe', '61415.749', '561.936', '8.514']
['066', '027', '66Co', '61408.698', '567.694', '8.601']
['066', '028', '66Ni', '61398.291', '576.808', '8.74']
['066', '029', '66Cu', '61397.528', '576.278', '8.731']
['066', '030', '66Zn', '61394.375', '578.136', '8.76']
['066', '031', '66Ga', '61399.04', '572.179', '8.669']
['066', '032', '66Ge', '61400.633', '569.292', '8.626']
['066', '033', '66As', '61410.242', '558.39', '8.46']
['067', '026', '67Fe', '62351.123', '566.128', '8.45']
['067', '027', '67Co', '62341.242', '574.715', '8.578']
['067', '028', '67Ni', '62332.048', '582.616', '8.696']
['067', '029', '67Cu', '62327.961', '585.409', '8.737']
['067', '030', '67Zn', '62326.889', '585.189', '8.734']
['067', '031', '67Ga', '62327.378', '583.406', '8.708']
['067', '032', '67Ge', '62331.089', '578.402', '8.633']
['067', '033', '67As', '62336.586', '571.611', '8.532']
['068', '026', '68Fe', '63285.177', '571.639', '8.406']
['068', '027', '68Co', '63276.446', '579.077', '8.516']
['068', '028', '68Ni', '63263.821', '590.408', '8.682']
['068', '029', '68Cu', '63261.207', '591.729', '8.702']
['068', '030', '68Zn', '63256.256', '595.387', '8.756']
['068', '031', '68Ga', '63258.666', '591.683', '8.701']
['068', '032', '68Ge', '63258.261', '590.795', '8.688']
['068', '033', '68As', '63265.83', '581.933', '8.558']
['068', '034', '68Se', '63270.009', '576.46', '8.477']
['069', '027', '69Co', '64209.29', '585.798', '8.49']
['069', '028', '69Ni', '64198.8', '594.995', '8.623']
['069', '029', '69Cu', '64192.532', '599.969', '8.695']
['069', '030', '69Zn', '64189.339', '601.869', '8.723']
['069', '031', '69Ga', '64187.918', '601.996', '8.725']
['069', '032', '69Ge', '64189.634', '598.987', '8.681']
['069', '033', '69As', '64193.134', '594.194', '8.612']
['069', '034', '69Se', '64199.413', '586.622', '8.502']
['070', '027', '70Co', '65145.144', '589.509', '8.422']
['070', '028', '70Ni', '65131.123', '602.237', '8.603']
['070', '029', '70Cu', '65126.786', '605.281', '8.647']
['070', '030', '70Zn', '65119.686', '611.087', '8.73']
['070', '031', '70Ga', '65119.83', '609.65', '8.709']
['070', '032', '70Ge', '65117.666', '610.521', '8.722']
['070', '033', '70As', '65123.378', '603.515', '8.622']
['070', '034', '70Se', '65125.157', '600.443', '8.578']
['071', '027', '71Co', '66078.408', '595.811', '8.392']
['071', '028', '71Ni', '66066.567', '606.358', '8.54']
['071', '029', '71Cu', '66058.545', '613.087', '8.635']
['071', '030', '71Zn', '66053.418', '616.921', '8.689']
['071', '031', '71Ga', '66050.094', '618.951', '8.718']
['071', '032', '71Ge', '66049.815', '617.937', '8.703']
['071', '033', '71As', '66051.318', '615.141', '8.664']
['071', '034', '71Se', '66055.581', '609.584', '8.586']
['071', '035', '71Br', '66061.13', '602.742', '8.489']
['071', '036', '71Kr', '66070.759', '591.82', '8.335']
['072', '028', '72Ni', '66999.321', '613.169', '8.516']
['072', '029', '72Cu', '66992.967', '618.23', '8.587']
['072', '030', '72Zn', '66984.108', '625.796', '8.692']
['072', '031', '72Ga', '66983.139', '625.472', '8.687']
['072', '032', '72Ge', '66978.631', '628.686', '8.732']
['072', '033', '72As', '66982.476', '623.548', '8.66']
['072', '034', '72Se', '66982.301', '622.429', '8.645']
['072', '035', '72Br', '66990.664', '612.773', '8.511']
['072', '036', '72Kr', '66995.232', '606.912', '8.429']
['073', '029', '73Cu', '67925.257', '625.505', '8.569']
['073', '030', '73Zn', '67918.323', '631.146', '8.646']
['073', '031', '73Ga', '67913.523', '634.653', '8.694']
['073', '032', '73Ge', '67911.413', '635.469', '8.705']
['073', '033', '73As', '67911.243', '634.346', '8.69']
['073', '034', '73Se', '67913.471', '630.825', '8.641']
['073', '035', '73Br', '67917.548', '625.454', '8.568']
['073', '036', '73Kr', '67924.115', '617.594', '8.46']
['074', '029', '74Cu', '68859.732', '630.596', '8.522']
['074', '030', '74Zn', '68849.517', '639.517', '8.642']
['074', '031', '74Ga', '68846.666', '641.075', '8.663']
['074', '032', '74Ge', '68840.783', '645.665', '8.725']
['074', '033', '74As', '68842.834', '642.32', '8.68']
['074', '034', '74Se', '68840.97', '642.891', '8.688']
['074', '035', '74Br', '68847.366', '635.202', '8.584']
['074', '036', '74Kr', '68849.83', '631.445', '8.533']
['074', '037', '74Rb', '68859.733', '620.248', '8.382']
['075', '029', '75Cu', '69793.112', '636.781', '8.49']
['075', '030', '75Zn', '69784.251', '644.349', '8.591']
['075', '031', '75Ga', '69777.745', '649.561', '8.661']
['075', '032', '75Ge', '69773.843', '652.171', '8.696']
['075', '033', '75As', '69772.156', '652.564', '8.701']
['075', '034', '75Se', '69772.508', '650.918', '8.679']
['075', '035', '75Br', '69775.027', '647.106', '8.628']
['075', '036', '75Kr', '69779.331', '641.509', '8.553']
['075', '037', '75Rb', '69785.922', '633.624', '8.448']
['075', '038', '75Sr', '69796.013', '622.24', '8.297']
['076', '029', '76Cu', '70727.75', '641.708', '8.444']
['076', '030', '76Zn', '70716.075', '652.09', '8.58']
['076', '031', '76Ga', '70711.407', '655.464', '8.625']
['076', '032', '76Ge', '70703.98', '661.598', '8.705']
['076', '033', '76As', '70704.393', '659.893', '8.683']
['076', '034', '76Se', '70700.919', '662.073', '8.711']
['076', '035', '76Br', '70705.371', '656.327', '8.636']
['076', '036', '76Kr', '70706.135', '654.27', '8.609']
['076', '037', '76Rb', '70714.158', '644.954', '8.486']
['076', '038', '76Sr', '70719.887', '637.931', '8.394']
['077', '030', '77Zn', '71650.989', '656.741', '8.529']
['077', '031', '77Ga', '71643.206', '663.231', '8.613']
['077', '032', '77Ge', '71637.473', '667.671', '8.671']
['077', '033', '77As', '71634.259', '669.591', '8.696']
['077', '034', '77Se', '71633.065', '669.492', '8.695']
['077', '035', '77Br', '71633.919', '667.345', '8.667']
['077', '036', '77Kr', '71636.474', '663.497', '8.617']
['077', '037', '77Rb', '71641.307', '657.37', '8.537']
['077', '038', '77Sr', '71647.817', '649.567', '8.436']
['078', '030', '78Zn', '72583.863', '663.433', '8.506']
['078', '031', '78Ga', '72576.985', '669.017', '8.577']
['078', '032', '78Ge', '72568.319', '676.39', '8.672']
['078', '033', '78As', '72566.853', '676.563', '8.674']
['078', '034', '78Se', '72562.133', '679.99', '8.718']
['078', '035', '78Br', '72565.196', '675.633', '8.662']
['078', '036', '78Kr', '72563.957', '675.578', '8.661']
['078', '037', '78Rb', '72570.69', '667.552', '8.558']
['078', '038', '78Sr', '72573.941', '663.008', '8.5']
['079', '031', '79Ga', '73509.676', '675.892', '8.556']
['079', '032', '79Ge', '73502.185', '682.089', '8.634']
['079', '033', '79As', '73497.527', '685.454', '8.677']
['079', '034', '79Se', '73494.735', '686.952', '8.696']
['079', '035', '79Br', '73494.074', '686.321', '8.688']
['079', '036', '79Kr', '73495.188', '683.913', '8.657']
['079', '037', '79Rb', '73498.317', '679.491', '8.601']
['079', '038', '79Sr', '73503.132', '673.382', '8.524']
['079', '039', '79Y', '73509.738', '665.483', '8.424']
['080', '030', '80Zn', '74452.351', '674.075', '8.426']
['080', '031', '80Ga', '74444.54', '680.593', '8.507']
['080', '032', '80Ge', '74433.654', '690.186', '8.627']
['080', '033', '80As', '74430.499', '692.047', '8.651']
['080', '034', '80Se', '74424.387', '696.866', '8.711']
['080', '035', '80Br', '74425.747', '694.213', '8.678']
['080', '036', '80Kr', '74423.233', '695.434', '8.693']
['080', '037', '80Rb', '74428.441', '688.932', '8.612']
['080', '038', '80Sr', '74429.795', '686.285', '8.579']
['080', '039', '80Y', '74438.372', '676.414', '8.455']
['080', '040', '80Zr', '74443.561', '669.932', '8.374']
['081', '031', '81Ga', '75377.194', '687.504', '8.488']
['081', '032', '81Ge', '75368.363', '695.042', '8.581']
['081', '033', '81As', '75361.619', '700.493', '8.648']
['081', '034', '81Se', '75357.252', '703.567', '8.686']
['081', '035', '81Br', '75355.155', '704.37', '8.696']
['081', '036', '81Kr', '75354.925', '703.307', '8.683']
['081', '037', '81Rb', '75356.653', '700.285', '8.645']
['081', '038', '81Sr', '75360.069', '695.576', '8.587']
['081', '039', '81Y', '75365.066', '689.286', '8.51']
['081', '040', '81Zr', '75372.085', '680.973', '8.407']
['082', '032', '82Ge', '76300.537', '702.433', '8.566']
['082', '033', '82As', '76295.326', '706.351', '8.614']
['082', '034', '82Se', '76287.541', '712.843', '8.693']
['082', '035', '82Br', '76287.128', '711.963', '8.682']
['082', '036', '82Kr', '76283.524', '714.274', '8.711']
['082', '037', '82Rb', '76287.414', '709.09', '8.647']
['082', '038', '82Sr', '76287.083', '708.127', '8.636']
['082', '039', '82Y', '76294.39', '699.527', '8.531']
['083', '033', '83As', '77227.26', '713.982', '8.602']
['083', '034', '83Se', '77221.288', '718.661', '8.659']
['083', '035', '83Br', '77217.109', '721.547', '8.693']
['083', '036', '83Kr', '77215.625', '721.737', '8.696']
['083', '037', '83Rb', '77216.021', '720.048', '8.675']
['083', '038', '83Sr', '77217.79', '716.986', '8.638']
['083', '039', '83Y', '77221.744', '711.738', '8.575']
['083', '040', '83Zr', '77227.103', '705.086', '8.495']
['083', '041', '83Nb', '77234.092', '696.804', '8.395']
['084', '034', '84Se', '78152.171', '727.343', '8.659']
['084', '035', '84Br', '78149.813', '728.408', '8.672']
['084', '036', '84Kr', '78144.67', '732.258', '8.717']
['084', '037', '84Rb', '78146.84', '728.794', '8.676']
['084', '038', '84Sr', '78145.435', '728.906', '8.677']
['084', '039', '84Y', '78151.408', '721.64', '8.591']
['085', '034', '85Se', '79087.189', '731.891', '8.61']
['085', '035', '85Br', '79080.496', '737.29', '8.674']
['085', '036', '85Kr', '79077.115', '739.378', '8.699']
['085', '037', '85Rb', '79075.917', '739.283', '8.697']
['085', '038', '85Sr', '79076.471', '737.436', '8.676']
['085', '039', '85Y', '79079.22', '733.393', '8.628']
['085', '040', '85Zr', '79083.401', '727.919', '8.564']
['085', '041', '85Nb', '79088.89', '721.136', '8.484']
['086', '034', '86Se', '80020.57', '738.075', '8.582']
['086', '035', '86Br', '80014.96', '742.392', '8.632']
['086', '036', '86Kr', '80006.824', '749.235', '8.712']
['086', '037', '86Rb', '80006.831', '747.934', '8.697']
['086', '038', '86Sr', '80004.544', '748.928', '8.708']
['086', '039', '86Y', '80009.272', '742.906', '8.638']
['086', '040', '86Zr', '80010.245', '740.64', '8.612']
['086', '041', '86Nb', '80017.704', '731.888', '8.51']
['086', '042', '86Mo', '80022.463', '725.835', '8.44']
['087', '034', '87Se', '80956.025', '742.185', '8.531']
['087', '035', '87Br', '80948.237', '748.68', '8.606']
['087', '036', '87Kr', '80940.874', '754.75', '8.675']
['087', '037', '87Rb', '80936.474', '757.856', '8.711']
['087', '038', '87Sr', '80935.681', '757.356', '8.705']
['087', '039', '87Y', '80937.031', '754.712', '8.675']
['087', '040', '87Zr', '80940.191', '750.259', '8.624']
['087', '041', '87Nb', '80944.848', '744.309', '8.555']
['087', '042', '87Mo', '80950.827', '737.037', '8.472']
['088', '034', '88Se', '81890.219', '747.557', '8.495']
['088', '035', '88Br', '81882.858', '753.624', '8.564']
['088', '036', '88Kr', '81873.385', '761.804', '8.657']
['088', '037', '88Rb', '81869.957', '763.939', '8.681']
['088', '038', '88Sr', '81864.133', '768.469', '8.733']
['088', '039', '88Y', '81867.245', '764.064', '8.683']
['088', '040', '88Zr', '81867.41', '762.606', '8.666']
['088', '041', '88Nb', '81874.452', '754.27', '8.571']
['088', '042', '88Mo', '81877.311', '750.118', '8.524']
['089', '035', '89Br', '82816.512', '759.536', '8.534']
['089', '036', '89Kr', '82807.841', '766.913', '8.617']
['089', '037', '89Rb', '82802.347', '771.114', '8.664']
['089', '038', '89Sr', '82797.34', '774.828', '8.706']
['089', '039', '89Y', '82795.336', '775.538', '8.714']
['089', '040', '89Zr', '82797.658', '771.923', '8.673']
['089', '041', '89Nb', '82801.366', '766.922', '8.617']
['089', '042', '89Mo', '82806.501', '760.493', '8.545']
['090', '035', '90Br', '83751.956', '763.657', '8.485']
['090', '036', '90Kr', '83741.095', '773.225', '8.591']
['090', '037', '90Rb', '83736.192', '776.834', '8.631']
['090', '038', '90Sr', '83729.102', '782.631', '8.696']
['090', '039', '90Y', '83728.045', '782.395', '8.693']
['090', '040', '90Zr', '83725.254', '783.893', '8.71']
['090', '041', '90Nb', '83730.854', '776.999', '8.633']
['090', '042', '90Mo', '83732.832', '773.728', '8.597']
['090', '043', '90Tc', '83741.278', '763.988', '8.489']
['091', '035', '91Br', '84686.56', '768.618', '8.446']
['091', '036', '91Kr', '84676.249', '777.636', '8.545']
['091', '037', '91Rb', '84669.303', '783.289', '8.608']
['091', '038', '91Sr', '84662.892', '788.406', '8.664']
['091', '039', '91Y', '84659.681', '790.324', '8.685']
['091', '040', '91Zr', '84657.625', '791.087', '8.693']
['091', '041', '91Nb', '84658.372', '789.046', '8.671']
['091', '042', '91Mo', '84662.289', '783.836', '8.614']
['091', '043', '91Tc', '84668.002', '776.83', '8.537']
['092', '035', '92Br', '85622.984', '771.76', '8.389']
['092', '036', '92Kr', '85610.268', '783.182', '8.513']
['092', '037', '92Rb', '85603.77', '788.387', '8.569']
['092', '038', '92Sr', '85595.163', '795.701', '8.649']
['092', '039', '92Y', '85592.707', '796.863', '8.662']
['092', '040', '92Zr', '85588.555', '799.722', '8.693']
['092', '041', '92Nb', '85590.05', '796.934', '8.662']
['092', '042', '92Mo', '85589.182', '796.508', '8.658']
['092', '043', '92Tc', '85596.541', '787.856', '8.564']
['093', '036', '93Kr', '86546.527', '786.488', '8.457']
['093', '037', '93Rb', '86537.418', '794.304', '8.541']
['093', '038', '93Sr', '86529.44', '800.989', '8.613']
['093', '039', '93Y', '86524.791', '804.344', '8.649']
['093', '040', '93Zr', '86521.386', '806.456', '8.672']
['093', '041', '93Nb', '86520.784', '805.765', '8.664']
['093', '042', '93Mo', '86520.678', '804.577', '8.651']
['093', '043', '93Tc', '86523.367', '800.595', '8.609']
['093', '044', '93Ru', '86529.189', '793.48', '8.532']
['094', '037', '94Rb', '87472.977', '798.31', '8.493']
['094', '038', '94Sr', '87462.179', '807.815', '8.594']
['094', '039', '94Y', '87458.16', '810.541', '8.623']
['094', '040', '94Zr', '87452.73', '814.677', '8.667']
['094', '041', '94Nb', '87453.122', '812.993', '8.649']
['094', '042', '94Mo', '87450.566', '814.256', '8.662']
['094', '043', '94Tc', '87454.31', '809.217', '8.609']
['094', '044', '94Ru', '87455.385', '806.849', '8.584']
['095', '037', '95Rb', '88407.17', '803.683', '8.46']
['095', '038', '95Sr', '88397.396', '812.163', '8.549']
['095', '039', '95Y', '88390.795', '817.471', '8.605']
['095', '040', '95Zr', '88385.833', '821.14', '8.644']
['095', '041', '95Nb', '88384.198', '821.481', '8.647']
['095', '042', '95Mo', '88382.762', '821.625', '8.649']
['095', '043', '95Tc', '88383.941', '819.152', '8.623']
['095', '044', '95Ru', '88385.997', '815.802', '8.587']
['095', '045', '95Rh', '88390.596', '809.91', '8.525']
['096', '037', '96Rb', '89343.293', '807.125', '8.408']
['096', '038', '96Sr', '89331.068', '818.057', '8.521']
['096', '039', '96Y', '89325.149', '822.682', '8.57']
['096', '040', '96Zr', '89317.542', '828.996', '8.635']
['096', '041', '96Nb', '89316.87', '828.375', '8.629']
['096', '042', '96Mo', '89313.173', '830.779', '8.654']
['096', '043', '96Tc', '89315.635', '827.023', '8.615']
['096', '044', '96Ru', '89314.869', '826.496', '8.609']
['096', '045', '96Rh', '89320.751', '819.32', '8.535']
['096', '046', '96Pd', '89323.689', '815.089', '8.491']
['097', '037', '97Rb', '90277.652', '812.331', '8.375']
['097', '038', '97Sr', '90266.713', '821.977', '8.474']
['097', '039', '97Y', '90258.732', '828.665', '8.543']
['097', '040', '97Zr', '90251.533', '834.571', '8.604']
['097', '041', '97Nb', '90248.363', '836.448', '8.623']
['097', '042', '97Mo', '90245.917', '837.6', '8.635']
['097', '043', '97Tc', '90245.726', '836.497', '8.624']
['097', '044', '97Ru', '90246.323', '834.607', '8.604']
['097', '045', '97Rh', '90249.334', '830.303', '8.56']
['097', '046', '97Pd', '90253.613', '824.73', '8.502']
['097', '047', '97Ag', '90260.082', '816.968', '8.422']
['098', '037', '98Rb', '91213.286', '816.263', '8.329']
['098', '038', '98Sr', '91200.349', '827.906', '8.448']
['098', '039', '98Y', '91194.017', '832.945', '8.499']
['098', '040', '98Zr', '91184.686', '840.983', '8.581']
['098', '041', '98Nb', '91181.933', '842.442', '8.596']
['098', '042', '98Mo', '91176.84', '846.243', '8.635']
['098', '043', '98Tc', '91178.012', '843.777', '8.61']
['098', '044', '98Ru', '91175.705', '844.79', '8.62']
['098', '045', '98Rh', '91180.243', '838.959', '8.561']
['098', '046', '98Pd', '91181.607', '836.302', '8.534']
['098', '047', '98Ag', '91189.336', '827.279', '8.442']
['098', '048', '98Cd', '91194.255', '821.067', '8.378']
['099', '037', '99Rb', '92148.12', '820.994', '8.293']
['099', '038', '99Sr', '92136.299', '831.522', '8.399']
['099', '039', '99Y', '92127.777', '838.75', '8.472']
['099', '040', '99Zr', '92119.699', '845.535', '8.541']
['099', '041', '99Nb', '92114.629', '849.312', '8.579']
['099', '042', '99Mo', '92110.48', '852.168', '8.608']
['099', '043', '99Tc', '92108.611', '852.743', '8.614']
['099', '044', '99Ru', '92107.806', '852.255', '8.609']
['099', '045', '99Rh', '92109.338', '849.429', '8.58']
['099', '046', '99Pd', '92112.213', '845.261', '8.538']
['099', '047', '99Ag', '92117.13', '839.051', '8.475']
['100', '038', '100Sr', '93069.763', '837.623', '8.376']
['100', '039', '100Y', '93062.182', '843.911', '8.439']
['100', '040', '100Zr', '93052.361', '852.438', '8.524']
['100', '041', '100Nb', '93048.511', '854.995', '8.55']
['100', '042', '100Mo', '93041.755', '860.458', '8.605']
['100', '043', '100Tc', '93041.412', '859.508', '8.595']
['100', '044', '100Ru', '93037.698', '861.928', '8.619']
['100', '045', '100Rh', '93040.822', '857.511', '8.575']
['100', '046', '100Pd', '93040.669', '856.37', '8.564']
['100', '047', '100Ag', '93047.234', '848.512', '8.485']
['100', '048', '100Cd', '93050.623', '843.83', '8.438']
['100', '049', '100In', '93060.192', '832.967', '8.33']
['100', '050', '100Sn', '93067.071', '824.795', '8.248']
['101', '037', '101Rb', '94018.388', '829.857', '8.216']
['101', '038', '101Sr', '94006.067', '840.884', '8.326']
['101', '039', '101Y', '93996.056', '849.602', '8.412']
['101', '040', '101Zr', '93986.995', '857.37', '8.489']
['101', '041', '101Nb', '93981.002', '862.069', '8.535']
['101', '042', '101Mo', '93975.922', '865.856', '8.573']
['101', '043', '101Tc', '93972.586', '867.899', '8.593']
['101', '044', '101Ru', '93970.462', '868.73', '8.601']
['101', '045', '101Rh', '93970.492', '867.406', '8.588']
['101', '046', '101Pd', '93971.961', '864.644', '8.561']
['101', '047', '101Ag', '93975.658', '859.653', '8.511']
['101', '048', '101Cd', '93980.617', '853.401', '8.45']
['102', '038', '102Sr', '94939.891', '846.626', '8.3']
['102', '039', '102Y', '94930.57', '854.653', '8.379']
['102', '040', '102Zr', '94920.209', '863.721', '8.468']
['102', '041', '102Nb', '94915.088', '867.549', '8.505']
['102', '042', '102Mo', '94907.37', '873.973', '8.568']
['102', '043', '102Tc', '94905.85', '874.2', '8.571']
['102', '044', '102Ru', '94900.807', '877.95', '8.607']
['102', '045', '102Rh', '94902.619', '874.844', '8.577']
['102', '046', '102Pd', '94900.958', '875.212', '8.581']
['102', '047', '102Ag', '94906.107', '868.77', '8.517']
['102', '048', '102Cd', '94908.183', '865.4', '8.484']
['102', '049', '102In', '94916.64', '855.65', '8.389']
['102', '050', '102Sn', '94921.909', '849.088', '8.324']
['103', '040', '103Zr', '95855.073', '868.422', '8.431']
['103', '041', '103Nb', '95847.612', '874.59', '8.491']
['103', '042', '103Mo', '95841.571', '879.338', '8.537']
['103', '043', '103Tc', '95837.313', '882.302', '8.566']
['103', '044', '103Ru', '95834.141', '884.182', '8.584']
['103', '045', '103Rh', '95832.866', '884.163', '8.584']
['103', '046', '103Pd', '95832.898', '882.837', '8.571']
['103', '047', '103Ag', '95835.075', '879.367', '8.538']
['103', '048', '103Cd', '95838.706', '874.443', '8.49']
['103', '049', '103In', '95844.245', '867.61', '8.423']
['104', '041', '104Nb', '96782.206', '879.561', '8.457']
['104', '042', '104Mo', '96773.585', '886.889', '8.528']
['104', '043', '104Tc', '96770.914', '888.267', '8.541']
['104', '044', '104Ru', '96764.804', '893.083', '8.587']
['104', '045', '104Rh', '96765.433', '891.162', '8.569']
['104', '046', '104Pd', '96762.481', '892.82', '8.585']
['104', '047', '104Ag', '96766.249', '887.758', '8.536']
['104', '048', '104Cd', '96766.874', '885.84', '8.518']
['104', '049', '104In', '96774.228', '877.193', '8.435']
['104', '050', '104Sn', '96778.237', '871.89', '8.384']
['105', '041', '105Nb', '97715.07', '886.263', '8.441']
['105', '042', '105Mo', '97708.069', '891.97', '8.495']
['105', '043', '105Tc', '97702.608', '896.138', '8.535']
['105', '044', '105Ru', '97698.459', '898.994', '8.562']
['105', '045', '105Rh', '97696.03', '900.129', '8.573']
['105', '046', '105Pd', '97694.952', '899.914', '8.571']
['105', '047', '105Ag', '97695.786', '897.787', '8.55']
['105', '048', '105Cd', '97698.013', '894.266', '8.517']
['105', '049', '105In', '97702.351', '888.635', '8.463']
['105', '050', '105Sn', '97708.061', '881.632', '8.396']
['105', '051', '105Sb', '97716.99', '871.409', '8.299']
['106', '042', '106Mo', '98640.648', '898.957', '8.481']
['106', '043', '106Tc', '98636.617', '901.694', '8.507']
['106', '044', '106Ru', '98629.559', '907.459', '8.561']
['106', '045', '106Rh', '98629.009', '906.716', '8.554']
['106', '046', '106Pd', '98624.957', '909.474', '8.58']
['106', '047', '106Ag', '98627.411', '905.727', '8.545']
['106', '048', '106Cd', '98626.705', '905.14', '8.539']
['106', '049', '106In', '98632.72', '897.831', '8.47']
['106', '050', '106Sn', '98635.385', '893.873', '8.433']
['106', '052', '106Te', '98653.583', '873.088', '8.237']
['107', '042', '107Mo', '99575.457', '903.713', '8.446']
['107', '043', '107Tc', '99568.786', '909.091', '8.496']
['107', '044', '107Ru', '99563.455', '913.128', '8.534']
['107', '045', '107Rh', '99560.001', '915.289', '8.554']
['107', '046', '107Pd', '99557.985', '916.012', '8.561']
['107', '047', '107Ag', '99557.44', '915.263', '8.554']
['107', '048', '107Cd', '99558.346', '913.064', '8.533']
['107', '049', '107In', '99561.26', '908.857', '8.494']
['107', '050', '107Sn', '99565.729', '903.094', '8.44']
['108', '043', '108Tc', '100503.43', '914.012', '8.463']
['108', '044', '108Ru', '100495.199', '920.95', '8.527']
['108', '045', '108Rh', '100493.338', '921.517', '8.533']
['108', '046', '108Pd', '100488.323', '925.239', '8.567']
['108', '047', '108Ag', '100489.734', '922.535', '8.542']
['108', '048', '108Cd', '100487.573', '923.402', '8.55']
['108', '049', '108In', '100492.198', '917.484', '8.495']
['108', '050', '108Sn', '100493.762', '914.627', '8.469']
['108', '052', '108Te', '100509.061', '896.741', '8.303']
['109', '043', '109Tc', '101436.334', '920.673', '8.447']
['109', '044', '109Ru', '101429.513', '926.201', '8.497']
['109', '045', '109Rh', '101424.841', '929.58', '8.528']
['109', '046', '109Pd', '101421.734', '931.393', '8.545']
['109', '047', '109Ag', '101420.108', '931.727', '8.548']
['109', '048', '109Cd', '101419.811', '930.73', '8.539']
['109', '049', '109In', '101421.319', '927.928', '8.513']
['109', '050', '109Sn', '101424.658', '923.296', '8.471']
['109', '051', '109Sb', '101430.527', '916.134', '8.405']
['109', '052', '109Te', '101438.665', '906.702', '8.318']
['109', '053', '109I', '101448.154', '895.92', '8.219']
['110', '043', '110Tc', '102371.408', '925.165', '8.411']
['110', '044', '110Ru', '102361.877', '933.402', '8.485']
['110', '045', '110Rh', '102358.566', '935.42', '8.504']
['110', '046', '110Pd', '102352.486', '940.207', '8.547']
['110', '047', '110Ag', '102352.864', '938.536', '8.532']
['110', '048', '110Cd', '102349.46', '940.646', '8.551']
['110', '049', '110In', '102352.827', '935.986', '8.509']
['110', '050', '110Sn', '102352.947', '934.572', '8.496']
['110', '052', '110Te', '102365.489', '919.444', '8.359']
['110', '054', '110Xe', '102384.847', '897.499', '8.159']
['111', '043', '111Tc', '103304.642', '931.496', '8.392']
['111', '044', '111Ru', '103296.681', '938.164', '8.452']
['111', '045', '111Rh', '103290.483', '943.068', '8.496']
['111', '046', '111Pd', '103286.325', '945.933', '8.522']
['111', '047', '111Ag', '103283.597', '947.368', '8.535']
['111', '048', '111Cd', '103282.05', '947.622', '8.537']
['111', '049', '111In', '103282.4', '945.978', '8.522']
['111', '050', '111Sn', '103284.34', '942.745', '8.493']
['111', '051', '111Sb', '103288.886', '936.905', '8.441']
['111', '052', '111Te', '103295.784', '928.715', '8.367']
['112', '043', '112Tc', '104239.357', '936.347', '8.36']
['112', '044', '112Ru', '104229.366', '945.045', '8.438']
['112', '045', '112Rh', '104224.595', '948.523', '8.469']
['112', '046', '112Pd', '104217.488', '954.336', '8.521']
['112', '047', '112Ag', '104216.689', '953.842', '8.516']
['112', '048', '112Cd', '104212.221', '957.016', '8.545']
['112', '049', '112In', '104214.295', '953.649', '8.515']
['112', '050', '112Sn', '104213.119', '953.532', '8.514']
['112', '051', '112Sb', '104219.668', '945.69', '8.444']
['112', '052', '112Te', '104223.458', '940.606', '8.398']
['112', '054', '112Xe', '104239.766', '921.712', '8.23']
['113', '044', '113Ru', '105164.14', '949.836', '8.406']
['113', '045', '113Rh', '105157.149', '955.534', '8.456']
['113', '046', '113Pd', '105151.628', '959.761', '8.493']
['113', '047', '113Ag', '105147.774', '962.322', '8.516']
['113', '048', '113Cd', '105145.246', '963.556', '8.527']
['113', '049', '113In', '105144.415', '963.094', '8.523']
['113', '050', '113Sn', '105144.941', '961.275', '8.507']
['113', '051', '113Sb', '105148.343', '956.58', '8.465']
['113', '052', '113Te', '105153.905', '949.724', '8.405']
['113', '053', '113I', '105160.611', '941.725', '8.334']
['113', '054', '113Xe', '105169.14', '931.903', '8.247']
['113', '055', '113Cs', '105179.019', '920.731', '8.148']
['114', '045', '114Rh', '106091.693', '960.555', '8.426']
['114', '046', '114Pd', '106083.315', '967.64', '8.488']
['114', '047', '114Ag', '106081.352', '968.309', '8.494']
['114', '048', '114Cd', '106075.769', '972.599', '8.532']
['114', '049', '114In', '106076.707', '970.368', '8.512']
['114', '050', '114Sn', '106074.207', '971.574', '8.523']
['114', '051', '114Sb', '106079.742', '964.746', '8.463']
['114', '052', '114Te', '106081.857', '961.338', '8.433']
['114', '054', '114Xe', '106095.638', '944.97', '8.289']
['114', '056', '114Ba', '106115.752', '922.269', '8.09']
['115', '044', '115Ru', '107032.898', '960.209', '8.35']
['115', '045', '115Rh', '107024.607', '967.206', '8.41']
['115', '046', '115Pd', '107017.906', '972.614', '8.458']
['115', '047', '115Ag', '107012.805', '976.422', '8.491']
['115', '048', '115Cd', '107009.193', '978.74', '8.511']
['115', '049', '115In', '107007.236', '979.404', '8.517']
['115', '050', '115Sn', '107006.226', '979.121', '8.514']
['115', '051', '115Sb', '107008.748', '975.305', '8.481']
['115', '052', '115Te', '107013.177', '969.583', '8.431']
['115', '053', '115I', '107018.391', '963.076', '8.375']
['115', '054', '115Xe', '107025.561', '954.612', '8.301']
['116', '045', '116Rh', '107959.571', '971.808', '8.378']
['116', '046', '116Pd', '107949.84', '980.245', '8.45']
['116', '047', '116Ag', '107946.719', '982.073', '8.466']
['116', '048', '116Cd', '107940.059', '987.44', '8.512']
['116', '049', '116In', '107940.017', '986.188', '8.502']
['116', '050', '116Sn', '107936.227', '988.684', '8.523']
['116', '051', '116Sb', '107940.424', '983.195', '8.476']
['116', '052', '116Te', '107941.465', '980.86', '8.456']
['116', '053', '116I', '107948.733', '972.299', '8.382']
['116', '054', '116Xe', '107952.665', '967.074', '8.337']
['117', '046', '117Pd', '108884.764', '984.887', '8.418']
['117', '047', '117Ag', '108878.513', '989.844', '8.46']
['117', '048', '117Cd', '108873.847', '993.217', '8.489']
['117', '049', '117In', '108870.816', '994.955', '8.504']
['117', '050', '117Sn', '108868.85', '995.627', '8.51']
['117', '051', '117Sb', '108870.094', '993.09', '8.488']
['117', '052', '117Te', '108873.131', '988.76', '8.451']
['117', '053', '117I', '108877.282', '983.315', '8.404']
['117', '054', '117Xe', '108883.021', '976.283', '8.344']
['117', '055', '117Cs', '108890.255', '967.756', '8.271']
['118', '046', '118Pd', '109817.318', '991.898', '8.406']
['118', '047', '118Ag', '109812.707', '995.216', '8.434']
['118', '048', '118Cd', '109805.057', '1001.572', '8.488']
['118', '049', '118In', '109804.025', '1001.311', '8.486']
['118', '050', '118Sn', '109799.087', '1004.955', '8.517']
['118', '051', '118Sb', '109802.234', '1000.515', '8.479']
['118', '052', '118Te', '109802.001', '999.455', '8.47']
['118', '053', '118I', '109808.24', '991.923', '8.406']
['118', '054', '118Xe', '109810.621', '988.248', '8.375']
['118', '055', '118Cs', '109819.78', '977.796', '8.286']
['119', '047', '119Ag', '110745.211', '1002.277', '8.422']
['119', '048', '119Cd', '110739.35', '1006.845', '8.461']
['119', '049', '119In', '110735.045', '1009.856', '8.486']
['119', '050', '119Sn', '110732.169', '1011.438', '8.499']
['119', '051', '119Sb', '110732.25', '1010.065', '8.488']
['119', '052', '119Te', '110734.032', '1006.989', '8.462']
['119', '053', '119I', '110736.939', '1002.789', '8.427']
['119', '054', '119Xe', '110741.4', '997.035', '8.378']
['119', '055', '119Cs', '110747.378', '989.763', '8.317']
['119', '056', '119Ba', '110754.582', '981.266', '8.246']
['120', '046', '120Pd', '111685.626', '1002.721', '8.356']
['120', '047', '120Ag', '111679.615', '1007.438', '8.395']
['120', '048', '120Cd', '111670.78', '1014.98', '8.458']
['120', '049', '120In', '111668.503', '1015.964', '8.466']
['120', '050', '120Sn', '111662.627', '1020.546', '8.505']
['120', '051', '120Sb', '111664.797', '1017.083', '8.476']
['120', '052', '120Te', '111663.305', '1017.282', '8.477']
['120', '053', '120I', '111668.409', '1010.884', '8.424']
['120', '054', '120Xe', '111669.516', '1008.484', '8.404']
['120', '055', '120Cs', '111677.288', '999.419', '8.328']
['120', '056', '120Ba', '111681.776', '993.637', '8.28']
['121', '047', '121Ag', '112612.099', '1014.52', '8.384']
['121', '048', '121Cd', '112605.188', '1020.137', '8.431']
['121', '049', '121In', '112599.896', '1024.136', '8.464']
['121', '050', '121Sn', '112596.022', '1026.717', '8.485']
['121', '051', '121Sb', '112595.12', '1026.325', '8.482']
['121', '052', '121Te', '112595.653', '1024.499', '8.467']
['121', '053', '121I', '112597.406', '1021.453', '8.442']
['121', '054', '121Xe', '112600.709', '1016.856', '8.404']
['121', '055', '121Cs', '112605.571', '1010.701', '8.353']
['121', '056', '121Ba', '112611.42', '1003.559', '8.294']
['122', '048', '122Cd', '113537.012', '1027.879', '8.425']
['122', '049', '122In', '113533.651', '1029.946', '8.442']
['122', '050', '122Sn', '113526.774', '1035.53', '8.488']
['122', '051', '122Sb', '113527.878', '1033.132', '8.468']
['122', '052', '122Te', '113525.384', '1034.333', '8.478']
['122', '053', '122I', '113529.107', '1029.317', '8.437']
['122', '054', '122Xe', '113529.321', '1027.81', '8.425']
['122', '055', '122Cs', '113536.025', '1019.812', '8.359']
['122', '056', '122Ba', '113539.045', '1015.499', '8.324']
['123', '048', '123Cd', '114471.926', '1032.53', '8.395']
['123', '049', '123In', '114465.299', '1037.864', '8.438']
['123', '050', '123Sn', '114460.393', '1041.476', '8.467']
['123', '051', '123Sb', '114458.479', '1042.097', '8.472']
['123', '052', '123Te', '114458.02', '1041.263', '8.466']
['123', '053', '123I', '114458.738', '1039.251', '8.449']
['123', '054', '123Xe', '114460.921', '1035.775', '8.421']
['123', '055', '123Cs', '114464.615', '1030.788', '8.38']
['123', '056', '123Ba', '114469.493', '1024.616', '8.33']
['124', '048', '124Cd', '115404.02', '1040.001', '8.387']
['124', '049', '124In', '115399.339', '1043.389', '8.414']
['124', '050', '124Sn', '115391.471', '1049.963', '8.467']
['124', '051', '124Sb', '115391.576', '1048.565', '8.456']
['124', '052', '124Te', '115388.161', '1050.686', '8.473']
['124', '053', '124I', '115390.81', '1046.745', '8.441']
['124', '054', '124Xe', '115390.004', '1046.257', '8.438']
['124', '055', '124Cs', '115395.422', '1039.546', '8.383']
['124', '056', '124Ba', '115397.552', '1036.123', '8.356']
['124', '057', '124La', '115405.871', '1026.51', '8.278']
['125', '048', '125Cd', '116338.864', '1044.723', '8.358']
['125', '049', '125In', '116331.233', '1051.06', '8.408']
['125', '050', '125Sn', '116325.303', '1055.696', '8.446']
['125', '051', '125Sb', '116322.435', '1057.271', '8.458']
['125', '052', '125Te', '116321.157', '1057.256', '8.458']
['125', '053', '125I', '116320.832', '1056.287', '8.45']
['125', '054', '125Xe', '116321.966', '1053.861', '8.431']
['125', '055', '125Cs', '116324.559', '1049.974', '8.4']
['125', '056', '125Ba', '116328.468', '1044.772', '8.358']
['125', '057', '125La', '116333.866', '1038.081', '8.305']
['126', '048', '126Cd', '117271.388', '1051.764', '8.347']
['126', '049', '126In', '117265.397', '1056.462', '8.385']
['126', '050', '126Sn', '117256.676', '1063.889', '8.444']
['126', '051', '126Sb', '117255.785', '1063.487', '8.44']
['126', '052', '126Te', '117251.609', '1066.369', '8.463']
['126', '053', '126I', '117253.252', '1063.433', '8.44']
['126', '054', '126Xe', '117251.483', '1063.909', '8.444']
['126', '055', '126Cs', '117255.796', '1058.303', '8.399']
['126', '056', '126Ba', '117256.96', '1055.845', '8.38']
['126', '057', '126La', '117264.149', '1047.363', '8.312']
['126', '058', '126Ce', '117267.787', '1042.432', '8.273']
['127', '048', '127Cd', '118206.692', '1056.025', '8.315']
['127', '049', '127In', '118197.711', '1063.713', '8.376']
['127', '050', '127Sn', '118190.691', '1069.44', '8.421']
['127', '051', '127Sb', '118186.979', '1071.858', '8.44']
['127', '052', '127Te', '118184.887', '1072.657', '8.446']
['127', '053', '127I', '118183.674', '1072.577', '8.445']
['127', '054', '127Xe', '118183.825', '1071.132', '8.434']
['127', '055', '127Cs', '118185.395', '1068.269', '8.412']
['127', '056', '127Ba', '118188.308', '1064.063', '8.378']
['127', '057', '127La', '118192.717', '1058.36', '8.334']
['127', '058', '127Ce', '118198.122', '1051.662', '8.281']
['128', '048', '128Cd', '119139.416', '1062.867', '8.304']
['128', '049', '128In', '119131.835', '1069.154', '8.353']
['128', '050', '128Sn', '119122.349', '1077.347', '8.417']
['128', '051', '128Sb', '119120.564', '1077.839', '8.421']
['128', '052', '128Te', '119115.67', '1081.439', '8.449']
['128', '053', '128I', '119116.413', '1079.403', '8.433']
['128', '054', '128Xe', '119113.78', '1080.743', '8.443']
['128', '055', '128Cs', '119117.198', '1076.031', '8.406']
['128', '056', '128Ba', '119117.216', '1074.72', '8.396']
['128', '057', '128La', '119123.477', '1067.166', '8.337']
['128', '058', '128Ce', '119126.062', '1063.287', '8.307']
['128', '059', '128Pr', '119134.754', '1053.302', '8.229']
['129', '049', '129In', '120064.749', '1075.806', '8.34']
['129', '050', '129Sn', '120056.584', '1082.677', '8.393']
['129', '051', '129Sb', '120052.039', '1085.929', '8.418']
['129', '052', '129Te', '120049.153', '1087.522', '8.43']
['129', '053', '129I', '120047.142', '1088.239', '8.436']
['129', '054', '129Xe', '120046.436', '1087.651', '8.431']
['129', '055', '129Cs', '120047.123', '1085.672', '8.416']
['129', '056', '129Ba', '120049.047', '1082.454', '8.391']
['129', '057', '129La', '120052.275', '1077.933', '8.356']
['129', '058', '129Ce', '120056.803', '1072.112', '8.311']
['129', '059', '129Pr', '120062.805', '1064.816', '8.254']
['130', '048', '130Cd', '121008.124', '1073.289', '8.256']
['130', '049', '130In', '120999.293', '1080.827', '8.314']
['130', '050', '130Sn', '120988.533', '1090.294', '8.387']
['130', '051', '130Sb', '120985.869', '1091.664', '8.397']
['130', '052', '130Te', '120980.298', '1095.941', '8.43']
['130', '053', '130I', '120980.207', '1094.74', '8.421']
['130', '054', '130Xe', '120976.746', '1096.907', '8.438']
['130', '055', '130Cs', '120979.217', '1093.143', '8.409']
['130', '056', '130Ba', '120978.344', '1092.722', '8.406']
['130', '057', '130La', '120983.467', '1086.306', '8.356']
['130', '058', '130Ce', '120985.161', '1083.319', '8.333']
['130', '059', '130Pr', '120992.893', '1074.294', '8.264']
['130', '060', '130Nd', '120996.966', '1068.927', '8.223']
['131', '049', '131In', '121932.54', '1087.145', '8.299']
['131', '050', '131Sn', '121922.852', '1095.54', '8.363']
['131', '051', '131Sb', '121917.667', '1099.432', '8.393']
['131', '052', '131Te', '121913.934', '1101.871', '8.411']
['131', '053', '131I', '121911.188', '1103.323', '8.422']
['131', '054', '131Xe', '121909.707', '1103.512', '8.424']
['131', '055', '131Cs', '121909.551', '1102.374', '8.415']
['131', '056', '131Ba', '121910.416', '1100.216', '8.399']
['131', '057', '131La', '121912.82', '1096.519', '8.37']
['131', '058', '131Ce', '121916.358', '1091.687', '8.333']
['131', '059', '131Pr', '121921.287', '1085.465', '8.286']
['131', '060', '131Nd', '121927.287', '1078.172', '8.23']
['132', '049', '132In', '122869.751', '1089.5', '8.254']
['132', '050', '132Sn', '122855.106', '1102.851', '8.355']
['132', '051', '132Sb', '122851.475', '1105.189', '8.373']
['132', '052', '132Te', '122845.456', '1109.915', '8.408']
['132', '053', '132I', '122844.427', '1109.65', '8.406']
['132', '054', '132Xe', '122840.335', '1112.448', '8.428']
['132', '055', '132Cs', '122841.949', '1109.541', '8.406']
['132', '056', '132Ba', '122840.159', '1110.038', '8.409']
['132', '057', '132La', '122844.343', '1104.561', '8.368']
['132', '058', '132Ce', '122845.098', '1102.513', '8.352']
['132', '059', '132Pr', '122851.851', '1094.466', '8.291']
['132', '060', '132Nd', '122855.124', '1089.9', '8.257']
['133', '050', '133Sn', '123792.204', '1105.319', '8.311']
['133', '051', '133Sb', '123783.7', '1112.529', '8.365']
['133', '052', '133Te', '123779.187', '1115.749', '8.389']
['133', '053', '133I', '123775.734', '1117.909', '8.405']
['133', '054', '133Xe', '123773.466', '1118.883', '8.413']
['133', '055', '133Cs', '123772.528', '1118.528', '8.41']
['133', '056', '133Ba', '123772.534', '1117.228', '8.4']
['133', '057', '133La', '123774.083', '1114.386', '8.379']
['133', '058', '133Ce', '123776.643', '1110.533', '8.35']
['133', '059', '133Pr', '123780.617', '1105.266', '8.31']
['133', '060', '133Nd', '123785.714', '1098.875', '8.262']
['133', '061', '133Pm', '123792.123', '1091.173', '8.204']
['134', '050', '134Sn', '124727.848', '1109.24', '8.278']
['134', '051', '134Sb', '124719.967', '1115.827', '8.327']
['134', '052', '134Te', '124711.067', '1123.434', '8.384']
['134', '053', '134I', '124709.043', '1124.165', '8.389']
['134', '054', '134Xe', '124704.479', '1127.435', '8.414']
['134', '055', '134Cs', '124705.202', '1125.419', '8.399']
['134', '056', '134Ba', '124702.632', '1126.696', '8.408']
['134', '057', '134La', '124705.852', '1122.182', '8.374']
['134', '058', '134Ce', '124705.724', '1121.017', '8.366']
['134', '059', '134Pr', '124711.539', '1113.909', '8.313']
['134', '060', '134Nd', '124713.892', '1110.262', '8.286']
['134', '061', '134Pm', '124722.287', '1100.574', '8.213']
['135', '051', '135Sb', '125655.921', '1119.439', '8.292']
['135', '052', '135Te', '125647.29', '1126.776', '8.346']
['135', '053', '135I', '125640.819', '1131.954', '8.385']
['135', '054', '135Xe', '125637.681', '1133.799', '8.399']
['135', '055', '135Cs', '125636.005', '1134.181', '8.401']
['135', '056', '135Ba', '125635.225', '1133.668', '8.398']
['135', '057', '135La', '125635.914', '1131.686', '8.383']
['135', '058', '135Ce', '125637.429', '1128.877', '8.362']
['135', '059', '135Pr', '125640.607', '1124.406', '8.329']
['135', '060', '135Nd', '125644.818', '1118.902', '8.288']
['135', '061', '135Pm', '125650.541', '1111.885', '8.236']
['135', '062', '135Sm', '125657.15', '1103.983', '8.178']
['136', '052', '136Te', '126582.184', '1131.448', '8.319']
['136', '053', '136I', '126576.603', '1135.735', '8.351']
['136', '054', '136Xe', '126569.167', '1141.878', '8.396']
['136', '055', '136Cs', '126568.742', '1141.009', '8.39']
['136', '056', '136Ba', '126565.683', '1142.775', '8.403']
['136', '057', '136La', '126568.019', '1139.146', '8.376']
['136', '058', '136Ce', '126567.08', '1138.792', '8.373']
['136', '059', '136Pr', '126571.71', '1132.868', '8.33']
['136', '060', '136Nd', '126573.327', '1129.958', '8.309']
['136', '061', '136Pm', '126580.815', '1121.177', '8.244']
['136', '062', '136Sm', '126584.693', '1116.005', '8.206']
['137', '052', '137Te', '127518.548', '1134.649', '8.282']
['137', '053', '137I', '127511.094', '1140.81', '8.327']
['137', '054', '137Xe', '127504.707', '1145.903', '8.364']
['137', '055', '137Cs', '127500.029', '1149.288', '8.389']
['137', '056', '137Ba', '127498.343', '1149.681', '8.392']
['137', '057', '137La', '127498.452', '1148.278', '8.382']
['137', '058', '137Ce', '127499.163', '1146.274', '8.367']
['137', '059', '137Pr', '127501.354', '1142.79', '8.342']
['137', '060', '137Nd', '127504.44', '1138.41', '8.31']
['137', '061', '137Pm', '127509.436', '1132.121', '8.264']
['137', '062', '137Sm', '127514.968', '1125.296', '8.214']
['138', '053', '138I', '128446.761', '1144.708', '8.295']
['138', '054', '138Xe', '128438.43', '1151.746', '8.346']
['138', '055', '138Cs', '128435.182', '1153.7', '8.36']
['138', '056', '138Ba', '128429.296', '1158.293', '8.393']
['138', '057', '138La', '128430.522', '1155.774', '8.375']
['138', '058', '138Ce', '128428.967', '1156.035', '8.377']
['138', '059', '138Pr', '128432.893', '1150.816', '8.339']
['138', '060', '138Nd', '128433.496', '1148.92', '8.326']
['138', '061', '138Pm', '128440.063', '1141.059', '8.269']
['138', '062', '138Sm', '128442.994', '1136.835', '8.238']
['138', '063', '138Eu', '128452.231', '1126.305', '8.162']
['139', '053', '139I', '129381.745', '1149.289', '8.268']
['139', '054', '139Xe', '129374.43', '1155.311', '8.312']
['139', '055', '139Cs', '129368.862', '1159.586', '8.342']
['139', '056', '139Ba', '129364.138', '1163.016', '8.367']
['139', '057', '139La', '129361.309', '1164.551', '8.378']
['139', '058', '139Ce', '129361.078', '1163.49', '8.37']
['139', '059', '139Pr', '129362.696', '1160.578', '8.349']
['139', '060', '139Nd', '129365.016', '1156.965', '8.323']
['139', '061', '139Pm', '129369.001', '1151.687', '8.286']
['139', '062', '139Sm', '129373.606', '1145.788', '8.243']
['139', '063', '139Eu', '129380.077', '1138.024', '8.187']
['140', '054', '140Xe', '130308.578', '1160.728', '8.291']
['140', '055', '140Cs', '130304.006', '1164.007', '8.314']
['140', '056', '140Ba', '130297.275', '1169.445', '8.353']
['140', '057', '140La', '130295.714', '1169.712', '8.355']
['140', '058', '140Ce', '130291.441', '1172.692', '8.376']
['140', '059', '140Pr', '130294.318', '1168.522', '8.347']
['140', '060', '140Nd', '130294.25', '1167.296', '8.338']
['140', '061', '140Pm', '130299.781', '1160.472', '8.289']
['140', '062', '140Sm', '130302.024', '1156.936', '8.264']
['140', '063', '140Eu', '130309.979', '1147.687', '8.198']
['140', '064', '140Gd', '130314.676', '1141.697', '8.155']
['140', '065', '140Tb', '130325.467', '1129.613', '8.069']
['141', '054', '141Xe', '131244.732', '1164.14', '8.256']
['141', '055', '141Cs', '131238.074', '1169.504', '8.294']
['141', '056', '141Ba', '131232.314', '1173.971', '8.326']
['141', '057', '141La', '131228.591', '1176.401', '8.343']
['141', '058', '141Ce', '131225.578', '1178.12', '8.355']
['141', '059', '141Pr', '131224.486', '1177.919', '8.354']
['141', '060', '141Nd', '131225.798', '1175.314', '8.336']
['141', '061', '141Pm', '131228.962', '1170.856', '8.304']
['141', '062', '141Sm', '131233.035', '1165.49', '8.266']
['141', '063', '141Eu', '131238.536', '1158.696', '8.218']
['141', '064', '141Gd', '131244.728', '1151.21', '8.165']
['141', '065', '141Tb', '131252.901', '1141.744', '8.097']
['142', '054', '142Xe', '132179.076', '1169.361', '8.235']
['142', '055', '142Cs', '132173.53', '1173.614', '8.265']
['142', '056', '142Ba', '132165.711', '1180.139', '8.311']
['142', '057', '142La', '132162.988', '1181.569', '8.321']
['142', '058', '142Ce', '132157.973', '1185.29', '8.347']
['142', '059', '142Pr', '132158.208', '1183.762', '8.336']
['142', '060', '142Nd', '132155.535', '1185.142', '8.346']
['142', '061', '142Pm', '132159.822', '1179.562', '8.307']
['142', '062', '142Sm', '132161.475', '1176.615', '8.286']
['142', '063', '142Eu', '132168.637', '1168.16', '8.226']
['142', '064', '142Gd', '132172.486', '1163.018', '8.19']
['143', '055', '143Cs', '133107.868', '1178.841', '8.244']
['143', '056', '143Ba', '133101.092', '1184.324', '8.282']
['143', '057', '143La', '133096.33', '1187.792', '8.306']
['143', '058', '143Ce', '133092.394', '1190.435', '8.325']
['143', '059', '143Pr', '133090.421', '1191.114', '8.329']
['143', '060', '143Nd', '133088.977', '1191.266', '8.331']
['143', '061', '143Pm', '133089.507', '1189.442', '8.318']
['143', '062', '143Sm', '133092.439', '1185.217', '8.288']
['143', '063', '143Eu', '133097.209', '1179.153', '8.246']
['143', '064', '143Gd', '133102.71', '1172.359', '8.198']
['143', '065', '143Tb', '133109.999', '1163.777', '8.138']
['144', '055', '144Cs', '134043.763', '1182.511', '8.212']
['144', '056', '144Ba', '134034.753', '1190.228', '8.265']
['144', '057', '144La', '134031.121', '1192.567', '8.282']
['144', '058', '144Ce', '134025.063', '1197.331', '8.315']
['144', '059', '144Pr', '134024.233', '1196.868', '8.312']
['144', '060', '144Nd', '134020.725', '1199.083', '8.327']
['144', '061', '144Pm', '134022.546', '1195.968', '8.305']
['144', '062', '144Sm', '134021.484', '1195.737', '8.304']
['144', '063', '144Eu', '134027.323', '1188.605', '8.254']
['144', '064', '144Gd', '134030.674', '1183.96', '8.222']
['144', '065', '144Tb', '134039.555', '1173.786', '8.151']
['144', '066', '144Dy', '134044.832', '1167.216', '8.106']
['145', '055', '145Cs', '134978.47', '1187.37', '8.189']
['145', '056', '145Ba', '134970.606', '1193.94', '8.234']
['145', '057', '145La', '134964.515', '1198.738', '8.267']
['145', '058', '145Ce', '134959.894', '1202.066', '8.29']
['145', '059', '145Pr', '134956.851', '1203.815', '8.302']
['145', '060', '145Nd', '134954.535', '1204.838', '8.309']
['145', '061', '145Pm', '134954.187', '1203.893', '8.303']
['145', '062', '145Sm', '134954.292', '1202.494', '8.293']
['145', '063', '145Eu', '134956.441', '1199.052', '8.269']
['145', '064', '145Gd', '134961.001', '1193.199', '8.229']
['145', '065', '145Tb', '134967.537', '1185.369', '8.175']
['145', '066', '145Dy', '134974.616', '1176.997', '8.117']
['146', '055', '146Cs', '135914.401', '1191.004', '8.158']
['146', '056', '146Ba', '135904.51', '1199.602', '8.216']
['146', '057', '146La', '135899.879', '1202.939', '8.239']
['146', '058', '146Ce', '135892.808', '1208.717', '8.279']
['146', '059', '146Pr', '135891.267', '1208.965', '8.281']
['146', '060', '146Nd', '135886.535', '1212.403', '8.304']
['146', '061', '146Pm', '135887.495', '1210.15', '8.289']
['146', '062', '146Sm', '135885.442', '1210.91', '8.294']
['146', '063', '146Eu', '135888.811', '1206.247', '8.262']
['146', '064', '146Gd', '135889.329', '1204.436', '8.25']
['146', '065', '146Tb', '135897.141', '1195.331', '8.187']
['146', '066', '146Dy', '135901.846', '1189.332', '8.146']
['147', '055', '147Cs', '136849.495', '1195.475', '8.132']
['147', '057', '147La', '136833.643', '1208.741', '8.223']
['147', '058', '147Ce', '136827.952', '1213.138', '8.253']
['147', '059', '147Pr', '136824.016', '1215.781', '8.271']
['147', '060', '147Nd', '136820.808', '1217.696', '8.284']
['147', '061', '147Pm', '136819.401', '1217.809', '8.284']
['147', '062', '147Sm', '136818.666', '1217.251', '8.281']
['147', '063', '147Eu', '136819.877', '1214.747', '8.264']
['147', '064', '147Gd', '136821.553', '1211.777', '8.243']
['147', '065', '147Tb', '136825.653', '1206.384', '8.207']
['147', '066', '147Dy', '136831.706', '1199.038', '8.157']
['147', '067', '147Ho', '136839.546', '1189.904', '8.095']
['148', '055', '148Cs', '137785.709', '1198.827', '8.1']
['148', '056', '148Ba', '137774.488', '1208.754', '8.167']
['148', '057', '148La', '137768.857', '1213.092', '8.197']
['148', '058', '148Ce', '137761.085', '1219.571', '8.24']
['148', '059', '148Pr', '137758.434', '1220.928', '8.25']
['148', '060', '148Nd', '137753.041', '1225.028', '8.277']
['148', '061', '148Pm', '137753.071', '1223.705', '8.268']
['148', '062', '148Sm', '137750.09', '1225.392', '8.28']
['148', '063', '148Eu', '137752.619', '1221.57', '8.254']
['148', '064', '148Gd', '137752.134', '1220.761', '8.248']
['148', '065', '148Tb', '137757.359', '1214.243', '8.204']
['148', '066', '148Dy', '137759.529', '1210.78', '8.181']
['148', '067', '148Ho', '137768.857', '1200.159', '8.109']
['149', '058', '149Ce', '138696.27', '1223.951', '8.214']
['149', '059', '149Pr', '138691.399', '1227.529', '8.238']
['149', '060', '149Nd', '138687.567', '1230.067', '8.255']
['149', '061', '149Pm', '138685.366', '1230.975', '8.262']
['149', '062', '149Sm', '138683.784', '1231.263', '8.264']
['149', '063', '149Eu', '138683.968', '1229.786', '8.254']
['149', '064', '149Gd', '138684.771', '1227.69', '8.24']
['149', '065', '149Tb', '138687.897', '1223.271', '8.21']
['149', '066', '149Dy', '138691.167', '1218.707', '8.179']
['149', '067', '149Ho', '138696.683', '1211.898', '8.134']
['149', '068', '149Er', '138704.118', '1203.17', '8.075']
['150', '058', '150Ce', '139629.644', '1230.142', '8.201']
['150', '059', '150Pr', '139625.649', '1232.844', '8.219']
['150', '060', '150Nd', '139619.752', '1237.448', '8.25']
['150', '061', '150Pm', '139619.328', '1236.578', '8.244']
['150', '062', '150Sm', '139615.363', '1239.25', '8.262']
['150', '063', '150Eu', '139617.112', '1236.208', '8.241']
['150', '064', '150Gd', '139615.629', '1236.397', '8.243']
['150', '065', '150Tb', '139619.776', '1230.957', '8.206']
['150', '066', '150Dy', '139621.059', '1228.381', '8.189']
['150', '067', '150Ho', '139627.917', '1220.229', '8.135']
['150', '068', '150Er', '139631.521', '1215.332', '8.102']
['151', '058', '151Ce', '140564.458', '1234.894', '8.178']
['151', '059', '151Pr', '140558.676', '1239.382', '8.208']
['151', '060', '151Nd', '140553.983', '1242.782', '8.23']
['151', '061', '151Pm', '140551.03', '1244.442', '8.241']
['151', '062', '151Sm', '140549.332', '1244.847', '8.244']
['151', '063', '151Eu', '140548.744', '1244.141', '8.239']
['151', '064', '151Gd', '140548.697', '1242.895', '8.231']
['151', '065', '151Tb', '140550.751', '1239.547', '8.209']
['151', '066', '151Dy', '140553.111', '1235.894', '8.185']
['151', '067', '151Ho', '140557.727', '1229.985', '8.146']
['151', '068', '151Er', '140562.582', '1223.836', '8.105']
['151', '069', '151Tm', '140569.555', '1215.57', '8.05']
['151', '070', '151Yb', '140578.286', '1205.546', '7.984']
['152', '059', '152Pr', '141493.131', '1244.493', '8.187']
['152', '060', '152Nd', '141486.272', '1250.058', '8.224']
['152', '061', '152Pm', '141484.657', '1250.38', '8.226']
['152', '062', '152Sm', '141480.639', '1253.104', '8.244']
['152', '063', '152Eu', '141482.003', '1250.448', '8.227']
['152', '064', '152Gd', '141479.672', '1251.485', '8.233']
['152', '065', '152Tb', '141483.155', '1246.709', '8.202']
['152', '066', '152Dy', '141483.24', '1245.33', '8.193']
['152', '067', '152Ho', '141489.245', '1238.032', '8.145']
['152', '068', '152Er', '141491.842', '1234.142', '8.119']
['152', '069', '152Tm', '141500.061', '1224.629', '8.057']
['152', '070', '152Yb', '141505.01', '1218.387', '8.016']
['153', '059', '153Pr', '142426.805', '1250.384', '8.172']
['153', '060', '153Nd', '142420.575', '1255.321', '8.205']
['153', '061', '153Pm', '142416.728', '1257.874', '8.221']
['153', '062', '153Sm', '142414.336', '1258.973', '8.229']
['153', '063', '153Eu', '142413.018', '1258.998', '8.229']
['153', '064', '153Gd', '142412.99', '1257.732', '8.22']
['153', '065', '153Tb', '142414.049', '1255.38', '8.205']
['153', '066', '153Dy', '142415.708', '1252.428', '8.186']
['153', '067', '153Ho', '142419.328', '1247.514', '8.154']
['153', '068', '153Er', '142423.348', '1242.201', '8.119']
['153', '069', '153Tm', '142429.31', '1234.946', '8.072']
['153', '071', '153Lu', '142443.893', '1217.776', '7.959']
['154', '059', '154Pr', '143361.729', '1255.025', '8.15']
['154', '060', '154Nd', '143353.728', '1261.733', '8.193']
['154', '061', '154Pm', '143350.407', '1263.76', '8.206']
['154', '062', '154Sm', '143345.934', '1266.94', '8.227']
['154', '063', '154Eu', '143346.141', '1265.44', '8.217']
['154', '064', '154Gd', '143343.661', '1266.627', '8.225']
['154', '065', '154Tb', '143346.703', '1262.291', '8.197']
['154', '066', '154Dy', '143345.954', '1261.747', '8.193']
['154', '067', '154Ho', '143351.197', '1255.211', '8.151']
['154', '068', '154Er', '143352.718', '1252.396', '8.132']
['154', '069', '154Tm', '143360.39', '1243.431', '8.074']
['154', '070', '154Yb', '143364.374', '1238.154', '8.04']
['155', '061', '155Pm', '144283.431', '1270.302', '8.195']
['155', '062', '155Sm', '144279.693', '1272.747', '8.211']
['155', '063', '155Eu', '144277.555', '1273.592', '8.217']
['155', '064', '155Gd', '144276.791', '1273.062', '8.213']
['155', '065', '155Tb', '144277.103', '1271.456', '8.203']
['155', '066', '155Dy', '144278.686', '1268.58', '8.184']
['155', '067', '155Ho', '144281.295', '1264.678', '8.159']
['155', '068', '155Er', '144284.609', '1260.07', '8.129']
['155', '069', '155Tm', '144289.678', '1253.708', '8.088']
['155', '070', '155Yb', '144295.299', '1246.794', '8.044']
['155', '071', '155Lu', '144302.737', '1238.062', '7.987']
['156', '060', '156Nd', '145221.876', '1272.715', '8.158']
['156', '061', '156Pm', '145217.675', '1275.623', '8.177']
['156', '062', '156Sm', '145212.014', '1279.991', '8.205']
['156', '063', '156Eu', '145210.78', '1279.931', '8.205']
['156', '064', '156Gd', '145207.82', '1281.598', '8.215']
['156', '065', '156Tb', '145209.753', '1278.372', '8.195']
['156', '066', '156Dy', '145208.81', '1278.021', '8.192']
['156', '067', '156Ho', '145213.479', '1272.059', '8.154']
['156', '068', '156Er', '145214.105', '1270.14', '8.142']
['156', '069', '156Tm', '145220.967', '1261.984', '8.09']
['156', '070', '156Yb', '145224.032', '1257.626', '8.062']
['156', '071', '156Lu', '145233.035', '1247.33', '7.996']
['156', '072', '156Hf', '145238.424', '1240.647', '7.953']
['157', '061', '157Pm', '146151.019', '1281.844', '8.165']
['157', '062', '157Sm', '146146.148', '1285.422', '8.187']
['157', '063', '157Eu', '146142.9', '1287.377', '8.2']
['157', '064', '157Gd', '146141.025', '1287.958', '8.204']
['157', '065', '157Tb', '146140.575', '1287.116', '8.198']
['157', '066', '157Dy', '146141.406', '1284.991', '8.185']
['157', '067', '157Ho', '146143.494', '1281.609', '8.163']
['157', '068', '157Er', '146146.392', '1277.418', '8.136']
['157', '069', '157Tm', '146150.592', '1271.925', '8.101']
['157', '070', '157Yb', '146155.348', '1265.875', '8.063']
['157', '071', '157Lu', '146161.796', '1258.134', '8.014']
['157', '073', '157Ta', '146177.627', '1239.716', '7.896']
['158', '061', '158Pm', '147085.793', '1286.636', '8.143']
['158', '062', '158Sm', '147079.162', '1291.973', '8.177']
['158', '063', '158Eu', '147076.651', '1293.191', '8.185']
['158', '064', '158Gd', '147072.653', '1295.896', '8.202']
['158', '065', '158Tb', '147073.362', '1293.894', '8.189']
['158', '066', '158Dy', '147071.916', '1294.046', '8.19']
['158', '067', '158Ho', '147075.626', '1289.043', '8.158']
['158', '068', '158Er', '147076.002', '1287.373', '8.148']
['158', '069', '158Tm', '147082.092', '1279.99', '8.101']
['158', '070', '158Yb', '147084.269', '1276.52', '8.079']
['158', '071', '158Lu', '147092.559', '1266.936', '8.019']
['158', '072', '158Hf', '147097.158', '1261.044', '7.981']
['159', '062', '159Sm', '148013.656', '1297.045', '8.158']
['159', '063', '159Eu', '148009.302', '1300.105', '8.177']
['159', '064', '159Gd', '148006.276', '1301.839', '8.188']
['159', '065', '159Tb', '148004.794', '1302.027', '8.189']
['159', '066', '159Dy', '148004.649', '1300.879', '8.182']
['159', '067', '159Ho', '148005.975', '1298.259', '8.165']
['159', '068', '159Er', '148008.233', '1294.708', '8.143']
['159', '069', '159Tm', '148011.719', '1289.928', '8.113']
['159', '070', '159Yb', '148015.935', '1284.419', '8.078']
['159', '071', '159Lu', '148021.557', '1277.504', '8.035']
['159', '072', '159Hf', '148027.902', '1269.865', '7.987']
['159', '073', '159Ta', '148035.797', '1260.677', '7.929']
['160', '064', '160Gd', '148938.39', '1309.29', '8.183']
['160', '065', '160Tb', '148937.984', '1308.402', '8.178']
['160', '066', '160Dy', '148935.638', '1309.455', '8.184']
['160', '067', '160Ho', '148938.417', '1305.382', '8.159']
['160', '068', '160Er', '148938.236', '1304.27', '8.152']
['160', '069', '160Tm', '148943.483', '1297.73', '8.111']
['160', '070', '160Yb', '148945.102', '1294.817', '8.093']
['160', '071', '160Lu', '148952.491', '1286.135', '8.038']
['160', '072', '160Hf', '148956.313', '1281.02', '8.006']
['160', '073', '160Ta', '148965.859', '1270.18', '7.939']
['160', '074', '160W', '148971.868', '1262.878', '7.893']
['161', '064', '161Gd', '149872.319', '1314.925', '8.167']
['161', '065', '161Tb', '149869.853', '1316.099', '8.175']
['161', '066', '161Dy', '149868.749', '1315.909', '8.173']
['161', '067', '161Ho', '149869.096', '1314.269', '8.163']
['161', '068', '161Er', '149870.579', '1311.492', '8.146']
['161', '069', '161Tm', '149873.378', '1307.4', '8.12']
['161', '070', '161Yb', '149876.922', '1302.563', '8.09']
['161', '071', '161Lu', '149881.693', '1296.498', '8.053']
['161', '072', '161Hf', '149887.425', '1289.473', '8.009']
['161', '075', '161Re', '149911.331', '1261.687', '7.837']
['162', '064', '162Gd', '150805.039', '1321.771', '8.159']
['162', '065', '162Tb', '150803.135', '1322.382', '8.163']
['162', '066', '162Dy', '150800.117', '1324.106', '8.173']
['162', '067', '162Ho', '150801.746', '1321.184', '8.155']
['162', '068', '162Er', '150800.939', '1320.698', '8.152']
['162', '069', '162Tm', '150805.287', '1315.056', '8.118']
['162', '070', '162Yb', '150806.428', '1312.622', '8.103']
['162', '071', '162Lu', '150812.909', '1304.848', '8.055']
['162', '072', '162Hf', '150816.065', '1300.398', '8.027']
['162', '073', '162Ta', '150824.947', '1290.223', '7.964']
['162', '074', '162W', '150830.214', '1283.663', '7.924']
['163', '065', '163Tb', '151735.708', '1329.374', '8.156']
['163', '066', '163Dy', '151733.412', '1330.377', '8.162']
['163', '067', '163Ho', '151732.903', '1329.592', '8.157']
['163', '068', '163Er', '151733.602', '1327.6', '8.145']
['163', '069', '163Tm', '151735.53', '1324.379', '8.125']
['163', '070', '163Yb', '151738.45', '1320.165', '8.099']
['163', '071', '163Lu', '151742.452', '1314.87', '8.067']
['163', '072', '163Hf', '151747.446', '1308.583', '8.028']
['163', '073', '163Ta', '151753.681', '1301.054', '7.982']
['163', '074', '163W', '151760.8', '1292.642', '7.93']
['163', '075', '163Re', '151769.192', '1282.957', '7.871']
['164', '065', '164Tb', '152669.723', '1334.924', '8.14']
['164', '066', '164Dy', '152665.319', '1338.035', '8.159']
['164', '067', '164Ho', '152665.794', '1336.267', '8.148']
['164', '068', '164Er', '152664.32', '1336.447', '8.149']
['164', '069', '164Tm', '152667.871', '1331.603', '8.12']
['164', '070', '164Yb', '152668.225', '1329.956', '8.109']
['164', '071', '164Lu', '152674.095', '1322.792', '8.066']
['164', '072', '164Hf', '152676.404', '1319.19', '8.044']
['164', '073', '164Ta', '152684.432', '1309.869', '7.987']
['164', '074', '164W', '152688.97', '1304.037', '7.951']
['164', '076', '164Os', '152705.722', '1284.699', '7.834']
['165', '066', '165Dy', '153599.168', '1343.751', '8.144']
['165', '067', '165Ho', '153597.371', '1344.256', '8.147']
['165', '068', '165Er', '153597.236', '1343.097', '8.14']
['165', '069', '165Tm', '153598.317', '1340.722', '8.126']
['165', '070', '165Yb', '153600.455', '1337.291', '8.105']
['165', '071', '165Lu', '153603.789', '1332.664', '8.077']
['165', '072', '165Hf', '153608.084', '1327.075', '8.043']
['165', '073', '165Ta', '153613.354', '1320.512', '8.003']
['165', '074', '165W', '153619.836', '1312.737', '7.956']
['165', '075', '165Re', '153627.53', '1303.749', '7.902']
['166', '065', '166Tb', '154537.031', '1346.747', '8.113']
['166', '066', '166Dy', '154531.69', '1350.795', '8.137']
['166', '067', '166Ho', '154530.692', '1350.499', '8.136']
['166', '068', '166Er', '154528.327', '1351.572', '8.142']
['166', '069', '166Tm', '154530.853', '1347.752', '8.119']
['166', '070', '166Yb', '154530.648', '1346.663', '8.112']
['166', '071', '166Lu', '154535.704', '1340.314', '8.074']
['166', '072', '166Hf', '154537.355', '1337.37', '8.056']
['166', '073', '166Ta', '154544.605', '1328.826', '8.005']
['166', '074', '166W', '154548.3', '1323.838', '7.975']
['166', '076', '166Os', '154563.732', '1305.819', '7.866']
['167', '066', '167Dy', '155465.834', '1356.216', '8.121']
['167', '067', '167Ho', '155462.976', '1357.781', '8.13']
['167', '068', '167Er', '155461.456', '1358.008', '8.132']
['167', '069', '167Tm', '155461.693', '1356.477', '8.123']
['167', '070', '167Yb', '155463.136', '1353.741', '8.106']
['167', '071', '167Lu', '155465.719', '1349.864', '8.083']
['167', '072', '167Hf', '155469.24', '1345.05', '8.054']
['167', '073', '167Ta', '155473.846', '1339.151', '8.019']
['167', '074', '167W', '155479.597', '1332.106', '7.977']
['167', '076', '167Os', '155494.164', '1314.953', '7.874']
['167', '077', '167Ir', '155503.074', '1304.749', '7.813']
['168', '066', '168Dy', '156398.708', '1362.907', '8.113']
['168', '067', '168Ho', '156396.687', '1363.635', '8.117']
['168', '068', '168Er', '156393.25', '1365.779', '8.13']
['168', '069', '168Tm', '156394.418', '1363.318', '8.115']
['168', '070', '168Yb', '156393.649', '1362.793', '8.112']
['168', '071', '168Lu', '156397.653', '1357.496', '8.08']
['168', '072', '168Hf', '156398.841', '1355.014', '8.066']
['168', '073', '168Ta', '156405.297', '1347.265', '8.019']
['168', '074', '168W', '156408.29', '1342.979', '7.994']
['168', '075', '168Re', '156416.879', '1333.096', '7.935']
['168', '076', '168Os', '156422.167', '1326.515', '7.896']
['168', '078', '168Pt', '156440.096', '1305.999', '7.774']
['169', '066', '169Dy', '157333.162', '1368.019', '8.095']
['169', '067', '169Ho', '157329.448', '1370.439', '8.109']
['169', '068', '169Er', '157326.812', '1371.783', '8.117']
['169', '069', '169Tm', '157325.949', '1371.352', '8.115']
['169', '070', '169Yb', '157326.348', '1369.659', '8.104']
['169', '071', '169Lu', '157328.13', '1366.584', '8.086']
['169', '072', '169Hf', '157330.979', '1362.442', '8.062']
['169', '073', '169Ta', '157334.895', '1357.232', '8.031']
['169', '074', '169W', '157339.756', '1351.078', '7.995']
['169', '075', '169Re', '157345.777', '1343.764', '7.951']
['169', '076', '169Os', '157352.931', '1335.316', '7.901']
['169', '077', '169Ir', '157361.06', '1325.894', '7.846']
['170', '067', '170Ho', '158263.505', '1375.948', '8.094']
['170', '068', '170Er', '158259.12', '1379.04', '8.112']
['170', '069', '170Tm', '158258.923', '1377.944', '8.106']
['170', '070', '170Yb', '158257.443', '1378.13', '8.107']
['170', '071', '170Lu', '158260.391', '1373.888', '8.082']
['170', '072', '170Hf', '158260.936', '1372.05', '8.071']
['170', '073', '170Ta', '158266.541', '1365.152', '8.03']
['170', '074', '170W', '158268.875', '1361.524', '8.009']
['170', '075', '170Re', '158276.739', '1352.367', '7.955']
['170', '076', '170Os', '158281.218', '1346.595', '7.921']
['170', '078', '170Pt', '158297.818', '1327.408', '7.808']
['171', '067', '171Ho', '159196.719', '1382.299', '8.084']
['171', '068', '171Er', '159193.003', '1384.721', '8.098']
['171', '069', '171Tm', '159191.002', '1385.43', '8.102']
['171', '070', '171Yb', '159190.394', '1384.744', '8.098']
['171', '071', '171Lu', '159191.362', '1382.483', '8.085']
['171', '072', '171Hf', '159193.253', '1379.298', '8.066']
['171', '073', '171Ta', '159196.453', '1374.805', '8.04']
['171', '074', '171W', '159200.576', '1369.389', '8.008']
['171', '075', '171Re', '159205.901', '1362.77', '7.969']
['171', '076', '171Os', '159212.347', '1355.031', '7.924']
['171', '077', '171Ir', '159219.699', '1346.386', '7.874']
['171', '078', '171Pt', '159228.148', '1336.643', '7.817']
['171', '079', '171Au', '159237.542', '1325.956', '7.754']
['172', '068', '172Er', '160125.733', '1391.557', '8.09']
['172', '069', '172Tm', '160124.331', '1391.666', '8.091']
['172', '070', '172Yb', '160121.94', '1392.764', '8.097']
['172', '071', '172Lu', '160123.948', '1389.462', '8.078']
['172', '072', '172Hf', '160123.774', '1388.343', '8.072']
['172', '073', '172Ta', '160128.337', '1382.486', '8.038']
['172', '074', '172W', '160130.059', '1379.471', '8.02']
['172', '075', '172Re', '160137.125', '1371.112', '7.972']
['172', '076', '172Os', '160140.896', '1366.047', '7.942']
['172', '078', '172Pt', '160156.011', '1348.346', '7.839']
['172', '080', '172Hg', '160175', '1326.77', '7.714']
['173', '069', '173Tm', '161056.946', '1398.616', '8.084']
['173', '070', '173Yb', '161055.138', '1399.131', '8.087']
['173', '071', '173Lu', '161055.298', '1397.678', '8.079']
['173', '072', '173Hf', '161056.26', '1395.422', '8.066']
['173', '073', '173Ta', '161058.764', '1391.625', '8.044']
['173', '074', '173W', '161061.923', '1387.172', '8.018']
['173', '075', '173Re', '161066.585', '1381.217', '7.984']
['173', '076', '173Os', '161072.19', '1374.319', '7.944']
['173', '077', '173Ir', '161078.845', '1366.37', '7.898']
['173', '078', '173Pt', '161086.666', '1357.256', '7.845']
['173', '079', '173Au', '161095.275', '1347.354', '7.788']
['174', '069', '174Tm', '161990.829', '1404.298', '8.071']
['174', '070', '174Yb', '161987.239', '1406.595', '8.084']
['174', '071', '174Lu', '161988.102', '1404.439', '8.071']
['174', '072', '174Hf', '161987.32', '1403.928', '8.069']
['174', '073', '174Ta', '161990.914', '1399.04', '8.04']
['174', '074', '174W', '161991.917', '1396.744', '8.027']
['174', '075', '174Re', '161997.96', '1389.407', '7.985']
['174', '076', '174Os', '162001.126', '1384.948', '7.959']
['174', '077', '174Ir', '162009.742', '1375.039', '7.903']
['174', '078', '174Pt', '162014.781', '1368.706', '7.866']
['174', '080', '174Hg', '162032.431', '1348.47', '7.75']
['175', '069', '175Tm', '162923.873', '1410.819', '8.062']
['175', '070', '175Yb', '162920.982', '1412.418', '8.071']
['175', '071', '175Lu', '162920.001', '1412.106', '8.069']
['175', '072', '175Hf', '162920.177', '1410.636', '8.061']
['175', '073', '175Ta', '162921.74', '1407.779', '8.044']
['175', '074', '175W', '162924.005', '1404.221', '8.024']
['175', '075', '175Re', '162927.839', '1399.093', '7.995']
['175', '076', '175Os', '162932.511', '1393.128', '7.961']
['175', '077', '175Ir', '162938.676', '1385.67', '7.918']
['175', '078', '175Pt', '162945.904', '1377.148', '7.869']
['175', '079', '175Au', '162953.643', '1368.116', '7.818']
['175', '080', '175Hg', '162962.582', '1357.884', '7.759']
['176', '069', '176Tm', '163858.317', '1415.941', '8.045']
['176', '070', '176Yb', '163853.682', '1419.283', '8.064']
['176', '071', '176Lu', '163853.278', '1418.394', '8.059']
['176', '072', '176Hf', '163851.577', '1418.801', '8.061']
['176', '073', '176Ta', '163854.273', '1414.811', '8.039']
['176', '074', '176W', '163854.49', '1413.301', '8.03']
['176', '075', '176Re', '163859.558', '1406.94', '7.994']
['176', '076', '176Os', '163862.012', '1403.192', '7.973']
['176', '077', '176Ir', '163869.738', '1394.173', '7.921']
['176', '078', '176Pt', '163874.16', '1388.458', '7.889']
['176', '080', '176Hg', '163890.287', '1369.744', '7.783']
['177', '070', '177Yb', '164787.681', '1424.849', '8.05']
['177', '071', '177Lu', '164785.77', '1425.466', '8.053']
['177', '072', '177Hf', '164784.759', '1425.185', '8.052']
['177', '073', '177Ta', '164785.413', '1423.237', '8.041']
['177', '074', '177W', '164786.924', '1420.432', '8.025']
['177', '075', '177Re', '164789.846', '1416.217', '8.001']
['177', '076', '177Os', '164793.654', '1411.116', '7.972']
['177', '077', '177Ir', '164799.046', '1404.43', '7.935']
['177', '078', '177Pt', '164805.212', '1396.971', '7.892']
['177', '079', '177Au', '164812.521', '1388.369', '7.844']
['177', '080', '177Hg', '164820.78', '1378.816', '7.79']
['177', '081', '177Tl', '164829.721', '1368.582', '7.732']
['178', '070', '178Yb', '165720.466', '1431.629', '8.043']
['178', '071', '178Lu', '165719.31', '1431.492', '8.042']
['178', '072', '178Hf', '165716.698', '1432.811', '8.049']
['178', '073', '178Ta', '165718.124', '1430.091', '8.034']
['178', '074', '178W', '165717.704', '1429.218', '8.029']
['178', '075', '178Re', '165721.956', '1423.672', '7.998']
['178', '076', '178Os', '165723.552', '1420.783', '7.982']
['178', '077', '178Ir', '165730.335', '1412.707', '7.937']
['178', '078', '178Pt', '165734.078', '1407.67', '7.908']
['178', '079', '178Au', '165743.235', '1397.22', '7.85']
['178', '080', '178Hg', '165748.737', '1390.425', '7.811']
['178', '082', '178Pb', '165767.6', '1368.975', '7.691']
['179', '071', '179Lu', '166652.083', '1438.284', '8.035']
['179', '072', '179Hf', '166650.165', '1438.91', '8.039']
['179', '073', '179Ta', '166649.759', '1438.022', '8.034']
['179', '074', '179W', '166650.31', '1436.177', '8.023']
['179', '075', '179Re', '166652.517', '1432.677', '8.004']
['179', '076', '179Os', '166655.572', '1428.328', '7.979']
['179', '077', '179Ir', '166660.004', '1422.603', '7.948']
['179', '078', '179Pt', '166665.306', '1416.008', '7.911']
['179', '079', '179Au', '166672.107', '1407.913', '7.865']
['179', '080', '179Hg', '166679.626', '1399.101', '7.816']
['179', '081', '179Tl', '166687.737', '1389.697', '7.764']
['180', '071', '180Lu', '167585.951', '1443.981', '8.022']
['180', '072', '180Hf', '167582.342', '1446.297', '8.035']
['180', '073', '180Ta', '167582.683', '1444.663', '8.026']
['180', '074', '180W', '167581.464', '1444.588', '8.025']
['180', '075', '180Re', '167584.757', '1440.002', '8']
['180', '076', '180Os', '167585.727', '1437.739', '7.987']
['180', '077', '180Ir', '167591.597', '1430.575', '7.948']
['180', '078', '180Pt', '167594.628', '1426.251', '7.924']
['180', '079', '180Au', '167602.957', '1416.629', '7.87']
['180', '080', '180Hg', '167607.797', '1410.495', '7.836']
['180', '082', '180Pb', '167625.081', '1390.625', '7.726']
['181', '072', '181Hf', '168516.213', '1451.992', '8.022']
['181', '073', '181Ta', '168514.672', '1452.24', '8.023']
['181', '074', '181W', '168514.348', '1451.27', '8.018']
['181', '075', '181Re', '168515.58', '1448.744', '8.004']
['181', '076', '181Os', '168518.03', '1445.001', '7.983']
['181', '077', '181Ir', '168521.597', '1440.141', '7.957']
['181', '078', '181Pt', '168526.183', '1434.261', '7.924']
['181', '079', '181Au', '168532.176', '1426.975', '7.884']
['181', '080', '181Hg', '168538.875', '1418.983', '7.84']
['181', '081', '181Tl', '168546.224', '1410.34', '7.792']
['181', '082', '181Pb', '168555.374', '1399.897', '7.734']
['182', '072', '182Hf', '169449.059', '1458.711', '8.015']
['182', '073', '182Ta', '169448.174', '1458.303', '8.013']
['182', '074', '182W', '169445.849', '1459.335', '8.018']
['182', '075', '182Re', '169448.135', '1455.755', '7.999']
['182', '076', '182Os', '169448.465', '1454.131', '7.99']
['182', '077', '182Ir', '169453.511', '1447.792', '7.955']
['182', '078', '182Pt', '169455.883', '1444.127', '7.935']
['182', '079', '182Au', '169463.24', '1435.476', '7.887']
['182', '080', '182Hg', '169467.454', '1429.969', '7.857']
['182', '081', '182Tl', '169477.169', '1418.961', '7.796']
['182', '082', '182Pb', '169483.182', '1411.654', '7.756']
['183', '072', '183Hf', '170383.322', '1464.013', '8']
['183', '073', '183Ta', '170380.805', '1465.237', '8.007']
['183', '074', '183W', '170379.223', '1465.525', '8.008']
['183', '075', '183Re', '170379.268', '1464.187', '8.001']
['183', '076', '183Os', '170380.908', '1461.254', '7.985']
['183', '077', '183Ir', '170383.86', '1457.008', '7.962']
['183', '078', '183Pt', '170387.774', '1451.801', '7.933']
['183', '079', '183Au', '170392.848', '1445.434', '7.899']
['183', '080', '183Hg', '170398.724', '1438.264', '7.859']
['183', '081', '183Tl', '170405.426', '1430.269', '7.816']
['183', '082', '183Pb', '170413.933', '1420.469', '7.762']
['184', '072', '184Hf', '171316.606', '1470.294', '7.991']
['184', '073', '184Ta', '171314.754', '1470.853', '7.994']
['184', '074', '184W', '171311.377', '1472.937', '8.005']
['184', '075', '184Re', '171312.346', '1470.674', '7.993']
['184', '076', '184Os', '171311.806', '1469.921', '7.989']
['184', '077', '184Ir', '171315.94', '1464.494', '7.959']
['184', '078', '184Pt', '171317.708', '1461.432', '7.943']
['184', '079', '184Au', '171324.21', '1453.637', '7.9']
['184', '080', '184Hg', '171327.669', '1448.885', '7.874']
['184', '081', '184Tl', '171336.617', '1438.643', '7.819']
['184', '082', '184Pb', '171341.951', '1432.016', '7.783']
['185', '073', '185Ta', '172247.693', '1477.479', '7.986']
['185', '074', '185W', '172245.189', '1478.691', '7.993']
['185', '075', '185Re', '172244.245', '1478.341', '7.991']
['185', '076', '185Os', '172244.747', '1476.546', '7.981']
['185', '077', '185Ir', '172246.709', '1473.29', '7.964']
['185', '078', '185Pt', '172249.854', '1468.852', '7.94']
['185', '079', '185Au', '172254.156', '1463.256', '7.909']
['185', '080', '185Hg', '172259.336', '1456.783', '7.875']
['185', '081', '185Tl', '172265.241', '1449.585', '7.836']
['185', '082', '185Pb', '172272.949', '1440.583', '7.787']
['186', '073', '186Ta', '173181.973', '1482.765', '7.972']
['186', '074', '186W', '173177.563', '1485.882', '7.989']
['186', '075', '186Re', '173177.631', '1484.52', '7.981']
['186', '076', '186Os', '173176.051', '1484.807', '7.983']
['186', '077', '186Ir', '173179.367', '1480.198', '7.958']
['186', '078', '186Pt', '173180.165', '1478.107', '7.947']
['186', '079', '186Au', '173185.803', '1471.176', '7.91']
['186', '080', '186Hg', '173188.468', '1467.217', '7.888']
['186', '081', '186Tl', '173196.306', '1458.086', '7.839']
['186', '082', '186Pb', '173201.304', '1451.795', '7.805']
['186', '083', '186Bi', '173212.304', '1439.501', '7.739']
['187', '074', '187W', '174111.662', '1491.348', '7.975']
['187', '075', '187Re', '174109.84', '1491.877', '7.978']
['187', '076', '187Os', '174109.326', '1491.097', '7.974']
['187', '077', '187Ir', '174110.318', '1488.813', '7.962']
['187', '078', '187Pt', '174112.81', '1485.027', '7.941']
['187', '079', '187Au', '174116.007', '1480.537', '7.917']
['187', '080', '187Hg', '174120.383', '1474.868', '7.887']
['187', '081', '187Tl', '174125.546', '1468.411', '7.852']
['187', '082', '187Pb', '174132.499', '1460.165', '7.808']
['187', '083', '187Bi', '174140.595', '1450.776', '7.758']
['188', '074', '188W', '175044.394', '1498.182', '7.969']
['188', '075', '188Re', '175043.533', '1497.749', '7.967']
['188', '076', '188Os', '175040.902', '1499.087', '7.974']
['188', '077', '188Ir', '175043.2', '1495.496', '7.955']
['188', '078', '188Pt', '175043.194', '1494.209', '7.948']
['188', '079', '188Au', '175048.205', '1487.904', '7.914']
['188', '080', '188Hg', '175049.793', '1485.023', '7.899']
['188', '081', '188Tl', '175057.134', '1476.389', '7.853']
['188', '082', '188Pb', '175061.158', '1471.071', '7.825']
['188', '083', '188Bi', '175071.262', '1459.674', '7.764']
['188', '084', '188Po', '175077.413', '1452.23', '7.725']
['189', '074', '189W', '175979.075', '1503.066', '7.953']
['189', '075', '189Re', '175976.066', '1504.782', '7.962']
['189', '076', '189Os', '175974.547', '1505.007', '7.963']
['189', '077', '189Ir', '175974.569', '1503.692', '7.956']
['189', '078', '189Pt', '175976.028', '1500.94', '7.941']
['189', '079', '189Au', '175978.418', '1497.257', '7.922']
['189', '080', '189Hg', '175981.859', '1492.522', '7.897']
['189', '081', '189Tl', '175986.376', '1486.712', '7.866']
['189', '082', '189Pb', '175992.587', '1479.208', '7.826']
['189', '083', '189Bi', '175999.896', '1470.605', '7.781']
['189', '084', '189Po', '176008.03', '1461.178', '7.731']
['190', '074', '190W', '176911.749', '1509.958', '7.947']
['190', '075', '190Re', '176909.968', '1510.445', '7.95']
['190', '076', '190Os', '176906.32', '1512.799', '7.962']
['190', '077', '190Ir', '176907.764', '1510.062', '7.948']
['190', '078', '190Pt', '176906.682', '1509.851', '7.947']
['190', '079', '190Au', '176910.613', '1504.627', '7.919']
['190', '080', '190Hg', '176911.613', '1502.334', '7.907']
['190', '081', '190Tl', '176918.142', '1494.511', '7.866']
['190', '082', '190Pb', '176921.544', '1489.816', '7.841']
['190', '083', '190Bi', '176930.55', '1479.517', '7.787']
['190', '084', '190Po', '176936.376', '1472.397', '7.749']
['191', '075', '191Re', '177842.683', '1517.296', '7.944']
['191', '076', '191Os', '177840.127', '1518.558', '7.951']
['191', '077', '191Ir', '177839.303', '1518.088', '7.948']
['191', '078', '191Pt', '177839.801', '1516.298', '7.939']
['191', '079', '191Au', '177841.178', '1513.627', '7.925']
['191', '080', '191Hg', '177843.884', '1509.628', '7.904']
['191', '081', '191Tl', '177847.685', '1504.534', '7.877']
['191', '082', '191Pb', '177853.205', '1497.72', '7.841']
['191', '083', '191Bi', '177859.704', '1489.928', '7.801']
['191', '084', '191Po', '177867.379', '1480.96', '7.754']
['192', '076', '192Os', '178772.134', '1526.116', '7.949']
['192', '077', '192Ir', '178772.67', '1524.286', '7.939']
['192', '078', '192Pt', '178770.7', '1524.964', '7.943']
['192', '079', '192Au', '178773.705', '1520.666', '7.92']
['192', '080', '192Hg', '178773.96', '1519.117', '7.912']
['192', '081', '192Tl', '178779.59', '1512.194', '7.876']
['192', '082', '192Pb', '178782.393', '1508.098', '7.855']
['192', '083', '192Bi', '178790.888', '1498.309', '7.804']
['192', '084', '192Po', '178795.856', '1492.048', '7.771']
['193', '076', '193Os', '179706.116', '1531.699', '7.936']
['193', '077', '193Ir', '179704.464', '1532.058', '7.938']
['193', '078', '193Pt', '179704.01', '1531.219', '7.934']
['193', '079', '193Au', '179704.582', '1529.354', '7.924']
['193', '080', '193Hg', '179706.414', '1526.229', '7.908']
['193', '081', '193Tl', '179709.634', '1521.715', '7.885']
['193', '082', '193Pb', '179714.253', '1515.803', '7.854']
['193', '083', '193Bi', '179720.059', '1508.704', '7.817']
['193', '084', '193Po', '179727.061', '1500.408', '7.774']
['193', '085', '193At', '179734.76', '1491.416', '7.728']
['194', '076', '194Os', '180638.57', '1538.811', '7.932']
['194', '077', '194Ir', '180637.962', '1538.125', '7.928']
['194', '078', '194Pt', '180635.218', '1539.577', '7.936']
['194', '079', '194Au', '180637.208', '1536.293', '7.919']
['194', '080', '194Hg', '180636.766', '1535.442', '7.915']
['194', '081', '194Tl', '180641.618', '1529.297', '7.883']
['194', '082', '194Pb', '180643.729', '1525.892', '7.865']
['194', '083', '194Bi', '180651.436', '1516.892', '7.819']
['194', '084', '194Po', '180655.91', '1511.125', '7.789']
['194', '085', '194At', '180665.214', '1500.527', '7.735']
['195', '076', '195Os', '181572.807', '1544.139', '7.919']
['195', '077', '195Ir', '181570.296', '1545.357', '7.925']
['195', '078', '195Pt', '181568.678', '1545.682', '7.927']
['195', '079', '195Au', '181568.394', '1544.673', '7.921']
['195', '080', '195Hg', '181569.453', '1542.32', '7.909']
['195', '081', '195Tl', '181571.787', '1538.693', '7.891']
['195', '082', '195Pb', '181575.717', '1533.47', '7.864']
['195', '083', '195Bi', '181580.896', '1526.997', '7.831']
['195', '084', '195Po', '181587.339', '1519.261', '7.791']
['195', '085', '195At', '181594.422', '1510.885', '7.748']
['195', '086', '195Rn', '181602.457', '1501.556', '7.7']
['196', '076', '196Os', '182505.711', '1550.801', '7.912']
['196', '077', '196Ir', '182504.04', '1551.178', '7.914']
['196', '078', '196Pt', '182500.321', '1553.604', '7.927']
['196', '079', '196Au', '182501.318', '1551.314', '7.915']
['196', '080', '196Hg', '182500.12', '1551.218', '7.914']
['196', '081', '196Tl', '182503.939', '1546.106', '7.888']
['196', '082', '196Pb', '182505.564', '1543.188', '7.873']
['196', '083', '196Bi', '182512.405', '1535.053', '7.832']
['196', '084', '196Po', '182516.429', '1529.736', '7.805']
['196', '085', '196At', '182525.472', '1519.4', '7.752']
['196', '086', '196Rn', '182530.851', '1512.727', '7.718']
['197', '077', '197Ir', '183436.706', '1558.078', '7.909']
['197', '078', '197Pt', '183434.04', '1559.45', '7.916']
['197', '079', '197Au', '183432.811', '1559.386', '7.916']
['197', '080', '197Hg', '183432.9', '1558.004', '7.909']
['197', '081', '197Tl', '183434.589', '1555.021', '7.894']
['197', '082', '197Pb', '183437.67', '1550.647', '7.871']
['197', '083', '197Bi', '183442.22', '1544.804', '7.842']
['197', '084', '197Po', '183448.037', '1537.693', '7.806']
['197', '085', '197At', '183454.546', '1529.891', '7.766']
['197', '086', '197Rn', '183461.855', '1521.289', '7.722']
['198', '078', '198Pt', '184366.049', '1567.007', '7.914']
['198', '079', '198Au', '184365.864', '1565.899', '7.909']
['198', '080', '198Hg', '184363.98', '1566.489', '7.912']
['198', '081', '198Tl', '184366.934', '1562.242', '7.89']
['198', '082', '198Pb', '184367.863', '1560.019', '7.879']
['198', '083', '198Bi', '184374.033', '1552.556', '7.841']
['198', '084', '198Po', '184377.418', '1547.878', '7.818']
['198', '085', '198At', '184385.71', '1538.292', '7.769']
['198', '086', '198Rn', '184390.638', '1532.071', '7.738']
['199', '077', '199Ir', '185303.562', '1570.352', '7.891']
['199', '078', '199Pt', '185300.059', '1572.562', '7.902']
['199', '079', '199Au', '185297.845', '1573.483', '7.907']
['199', '080', '199Hg', '185296.882', '1573.153', '7.905']
['199', '081', '199Tl', '185297.859', '1570.882', '7.894']
['199', '082', '199Pb', '185300.179', '1567.269', '7.876']
['199', '083', '199Bi', '185304.098', '1562.056', '7.85']
['199', '084', '199Po', '185309.17', '1555.691', '7.818']
['199', '085', '199At', '185315.054', '1548.514', '7.781']
['199', '086', '199Rn', '185321.843', '1540.431', '7.741']
['199', '087', '199Fr', '185329.612', '1531.369', '7.695']
['200', '078', '200Pt', '186232.342', '1579.844', '7.899']
['200', '079', '200Au', '186231.164', '1579.729', '7.899']
['200', '080', '200Hg', '186228.419', '1581.181', '7.906']
['200', '081', '200Tl', '186230.364', '1577.942', '7.89']
['200', '082', '200Pb', '186230.658', '1576.355', '7.882']
['200', '083', '200Bi', '186236.02', '1569.7', '7.848']
['200', '084', '200Po', '186238.925', '1565.501', '7.828']
['200', '085', '200At', '186246.38', '1556.753', '7.784']
['200', '086', '200Rn', '186250.851', '1550.989', '7.755']
['200', '087', '200Fr', '186260.466', '1540.08', '7.7']
['201', '078', '201Pt', '187166.699', '1585.053', '7.886']
['201', '079', '201Au', '187163.527', '1586.931', '7.895']
['201', '080', '201Hg', '187161.753', '1587.411', '7.898']
['201', '081', '201Tl', '187161.724', '1586.148', '7.891']
['201', '082', '201Pb', '187163.137', '1583.441', '7.878']
['201', '083', '201Bi', '187166.468', '1578.817', '7.855']
['201', '084', '201Po', '187170.848', '1573.144', '7.827']
['201', '085', '201At', '187176.073', '1566.625', '7.794']
['201', '086', '201Rn', '187182.281', '1559.124', '7.757']
['201', '087', '201Fr', '187189.44', '1550.672', '7.715']
['202', '079', '202Au', '188097.022', '1593.002', '7.886']
['202', '080', '202Hg', '188093.565', '1595.165', '7.897']
['202', '081', '202Tl', '188094.417', '1593.02', '7.886']
['202', '082', '202Pb', '188093.955', '1592.189', '7.882']
['202', '083', '202Bi', '188098.645', '1586.205', '7.853']
['202', '084', '202Po', '188100.943', '1582.614', '7.835']
['202', '085', '202At', '188107.765', '1574.499', '7.795']
['202', '086', '202Rn', '188111.57', '1569.4', '7.769']
['202', '087', '202Fr', '188120.474', '1559.203', '7.719']
['202', '088', '202Ra', '188126.033', '1552.351', '7.685']
['203', '079', '203Au', '189029.773', '1599.816', '7.881']
['203', '080', '203Hg', '189027.136', '1601.16', '7.887']
['203', '081', '203Tl', '189026.133', '1600.87', '7.886']
['203', '082', '203Pb', '189026.596', '1599.113', '7.877']
['203', '083', '203Bi', '189029.332', '1595.084', '7.858']
['203', '084', '203Po', '189033.054', '1590.068', '7.833']
['203', '085', '203At', '189037.687', '1584.142', '7.804']
['203', '086', '203Rn', '189043.179', '1577.357', '7.77']
['203', '087', '203Fr', '189049.689', '1569.553', '7.732']
['203', '088', '203Ra', '189056.957', '1560.992', '7.69']
['204', '080', '204Hg', '189959.209', '1608.652', '7.886']
['204', '081', '204Tl', '189959.042', '1607.526', '7.88']
['204', '082', '204Pb', '189957.767', '1607.507', '7.88']
['204', '083', '204Bi', '189961.699', '1602.282', '7.854']
['204', '084', '204Po', '189963.521', '1599.167', '7.839']
['204', '085', '204At', '189969.469', '1591.925', '7.804']
['204', '086', '204Rn', '189972.849', '1587.252', '7.781']
['204', '087', '204Fr', '189980.93', '1577.878', '7.735']
['204', '088', '204Ra', '189985.865', '1571.649', '7.704']
['205', '080', '205Hg', '190893.106', '1614.32', '7.875']
['205', '081', '205Tl', '190891.061', '1615.072', '7.878']
['205', '082', '205Pb', '190890.601', '1614.239', '7.874']
['205', '083', '205Bi', '190892.798', '1610.748', '7.857']
['205', '084', '205Po', '190895.84', '1606.413', '7.836']
['205', '085', '205At', '190899.866', '1601.094', '7.81']
['205', '086', '205Rn', '190904.617', '1595.049', '7.781']
['205', '087', '205Fr', '190910.506', '1587.867', '7.746']
['205', '088', '205Ra', '190917.145', '1579.935', '7.707']
['206', '080', '206Hg', '191825.941', '1621.051', '7.869']
['206', '081', '206Tl', '191824.123', '1621.575', '7.872']
['206', '082', '206Pb', '191822.079', '1622.325', '7.875']
['206', '083', '206Bi', '191825.326', '1617.786', '7.853']
['206', '084', '206Po', '191826.661', '1615.157', '7.841']
['206', '085', '206At', '191831.912', '1608.613', '7.809']
['206', '086', '206Rn', '191834.705', '1604.527', '7.789']
['206', '087', '206Fr', '191842.067', '1595.871', '7.747']
['206', '088', '206Ra', '191846.364', '1590.281', '7.72']
['206', '089', '206Ac', '191855.798', '1579.554', '7.668']
['207', '080', '207Hg', '192762.161', '1624.396', '7.847']
['207', '081', '207Tl', '192756.836', '1628.428', '7.867']
['207', '082', '207Pb', '192754.907', '1629.063', '7.87']
['207', '083', '207Bi', '192756.793', '1625.883', '7.855']
['207', '084', '207Po', '192759.191', '1622.193', '7.837']
['207', '085', '207At', '192762.583', '1617.507', '7.814']
['207', '086', '207Rn', '192766.684', '1612.113', '7.788']
['207', '087', '207Fr', '192771.964', '1605.54', '7.756']
['207', '088', '207Ra', '192777.833', '1598.377', '7.722']
['207', '089', '207Ac', '192784.912', '1590.005', '7.681']
['208', '081', '208Tl', '193692.614', '1632.214', '7.847']
['208', '082', '208Pb', '193687.104', '1636.431', '7.867']
['208', '083', '208Bi', '193689.472', '1632.77', '7.85']
['208', '084', '208Po', '193690.361', '1630.587', '7.839']
['208', '085', '208At', '193694.829', '1624.827', '7.812']
['208', '086', '208Rn', '193697.161', '1621.201', '7.794']
['208', '087', '208Fr', '193703.628', '1613.441', '7.757']
['208', '088', '208Ra', '193707.501', '1608.275', '7.732']
['208', '089', '208Ac', '193716.036', '1598.446', '7.685']
['209', '081', '209Tl', '194627.22', '1637.174', '7.833']
['209', '082', '209Pb', '194622.732', '1640.368', '7.849']
['209', '083', '209Bi', '194621.577', '1640.23', '7.848']
['209', '084', '209Po', '194622.959', '1637.555', '7.835']
['209', '085', '209At', '194625.934', '1633.287', '7.815']
['209', '086', '209Rn', '194629.374', '1628.554', '7.792']
['209', '087', '209Fr', '194634.023', '1622.611', '7.764']
['209', '088', '209Ra', '194639.131', '1616.21', '7.733']
['209', '089', '209Ac', '194645.61', '1608.438', '7.696']
['209', '090', '209Th', '194652.759', '1599.995', '7.655']
['210', '081', '210Tl', '195563.106', '1640.854', '7.814']
['210', '082', '210Pb', '195557.113', '1645.554', '7.836']
['210', '083', '210Bi', '195556.538', '1644.835', '7.833']
['210', '084', '210Po', '195554.866', '1645.214', '7.834']
['210', '085', '210At', '195558.336', '1640.45', '7.812']
['210', '086', '210Rn', '195560.199', '1637.294', '7.797']
['210', '087', '210Fr', '195565.94', '1630.26', '7.763']
['210', '088', '210Ra', '195569.236', '1625.67', '7.741']
['210', '089', '210Ac', '195577.054', '1616.559', '7.698']
['210', '090', '210Th', '195581.796', '1610.524', '7.669']
['211', '082', '211Pb', '196492.843', '1649.388', '7.817']
['211', '083', '211Bi', '196490.966', '1649.972', '7.82']
['211', '084', '211Po', '196489.88', '1649.764', '7.819']
['211', '085', '211At', '196490.155', '1648.197', '7.811']
['211', '086', '211Rn', '196492.535', '1644.523', '7.794']
['211', '087', '211Fr', '196496.622', '1639.143', '7.768']
['211', '088', '211Ra', '196501.105', '1633.367', '7.741']
['211', '089', '211Ac', '196506.958', '1626.22', '7.707']
['211', '090', '211Th', '196513.157', '1618.728', '7.672']
['212', '082', '212Pb', '197427.281', '1654.515', '7.804']
['212', '083', '212Bi', '197426.201', '1654.303', '7.803']
['212', '084', '212Po', '197423.437', '1655.773', '7.81']
['212', '085', '212At', '197424.675', '1653.242', '7.798']
['212', '086', '212Rn', '197424.125', '1652.499', '7.795']
['212', '087', '212Fr', '197428.736', '1646.594', '7.767']
['212', '088', '212Ra', '197431.572', '1642.465', '7.747']
['212', '089', '212Ac', '197438.532', '1634.212', '7.709']
['212', '090', '212Th', '197442.832', '1628.618', '7.682']
['212', '091', '212Pa', '197451.84', '1618.317', '7.634']
['213', '082', '213Pb', '198363.139', '1658.223', '7.785']
['213', '083', '213Bi', '198360.581', '1659.488', '7.791']
['213', '084', '213Po', '198358.648', '1660.128', '7.794']
['213', '085', '213At', '198358.211', '1659.271', '7.79']
['213', '086', '213Rn', '198358.581', '1657.608', '7.782']
['213', '087', '213Fr', '198360.218', '1654.678', '7.768']
['213', '088', '213Ra', '198363.615', '1649.987', '7.746']
['213', '089', '213Ac', '198368.896', '1643.413', '7.716']
['213', '090', '213Th', '198374.355', '1636.661', '7.684']
['213', '091', '213Pa', '198381.384', '1628.338', '7.645']
['214', '082', '214Pb', '199297.636', '1663.292', '7.772']
['214', '083', '214Bi', '199296.106', '1663.528', '7.773']
['214', '084', '214Po', '199292.325', '1666.016', '7.785']
['214', '085', '214At', '199292.904', '1664.144', '7.776']
['214', '086', '214Rn', '199291.453', '1664.301', '7.777']
['214', '087', '214Fr', '199294.304', '1660.157', '7.758']
['214', '088', '214Ra', '199294.852', '1658.316', '7.749']
['214', '089', '214Ac', '199300.669', '1651.205', '7.716']
['214', '090', '214Th', '199304.441', '1646.14', '7.692']
['214', '091', '214Pa', '199312.708', '1636.58', '7.648']
['215', '083', '215Bi', '200230.449', '1668.751', '7.762']
['215', '084', '215Po', '200227.749', '1670.157', '7.768']
['215', '085', '215At', '200226.523', '1670.09', '7.768']
['215', '086', '215Rn', '200226.098', '1669.222', '7.764']
['215', '087', '215Fr', '200227.074', '1666.952', '7.753']
['215', '088', '215Ra', '200228.779', '1663.954', '7.739']
['215', '089', '215Ac', '200231.746', '1659.694', '7.72']
['215', '090', '215Th', '200236.15', '1653.996', '7.693']
['215', '091', '215Pa', '200242.582', '1646.271', '7.657']
['216', '083', '216Bi', '201166.168', '1672.597', '7.744']
['216', '084', '216Po', '201161.567', '1675.905', '7.759']
['216', '085', '216At', '201161.529', '1674.649', '7.753']
['216', '086', '216Rn', '201159.017', '1675.868', '7.759']
['216', '087', '216Fr', '201161.229', '1672.362', '7.742']
['216', '088', '216Ra', '201161.03', '1671.268', '7.737']
['216', '089', '216Ac', '201165.351', '1665.654', '7.711']
['216', '090', '216Th', '201167.021', '1662.69', '7.698']
['216', '091', '216Pa', '201174.006', '1654.412', '7.659']
['217', '084', '217Po', '202097.178', '1679.859', '7.741']
['217', '085', '217At', '202095.162', '1680.581', '7.745']
['217', '086', '217Rn', '202093.914', '1680.536', '7.744']
['217', '087', '217Fr', '202094.059', '1679.098', '7.738']
['217', '088', '217Ra', '202095.12', '1676.743', '7.727']
['217', '089', '217Ac', '202097.429', '1673.141', '7.71']
['217', '090', '217Th', '202100.427', '1668.85', '7.691']
['217', '091', '217Pa', '202104.77', '1663.213', '7.665']
['217', '092', '217U', '202109.889', '1656.801', '7.635']
['218', '084', '218Po', '203031.129', '1685.473', '7.732']
['218', '085', '218At', '203030.359', '1684.95', '7.729']
['218', '086', '218Rn', '203026.966', '1687.049', '7.739']
['218', '087', '218Fr', '203028.297', '1684.425', '7.727']
['218', '088', '218Ra', '203027.378', '1684.051', '7.725']
['218', '089', '218Ac', '203031.056', '1679.079', '7.702']
['218', '090', '218Th', '203032.079', '1676.763', '7.692']
['218', '091', '218Pa', '203037.863', '1669.686', '7.659']
['218', '092', '218U', '203040.603', '1665.652', '7.641']
['219', '085', '219At', '203964.151', '1690.723', '7.72']
['219', '086', '219Rn', '203962.074', '1691.507', '7.724']
['219', '087', '219Fr', '203961.35', '1690.937', '7.721']
['219', '088', '219Ra', '203961.615', '1689.379', '7.714']
['219', '089', '219Ac', '203963.28', '1686.421', '7.701']
['219', '090', '219Th', '203965.669', '1682.738', '7.684']
['219', '091', '219Pa', '203969.208', '1677.906', '7.662']
['219', '092', '219U', '203973.387', '1672.434', '7.637']
['220', '085', '220At', '204899.598', '1694.841', '7.704']
['220', '086', '220Rn', '204895.35', '1697.796', '7.717']
['220', '087', '220Fr', '204895.709', '1696.144', '7.71']
['220', '088', '220Ra', '204893.988', '1696.571', '7.712']
['220', '089', '220Ac', '204896.956', '1692.31', '7.692']
['220', '090', '220Th', '204897.362', '1690.611', '7.685']
['220', '091', '220Pa', '204902.562', '1684.117', '7.655']
['221', '086', '221Rn', '205830.703', '1702.008', '7.701']
['221', '087', '221Fr', '205828.998', '1702.42', '7.703']
['221', '088', '221Ra', '205828.173', '1701.952', '7.701']
['221', '089', '221Ac', '205829.218', '1699.613', '7.691']
['221', '090', '221Th', '205831.125', '1696.413', '7.676']
['221', '091', '221Pa', '205834.056', '1692.189', '7.657']
['222', '086', '222Rn', '206764.099', '1708.178', '7.694']
['222', '087', '222Fr', '206763.563', '1707.42', '7.691']
['222', '088', '222Ra', '206761.024', '1708.666', '7.697']
['222', '089', '222Ac', '206762.813', '1705.584', '7.683']
['222', '090', '222Th', '206762.884', '1704.219', '7.677']
['223', '087', '223Fr', '207697.092', '1713.457', '7.684']
['223', '088', '223Ra', '207695.432', '1713.824', '7.685']
['223', '089', '223Ac', '207695.512', '1712.45', '7.679']
['223', '090', '223Th', '207696.561', '1710.108', '7.669']
['223', '091', '223Pa', '207698.984', '1706.391', '7.652']
['223', '092', '223U', '207701.993', '1702.089', '7.633']
['224', '087', '224Fr', '208631.862', '1718.252', '7.671']
['224', '088', '224Ra', '208628.518', '1720.302', '7.68']
['224', '089', '224Ac', '208629.415', '1718.112', '7.67']
['224', '090', '224Th', '208628.665', '1717.569', '7.668']
['224', '091', '224Pa', '208632.028', '1712.913', '7.647']
['224', '092', '224U', '208633.361', '1710.286', '7.635']
['225', '087', '225Fr', '209565.506', '1724.173', '7.663']
['225', '088', '225Ra', '209563.179', '1725.207', '7.668']
['225', '089', '225Ac', '209562.312', '1724.781', '7.666']
['225', '090', '225Th', '209562.473', '1723.326', '7.659']
['225', '091', '225Pa', '209563.992', '1720.514', '7.647']
['225', '092', '225U', '209566.518', '1716.695', '7.63']
['225', '093', '225Np', '209570.22', '1711.699', '7.608']
['226', '087', '226Fr', '210500.56', '1728.685', '7.649']
['226', '088', '226Ra', '210496.348', '1731.603', '7.662']
['226', '089', '226Ac', '210496.478', '1730.18', '7.656']
['226', '090', '226Th', '210494.854', '1730.511', '7.657']
['226', '091', '226Pa', '210497.179', '1726.892', '7.641']
['226', '092', '226U', '210497.964', '1724.814', '7.632']
['227', '087', '227Fr', '211434.334', '1734.476', '7.641']
['227', '088', '227Ra', '211431.352', '1736.165', '7.648']
['227', '089', '227Ac', '211429.513', '1736.71', '7.651']
['227', '090', '227Th', '211428.957', '1735.973', '7.647']
['227', '091', '227Pa', '211429.472', '1734.165', '7.639']
['227', '092', '227U', '211431.151', '1731.192', '7.626']
['227', '093', '227Np', '211434.178', '1726.872', '7.607']
['228', '088', '228Ra', '212364.609', '1742.473', '7.642']
['228', '089', '228Ac', '212364.052', '1741.737', '7.639']
['228', '090', '228Th', '212361.417', '1743.078', '7.645']
['228', '091', '228Pa', '212363.058', '1740.144', '7.632']
['228', '092', '228U', '212362.848', '1739.061', '7.627']
['228', '094', '228Pu', '212368.691', '1730.631', '7.59']
['229', '087', '229Fr', '213303.492', '1744.449', '7.618']
['229', '088', '229Ra', '213299.724', '1746.923', '7.628']
['229', '089', '229Ac', '213297.4', '1747.954', '7.633']
['229', '090', '229Th', '213295.726', '1748.335', '7.635']
['229', '091', '229Pa', '213295.526', '1747.241', '7.63']
['229', '092', '229U', '213296.328', '1745.146', '7.621']
['229', '093', '229Np', '213298.386', '1741.795', '7.606']
['229', '094', '229Pu', '213301.495', '1737.392', '7.587']
['230', '088', '230Ra', '214233.173', '1753.04', '7.622']
['230', '089', '230Ac', '214231.954', '1752.965', '7.622']
['230', '090', '230Th', '214228.497', '1755.129', '7.631']
['230', '091', '230Pa', '214229.297', '1753.036', '7.622']
['230', '092', '230U', '214228.226', '1752.813', '7.621']
['230', '093', '230Np', '214231.34', '1748.406', '7.602']
['230', '094', '230Pu', '214232.523', '1745.93', '7.591']
['231', '089', '231Ac', '215165.558', '1758.927', '7.614']
['231', '090', '231Th', '215162.944', '1760.247', '7.62']
['231', '091', '231Pa', '215162.042', '1759.856', '7.618']
['231', '092', '231U', '215161.912', '1758.693', '7.613']
['231', '093', '231Np', '215163.224', '1756.087', '7.602']
['231', '094', '231Pu', '215165.368', '1752.65', '7.587']
['232', '089', '232Ac', '216100.282', '1763.768', '7.602']
['232', '090', '232Th', '216096.069', '1766.687', '7.615']
['232', '091', '232Pa', '216096.058', '1765.405', '7.61']
['232', '092', '232U', '216094.21', '1765.96', '7.612']
['232', '094', '232Pu', '216096.943', '1760.64', '7.589']
['233', '090', '233Th', '217030.848', '1771.474', '7.603']
['233', '091', '233Pa', '217029.094', '1771.934', '7.605']
['233', '092', '233U', '217028.013', '1771.722', '7.604']
['233', '093', '233Np', '217028.532', '1769.91', '7.596']
['233', '094', '233Pu', '217030.121', '1767.028', '7.584']
['233', '096', '233Cm', '217036.339', '1758.223', '7.546']
['234', '090', '234Th', '217964.223', '1777.664', '7.597']
['234', '091', '234Pa', '217963.439', '1777.155', '7.595']
['234', '092', '234U', '217960.734', '1778.567', '7.601']
['234', '093', '234Np', '217962.032', '1775.975', '7.59']
['234', '094', '234Pu', '217961.915', '1774.799', '7.585']
['234', '096', '234Cm', '217967.267', '1766.86', '7.551']
['235', '090', '235Th', '218899.363', '1782.09', '7.583']
['235', '091', '235Pa', '218896.922', '1783.237', '7.588']
['235', '092', '235U', '218895.002', '1783.864', '7.591']
['235', '093', '235Np', '218894.615', '1782.958', '7.587']
['235', '094', '235Pu', '218895.243', '1781.036', '7.579']
['236', '091', '236Pa', '219831.436', '1788.289', '7.577']
['236', '092', '236U', '219828.021', '1790.41', '7.586']
['236', '093', '236Np', '219828.444', '1788.694', '7.579']
['236', '094', '236Pu', '219827.456', '1788.389', '7.578']
['237', '091', '237Pa', '220765.22', '1794.07', '7.57']
['237', '092', '237U', '220762.461', '1795.536', '7.576']
['237', '093', '237Np', '220761.431', '1795.272', '7.575']
['237', '094', '237Pu', '220761.14', '1794.27', '7.571']
['238', '091', '238Pa', '221699.844', '1799.011', '7.559']
['238', '092', '238U', '221695.872', '1801.69', '7.57']
['238', '093', '238Np', '221695.508', '1800.76', '7.566']
['238', '094', '238Pu', '221693.706', '1801.269', '7.568']
['238', '095', '238Am', '221695.45', '1798.232', '7.556']
['238', '096', '238Cm', '221695.919', '1796.469', '7.548']
['239', '092', '239U', '222630.631', '1806.496', '7.559']
['239', '093', '239Np', '222628.859', '1806.975', '7.561']
['239', '094', '239Pu', '222627.625', '1806.916', '7.56']
['239', '095', '239Am', '222627.916', '1805.331', '7.554']
['240', '092', '240U', '223564.266', '1812.426', '7.552']
['240', '093', '240Np', '223563.355', '1812.044', '7.55']
['240', '094', '240Pu', '223560.656', '1813.45', '7.556']
['240', '095', '240Am', '223561.53', '1811.282', '7.547']
['240', '096', '240Cm', '223561.233', '1810.287', '7.543']
['241', '093', '241Np', '224496.794', '1818.17', '7.544']
['241', '094', '241Pu', '224494.98', '1818.691', '7.546']
['241', '095', '241Am', '224494.448', '1817.93', '7.543']
['241', '096', '241Cm', '224494.705', '1816.38', '7.537']
['242', '093', '242Np', '225431.448', '1823.082', '7.533']
['242', '094', '242Pu', '225428.236', '1825.001', '7.541']
['242', '095', '242Am', '225428.476', '1823.467', '7.535']
['242', '096', '242Cm', '225427.3', '1823.35', '7.535']
['242', '098', '242Cf', '225430.813', '1817.25', '7.509']
['243', '094', '243Pu', '226362.767', '1830.035', '7.531']
['243', '095', '243Am', '226361.676', '1829.832', '7.53']
['243', '096', '243Cm', '226361.173', '1829.042', '7.527']
['243', '097', '243Bk', '226362.169', '1826.753', '7.518']
['244', '094', '244Pu', '227296.311', '1836.056', '7.525']
['244', '095', '244Am', '227295.875', '1835.199', '7.521']
['244', '096', '244Cm', '227293.937', '1835.844', '7.524']
['244', '097', '244Bk', '227295.688', '1832.799', '7.511']
['244', '098', '244Cf', '227295.94', '1831.254', '7.505']
['245', '094', '245Pu', '228231.105', '1840.827', '7.514']
['245', '095', '245Am', '228229.388', '1841.251', '7.515']
['245', '096', '245Cm', '228227.982', '1841.364', '7.516']
['245', '097', '245Bk', '228228.282', '1839.771', '7.509']
['245', '098', '245Cf', '228229.342', '1837.417', '7.5']
['246', '094', '246Pu', '229164.888', '1846.61', '7.507']
['246', '095', '246Am', '229163.977', '1846.227', '7.505']
['246', '096', '246Cm', '229161.09', '1847.822', '7.511']
['246', '097', '246Bk', '229161.93', '1845.688', '7.503']
['246', '098', '246Cf', '229161.541', '1844.784', '7.499']
['246', '100', '246Fm', '229166.567', '1837.171', '7.468']
['247', '096', '247Cm', '230095.499', '1852.977', '7.502']
['247', '097', '247Bk', '230094.945', '1852.238', '7.499']
['247', '098', '247Cf', '230095.08', '1850.81', '7.493']
['248', '096', '248Cm', '231028.851', '1859.191', '7.497']
['248', '098', '248Cf', '231027.677', '1857.778', '7.491']
['248', '100', '248Fm', '231031.321', '1851.547', '7.466']
['249', '096', '249Cm', '231963.703', '1863.904', '7.486']
['249', '097', '249Bk', '231962.292', '1864.022', '7.486']
['249', '098', '249Cf', '231961.657', '1863.364', '7.483']
['250', '096', '250Cm', '232897.436', '1869.736', '7.479']
['250', '097', '250Bk', '232896.887', '1868.992', '7.476']
['250', '098', '250Cf', '232894.597', '1869.989', '7.48']
['250', '100', '250Fm', '232896.477', '1865.522', '7.462']
['251', '096', '251Cm', '233832.589', '1874.149', '7.467']
['251', '097', '251Bk', '233830.658', '1874.786', '7.469']
['251', '098', '251Cf', '233829.054', '1875.097', '7.471']
['251', '099', '251Es', '233828.92', '1873.938', '7.466']
['251', '100', '251Fm', '233829.884', '1871.68', '7.457']
['252', '098', '252Cf', '234762.447', '1881.269', '7.465']
['252', '099', '252Es', '234763.192', '1879.231', '7.457']
['252', '100', '252Fm', '234762.208', '1878.922', '7.456']
['252', '102', '252No', '234767.25', '1871.293', '7.426']
['253', '098', '253Cf', '235697.208', '1886.074', '7.455']
['253', '099', '253Es', '235696.41', '1885.579', '7.453']
['253', '100', '253Fm', '235696.235', '1884.46', '7.448']
['254', '098', '254Cf', '236630.742', '1892.105', '7.449']
['254', '099', '254Es', '236630.882', '1890.672', '7.444']
['254', '100', '254Fm', '236629.284', '1890.977', '7.445']
['254', '102', '254No', '236632.081', '1885.593', '7.424']
['255', '099', '255Es', '237564.473', '1896.646', '7.438']
['255', '100', '255Fm', '237563.672', '1896.154', '7.436']
['255', '101', '255Md', '237564.205', '1894.327', '7.429']
['255', '102', '255No', '237565.705', '1891.534', '7.418']
['256', '100', '256Fm', '238496.853', '1902.538', '7.432']
['256', '101', '256Md', '238498.476', '1899.622', '7.42']
['256', '102', '256No', '238498.169', '1898.635', '7.417']
['256', '104', '256Rf', '238503.559', '1890.659', '7.385']
['257', '100', '257Fm', '239431.45', '1907.506', '7.422']
['257', '101', '257Md', '239431.347', '1906.317', '7.418']
['257', '102', '257No', '239432.08', '1904.289', '7.41']
['258', '101', '258Md', '240365.532', '1911.696', '7.41']
['260', '106', '260Sg', '242240.857', '1909.035', '7.342']
['261', '104', '261Rf', '243168.109', '1923.936', '7.371']
['264', '108', '264Hs', '245978.832', '1926.736', '7.298']
['265', '106', '265Sg', '246904.568', '1943.152', '7.333']
###Markdown
Next, filter the parsed rows further, keeping only the columns we need (mass number A, binding energy per nucleon, and the isotope name) and converting each value to the correct type.
###Code
# Re-read index.txt, skip comment lines, and keep only the columns we need:
# mass number A (int), binding energy per nucleon in MeV (float), and the
# isotope name (string).
data = []
for row in open('index.txt').readlines():
    if not row.startswith('#'):
        t = row.strip().split()
        t1 = (int(t[0]), float(t[5]), t[2])
        data.append(t1)
print(data)
###Output
[(1, 0.0, '1H'), (2, 1.112, '2H'), (3, 2.827, '3H'), (3, 2.573, '3He'), (4, 1.401, '4H'), (4, 7.074, '4He'), (4, 1.155, '4Li'), (5, 1.337, '5H'), (5, 5.48, '5He'), (5, 5.266, '5Li'), (6, 0.964, '6H'), (6, 4.878, '6He'), (6, 5.332, '6Li'), (6, 4.487, '6Be'), (7, 4.119, '7He'), (7, 5.606, '7Li'), (7, 5.371, '7Be'), (7, 3.531, '7B'), (8, 3.926, '8He'), (8, 5.16, '8Li'), (8, 7.062, '8Be'), (8, 4.717, '8B'), (8, 3.098, '8C'), (9, 3.349, '9He'), (9, 5.038, '9Li'), (9, 6.463, '9Be'), (9, 6.257, '9B'), (9, 4.337, '9C'), (10, 3.034, '10He'), (10, 4.532, '10Li'), (10, 6.498, '10Be'), (10, 6.475, '10B'), (10, 6.032, '10C'), (10, 3.644, '10N'), (11, 4.149, '11Li'), (11, 5.953, '11Be'), (11, 6.928, '11B'), (11, 6.676, '11C'), (11, 5.364, '11N'), (12, 5.721, '12Be'), (12, 6.631, '12B'), (12, 7.68, '12C'), (12, 6.17, '12N'), (12, 4.879, '12O'), (13, 5.273, '13Be'), (13, 6.496, '13B'), (13, 7.47, '13C'), (13, 7.239, '13N'), (13, 5.812, '13O'), (14, 4.994, '14Be'), (14, 6.102, '14B'), (14, 7.52, '14C'), (14, 7.476, '14N'), (14, 7.052, '14O'), (15, 5.879, '15B'), (15, 7.1, '15C'), (15, 7.699, '15N'), (15, 7.464, '15O'), (15, 6.483, '15F'), (16, 5.509, '16B'), (16, 6.922, '16C'), (16, 7.374, '16N'), (16, 7.976, '16O'), (16, 6.964, '16F'), (16, 6.083, '16Ne'), (17, 5.267, '17B'), (17, 6.558, '17C'), (17, 7.286, '17N'), (17, 7.751, '17O'), (17, 7.542, '17F'), (17, 6.643, '17Ne'), (18, 6.426, '18C'), (18, 7.039, '18N'), (18, 7.767, '18O'), (18, 7.632, '18F'), (18, 7.341, '18Ne'), (18, 6.249, '18Na'), (19, 6.118, '19C'), (19, 6.948, '19N'), (19, 7.566, '19O'), (19, 7.779, '19F'), (19, 7.567, '19Ne'), (19, 6.938, '19Na'), (19, 5.838, '19Mg'), (20, 5.959, '20C'), (20, 6.709, '20N'), (20, 7.569, '20O'), (20, 7.72, '20F'), (20, 8.032, '20Ne'), (20, 7.299, '20Na'), (20, 6.723, '20Mg'), (21, 6.608, '21N'), (21, 7.389, '21O'), (21, 7.738, '21F'), (21, 7.972, '21Ne'), (21, 7.766, '21Na'), (21, 7.105, '21Mg'), (22, 6.366, '22N'), (22, 7.365, '22O'), (22, 7.624, '22F'), (22, 8.08, '22Ne'), (22, 7.916, '22Na'), (22, 7.663, '22Mg'), (23, 7.164, '23O'), (23, 7.62, '23F'), (23, 7.955, '23Ne'), (23, 8.111, '23Na'), (23, 7.901, '23Mg'), (23, 7.335, '23Al'), (24, 7.016, '24O'), (24, 7.463, '24F'), (24, 7.993, '24Ne'), (24, 8.064, '24Na'), (24, 8.261, '24Mg'), (24, 7.65, '24Al'), (24, 7.167, '24Si'), (25, 7.339, '25F'), (25, 7.843, '25Ne'), (25, 8.101, '25Na'), (25, 8.224, '25Mg'), (25, 8.021, '25Al'), (25, 7.48, '25Si'), (26, 7.098, '26F'), (26, 7.754, '26Ne'), (26, 8.004, '26Na'), (26, 8.334, '26Mg'), (26, 8.15, '26Al'), (26, 7.925, '26Si'), (27, 6.887, '27F'), (27, 7.52, '27Ne'), (27, 7.957, '27Na'), (27, 8.264, '27Mg'), (27, 8.332, '27Al'), (27, 8.124, '27Si'), (27, 7.663, '27P'), (28, 7.39, '28Ne'), (28, 7.799, '28Na'), (28, 8.272, '28Mg'), (28, 8.31, '28Al'), (28, 8.448, '28Si'), (28, 7.908, '28P'), (28, 7.479, '28S'), (29, 7.179, '29Ne'), (29, 7.683, '29Na'), (29, 8.114, '29Mg'), (29, 8.349, '29Al'), (29, 8.449, '29Si'), (29, 8.251, '29P'), (29, 7.749, '29S'), (30, 7.041, '30Ne'), (30, 7.506, '30Na'), (30, 8.055, '30Mg'), (30, 8.261, '30Al'), (30, 8.521, '30Si'), (30, 8.354, '30P'), (30, 8.123, '30S'), (31, 7.386, '31Na'), (31, 7.872, '31Mg'), (31, 8.226, '31Al'), (31, 8.458, '31Si'), (31, 8.481, '31P'), (31, 8.282, '31S'), (31, 7.87, '31Cl'), (32, 7.207, '32Na'), (32, 7.808, '32Mg'), (32, 8.099, '32Al'), (32, 8.482, '32Si'), (32, 8.464, '32P'), (32, 8.493, '32S'), (32, 8.072, '32Cl'), (32, 7.7, '32Ar'), (33, 7.056, '33Na'), (33, 7.639, '33Mg'), (33, 8.022, '33Al'), (33, 8.36, '33Si'), (33, 8.514, '33P'), (33, 8.498, 
'33S'), (33, 8.305, '33Cl'), (33, 7.929, '33Ar'), (34, 7.536, '34Mg'), (34, 7.858, '34Al'), (34, 8.336, '34Si'), (34, 8.448, '34P'), (34, 8.584, '34S'), (34, 8.399, '34Cl'), (34, 8.198, '34Ar'), (35, 7.784, '35Al'), (35, 8.169, '35Si'), (35, 8.446, '35P'), (35, 8.538, '35S'), (35, 8.52, '35Cl'), (35, 8.327, '35Ar'), (35, 7.966, '35K'), (36, 7.628, '36Al'), (36, 8.114, '36Si'), (36, 8.308, '36P'), (36, 8.575, '36S'), (36, 8.522, '36Cl'), (36, 8.52, '36Ar'), (36, 8.142, '36K'), (36, 7.816, '36Ca'), (37, 7.528, '37Al'), (37, 7.953, '37Si'), (37, 8.267, '37P'), (37, 8.46, '37S'), (37, 8.57, '37Cl'), (37, 8.527, '37Ar'), (37, 8.34, '37K'), (37, 8.004, '37Ca'), (38, 7.381, '38Al'), (38, 7.89, '38Si'), (38, 8.151, '38P'), (38, 8.449, '38S'), (38, 8.505, '38Cl'), (38, 8.614, '38Ar'), (38, 8.438, '38K'), (38, 8.24, '38Ca'), (39, 7.262, '39Al'), (39, 7.741, '39Si'), (39, 8.1, '39P'), (39, 8.344, '39S'), (39, 8.494, '39Cl'), (39, 8.563, '39Ar'), (39, 8.557, '39K'), (39, 8.369, '39Ca'), (39, 8.013, '39Sc'), (40, 7.661, '40Si'), (40, 7.981, '40P'), (40, 8.33, '40S'), (40, 8.428, '40Cl'), (40, 8.595, '40Ar'), (40, 8.538, '40K'), (40, 8.551, '40Ca'), (40, 8.174, '40Sc'), (40, 7.862, '40Ti'), (41, 7.473, '41Si'), (41, 7.914, '41P'), (41, 8.23, '41S'), (41, 8.413, '41Cl'), (41, 8.534, '41Ar'), (41, 8.576, '41K'), (41, 8.547, '41Ca'), (41, 8.369, '41Sc'), (42, 7.77, '42P'), (42, 8.194, '42S'), (42, 8.348, '42Cl'), (42, 8.556, '42Ar'), (42, 8.551, '42K'), (42, 8.617, '42Ca'), (42, 8.445, '42Sc'), (42, 8.26, '42Ti'), (43, 7.664, '43P'), (43, 8.059, '43S'), (43, 8.324, '43Cl'), (43, 8.488, '43Ar'), (43, 8.577, '43K'), (43, 8.601, '43Ca'), (43, 8.531, '43Sc'), (43, 8.353, '43Ti'), (44, 7.994, '44S'), (44, 8.229, '44Cl'), (44, 8.494, '44Ar'), (44, 8.547, '44K'), (44, 8.658, '44Ca'), (44, 8.557, '44Sc'), (44, 8.534, '44Ti'), (44, 8.211, '44V'), (45, 7.865, '45S'), (45, 8.184, '45Cl'), (45, 8.42, '45Ar'), (45, 8.555, '45K'), (45, 8.631, '45Ca'), (45, 8.619, '45Sc'), (45, 8.556, '45Ti'), (45, 8.38, '45V'), (45, 8.076, '45Cr'), (46, 8.102, '46Cl'), (46, 8.411, '46Ar'), (46, 8.518, '46K'), (46, 8.669, '46Ca'), (46, 8.622, '46Sc'), (46, 8.656, '46Ti'), (46, 8.486, '46V'), (46, 8.304, '46Cr'), (47, 8.323, '47Ar'), (47, 8.515, '47K'), (47, 8.639, '47Ca'), (47, 8.665, '47Sc'), (47, 8.661, '47Ti'), (47, 8.582, '47V'), (47, 8.407, '47Cr'), (48, 8.431, '48K'), (48, 8.666, '48Ca'), (48, 8.656, '48Sc'), (48, 8.723, '48Ti'), (48, 8.623, '48V'), (48, 8.572, '48Cr'), (48, 8.275, '48Mn'), (49, 8.387, '49K'), (49, 8.595, '49Ca'), (49, 8.686, '49Sc'), (49, 8.711, '49Ti'), (49, 8.683, '49V'), (49, 8.613, '49Cr'), (49, 8.44, '49Mn'), (50, 8.281, '50K'), (50, 8.55, '50Ca'), (50, 8.633, '50Sc'), (50, 8.756, '50Ti'), (50, 8.696, '50V'), (50, 8.701, '50Cr'), (50, 8.533, '50Mn'), (50, 8.354, '50Fe'), (51, 8.468, '51Ca'), (51, 8.597, '51Sc'), (51, 8.709, '51Ti'), (51, 8.742, '51V'), (51, 8.712, '51Cr'), (51, 8.634, '51Mn'), (51, 8.461, '51Fe'), (52, 8.396, '52Ca'), (52, 8.532, '52Sc'), (52, 8.692, '52Ti'), (52, 8.715, '52V'), (52, 8.776, '52Cr'), (52, 8.67, '52Mn'), (52, 8.61, '52Fe'), (53, 8.63, '53Ti'), (53, 8.71, '53V'), (53, 8.76, '53Cr'), (53, 8.734, '53Mn'), (53, 8.649, '53Fe'), (53, 8.477, '53Co'), (54, 8.401, '54Sc'), (54, 8.597, '54Ti'), (54, 8.662, '54V'), (54, 8.778, '54Cr'), (54, 8.738, '54Mn'), (54, 8.736, '54Fe'), (54, 8.569, '54Co'), (54, 8.392, '54Ni'), (55, 8.31, '55Sc'), (55, 8.516, '55Ti'), (55, 8.638, '55V'), (55, 8.732, '55Cr'), (55, 8.765, '55Mn'), (55, 8.747, '55Fe'), (55, 8.67, '55Co'), (55, 8.497, '55Ni'), (56, 
8.459, '56Ti'), (56, 8.573, '56V'), (56, 8.723, '56Cr'), (56, 8.738, '56Mn'), (56, 8.79, '56Fe'), (56, 8.695, '56Co'), (56, 8.643, '56Ni'), (57, 8.358, '57Ti'), (57, 8.531, '57V'), (57, 8.663, '57Cr'), (57, 8.737, '57Mn'), (57, 8.77, '57Fe'), (57, 8.742, '57Co'), (57, 8.671, '57Ni'), (57, 8.503, '57Cu'), (58, 8.454, '58V'), (58, 8.641, '58Cr'), (58, 8.698, '58Mn'), (58, 8.792, '58Fe'), (58, 8.739, '58Co'), (58, 8.732, '58Ni'), (58, 8.571, '58Cu'), (58, 8.396, '58Zn'), (59, 8.395, '59V'), (59, 8.565, '59Cr'), (59, 8.68, '59Mn'), (59, 8.755, '59Fe'), (59, 8.768, '59Co'), (59, 8.737, '59Ni'), (59, 8.642, '59Cu'), (59, 8.475, '59Zn'), (60, 8.314, '60V'), (60, 8.533, '60Cr'), (60, 8.632, '60Mn'), (60, 8.756, '60Fe'), (60, 8.747, '60Co'), (60, 8.781, '60Ni'), (60, 8.666, '60Cu'), (60, 8.583, '60Zn'), (61, 8.455, '61Cr'), (61, 8.596, '61Mn'), (61, 8.704, '61Fe'), (61, 8.756, '61Co'), (61, 8.765, '61Ni'), (61, 8.716, '61Cu'), (61, 8.61, '61Zn'), (61, 8.446, '61Ga'), (62, 8.42, '62Cr'), (62, 8.531, '62Mn'), (62, 8.693, '62Fe'), (62, 8.721, '62Co'), (62, 8.795, '62Ni'), (62, 8.718, '62Cu'), (62, 8.679, '62Zn'), (62, 8.519, '62Ga'), (63, 8.497, '63Mn'), (63, 8.63, '63Fe'), (63, 8.718, '63Co'), (63, 8.763, '63Ni'), (63, 8.752, '63Cu'), (63, 8.686, '63Zn'), (63, 8.584, '63Ga'), (64, 8.432, '64Mn'), (64, 8.609, '64Fe'), (64, 8.676, '64Co'), (64, 8.777, '64Ni'), (64, 8.739, '64Cu'), (64, 8.736, '64Zn'), (64, 8.612, '64Ga'), (64, 8.529, '64Ge'), (65, 8.396, '65Mn'), (65, 8.541, '65Fe'), (65, 8.657, '65Co'), (65, 8.736, '65Ni'), (65, 8.757, '65Cu'), (65, 8.724, '65Zn'), (65, 8.662, '65Ga'), (65, 8.554, '65Ge'), (66, 8.514, '66Fe'), (66, 8.601, '66Co'), (66, 8.74, '66Ni'), (66, 8.731, '66Cu'), (66, 8.76, '66Zn'), (66, 8.669, '66Ga'), (66, 8.626, '66Ge'), (66, 8.46, '66As'), (67, 8.45, '67Fe'), (67, 8.578, '67Co'), (67, 8.696, '67Ni'), (67, 8.737, '67Cu'), (67, 8.734, '67Zn'), (67, 8.708, '67Ga'), (67, 8.633, '67Ge'), (67, 8.532, '67As'), (68, 8.406, '68Fe'), (68, 8.516, '68Co'), (68, 8.682, '68Ni'), (68, 8.702, '68Cu'), (68, 8.756, '68Zn'), (68, 8.701, '68Ga'), (68, 8.688, '68Ge'), (68, 8.558, '68As'), (68, 8.477, '68Se'), (69, 8.49, '69Co'), (69, 8.623, '69Ni'), (69, 8.695, '69Cu'), (69, 8.723, '69Zn'), (69, 8.725, '69Ga'), (69, 8.681, '69Ge'), (69, 8.612, '69As'), (69, 8.502, '69Se'), (70, 8.422, '70Co'), (70, 8.603, '70Ni'), (70, 8.647, '70Cu'), (70, 8.73, '70Zn'), (70, 8.709, '70Ga'), (70, 8.722, '70Ge'), (70, 8.622, '70As'), (70, 8.578, '70Se'), (71, 8.392, '71Co'), (71, 8.54, '71Ni'), (71, 8.635, '71Cu'), (71, 8.689, '71Zn'), (71, 8.718, '71Ga'), (71, 8.703, '71Ge'), (71, 8.664, '71As'), (71, 8.586, '71Se'), (71, 8.489, '71Br'), (71, 8.335, '71Kr'), (72, 8.516, '72Ni'), (72, 8.587, '72Cu'), (72, 8.692, '72Zn'), (72, 8.687, '72Ga'), (72, 8.732, '72Ge'), (72, 8.66, '72As'), (72, 8.645, '72Se'), (72, 8.511, '72Br'), (72, 8.429, '72Kr'), (73, 8.569, '73Cu'), (73, 8.646, '73Zn'), (73, 8.694, '73Ga'), (73, 8.705, '73Ge'), (73, 8.69, '73As'), (73, 8.641, '73Se'), (73, 8.568, '73Br'), (73, 8.46, '73Kr'), (74, 8.522, '74Cu'), (74, 8.642, '74Zn'), (74, 8.663, '74Ga'), (74, 8.725, '74Ge'), (74, 8.68, '74As'), (74, 8.688, '74Se'), (74, 8.584, '74Br'), (74, 8.533, '74Kr'), (74, 8.382, '74Rb'), (75, 8.49, '75Cu'), (75, 8.591, '75Zn'), (75, 8.661, '75Ga'), (75, 8.696, '75Ge'), (75, 8.701, '75As'), (75, 8.679, '75Se'), (75, 8.628, '75Br'), (75, 8.553, '75Kr'), (75, 8.448, '75Rb'), (75, 8.297, '75Sr'), (76, 8.444, '76Cu'), (76, 8.58, '76Zn'), (76, 8.625, '76Ga'), (76, 8.705, '76Ge'), (76, 8.683, '76As'), (76, 8.711, 
'76Se'), (76, 8.636, '76Br'), (76, 8.609, '76Kr'), (76, 8.486, '76Rb'), (76, 8.394, '76Sr'), (77, 8.529, '77Zn'), (77, 8.613, '77Ga'), (77, 8.671, '77Ge'), (77, 8.696, '77As'), (77, 8.695, '77Se'), (77, 8.667, '77Br'), (77, 8.617, '77Kr'), (77, 8.537, '77Rb'), (77, 8.436, '77Sr'), (78, 8.506, '78Zn'), (78, 8.577, '78Ga'), (78, 8.672, '78Ge'), (78, 8.674, '78As'), (78, 8.718, '78Se'), (78, 8.662, '78Br'), (78, 8.661, '78Kr'), (78, 8.558, '78Rb'), (78, 8.5, '78Sr'), (79, 8.556, '79Ga'), (79, 8.634, '79Ge'), (79, 8.677, '79As'), (79, 8.696, '79Se'), (79, 8.688, '79Br'), (79, 8.657, '79Kr'), (79, 8.601, '79Rb'), (79, 8.524, '79Sr'), (79, 8.424, '79Y'), (80, 8.426, '80Zn'), (80, 8.507, '80Ga'), (80, 8.627, '80Ge'), (80, 8.651, '80As'), (80, 8.711, '80Se'), (80, 8.678, '80Br'), (80, 8.693, '80Kr'), (80, 8.612, '80Rb'), (80, 8.579, '80Sr'), (80, 8.455, '80Y'), (80, 8.374, '80Zr'), (81, 8.488, '81Ga'), (81, 8.581, '81Ge'), (81, 8.648, '81As'), (81, 8.686, '81Se'), (81, 8.696, '81Br'), (81, 8.683, '81Kr'), (81, 8.645, '81Rb'), (81, 8.587, '81Sr'), (81, 8.51, '81Y'), (81, 8.407, '81Zr'), (82, 8.566, '82Ge'), (82, 8.614, '82As'), (82, 8.693, '82Se'), (82, 8.682, '82Br'), (82, 8.711, '82Kr'), (82, 8.647, '82Rb'), (82, 8.636, '82Sr'), (82, 8.531, '82Y'), (83, 8.602, '83As'), (83, 8.659, '83Se'), (83, 8.693, '83Br'), (83, 8.696, '83Kr'), (83, 8.675, '83Rb'), (83, 8.638, '83Sr'), (83, 8.575, '83Y'), (83, 8.495, '83Zr'), (83, 8.395, '83Nb'), (84, 8.659, '84Se'), (84, 8.672, '84Br'), (84, 8.717, '84Kr'), (84, 8.676, '84Rb'), (84, 8.677, '84Sr'), (84, 8.591, '84Y'), (85, 8.61, '85Se'), (85, 8.674, '85Br'), (85, 8.699, '85Kr'), (85, 8.697, '85Rb'), (85, 8.676, '85Sr'), (85, 8.628, '85Y'), (85, 8.564, '85Zr'), (85, 8.484, '85Nb'), (86, 8.582, '86Se'), (86, 8.632, '86Br'), (86, 8.712, '86Kr'), (86, 8.697, '86Rb'), (86, 8.708, '86Sr'), (86, 8.638, '86Y'), (86, 8.612, '86Zr'), (86, 8.51, '86Nb'), (86, 8.44, '86Mo'), (87, 8.531, '87Se'), (87, 8.606, '87Br'), (87, 8.675, '87Kr'), (87, 8.711, '87Rb'), (87, 8.705, '87Sr'), (87, 8.675, '87Y'), (87, 8.624, '87Zr'), (87, 8.555, '87Nb'), (87, 8.472, '87Mo'), (88, 8.495, '88Se'), (88, 8.564, '88Br'), (88, 8.657, '88Kr'), (88, 8.681, '88Rb'), (88, 8.733, '88Sr'), (88, 8.683, '88Y'), (88, 8.666, '88Zr'), (88, 8.571, '88Nb'), (88, 8.524, '88Mo'), (89, 8.534, '89Br'), (89, 8.617, '89Kr'), (89, 8.664, '89Rb'), (89, 8.706, '89Sr'), (89, 8.714, '89Y'), (89, 8.673, '89Zr'), (89, 8.617, '89Nb'), (89, 8.545, '89Mo'), (90, 8.485, '90Br'), (90, 8.591, '90Kr'), (90, 8.631, '90Rb'), (90, 8.696, '90Sr'), (90, 8.693, '90Y'), (90, 8.71, '90Zr'), (90, 8.633, '90Nb'), (90, 8.597, '90Mo'), (90, 8.489, '90Tc'), (91, 8.446, '91Br'), (91, 8.545, '91Kr'), (91, 8.608, '91Rb'), (91, 8.664, '91Sr'), (91, 8.685, '91Y'), (91, 8.693, '91Zr'), (91, 8.671, '91Nb'), (91, 8.614, '91Mo'), (91, 8.537, '91Tc'), (92, 8.389, '92Br'), (92, 8.513, '92Kr'), (92, 8.569, '92Rb'), (92, 8.649, '92Sr'), (92, 8.662, '92Y'), (92, 8.693, '92Zr'), (92, 8.662, '92Nb'), (92, 8.658, '92Mo'), (92, 8.564, '92Tc'), (93, 8.457, '93Kr'), (93, 8.541, '93Rb'), (93, 8.613, '93Sr'), (93, 8.649, '93Y'), (93, 8.672, '93Zr'), (93, 8.664, '93Nb'), (93, 8.651, '93Mo'), (93, 8.609, '93Tc'), (93, 8.532, '93Ru'), (94, 8.493, '94Rb'), (94, 8.594, '94Sr'), (94, 8.623, '94Y'), (94, 8.667, '94Zr'), (94, 8.649, '94Nb'), (94, 8.662, '94Mo'), (94, 8.609, '94Tc'), (94, 8.584, '94Ru'), (95, 8.46, '95Rb'), (95, 8.549, '95Sr'), (95, 8.605, '95Y'), (95, 8.644, '95Zr'), (95, 8.647, '95Nb'), (95, 8.649, '95Mo'), (95, 8.623, '95Tc'), (95, 8.587, '95Ru'), 
(95, 8.525, '95Rh'), (96, 8.408, '96Rb'), (96, 8.521, '96Sr'), (96, 8.57, '96Y'), (96, 8.635, '96Zr'), (96, 8.629, '96Nb'), (96, 8.654, '96Mo'), (96, 8.615, '96Tc'), (96, 8.609, '96Ru'), (96, 8.535, '96Rh'), (96, 8.491, '96Pd'), (97, 8.375, '97Rb'), (97, 8.474, '97Sr'), (97, 8.543, '97Y'), (97, 8.604, '97Zr'), (97, 8.623, '97Nb'), (97, 8.635, '97Mo'), (97, 8.624, '97Tc'), (97, 8.604, '97Ru'), (97, 8.56, '97Rh'), (97, 8.502, '97Pd'), (97, 8.422, '97Ag'), (98, 8.329, '98Rb'), (98, 8.448, '98Sr'), (98, 8.499, '98Y'), (98, 8.581, '98Zr'), (98, 8.596, '98Nb'), (98, 8.635, '98Mo'), (98, 8.61, '98Tc'), (98, 8.62, '98Ru'), (98, 8.561, '98Rh'), (98, 8.534, '98Pd'), (98, 8.442, '98Ag'), (98, 8.378, '98Cd'), (99, 8.293, '99Rb'), (99, 8.399, '99Sr'), (99, 8.472, '99Y'), (99, 8.541, '99Zr'), (99, 8.579, '99Nb'), (99, 8.608, '99Mo'), (99, 8.614, '99Tc'), (99, 8.609, '99Ru'), (99, 8.58, '99Rh'), (99, 8.538, '99Pd'), (99, 8.475, '99Ag'), (100, 8.376, '100Sr'), (100, 8.439, '100Y'), (100, 8.524, '100Zr'), (100, 8.55, '100Nb'), (100, 8.605, '100Mo'), (100, 8.595, '100Tc'), (100, 8.619, '100Ru'), (100, 8.575, '100Rh'), (100, 8.564, '100Pd'), (100, 8.485, '100Ag'), (100, 8.438, '100Cd'), (100, 8.33, '100In'), (100, 8.248, '100Sn'), (101, 8.216, '101Rb'), (101, 8.326, '101Sr'), (101, 8.412, '101Y'), (101, 8.489, '101Zr'), (101, 8.535, '101Nb'), (101, 8.573, '101Mo'), (101, 8.593, '101Tc'), (101, 8.601, '101Ru'), (101, 8.588, '101Rh'), (101, 8.561, '101Pd'), (101, 8.511, '101Ag'), (101, 8.45, '101Cd'), (102, 8.3, '102Sr'), (102, 8.379, '102Y'), (102, 8.468, '102Zr'), (102, 8.505, '102Nb'), (102, 8.568, '102Mo'), (102, 8.571, '102Tc'), (102, 8.607, '102Ru'), (102, 8.577, '102Rh'), (102, 8.581, '102Pd'), (102, 8.517, '102Ag'), (102, 8.484, '102Cd'), (102, 8.389, '102In'), (102, 8.324, '102Sn'), (103, 8.431, '103Zr'), (103, 8.491, '103Nb'), (103, 8.537, '103Mo'), (103, 8.566, '103Tc'), (103, 8.584, '103Ru'), (103, 8.584, '103Rh'), (103, 8.571, '103Pd'), (103, 8.538, '103Ag'), (103, 8.49, '103Cd'), (103, 8.423, '103In'), (104, 8.457, '104Nb'), (104, 8.528, '104Mo'), (104, 8.541, '104Tc'), (104, 8.587, '104Ru'), (104, 8.569, '104Rh'), (104, 8.585, '104Pd'), (104, 8.536, '104Ag'), (104, 8.518, '104Cd'), (104, 8.435, '104In'), (104, 8.384, '104Sn'), (105, 8.441, '105Nb'), (105, 8.495, '105Mo'), (105, 8.535, '105Tc'), (105, 8.562, '105Ru'), (105, 8.573, '105Rh'), (105, 8.571, '105Pd'), (105, 8.55, '105Ag'), (105, 8.517, '105Cd'), (105, 8.463, '105In'), (105, 8.396, '105Sn'), (105, 8.299, '105Sb'), (106, 8.481, '106Mo'), (106, 8.507, '106Tc'), (106, 8.561, '106Ru'), (106, 8.554, '106Rh'), (106, 8.58, '106Pd'), (106, 8.545, '106Ag'), (106, 8.539, '106Cd'), (106, 8.47, '106In'), (106, 8.433, '106Sn'), (106, 8.237, '106Te'), (107, 8.446, '107Mo'), (107, 8.496, '107Tc'), (107, 8.534, '107Ru'), (107, 8.554, '107Rh'), (107, 8.561, '107Pd'), (107, 8.554, '107Ag'), (107, 8.533, '107Cd'), (107, 8.494, '107In'), (107, 8.44, '107Sn'), (108, 8.463, '108Tc'), (108, 8.527, '108Ru'), (108, 8.533, '108Rh'), (108, 8.567, '108Pd'), (108, 8.542, '108Ag'), (108, 8.55, '108Cd'), (108, 8.495, '108In'), (108, 8.469, '108Sn'), (108, 8.303, '108Te'), (109, 8.447, '109Tc'), (109, 8.497, '109Ru'), (109, 8.528, '109Rh'), (109, 8.545, '109Pd'), (109, 8.548, '109Ag'), (109, 8.539, '109Cd'), (109, 8.513, '109In'), (109, 8.471, '109Sn'), (109, 8.405, '109Sb'), (109, 8.318, '109Te'), (109, 8.219, '109I'), (110, 8.411, '110Tc'), (110, 8.485, '110Ru'), (110, 8.504, '110Rh'), (110, 8.547, '110Pd'), (110, 8.532, '110Ag'), (110, 8.551, '110Cd'), (110, 
8.509, '110In'), (110, 8.496, '110Sn'), (110, 8.359, '110Te'), (110, 8.159, '110Xe'), (111, 8.392, '111Tc'), (111, 8.452, '111Ru'), (111, 8.496, '111Rh'), (111, 8.522, '111Pd'), (111, 8.535, '111Ag'), (111, 8.537, '111Cd'), (111, 8.522, '111In'), (111, 8.493, '111Sn'), (111, 8.441, '111Sb'), (111, 8.367, '111Te'), (112, 8.36, '112Tc'), (112, 8.438, '112Ru'), (112, 8.469, '112Rh'), (112, 8.521, '112Pd'), (112, 8.516, '112Ag'), (112, 8.545, '112Cd'), (112, 8.515, '112In'), (112, 8.514, '112Sn'), (112, 8.444, '112Sb'), (112, 8.398, '112Te'), (112, 8.23, '112Xe'), (113, 8.406, '113Ru'), (113, 8.456, '113Rh'), (113, 8.493, '113Pd'), (113, 8.516, '113Ag'), (113, 8.527, '113Cd'), (113, 8.523, '113In'), (113, 8.507, '113Sn'), (113, 8.465, '113Sb'), (113, 8.405, '113Te'), (113, 8.334, '113I'), (113, 8.247, '113Xe'), (113, 8.148, '113Cs'), (114, 8.426, '114Rh'), (114, 8.488, '114Pd'), (114, 8.494, '114Ag'), (114, 8.532, '114Cd'), (114, 8.512, '114In'), (114, 8.523, '114Sn'), (114, 8.463, '114Sb'), (114, 8.433, '114Te'), (114, 8.289, '114Xe'), (114, 8.09, '114Ba'), (115, 8.35, '115Ru'), (115, 8.41, '115Rh'), (115, 8.458, '115Pd'), (115, 8.491, '115Ag'), (115, 8.511, '115Cd'), (115, 8.517, '115In'), (115, 8.514, '115Sn'), (115, 8.481, '115Sb'), (115, 8.431, '115Te'), (115, 8.375, '115I'), (115, 8.301, '115Xe'), (116, 8.378, '116Rh'), (116, 8.45, '116Pd'), (116, 8.466, '116Ag'), (116, 8.512, '116Cd'), (116, 8.502, '116In'), (116, 8.523, '116Sn'), (116, 8.476, '116Sb'), (116, 8.456, '116Te'), (116, 8.382, '116I'), (116, 8.337, '116Xe'), (117, 8.418, '117Pd'), (117, 8.46, '117Ag'), (117, 8.489, '117Cd'), (117, 8.504, '117In'), (117, 8.51, '117Sn'), (117, 8.488, '117Sb'), (117, 8.451, '117Te'), (117, 8.404, '117I'), (117, 8.344, '117Xe'), (117, 8.271, '117Cs'), (118, 8.406, '118Pd'), (118, 8.434, '118Ag'), (118, 8.488, '118Cd'), (118, 8.486, '118In'), (118, 8.517, '118Sn'), (118, 8.479, '118Sb'), (118, 8.47, '118Te'), (118, 8.406, '118I'), (118, 8.375, '118Xe'), (118, 8.286, '118Cs'), (119, 8.422, '119Ag'), (119, 8.461, '119Cd'), (119, 8.486, '119In'), (119, 8.499, '119Sn'), (119, 8.488, '119Sb'), (119, 8.462, '119Te'), (119, 8.427, '119I'), (119, 8.378, '119Xe'), (119, 8.317, '119Cs'), (119, 8.246, '119Ba'), (120, 8.356, '120Pd'), (120, 8.395, '120Ag'), (120, 8.458, '120Cd'), (120, 8.466, '120In'), (120, 8.505, '120Sn'), (120, 8.476, '120Sb'), (120, 8.477, '120Te'), (120, 8.424, '120I'), (120, 8.404, '120Xe'), (120, 8.328, '120Cs'), (120, 8.28, '120Ba'), (121, 8.384, '121Ag'), (121, 8.431, '121Cd'), (121, 8.464, '121In'), (121, 8.485, '121Sn'), (121, 8.482, '121Sb'), (121, 8.467, '121Te'), (121, 8.442, '121I'), (121, 8.404, '121Xe'), (121, 8.353, '121Cs'), (121, 8.294, '121Ba'), (122, 8.425, '122Cd'), (122, 8.442, '122In'), (122, 8.488, '122Sn'), (122, 8.468, '122Sb'), (122, 8.478, '122Te'), (122, 8.437, '122I'), (122, 8.425, '122Xe'), (122, 8.359, '122Cs'), (122, 8.324, '122Ba'), (123, 8.395, '123Cd'), (123, 8.438, '123In'), (123, 8.467, '123Sn'), (123, 8.472, '123Sb'), (123, 8.466, '123Te'), (123, 8.449, '123I'), (123, 8.421, '123Xe'), (123, 8.38, '123Cs'), (123, 8.33, '123Ba'), (124, 8.387, '124Cd'), (124, 8.414, '124In'), (124, 8.467, '124Sn'), (124, 8.456, '124Sb'), (124, 8.473, '124Te'), (124, 8.441, '124I'), (124, 8.438, '124Xe'), (124, 8.383, '124Cs'), (124, 8.356, '124Ba'), (124, 8.278, '124La'), (125, 8.358, '125Cd'), (125, 8.408, '125In'), (125, 8.446, '125Sn'), (125, 8.458, '125Sb'), (125, 8.458, '125Te'), (125, 8.45, '125I'), (125, 8.431, '125Xe'), (125, 8.4, '125Cs'), (125, 8.358, '125Ba'), 
(125, 8.305, '125La'), (126, 8.347, '126Cd'), (126, 8.385, '126In'), (126, 8.444, '126Sn'), (126, 8.44, '126Sb'), (126, 8.463, '126Te'), (126, 8.44, '126I'), (126, 8.444, '126Xe'), (126, 8.399, '126Cs'), (126, 8.38, '126Ba'), (126, 8.312, '126La'), (126, 8.273, '126Ce'), (127, 8.315, '127Cd'), (127, 8.376, '127In'), (127, 8.421, '127Sn'), (127, 8.44, '127Sb'), (127, 8.446, '127Te'), (127, 8.445, '127I'), (127, 8.434, '127Xe'), (127, 8.412, '127Cs'), (127, 8.378, '127Ba'), (127, 8.334, '127La'), (127, 8.281, '127Ce'), (128, 8.304, '128Cd'), (128, 8.353, '128In'), (128, 8.417, '128Sn'), (128, 8.421, '128Sb'), (128, 8.449, '128Te'), (128, 8.433, '128I'), (128, 8.443, '128Xe'), (128, 8.406, '128Cs'), (128, 8.396, '128Ba'), (128, 8.337, '128La'), (128, 8.307, '128Ce'), (128, 8.229, '128Pr'), (129, 8.34, '129In'), (129, 8.393, '129Sn'), (129, 8.418, '129Sb'), (129, 8.43, '129Te'), (129, 8.436, '129I'), (129, 8.431, '129Xe'), (129, 8.416, '129Cs'), (129, 8.391, '129Ba'), (129, 8.356, '129La'), (129, 8.311, '129Ce'), (129, 8.254, '129Pr'), (130, 8.256, '130Cd'), (130, 8.314, '130In'), (130, 8.387, '130Sn'), (130, 8.397, '130Sb'), (130, 8.43, '130Te'), (130, 8.421, '130I'), (130, 8.438, '130Xe'), (130, 8.409, '130Cs'), (130, 8.406, '130Ba'), (130, 8.356, '130La'), (130, 8.333, '130Ce'), (130, 8.264, '130Pr'), (130, 8.223, '130Nd'), (131, 8.299, '131In'), (131, 8.363, '131Sn'), (131, 8.393, '131Sb'), (131, 8.411, '131Te'), (131, 8.422, '131I'), (131, 8.424, '131Xe'), (131, 8.415, '131Cs'), (131, 8.399, '131Ba'), (131, 8.37, '131La'), (131, 8.333, '131Ce'), (131, 8.286, '131Pr'), (131, 8.23, '131Nd'), (132, 8.254, '132In'), (132, 8.355, '132Sn'), (132, 8.373, '132Sb'), (132, 8.408, '132Te'), (132, 8.406, '132I'), (132, 8.428, '132Xe'), (132, 8.406, '132Cs'), (132, 8.409, '132Ba'), (132, 8.368, '132La'), (132, 8.352, '132Ce'), (132, 8.291, '132Pr'), (132, 8.257, '132Nd'), (133, 8.311, '133Sn'), (133, 8.365, '133Sb'), (133, 8.389, '133Te'), (133, 8.405, '133I'), (133, 8.413, '133Xe'), (133, 8.41, '133Cs'), (133, 8.4, '133Ba'), (133, 8.379, '133La'), (133, 8.35, '133Ce'), (133, 8.31, '133Pr'), (133, 8.262, '133Nd'), (133, 8.204, '133Pm'), (134, 8.278, '134Sn'), (134, 8.327, '134Sb'), (134, 8.384, '134Te'), (134, 8.389, '134I'), (134, 8.414, '134Xe'), (134, 8.399, '134Cs'), (134, 8.408, '134Ba'), (134, 8.374, '134La'), (134, 8.366, '134Ce'), (134, 8.313, '134Pr'), (134, 8.286, '134Nd'), (134, 8.213, '134Pm'), (135, 8.292, '135Sb'), (135, 8.346, '135Te'), (135, 8.385, '135I'), (135, 8.399, '135Xe'), (135, 8.401, '135Cs'), (135, 8.398, '135Ba'), (135, 8.383, '135La'), (135, 8.362, '135Ce'), (135, 8.329, '135Pr'), (135, 8.288, '135Nd'), (135, 8.236, '135Pm'), (135, 8.178, '135Sm'), (136, 8.319, '136Te'), (136, 8.351, '136I'), (136, 8.396, '136Xe'), (136, 8.39, '136Cs'), (136, 8.403, '136Ba'), (136, 8.376, '136La'), (136, 8.373, '136Ce'), (136, 8.33, '136Pr'), (136, 8.309, '136Nd'), (136, 8.244, '136Pm'), (136, 8.206, '136Sm'), (137, 8.282, '137Te'), (137, 8.327, '137I'), (137, 8.364, '137Xe'), (137, 8.389, '137Cs'), (137, 8.392, '137Ba'), (137, 8.382, '137La'), (137, 8.367, '137Ce'), (137, 8.342, '137Pr'), (137, 8.31, '137Nd'), (137, 8.264, '137Pm'), (137, 8.214, '137Sm'), (138, 8.295, '138I'), (138, 8.346, '138Xe'), (138, 8.36, '138Cs'), (138, 8.393, '138Ba'), (138, 8.375, '138La'), (138, 8.377, '138Ce'), (138, 8.339, '138Pr'), (138, 8.326, '138Nd'), (138, 8.269, '138Pm'), (138, 8.238, '138Sm'), (138, 8.162, '138Eu'), (139, 8.268, '139I'), (139, 8.312, '139Xe'), (139, 8.342, '139Cs'), (139, 8.367, 
'139Ba'), (139, 8.378, '139La'), (139, 8.37, '139Ce'), (139, 8.349, '139Pr'), (139, 8.323, '139Nd'), (139, 8.286, '139Pm'), (139, 8.243, '139Sm'), (139, 8.187, '139Eu'), (140, 8.291, '140Xe'), (140, 8.314, '140Cs'), (140, 8.353, '140Ba'), (140, 8.355, '140La'), (140, 8.376, '140Ce'), (140, 8.347, '140Pr'), (140, 8.338, '140Nd'), (140, 8.289, '140Pm'), (140, 8.264, '140Sm'), (140, 8.198, '140Eu'), (140, 8.155, '140Gd'), (140, 8.069, '140Tb'), (141, 8.256, '141Xe'), (141, 8.294, '141Cs'), (141, 8.326, '141Ba'), (141, 8.343, '141La'), (141, 8.355, '141Ce'), (141, 8.354, '141Pr'), (141, 8.336, '141Nd'), (141, 8.304, '141Pm'), (141, 8.266, '141Sm'), (141, 8.218, '141Eu'), (141, 8.165, '141Gd'), (141, 8.097, '141Tb'), (142, 8.235, '142Xe'), (142, 8.265, '142Cs'), (142, 8.311, '142Ba'), (142, 8.321, '142La'), (142, 8.347, '142Ce'), (142, 8.336, '142Pr'), (142, 8.346, '142Nd'), (142, 8.307, '142Pm'), (142, 8.286, '142Sm'), (142, 8.226, '142Eu'), (142, 8.19, '142Gd'), (143, 8.244, '143Cs'), (143, 8.282, '143Ba'), (143, 8.306, '143La'), (143, 8.325, '143Ce'), (143, 8.329, '143Pr'), (143, 8.331, '143Nd'), (143, 8.318, '143Pm'), (143, 8.288, '143Sm'), (143, 8.246, '143Eu'), (143, 8.198, '143Gd'), (143, 8.138, '143Tb'), (144, 8.212, '144Cs'), (144, 8.265, '144Ba'), (144, 8.282, '144La'), (144, 8.315, '144Ce'), (144, 8.312, '144Pr'), (144, 8.327, '144Nd'), (144, 8.305, '144Pm'), (144, 8.304, '144Sm'), (144, 8.254, '144Eu'), (144, 8.222, '144Gd'), (144, 8.151, '144Tb'), (144, 8.106, '144Dy'), (145, 8.189, '145Cs'), (145, 8.234, '145Ba'), (145, 8.267, '145La'), (145, 8.29, '145Ce'), (145, 8.302, '145Pr'), (145, 8.309, '145Nd'), (145, 8.303, '145Pm'), (145, 8.293, '145Sm'), (145, 8.269, '145Eu'), (145, 8.229, '145Gd'), (145, 8.175, '145Tb'), (145, 8.117, '145Dy'), (146, 8.158, '146Cs'), (146, 8.216, '146Ba'), (146, 8.239, '146La'), (146, 8.279, '146Ce'), (146, 8.281, '146Pr'), (146, 8.304, '146Nd'), (146, 8.289, '146Pm'), (146, 8.294, '146Sm'), (146, 8.262, '146Eu'), (146, 8.25, '146Gd'), (146, 8.187, '146Tb'), (146, 8.146, '146Dy'), (147, 8.132, '147Cs'), (147, 8.223, '147La'), (147, 8.253, '147Ce'), (147, 8.271, '147Pr'), (147, 8.284, '147Nd'), (147, 8.284, '147Pm'), (147, 8.281, '147Sm'), (147, 8.264, '147Eu'), (147, 8.243, '147Gd'), (147, 8.207, '147Tb'), (147, 8.157, '147Dy'), (147, 8.095, '147Ho'), (148, 8.1, '148Cs'), (148, 8.167, '148Ba'), (148, 8.197, '148La'), (148, 8.24, '148Ce'), (148, 8.25, '148Pr'), (148, 8.277, '148Nd'), (148, 8.268, '148Pm'), (148, 8.28, '148Sm'), (148, 8.254, '148Eu'), (148, 8.248, '148Gd'), (148, 8.204, '148Tb'), (148, 8.181, '148Dy'), (148, 8.109, '148Ho'), (149, 8.214, '149Ce'), (149, 8.238, '149Pr'), (149, 8.255, '149Nd'), (149, 8.262, '149Pm'), (149, 8.264, '149Sm'), (149, 8.254, '149Eu'), (149, 8.24, '149Gd'), (149, 8.21, '149Tb'), (149, 8.179, '149Dy'), (149, 8.134, '149Ho'), (149, 8.075, '149Er'), (150, 8.201, '150Ce'), (150, 8.219, '150Pr'), (150, 8.25, '150Nd'), (150, 8.244, '150Pm'), (150, 8.262, '150Sm'), (150, 8.241, '150Eu'), (150, 8.243, '150Gd'), (150, 8.206, '150Tb'), (150, 8.189, '150Dy'), (150, 8.135, '150Ho'), (150, 8.102, '150Er'), (151, 8.178, '151Ce'), (151, 8.208, '151Pr'), (151, 8.23, '151Nd'), (151, 8.241, '151Pm'), (151, 8.244, '151Sm'), (151, 8.239, '151Eu'), (151, 8.231, '151Gd'), (151, 8.209, '151Tb'), (151, 8.185, '151Dy'), (151, 8.146, '151Ho'), (151, 8.105, '151Er'), (151, 8.05, '151Tm'), (151, 7.984, '151Yb'), (152, 8.187, '152Pr'), (152, 8.224, '152Nd'), (152, 8.226, '152Pm'), (152, 8.244, '152Sm'), (152, 8.227, '152Eu'), (152, 8.233, 
'152Gd'), (152, 8.202, '152Tb'), (152, 8.193, '152Dy'), (152, 8.145, '152Ho'), (152, 8.119, '152Er'), (152, 8.057, '152Tm'), (152, 8.016, '152Yb'), (153, 8.172, '153Pr'), (153, 8.205, '153Nd'), (153, 8.221, '153Pm'), (153, 8.229, '153Sm'), (153, 8.229, '153Eu'), (153, 8.22, '153Gd'), (153, 8.205, '153Tb'), (153, 8.186, '153Dy'), (153, 8.154, '153Ho'), (153, 8.119, '153Er'), (153, 8.072, '153Tm'), (153, 7.959, '153Lu'), (154, 8.15, '154Pr'), (154, 8.193, '154Nd'), (154, 8.206, '154Pm'), (154, 8.227, '154Sm'), (154, 8.217, '154Eu'), (154, 8.225, '154Gd'), (154, 8.197, '154Tb'), (154, 8.193, '154Dy'), (154, 8.151, '154Ho'), (154, 8.132, '154Er'), (154, 8.074, '154Tm'), (154, 8.04, '154Yb'), (155, 8.195, '155Pm'), (155, 8.211, '155Sm'), (155, 8.217, '155Eu'), (155, 8.213, '155Gd'), (155, 8.203, '155Tb'), (155, 8.184, '155Dy'), (155, 8.159, '155Ho'), (155, 8.129, '155Er'), (155, 8.088, '155Tm'), (155, 8.044, '155Yb'), (155, 7.987, '155Lu'), (156, 8.158, '156Nd'), (156, 8.177, '156Pm'), (156, 8.205, '156Sm'), (156, 8.205, '156Eu'), (156, 8.215, '156Gd'), (156, 8.195, '156Tb'), (156, 8.192, '156Dy'), (156, 8.154, '156Ho'), (156, 8.142, '156Er'), (156, 8.09, '156Tm'), (156, 8.062, '156Yb'), (156, 7.996, '156Lu'), (156, 7.953, '156Hf'), (157, 8.165, '157Pm'), (157, 8.187, '157Sm'), (157, 8.2, '157Eu'), (157, 8.204, '157Gd'), (157, 8.198, '157Tb'), (157, 8.185, '157Dy'), (157, 8.163, '157Ho'), (157, 8.136, '157Er'), (157, 8.101, '157Tm'), (157, 8.063, '157Yb'), (157, 8.014, '157Lu'), (157, 7.896, '157Ta'), (158, 8.143, '158Pm'), (158, 8.177, '158Sm'), (158, 8.185, '158Eu'), (158, 8.202, '158Gd'), (158, 8.189, '158Tb'), (158, 8.19, '158Dy'), (158, 8.158, '158Ho'), (158, 8.148, '158Er'), (158, 8.101, '158Tm'), (158, 8.079, '158Yb'), (158, 8.019, '158Lu'), (158, 7.981, '158Hf'), (159, 8.158, '159Sm'), (159, 8.177, '159Eu'), (159, 8.188, '159Gd'), (159, 8.189, '159Tb'), (159, 8.182, '159Dy'), (159, 8.165, '159Ho'), (159, 8.143, '159Er'), (159, 8.113, '159Tm'), (159, 8.078, '159Yb'), (159, 8.035, '159Lu'), (159, 7.987, '159Hf'), (159, 7.929, '159Ta'), (160, 8.183, '160Gd'), (160, 8.178, '160Tb'), (160, 8.184, '160Dy'), (160, 8.159, '160Ho'), (160, 8.152, '160Er'), (160, 8.111, '160Tm'), (160, 8.093, '160Yb'), (160, 8.038, '160Lu'), (160, 8.006, '160Hf'), (160, 7.939, '160Ta'), (160, 7.893, '160W'), (161, 8.167, '161Gd'), (161, 8.175, '161Tb'), (161, 8.173, '161Dy'), (161, 8.163, '161Ho'), (161, 8.146, '161Er'), (161, 8.12, '161Tm'), (161, 8.09, '161Yb'), (161, 8.053, '161Lu'), (161, 8.009, '161Hf'), (161, 7.837, '161Re'), (162, 8.159, '162Gd'), (162, 8.163, '162Tb'), (162, 8.173, '162Dy'), (162, 8.155, '162Ho'), (162, 8.152, '162Er'), (162, 8.118, '162Tm'), (162, 8.103, '162Yb'), (162, 8.055, '162Lu'), (162, 8.027, '162Hf'), (162, 7.964, '162Ta'), (162, 7.924, '162W'), (163, 8.156, '163Tb'), (163, 8.162, '163Dy'), (163, 8.157, '163Ho'), (163, 8.145, '163Er'), (163, 8.125, '163Tm'), (163, 8.099, '163Yb'), (163, 8.067, '163Lu'), (163, 8.028, '163Hf'), (163, 7.982, '163Ta'), (163, 7.93, '163W'), (163, 7.871, '163Re'), (164, 8.14, '164Tb'), (164, 8.159, '164Dy'), (164, 8.148, '164Ho'), (164, 8.149, '164Er'), (164, 8.12, '164Tm'), (164, 8.109, '164Yb'), (164, 8.066, '164Lu'), (164, 8.044, '164Hf'), (164, 7.987, '164Ta'), (164, 7.951, '164W'), (164, 7.834, '164Os'), (165, 8.144, '165Dy'), (165, 8.147, '165Ho'), (165, 8.14, '165Er'), (165, 8.126, '165Tm'), (165, 8.105, '165Yb'), (165, 8.077, '165Lu'), (165, 8.043, '165Hf'), (165, 8.003, '165Ta'), (165, 7.956, '165W'), (165, 7.902, '165Re'), (166, 8.113, 
'166Tb'), (166, 8.137, '166Dy'), (166, 8.136, '166Ho'), (166, 8.142, '166Er'), (166, 8.119, '166Tm'), (166, 8.112, '166Yb'), (166, 8.074, '166Lu'), (166, 8.056, '166Hf'), (166, 8.005, '166Ta'), (166, 7.975, '166W'), (166, 7.866, '166Os'), (167, 8.121, '167Dy'), (167, 8.13, '167Ho'), (167, 8.132, '167Er'), (167, 8.123, '167Tm'), (167, 8.106, '167Yb'), (167, 8.083, '167Lu'), (167, 8.054, '167Hf'), (167, 8.019, '167Ta'), (167, 7.977, '167W'), (167, 7.874, '167Os'), (167, 7.813, '167Ir'), (168, 8.113, '168Dy'), (168, 8.117, '168Ho'), (168, 8.13, '168Er'), (168, 8.115, '168Tm'), (168, 8.112, '168Yb'), (168, 8.08, '168Lu'), (168, 8.066, '168Hf'), (168, 8.019, '168Ta'), (168, 7.994, '168W'), (168, 7.935, '168Re'), (168, 7.896, '168Os'), (168, 7.774, '168Pt'), (169, 8.095, '169Dy'), (169, 8.109, '169Ho'), (169, 8.117, '169Er'), (169, 8.115, '169Tm'), (169, 8.104, '169Yb'), (169, 8.086, '169Lu'), (169, 8.062, '169Hf'), (169, 8.031, '169Ta'), (169, 7.995, '169W'), (169, 7.951, '169Re'), (169, 7.901, '169Os'), (169, 7.846, '169Ir'), (170, 8.094, '170Ho'), (170, 8.112, '170Er'), (170, 8.106, '170Tm'), (170, 8.107, '170Yb'), (170, 8.082, '170Lu'), (170, 8.071, '170Hf'), (170, 8.03, '170Ta'), (170, 8.009, '170W'), (170, 7.955, '170Re'), (170, 7.921, '170Os'), (170, 7.808, '170Pt'), (171, 8.084, '171Ho'), (171, 8.098, '171Er'), (171, 8.102, '171Tm'), (171, 8.098, '171Yb'), (171, 8.085, '171Lu'), (171, 8.066, '171Hf'), (171, 8.04, '171Ta'), (171, 8.008, '171W'), (171, 7.969, '171Re'), (171, 7.924, '171Os'), (171, 7.874, '171Ir'), (171, 7.817, '171Pt'), (171, 7.754, '171Au'), (172, 8.09, '172Er'), (172, 8.091, '172Tm'), (172, 8.097, '172Yb'), (172, 8.078, '172Lu'), (172, 8.072, '172Hf'), (172, 8.038, '172Ta'), (172, 8.02, '172W'), (172, 7.972, '172Re'), (172, 7.942, '172Os'), (172, 7.839, '172Pt'), (172, 7.714, '172Hg'), (173, 8.084, '173Tm'), (173, 8.087, '173Yb'), (173, 8.079, '173Lu'), (173, 8.066, '173Hf'), (173, 8.044, '173Ta'), (173, 8.018, '173W'), (173, 7.984, '173Re'), (173, 7.944, '173Os'), (173, 7.898, '173Ir'), (173, 7.845, '173Pt'), (173, 7.788, '173Au'), (174, 8.071, '174Tm'), (174, 8.084, '174Yb'), (174, 8.071, '174Lu'), (174, 8.069, '174Hf'), (174, 8.04, '174Ta'), (174, 8.027, '174W'), (174, 7.985, '174Re'), (174, 7.959, '174Os'), (174, 7.903, '174Ir'), (174, 7.866, '174Pt'), (174, 7.75, '174Hg'), (175, 8.062, '175Tm'), (175, 8.071, '175Yb'), (175, 8.069, '175Lu'), (175, 8.061, '175Hf'), (175, 8.044, '175Ta'), (175, 8.024, '175W'), (175, 7.995, '175Re'), (175, 7.961, '175Os'), (175, 7.918, '175Ir'), (175, 7.869, '175Pt'), (175, 7.818, '175Au'), (175, 7.759, '175Hg'), (176, 8.045, '176Tm'), (176, 8.064, '176Yb'), (176, 8.059, '176Lu'), (176, 8.061, '176Hf'), (176, 8.039, '176Ta'), (176, 8.03, '176W'), (176, 7.994, '176Re'), (176, 7.973, '176Os'), (176, 7.921, '176Ir'), (176, 7.889, '176Pt'), (176, 7.783, '176Hg'), (177, 8.05, '177Yb'), (177, 8.053, '177Lu'), (177, 8.052, '177Hf'), (177, 8.041, '177Ta'), (177, 8.025, '177W'), (177, 8.001, '177Re'), (177, 7.972, '177Os'), (177, 7.935, '177Ir'), (177, 7.892, '177Pt'), (177, 7.844, '177Au'), (177, 7.79, '177Hg'), (177, 7.732, '177Tl'), (178, 8.043, '178Yb'), (178, 8.042, '178Lu'), (178, 8.049, '178Hf'), (178, 8.034, '178Ta'), (178, 8.029, '178W'), (178, 7.998, '178Re'), (178, 7.982, '178Os'), (178, 7.937, '178Ir'), (178, 7.908, '178Pt'), (178, 7.85, '178Au'), (178, 7.811, '178Hg'), (178, 7.691, '178Pb'), (179, 8.035, '179Lu'), (179, 8.039, '179Hf'), (179, 8.034, '179Ta'), (179, 8.023, '179W'), (179, 8.004, '179Re'), (179, 7.979, '179Os'), (179, 
7.948, '179Ir'), (179, 7.911, '179Pt'), (179, 7.865, '179Au'), (179, 7.816, '179Hg'), (179, 7.764, '179Tl'), (180, 8.022, '180Lu'), (180, 8.035, '180Hf'), (180, 8.026, '180Ta'), (180, 8.025, '180W'), (180, 8.0, '180Re'), (180, 7.987, '180Os'), (180, 7.948, '180Ir'), (180, 7.924, '180Pt'), (180, 7.87, '180Au'), (180, 7.836, '180Hg'), (180, 7.726, '180Pb'), (181, 8.022, '181Hf'), (181, 8.023, '181Ta'), (181, 8.018, '181W'), (181, 8.004, '181Re'), (181, 7.983, '181Os'), (181, 7.957, '181Ir'), (181, 7.924, '181Pt'), (181, 7.884, '181Au'), (181, 7.84, '181Hg'), (181, 7.792, '181Tl'), (181, 7.734, '181Pb'), (182, 8.015, '182Hf'), (182, 8.013, '182Ta'), (182, 8.018, '182W'), (182, 7.999, '182Re'), (182, 7.99, '182Os'), (182, 7.955, '182Ir'), (182, 7.935, '182Pt'), (182, 7.887, '182Au'), (182, 7.857, '182Hg'), (182, 7.796, '182Tl'), (182, 7.756, '182Pb'), (183, 8.0, '183Hf'), (183, 8.007, '183Ta'), (183, 8.008, '183W'), (183, 8.001, '183Re'), (183, 7.985, '183Os'), (183, 7.962, '183Ir'), (183, 7.933, '183Pt'), (183, 7.899, '183Au'), (183, 7.859, '183Hg'), (183, 7.816, '183Tl'), (183, 7.762, '183Pb'), (184, 7.991, '184Hf'), (184, 7.994, '184Ta'), (184, 8.005, '184W'), (184, 7.993, '184Re'), (184, 7.989, '184Os'), (184, 7.959, '184Ir'), (184, 7.943, '184Pt'), (184, 7.9, '184Au'), (184, 7.874, '184Hg'), (184, 7.819, '184Tl'), (184, 7.783, '184Pb'), (185, 7.986, '185Ta'), (185, 7.993, '185W'), (185, 7.991, '185Re'), (185, 7.981, '185Os'), (185, 7.964, '185Ir'), (185, 7.94, '185Pt'), (185, 7.909, '185Au'), (185, 7.875, '185Hg'), (185, 7.836, '185Tl'), (185, 7.787, '185Pb'), (186, 7.972, '186Ta'), (186, 7.989, '186W'), (186, 7.981, '186Re'), (186, 7.983, '186Os'), (186, 7.958, '186Ir'), (186, 7.947, '186Pt'), (186, 7.91, '186Au'), (186, 7.888, '186Hg'), (186, 7.839, '186Tl'), (186, 7.805, '186Pb'), (186, 7.739, '186Bi'), (187, 7.975, '187W'), (187, 7.978, '187Re'), (187, 7.974, '187Os'), (187, 7.962, '187Ir'), (187, 7.941, '187Pt'), (187, 7.917, '187Au'), (187, 7.887, '187Hg'), (187, 7.852, '187Tl'), (187, 7.808, '187Pb'), (187, 7.758, '187Bi'), (188, 7.969, '188W'), (188, 7.967, '188Re'), (188, 7.974, '188Os'), (188, 7.955, '188Ir'), (188, 7.948, '188Pt'), (188, 7.914, '188Au'), (188, 7.899, '188Hg'), (188, 7.853, '188Tl'), (188, 7.825, '188Pb'), (188, 7.764, '188Bi'), (188, 7.725, '188Po'), (189, 7.953, '189W'), (189, 7.962, '189Re'), (189, 7.963, '189Os'), (189, 7.956, '189Ir'), (189, 7.941, '189Pt'), (189, 7.922, '189Au'), (189, 7.897, '189Hg'), (189, 7.866, '189Tl'), (189, 7.826, '189Pb'), (189, 7.781, '189Bi'), (189, 7.731, '189Po'), (190, 7.947, '190W'), (190, 7.95, '190Re'), (190, 7.962, '190Os'), (190, 7.948, '190Ir'), (190, 7.947, '190Pt'), (190, 7.919, '190Au'), (190, 7.907, '190Hg'), (190, 7.866, '190Tl'), (190, 7.841, '190Pb'), (190, 7.787, '190Bi'), (190, 7.749, '190Po'), (191, 7.944, '191Re'), (191, 7.951, '191Os'), (191, 7.948, '191Ir'), (191, 7.939, '191Pt'), (191, 7.925, '191Au'), (191, 7.904, '191Hg'), (191, 7.877, '191Tl'), (191, 7.841, '191Pb'), (191, 7.801, '191Bi'), (191, 7.754, '191Po'), (192, 7.949, '192Os'), (192, 7.939, '192Ir'), (192, 7.943, '192Pt'), (192, 7.92, '192Au'), (192, 7.912, '192Hg'), (192, 7.876, '192Tl'), (192, 7.855, '192Pb'), (192, 7.804, '192Bi'), (192, 7.771, '192Po'), (193, 7.936, '193Os'), (193, 7.938, '193Ir'), (193, 7.934, '193Pt'), (193, 7.924, '193Au'), (193, 7.908, '193Hg'), (193, 7.885, '193Tl'), (193, 7.854, '193Pb'), (193, 7.817, '193Bi'), (193, 7.774, '193Po'), (193, 7.728, '193At'), (194, 7.932, '194Os'), (194, 7.928, '194Ir'), (194, 7.936, 
'194Pt'), (194, 7.919, '194Au'), (194, 7.915, '194Hg'), (194, 7.883, '194Tl'), (194, 7.865, '194Pb'), (194, 7.819, '194Bi'), (194, 7.789, '194Po'), (194, 7.735, '194At'), (195, 7.919, '195Os'), (195, 7.925, '195Ir'), (195, 7.927, '195Pt'), (195, 7.921, '195Au'), (195, 7.909, '195Hg'), (195, 7.891, '195Tl'), (195, 7.864, '195Pb'), (195, 7.831, '195Bi'), (195, 7.791, '195Po'), (195, 7.748, '195At'), (195, 7.7, '195Rn'), (196, 7.912, '196Os'), (196, 7.914, '196Ir'), (196, 7.927, '196Pt'), (196, 7.915, '196Au'), (196, 7.914, '196Hg'), (196, 7.888, '196Tl'), (196, 7.873, '196Pb'), (196, 7.832, '196Bi'), (196, 7.805, '196Po'), (196, 7.752, '196At'), (196, 7.718, '196Rn'), (197, 7.909, '197Ir'), (197, 7.916, '197Pt'), (197, 7.916, '197Au'), (197, 7.909, '197Hg'), (197, 7.894, '197Tl'), (197, 7.871, '197Pb'), (197, 7.842, '197Bi'), (197, 7.806, '197Po'), (197, 7.766, '197At'), (197, 7.722, '197Rn'), (198, 7.914, '198Pt'), (198, 7.909, '198Au'), (198, 7.912, '198Hg'), (198, 7.89, '198Tl'), (198, 7.879, '198Pb'), (198, 7.841, '198Bi'), (198, 7.818, '198Po'), (198, 7.769, '198At'), (198, 7.738, '198Rn'), (199, 7.891, '199Ir'), (199, 7.902, '199Pt'), (199, 7.907, '199Au'), (199, 7.905, '199Hg'), (199, 7.894, '199Tl'), (199, 7.876, '199Pb'), (199, 7.85, '199Bi'), (199, 7.818, '199Po'), (199, 7.781, '199At'), (199, 7.741, '199Rn'), (199, 7.695, '199Fr'), (200, 7.899, '200Pt'), (200, 7.899, '200Au'), (200, 7.906, '200Hg'), (200, 7.89, '200Tl'), (200, 7.882, '200Pb'), (200, 7.848, '200Bi'), (200, 7.828, '200Po'), (200, 7.784, '200At'), (200, 7.755, '200Rn'), (200, 7.7, '200Fr'), (201, 7.886, '201Pt'), (201, 7.895, '201Au'), (201, 7.898, '201Hg'), (201, 7.891, '201Tl'), (201, 7.878, '201Pb'), (201, 7.855, '201Bi'), (201, 7.827, '201Po'), (201, 7.794, '201At'), (201, 7.757, '201Rn'), (201, 7.715, '201Fr'), (202, 7.886, '202Au'), (202, 7.897, '202Hg'), (202, 7.886, '202Tl'), (202, 7.882, '202Pb'), (202, 7.853, '202Bi'), (202, 7.835, '202Po'), (202, 7.795, '202At'), (202, 7.769, '202Rn'), (202, 7.719, '202Fr'), (202, 7.685, '202Ra'), (203, 7.881, '203Au'), (203, 7.887, '203Hg'), (203, 7.886, '203Tl'), (203, 7.877, '203Pb'), (203, 7.858, '203Bi'), (203, 7.833, '203Po'), (203, 7.804, '203At'), (203, 7.77, '203Rn'), (203, 7.732, '203Fr'), (203, 7.69, '203Ra'), (204, 7.886, '204Hg'), (204, 7.88, '204Tl'), (204, 7.88, '204Pb'), (204, 7.854, '204Bi'), (204, 7.839, '204Po'), (204, 7.804, '204At'), (204, 7.781, '204Rn'), (204, 7.735, '204Fr'), (204, 7.704, '204Ra'), (205, 7.875, '205Hg'), (205, 7.878, '205Tl'), (205, 7.874, '205Pb'), (205, 7.857, '205Bi'), (205, 7.836, '205Po'), (205, 7.81, '205At'), (205, 7.781, '205Rn'), (205, 7.746, '205Fr'), (205, 7.707, '205Ra'), (206, 7.869, '206Hg'), (206, 7.872, '206Tl'), (206, 7.875, '206Pb'), (206, 7.853, '206Bi'), (206, 7.841, '206Po'), (206, 7.809, '206At'), (206, 7.789, '206Rn'), (206, 7.747, '206Fr'), (206, 7.72, '206Ra'), (206, 7.668, '206Ac'), (207, 7.847, '207Hg'), (207, 7.867, '207Tl'), (207, 7.87, '207Pb'), (207, 7.855, '207Bi'), (207, 7.837, '207Po'), (207, 7.814, '207At'), (207, 7.788, '207Rn'), (207, 7.756, '207Fr'), (207, 7.722, '207Ra'), (207, 7.681, '207Ac'), (208, 7.847, '208Tl'), (208, 7.867, '208Pb'), (208, 7.85, '208Bi'), (208, 7.839, '208Po'), (208, 7.812, '208At'), (208, 7.794, '208Rn'), (208, 7.757, '208Fr'), (208, 7.732, '208Ra'), (208, 7.685, '208Ac'), (209, 7.833, '209Tl'), (209, 7.849, '209Pb'), (209, 7.848, '209Bi'), (209, 7.835, '209Po'), (209, 7.815, '209At'), (209, 7.792, '209Rn'), (209, 7.764, '209Fr'), (209, 7.733, '209Ra'), (209, 7.696, 
'209Ac'), (209, 7.655, '209Th'), (210, 7.814, '210Tl'), (210, 7.836, '210Pb'), (210, 7.833, '210Bi'), (210, 7.834, '210Po'), (210, 7.812, '210At'), (210, 7.797, '210Rn'), (210, 7.763, '210Fr'), (210, 7.741, '210Ra'), (210, 7.698, '210Ac'), (210, 7.669, '210Th'), (211, 7.817, '211Pb'), (211, 7.82, '211Bi'), (211, 7.819, '211Po'), (211, 7.811, '211At'), (211, 7.794, '211Rn'), (211, 7.768, '211Fr'), (211, 7.741, '211Ra'), (211, 7.707, '211Ac'), (211, 7.672, '211Th'), (212, 7.804, '212Pb'), (212, 7.803, '212Bi'), (212, 7.81, '212Po'), (212, 7.798, '212At'), (212, 7.795, '212Rn'), (212, 7.767, '212Fr'), (212, 7.747, '212Ra'), (212, 7.709, '212Ac'), (212, 7.682, '212Th'), (212, 7.634, '212Pa'), (213, 7.785, '213Pb'), (213, 7.791, '213Bi'), (213, 7.794, '213Po'), (213, 7.79, '213At'), (213, 7.782, '213Rn'), (213, 7.768, '213Fr'), (213, 7.746, '213Ra'), (213, 7.716, '213Ac'), (213, 7.684, '213Th'), (213, 7.645, '213Pa'), (214, 7.772, '214Pb'), (214, 7.773, '214Bi'), (214, 7.785, '214Po'), (214, 7.776, '214At'), (214, 7.777, '214Rn'), (214, 7.758, '214Fr'), (214, 7.749, '214Ra'), (214, 7.716, '214Ac'), (214, 7.692, '214Th'), (214, 7.648, '214Pa'), (215, 7.762, '215Bi'), (215, 7.768, '215Po'), (215, 7.768, '215At'), (215, 7.764, '215Rn'), (215, 7.753, '215Fr'), (215, 7.739, '215Ra'), (215, 7.72, '215Ac'), (215, 7.693, '215Th'), (215, 7.657, '215Pa'), (216, 7.744, '216Bi'), (216, 7.759, '216Po'), (216, 7.753, '216At'), (216, 7.759, '216Rn'), (216, 7.742, '216Fr'), (216, 7.737, '216Ra'), (216, 7.711, '216Ac'), (216, 7.698, '216Th'), (216, 7.659, '216Pa'), (217, 7.741, '217Po'), (217, 7.745, '217At'), (217, 7.744, '217Rn'), (217, 7.738, '217Fr'), (217, 7.727, '217Ra'), (217, 7.71, '217Ac'), (217, 7.691, '217Th'), (217, 7.665, '217Pa'), (217, 7.635, '217U'), (218, 7.732, '218Po'), (218, 7.729, '218At'), (218, 7.739, '218Rn'), (218, 7.727, '218Fr'), (218, 7.725, '218Ra'), (218, 7.702, '218Ac'), (218, 7.692, '218Th'), (218, 7.659, '218Pa'), (218, 7.641, '218U'), (219, 7.72, '219At'), (219, 7.724, '219Rn'), (219, 7.721, '219Fr'), (219, 7.714, '219Ra'), (219, 7.701, '219Ac'), (219, 7.684, '219Th'), (219, 7.662, '219Pa'), (219, 7.637, '219U'), (220, 7.704, '220At'), (220, 7.717, '220Rn'), (220, 7.71, '220Fr'), (220, 7.712, '220Ra'), (220, 7.692, '220Ac'), (220, 7.685, '220Th'), (220, 7.655, '220Pa'), (221, 7.701, '221Rn'), (221, 7.703, '221Fr'), (221, 7.701, '221Ra'), (221, 7.691, '221Ac'), (221, 7.676, '221Th'), (221, 7.657, '221Pa'), (222, 7.694, '222Rn'), (222, 7.691, '222Fr'), (222, 7.697, '222Ra'), (222, 7.683, '222Ac'), (222, 7.677, '222Th'), (223, 7.684, '223Fr'), (223, 7.685, '223Ra'), (223, 7.679, '223Ac'), (223, 7.669, '223Th'), (223, 7.652, '223Pa'), (223, 7.633, '223U'), (224, 7.671, '224Fr'), (224, 7.68, '224Ra'), (224, 7.67, '224Ac'), (224, 7.668, '224Th'), (224, 7.647, '224Pa'), (224, 7.635, '224U'), (225, 7.663, '225Fr'), (225, 7.668, '225Ra'), (225, 7.666, '225Ac'), (225, 7.659, '225Th'), (225, 7.647, '225Pa'), (225, 7.63, '225U'), (225, 7.608, '225Np'), (226, 7.649, '226Fr'), (226, 7.662, '226Ra'), (226, 7.656, '226Ac'), (226, 7.657, '226Th'), (226, 7.641, '226Pa'), (226, 7.632, '226U'), (227, 7.641, '227Fr'), (227, 7.648, '227Ra'), (227, 7.651, '227Ac'), (227, 7.647, '227Th'), (227, 7.639, '227Pa'), (227, 7.626, '227U'), (227, 7.607, '227Np'), (228, 7.642, '228Ra'), (228, 7.639, '228Ac'), (228, 7.645, '228Th'), (228, 7.632, '228Pa'), (228, 7.627, '228U'), (228, 7.59, '228Pu'), (229, 7.618, '229Fr'), (229, 7.628, '229Ra'), (229, 7.633, '229Ac'), (229, 7.635, '229Th'), (229, 7.63, '229Pa'), 
(229, 7.621, '229U'), (229, 7.606, '229Np'), (229, 7.587, '229Pu'), (230, 7.622, '230Ra'), (230, 7.622, '230Ac'), (230, 7.631, '230Th'), (230, 7.622, '230Pa'), (230, 7.621, '230U'), (230, 7.602, '230Np'), (230, 7.591, '230Pu'), (231, 7.614, '231Ac'), (231, 7.62, '231Th'), (231, 7.618, '231Pa'), (231, 7.613, '231U'), (231, 7.602, '231Np'), (231, 7.587, '231Pu'), (232, 7.602, '232Ac'), (232, 7.615, '232Th'), (232, 7.61, '232Pa'), (232, 7.612, '232U'), (232, 7.589, '232Pu'), (233, 7.603, '233Th'), (233, 7.605, '233Pa'), (233, 7.604, '233U'), (233, 7.596, '233Np'), (233, 7.584, '233Pu'), (233, 7.546, '233Cm'), (234, 7.597, '234Th'), (234, 7.595, '234Pa'), (234, 7.601, '234U'), (234, 7.59, '234Np'), (234, 7.585, '234Pu'), (234, 7.551, '234Cm'), (235, 7.583, '235Th'), (235, 7.588, '235Pa'), (235, 7.591, '235U'), (235, 7.587, '235Np'), (235, 7.579, '235Pu'), (236, 7.577, '236Pa'), (236, 7.586, '236U'), (236, 7.579, '236Np'), (236, 7.578, '236Pu'), (237, 7.57, '237Pa'), (237, 7.576, '237U'), (237, 7.575, '237Np'), (237, 7.571, '237Pu'), (238, 7.559, '238Pa'), (238, 7.57, '238U'), (238, 7.566, '238Np'), (238, 7.568, '238Pu'), (238, 7.556, '238Am'), (238, 7.548, '238Cm'), (239, 7.559, '239U'), (239, 7.561, '239Np'), (239, 7.56, '239Pu'), (239, 7.554, '239Am'), (240, 7.552, '240U'), (240, 7.55, '240Np'), (240, 7.556, '240Pu'), (240, 7.547, '240Am'), (240, 7.543, '240Cm'), (241, 7.544, '241Np'), (241, 7.546, '241Pu'), (241, 7.543, '241Am'), (241, 7.537, '241Cm'), (242, 7.533, '242Np'), (242, 7.541, '242Pu'), (242, 7.535, '242Am'), (242, 7.535, '242Cm'), (242, 7.509, '242Cf'), (243, 7.531, '243Pu'), (243, 7.53, '243Am'), (243, 7.527, '243Cm'), (243, 7.518, '243Bk'), (244, 7.525, '244Pu'), (244, 7.521, '244Am'), (244, 7.524, '244Cm'), (244, 7.511, '244Bk'), (244, 7.505, '244Cf'), (245, 7.514, '245Pu'), (245, 7.515, '245Am'), (245, 7.516, '245Cm'), (245, 7.509, '245Bk'), (245, 7.5, '245Cf'), (246, 7.507, '246Pu'), (246, 7.505, '246Am'), (246, 7.511, '246Cm'), (246, 7.503, '246Bk'), (246, 7.499, '246Cf'), (246, 7.468, '246Fm'), (247, 7.502, '247Cm'), (247, 7.499, '247Bk'), (247, 7.493, '247Cf'), (248, 7.497, '248Cm'), (248, 7.491, '248Cf'), (248, 7.466, '248Fm'), (249, 7.486, '249Cm'), (249, 7.486, '249Bk'), (249, 7.483, '249Cf'), (250, 7.479, '250Cm'), (250, 7.476, '250Bk'), (250, 7.48, '250Cf'), (250, 7.462, '250Fm'), (251, 7.467, '251Cm'), (251, 7.469, '251Bk'), (251, 7.471, '251Cf'), (251, 7.466, '251Es'), (251, 7.457, '251Fm'), (252, 7.465, '252Cf'), (252, 7.457, '252Es'), (252, 7.456, '252Fm'), (252, 7.426, '252No'), (253, 7.455, '253Cf'), (253, 7.453, '253Es'), (253, 7.448, '253Fm'), (254, 7.449, '254Cf'), (254, 7.444, '254Es'), (254, 7.445, '254Fm'), (254, 7.424, '254No'), (255, 7.438, '255Es'), (255, 7.436, '255Fm'), (255, 7.429, '255Md'), (255, 7.418, '255No'), (256, 7.432, '256Fm'), (256, 7.42, '256Md'), (256, 7.417, '256No'), (256, 7.385, '256Rf'), (257, 7.422, '257Fm'), (257, 7.418, '257Md'), (257, 7.41, '257No'), (258, 7.41, '258Md'), (260, 7.342, '260Sg'), (261, 7.371, '261Rf'), (264, 7.298, '264Hs'), (265, 7.333, '265Sg')]
###Markdown
The zip() function combined with the * unpacking operator turns the row tuples into column tuples, and we assign a separate variable to each of the resulting tuples
###Code
plotdata = zip(*data)
x,y,label = plotdata
###Output
_____no_output_____
###Markdown
Import numpy and matplotlib for plotting and use the scatter function
###Code
import numpy as np
import matplotlib.pyplot as plt
plt.figure(figsize=(16,8))
plt.scatter(x,y, alpha=0.5)
plt.show()
###Output
_____no_output_____
###Markdown
Add the title and the x/y axis labels
###Code
plt.figure(figsize=(16,8))
plt.scatter(x,y, alpha=0.5)
plt.ylabel("Binding energy per nucleon (MeV)", fontsize=16);
plt.xlabel("Number of nucleon(s)", fontsize=16)
plt.title("Binding energy curve", fontsize=16)
plt.show()
###Output
_____no_output_____
|
lessons/lesson04.ipynb
|
###Markdown
Lesson 4 - Conda, IPython, and Jupyter Notebooks Installing Miniconda and working with Conda_Conda is an open source package management system and environment management system for installing multiple versions of software packages and their dependencies and switching easily between them. It works on Linux, OS X and Windows, and was created for Python programs but can package and distribute any software._ -- First, install miniconda3: . By default, environments you create will use Python 3, but you can specify Python 2 if required.Then, create a conda environment. Let's make an environment called `python3` for this class that includes Python 3 and Jupyter. ```conda create -n python3 python=3 jupyter```To activate the environment:```source activate python3```To deactivate the environment:```source deactivate```To delete an environment:```conda env remove -n myenv```After you activate your environment, you can install additional packages to that environment using `conda install`:```conda install pandas```If the package isn't available from conda, try `pip install`:```pip install tabview```List the environments on your system:```conda env list```List the packages in your current environment:```conda list``` Python and IPython command-line interpreter(demonstration) Working with Jupyter notebooks and Python basics(demonstration with Python Crash Course) Python Crash CourseIn this tutorial we will cover some basic aspects of Python using IPython (Jupyter) notebooks. 1. Syntax2. Data types3. Loops and control structures4. numpy, scipy, math Import modules to use
###Code
import math
import numpy as np
###Output
_____no_output_____
###Markdown
Syntax
###Code
2 + 2
x = 5
y = 2
x * y
x ** y
print('Hello, world!')
print('%s raised to power of %s equals %s' % (x, y, x ** y))
###Output
5 raised to power of 2 equals 25
###Markdown
Data types Boolean 'True' and 'False' have special meaning in Python.
###Code
a = True
b = False
a == True
b == True
a or b
a and b
###Output
_____no_output_____
###Markdown
Numbers: integers and floats Numbers are pretty straightforward, especially in Python 3.
###Code
1 + 2
1.0 + 2.0
1 / 2
1.0 / 2.0
type(1)
type(1/2)
###Output
_____no_output_____
###Markdown
Strings The next four data types -- strings, lists, tuples, arrays -- are all sequences. Strings are sequences of characters.
###Code
s = 'Hello, world'
type(s)
s[0:4]
s + '!'
s
s = s + '!'
s
###Output
_____no_output_____
###Markdown
Lists Lists are _mutable_ sequences of anything.
###Code
l = [0, 1, 1, 2, 3, 5, 8]
m = [5, 2, 'a', 'xxx', True, [0, 1]]
l[0:3]
m[4]
m[4] = False
m[4:]
###Output
_____no_output_____
###Markdown
Tuples Tuples are immutable sequences of anything (similar to lists except you can't change them).
###Code
n = (3, 5, 6)
n[0]
n[0] = 2
###Output
_____no_output_____
###Markdown
Lesson 4 - Conda, IPython, and Jupyter Notebooks Readings* Geohackweek: [Introduction to Conda](https://geohackweek.github.io/Introductory/01-conda-tutorial/) Table of Contents* [Installation](install)* [IPython and Jupyter](ipython)* [Python Syntax](syntax)* [Data Types](types)* [Control Structures](control)* [Version](version) Installing Miniconda and working with Conda_Conda is an open source package management system and environment management system for installing multiple versions of software packages and their dependencies and switching easily between them. It works on Linux, OS X and Windows, and was created for Python programs but can package and distribute any software._ -- Install Miniconda3: . By default, environments you create will use Python 3, but you can specify Python 2 if required. To run Conda: Windows users, open the Anaconda Prompt (instead of PowerShell) from the Start menu and run `conda ...` commands. macOS and Linux users, open Terminal and run `conda ...` commands.Create a Conda environment. Let's make an environment called `python3` for this class that includes Python 3 and Jupyter. ```conda create -n python3 python=3 jupyter```To activate the environment (Windows: `conda activate`):```source activate python3```To deactivate the environment (Windows: `conda deactivate`):```source deactivate```To delete an environment:```conda env remove -n myenv```After you activate your environment, you can install additional packages to that environment using `conda install`:```conda install pandas```If the package isn't available from conda, try `pip install`:```pip install tabview```List the environments on your system:```conda env list```List the packages in your current environment:```conda list``` IPython and JupyterIPython offers enhanced interactive Python shells with support for data visualization, distributed and parallel computation, and the browser-based Jupyter notebook. Jupyter notebook provides support for code, text, mathematical expressions, inline plots, and other rich media. Python command-line interpreterTo open the Python interpreter, go to your terminal and type:```python``` IPython command-line interpreterTo open the IPython interpreter, go to your terminal and type:```ipython``` Jupyter (IPython) notebooksFor macOS and Linux users, to launch a Jupyter notebook, open Terminal and type:```source activate python3jupyter notebook```For Windows users, to launch a Jupyter notebook, open Anaconda Prompt and type:```conda activate python3jupyter notebook```Open up a new notebook, then check out **Help > User Interface Tour** and **Help > Keyboard Shortcuts**. See screenshots `jupyter_shortcuts_*` in `images`. When you are done exploring, delete the notebook `Untitled.ipynb` that you just created. Downloading Today's LessonAt the start of every class, we will follow a regular routine of navigating to our lessons folder, downloading the day's lesson, activating our Conda environment, and launching Jupyter notebook.For macOS and Linux users:1. Open Terminal2. `cd sio209/lessons`3. `curl -O LINKTOLESSON`4. `source activate python3`5. `jupyter notebook`For Windows users:1. Open Git Bash2. `cd sio209/lessons`3. `curl -O LINKTOLESSON`4. Open Anaconda Prompt5. `conda activate python3`6. `jupyter notebook`Windows users, see screenshot `jupyter_for_windows.png` in `images`. Python Crash CourseIn this tutorial we will cover some basic aspects of Python using IPython (Jupyter) notebooks. 1. Syntax2. Data types3. Loops and control structures4. numpy, scipy, math Syntax
###Code
2 + 2
x = 5
y = 2
x * y
x ** y
print('Hello, world!')
print('%s raised to power of %s equals %s' % (x, y, x ** y))
###Output
5 raised to power of 2 equals 25
###Markdown
Data Types Booleans 'True' and 'False' have special meaning in Python.
###Code
a = True
b = False
a == True
b == True
a or b
a and b
###Output
_____no_output_____
###Markdown
Numbers: integers and floats Numbers are pretty straightforward, especially in Python 3.
###Code
1 + 2
1.0 + 2.0
1 / 2
1.0 / 2.0
type(1)
type(1/2)
###Output
_____no_output_____
###Markdown
Strings The next four data types -- strings, lists, tuples, arrays -- are all sequences. Strings are sequences of characters.
###Code
s = 'Hello, world'
type(s)
s[0:4]
s + '!'
s
s = s + '!'
s
###Output
_____no_output_____
###Markdown
Lists Lists are _mutable_ sequences of anything.
###Code
l = [0, 1, 1, 2, 3, 5, 8]
m = [5, 2, 'a', 'xxx', True, [0, 1]]
l[0:3]
m[4]
m[4] = False
m[4:]
###Output
_____no_output_____
###Markdown
Tuples Tuples are immutable sequences of anything (similar to lists except you can't change them).
###Code
n = (3, 5, 6)
n[0]
n[0] = 2
###Output
_____no_output_____
###Markdown
Arrays (numpy)
###Code
# Import modules to use
import math
import numpy as np
mylist = [0, 2, 4]
np.array(mylist)
np.zeros(5)
np.arange(5)
np.arange(4, 10)
np.arange(0, 10, 2)
np.linspace(0, 10, 5)
np.linspace(0, 10, 11)
np.random.rand()
np.random.rand(5)
###Output
_____no_output_____
###Markdown
Sets Sets are unordered collections of unique objects.
###Code
s1 = {'a', 'b', 'c'}
s2 = {'a', 'd', 'e'}
s1 & s2
s1 | s2
s3 = set(l)
s4 = set(m[0:2])
s3 & s4
s3 | s4
s3 - s4
###Output
_____no_output_____
###Markdown
Dictionaries Dictionaries or 'dicts' are hash tables, where a key points to a value.
###Code
d = {'name': 'John Doe', 'age': 27, 'dob': '7/20/1989'}
d
d['name']
d['zip'] = 92039
###Output
_____no_output_____
###Markdown
Loops and Control Structures Boolean and comparison operations
###Code
x = 5
(x < 6) and (x > 4)
x != 4
5 in [3, 4, 5]
'ell' in 'Hello'
len('Hello') >= 5
###Output
_____no_output_____
###Markdown
if tests
###Code
if 'd' in 'abc':
print('Learn your alphabet.')
elif (2 + 2 == 5):
print('Sometimes yes.')
else:
print('Nothing is true.')
###Output
Nothing is true.
###Markdown
while loops
###Code
i = 0
while (i < 5):
print(i)
i += 1
i
###Output
_____no_output_____
###Markdown
for loops
###Code
for x in [0, 1, 2, 3, 4]:
print(x**2)
###Output
0
1
4
9
16
###Markdown
Determining Your Python Version Method 1: sys
###Code
import sys
sys.version
###Output
_____no_output_____
###Markdown
Method 2: platform
###Code
import platform
platform.python_version()
platform.version()
###Output
_____no_output_____
###Markdown
Method 3: version_information
###Code
# first install: pip install version_information
%reload_ext version_information
%version_information math, numpy, sys, platform
###Output
_____no_output_____
###Markdown
Arrays (numpy)
###Code
mylist = [0, 2, 4]
np.array(mylist)
np.zeros(5)
np.arange(5)
np.arange(4, 10)
np.arange(0, 10, 2)
np.linspace(0, 10, 5)
np.linspace(0, 10, 11)
np.random.rand()
np.random.rand(5)
###Output
_____no_output_____
###Markdown
Sets Sets are unordered collections of unique objects.
###Code
s1 = {'a', 'b', 'c'}
s2 = {'a', 'd', 'e'}
s1 & s2
s1 | s2
s3 = set(l)
s4 = set(m[0:2])
s3 & s4
s3 | s4
s3 - s4
###Output
_____no_output_____
###Markdown
Dictionaries Dictionaries or 'dicts' are hash tables, where a key points to a value.
###Code
d = {'name': 'John Doe', 'age': 27, 'dob': '7/20/1989'}
d
d['name']
d['zip'] = 92039
###Output
_____no_output_____
###Markdown
Loops and control structures Boolean and comparison operations
###Code
x = 5
(x < 6) and (x > 4)
x != 4
5 in [3, 4, 5]
'ell' in 'Hello'
len('Hello') >= 5
###Output
_____no_output_____
###Markdown
if tests
###Code
if 'd' in 'abc':
print('Learn your alphabet.')
elif (2 + 2 == 5):
print('Sometimes yes.')
else:
print('Nothing is true.')
###Output
Nothing is true.
###Markdown
while loops
###Code
i = 0
while (i < 5):
print(i)
i += 1
i
###Output
_____no_output_____
###Markdown
for loops
###Code
for x in [0, 1, 2, 3, 4]:
print(x**2)
###Output
0
1
4
9
16
###Markdown
Determining your Python version Method 1: sys
###Code
import sys
sys.version
###Output
_____no_output_____
###Markdown
Method 2: platform
###Code
import platform
platform.python_version()
platform.version()
###Output
_____no_output_____
###Markdown
Method 3: version_information
###Code
# first install: pip install version_information
%reload_ext version_information
%version_information math, numpy, sys, platform
###Output
_____no_output_____
|
Aprendizado Supervisionado.ipynb
|
###Markdown
Choose the learning method for regression: https://scikit-learn.org/stable/supervised_learning.html
###Code
# Create your own prediction model for the problem at hand.
# To do this, you must replace the "?" with the correct path/library
# First, import the model
from sklearn.? import ?
# Then create and train your model
model = ?()
model_fit = model.fit(X, y)
from sklearn.metrics import mean_absolute_error
print('Mean Absolute Error:', mean_absolute_error(model_fit.predict(X), y))
###Output
_____no_output_____
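###Markdown
One possible way to fill in the placeholders above -- LinearRegression is just an example choice here; any regressor from the linked scikit-learn page would work:
###Code
# Example completion of the exercise above (LinearRegression is only one possible choice)
from sklearn.linear_model import LinearRegression

model = LinearRegression()
model_fit = model.fit(X, y)
###Output
_____no_output_____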
###Markdown
Let's evaluate your model on new test instances
###Code
from sklearn.metrics import mean_absolute_error
print('Mean Absolute Error:',
mean_absolute_error(model_fit.predict(X_test), y_test))
###Output
_____no_output_____
###Markdown
Classification
###Code
# Load the dataset for the classification task (has the disease or not)
cancer_dataset = datasets.load_breast_cancer()
# Extract the features and the value to be predicted from the dataframe
train_percent = 0.7
train_index = int(len(cancer_dataset.data) * train_percent)
test_index = int(len(cancer_dataset.data) * (1 - train_percent))
X = cancer_dataset.data[:train_index]
y = cancer_dataset.target[:train_index]
X_test = cancer_dataset.data[-test_index:]
y_test = cancer_dataset.target[-test_index:]
###Output
_____no_output_____
###Markdown
Choose the learning method for classification: https://scikit-learn.org/stable/supervised_learning.html
###Code
# Create your own prediction model for the problem at hand.
# To do this, you must replace the "?" with the correct path/library
# First, import the model
from sklearn.? import ?
# Then create and train your model
model = ?()
model_fit = model.fit(X, y)
from sklearn.metrics import accuracy_score
print('Model Accuracy:', accuracy_score(model_fit.predict(X), y))
###Output
_____no_output_____
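###Markdown
One possible way to fill in the placeholders above -- LogisticRegression is just an example choice here; any classifier from the linked scikit-learn page would work:
###Code
# Example completion of the exercise above (LogisticRegression is only one possible choice)
from sklearn.linear_model import LogisticRegression

model = LogisticRegression(max_iter=5000)
model_fit = model.fit(X, y)
###Output
_____no_output_____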
###Markdown
Let's evaluate your model on new test instances
###Code
from sklearn.metrics import accuracy_score
print('Model Accuracy:', accuracy_score(model_fit.predict(X_test), y_test))
###Output
_____no_output_____
###Markdown
Regression
###Code
# Load the dataset for the regression task
house_prices_dataset = datasets.load_boston()
# Extract the features and the value to be predicted from the dataframe
train_percent = 0.7
train_index = int(len(house_prices_dataset.data) * train_percent)
test_index = int(len(house_prices_dataset.data) * (1 - train_percent))
X = house_prices_dataset.data[:train_index]
y = house_prices_dataset.target[:train_index]
X_test = house_prices_dataset.data[-test_index:]
y_test = house_prices_dataset.target[-test_index:]
###Output
_____no_output_____
|
docs/source/examples/tabular-data-rossmann/02-ETL-with-NVTabular.ipynb
|
###Markdown
NVTabular demo on Rossmann data - Feature Engineering & Preprocessing OverviewNVTabular is a feature engineering and preprocessing library for tabular data designed to quickly and easily manipulate terabyte scale datasets used to train deep learning based recommender systems. It provides a high level abstraction to simplify code and accelerates computation on the GPU using the RAPIDS cuDF library. Learning objectivesThis notebook demonstrates the steps for carrying out data preprocessing, transformation and loading with NVTabular on the Kaggle Rossmann [dataset](https://www.kaggle.com/c/rossmann-store-sales/overview). Rossmann operates over 3,000 drug stores in 7 European countries. Historical sales data for 1,115 Rossmann stores are provided. The task is to forecast the "Sales" column for the test set. The following example will illustrate how to use NVTabular to preprocess and feature engineer the data for further training deep learning models. We provide notebooks for training neural networks in [TensorFlow](https://github.com/NVIDIA/NVTabular/blob/main/examples/99-applying-to-other-tabular-data-problems-rossmann/03a-Training-with-TF.ipynb), [PyTorch](https://github.com/NVIDIA/NVTabular/blob/main/examples/99-applying-to-other-tabular-data-problems-rossmann/03b-Training-with-PyTorch.ipynb) and [FastAI](https://github.com/NVIDIA/NVTabular/blob/main/examples/99-applying-to-other-tabular-data-problems-rossmann/04-Training-with-FastAI.ipynb). We'll use a [dataset built by FastAI](https://github.com/fastai/course-v3/blob/master/nbs/dl1/lesson6-rossmann.ipynb) for solving the [Kaggle Rossmann Store Sales competition](https://www.kaggle.com/c/rossmann-store-sales). Some cuDF preprocessing is required to build the appropriate feature set, so make sure to run [01-Download-Convert.ipynb](https://github.com/NVIDIA/NVTabular/blob/main/examples/99-applying-to-other-tabular-data-problems-rossmann/01-Download-Convert.ipynb) first before going through this notebook.
###Code
import os
import json
import nvtabular as nvt
from nvtabular import ops
###Output
_____no_output_____
###Markdown
Preparing our dataset Let's start by defining some of the a priori information about our data, including its schema (what columns to use and what sorts of variables they represent), as well as the location of the files corresponding to some particular sampling from this schema. Note that throughout, I'll use UPPERCASE variables to represent this sort of a priori information that you might usually encode using command-line arguments or config files. We use the data schema to define our pipeline.
###Code
DATA_DIR = os.environ.get(
"OUTPUT_DATA_DIR", os.path.expanduser("~/nvt-examples/data/")
)
CATEGORICAL_COLUMNS = [
"Store",
"DayOfWeek",
"Year",
"Month",
"Day",
"StateHoliday",
"CompetitionMonthsOpen",
"Promo2Weeks",
"StoreType",
"Assortment",
"PromoInterval",
"CompetitionOpenSinceYear",
"Promo2SinceYear",
"State",
"Week",
"Events",
"Promo_fw",
"Promo_bw",
"StateHoliday_fw",
"StateHoliday_bw",
"SchoolHoliday_fw",
"SchoolHoliday_bw",
]
CONTINUOUS_COLUMNS = [
"CompetitionDistance",
"Max_TemperatureC",
"Mean_TemperatureC",
"Min_TemperatureC",
"Max_Humidity",
"Mean_Humidity",
"Min_Humidity",
"Max_Wind_SpeedKm_h",
"Mean_Wind_SpeedKm_h",
"CloudCover",
"trend",
"trend_DE",
"AfterStateHoliday",
"BeforeStateHoliday",
"Promo",
"SchoolHoliday",
]
LABEL_COLUMNS = ["Sales"]
COLUMNS = CATEGORICAL_COLUMNS + CONTINUOUS_COLUMNS + LABEL_COLUMNS
###Output
_____no_output_____
###Markdown
What files are available to train on in our data directory?
###Code
! ls $DATA_DIR
###Output
output.csv test.csv valid.csv
ross_pre test_inference_rossmann_data.csv workflow
rossmann_predictions.csv train.csv
###Markdown
`train.csv` and `valid.csv` seem like good candidates, let's use those.
###Code
TRAIN_PATH = os.path.join(DATA_DIR, "train.csv")
VALID_PATH = os.path.join(DATA_DIR, "valid.csv")
###Output
_____no_output_____
###Markdown
Defining our Data Pipeline The first step is to define the feature engineering and preprocessing pipeline. NVTabular has already implemented multiple calculations, called `ops`. An `op` can be applied to a `ColumnGroup` via an overloaded `>>` operator, which in turn returns a new `ColumnGroup`. A `ColumnGroup` is a list of column names as text. **Example:** `features = [column_name, ...] >> op_1 >> op_2 >> ...` This may sound more complicated than it is. Let's define our first pipeline for the Rossmann dataset. We need to categorify the categorical input features. This converts the categorical values of a feature into continuous integers (0, ..., |C|), which is required by an embedding layer of a neural network. * Initial `ColumnGroup` is `CATEGORICAL_COLUMNS` * `Op` is `Categorify`
###Code
cat_features = CATEGORICAL_COLUMNS >> ops.Categorify()
###Output
_____no_output_____
###Markdown
We can visualize the calculation with `graphviz`.
###Code
(cat_features).graph
###Output
_____no_output_____
###Markdown
Our next step is to process the continuous columns. We want to fill in missing values and normalize the continuous features with mean=0 and std=1.* Initial `ColumnGroup` is `CONTINUOUS_COLUMNS`* First `Op` is `FillMissing`* Second `Op` is `Normalize`
###Code
cont_features = CONTINUOUS_COLUMNS >> ops.FillMissing() >> ops.Normalize()
(cont_features).graph
###Output
_____no_output_____
###Markdown
Finally, we need to apply the LogOp to the label/target column.
###Code
label_feature = LABEL_COLUMNS >> ops.LogOp()
(label_feature).graph
###Output
_____no_output_____
###Markdown
We can visualize the full workflow by concatenating the output `ColumnGroups`.
###Code
(cat_features + cont_features + label_feature).graph
###Output
_____no_output_____
###Markdown
Workflow An NVTabular `workflow` orchestrates the pipelines. We initialize the NVTabular `workflow` with the output `ColumnGroups`.
###Code
proc = nvt.Workflow(cat_features + cont_features + label_feature)
###Output
_____no_output_____
###Markdown
Datasets In general, the `Op`s in our `Workflow` will require measurements of statistical properties of our data in order to be leveraged. For example, the `Normalize` op requires measurements of the dataset mean and standard deviation, and the `Categorify` op requires an accounting of all the categories a particular feature can manifest. However, we frequently need to measure these properties across datasets which are too large to fit into GPU memory (or CPU memory for that matter) at once. NVTabular solves this by providing the `Dataset` class, which breaks a set of parquet or csv files into a collection of `cudf.DataFrame` chunks that can fit in device memory. Under the hood, the data decomposition corresponds to the construction of a [dask_cudf.DataFrame](https://docs.rapids.ai/api/cudf/stable/dask-cudf.html) object. By representing our dataset as a lazily-evaluated [Dask](https://dask.org/) collection, we can handle the calculation of complex global statistics (and later, can also iterate over the partitions while feeding data into a neural network).
###Code
train_dataset = nvt.Dataset(TRAIN_PATH)
valid_dataset = nvt.Dataset(VALID_PATH)
PREPROCESS_DIR = os.path.join(DATA_DIR, "ross_pre/")
PREPROCESS_DIR_TRAIN = os.path.join(PREPROCESS_DIR, "train")
PREPROCESS_DIR_VALID = os.path.join(PREPROCESS_DIR, "valid")
! rm -rf $PREPROCESS_DIR # remove previous trials
! mkdir -p $PREPROCESS_DIR_TRAIN
! mkdir -p $PREPROCESS_DIR_VALID
###Output
_____no_output_____
###Markdown
Now that we have our datasets, we'll apply our `Workflow` to them and save the results out to parquet files for fast reading at train time. Similar to the `scikit-learn` API, we collect the statistics of our train dataset with `.fit`.
###Code
proc.fit(train_dataset)
###Output
_____no_output_____
###Markdown
We apply and transform our dataset with `.transform` and persist it to disk with `.to_parquet`. We want to shuffle our train dataset before storing to disk to provide more randomness during our deep learning training.
###Code
proc.transform(train_dataset).to_parquet(PREPROCESS_DIR_TRAIN, shuffle=nvt.io.Shuffle.PER_WORKER)
proc.transform(valid_dataset).to_parquet(PREPROCESS_DIR_VALID, shuffle=None)
###Output
_____no_output_____
###Markdown
Then, we save the workflow to be used by the Triton export functions for inference.
###Code
proc.save(os.path.join(DATA_DIR, "workflow"))
###Output
_____no_output_____
###Markdown
Finalize embedding tables In the next steps, we will train a deep learning model with either TensorFlow, PyTorch or FastAI. Our training pipeline requires information about the data schema to define the neural network architecture. In addition, we store the embedding tables' structure.
###Code
EMBEDDING_TABLE_SHAPES = nvt.ops.get_embedding_sizes(proc)
EMBEDDING_TABLE_SHAPES
json.dump(
{
"EMBEDDING_TABLE_SHAPES": EMBEDDING_TABLE_SHAPES,
"CATEGORICAL_COLUMNS": CATEGORICAL_COLUMNS,
"CONTINUOUS_COLUMNS": CONTINUOUS_COLUMNS,
"LABEL_COLUMNS": LABEL_COLUMNS,
},
open(PREPROCESS_DIR + "/stats.json", "w"),
)
!ls $PREPROCESS_DIR
###Output
stats.json train valid
|
_doc/notebooks/td2a_algo/gentry_integer_encryption_correction.ipynb
|
###Markdown
Craig Gentry's homomorphic encryption - correction A homomorphic encryption scheme preserves addition and multiplication: an addition over encrypted numbers equals the encrypted result of the addition over the unencrypted numbers. Craig Gentry proposed such a scheme in the article [Fully Homomorphic Encryption over the Integers](https://eprint.iacr.org/2009/616.pdf). The encryption system encrypts and decrypts single bits (0 or 1). Correction. A minimal toy sketch of the scheme is given at the end of this notebook.
###Code
from jyquickhelper import add_notebook_menu
add_notebook_menu()
###Output
_____no_output_____
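###Markdown
As promised above, here is a minimal toy sketch of the symmetric integer scheme from the linked paper (heavily simplified and completely insecure; the parameter sizes below are illustrative only): the secret key is an odd integer p, a bit m is encrypted as c = m + 2r + p*q with small noise r and a large random q, and decryption is (c mod p) mod 2. Adding ciphertexts then corresponds to XOR of the bits and multiplying them to AND, as long as the accumulated noise stays well below p.
###Code
# Toy DGHV-style sketch -- NOT secure, parameters chosen only so the asserts below pass
import random

p = random.randrange(10**5, 10**6) | 1  # secret key: a random odd integer

def encrypt(m, p):
    q = random.randrange(10**8, 10**9)  # large random multiplier of the key
    r = random.randrange(1, 10)         # small noise, must stay much smaller than p
    return m + 2 * r + p * q

def decrypt(c, p):
    return (c % p) % 2

c0, c1 = encrypt(0, p), encrypt(1, p)
assert decrypt(c0, p) == 0 and decrypt(c1, p) == 1
assert decrypt(c0 + c1, p) == (0 ^ 1)  # addition of ciphertexts ~ XOR of the bits
assert decrypt(c0 * c1, p) == (0 & 1)  # multiplication of ciphertexts ~ AND of the bits
###Output
_____no_output_____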
|
Udemy Course/T1 2 Data cleaning analisis datos.ipynb
|
###Markdown
Data summary: structure and dimensions
###Code
import pandas as pd
import os
mainpath="/Users/nesto/OneDrive/Documents/GitHub/Fundamentals-of-Machine-learning/Udemy Course/Course recourses/datasets"
filename="titanic/titanic3.csv"
fullpath=os.path.join(mainpath, filename)
data=pd.read_csv(fullpath)
data.head(8)
data.shape
data.columns
data.columns.values
data.tail(8)
###Output
_____no_output_____
###Markdown
Statistical summary
###Code
data.describe()
data.dtypes
###Output
_____no_output_____
###Markdown
Missing values
###Code
pd.isnull(data['age'])
pd.notnull(data['age'])
pd.isnull(data['age']).values
pd.isnull(data['age']).values.ravel()
# Number of missing elements
pd.isnull(data['age']).values.ravel().sum()
pd.notnull(data['age']).values.ravel().sum()
###Output
_____no_output_____
###Markdown
Missing values can occur for two reasons: * Extraction of the data * Collection of the data Deleting missing values
###Code
data.dropna(axis=0, how="all")
# Drop the row if any value is NaN
data2=data
data2.dropna(axis=0, how='any')
###Output
_____no_output_____
###Markdown
Inferring data (imputation) * Fill missing values with something such as a zero or the mean
###Code
data3=data
data3.fillna(0)
data4=data
data4.fillna('desconocido')
data5=data
data5['body']=data5['body'].fillna(0)
data5['home.dest']=data5['home.dest'].fillna('desconocido')
data5.tail()
pd.isnull(data5['age']).values.ravel().sum()
data5['age'].fillna(data5['age'].mean())
###Output
_____no_output_____
###Markdown
* Replace NaN with the nearest previous or following value
###Code
data5['age'][1291]
data5['age'].fillna(method='ffill')
data5['age'].fillna(method='bfill')
data5['age'].fillna(method='backfill')
###Output
_____no_output_____
###Markdown
Dummy variables * Create separate variables for each category
###Code
data['sex']
dummy_sex=pd.get_dummies(data['sex'], prefix='sex')
dummy_sex.head()
###Output
_____no_output_____
###Markdown
* Dummification: turn variables into 1-0 values according to the 2 possible answers
###Code
column_name=data.columns.values.tolist()
column_name
data=data.drop(['sex'], axis=1)
# axis=1 column, axis=0 row
pd.concat([data, dummy_sex], axis=1)
def createDummies(df, var_name):
dummy=pd.get_dummies(df[var_name], prefix=var_name)
df=df.drop(var_name, axis=1)
df=pd.concat([df, dummy], axis=1)
return df
createDummies(data3, 'sex')
###Output
_____no_output_____
|
2021/September/8_kyu/Monday-Sept-20-Playing-Banjo.ipynb
|
###Markdown
[Codewars](https://www.codewars.com/kata/53af2b8861023f1d88000832/train/python) - Fundamentals - Strings - Functions - Control Flow - Basic Language Features Are you Playing Banjo? > Create a function which answers the question "Are you playing banjo?". If your name starts with the letter "R" or lower case "r", you are playing banjo! The function takes a name (always valid strings) as its only argument, and returns one of the following strings: `name + " plays banjo"` or `name + " does not play banjo"`
###Code
# Test
import codewars_test as test
try:
from solution import areYouPlayingBanjo as are_you_playing_banjo
except ImportError:
from solution import are_you_playing_banjo
@test.describe("Fixed Tests")
def basic_tests():
@test.it('Basic Test Cases')
def basic_test_cases():
test.assert_equals(are_you_playing_banjo("martin"), "martin does not play banjo");
test.assert_equals(are_you_playing_banjo("Rikke"), "Rikke plays banjo");
###Output
_____no_output_____
###Markdown
My Solution My Understanding I will be given a string. The return statement will depend on the first letter of that string. I believe an if statement will work here. - [x] Create if statement for when true ------- Realized I need to index the first character, i.e. `name[0] == 'R'` - [x] Else statement when false
###Code
def are_you_playing_banjo(name):
# if statement for 'R', 'r'
if name[0] == 'R' or name[0] == 'r':
return f'{name} plays banjo'
# when name doesn't start with 'R', 'r'
else:
return f'{name} does not play banjo'
are_you_playing_banjo("martin")
are_you_playing_banjo("Rikke")
###Output
_____no_output_____
###Markdown
Other Solutions
###Code
def areYouPlayingBanjo(name):
return name + (' plays' if name[0].lower() == 'r' else ' does not play') + " banjo";
def areYouPlayingBanjo(name):
if name[0].lower() == 'r':
return name + ' plays banjo'
else:
return name + ' does not play banjo'
###Output
_____no_output_____
|
python/notes/Day01.python_variable.ipynb
|
###Markdown
Operations and operators
###Code
3+3
3-1
3*2
3/1
7/3
7//3
7 % 3
2**3
7 > 2
3 < 6
2 >= 3
9 <= 2
2 == 2
2 != 2
fast = 10
campus = 4
fast
campus
fast = 7
fast
fast+campus
fire = "csharp"
fighter = "java"
print(fire, fighter, sep='fastcampus')
###Output
csharpfastcampusjava
###Markdown
Small Project
###Code
r = 10
pi = 3.1415
d = 2*r
print("d=", d)
c = 2*pi*r
print("c=", c)
a = pi*(r**2)
print("a=", a)
gnb = 4*pi*(r**2)
print("gnb=", gnb)
v=(4/3)*pi*(r**3)
print("v=", v)
###Output
d= 20
c= 62.830000000000005
a= 314.15000000000003
gnb= 1256.6000000000001
v= 4188.666666666666
|
sentence_embeddings_1.ipynb
|
###Markdown
###Code
!pip install -U sentence-transformers
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('bert-base-nli-mean-tokens')
sentences = ['This framework generates embeddings for each input sentence',
'Sentences are passed as a list of string.',
'The quick brown fox jumps over the lazy dog.']
sentence_embeddings = model.encode(sentences)
for sentence, embedding in zip(sentences, sentence_embeddings):
print("Sentence:", sentence)
print("Embedding:", len(embedding), embedding[:10])
print("")
sentence_embeddings = model.encode([
'Мама мыла раму'
])
rus_sentences = ['мама мыла раму', 'рама мыла маму']
rus_sentences_embeddings = model.encode(rus_sentences)
for sentence, embedding in zip(rus_sentences, rus_sentences_embeddings):
print("Sentence:", sentence)
print("Embedding:", len(embedding), embedding[:10])
print("")
tatar_sentences = ['Вахит Имамовның бу китабын экстремистик китап дип бәяләргә тырышып, тыю өчен суд-мәхкәмә эшләре бара.',
'Суд киләсе елда эшен дәвам итәчәк.',
'Ә әлегә документаль әсәр экспертиза уза.',
'Әлеге китапны ни сәбәпледер, мин дә үз вакытында укымый калганмын']
tat_sentences_embeddings = model.encode(tatar_sentences)
for sentence, embedding in zip(tatar_sentences, tat_sentences_embeddings):
print("Sentence:", sentence)
print("Embedding:", len(embedding), embedding[:10])
print("")
###Output
Sentence: Вахит Имамовның бу китабын экстремистик китап дип бәяләргә тырышып, тыю өчен суд-мәхкәмә эшләре бара.
Embedding: 768 [-0.19085988 0.5377364 1.1533412 0.36398053 0.9467714 -0.22634262
0.6719892 0.23747882 -0.02229647 0.29271245]
Sentence: Суд киләсе елда эшен дәвам итәчәк.
Embedding: 768 [-0.17635988 0.2094933 1.1887797 -0.07288645 0.39875263 -0.01915761
0.76020753 0.43875957 -0.05099004 0.07172778]
Sentence: Ә әлегә документаль әсәр экспертиза уза.
Embedding: 768 [-0.3382328 0.26497155 1.0133281 0.49888176 1.0454834 -0.17264372
0.46522382 0.51790977 0.02218307 0.23018496]
Sentence: Әлеге китапны ни сәбәпледер, мин дә үз вакытында укымый калганмын
Embedding: 768 [-0.35848114 0.48103192 0.9773039 0.37314278 0.7843815 -0.09250573
0.64860684 0.556238 -0.32417187 0.40644333]
###Markdown
FAISS Installation Sadly, it can be painful :( According to this answer from SO: https://stackoverflow.com/questions/47967252/installing-faiss-on-google-colaboratory
###Code
#!wget https://anaconda.org/pytorch/faiss-cpu/1.2.1/download/linux-64/faiss-cpu-1.2.1-py36_cuda9.0.176_1.tar.bz2
#!tar xvjf faiss-cpu-1.2.1-py36_cuda9.0.176_1.tar.bz2
!wget https://anaconda.org/pytorch/faiss-gpu/1.2.1/download/linux-64/faiss-gpu-1.2.1-py36_cuda9.0.176_1.tar.bz2
!tar xvjf faiss-gpu-1.2.1-py36_cuda9.0.176_1.tar.bz2
!cp -r lib/python3.6/site-packages/* /usr/local/lib/python3.6/dist-packages/
!pip install mkl
import faiss
###Output
_____no_output_____
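###Markdown
A quick smoke test (random vectors, purely illustrative) to confirm that the install actually works:
###Code
# Tiny FAISS sanity check: build a flat L2 index over random vectors and query it
import numpy as np
import faiss

xb = np.random.random((100, 64)).astype('float32')  # database vectors
index = faiss.IndexFlatL2(64)                        # exact L2 index for 64-dim vectors
index.add(xb)
distances, neighbours = index.search(xb[:5], 1)      # each query vector should find itself
###Output
_____no_output_____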
###Markdown
Now it's time to download the datasets
###Code
import math
import logging
from datetime import datetime
from pathlib import Path
import os
from torch.utils.data import DataLoader
import torch
import torch.nn as nn
import joblib
import faiss
import numpy as np
import pandas as pd
from sentence_transformers import models, losses
from sentence_transformers import SentencesDataset, LoggingHandler, SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator
from sentence_transformers.readers import *
from sentence_transformers.util import batch_to_device
from sentence_transformers.readers.InputExample import InputExample
logging.basicConfig(format='%(asctime)s - %(message)s',
datefmt='%Y-%m-%d %H:%M:%S',
level=logging.INFO,
handlers=[LoggingHandler()])
###Output
_____no_output_____
###Markdown
Tatoeba manipulation
###Code
!wget https://object.pouta.csc.fi/OPUS-Tatoeba/v20190709/moses/ru-tt.txt.zip
!unzip ru-tt.txt.zip -d ./tatoeba/
!wget https://object.pouta.csc.fi/OPUS-Tatoeba/v20190709/moses/en-tt.txt.zip
!unzip en-tt.txt.zip -d ./tatoeba/
class TatoebaReader:
"""Reads in a plain text file, in which every line contains one
sentence."""
def __init__(self, file_path: Path):
self.file_path = file_path
def get_examples(self):
examples = []
with open(self.file_path) as fin:
for i, line in enumerate(fin.readlines()):
examples.append(InputExample(guid=i, texts=[line], label=0))
return examples
TATOEBA_PATH = Path("./tatoeba/")
# Use BERT for mapping tokens to embeddings
# handle the downloading and caching for you:
word_embedding_model = models.BERT('bert-base-multilingual-cased')
def children(m):
return m if isinstance(m, (list, tuple)) else list(m.children())
def set_trainable_attr(m, b):
m.trainable = b
for p in m.parameters():
p.requires_grad = b
def apply_leaf(m, f):
c = children(m)
if isinstance(m, nn.Module):
f(m)
if len(c) > 0:
for l in c:
apply_leaf(l, f)
def set_trainable(l, b):
apply_leaf(l, lambda m: set_trainable_attr(m, b))
set_trainable(word_embedding_model.bert.embeddings.word_embeddings, False)
print(word_embedding_model.bert.embeddings.word_embeddings.weight.requires_grad)
print(word_embedding_model.bert.embeddings.position_embeddings.weight.requires_grad)
# Apply mean pooling to get one fixed sized sentence vector
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(),
pooling_mode_mean_tokens=True,
pooling_mode_cls_token=False,
pooling_mode_max_tokens=False)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
batch_size = 16
TATOEBA_PATH = '/content/tatoeba'
lang_1 = 'ru'
lang_2 = 'tt'
def evaluate_language_pair(model, pair_name="en-tt", batch_size=32):
lang_1, lang_2 = pair_name.split("-")
reader_1 = TatoebaReader(os.path.join(TATOEBA_PATH, f"Tatoeba.{pair_name}.{lang_1}"))
ds_1 = SentencesDataset(reader_1.get_examples(), model=model)
loader_1 = DataLoader(
ds_1, shuffle=False, batch_size=batch_size,
collate_fn=model.smart_batching_collate)
reader_2 = TatoebaReader(os.path.join(TATOEBA_PATH, f"Tatoeba.{pair_name}.{lang_2}"))
ds_2 = SentencesDataset(reader_2.get_examples(), model=model)
loader_2 = DataLoader(
ds_2, shuffle=False, batch_size=batch_size,
collate_fn=model.smart_batching_collate)
model.eval()
emb_1, emb_2 = [], []
with torch.no_grad():
for batch in loader_1:
emb_1.append(model(
batch_to_device(batch, "cuda")[0][0]
)['sentence_embedding'])
for batch in loader_2:
emb_2.append(model(
batch_to_device(batch, "cuda")[0][0]
)['sentence_embedding'])
emb_1 = torch.cat(emb_1).cpu().numpy()
emb_2 = torch.cat(emb_2).cpu().numpy()
idx_1 = faiss.IndexFlatL2(emb_1.shape[1])
faiss.normalize_L2(emb_1)
idx_1.add(emb_1)
idx_2 = faiss.IndexFlatL2(emb_2.shape[1])
faiss.normalize_L2(emb_2)
idx_2.add(emb_2)
results = []
_, match = idx_2.search(x=emb_1, k=1)
results.append((
lang_1, lang_2,
np.sum(match[:, 0] == np.arange(len(emb_1))),
len(emb_1)
))
_, match = idx_1.search(x=emb_2, k=1)
results.append((
lang_2, lang_1,
np.sum(match[:, 0] == np.arange(len(emb_2))),
len(emb_2)
))
return results
PAIRS = ["en-tt", "ru-tt"]
results = []
for pair in PAIRS:
results += evaluate_language_pair(model, pair_name=pair, batch_size=50)
df_baseline_mean = pd.DataFrame(results, columns=["from", "to", "correct", "total"])
df_baseline_mean
###Output
Convert dataset: 100%|██████████| 1414/1414 [00:00<00:00, 4203.24it/s]
Convert dataset: 35%|███▍ | 490/1414 [00:00<00:00, 4894.94it/s]
###Markdown
Fine Tuning Firstly, just download datasets for fine-tuning.
###Code
import os
folder_path = './datasets/'
if not os.path.exists(folder_path):
os.makedirs(folder_path)
import urllib.request
import zipfile
print('Beginning download of datasets')
datasets = ['AllNLI.zip', 'stsbenchmark.zip', 'wikipedia-sections-triplets.zip']
server = "https://public.ukp.informatik.tu-darmstadt.de/reimers/sentence-transformers/datasets/"
for dataset in datasets:
print("Download", dataset)
url = server+dataset
dataset_path = os.path.join(folder_path, dataset)
urllib.request.urlretrieve(url, dataset_path)
print("Extract", dataset)
with zipfile.ZipFile(dataset_path, "r") as zip_ref:
zip_ref.extractall(folder_path)
os.remove(dataset_path)
print("All datasets downloaded and extracted")
# Read the dataset
batch_size = 16
nli_reader = NLIDataReader('./datasets/AllNLI')
sts_reader = STSDataReader('./datasets/stsbenchmark')
train_num_labels = nli_reader.get_num_labels()
model_save_path = 'output/training_nli_bert-'+datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
# Convert the dataset to a DataLoader ready for training
logging.info("Read AllNLI train dataset")
train_data = SentencesDataset(nli_reader.get_examples('train.gz', max_examples=100000), model=model)
train_dataloader = DataLoader(train_data, shuffle=True, batch_size=batch_size)
train_loss = losses.SoftmaxLoss(
model=model, sentence_embedding_dimension=model.get_sentence_embedding_dimension(), num_labels=train_num_labels)
joblib.dump(train_data, "allnli_train_dataset.jl")
logging.info("Read STSbenchmark dev dataset")
dev_data = SentencesDataset(examples=sts_reader.get_examples('sts-dev.csv'), model=model)
dev_dataloader = DataLoader(dev_data, shuffle=False, batch_size=batch_size)
evaluator = EmbeddingSimilarityEvaluator(dev_dataloader)
joblib.dump(dev_data, "sts_dev_dataset.jl")
# Configure the training
num_epochs = 1
warmup_steps = math.ceil(len(train_dataloader) * num_epochs * 0.1) #10% of train data for warm-up
logging.info("Warmup-steps: {}".format(warmup_steps))
# Train the model
model.fit(train_objectives=[(train_dataloader, train_loss)],
evaluator=evaluator,
epochs=num_epochs,
evaluation_steps=1000,
warmup_steps=warmup_steps,
output_path=model_save_path
)
###Output
Epoch: 0%| | 0/1 [00:00<?, ?it/s]
Iteration: 0%| | 0/6250 [00:00<?, ?it/s][A
###Markdown
After fine-tuning
###Code
model = SentenceTransformer('/content/output/training_nli_bert-2020-01-13_14-53-24')
results = []
for pair in PAIRS:
results += evaluate_language_pair(model, pair_name=pair, batch_size=50)
df_finetuned = pd.DataFrame(results, columns=["from", "to", "correct", "total"])
df_finetuned
df_baseline_mean
###Output
_____no_output_____
|
examples/experimental/koen/MLP Plan.ipynb
|
###Markdown
Dataset
###Code
mnist_path = Path.home() / ".pysyft" / "mnist"
mnist_path.mkdir(exist_ok=True, parents=True)
mnist_train = datasets.MNIST(str(mnist_path), train=True, download=True,
transform=transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]))
mnist_test = datasets.MNIST((mnist_path), train=False,
transform=transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]))
train_loader = th.utils.data.DataLoader(mnist_train, batch_size=64*3, shuffle=True, pin_memory=True)
test_loader = th.utils.data.DataLoader(mnist_test, batch_size=1024, shuffle=True, pin_memory=True)
###Output
_____no_output_____
###Markdown
Define Plan Obvious shortcomings: - slicing is not in the AST, so we cannot do xs[0:64] - nn.Module is not serializable, so we cannot send it - we are using syft.lib.python.list.List to create a sendable list of model params
###Code
class MLP(sy.Module):
def __init__(self, torch_ref):
super().__init__(torch_ref=torch_ref)
self.l1 = self.torch_ref.nn.Linear(784, 100)
self.a1 = self.torch_ref.nn.ReLU()
self.l2 = self.torch_ref.nn.Linear(100, 10)
def forward(self, x):
x_reshaped = x.view(-1, 28 * 28)
l1_out = self.a1(self.l1(x_reshaped))
l2_out = self.l2(l1_out)
return l2_out
def set_params(model, params):
"""happens outside of plan"""
for p, p_new in zip(model.parameters(), params): p.data = p_new.data
def cross_entropy_loss(logits, targets, batch_size):
norm_logits = logits - logits.max()
log_probs = norm_logits - norm_logits.exp().sum(dim=1, keepdim=True).log()
return -(targets * log_probs).sum() / batch_size
def sgd_step(model, lr=0.1):
with ROOT_CLIENT.torch.no_grad():
for p in model.parameters():
p.data = p.data - lr * p.grad
p.grad = th.zeros_like(p.grad.get())
local_model = MLP(th)
@make_plan
def train(xs = th.rand([64*3, 1, 28, 28]), ys = th.randint(0, 10, [64*3, 10]),
params = List(local_model.parameters()) ):
model = local_model.send(ROOT_CLIENT)
set_params(model, params)
for i in range(1):
indices = th.tensor(range(64*i, 64*(i+1)))
x, y = xs.index_select(0, indices), ys.index_select(0, indices)
out = model(x)
loss = cross_entropy_loss(out, y, 64)
loss.backward()
sgd_step(model)
return model.parameters()
train
train.actions[:10]
###Output
_____no_output_____
###Markdown
Run
###Code
alice_client = VirtualMachine(name="alice").get_client()
train_ptr = train.send(alice_client)
def test(test_loader, model):
correct = []
model.eval()
for data, target in test_loader:
output = model(data)
_, pred = th.max(output, 1)
correct.append(th.sum(np.squeeze(pred.eq(target.data.view_as(pred)))))
acc = sum(correct) / len(test_loader.dataset)
return acc
def show_predictions(test_loader, model, n=6):
xs, ys = next(iter(test_loader))
preds = model(xs).detach()
fig, axs = plt.subplots(1, n, sharex='col', sharey='row', figsize=(16, 8))
for i in range(n):
ax = axs[i]
ax.set_xticks([]),ax.set_yticks([])
ax.set_xlabel(f"prediction: {np.argmax(preds[i])}, actual: {ys[i]}")
ax.imshow(xs[i].reshape((28, 28)))
show_predictions(test_loader, local_model)
print(f"accuracy: {test(test_loader, local_model):.2F}")
###Output
accuracy: 0.07
###Markdown
Train
###Code
for i, (x, y) in enumerate(train_loader):
y = th.nn.functional.one_hot(y)
res_ptr = train_ptr(xs=x,ys=y, params=local_model.parameters())
params, = res_ptr.get()
set_params(local_model, params)
if i%10 == 0:
acc = test(test_loader, local_model)
print(f"Iter: {i} Test accuracy: {acc:.2F}", flush=True)
if i>100:
break
###Output
Iter: 0 Test accuracy: 0.32
Iter: 10 Test accuracy: 0.67
Iter: 20 Test accuracy: 0.84
Iter: 30 Test accuracy: 0.82
Iter: 40 Test accuracy: 0.83
Iter: 50 Test accuracy: 0.85
Iter: 60 Test accuracy: 0.89
Iter: 70 Test accuracy: 0.89
Iter: 80 Test accuracy: 0.88
Iter: 90 Test accuracy: 0.88
Iter: 100 Test accuracy: 0.88
###Markdown
Test
###Code
show_predictions(test_loader, local_model)
print(f"accuracy: {test(test_loader, local_model):.2F}")
###Output
accuracy: 0.90
###Markdown
Dataset
###Code
mnist_path = Path.home() / ".pysyft" / "mnist"
mnist_path.mkdir(exist_ok=True, parents=True)
mnist_train = datasets.MNIST(str(mnist_path), train=True, download=True,
transform=transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]))
mnist_test = datasets.MNIST((mnist_path), train=False,
transform=transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]))
train_loader = th.utils.data.DataLoader(mnist_train, batch_size=64*3, shuffle=True, pin_memory=True)
test_loader = th.utils.data.DataLoader(mnist_test, batch_size=1024, shuffle=True, pin_memory=True)
###Output
_____no_output_____
###Markdown
Define Plan Obvious shortcomings: - slicing is not in the AST, so we cannot do xs[0:64] - nn.Module is not serializable, so we cannot send it - we are using syft.lib.python.list.List to create a sendable list of model params
###Code
class MLP(sy.Module):
def __init__(self, torch_ref):
super().__init__(torch_ref=torch_ref)
self.l1 = self.torch_ref.nn.Linear(784, 100)
self.a1 = self.torch_ref.nn.ReLU()
self.l2 = self.torch_ref.nn.Linear(100, 10)
def forward(self, x):
x_reshaped = x.view(-1, 28 * 28)
l1_out = self.a1(self.l1(x_reshaped))
l2_out = self.l2(l1_out)
return l2_out
def set_params(model, params):
"""happens outside of plan"""
for p, p_new in zip(model.parameters(), params): p.data = p_new.data
def cross_entropy_loss(logits, targets, batch_size):
norm_logits = logits - logits.max()
log_probs = norm_logits - norm_logits.exp().sum(dim=1, keepdim=True).log()
return -(targets * log_probs).sum() / batch_size
def sgd_step(model, lr=0.1):
with ROOT_CLIENT.torch.no_grad():
for p in model.parameters():
p.data = p.data - lr * p.grad
p.grad = th.zeros_like(p.grad.get())
local_model = MLP(th)
@make_plan
def train(xs = th.rand([64*3, 1, 28, 28]), ys = th.randint(0, 10, [64*3, 10]),
params = List(local_model.parameters()) ):
model = local_model.send(ROOT_CLIENT)
set_params(model, params)
for i in range(1):
indices = th.tensor(range(64*i, 64*(i+1)))
x, y = xs.index_select(0, indices), ys.index_select(0, indices)
out = model(x)
loss = cross_entropy_loss(out, y, 64)
loss.backward()
sgd_step(model)
return model.parameters()
train, train.actions[:10]
###Output
_____no_output_____
###Markdown
Run
###Code
alice_client = VirtualMachine(name="alice").get_client()
train_ptr = train.send(alice_client)
def test(test_loader, model):
correct = []
model.eval()
for data, target in test_loader:
output = model(data)
_, pred = th.max(output, 1)
correct.append(th.sum(np.squeeze(pred.eq(target.data.view_as(pred)))))
acc = sum(correct) / len(test_loader.dataset)
return acc
def show_predictions(test_loader, model, n=6):
xs, ys = next(iter(test_loader))
preds = model(xs).detach()
fig, axs = plt.subplots(1, n, sharex='col', sharey='row', figsize=(16, 8))
for i in range(n):
ax = axs[i]
ax.set_xticks([]),ax.set_yticks([])
ax.set_xlabel(f"prediction: {np.argmax(preds[i])}, actual: {ys[i]}")
ax.imshow(xs[i].reshape((28, 28)))
show_predictions(test_loader, local_model)
print(f"accuracy: {test(test_loader, local_model):.2F}")
###Output
accuracy: 0.07
###Markdown
Train
###Code
for i, (x, y) in enumerate(train_loader):
y = th.nn.functional.one_hot(y)
res_ptr = train_ptr(xs=x,ys=y, params=local_model.parameters())
params, = res_ptr.get()
set_params(local_model, params)
if i%10 == 0:
acc = test(test_loader, local_model)
print(f"Iter: {i} Test accuracy: {acc:.2F}", flush=True)
if i>100:
break
###Output
Iter: 0 Test accuracy: 0.32
Iter: 10 Test accuracy: 0.67
Iter: 20 Test accuracy: 0.84
Iter: 30 Test accuracy: 0.82
Iter: 40 Test accuracy: 0.83
Iter: 50 Test accuracy: 0.85
Iter: 60 Test accuracy: 0.89
Iter: 70 Test accuracy: 0.89
Iter: 80 Test accuracy: 0.88
Iter: 90 Test accuracy: 0.88
Iter: 100 Test accuracy: 0.88
###Markdown
Test
###Code
show_predictions(test_loader, local_model)
print(f"accuracy: {test(test_loader, local_model):.2F}")
###Output
accuracy: 0.90
|
bad_ugly_n_concise/bad_ugly_n_concise.ipynb
|
###Markdown
Bad, ugly and/or concise Python code (py3k) *Maciej Urbański, rooter@kosciak.net* built-ins! https://docs.python.org/3/library/functions.html

||||||
|-|-|-|-|-|
|abs()|dict()|help()|min()|setattr()|
|all()|dir()|hex()|next()|slice()|
|any()|divmod()|id()|object()|sorted()|
|ascii()|enumerate()|input()|oct()|staticmethod()|
|bin()|eval()|int()|open()|str()|
|bool()|exec()|isinstance()|ord()|sum()|
|bytearray()|filter()|issubclass()|pow()|super()|
|bytes()|float()|iter()|print()|tuple()|
|callable()|format()|len()|property()|type()|
|chr()|frozenset()|list()|range()|vars()|
|classmethod()|getattr()|locals()|repr()|zip()|
|compile()|globals()|map()|reversed()|\_\_import\_\_()|
|complex()|hasattr()|max()|round()| |
|delattr()|hash()|memoryview()|set()| |

Assignment, unpacking & slicing
```py
a = 1
b, c = 2, 3
mylist = [a, b, c][:-1]
```
###Code
# initialize multilbe variables at the same time
a = b = c = 1337
a = b = c = [] # probably a bug
a, b, c = ([], ) * 3 # exactly the same problem!
a, b, c = ([] for i in range(3)) # ok!
a, b, c = map(list, [()] * 3)
# lets unpack a tuple, disregard extra stuff
a, b, c = (1, 3, 3, 7)[:3] # meh
a, b, c, _ = 1, 3, 3, 7
a, b, *_ = 1, 3, 3, 7 # PEP 3132 -- Extended Iterable Unpacking
###Output
_____no_output_____
###Markdown
but I'm still on python 2!
###Code
seq = (1, 3, 3, 7)
head, tail = seq[0], seq[1:] # python3: head, *tail = seq
it = iter(seq)
head, tail = next(it), list(it)
# yea... sucks to be you
counter = 0
for element in ['a', 'b', 'c', 'd']:
counter += 1
print(counter, element)
for counter, element in enumerate(['a', 'b', 'c', 'd'], start=1): # [(1, 'a'), (2, 'b'), ...]
print(counter, element)
import itertools
###Output
_____no_output_____
###Markdown
note there is a difference between these two above!
###Code
for n, (first, second) in enumerate(itertools.product(range(3), repeat=2)):
print(n, first, second)
###Output
0 0 0
1 0 1
2 0 2
3 1 0
4 1 1
5 1 2
6 2 0
7 2 1
8 2 2
###Markdown
Slicing (emo picture didn't make the final cut)
###Code
l = [1, 2, 3, 4, 5]
print(l[2:3])
l[2:3] = [0, 0, 0,]
print(l)
# now the same with `slice` # py3k
l = [1, 2, 3, 4, 5]
s = slice(2, 3)
print(l[s])
l[s] = [0, 0, 0,]
print(l)
l = 1, 2, 3, 4
# get first element of list
first = l[0]
l = []
# but I want to support empty list too!
first = next(iter(l), None)
# or
first = l[0] if l else None
# get last element of list
# but I want to...
last = next(iter(l[-1:]), None)
# or
last = l[-1] if l else None
# what about generators?
last = None
for last in range(10): pass
print(last)
# wait a minute, this is python 3
*_, last = range(20)
print(last)
###Output
19
###Markdown
Conditional statements and logic operators
###Code
a, b, condition = 1, 2, True
# range checking
if a<1 and a>0: pass # ew.
if 0<a<1: pass # better
# conditional value
a if condition else b
# still missing the ternary operator from C? condition?a:b
(b, a)[condition]
condition and a or b
# quite often we want to check if something is there and if not substitute it with sth else
# instead of
a if a else b
# you can do
a or b
# do you just want to conditionally run some code?
True and print('YES!') or print('no')
flag = 1 is 1
other_flag = flag == 1
other_flag = bool(flag == 1) # ~maybe~ better?
# BANG BANG version for FULL STACK DEVELOPERS
other_flag = not not flag == 1
###Output
_____no_output_____
###Markdown
imports and so on
###Code
from sys import *
###Output
_____no_output_____
###Markdown
please don't.
###Code
# do you want to use sys, os and html modules - just import some other module that uses them!
import cgi
cgi.os
cgi.sys
cgi.html
###Output
_____no_output_____
###Markdown
PRETTY PLEASE?
```py
# do you recognize this handy one-liner?
import pdb; pdb.set_trace()  # semicolon in python? BLASPHEMY
```
```py
# FIXED!
__import__('pdb').set_trace()
```
On a more serious note: if you want to import modules by name, look into the `importlib` module.
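For instance, a tiny sketch of importing a module by name with `importlib` (`'json'` here is just an arbitrary example):
```py
import importlib

mod = importlib.import_module('json')  # same effect as `import json as mod`
mod.dumps({'works': True})
```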
###Code
from math import fabs  # fabs lives in the math module, not os
x = fabs(a) + fabs(b) + fabs(c)
# don't you think this is just too long?
from math import fabs as f
x = f(a) + f(b) + f(c)
# still kinda long...
import math; f=math.fabs
x = f(a) + f(b) + f(c)
###Output
_____no_output_____
###Markdown
Comprehension
###Code
import random
# list comprehension
list_of_pairs = [(i, random.random()) for i in range(10)]
# I know, not as cool as generators...
# dict
my_dict = {key: value for key, value in list_of_pairs}
# set
my_set = {v for v in my_dict.keys()}
# but what to do if you hate your coworkers?
import random
my_set = {
v for v in {
key: value
for key, value in [(i, random.random()) for i in range(10)]}}
# PERFECT
print(my_set)
###Output
_____no_output_____
###Markdown
oh? you REALLY hate them?
###Code
# just start substituting your traditional `for` loops with comprehensions
[print('KILLME') for i in range(10)]
# or `map`
map(lambda i:print('KILLME'), range(10))
# oh, right, python3 - map is a generator
list(map(lambda _:print('KILLME'), range(10)))
# if you want to give additional FU to py2 and don't care about
# empty loops:
_,*_=map(lambda _:print('KILLME'), range(10))
###Output
_____no_output_____
###Markdown
built-ins! https://docs.python.org/3/library/functions.html
###Code
# parsing django-like order by
order_by = '-date'
desc = order_by[0] == '-'
ordering_field = order_by[desc:]
_, desc, ordering_field = order_by.rpartition('-')
# rotating list of lists
# aka matrix transposition
# aka COLUMNS ARE NOW ROWS!
zip(*original[::-1])
###Output
_____no_output_____
###Markdown
not powerful enough for you? check out the `itertools` module! Python 3's `range` is a little different from the one in Python 2
###Code
r=range(1000000000) # doesn't actually eat up memory
###Output
_____no_output_____
###Markdown
pfff, I knew that!
###Code
import timeit
# but did you know it implements `__contains__`?
5 in range(9999999999999999999)
# ... but only for `long`'s?
timeit.timeit('30000 in range(1, 1000000, 3)', number=1000)
timeit.timeit('30000.5 in range(0, 10000, 3)', number=1000)
# btw, in case you were wondering:
0.5 in range(-10, 10)
-10 < 0.5 < 10
# still not on Python 3.6 but you want PEP 498: Formatted string literals?
name, place = 'stranger', 'Internet'
'Welcome {name} to {place}'.format(**locals())
# ... PEP 498 stuff seems actually much more powerful, but still ~kinda~ similar
# multiply every element of an iterable
# a*b*c*...
import operator
import functools
functools.reduce(operator.mul, range(1,7), 1)
###Output
_____no_output_____
|
CONTRIBUTION/Jupyter notebooks/Rahul_Gupta.ipynb
|
###Markdown
Part 1 Imports
###Code
%config Completer.use_jedi = False
import numpy as np
import pandas as pd
import nltk
nltk.download('vader_lexicon')
from nltk.sentiment.vader import SentimentIntensityAnalyzer
import nltk
nltk.download('punkt')
from nltk.tokenize import word_tokenize
from nltk import pos_tag
nltk.download('stopwords')
from nltk.corpus import stopwords
nltk.download('wordnet')
from nltk.corpus import wordnet
import re
nltk.download('averaged_perceptron_tagger')
nltk.download('sentiwordnet')
###Output
_____no_output_____
###Markdown
Handling Dataset Reading the dataset and adding new features
###Code
df= pd.read_csv("dataset.csv")
df.head()
df = df.drop('Unnamed: 0', axis=1)
df.head()
###Output
_____no_output_____
###Markdown
Cleaning the tweet captions
###Code
# Define a function to clean the text
def clean(text):
# Removes all special characters and numericals leaving the alphabets
text = re.sub('[^A-Za-z]+', ' ', text)
return text
# Cleaning the text in the review column
df['Cleaned Tweetcaptions'] = df['tweetcaption'].apply(clean)
df.head()
###Output
_____no_output_____
###Markdown
Adding Parts of Speech (POS) for Vader analysis
###Code
# POS tagger dictionary
pos_dict = {'J':wordnet.ADJ, 'V':wordnet.VERB, 'N':wordnet.NOUN, 'R':wordnet.ADV}
def token_stop_pos(text):
tags = pos_tag(word_tokenize(text))
newlist = []
for word, tag in tags:
if word.lower() not in set(stopwords.words('english')):
newlist.append(tuple([word, pos_dict.get(tag[0])]))
return newlist
df['POS tagged'] = df['Cleaned Tweetcaptions'].apply(token_stop_pos)
df.head()
###Output
_____no_output_____
###Markdown
Adding Lemma column
###Code
from nltk.stem import WordNetLemmatizer
wordnet_lemmatizer = WordNetLemmatizer()
def lemmatize(pos_data):
lemma_rew = " "
for word, pos in pos_data:
if not pos:
lemma = word
lemma_rew = lemma_rew + " " + lemma
else:
lemma = wordnet_lemmatizer.lemmatize(word, pos=pos)
lemma_rew = lemma_rew + " " + lemma
return lemma_rew
df['Lemma'] = df['POS tagged'].apply(lemmatize)
df.head()
df.Data.nunique()
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 35266 entries, 0 to 35265
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Unnamed: 0 35266 non-null int64
1 Data 35266 non-null object
2 Date 35266 non-null object
3 Time 35266 non-null object
4 tweetcaption 35266 non-null object
dtypes: int64(1), object(4)
memory usage: 1.6+ MB
###Markdown
As we can see, there are no null values. Building Functions I have used three rule-based analysis models: 1. VADER (Valence Aware Dictionary for Sentiment Reasoning): It is a model used for text sentiment analysis that is sensitive to both polarity (positive/negative) and intensity (strength) of emotion. 2. TextBlob: TextBlob returns the polarity and subjectivity of a sentence. Polarity lies in [-1,1], where -1 defines a negative sentiment and 1 defines a positive sentiment. 3. SentiWordNet: SentiWordNet is an opinion lexicon derived from the WordNet database where each term is associated with numerical scores indicating positive and negative sentiment information. VADER
###Code
sid = SentimentIntensityAnalyzer() #Vader Sentiment analyser
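# polarity_scores returns a dict of the form {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...};
# 'compound' is the normalized overall score in [-1, 1]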
df['scores']=df['Lemma'].apply(lambda lemma: sid.polarity_scores(lemma)) #applying Vader analysis on Lemma
df.head()
df['compound'] = df['scores'].apply(lambda score_dict: score_dict['compound'])
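# The per-class VADER scores are referenced further below (e.g. 'NegativeScore');
# assuming they come from the same score dict, they can be extracted the same way:
df['NegativeScore'] = df['scores'].apply(lambda score_dict: score_dict['neg'])
df['neutralScore'] = df['scores'].apply(lambda score_dict: score_dict['neu'])
df['positiveScore'] = df['scores'].apply(lambda score_dict: score_dict['pos'])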
df['sentiment_type']=''
df.loc[df.compound>0,'sentiment_type']='POSITIVE'
df.loc[df.compound==0,'sentiment_type']='NEUTRAL'
df.loc[df.compound<0,'sentiment_type']='NEGATIVE'
df.head()
df['sentiment_type'].value_counts()
###Output
_____no_output_____
###Markdown
TextBlob
###Code
from textblob import TextBlob
def getSubjectivity(text):
return TextBlob(text).sentiment.subjectivity
#Create a function to get the polarity
def getPolarity(text):
return TextBlob(text).sentiment.polarity
#Create two new columns 'Subjectivity' & 'Polarity'
# df['TextBlob_Subjectivity'] = df['tweet'].apply(getSubjectivity)
df ['TextBlob_Polarity'] = df["Lemma"].apply(getPolarity)
def getAnalysis(score):
if score < 0:
return "Negative"
elif score == 0:
return "Neutral"
else:
return "Positive"
df["TextBlob_Analysis"] = df["TextBlob_Polarity"].apply(getAnalysis )
df.head()
###Output
_____no_output_____
###Markdown
comparing analysis of vader and TextBlob
###Code
print(df['sentiment_type'].value_counts())
print(df['TextBlob_Analysis'].value_counts())
###Output
POSITIVE 27652
NEGATIVE 7468
NEUTRAL 146
Name: sentiment_type, dtype: int64
Positive 29343
Negative 5675
Neutral 248
Name: TextBlob_Analysis, dtype: int64
###Markdown
Here I tried to normalize the scores to come up with a new scoring system; however, it turned out not to be very useful, so I dropped this idea
###Code
df.describe()
normalized_df=(df-df.mean())/df.std()
normalized_df.head()
df['nNegativeScore']=normalized_df['NegativeScore']
df['nneutralScore']=normalized_df['neutralScore']
df['npostiveScore']=normalized_df['positiveScore']
df['ncompound']=normalized_df['compound']
df['nTextBlob_Polarity']=normalized_df['TextBlob_Polarity']
df.head()
df=df.drop(["POS tagged",'scores'],axis=1)
df.head()
df[df['sentiment_type'] =="NEUTRAL"].head() # checking neutral values
cmax= df['compound'].max()
cmin= df['compound'].min()
df['nncompound'] = df.apply(lambda x:(x['compound']-cmin)/(cmax-cmin), axis=1) # normalizing
df.head()
cmax= df['TextBlob_Polarity'].mean()
cmin= df['TextBlob_Polarity'].std()
df['nn1TextBlob_Polarity'] = df.apply(lambda x:(x['TextBlob_Polarity']-cmax)/(cmin), axis=1)
df.head()
cmax= df['compound'].std()
cmin= df['compound'].mean()
df['nn1compound'] = df.apply(lambda x:(x['nncompound']-cmin)/(cmax), axis=1) #standardizing
df.head()
###Output
_____no_output_____
###Markdown
I found out that normalizing the scores did not help so I dropped these columns
###Code
df=df.drop(["nncompound","nnTextBlob_Polarity","nn1TextBlob_Polarity","nn1compound"],axis=1) # dropping these as I found out that these were not useful
###Output
_____no_output_____
###Markdown
Creating a new score by taking the average of scores of VADER and TextBlob
###Code
df['average0'] = df.apply(lambda x:(x['compound']+x['TextBlob_Polarity'])/(2), axis=1) # finding average
df.head()
def getAnalysis(score):
if score < 0:
return "Negative"
elif score == 0:
return "Neutral"
else:
return "Positive"
df["new_score"] = df["average0"].apply(getAnalysis ) #Analysing the new score
df.head()
###Output
_____no_output_____
###Markdown
Comparing the analysis of new score with the two models
###Code
print(df["new_score"].value_counts())
print(df['TextBlob_Analysis'].value_counts())
print(df['sentiment_type'].value_counts())
###Output
Positive 27858
Negative 7330
Neutral 78
Name: new_score, dtype: int64
Positive 29343
Negative 5675
Neutral 248
Name: TextBlob_Analysis, dtype: int64
POSITIVE 27652
NEGATIVE 7468
NEUTRAL 146
Name: sentiment_type, dtype: int64
###Markdown
SentiWordNet Adding Parts of Speech Column (POS)
###Code
# POS tagger dictionary
pos_dict = {'J':wordnet.ADJ, 'V':wordnet.VERB, 'N':wordnet.NOUN, 'R':wordnet.ADV}
def token_stop_pos(text):
tags = pos_tag(word_tokenize(text))
newlist = []
for word, tag in tags:
if word.lower() not in set(stopwords.words('english')):
newlist.append(tuple([word, pos_dict.get(tag[0])]))
return newlist
df['POS tagged'] = df['Cleaned Tweetcaptions'].apply(token_stop_pos)
df.head()
###Output
_____no_output_____
###Markdown
Applying SentiWordNet and modifying it to get the score as well
###Code
from nltk.corpus import wordnet as wn
from nltk.corpus import sentiwordnet as swn
def sentiwordnetanalysis(pos_data):
sentiment = 0
tokens_count = 0
for word, pos in pos_data:
if not pos:
continue
lemma = wordnet_lemmatizer.lemmatize(word, pos=pos)
if not lemma:
continue
synsets = wordnet.synsets(lemma, pos=pos)
if not synsets:
continue
# Take the first sense, the most common
synset = synsets[0]
swn_synset = swn.senti_synset(synset.name())
sentiment += swn_synset.pos_score() - swn_synset.neg_score()
tokens_count += 1
# print(swn_synset.pos_score(),swn_synset.neg_score(),swn_synset.obj_score())
if not tokens_count:
return 0
if sentiment>0:
return "Positive"
if sentiment==0:
return "Neutral"
else:
return "Negative"
df['SWN analysis'] = df['POS tagged'].apply(sentiwordnetanalysis)
df.head()
df['SWN analysis'].value_counts()
###Output
_____no_output_____
###Markdown
Changing the function to output sentiment score
###Code
def sentiwordnetscore(pos_data):
sentiment = 0
tokens_count = 0
for word, pos in pos_data:
if not pos:
continue
lemma = wordnet_lemmatizer.lemmatize(word, pos=pos)
if not lemma:
continue
synsets = wordnet.synsets(lemma, pos=pos)
if not synsets:
continue
# Take the first sense, the most common
synset = synsets[0]
swn_synset = swn.senti_synset(synset.name())
sentiment += swn_synset.pos_score() - swn_synset.neg_score()
tokens_count += 1
    return sentiment
df['SWN_score'] = df['POS tagged'].apply(sentiwordnetscore)
df.head()
###Output
_____no_output_____
###Markdown
Final score Finding the final score by taking the average of the scores of VADER, TextBlob and SentiWordNet, as this gives a more generalized prediction that represents all three models.
###Code
df['average1'] = df.apply(lambda x:(x['compound']+x['TextBlob_Polarity']+x['SWN_score'])/(3), axis=1) #Finding the average score
df.head()
df["newest_score"] = df["average1"].apply(getAnalysis ) # Adding the analysis
df.head()
###Output
_____no_output_____
###Markdown
Comparing the analysis of all the models
###Code
print(df["new_score"].value_counts())
print(df['TextBlob_Analysis'].value_counts())
print(df['sentiment_type'].value_counts())
print(df['SWN analysis'].value_counts())
print(df['newest_score'].value_counts())
df.head()
###Output
_____no_output_____
###Markdown
Cleaning the dataset by removing unwanted columns and rearranging and renaming the useful ones
###Code
df_final=df.drop(["Cleaned Tweetcaptions","nNegativeScore","npostiveScore","ncompound","nTextBlob_Polarity","nneutralScore","POS tagged","new_score","average0"],axis=1)
df_final.head()
df_final.rename(columns={'NegativeScore': 'Vader_negScore', 'positiveScore': 'Vader_posScore', 'neutralScore': 'Vader_neuScore', 'compound': 'Vader_compoundScore', 'sentiment_type': 'Vader_analysis', 'average1': 'newScore', 'newest_score': 'combinedAnalysis'}, inplace=True)
df_final.head()
oldcols=df_final.columns
oldcols
newcols=['Data', 'Date', 'Time', 'tweetcaption', 'Lemma', 'Vader_negScore',
'Vader_posScore', 'Vader_neuScore', 'Vader_compoundScore',
'Vader_analysis', 'TextBlob_Polarity', 'TextBlob_Analysis',
'SWN_score','SWN analysis', 'newScore', 'combinedAnalysis']
df_final = df_final.reindex(columns=newcols)
df_final.head()
print(df_final['TextBlob_Analysis'].value_counts())
print(df_final['Vader_analysis'].value_counts())
print(df_final['SWN analysis'].value_counts())
print(df_final['combinedAnalysis'].value_counts())
###Output
Positive 29343
Negative 5675
Neutral 248
Name: TextBlob_Analysis, dtype: int64
POSITIVE 27652
NEGATIVE 7468
NEUTRAL 146
Name: Vader_analysis, dtype: int64
Neutral 23928
Positive 8337
Negative 2991
Name: SWN analysis, dtype: int64
Positive 27851
Negative 7347
Neutral 68
Name: combinedAnalysis, dtype: int64
|
programming practice 1.ipynb
|
###Markdown
problem 3
###Code
import pandas as pd
data ={"R&D Spend": [5354, 256, 2662, 2827, 2782, 2772, 2726],
"Administration": [7376.3, 6736.3, 6363.4, 3767.4, 6363.4, 6363.5, 6363.6],
"Marketing spend": [66367.56, 74774.5, 3767363.4, 4377.5, 7363.5, 77474.5, 73673.5],
"State":["Newyork","Florida","Carlifonia","New york","Texas","Florida","Carlifonia"],
"Profit": [673660.4, 736730.5, 7836730.7, 8746740.5, 74640.4, 78360.5, 7847640.4]}
data = pd.DataFrame(data)
print(data)
data.to_excel('data.xlsx')
###Output
_____no_output_____
###Markdown
problem 4
###Code
d= float(input("Enter distance"))
t= float(input("Enter time:"))
s = d/t
print("Speed =", s, "Miles/hour")
###Output
Enter distance50
Enter time:2
Speed = 25.0 Miles/hour
|
13 Binary Trees - 1/13.10 Number of Leaf Nodes.ipynb
|
###Markdown
Examples: a single node `0` has 1 leaf node; a tree with root `0` and two children `1` and `2` has 2 leaf nodes.
###Code
class BinaryTreeNode:
def __init__(self, data):
self.data = data
self.left = None
self.right = None
def treeInput():
rootData = int(input())
if rootData == -1:
return None
root = BinaryTreeNode(rootData)
root.left = treeInput()
root.right = treeInput()
return root
def numLeafNodes(root):
if root == None:
return 0
if root.left == None and root.right == None:
return 1
numLeafLeft = numLeafNodes(root.left)
numLeafRight = numLeafNodes(root.right)
return numLeafLeft + numLeafRight
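# Note: printTreeDetailed is not defined in this notebook; below is a minimal
# sketch (assumed helper) matching the "node:L left,R right" output shown further down.
def printTreeDetailed(root):
    if root == None:
        return
    print(root.data, end=":")
    if root.left != None:
        print("L", root.left.data, sep=" ", end=",")
    if root.right != None:
        print("R", root.right.data, sep=" ", end="")
    print()
    printTreeDetailed(root.left)
    printTreeDetailed(root.right)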
root = treeInput()
printTreeDetailed(root)
print("Number of Leaf Nodes = ", numLeafNodes(root))
###Output
8
3
10
1
6
-1
-1
14
-1
-1
4
7
13
-1
-1
-1
-1
-1
-1
8:L 3,
3:L 10,
10:L 1,R 4
1:L 6,R 14
6:
14:
4:L 7,
7:L 13,
13:
Number of Leaf Nodes = 3
|
pytorch-lightning_ipynb/mlp/mlp-batchnorm.ipynb
|
###Markdown
Model Zoo -- Multilayer Perceptron with BatchNorm General settings and hyperparameters - Here, we specify some general hyperparameter values and general settings- Note that for small datasets, it is not necessary, and often better not, to use multiple workers, as it can sometimes cause issues with too many open files in PyTorch. So, if you have problems with the data loader later, try setting `NUM_WORKERS = 0` instead.
###Code
BATCH_SIZE = 256
NUM_EPOCHS = 20
LEARNING_RATE = 0.005
NUM_WORKERS = 4
###Output
_____no_output_____
###Markdown
Implementing a Neural Network using PyTorch Lightning's `LightningModule` - In this section, we set up the main model architecture using the `LightningModule` from PyTorch Lightning.- We start with defining our neural network model in pure PyTorch, and then we use it in the `LightningModule` to get all the extra benefits that PyTorch Lightning provides.
###Code
import torch
import torch.nn.functional as F
# Regular PyTorch Module
class PyTorchMLP(torch.nn.Module):
def __init__(self, input_size, hidden_units, num_classes):
super().__init__()
# Initialize MLP layers
all_layers = []
for hidden_unit in hidden_units:
layer = torch.nn.Linear(input_size, hidden_unit, bias=False)
all_layers.append(layer)
all_layers.append(torch.nn.BatchNorm1d(hidden_unit))
all_layers.append(torch.nn.ReLU())
input_size = hidden_unit
output_layer = torch.nn.Linear(
in_features=hidden_units[-1],
out_features=num_classes)
all_layers.append(output_layer)
self.model = torch.nn.Sequential(*all_layers)
def forward(self, x):
x = torch.flatten(x, start_dim=1) # to make it work for image inputs
x = self.model(x)
return x # x are the model's logits
import pytorch_lightning as pl
import torchmetrics
# LightningModule that receives a PyTorch model as input
class LightningMLP(pl.LightningModule):
def __init__(self, model):
super().__init__()
# The inherited PyTorch module
self.model = model
# Save hyperparameters to the log directory
self.save_hyperparameters()
# Set up attributes for computing the accuracy
self.train_acc = torchmetrics.Accuracy()
self.valid_acc = torchmetrics.Accuracy()
self.test_acc = torchmetrics.Accuracy()
# Defining the forward method is only necessary
# if you want to use a Trainer's .predict() method (optional)
def forward(self, x):
return self.model(x)
# A common forward step to compute the loss and labels
# this is used for training, validation, and testing below
def _shared_step(self, batch):
features, true_labels = batch
logits = self(features)
loss = torch.nn.functional.cross_entropy(logits, true_labels)
predicted_labels = torch.argmax(logits, dim=1)
return loss, true_labels, predicted_labels
def training_step(self, batch, batch_idx):
loss, _, _ = self._shared_step(batch)
self.log("train_loss", loss)
# To account for BatchNorm behavior during evaluation
self.model.eval()
with torch.no_grad():
_, true_labels, predicted_labels = self._shared_step(batch)
self.train_acc.update(predicted_labels, true_labels)
self.log("train_acc", self.train_acc, on_epoch=True, on_step=False)
self.model.train()
return loss
def validation_step(self, batch, batch_idx):
loss, true_labels, predicted_labels = self._shared_step(batch)
self.log("valid_loss", loss)
self.valid_acc.update(predicted_labels, true_labels)
self.log("valid_acc", self.valid_acc,
on_epoch=True, on_step=False, prog_bar=True)
def test_step(self, batch, batch_idx):
loss, true_labels, predicted_labels = self._shared_step(batch)
self.test_acc.update(predicted_labels, true_labels)
self.log("test_acc", self.test_acc, on_epoch=True, on_step=False)
def configure_optimizers(self):
optimizer = torch.optim.Adam(self.parameters(), lr=LEARNING_RATE)
return optimizer
###Output
_____no_output_____
###Markdown
Setting up the dataset - In this section, we are going to set up our dataset. Inspecting the dataset
###Code
import torch
from torchvision import datasets
from torchvision import transforms
from torch.utils.data import DataLoader
train_dataset = datasets.MNIST(root='./data',
train=True,
transform=transforms.ToTensor(),
download=True)
train_loader = DataLoader(dataset=train_dataset,
batch_size=BATCH_SIZE,
num_workers=NUM_WORKERS,
drop_last=True,
shuffle=True)
test_dataset = datasets.MNIST(root='./data',
train=False,
transform=transforms.ToTensor())
test_loader = DataLoader(dataset=test_dataset,
batch_size=BATCH_SIZE,
num_workers=NUM_WORKERS,
drop_last=False,
shuffle=False)
# Checking the dataset
all_train_labels = []
all_test_labels = []
for images, labels in train_loader:
all_train_labels.append(labels)
all_train_labels = torch.cat(all_train_labels)
for images, labels in test_loader:
all_test_labels.append(labels)
all_test_labels = torch.cat(all_test_labels)
print('Training labels:', torch.unique(all_train_labels))
print('Training label distribution:', torch.bincount(all_train_labels))
print('\nTest labels:', torch.unique(all_test_labels))
print('Test label distribution:', torch.bincount(all_test_labels))
###Output
Training labels: tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
Training label distribution: tensor([5914, 6737, 5949, 6121, 5833, 5409, 5908, 6253, 5838, 5942])
Test labels: tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
Test label distribution: tensor([ 980, 1135, 1032, 1010, 982, 892, 958, 1028, 974, 1009])
###Markdown
Performance baseline - Especially for imbalanced datasets, it's quite useful to compute a performance baseline.- In classification contexts, a useful baseline is to compute the accuracy for a scenario where the model always predicts the majority class -- you want your model to be better than that!
###Code
majority_prediction = torch.argmax(torch.bincount(all_test_labels))
baseline_acc = torch.mean((all_test_labels == majority_prediction).float())
print(f'Baseline ACC: {baseline_acc*100:.2f}%')
###Output
Baseline ACC: 11.35%
###Markdown
Setting up a `DataModule` - There are three main ways we can prepare the dataset for Lightning. We can 1. make the dataset part of the model; 2. set up the data loaders as usual and feed them to the fit method of a Lightning Trainer -- the Trainer is introduced in the next subsection; 3. create a LightningDataModule.- Here, we are going to use approach 3, which is the most organized approach. The `LightningDataModule` consists of several self-explanatory methods as we can see below:
###Code
import os
from torch.utils.data.dataset import random_split
from torch.utils.data import DataLoader
class DataModule(pl.LightningDataModule):
def __init__(self, data_path='./'):
super().__init__()
self.data_path = data_path
def prepare_data(self):
datasets.MNIST(root=self.data_path,
download=True)
return
def setup(self, stage=None):
# Note transforms.ToTensor() scales input images
# to 0-1 range
train = datasets.MNIST(root=self.data_path,
train=True,
transform=transforms.ToTensor(),
download=False)
self.test = datasets.MNIST(root=self.data_path,
train=False,
transform=transforms.ToTensor(),
download=False)
self.train, self.valid = random_split(train, lengths=[55000, 5000])
def train_dataloader(self):
train_loader = DataLoader(dataset=self.train,
batch_size=BATCH_SIZE,
drop_last=True,
shuffle=True,
num_workers=NUM_WORKERS)
return train_loader
def val_dataloader(self):
valid_loader = DataLoader(dataset=self.valid,
batch_size=BATCH_SIZE,
drop_last=False,
shuffle=False,
num_workers=NUM_WORKERS)
return valid_loader
def test_dataloader(self):
test_loader = DataLoader(dataset=self.test,
batch_size=BATCH_SIZE,
drop_last=False,
shuffle=False,
num_workers=NUM_WORKERS)
return test_loader
###Output
_____no_output_____
###Markdown
- Note that the `prepare_data` method is usually used for steps that only need to be executed once, for example, downloading the dataset; the `setup` method defines the dataset loading -- if you run your code in a distributed setting, this will be called on each node / GPU. - Next, let's initialize the `DataModule`; we use a random seed for reproducibility (so that the data set is shuffled the same way when we re-execute this code):
###Code
torch.manual_seed(1)
data_module = DataModule(data_path='./data')
###Output
_____no_output_____
###Markdown
Training the model using the PyTorch Lightning Trainer class - Next, we initialize our model.- Also, we define a callback so that we can obtain the model with the best validation set performance after training.- PyTorch Lightning offers [many advanced logging services](https://pytorch-lightning.readthedocs.io/en/latest/extensions/logging.html) like Weights & Biases. Here, we will keep things simple and use the `CSVLogger`:
###Code
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning.loggers import CSVLogger
pytorch_model = PyTorchMLP(
input_size=28*28,
hidden_units=(128, 256),
num_classes=10
)
lightning_model = LightningMLP(pytorch_model)
callbacks = [ModelCheckpoint(
save_top_k=1, mode='max', monitor="valid_acc")] # save top 1 model
logger = CSVLogger(save_dir="logs/", name="my-mlp")
###Output
_____no_output_____
###Markdown
- Now it's time to train our model:
###Code
trainer = pl.Trainer(
max_epochs=NUM_EPOCHS,
callbacks=callbacks,
accelerator="auto", # Uses GPUs or TPUs if available
devices="auto", # Uses all available GPUs/TPUs if applicable
logger=logger,
log_every_n_steps=100)
trainer.fit(model=lightning_model, datamodule=data_module)
###Output
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params
-----------------------------------------
0 | model | PyTorchMLP | 136 K
1 | train_acc | Accuracy | 0
2 | valid_acc | Accuracy | 0
3 | test_acc | Accuracy | 0
-----------------------------------------
136 K Trainable params
0 Non-trainable params
136 K Total params
0.546 Total estimated model params size (MB)
###Markdown
Evaluating the model - After training, let's plot our training ACC and validation ACC using pandas, which, in turn, uses matplotlib for plotting (you may want to consider a [more advanced logger](https://pytorch-lightning.readthedocs.io/en/latest/extensions/logging.html) that does that for you):
###Code
import pandas as pd
metrics = pd.read_csv(f"{trainer.logger.log_dir}/metrics.csv")
aggreg_metrics = []
agg_col = "epoch"
for i, dfg in metrics.groupby(agg_col):
agg = dict(dfg.mean())
agg[agg_col] = i
aggreg_metrics.append(agg)
df_metrics = pd.DataFrame(aggreg_metrics)
df_metrics[["train_loss", "valid_loss"]].plot(
grid=True, legend=True, xlabel='Epoch', ylabel='Loss')
df_metrics[["train_acc", "valid_acc"]].plot(
grid=True, legend=True, xlabel='Epoch', ylabel='ACC')
###Output
_____no_output_____
###Markdown
- The `trainer` automatically saves the model with the best validation accuracy for us, which we can load from the checkpoint via the `ckpt_path='best'` argument; below we use the `trainer` instance to evaluate the best model on the test set:
###Code
trainer.test(model=lightning_model, datamodule=data_module, ckpt_path='best')
###Output
Restoring states from the checkpoint path at logs/my-mlp/version_9/checkpoints/epoch=13-step=2995.ckpt
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Loaded model weights from checkpoint at logs/my-mlp/version_9/checkpoints/epoch=13-step=2995.ckpt
###Markdown
Predicting labels of new data - You can use the `trainer.predict` method on a new `DataLoader` or `DataModule` to apply the model to new data.- Alternatively, you can also manually load the best model from a checkpoint as shown below:
###Code
path = f'{trainer.logger.log_dir}/checkpoints/epoch=13-step=2995.ckpt'
lightning_model = LightningMLP.load_from_checkpoint(path)
###Output
_____no_output_____
###Markdown
- Note that our PyTorch model, which is passed to the Lightning model, requires input arguments. However, this is automatically taken care of since we used `self.save_hyperparameters()` in our LightningModule's `__init__` method.- Now, below is an example of applying the model manually. Here, pretend that the `test_dataloader` is a new data loader.
###Code
test_dataloader = data_module.test_dataloader()
all_predicted_labels = []
for batch in test_dataloader:
features, _ = batch
logits = lightning_model.model(features)
predicted_labels = torch.argmax(logits, dim=1)
all_predicted_labels.append(predicted_labels)
all_predicted_labels = torch.cat(all_predicted_labels)
all_predicted_labels[:5]
###Output
_____no_output_____
###Markdown
Model Zoo -- Multilayer Perceptron with BatchNorm General settings and hyperparameters - Here, we specify some general hyperparameter values and general settings- Note that for small datasets, it is not necessary, and often better not, to use multiple workers, as it can sometimes cause issues with too many open files in PyTorch. So, if you have problems with the data loader later, try setting `NUM_WORKERS = 0` instead.
###Code
BATCH_SIZE = 256
NUM_EPOCHS = 20
LEARNING_RATE = 0.005
NUM_WORKERS = 4
###Output
_____no_output_____
###Markdown
Implementing a Neural Network using PyTorch Lightning's `LightningModule` - In this section, we set up the main model architecture using the `LightningModule` from PyTorch Lightning.- We start with defining our neural network model in pure PyTorch, and then we use it in the `LightningModule` to get all the extra benefits that PyTorch Lightning provides.
###Code
import torch
import torch.nn.functional as F
# Regular PyTorch Module
class PyTorchMLP(torch.nn.Module):
def __init__(self, input_size, hidden_units, num_classes):
super().__init__()
# Initialize MLP layers
all_layers = []
for hidden_unit in hidden_units:
layer = torch.nn.Linear(input_size, hidden_unit, bias=False)
all_layers.append(layer)
all_layers.append(torch.nn.BatchNorm1d(hidden_unit))
all_layers.append(torch.nn.ReLU())
input_size = hidden_unit
output_layer = torch.nn.Linear(
in_features=hidden_units[-1],
out_features=num_classes)
all_layers.append(output_layer)
self.layers = torch.nn.Sequential(*all_layers)
def forward(self, x):
x = torch.flatten(x, start_dim=1) # to make it work for image inputs
x = self.layers(x)
return x # x are the model's logits
import pytorch_lightning as pl
import torchmetrics
# LightningModule that receives a PyTorch model as input
class LightningMLP(pl.LightningModule):
def __init__(self, model, learning_rate):
super().__init__()
self.learning_rate = learning_rate
# The inherited PyTorch module
self.model = model
# Save settings and hyperparameters to the log directory
# but skip the model parameters
self.save_hyperparameters(ignore=['model'])
# Set up attributes for computing the accuracy
self.train_acc = torchmetrics.Accuracy()
self.valid_acc = torchmetrics.Accuracy()
self.test_acc = torchmetrics.Accuracy()
# Defining the forward method is only necessary
# if you want to use a Trainer's .predict() method (optional)
def forward(self, x):
return self.model(x)
# A common forward step to compute the loss and labels
# this is used for training, validation, and testing below
def _shared_step(self, batch):
features, true_labels = batch
logits = self(features)
loss = torch.nn.functional.cross_entropy(logits, true_labels)
predicted_labels = torch.argmax(logits, dim=1)
return loss, true_labels, predicted_labels
def training_step(self, batch, batch_idx):
loss, true_labels, predicted_labels = self._shared_step(batch)
self.log("train_loss", loss)
# To account for BatchNorm behavior during evaluation
self.model.eval()
with torch.no_grad():
_, true_labels, predicted_labels = self._shared_step(batch)
self.train_acc.update(predicted_labels, true_labels)
self.log("train_acc", self.train_acc, on_epoch=True, on_step=False)
self.model.train()
        return loss # this is passed to the optimizer for training
def validation_step(self, batch, batch_idx):
loss, true_labels, predicted_labels = self._shared_step(batch)
self.log("valid_loss", loss)
self.valid_acc(predicted_labels, true_labels)
self.log("valid_acc", self.valid_acc,
on_epoch=True, on_step=False, prog_bar=True)
def test_step(self, batch, batch_idx):
loss, true_labels, predicted_labels = self._shared_step(batch)
self.test_acc(predicted_labels, true_labels)
self.log("test_acc", self.test_acc, on_epoch=True, on_step=False)
def configure_optimizers(self):
optimizer = torch.optim.Adam(self.parameters(), lr=self.learning_rate)
return optimizer
###Output
_____no_output_____
###Markdown
Setting up the dataset - In this section, we are going to set up our dataset. Inspecting the dataset
###Code
import torch
from torchvision import datasets
from torchvision import transforms
from torch.utils.data import DataLoader
train_dataset = datasets.MNIST(root='./data',
train=True,
transform=transforms.ToTensor(),
download=True)
train_loader = DataLoader(dataset=train_dataset,
batch_size=BATCH_SIZE,
num_workers=NUM_WORKERS,
drop_last=True,
shuffle=True)
test_dataset = datasets.MNIST(root='./data',
train=False,
transform=transforms.ToTensor())
test_loader = DataLoader(dataset=test_dataset,
batch_size=BATCH_SIZE,
num_workers=NUM_WORKERS,
drop_last=False,
shuffle=False)
# Checking the dataset
all_train_labels = []
all_test_labels = []
for images, labels in train_loader:
all_train_labels.append(labels)
all_train_labels = torch.cat(all_train_labels)
for images, labels in test_loader:
all_test_labels.append(labels)
all_test_labels = torch.cat(all_test_labels)
print('Training labels:', torch.unique(all_train_labels))
print('Training label distribution:', torch.bincount(all_train_labels))
print('\nTest labels:', torch.unique(all_test_labels))
print('Test label distribution:', torch.bincount(all_test_labels))
###Output
Training labels: tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
Training label distribution: tensor([5914, 6730, 5946, 6124, 5835, 5413, 5906, 6259, 5840, 5937])
Test labels: tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
Test label distribution: tensor([ 980, 1135, 1032, 1010, 982, 892, 958, 1028, 974, 1009])
###Markdown
Performance baseline - Especially for imbalanced datasets, it's quite useful to compute a performance baseline.- In classification contexts, a useful baseline is to compute the accuracy for a scenario where the model always predicts the majority class -- you want your model to be better than that!
###Code
majority_prediction = torch.argmax(torch.bincount(all_test_labels))
baseline_acc = torch.mean((all_test_labels == majority_prediction).float())
print(f'Baseline ACC: {baseline_acc*100:.2f}%')
###Output
Baseline ACC: 11.35%
###Markdown
Setting up a `DataModule` - There are three main ways we can prepare the dataset for Lightning. We can 1. make the dataset part of the model; 2. set up the data loaders as usual and feed them to the fit method of a Lightning Trainer -- the Trainer is introduced in the next subsection; 3. create a LightningDataModule.- Here, we are going to use approach 3, which is the most organized approach. The `LightningDataModule` consists of several self-explanatory methods as we can see below:
###Code
import os
from torch.utils.data.dataset import random_split
from torch.utils.data import DataLoader
class DataModule(pl.LightningDataModule):
def __init__(self, data_path='./'):
super().__init__()
self.data_path = data_path
def prepare_data(self):
datasets.MNIST(root=self.data_path,
download=True)
return
def setup(self, stage=None):
# Note transforms.ToTensor() scales input images
# to 0-1 range
train = datasets.MNIST(root=self.data_path,
train=True,
transform=transforms.ToTensor(),
download=False)
self.test = datasets.MNIST(root=self.data_path,
train=False,
transform=transforms.ToTensor(),
download=False)
self.train, self.valid = random_split(train, lengths=[55000, 5000])
def train_dataloader(self):
train_loader = DataLoader(dataset=self.train,
batch_size=BATCH_SIZE,
drop_last=True,
shuffle=True,
num_workers=NUM_WORKERS)
return train_loader
def val_dataloader(self):
valid_loader = DataLoader(dataset=self.valid,
batch_size=BATCH_SIZE,
drop_last=False,
shuffle=False,
num_workers=NUM_WORKERS)
return valid_loader
def test_dataloader(self):
test_loader = DataLoader(dataset=self.test,
batch_size=BATCH_SIZE,
drop_last=False,
shuffle=False,
num_workers=NUM_WORKERS)
return test_loader
###Output
_____no_output_____
###Markdown
- Note that the `prepare_data` method is usually used for steps that only need to be executed once, for example, downloading the dataset; the `setup` method defines the dataset loading -- if you run your code in a distributed setting, this will be called on each node / GPU. - Next, let's initialize the `DataModule`; we use a random seed for reproducibility (so that the data set is shuffled the same way when we re-execute this code):
###Code
torch.manual_seed(1)
data_module = DataModule(data_path='./data')
###Output
_____no_output_____
###Markdown
Training the model using the PyTorch Lightning Trainer class - Next, we initialize our model.- Also, we define a callback so that we can obtain the model with the best validation set performance after training.- PyTorch Lightning offers [many advanced logging services](https://pytorch-lightning.readthedocs.io/en/latest/extensions/logging.html) like Weights & Biases. Here, we will keep things simple and use the `CSVLogger`:
###Code
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning.loggers import CSVLogger
pytorch_model = PyTorchMLP(
input_size=28*28,
hidden_units=(128, 256),
num_classes=10
)
lightning_model = LightningMLP(
pytorch_model, learning_rate=LEARNING_RATE)
callbacks = [ModelCheckpoint(
save_top_k=1, mode='max', monitor="valid_acc")] # save top 1 model
logger = CSVLogger(save_dir="logs/", name="my-mlp")
###Output
_____no_output_____
###Markdown
- Now it's time to train our model:
###Code
import time
trainer = pl.Trainer(
max_epochs=NUM_EPOCHS,
callbacks=callbacks,
progress_bar_refresh_rate=50, # recommended for notebooks
accelerator="auto", # Uses GPUs or TPUs if available
devices="auto", # Uses all available GPUs/TPUs if applicable
logger=logger,
deterministic=True,
log_every_n_steps=100)
start_time = time.time()
trainer.fit(model=lightning_model, datamodule=data_module)
runtime = (time.time() - start_time)/60
print(f"Training took {runtime:.2f} min in total.")
###Output
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params
-----------------------------------------
0 | model | PyTorchMLP | 136 K
1 | train_acc | Accuracy | 0
2 | valid_acc | Accuracy | 0
3 | test_acc | Accuracy | 0
-----------------------------------------
136 K Trainable params
0 Non-trainable params
136 K Total params
0.546 Total estimated model params size (MB)
###Markdown
Evaluating the model - After training, let's plot our training ACC and validation ACC using pandas, which, in turn, uses matplotlib for plotting (you may want to consider a [more advanced logger](https://pytorch-lightning.readthedocs.io/en/latest/extensions/logging.html) that does that for you):
###Code
import pandas as pd
metrics = pd.read_csv(f"{trainer.logger.log_dir}/metrics.csv")
aggreg_metrics = []
agg_col = "epoch"
for i, dfg in metrics.groupby(agg_col):
agg = dict(dfg.mean())
agg[agg_col] = i
aggreg_metrics.append(agg)
df_metrics = pd.DataFrame(aggreg_metrics)
df_metrics[["train_loss", "valid_loss"]].plot(
grid=True, legend=True, xlabel='Epoch', ylabel='Loss')
df_metrics[["train_acc", "valid_acc"]].plot(
grid=True, legend=True, xlabel='Epoch', ylabel='ACC')
###Output
_____no_output_____
###Markdown
- The `trainer` automatically saves the model with the best validation accuracy for us, which we can load from the checkpoint via the `ckpt_path='best'` argument; below we use the `trainer` instance to evaluate the best model on the test set:
###Code
trainer.test(model=lightning_model, datamodule=data_module, ckpt_path='best')
###Output
Restoring states from the checkpoint path at logs/my-mlp/version_16/checkpoints/epoch=13-step=2995.ckpt
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Loaded model weights from checkpoint at logs/my-mlp/version_16/checkpoints/epoch=13-step=2995.ckpt
###Markdown
Predicting labels of new data - You can use the `trainer.predict` method on a new `DataLoader` or `DataModule` to apply the model to new data.- Alternatively, you can also manually load the best model from a checkpoint as shown below:
###Code
path = trainer.checkpoint_callback.best_model_path
print(path)
lightning_model = LightningMLP.load_from_checkpoint(path, model=pytorch_model)
lightning_model.eval();
###Output
_____no_output_____
###Markdown
- Note that our PyTorch model, which is passed to the Lightning model, requires input arguments. However, this is automatically taken care of since we used `self.save_hyperparameters()` in our LightningModule's `__init__` method.- Now, below is an example of applying the model manually. Here, pretend that the `test_dataloader` is a new data loader.
###Code
test_dataloader = data_module.test_dataloader()
all_true_labels = []
all_predicted_labels = []
for batch in test_dataloader:
features, labels = batch
with torch.no_grad(): # since we don't need to backprop
logits = lightning_model(features)
predicted_labels = torch.argmax(logits, dim=1)
all_predicted_labels.append(predicted_labels)
all_true_labels.append(labels)
all_predicted_labels = torch.cat(all_predicted_labels)
all_true_labels = torch.cat(all_true_labels)
all_predicted_labels[:5]
###Output
_____no_output_____
###Markdown
Just as an internal check, if the model was loaded correctly, the test accuracy below should be identical to the test accuracy we saw earlier in the previous section.
###Code
test_acc = torch.mean((all_predicted_labels == all_true_labels).float())
print(f'Test accuracy: {test_acc:.4f} ({test_acc*100:.2f}%)')
###Output
Test accuracy: 0.9813 (98.13%)
###Markdown
The three extensions below are optional, for more information, see- `watermark`: https://github.com/rasbt/watermark- `pycodestyle_magic`: https://github.com/mattijn/pycodestyle_magic- `nb_black`: https://github.com/dnanhkhoa/nb_black
###Code
%load_ext pycodestyle_magic
%flake8_on --ignore W291,W293,E703,E402 --max_line_length=100
%load_ext nb_black
###Output
_____no_output_____
###Markdown
Multilayer Perceptron with BatchNorm trained on MNIST A simple multilayer perceptron [1][2] with BatchNorm [3][4][5] trained on MNIST [6]. References- [1] https://en.wikipedia.org/wiki/Multilayer_perceptron- [2] L9.1 Multilayer Perceptron Architecture (24:24): https://www.youtube.com/watch?v=IUylp47hNA0- [3] https://en.wikipedia.org/wiki/Batch_normalization- [4] L11.2 How BatchNorm Works (15:14): https://www.youtube.com/watch?v=34PDIFvvESc- [5] Batch normalization: Accelerating deep network training by reducing internal covariate shift, http://proceedings.mlr.press/v37/ioffe15.html- [6] https://en.wikipedia.org/wiki/MNIST_database General settings and hyperparameters - Here, we specify some general hyperparameter values and general settings.
###Code
HIDDEN_UNITS = (128, 256)
BATCH_SIZE = 256
NUM_EPOCHS = 10
LEARNING_RATE = 0.005
NUM_WORKERS = 4
DROPOUT_PROBA = 0.5
###Output
_____no_output_____
###Markdown
- Note that using multiple workers can sometimes cause issues with too many open files in PyTorch for small datasets. If we have problems with the data loader later, try setting `NUM_WORKERS = 0` and reload the notebook. Implementing a Neural Network using PyTorch Lightning's `LightningModule` - In this section, we set up the main model architecture using the `LightningModule` from PyTorch Lightning.- In essence, `LightningModule` is a wrapper around a PyTorch module.- We start with defining our neural network model in pure PyTorch, and then we use it in the `LightningModule` to get all the extra benefits that PyTorch Lightning provides.
###Code
import torch
import torch.nn.functional as F
# Regular PyTorch Module
class PyTorchModel(torch.nn.Module):
def __init__(self, input_size, hidden_units, num_classes):
super().__init__()
# Initialize MLP layers
all_layers = []
for hidden_unit in hidden_units:
layer = torch.nn.Linear(input_size, hidden_unit, bias=False)
all_layers.append(layer)
all_layers.append(torch.nn.BatchNorm1d(hidden_unit))
all_layers.append(torch.nn.ReLU())
input_size = hidden_unit
output_layer = torch.nn.Linear(
in_features=hidden_units[-1],
out_features=num_classes)
all_layers.append(output_layer)
self.layers = torch.nn.Sequential(*all_layers)
def forward(self, x):
x = torch.flatten(x, start_dim=1) # to make it work for image inputs
x = self.layers(x)
return x # x are the model's logits
# %load ../code_lightningmodule/lightningmodule_classifier_basic.py
import pytorch_lightning as pl
import torchmetrics
# LightningModule that receives a PyTorch model as input
class LightningModel(pl.LightningModule):
def __init__(self, model, learning_rate):
super().__init__()
self.learning_rate = learning_rate
# The inherited PyTorch module
self.model = model
if hasattr(model, "dropout_proba"):
self.dropout_proba = model.dropout_proba
# Save settings and hyperparameters to the log directory
# but skip the model parameters
self.save_hyperparameters(ignore=["model"])
# Set up attributes for computing the accuracy
self.train_acc = torchmetrics.Accuracy()
self.valid_acc = torchmetrics.Accuracy()
self.test_acc = torchmetrics.Accuracy()
# Defining the forward method is only necessary
# if you want to use a Trainer's .predict() method (optional)
def forward(self, x):
return self.model(x)
# A common forward step to compute the loss and labels
# this is used for training, validation, and testing below
def _shared_step(self, batch):
features, true_labels = batch
logits = self(features)
loss = torch.nn.functional.cross_entropy(logits, true_labels)
predicted_labels = torch.argmax(logits, dim=1)
return loss, true_labels, predicted_labels
def training_step(self, batch, batch_idx):
loss, true_labels, predicted_labels = self._shared_step(batch)
self.log("train_loss", loss)
# Do another forward pass in .eval() mode to compute accuracy
        # while accounting for Dropout, BatchNorm etc. behavior
# during evaluation (inference)
self.model.eval()
with torch.no_grad():
_, true_labels, predicted_labels = self._shared_step(batch)
self.train_acc(predicted_labels, true_labels)
self.log("train_acc", self.train_acc, on_epoch=True, on_step=False)
self.model.train()
        return loss # this is passed to the optimizer for training
def validation_step(self, batch, batch_idx):
loss, true_labels, predicted_labels = self._shared_step(batch)
self.log("valid_loss", loss)
self.valid_acc(predicted_labels, true_labels)
self.log(
"valid_acc",
self.valid_acc,
on_epoch=True,
on_step=False,
prog_bar=True,
)
def test_step(self, batch, batch_idx):
loss, true_labels, predicted_labels = self._shared_step(batch)
self.test_acc(predicted_labels, true_labels)
self.log("test_acc", self.test_acc, on_epoch=True, on_step=False)
def configure_optimizers(self):
optimizer = torch.optim.Adam(self.parameters(), lr=self.learning_rate)
return optimizer
###Output
_____no_output_____
###Markdown
Setting up the dataset - In this section, we are going to set up our dataset. Inspecting the dataset
###Code
# %load ../code_dataset/dataset_mnist_check.py
from collections import Counter
from torchvision import datasets
from torchvision import transforms
from torch.utils.data import DataLoader
train_dataset = datasets.MNIST(
root="./data", train=True, transform=transforms.ToTensor(), download=True
)
train_loader = DataLoader(
dataset=train_dataset,
batch_size=BATCH_SIZE,
num_workers=NUM_WORKERS,
drop_last=True,
shuffle=True,
)
test_dataset = datasets.MNIST(
root="./data", train=False, transform=transforms.ToTensor()
)
test_loader = DataLoader(
dataset=test_dataset,
batch_size=BATCH_SIZE,
num_workers=NUM_WORKERS,
drop_last=False,
shuffle=False,
)
train_counter = Counter()
for images, labels in train_loader:
train_counter.update(labels.tolist())
test_counter = Counter()
for images, labels in test_loader:
test_counter.update(labels.tolist())
print("\nTraining label distribution:")
sorted(train_counter.items())
print("\nTest label distribution:")
sorted(test_counter.items())
###Output
Training label distribution:
Test label distribution:
###Markdown
Performance baseline - Especially for imbalanced datasets, it's pretty helpful to compute a performance baseline.- In classification contexts, a useful baseline is to compute the accuracy for a scenario where the model always predicts the majority class -- we want our model to be better than that!
###Code
# %load ../code_dataset/performance_baseline.py
majority_class = test_counter.most_common(1)[0]
print("Majority class:", majority_class[0])
baseline_acc = majority_class[1] / sum(test_counter.values())
print("Accuracy when always predicting the majority class:")
print(f"{baseline_acc:.2f} ({baseline_acc*100:.2f}%)")
###Output
Majority class: 1
Accuracy when always predicting the majority class:
0.11 (11.35%)
###Markdown
A quick visual check
###Code
# %load ../code_dataset/plot_visual-check_basic.py
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import torchvision
for images, labels in train_loader:
break
plt.figure(figsize=(8, 8))
plt.axis("off")
plt.title("Training images")
plt.imshow(np.transpose(torchvision.utils.make_grid(
images[:64],
padding=2,
normalize=True),
(1, 2, 0)))
plt.show()
###Output
_____no_output_____
###Markdown
Setting up a `DataModule` - There are three main ways we can prepare the dataset for Lightning. We can 1. make the dataset part of the model; 2. set up the data loaders as usual and feed them to the fit method of a Lightning Trainer -- the Trainer is introduced in the following subsection; 3. create a LightningDataModule.- Here, we will use approach 3, which is the most organized approach. The `LightningDataModule` consists of several self-explanatory methods, as we can see below:
###Code
# %load ../code_lightningmodule/datamodule_mnist_basic.py
from torch.utils.data.dataset import random_split
class DataModule(pl.LightningDataModule):
def __init__(self, data_path="./"):
super().__init__()
self.data_path = data_path
def prepare_data(self):
datasets.MNIST(root=self.data_path, download=True)
return
def setup(self, stage=None):
# Note transforms.ToTensor() scales input images
# to 0-1 range
train = datasets.MNIST(
root=self.data_path,
train=True,
transform=transforms.ToTensor(),
download=False,
)
self.test = datasets.MNIST(
root=self.data_path,
train=False,
transform=transforms.ToTensor(),
download=False,
)
self.train, self.valid = random_split(train, lengths=[55000, 5000])
def train_dataloader(self):
train_loader = DataLoader(
dataset=self.train,
batch_size=BATCH_SIZE,
drop_last=True,
shuffle=True,
num_workers=NUM_WORKERS,
)
return train_loader
def val_dataloader(self):
valid_loader = DataLoader(
dataset=self.valid,
batch_size=BATCH_SIZE,
drop_last=False,
shuffle=False,
num_workers=NUM_WORKERS,
)
return valid_loader
def test_dataloader(self):
test_loader = DataLoader(
dataset=self.test,
batch_size=BATCH_SIZE,
drop_last=False,
shuffle=False,
num_workers=NUM_WORKERS,
)
return test_loader
###Output
_____no_output_____
###Markdown
- Note that the `prepare_data` method is usually used for steps that only need to be executed once, for example, downloading the dataset; the `setup` method defines the dataset loading -- if we run our code in a distributed setting, this will be called on each node / GPU. - Next, let's initialize the `DataModule`; we use a random seed for reproducibility (so that the data set is shuffled the same way when we re-execute this code):
###Code
torch.manual_seed(1)
data_module = DataModule(data_path='./data')
###Output
_____no_output_____
###Markdown
Training the model using the PyTorch Lightning Trainer class - Next, we initialize our model.- Also, we define a callback to obtain the model with the best validation set performance after training.- PyTorch Lightning offers [many advanced logging services](https://pytorch-lightning.readthedocs.io/en/latest/extensions/logging.html) like Weights & Biases. However, here, we will keep things simple and use the `CSVLogger`:
###Code
pytorch_model = PyTorchModel(
input_size=28*28,
hidden_units=HIDDEN_UNITS,
num_classes=10
)
# %load ../code_lightningmodule/logger_csv_acc_basic.py
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning.loggers import CSVLogger
lightning_model = LightningModel(pytorch_model, learning_rate=LEARNING_RATE)
callbacks = [
ModelCheckpoint(
save_top_k=1, mode="max", monitor="valid_acc"
) # save top 1 model
]
logger = CSVLogger(save_dir="logs/", name="my-model")
###Output
_____no_output_____
###Markdown
- Now it's time to train our model:
###Code
# %load ../code_lightningmodule/trainer_nb_basic.py
import time
trainer = pl.Trainer(
max_epochs=NUM_EPOCHS,
callbacks=callbacks,
progress_bar_refresh_rate=50, # recommended for notebooks
accelerator="auto", # Uses GPUs or TPUs if available
devices="auto", # Uses all available GPUs/TPUs if applicable
logger=logger,
deterministic=True,
log_every_n_steps=10,
)
start_time = time.time()
trainer.fit(model=lightning_model, datamodule=data_module)
runtime = (time.time() - start_time) / 60
print(f"Training took {runtime:.2f} min in total.")
###Output
/home/jovyan/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/callback_connector.py:90: LightningDeprecationWarning: Setting `Trainer(progress_bar_refresh_rate=50)` is deprecated in v1.5 and will be removed in v1.7. Please pass `pytorch_lightning.callbacks.progress.TQDMProgressBar` with `refresh_rate` directly to the Trainer's `callbacks` argument instead. Or, to disable the progress bar pass `enable_progress_bar = False` to the Trainer.
rank_zero_deprecation(
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params
-------------------------------------------
0 | model | PyTorchModel | 136 K
1 | train_acc | Accuracy | 0
2 | valid_acc | Accuracy | 0
3 | test_acc | Accuracy | 0
-------------------------------------------
136 K Trainable params
0 Non-trainable params
136 K Total params
0.546 Total estimated model params size (MB)
###Markdown
Evaluating the model - After training, let's plot our training ACC and validation ACC using pandas, which, in turn, uses matplotlib for plotting (PS: you may want to check out a [more advanced logger](https://pytorch-lightning.readthedocs.io/en/latest/extensions/logging.html) later on, which takes care of it for us):
###Code
# %load ../code_lightningmodule/logger_csv_plot_basic.py
import pandas as pd
import matplotlib.pyplot as plt
metrics = pd.read_csv(f"{trainer.logger.log_dir}/metrics.csv")
aggreg_metrics = []
agg_col = "epoch"
for i, dfg in metrics.groupby(agg_col):
agg = dict(dfg.mean())
agg[agg_col] = i
aggreg_metrics.append(agg)
df_metrics = pd.DataFrame(aggreg_metrics)
df_metrics[["train_loss", "valid_loss"]].plot(
grid=True, legend=True, xlabel="Epoch", ylabel="Loss"
)
df_metrics[["train_acc", "valid_acc"]].plot(
grid=True, legend=True, xlabel="Epoch", ylabel="ACC"
)
plt.show()
###Output
_____no_output_____
###Markdown
- The `trainer` automatically saves the model with the best validation accuracy for us, which we can load from the checkpoint via the `ckpt_path='best'` argument; below we use the `trainer` instance to evaluate the best model on the test set:
###Code
trainer.test(model=lightning_model, datamodule=data_module, ckpt_path='best')
###Output
Restoring states from the checkpoint path at logs/my-model/version_27/checkpoints/epoch=9-step=2139.ckpt
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Loaded model weights from checkpoint at logs/my-model/version_27/checkpoints/epoch=9-step=2139.ckpt
###Markdown
Predicting labels of new data - We can use the `trainer.predict` method either on a new `DataLoader` (`trainer.predict(dataloaders=...)`) or `DataModule` (`trainer.predict(datamodule=...)`) to apply the model to new data.- Alternatively, we can also manually load the best model from a checkpoint as shown below:
###Code
path = trainer.checkpoint_callback.best_model_path
print(path)
lightning_model = LightningModel.load_from_checkpoint(path, model=pytorch_model)
lightning_model.eval();
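# Sketch of the `trainer.predict` route mentioned above: it applies the model's
# forward() to each batch of the given dataloader and returns a list of
# per-batch outputs (logits in this model).
batched_logits = trainer.predict(model=lightning_model, dataloaders=data_module.test_dataloader())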
###Output
_____no_output_____
###Markdown
- For simplicity, we reused our existing `pytorch_model` above. However, we could also reinitialize the `pytorch_model`, and the `.load_from_checkpoint` method would load the corresponding model weights for us from the checkpoint file.- Now, below is an example applying the model manually. Here, pretend that the `test_dataloader` is a new data loader.
###Code
# %load ../code_lightningmodule/datamodule_testloader.py
test_dataloader = data_module.test_dataloader()
acc = torchmetrics.Accuracy()
for batch in test_dataloader:
features, true_labels = batch
with torch.no_grad():
logits = lightning_model(features)
predicted_labels = torch.argmax(logits, dim=1)
acc(predicted_labels, true_labels)
predicted_labels[:5]
###Output
_____no_output_____
###Markdown
- As an internal check, if the model was loaded correctly, the test accuracy below should be identical to the test accuracy we saw earlier in the previous section.
###Code
test_acc = acc.compute()
print(f'Test accuracy: {test_acc:.4f} ({test_acc*100:.2f}%)')
###Output
Test accuracy: 0.9789 (97.89%)
###Markdown
Inspecting Failure Cases - In practice, it is often informative to look at failure cases like wrong predictions for particular training instances as it can give us some insights into the model behavior and dataset.- Inspecting failure cases can sometimes reveal interesting patterns and even highlight dataset and labeling issues.
###Code
# In the case of MNIST, the class label mapping
# is relatively trivial
class_dict = {0: 'digit 0',
1: 'digit 1',
2: 'digit 2',
3: 'digit 3',
4: 'digit 4',
5: 'digit 5',
6: 'digit 6',
7: 'digit 7',
8: 'digit 8',
9: 'digit 9'}
# %load ../code_lightningmodule/plot_failurecases_basic.py
# Append the folder that contains the
# helper_data.py, helper_plotting.py, and helper_evaluate.py
# files so we can import from them
import sys
sys.path.append("../../pytorch_ipynb")
from helper_plotting import show_examples
show_examples(
model=lightning_model, data_loader=test_dataloader, class_dict=class_dict
)
###Output
_____no_output_____
###Markdown
- In addition to inspecting failure cases visually, it is also informative to look at which classes the model confuses the most via a confusion matrix:
###Code
# %load ../code_lightningmodule/plot_confusion-matrix_basic.py
from torchmetrics import ConfusionMatrix
import matplotlib
from mlxtend.plotting import plot_confusion_matrix
cmat = ConfusionMatrix(num_classes=len(class_dict))
for x, y in test_dataloader:
with torch.no_grad():
pred = lightning_model(x)
cmat(pred, y)
cmat_tensor = cmat.compute()
cmat = cmat_tensor.numpy()
fig, ax = plot_confusion_matrix(
conf_mat=cmat,
class_names=class_dict.values(),
norm_colormap=matplotlib.colors.LogNorm()
# normed colormaps highlight the off-diagonals
# for high-accuracy models better
)
plt.show()
%watermark --iversions
###Output
matplotlib : 3.3.4
pytorch_lightning: 1.5.1
pandas : 1.4.1
torchvision : 0.11.2
numpy : 1.22.0
sys : 3.8.12 | packaged by conda-forge | (default, Oct 12 2021, 21:59:51)
[GCC 9.4.0]
torchmetrics : 0.6.2
torch : 1.10.1
|
c6/ex_6_3.ipynb
|
###Markdown
Exercise 6.3
###Code
data <- read.csv("ex_6_3.csv")
attach(data)
data
plot(x2 ~ x1, data=data)
###Output
_____no_output_____
###Markdown
(1) Cluster into 3 groups
###Code
km3 <- kmeans(data[-1], 3); km3
###Output
_____no_output_____
###Markdown
Plot the result on a scatter plot (ref https://bookdown.org/rdpeng/exdata/plotting-and-color-in-r.html ):
###Code
plot(x2 ~ x1,
col=km3$cluster, pch = 19,
data=data)
legend("topleft", legend = paste("Cluster", 1:3),
col = 1:3, pch = 19, bty = "n")
###Output
_____no_output_____
###Markdown
Plot the clusters using the first two principal components (ref https://zhuanlan.zhihu.com/p/140534259 ):
###Code
library(cluster)
clusplot(data, km3$cluster, color=TRUE, shade=TRUE, labels=2, lines=0)
###Output
_____no_output_____
###Markdown
This reflects how tightly the observations in each cluster group together. (2) Cluster into 4 groups
###Code
km4 <- kmeans(data[-1], 4); km4
plot(x2 ~ x1,
col=km4$cluster, pch = 19,
data=data)
legend("topleft", legend = paste("Cluster", 1:4),
col = 1:4, pch = 19, bty = "n")
clusplot(data, km4$cluster, color=TRUE, shade=TRUE, labels=2, lines=0)
###Output
_____no_output_____
###Markdown
(3) Manhattan (absolute) distance. Cluster into 3 groups
###Code
pam3 <- pam(data[-1], 3, metric = "manhattan"); pam3
plot(x2 ~ x1,
col=pam3$cluster, pch = 19,
data=data)
legend("topleft", legend = paste("Cluster", 1:3),
col = 1:3, pch = 19, bty = "n")
###Output
_____no_output_____
###Markdown
The result is similar to the K-means clustering. Cluster into 4 groups
###Code
pam4 <- pam(data[-1], 4, metric = "manhattan"); pam4
plot(x2 ~ x1,
col=pam4$cluster, pch = 19,
data=data)
legend("topleft", legend = paste("Cluster", 1:4),
       col = 1:4, pch = 19, bty = "n")
###Output
_____no_output_____
|
Improved_competition_folder/model_predictions.ipynb
|
###Markdown
LOAD MODEL DATA SETS
###Code
all_games_df = pd.read_csv('all_games_df.csv')
test_combos_df = pd.read_csv('test_combos_df_2015.csv')
test_combos_df = test_combos_df.sort_values(by=['ID']).reset_index(drop=True)
test_combos_df.head(3)
test_combos_df.tail()
ind_var_selected = [
'is_tourney',
'HRankPOM',
'RRankPOM',
'line',
'Hwins_top25',
'Rwins_top25',
'HPointMargin',
'RPointMargin',
'HFG',
'RFG',
'HFG3',
'RFG3',
'Hadjem',
'Hadjo',
'Hadjd',
'Hluck',
'Radjem',
'Radjo',
'Radjd',
'Rluck',
'Htourny20plus',
'Rtourny20plus',
'HBig4Conf',
'RBig4Conf',
'HSeed',
'RSeed'
]
###Output
_____no_output_____
###Markdown
Note: test is 2019 predictions but our "test" holdout set is referred to as "valid"
###Code
#prediction set 2019
test_ids = test_combos_df['ID'].reset_index(drop=True)
X_test = test_combos_df[['is_tourney','HRankPOM','RRankPOM','Hwins_top25','Rwins_top25','HPointMargin','RPointMargin','HFG','RFG','HFG3','RFG3','Hadjem','Hadjo','Hadjd','Hluck','Radjem','Radjo','Radjd','Rluck','Htourny20plus','Rtourny20plus','HBig4Conf','RBig4Conf','HSeed','RSeed']].reset_index(drop=True)
#Use tournament games from seasons after 2014 as a test set:
temp_df = all_games_df[all_games_df['Season']>2014]
temp_df = temp_df[temp_df['is_tourney']==1]
X_valid = temp_df[ind_var_selected].reset_index(drop=True)
y_valid = temp_df['Hwin'].reset_index(drop=True)
#Train on everything else:
temp_df1 = all_games_df[all_games_df['Season']>2014]
temp_df1 = temp_df1[temp_df1['is_tourney']==0]
temp_df2 = all_games_df[all_games_df['Season']<2015]
combined_temp_df = temp_df1.append(temp_df2)
X_train = combined_temp_df[ind_var_selected].reset_index(drop=True)
y_train = combined_temp_df['Hwin'].reset_index(drop=True)
#For final predictions:
X_train_orig = all_games_df[ind_var_selected].reset_index(drop=True)
y_train_orig = all_games_df['Hwin'].reset_index(drop=True)
#Create second holdout set to double-check the model is not overfit and to check model stability (season 2014)
temp_df16 = all_games_df[all_games_df['Season']==2014]
temp_df16 = temp_df16[temp_df16['is_tourney']==1]
X_valid16 = temp_df16[ind_var_selected].reset_index(drop=True)
y_valid16 = temp_df16['Hwin'].reset_index(drop=True)
temp_df1_16 = all_games_df[all_games_df['Season']==2014]
temp_df1_16 = temp_df1_16[temp_df1_16['is_tourney']==0]
temp_df2_16 = all_games_df[all_games_df['Season']!=2014]
combined_temp_df_16 = temp_df1_16.append(temp_df2_16)
X_train16 = combined_temp_df_16[ind_var_selected].reset_index(drop=True)
y_train16 = combined_temp_df_16['Hwin'].reset_index(drop=True)
X_test = X_test.astype("float64")
X_train_orig = X_train_orig.astype("float64")
y_train_orig = y_train_orig.astype("float64")
X_train = X_train.astype("float64")
X_valid = X_valid.astype("float64")
y_train = y_train.astype("float64")
y_valid = y_valid.astype("float64")
X_train16 = X_train16.astype("float64")
X_valid16 = X_valid16.astype("float64")
y_train16 = y_train16.astype("float64")
y_valid16 = y_valid16.astype("float64")
###Output
_____no_output_____
###Markdown
Scoring rules and benchmarks
###Code
def LogLoss(predictions, realizations):
predictions_use = predictions.clip(0)
realizations_use = realizations.clip(0)
LogLoss = -np.mean( (realizations_use * np.log(predictions_use)) +
(1 - realizations_use) * np.log(1 - predictions_use) )
return LogLoss
###Output
_____no_output_____
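###Markdown
Note that `LogLoss` above returns infinity if any prediction is exactly 0 or 1 (log of zero). A common safeguard, sketched here as a hypothetical hardening rather than what was actually submitted, is to clip predictions away from the boundaries before taking logs.
###Code
# Sketch: numerically stable log loss; the epsilon value is an arbitrary choice.
def log_loss_stable(predictions, realizations, eps=1e-15):
    p = np.clip(np.asarray(predictions, dtype=float), eps, 1 - eps)
    y = np.asarray(realizations, dtype=float)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
###Output
_____no_output_____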
###Markdown
If the model doesn't beat a naive 50% baseline, it is poor
###Code
bench_5050 = np.repeat(0.5, len(y_valid))
LogLoss(bench_5050, y_valid)
###Output
_____no_output_____
###Markdown
How does this compare to Lopez and Matthews (2014 winners)?
###Code
Z1 = LogisticRegression(C = 1e9, random_state=23)
Z1.fit(X_train[['line']], y_train)
Z1_pred = pd.DataFrame(Z1.predict_proba(X_valid[['line']]))[1]
LogLoss(Z1_pred, y_valid)
Z2 = LogisticRegression(C = 1e9, random_state=23)
Z2.fit(X_train[['Hadjo','Hadjd','Radjo','Radjd']], y_train)
Z2_pred = pd.DataFrame(Z2.predict_proba(X_valid[['Hadjo','Hadjd','Radjo','Radjd']]))[1]
LogLoss(Z2_pred, y_valid)
Z1 = LogisticRegression(C = 1e9, random_state=23)
Z1.fit(X_train16[['line']], y_train16)
Z1_pred = pd.DataFrame(Z1.predict_proba(X_valid16[['line']]))[1]
LogLoss(Z1_pred, y_valid16)
Z2 = LogisticRegression(C = 1e9, random_state=23)
Z2.fit(X_train16[['Hadjo','Hadjd','Radjo','Radjd']], y_train16)
Z2_pred = pd.DataFrame(Z2.predict_proba(X_valid16[['Hadjo','Hadjd','Radjo','Radjd']]))[1]
LogLoss(Z2_pred, y_valid16)
###Output
_____no_output_____
###Markdown
Fit a neural network (with and without line). Normalize the data (using z-scores) before fitting the neural network.
###Code
scaler = StandardScaler()
scaler.fit(X_train) # Fit only to the training data
scaled_X_train = pd.DataFrame(scaler.transform(X_train), index=X_train.index, columns=X_train.columns)
scaled_X_valid = pd.DataFrame(scaler.transform(X_valid), index=X_valid.index, columns=X_valid.columns)
scaler = StandardScaler()
scaler.fit(X_train16) # Fit only to the training data
scaled_X_train16 = pd.DataFrame(scaler.transform(X_train16), index=X_train16.index, columns=X_train16.columns)
scaled_X_valid16 = pd.DataFrame(scaler.transform(X_valid16), index=X_valid16.index, columns=X_valid16.columns)
#drop line from training since we won't use in predictions, need these to be same number of columns.
X_train_orig = X_train_orig.drop(['line'], axis=1)
scaler = StandardScaler()
scaler.fit(X_train_orig) # Fit to all training data
scaled_X_train_orig = pd.DataFrame(scaler.transform(X_train_orig), index=X_train_orig.index, columns=X_train_orig.columns)
scaled_X_test = pd.DataFrame(scaler.transform(X_test), index=X_test.index, columns=X_test.columns)
###Output
_____no_output_____
###Markdown
With line (note: we won't have line in the rounds after the first, but we could use this for the first round only like Lopez and Matthews did)
###Code
#Note: I tried logistic activation and different combinations of hidden layers/nodes
#Hyperparameters below minimized the log loss in the holdout set
#I also submitted a prediction with 10 nodes in the first layer, but this is the submission that placed 4th (w/ 8 in 1st)
nn = MLPClassifier(activation='relu', hidden_layer_sizes=(8,5,3),random_state=201, max_iter=1000)
nn.fit(scaled_X_train,y_train)
nn_pred = pd.DataFrame(nn.predict_proba(scaled_X_valid))[1]
LogLoss(nn_pred, y_valid)
#try second holdout (does worse, but still better than baseline of 54)
nn.fit(scaled_X_train16,y_train16)
nn_pred16 = pd.DataFrame(nn.predict_proba(scaled_X_valid16))[1]
LogLoss(nn_pred16, y_valid16)
###Output
_____no_output_____
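###Markdown
The hidden-layer sizes above were reportedly chosen by manually trying combinations against the holdout set. A more systematic alternative (hypothetical, not what was actually run here) would be a small grid search scored by log loss, for example:
###Code
# Hypothetical sketch: search over a few MLP architectures with cross-validated log loss.
# Reuses scaled_X_train / y_train from above; the candidate grid is an illustrative choice.
from sklearn.model_selection import GridSearchCV
param_grid = {"hidden_layer_sizes": [(8, 5, 3), (7, 5, 3), (10, 5, 3)],
              "activation": ["relu", "logistic"]}
search = GridSearchCV(MLPClassifier(random_state=201, max_iter=1000),
                      param_grid, scoring="neg_log_loss", cv=5)
search.fit(scaled_X_train, y_train)
print(search.best_params_, -search.best_score_)
###Output
_____no_output_____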
###Markdown
without a line
###Code
#Note: I tried logistic activation and different combinations of hidden layers/nodes
#Hyperparameters below minimized the log loss in the holdout set
ind_var_selected_no_line = ['is_tourney', 'Hwins_top25','Rwins_top25','HPointMargin','RPointMargin','HFG','RFG','HFG3','RFG3','HRankPOM','RRankPOM','Hadjem','Hadjo','Hadjd','Hluck','Radjem','Radjo','Radjd','Rluck','Htourny20plus','Rtourny20plus','HBig4Conf','RBig4Conf', 'HSeed','RSeed']
nn = MLPClassifier(activation='relu', hidden_layer_sizes=(7,5,3),random_state=201, max_iter=1000)
nn.fit(scaled_X_train[ind_var_selected_no_line],y_train)
nn_pred_no_line = pd.DataFrame(nn.predict_proba(scaled_X_valid[ind_var_selected_no_line]))[1]
LogLoss(nn_pred_no_line, y_valid)
#try second holdout (does better)
nn.fit(scaled_X_train16[ind_var_selected_no_line],y_train16)
nn_pred_no_line16 = pd.DataFrame(nn.predict_proba(scaled_X_valid16[ind_var_selected_no_line]))[1]
LogLoss(nn_pred_no_line16, y_valid16)
###Output
_____no_output_____
###Markdown
Try avg of line and no line:
###Code
avg = (nn_pred_no_line+nn_pred)/2
LogLoss(avg, y_valid)
avg16 = (nn_pred_no_line16+nn_pred16)/2
LogLoss(avg16, y_valid16)
###Output
_____no_output_____
###Markdown
Create test predictions
###Code
#different submissions: differ by the first layer of the neural net
ind_var_selected_no_line = ['is_tourney', 'Hwins_top25','Rwins_top25','HPointMargin','RPointMargin','HFG','RFG','HFG3','RFG3','HRankPOM','RRankPOM','Hadjem','Hadjo','Hadjd','Hluck','Radjem','Radjo','Radjd','Rluck','Htourny20plus','Rtourny20plus','HBig4Conf','RBig4Conf', 'HSeed','RSeed']
#train model on all data (previously held out some tournaments for a test set)
nn = MLPClassifier(activation='relu', hidden_layer_sizes=(7,5,3),random_state=201, max_iter=1000)
nn.fit(scaled_X_train_orig[ind_var_selected_no_line],y_train_orig)
second_rd_submission_all = pd.DataFrame(nn.predict_proba(scaled_X_test[ind_var_selected_no_line]))
#Note: I'm predicting home (lower seed) win probability. Need to convert to be consistent with output file (lower team ID)
second_rd_submission = pd.merge(test_combos_df, second_rd_submission_all, left_index=True, right_index=True)
second_rd_submission.loc[second_rd_submission['HTeamID']<second_rd_submission['RTeamID'], 'pred'] = second_rd_submission[1]
second_rd_submission.loc[second_rd_submission['HTeamID']>second_rd_submission['RTeamID'], 'pred'] = second_rd_submission[0]
second_rd_submission.to_csv('Ismail_second_rd_submission_all.csv', index=False)
second_rd_submission = second_rd_submission[['ID','pred']]
second_rd_submission.head()
second_rd_submission.tail()
#Export to submit to Kaggle
second_rd_submission.to_csv('Neera_submission.csv', index=False)
###Output
_____no_output_____
###Markdown
Other models: no model performed as well as the neural network. XGBoost
###Code
X_train = X_train[['is_tourney', 'Hwins_top25','Rwins_top25','HPointMargin','RPointMargin','HFG','RFG','HFG3','RFG3','HRankPOM','RRankPOM','Hadjem','Hadjo','Hadjd','Hluck','Radjem','Radjo','Radjd','Rluck','Htourny20plus','Rtourny20plus','HBig4Conf','RBig4Conf', 'HSeed','RSeed']]
X_valid= X_valid[['is_tourney', 'Hwins_top25','Rwins_top25','HPointMargin','RPointMargin','HFG','RFG','HFG3','RFG3','HRankPOM','RRankPOM','Hadjem','Hadjo','Hadjd','Hluck','Radjem','Radjo','Radjd','Rluck','Htourny20plus','Rtourny20plus','HBig4Conf','RBig4Conf', 'HSeed','RSeed']]
X_train_xgb = xgb.DMatrix(X_train, label = y_train)
X_valid_xgb = xgb.DMatrix(X_valid)
num_round_for_cv = 1000
param = {'max_depth':3, 'eta':0.01, 'seed':201, 'objective':'binary:logistic', 'nthread':2}
p = xgb.cv(param,
X_train_xgb,
num_round_for_cv,
nfold = 5,
show_stdv = False,
verbose_eval = False,
as_pandas = False)
p = pd.DataFrame(p)
use_num = p['test-error-mean'].argmin()
num_round = use_num
xgb_train = xgb.train(param, X_train_xgb, num_round)
xgb_valid_prob = pd.Series(xgb_train.predict(X_valid_xgb))
xgb.plot_importance(xgb_train)
plt.rcParams['figure.figsize'] = [20, 20]
plt.show()
LogLoss(xgb_valid_prob, y_valid)
clf = RandomForestClassifier(n_estimators=200, max_depth=3, min_samples_leaf=3)
clf.fit(X_train, y_train)
rf_prob = pd.DataFrame(clf.predict_proba(X_valid))
LogLoss(rf_prob[1], y_valid)
from sklearn.svm import SVC
classifier = SVC(kernel = 'linear', probability= True, random_state = 0)
classifier.fit(scaled_X_train_orig[ind_var_selected_no_line],y_train_orig)
second_rd_svc = pd.DataFrame(classifier.predict_proba(scaled_X_test[ind_var_selected_no_line]))
second_rd_submission = pd.merge(test_combos_df, second_rd_svc, left_index=True, right_index=True)
second_rd_submission.loc[second_rd_submission['HTeamID']<second_rd_submission['RTeamID'], 'pred'] = second_rd_submission[1]
second_rd_submission.loc[second_rd_submission['HTeamID']>second_rd_submission['RTeamID'], 'pred'] = second_rd_submission[0]
second_rd_submission = second_rd_submission[['ID','pred']]
second_rd_submission.tail()
second_rd_submission.to_csv('SVC_Ismail_submission.csv', index=False)
avg = (rf_prob[1]+xgb_valid_prob+nn_pred)/3
LogLoss(avg, y_valid)
###Output
_____no_output_____
###Markdown
trying to see how VotingClassifier will perform
###Code
Z1 = LogisticRegression(C = 1e9, random_state=23)
clf2 = RandomForestClassifier(n_estimators=200, max_depth =3, min_samples_leaf=3)
nn = MLPClassifier(activation='relu', hidden_layer_sizes=(7,5,3),random_state=201, max_iter=1000)
clf3 = SVC(kernel = 'linear', probability= True, random_state = 0)
eclf1 = VotingClassifier(estimators=[
('linear', clf3), ('relu', nn)], voting='soft')
eclf1 = eclf1.fit(scaled_X_train_orig[ind_var_selected_no_line],y_train_orig)
second_rd_submission_all = pd.DataFrame(eclf1.predict_proba(scaled_X_test[ind_var_selected_no_line]))
second_rd_submission = pd.merge(test_combos_df, second_rd_submission_all, left_index=True, right_index=True)
second_rd_submission.loc[second_rd_submission['HTeamID']<second_rd_submission['RTeamID'], 'pred'] = second_rd_submission[1]
second_rd_submission.loc[second_rd_submission['HTeamID']>second_rd_submission['RTeamID'], 'pred'] = second_rd_submission[0]
second_rd_submission.to_csv('voting_soft_second_rd_submission_all.csv', index=False)
second_rd_submission = second_rd_submission[['ID','pred']]
second_rd_submission.head()
second_rd_submission.tail()
second_rd_submission.to_csv('SVC_now_vote_Ismail_submission.csv', index=False)
###Output
_____no_output_____
|
Govt_Data/Consumer_Sentiment_BackUp.ipynb
|
###Markdown
Consumer Sentiment. Evolution of consumer sentiment over the year
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
import matplotlib.patches as mpatches
# Study data files
customer = "consumer_sentiment.csv"
# Read the file data
customer_df = pd.read_csv(customer)
customer_df
# Remove unnecessary columns from source dataframe
customer_df = customer_df.drop(['Good Time<br>Prices are Low','Good Time<br>Prices will increase',
'Good Time<br>Interest rates low','Good Time<br>Rising interest rates',
'Good Time<br>Fuel Efficiency', 'Bad Time<br>Prices High', 'Bad Time<br>Interest rates high',
"Bad Time<br>Can't Afford",'Bad Time<br>Gas Prices','Bad Time<br>Poor Selection','Relative: prices',
'Relative: rates'],axis=1)
customer_df
# Remove data beyond the end of the 3Q 2020 starting at 2019-01-01
customer_df = customer_df[customer_df['Date'] >= '2019-01-01']
customer_df = customer_df[customer_df['Date'] <= '2020-10-01']
customer_df
#Sort values from 2019-01-01 to 2020-10-01
customer_df = customer_df.sort_values('Date')
customer_df
# Historical points as markers
# Group by Date to create df for historical markers.
CSI = customer_df.groupby(['Date']).sum()
dates = ["2020-03-31","2020-05-31"]
def find_loc(CSI, dates):
marks = []
for date in dates:
        marks.append(CSI.index.get_loc(date))
return marks
#Bad Time<br>Uncertain Future
# Line chart selection
customer_df.plot.line(x='Date', y='Bad Time<br>Uncertain Future', legend = False, rot=60, title="Uncertain Future",
markevery=find_loc(CSI, dates), marker='s', markerfacecolor='red')
# Sets the y limits
plt.ylim(0, 30)
# Provides labels
plt.xlabel("Date", fontsize=12)
plt.ylabel("Uncertain Future Feeling", fontsize=12)
plt.tick_params(axis='both', direction='out', length=6, width=2, labelcolor = 'black',colors='teal')
# Major grid lines
plt.grid(b=True, which='major', color='lightblue', alpha=0.6, linestyle='dashdot', lw=1.5)
# Minor grid lines
plt.minorticks_on()
plt.grid(b=True, which='minor', color='beige', alpha=0.8, ls='-', lw=1)
# Save the figure as .png
#plt.savefig('Images/Interest Rates.png')
plt.show(block=True)
###Output
_____no_output_____
###Markdown
Additional Comparison
###Code
#Compare the trend between the two sentiment series
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
line1 = ax.plot(customer_df['Date'], customer_df["Good Time<br>Times good"], color='tab:green',
                markevery=find_loc(CSI, dates), marker='s', markerfacecolor='red')
line2 = ax.plot(customer_df['Date'], customer_df['Bad Time<br>Uncertain Future'], color='tab:red',
markevery=find_loc(CSI, dates), marker='s', markerfacecolor='red')
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles, labels)
line1
line2
# Sets the y limits
plt.ylim(0, 30)
# Provides labels
plt.title("Consumer Sentiment to buy a Car")
plt.xlabel("Date", fontsize=12)
plt.ylabel("Consumer Sentiment", fontsize=12)
red_patch = mpatches.Patch(color='tab:red', label='Uncertain Future')
green_patch = mpatches.Patch(color='tab:green', label='Time is good')
plt.legend(handles=[red_patch,green_patch])
plt.tick_params(axis='both', direction='in', length=6, width=2, labelcolor = 'black',colors='teal')
# Major grid lines
plt.grid(b=True, which='major', color='lightblue', alpha=0.6, linestyle='dashdot', lw=1.5)
# Minor grid lines
plt.minorticks_on()
plt.grid(b=True, which='minor', color='beige', alpha=0.8, ls='-', lw=1)
plt.xticks(rotation=90)
# Save the figure as .png
#plt.savefig('Images/Interest Rates.png')
#plt.legend(loc='best', ncol=2, mode="expand", shadow=True, fancybox=True)
plt.show(block=True)
# Correlation between the two series using Pearson's r
correlation = round(st.pearsonr(customer_df['Good Time<br>Times good'],customer_df['Bad Time<br>Uncertain Future'])[0],2)
print(f"The correlation is {correlation}")
if correlation == 0:
    print(f"There is no correlation at all")
elif correlation <= -0.8:
    print(f'There is an inverse relation')
elif correlation >= 0.8:
    print(f'There is a strong positive relation')
else:
    print(f'There is some correlation')
###Output
_____no_output_____
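###Markdown
st.pearsonr also returns a p-value alongside the coefficient. A small sketch (an addition to the original analysis) that reports both can help judge whether the observed correlation is statistically meaningful:
###Code
# Sketch: report the Pearson coefficient together with its p-value.
r, p_value = st.pearsonr(customer_df['Good Time<br>Times good'],
                         customer_df['Bad Time<br>Uncertain Future'])
print(f"Pearson r = {r:.2f}, p-value = {p_value:.4f}")
###Output
_____no_output_____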
###Markdown
Consumer Sentiment. Evolution of consumer sentiment over the year
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
import matplotlib.patches as mpatches
# Study data files
customer = "consumer_sentiment.csv"
# Read the file data
customer_df = pd.read_csv(customer)
customer_df
# Remove unnecessary columns from source dataframe
customer_df = customer_df.drop(['Good Time<br>Prices are Low','Good Time<br>Prices will increase',
'Good Time<br>Interest rates low','Good Time<br>Rising interest rates',
'Good Time<br>Fuel Efficiency', 'Bad Time<br>Prices High', 'Bad Time<br>Interest rates high',
"Bad Time<br>Can't Afford",'Bad Time<br>Gas Prices','Bad Time<br>Poor Selection','Relative: prices',
'Relative: rates'],axis=1)
customer_df
# Remove data beyond the end of the 3Q 2020 starting at 2019-01-01
customer_df = customer_df[customer_df['Date'] >= '2019-01-01']
customer_df = customer_df[customer_df['Date'] <= '2020-10-01']
customer_df
#Sort values from 2019-01-01 to 2020-10-01
customer_df = customer_df.sort_values('Date')
customer_df
# Historical points as markers
# Group by Date to create df for historical markers.
CSI = customer_df.groupby(['Date']).sum()
dates = ["2020-03-31","2020-05-31"]
def find_loc(CSI, dates):
marks = []
for date in dates:
marks.append(CSI.index.get_loc(date))
return marks
#Bad Time<br>Uncertain Future
# Line chart selection
customer_df.plot.line(x='Date', y='Bad Time<br>Uncertain Future', legend = False, rot=60, title="Uncertain Future",
markevery=find_loc(CSI, dates), marker='s', markerfacecolor='red')
# Sets the y limits
plt.ylim(0, 30)
# Provides labels
plt.xlabel("Date", fontsize=12)
plt.ylabel("Uncertain Future Feeling", fontsize=12)
plt.tick_params(axis='both', direction='out', length=6, width=2, labelcolor = 'black',colors='teal')
# Major grid lines
plt.grid(b=True, which='major', color='lightblue', alpha=0.6, linestyle='dashdot', lw=1.5)
# Minor grid lines
plt.minorticks_on()
# plt.grid(b=True, which='minor', color='beige', alpha=0.8, ls='-', lw=1)
# Save the figure as .png
#plt.savefig('Images/Interest Rates.png')
plt.show(block=True)
###Output
_____no_output_____
###Markdown
Additional Comparison
###Code
#Compare the trend between the two sentiment series
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
line1 = ax.plot(customer_df['Date'], customer_df["Good Time<br>Times good"], color='tab:green',
markevery=find_loc(CSI, dates), marker='s', markerfacecolor='red')
line2 = ax.plot(customer_df['Date'], customer_df['Bad Time<br>Uncertain Future'], color='tab:red',
markevery=find_loc(CSI, dates), marker='s', markerfacecolor='red')
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles, labels)
line1
line2
# Sets the y limits
plt.ylim(0, 30)
# Provides labels
plt.title("Consumer Sentiment to buy a Car")
plt.xlabel("Date", fontsize=12)
plt.ylabel("Consumer Sentiment", fontsize=12)
red_patch = mpatches.Patch(color='tab:red', label='Uncertain Future')
green_patch = mpatches.Patch(color='tab:green', label='Time is good')
plt.legend(handles=[red_patch,green_patch])
plt.tick_params(axis='both', direction='in', length=6, width=2, labelcolor = 'black',colors='teal')
# Major grid lines
plt.grid(b=True, which='major', color='lightblue', alpha=0.6, linestyle='dashdot', lw=1.5)
# Minor grid lines
plt.minorticks_on()
plt.grid(b=True, which='minor', color='beige', alpha=0.8, ls='-', lw=1)
plt.xticks(rotation=90)
# Save the figure as .png
#plt.savefig('Images/Interest Rates.png')
#plt.legend(loc='best', ncol=2, mode="expand", shadow=True, fancybox=True)
plt.show(block=True)
# Correlation between the two series using Pearson's r
correlation = round(st.pearsonr(customer_df['Good Time<br>Times good'],customer_df['Bad Time<br>Uncertain Future'])[0],2)
print(f"The correlation is {correlation}")
if correlation == 0:
    print(f"There is no correlation at all")
elif correlation <= -0.8:
    print(f'There is an inverse relation')
elif correlation >= 0.8:
    print(f'There is a strong positive relation')
else:
    print(f'There is some correlation')
###Output
There is an inverse relation
|
FluentPython/Chapter09_pythonic_object.ipynb
|
###Markdown
Creating a class. GitHub: https://github.com/fluentpython/example-code/tree/master/09-pythonic-obj
###Code
# vector 2d class
import math
from array import array
class Vector2d:
typecode = 'd' #typecode是类属性,在Vector2d实例和字节序列之间转换时使用。
def __init__(self, x, y):
self.x = float(x)
self.y = float(y)
def __iter__(self):
return (i for i in (self.x, self.y))
def __repr__(self):
class_name = type(self).__name__
return "{}({!r},{!r})".format(class_name, *self)
def __str__(self):
return str(tuple(self))
def __eq__(self, other):
return tuple(self) == tuple(other)
def __abs__(self):
return math.hypot(self.x, self.y)
def __bool__(self):
return bool(abs(self))
def __bytes__(self):
return (bytes([ord(self.typecode)]) +
bytes(array(self.typecode, self)))
def angle(self):
        return math.atan2(self.y, self.x)
# 备选构造方法 (与上面的方法不同,传入的参数不同)
@classmethod
def frombytes(cls, octets):
typecode = chr(octets[0])
mev = memoryview(octets[1:]).cast(typecode)
return cls(*mev)
# 格式化显示
def __format__(self, fmt_spec):
if fmt_spec.endswith('p'):
fmt_spec = fmt_spec[:-1]
coords = (abs(self), self.angle())
outer = '<{},{}>'
else:
coords = (i for i in self)
outer = '({},{})'
components = (format(c, fmt_spec) for c in coords)
return outer.format(*components)
# 构造函数
v1 = Vector2d(1,2)
print("__init__():Vector2d(1,2)")
# 可拆包
vx, vy = v1
print("__iter__():{}, {}".format(vx, vy))
# 支持打印。如果没有定义str会使用repr函数代替
print("__str__():{}".format(v1))
# eval with format
v2 = eval(repr(v1))
print("__repr__():eval(repr(v1))-->{}".format(v2))
# equal
print("__eq__():{}".format(v1 == v2))
# abs
print("__abs__(): abs(v1)-->{}".format(abs(v1)))
# bool
print("__bool__(): bool(v1)-->{}".format(bool(v1)))
# bytes
print("__bytes__(): bytes(v1)-->{}".format(bytes(v1)))
###Output
__init__():Vector2d(1,2)
__iter__():1.0, 2.0
__str__():(1.0,2.0)
__repr__():eval(repr(v1))-->(1.0,2.0)
__eq__():True
__abs__(): abs(v1)-->2.23606797749979
__bool__(): bool(v1)-->True
__bytes__(): bytes(v1)-->b'd\x00\x00\x00\x00\x00\x00\xf0?\x00\x00\x00\x00\x00\x00\x00@'
###Markdown
classmethod vs staticmethod
###Code
# classmethod是操作类的方法,而不是操作实例的方法。
Vector2d.frombytes(bytes(v1))
# staticmethod 是一个静态函数,即使写在了类中也有全局效果,类似与直接写在模块中的类。
def common_static_method(*args):
return args
class Demo:
@staticmethod
def static_method(*args):
return args
@classmethod
def class_method(*args):
return args
# 作为类的方法,返回的第一个参数永远是类本身
Demo.class_method()
Demo.class_method("test")
# 而作为静态方法,返回的函数与直接在模块中创建的函数返回的内容是一样的
Demo.static_method()
Demo.static_method("test")
common_static_method()
common_static_method("test")
###Output
_____no_output_____
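###Markdown
Because a classmethod receives the class itself as its first argument, an alternative constructor defined with @classmethod also builds instances of whatever subclass it is called on. A small illustrative sketch (an addition, reusing the Vector2d class defined above):
###Code
# Sketch: classmethod constructors respect subclassing because they receive cls.
class ShortVector2d(Vector2d):
    typecode = 'f'  # overrides the parent's typecode
original = Vector2d(1, 2)
clone = ShortVector2d.frombytes(bytes(original))
print(type(clone).__name__)  # ShortVector2d, not Vector2d
###Output
_____no_output_____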
###Markdown
Formatted representations
###Code
print("{v:0.1f}".format(v = v1))
format(v1, '0.3f')
# 使用自己定义的格式打印
format(v1, '0.1fp')
###Output
_____no_output_____
###Markdown
Hashable
###Code
# 位运算符异或计算实例:
a = 443.016
b = 444.017
print(hash(a))
print(hash(b))
hash(a)^hash(b)
###Output
36893488147464635
39199331156623804
###Markdown
Declaring read-only attributes and making the class hashable. To make a class hashable it is enough to implement the two methods \_\_hash\_\_() and \_\_eq\_\_(). But because hashing requires the data to be read-only, we need to use properties in the class to expose the attributes as read-only.
###Code
class pVector2d:
def __init__(self, x, y):
self.__x = x
self.__y = y
@property
def x(self):
return self.__x
@property
def y(self):
return self.__y
def __repr__(self):
class_name = type(self).__name__
return "{}({},{})".format(class_name, self.x, self.y)
# 只有只读属性,才能散列
def __hash__(self):
return hash(self.x)^hash(self.y)
pv1 = pVector2d(2,3)
# 只读属性
pv1.x = 1
# v1 不可散列
v1
# pv1 可散列
pv1
# v1 不可散列:unhashable
set([v1,pv1])
hash(v1)
hash(pv1)
###Output
_____no_output_____
###Markdown
Full implementation- component access- unpacking- formatting- repr that returns constructor source (eval can recreate the instance)- equality- boolean value- printing- a classmethod that operates on the class- absolute value- bytes value- hashing- read-only attributes- float conversion- int conversion- complex conversion
###Code
import math
from array import array
class myVector2d:
typecode = 'd'
def __init__(self, x, y):
self.__x = x
self.__y = y
@property
def x(self):
return self.__x
@property
def y(self):
return self.__y
def __iter__(self):
return ( i for i in (self.x, self.y))
def __abs__(self):
#return math.sqrt(self.x * self.x + self.y * self.y)
return math.hypot(self.x, self.y) # hypot求斜边
def __repr__(self):
return "myVector2d({},{})".format(self.x,self.y)
def __str__(self):
return str(tuple(self))
def __hash__(self):
return hash(self.x)^hash(self.y)
def __eq__(self, other):
return hash(self) == hash(other)
def __bool__(self):
return bool(abs(self))
def __bytes__(self):
# 二进制表示形式
return (bytes([ord(self.typecode)]) +
bytes(array(self.typecode, [self.x, self.y])))
def angle(self):
return math.atan2(self.y, self.x)
def __format__(self, fmt_spec):
if fmt_spec.endswith("p") :
fmt_spec = fmt_spec[:-1]
coords = (abs(self), self.angle())
out = "myVector2d:<{},{}>"
else:
coords = (i for i in self)
out = "myVector2d:({},{})"
componets = (format(c, fmt_spec) for c in coords)
return out.format(*componets)
@classmethod
def frombytes(cls, octets):
typecode = chr(octets[0])
mev = memoryview(octets[1:]).cast(typecode)
return cls(*mev)
def __float__(self):
return float(abs(self))
def __int__(self):
return int(abs(self))
def __complex__(self):
return complex(self.x, self.y)
mv = myVector2d(2,5)
# 验证repr
mv
# 验证拆包和分量
a,b = mv
print("a:{}\nb:{}\nmv.x:{}\nmv.y:{}\n".format(a,b,mv.x,mv.y))
# 验证str
print(mv)
# 验证abs
abs(mv)
# 验证相等
mv2 = myVector2d(2,3)
mv == mv2
mv3 = mv
mv == mv3
# 验证可散列
set([mv,mv2,mv3])
# 验证布尔值
bool(mv)
# 验证格式化
"{:0.3fp}".format(mv)
# 验证格式化2
"{:0.3f}".format(mv)
# 验证二进制转换
bytes(mv2)
# 验证类方法
octets = bytes(mv2)
result = myVector2d.frombytes(octets)
result
# 验证float和int
float(mv)
int(mv)
# 验证complex
complex(mv)
###Output
_____no_output_____
###Markdown
Saving space with the __slots__ class attribute
###Code
# 实例默认使用字典记录其属性
print(mv.__dict__)
# 如果有数百万个属性不多的实例,效率就会大大降低。
###Output
{'_myVector2d__x': 2, '_myVector2d__y': 5}
###Markdown
With the \_\_slots\_\_ class attribute a lot of memory can be saved: the interpreter stores the instance attributes in a tuple instead of a dict.
###Code
import math
from array import array
class myVector2d_v02:
"""
    The purpose of defining __slots__ in a class is to tell the interpreter: "these are all the instance attributes of this class!" Python then stores the instance variables of each instance in a tuple-like structure, avoiding the memory-hungry __dict__ attribute. If millions of instances are active at the same time, this saves a lot of memory.
"""
__slots__ = ('__x','__y')
typecode = 'd'
def __init__(self, x, y):
self.__x = x
self.__y = y
@property
def x(self):
return self.__x
@property
def y(self):
return self.__y
def __iter__(self):
return ( i for i in (self.x, self.y))
def __abs__(self):
#return math.sqrt(self.x * self.x + self.y * self.y)
return math.hypot(self.x, self.y) # hypot求斜边
def __repr__(self):
return "myVector2d({},{})".format(self.x,self.y)
def __str__(self):
return str(tuple(self))
def __hash__(self):
return hash(self.x)^hash(self.y)
def __eq__(self, other):
return hash(self) == hash(other)
def __bool__(self):
return bool(abs(self))
def __bytes__(self):
# 二进制表示形式
return (bytes([ord(self.typecode)]) +
bytes(array(self.typecode, [self.x, self.y])))
def angle(self):
return math.atan2(self.y, self.x)
def __format__(self, fmt_spec):
if fmt_spec.endswith("p") :
fmt_spec = fmt_spec[:-1]
coords = (abs(self), self.angle())
out = "myVector2d:<{},{}>"
else:
coords = (i for i in self)
out = "myVector2d:({},{})"
componets = (format(c, fmt_spec) for c in coords)
return out.format(*componets)
@classmethod
def frombytes(cls, octets):
typecode = chr(octets[0])
mev = memoryview(octets[1:]).cast(typecode)
return cls(*mev)
def __float__(self):
return float(abs(self))
def __int__(self):
return int(abs(self))
def __complex__(self):
return complex(self.x, self.y)
# 测试10 000 000个实例的内存用量
import resource
mem_init = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
vectors = [myVector2d_v02(1.00,3.00) for i in range(10000000)]
mem_final = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print('Initial RAM usage: {:14,}'.format(mem_init))
print(' Final RAM usage: {:14,}'.format(mem_final))
print(' RAM usage: {:14,}'.format(mem_final-mem_init))
mem_init = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
vectors = [myVector2d(1.00,3.00) for i in range(10000000)]
mem_final = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print('Initial RAM usage: {:14,}'.format(mem_init))
print(' Final RAM usage: {:14,}'.format(mem_final))
print(' RAM usage: {:14,}'.format(mem_final-mem_init))
###Output
Initial RAM usage: 629,276,672
Final RAM usage: 2,262,327,296
RAM usage: 1,633,050,624
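###Markdown
A side effect of \_\_slots\_\_ worth remembering: instances no longer get a \_\_dict\_\_, so assigning an attribute that is not listed in \_\_slots\_\_ fails. A small illustrative check (an addition, not from the book):
###Code
# Sketch: a __slots__ instance rejects attributes that are not declared in __slots__.
v = myVector2d_v02(1.0, 3.0)
try:
    v.z = 7  # 'z' is not a slot and there is no __dict__, so this raises
except AttributeError as exc:
    print("AttributeError:", exc)
###Output
_____no_output_____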
|
results/visualize_results.ipynb
|
###Markdown
Results for nontrivial energy landscape
###Code
CASE_LIST = ['small', 'med', 'large']
simulation_results = {}
simulation_results_multiple_starts = {}
capacity_results = {}
model_params = {}
for case in CASE_LIST:
with open('capacity_based_hitprob/{}_target/simulation_results.pkl'.format(case), 'rb') as f:
simulation_results[case] = pickle.load(f)
with open('simulation_based_hitprob_multiple_starts/{}_target/results.pkl'.format(case), 'rb') as f:
simulation_results_multiple_starts[case] = pickle.load(f)
with open('capacity_based_hitprob/{}_target/capacity_results.pkl'.format(case), 'rb') as f:
capacity_results[case] = pickle.load(f)
with open('capacity_based_hitprob/{}_target/model_params.pkl'.format(case), 'rb') as f:
model_params[case] = pickle.load(f)
loc_dict = {
'small': 'center',
'med': 'center left',
}
legend_loc_dict = {
'small': 'upper left',
'med': 'upper right',
'large': 'upper left'
}
xlim_dict = {
'small': (0.78, 0.84),
'med': (0.36, 0.42),
}
ylim_dict = {
'small': (0, 25),
'med': (0, 28),
'large': (0, 32)
}
loc12_dict = {
'small': (2, 3),
'med': (2, 3)
}
aspect_dict = {
'small': 0.0022,
'med': 0.002
}
tick_dict = {
'small': np.arange(0.78, 0.85, 0.02),
'med': np.arange(0.37, 0.42, 0.02)
}
fig = plt.figure(figsize=(20, 15))
for cc, case in enumerate(CASE_LIST):
# Plot targets
ax = fig.add_subplot(3, 4, cc * 4 + 1)
ax.add_patch(Circle((0, 0), 1, fill=False, linewidth=1))
for ii in range(2):
center = model_params[case]['target_param_list'][ii]['center'][:2]
circle_patches = []
for rr in model_params[case]['target_param_list'][ii]['radiuses']:
circle_patches.append(Circle(center, rr, fill=False, linewidth=1))
circle_patches[1].set_fill(True)
circle_patches[1].set_fc('green')
circle_patches[1].set_ec('black')
circle_patches[1].set_alpha(0.4)
circle_patches[0].set_fill(True)
circle_patches[0].set_fc('magenta')
for patch in circle_patches:
ax.add_patch(patch)
ax.axis('off')
ax.set_aspect(1)
ax.set_xlim(-1.015, 1.015)
ax.set_ylim(-1.015, 1.015)
name = case.capitalize() if case != 'med' else 'Medium'
ax.annotate(s='{} targets'.format(name), xy=(-0.3, 0), fontsize=20)
# Plot histogram
ax = plt.subplot2grid((3, 4), (cc, 1), 1, 3, fig=fig)
hitting_prob_list = simulation_results[case]['hitting_prob_list'][:, 0]
mean_hitprob = np.mean(simulation_results_multiple_starts[case]['hitting_prob_list'][:, 0])
capacity_based_hitprob = capacity_results[case]['hitting_prob'][0]
# Zoomed out histogram
ax.hist(hitting_prob_list, label='Simulated hitting probabilities\nfrom random initializations')
ax.set_xlim(0, 1)
ax.set_ylim(*ylim_dict[case])
ax.xaxis.set_tick_params(labelsize=20)
ax.yaxis.set_tick_params(labelsize=20)
ax.set_xlabel('Probability of hitting target A before B', fontsize=20)
    ax.set_ylabel('Number of occurrences', fontsize=20)
ax.axvline(x=mean_hitprob, label='Mean of simulated probabilities', color='r')
ax.axvline(
x=capacity_based_hitprob, label='Capacity-based probabilities',
color='r', linestyle=':'
)
ax.legend(fontsize=18, loc=legend_loc_dict[case])
# Zoomed in histogram
if case == 'large':
continue
axins = zoomed_inset_axes(
ax, 4, loc=loc_dict[case], axes_kwargs={'aspect': aspect_dict[case]}
)
axins.hist(hitting_prob_list)
axins.axvline(x=mean_hitprob, color='r')
axins.axvline(
x=capacity_based_hitprob, color='r', linestyle=':'
)
axins.xaxis.set_tick_params(labelsize=15)
axins.yaxis.set_visible(False)
axins.set_xlim(*xlim_dict[case])
axins.set_ylim(*ylim_dict[case])
axins.set_xticks(tick_dict[case])
# axins.set_title('Zoomed in histogram')
mark_inset(ax, axins, loc1=loc12_dict[case][0], loc2=loc12_dict[case][1], fc='none', ec='0.5')
fig.tight_layout()
fig.savefig('results_nontrivial.pdf', dpi=400, bbox_inches='tight')
###Output
/home/stannis/miniconda3/envs/entropic_barrier/lib/python3.6/site-packages/ipykernel_launcher.py:96: UserWarning: This figure includes Axes that are not compatible with tight_layout, so results might be incorrect.
###Markdown
Results for flat energy landscape
###Code
CASE_LIST = ['small', 'med', 'large']
results = {}
results_multiple_starts = {}
configs = {}
for case in CASE_LIST:
with open('sanity_checks/{}_target/results.pkl'.format(case), 'rb') as f:
results[case] = pickle.load(f)
with open('sanity_checks_multiple_starts/{}_target/results.pkl'.format(case), 'rb') as f:
results_multiple_starts[case] = pickle.load(f)
with open('sanity_checks/{}_target/config.json'.format(case), 'r') as f:
configs[case] = json.load(f)
loc_dict = {
'small': 'center',
'med': 'center',
}
legend_loc_dict = {
'small': 'upper right',
'med': 'upper right',
'large': 'upper left'
}
xlim_dict = {
'small': (0.085, 0.12),
'med': (0.2, 0.25),
}
ylim_dict = {
'small': (0, 20),
'med': (0, 22),
'large': (0, 32)
}
loc12_dict = {
'small': (1, 4),
'med': (1, 4)
}
aspect_dict = {
'small': 0.0022,
'med': 0.002
}
tick_dict = {
'small': np.arange(0.08, 0.12, 0.02),
'med': np.arange(0.2, 0.25, 0.02)
}
fig = plt.figure(figsize=(20, 15))
for cc, case in enumerate(CASE_LIST):
# Plot targets
ax = fig.add_subplot(3, 4, cc * 4 + 1)
ax.add_patch(Circle((0, 0), 1, fill=False, linewidth=1))
for ii in range(2):
center = configs[case]['centers'][ii][:2]
circle_patches = []
for rr in configs[case]['radiuses'][ii]:
circle_patches.append(Circle(center, rr, fill=False, linewidth=1))
circle_patches[1].set_fill(True)
circle_patches[1].set_fc('green')
circle_patches[1].set_ec('black')
circle_patches[1].set_alpha(0.4)
circle_patches[0].set_fill(True)
circle_patches[0].set_fc('magenta')
for patch in circle_patches:
ax.add_patch(patch)
ax.axis('off')
ax.set_aspect(1)
ax.set_xlim(-1, 1)
ax.set_ylim(-1, 1)
name = case.capitalize() if case != 'med' else 'Medium'
ax.annotate(s='{} targets'.format(name), xy=(-0.3, 0), fontsize=20)
# Plot histogram
ax = plt.subplot2grid((3, 4), (cc, 1), 1, 3, fig=fig)
hitting_prob_list = results[case]['hitting_prob_list'][:, 0]
mean_hitprob = np.mean(results_multiple_starts[case]['hitting_prob_list'][:, 0])
expected_hitprob = results[case]['expected_hitting_prob'][0]
# Zoomed out histogram
ax.hist(hitting_prob_list, label='Simulated hitting probabilities\nfrom random initializations')
ax.set_xlim(0, 1)
ax.set_ylim(*ylim_dict[case])
ax.xaxis.set_tick_params(labelsize=20)
ax.yaxis.set_tick_params(labelsize=20)
ax.set_xlabel('Probability of hitting target A before B', fontsize=20)
    ax.set_ylabel('Number of occurrences', fontsize=20)
ax.axvline(x=mean_hitprob, label='Mean of simulated probabilities', color='r')
ax.axvline(
x=expected_hitprob, label='Capacity-based probabilities',
color='r', linestyle=':'
)
ax.legend(fontsize=18, loc=legend_loc_dict[case])
# Zoomed in histogram
if case == 'large':
continue
axins = zoomed_inset_axes(
ax, 4, loc=loc_dict[case], axes_kwargs={'aspect': aspect_dict[case]}
)
axins.hist(hitting_prob_list)
axins.axvline(x=mean_hitprob, color='r')
axins.axvline(
x=expected_hitprob, color='r', linestyle=':'
)
axins.xaxis.set_tick_params(labelsize=15)
axins.yaxis.set_visible(False)
axins.set_xlim(*xlim_dict[case])
axins.set_ylim(*ylim_dict[case])
axins.set_xticks(tick_dict[case])
# axins.set_title('Zoomed in histogram')
mark_inset(ax, axins, loc1=loc12_dict[case][0], loc2=loc12_dict[case][1], fc='none', ec='0.5')
fig.tight_layout()
fig.savefig('results_brownian_motion.pdf', dpi=400, bbox_inches='tight')
###Output
/home/stannis/miniconda3/envs/entropic_barrier/lib/python3.6/site-packages/ipykernel_launcher.py:96: UserWarning: This figure includes Axes that are not compatible with tight_layout, so results might be incorrect.
|
notebooks/v24.ipynb
|
###Markdown
Data Science in Production. Exercise 2: Model Building. Table of contents: 1. [Introduction](einfuehrung) 1. [Learning objective of the exercise](lernziel) 2. [Installing the required libraries](installation) 3. [Short description of the software libraries](kurzbeschreibung) 2. [Supervised Learning](supervised) 1. [Classification and regression](klassifikation) 2. [Generalization, overfitting and underfitting](generalisierung) 3. [Algorithms](algorithmen) 1. [k-Nearest-Neighbors (k-NN)](knn) 2. [Linear models](linear) 3. [Decision trees](decision) 3. [Preparation task](vorbereitung) 4. [Carrying out the experiment](versuch) 1. [Task 1](a1) 2. [Task 2](a2) 3. [Task 3](a3) 1. Introduction Data science is becoming increasingly important in production engineering, because access to data is getting easier thanks to the growing connectivity of, among other things, production equipment, and because computer performance has increased in recent years. For a structured approach to data science, the lecture introduced the CRISP model (Cross-industry standard process for data mining). This model is to be introduced, understood and worked through in practice in the exercises, using the use case presented in the lecture (Block C, CRISP phase 1). In the use case, a small-batch production line at the ISW assembles eight variants of a product to customer order. When a customer orders, the order enters the order system and, depending on the utilization of the line, is produced in the shortest possible time. The order volume is about 12000 orders per year, with a net profit of 0.75 € per order. The line houses numerous stations with different capabilities that contribute to producing the parts. In recent months, customer complaints about field failures of the product have been piling up. As a result, the number of orders dropped to 800 products per month. That is 20% less than before and, over a year, 1800 € less profit. The problem must be found in order to get back to the original number of orders per year. In the lecture, the first phase of the CRISP model was carried out (Block C, CRISP phase 1) and the data science goal was defined: using the sensors available on the line, it must be identified whether or not the product has a quality problem. In this exercise, the next two phases of the CRISP model (data understanding and data preparation) are carried out on the use case. 1.1 Learning objective of the exercise The exercise teaches model building in the area of supervised learning. You will know the fundamental difference between classification and regression, and be able to evaluate when a problem can be solved by classification or by regression. You will learn how model complexity correlates with overfitting and underfitting. Three popular algorithms are introduced, each in its classifier and its regressor variant, together with their strengths, weaknesses and parameters, so that you can select the best algorithm for a given machine-learning problem. You will also learn how trained models can be visualized and examined for accuracy, generalization performance and runtime. 1.2 Installing the required libraries The required Python libraries can be installed on the host of the Jupyter server with the following command.
The --user parameter installs the packages only for the logged-in user. The installation can take a few minutes.
###Code
%%bash
pip3 install numpy scipy matplotlib ipython scikit-learn pandas --user
###Output
_____no_output_____
###Markdown
1.3 Short description of the software libraries. Scikit-learn. Scikit-learn is an open-source project, i.e. the source code is publicly available and can be used free of charge. The scikit-learn project is constantly being developed and improved and has a very active user community. It contains a range of state-of-the-art machine learning algorithms as well as comprehensive documentation for each algorithm. Scikit-learn is a very popular tool and the leading Python library for machine learning; it is widely used in industry and academia. Scikit-learn works well with a number of other scientific Python tools, which are covered later in this chapter. It is also recommended to read the scikit-learn user guide and the API documentation to get further details and many more options for the various algorithms. Throughout the exercise the libraries NumPy, Matplotlib, SciPy, Pandas and scikit-learn are used. All of the code assumes the following imports:
###Code
from IPython.display import display, Math, Latex
import numpy as np
import matplotlib.pyplot as plt
import mglearn
import numpy as np
import sklearn
###Output
_____no_output_____
###Markdown
It is assumed that the code is executed in a Jupyter notebook. It is also recommended to enable the %matplotlib inline option in the notebook, which ensures that plots are shown directly in the notebook. Otherwise plt.show() has to be called to actually display a plot. Jupyter Notebook. The Jupyter Notebook is an interactive environment for running code in the browser. It is a great tool for exploratory data analysis and is widely used by data scientists. While the Jupyter Notebook supports many programming languages, we only need the Python support. The Jupyter Notebook makes it easy to combine code, text and images, and this whole book was in fact written as Jupyter notebooks. All of the code examples we provide can be downloaded from GitHub. NumPy. NumPy is one of the fundamental packages for scientific computing in Python. It provides functionality for multidimensional arrays, high-level mathematical functions such as linear algebra operations and the Fourier transform, as well as pseudorandom number generators. In scikit-learn, the NumPy array is the fundamental data structure: scikit-learn takes in data in the form of NumPy arrays, so any data you use must be passed as a NumPy array. The core functionality of NumPy is the ndarray class, a multidimensional (n-dimensional) array in which all elements must be of the same type. A NumPy array looks like this:
###Code
import numpy as np
x = np.array([[1, 2, 3], [4, 5, 6]])
print("x:\n{}".format(x))
###Output
x:
[[1 2 3]
[4 5 6]]
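###Markdown
A few basic ndarray operations, added here as a small illustrative sketch (not part of the original exercise), show what working with the array class looks like:
###Code
# Illustrative sketch: shape, dtype and vectorized operations on an ndarray.
a = np.array([[1, 2, 3], [4, 5, 6]])
print("shape:", a.shape)
print("dtype:", a.dtype)
print("elementwise square:\n", a ** 2)
print("column sums:", a.sum(axis=0))
###Output
_____no_output_____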
###Markdown
We will use NumPy a lot in this book, and we will refer to objects of the NumPy ndarray class as "NumPy arrays" or simply "arrays". SciPy. SciPy is a collection of functions for scientific computing in Python. Among other things it provides advanced linear algebra routines, mathematical function optimization, signal processing, special mathematical functions and statistical distributions. scikit-learn draws on SciPy's collection of functions to implement its algorithms. The most important part of SciPy for us is scipy.sparse: it provides sparse matrices, another representation used for data in scikit-learn. Sparse matrices are used whenever we want to store a 2D array that contains mostly zeros:
###Code
from scipy import sparse
# Create a 2D NumPy array with a diagonal of ones, and zeros everywhere else
eye = np.eye(4)
print("NumPy array:\n{}".format(eye))
# Convert the NumPy array to a SciPy sparse matrix in CSR format
# Only the nonzero entries are stored
sparse_matrix = sparse.csr_matrix(eye)
print("\nSciPy sparse CSR matrix:\n{}".format(sparse_matrix))
###Output
SciPy sparse CSR matrix:
(0, 0) 1.0
(1, 1) 1.0
(2, 2) 1.0
(3, 3) 1.0
###Markdown
Usually it is not possible to create dense representations of sparse data (they would not fit into memory), so we need to create sparse representations directly. Here is one way to create the same sparse matrix as before, in the COO format:
###Code
data = np.ones(4)
row_indices = np.arange(4)
col_indices = np.arange(4)
eye_coo = sparse.coo_matrix((data, (row_indices, col_indices)))
print("COO representation:\n{}".format(eye_coo))
###Output
COO representation:
(0, 0) 1.0
(1, 1) 1.0
(2, 2) 1.0
(3, 3) 1.0
###Markdown
More details on SciPy sparse matrices can be found in the SciPy lecture notes. Matplotlib. Matplotlib is the primary scientific plotting library in Python. It provides functions for creating publication-quality visualizations such as line charts, histograms, scatter plots and so on. Visualizing the data and the various models can provide important insights. In this exercise, Matplotlib is used exclusively for visualizations. When working in the Jupyter Notebook, you can show Matplotlib figures directly in the browser using the %matplotlib notebook and %matplotlib inline commands. Using %matplotlib notebook is recommended because it provides an interactive environment. Below is code for an example plot of a sine curve visualized with Matplotlib (cf. Figure 1-1):
###Code
%matplotlib inline
import matplotlib.pyplot as plt
# Generate a sequence of numbers from -10 to 10 with 100 steps in between
x = np.linspace(-10, 10, 100)
# Create a second array using sine
y = np.sin(x)
# The plot function makes a line chart of one array against another
plt.plot(x, y, marker="x")
###Output
_____no_output_____
###Markdown
Figure 1-1. Simple line plot of the sine function with matplotlib. Pandas. Pandas is a Python library for data wrangling and analysis. It is built around a data structure called the DataFrame, modeled after the R DataFrame. Simply put, a pandas DataFrame is a table, similar to an Excel spreadsheet. Pandas provides a wide range of methods to modify and operate on this table; in particular, it allows SQL-like queries and joins of tables. It also allows each column of a table to have its own data type (e.g. integers, dates, floating-point numbers and strings). Another valuable feature of pandas is its ability to ingest data from a wide variety of file formats and databases such as SQL, Excel and CSV (comma-separated values) files. Below is an example of creating a DataFrame from a dictionary:
###Code
import pandas as pd
# create a simple dataset of people
data = {'Name': ["John", "Anna", "Peter", "Linda"],
'Location' : ["New York", "Paris", "Berlin", "London"],
'Age' : [24, 13, 53, 33]
}
data_pandas = pd.DataFrame(data)
# IPython.display allows "pretty printing" of dataframes
# in the Jupyter notebook
display(data_pandas)
###Output
_____no_output_____
###Markdown
There are several ways to query this table. For example:
###Code
# Select all rows that have an age column greater than 40
display(data_pandas[data_pandas.Age > 40])
###Output
_____no_output_____
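###Markdown
Since the text mentions SQL-like queries and joins, here is a small illustrative sketch (not part of the original exercise) of a join between two DataFrames; the `purchases` table is made up for the example.
###Code
# Illustrative sketch: an SQL-like left join of two small DataFrames with pandas.merge.
purchases = pd.DataFrame({'Name': ["John", "Anna", "Linda"],
                          'Item': ["Boat", "Car", "Bike"]})
merged = pd.merge(data_pandas, purchases, on="Name", how="left")
display(merged)
###Output
_____no_output_____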
###Markdown
Mglearn. Mglearn is a library of helper functions written for the book Introduction to Machine Learning with Python. The library essentially serves to make code blocks more compact and readable. If you are interested, the source code can be inspected in the repository [todo]. When you see a call to mglearn in the code, it is usually a quick way to get a good visualization of the data at hand. 2. Supervised Learning. Supervised learning is one of the most commonly used and most successful forms of machine learning. This chapter introduces three popular algorithms. Supervised learning is applied whenever an output is to be predicted from a given input and example data for input/output pairs is available. A machine learning model is trained from these input/output pairs; the goal is to make accurate predictions for new, unseen data. Supervised learning often requires human effort to build the training set (labelling), but afterwards it frequently automates and speeds up a task that would otherwise be tedious or infeasible. 2.1 Classification and regression. There are two main categories of supervised learning problems, known as classification and regression. In classification, the goal is to predict a class label from a predefined set of possibilities. Classification is commonly split into binary classification, i.e. the special case of distinguishing between exactly two classes, and multiclass classification, i.e. classification between more than two classes. Binary classification answers a yes/no question. Classifying produced components as "OK" or "not OK" is an example of a binary classification problem; the yes/no question in this task is "Is the produced component OK?". Another example is anomaly detection, where it must be determined which fault class, from a defined list of classes, is present. For regression tasks, the goal is to predict a real (continuous) number. Predicting a process parameter of the Lukas-Nülle line from the available sensor data is an example of a regression task: the predicted value is a scalar and can be any number within a predefined range. Another example of a regression task is predicting the sales of a product from attributes such as previous revenues, the season and the weather forecast. To distinguish between classification and regression tasks, one checks whether there is continuity in the output. If there is continuity between possible outcomes, the problem is a regression problem. Example: predicting annual income: there is a clear continuity in the output, since whether a person earns 40,000 € or 40,001 € per year makes no tangible difference. For the task of determining the fault class of a produced component (which is a classification problem), there is no such continuity. 2.2 Generalization, overfitting and underfitting. In supervised learning, a model is to be built from training data.
This model should then be able to make accurate predictions for new, unseen data that has the same characteristics as the training set we used. If a model is able to make accurate predictions on unseen data, it is able to generalize from the training set to the test set. The goal is to train a model with the highest possible generalization performance. Normally a model is trained so that it can make accurate predictions on the training set. If the training and test sets have enough in common, we expect the model to also be accurate on the test set. However, there are cases in which this assumption does not hold. When training complex models, it is possible to reach a prediction accuracy of nearly 100% on the training set. To illustrate the point, consider the following example: you are a data scientist and want to predict whether a customer will buy a boat, given records of previous boat buyers and of customers who are known not to be interested in buying a boat. Suppose the customer data listed in Table 2-1 is available to you. Table 2-1. Example data about customers. After looking at the data, you come up with the following rule: "If the customer is older than 45 and has fewer than 3 children or is not divorced, then they want to buy a boat." This rule achieves 100% accuracy! Many different rules can be applied to this problem. For example, each age appears only once in the data, so one could use the rule that people aged 66, 52, 53 or 58 want to buy a boat, while all people of any other age do not. The goal, however, remains to determine whether new customers are likely to buy a boat, and 100% accuracy on the training set does not help with that. We cannot expect the rules above to predict well for unseen customers: they seem too complex and are supported by very little data. A measure of whether an algorithm performs well on new data is its evaluation on the test set. Intuitively, we expect simple models to generalize better to new data. If the rule "people over 50 want to buy a boat" held, it would explain the behaviour of all customers, but it would ignore important criteria such as age, children and marital status. Building a model that is too complex for the amount of information available is called overfitting. Overfitting occurs when you fit a model too closely to the particularities of the training set; the trained model is then excellent at predicting the training set but cannot generalize the learned knowledge to new data. Conversely, if too simple a model is chosen, it cannot capture all aspects of the variability in the data, as described for example by the rule "everybody who owns a house buys a boat". Such a model will perform poorly even on the training set. Choosing too simple a model is called underfitting.
The more complex the model, the better it can predict the training data. If the model becomes too complex, however, it focuses too strongly on the individual data points of the training set and will struggle to generalize to new data. In between lies the so-called sweet spot, where the model achieves the best generalization performance. This paradigm describes the trade-off between overfitting and underfitting; the relationship is illustrated in Figure 2-1. Figure 2-1. Trade-off of model complexity against training and test accuracy. 2.3. Relation of model complexity to dataset size. Note that model complexity is closely tied to the variation of the data contained in the training set: the greater the variety of data points in the dataset at hand, the more complex a model can be used without overfitting. Collecting large amounts of data often results in greater data variety, so larger datasets allow building more complex models. Care must be taken, however, because duplicating identical data points or collecting very similar data does not add value. Using large datasets and correspondingly complex models can often lead to excellent predictive performance, but in practice only small, poorly diversified datasets are often available, and this must be taken into account when selecting the algorithm. 2.4. Algorithms. This subchapter introduces several popular machine learning algorithms. It explains how models are trained from datasets and how predictions are made on unseen data. It also evaluates how model complexity affects each of these models, examines the strengths and weaknesses of the individual algorithms and how they can best be applied, and discusses the relevance of individual parameters to how the algorithms work. For each algorithm, both the classification and the regression variant are explained. 2.4.1. k-Nearest Neighbors. In its simplest classifier variant, the k-Nearest-Neighbors (k-NN) algorithm considers exactly one nearest neighbor: the neighbor with the shortest distance between training and test data point. The prediction is then the known output for that training point. Figure 2-4 illustrates this for the case of classification on the forge dataset:
###Code
mglearn.plots.plot_knn_classification(n_neighbors=1)
###Output
_____no_output_____
###Markdown
Abbildung 2-4. Vorhersagen des One-Nearest-Neighbor-Modells auf dem Forge-Datensatz In der Abbildung wurden drei neue Datenpunkte hinzugefügt, die als Sterne dargestellt werden. Für jeden neuen Datenpunkt wurde der nächste Punkt im Trainingsset markiert. Die Vorhersage des One-Nearest-Neighbor-Algorithmus ist die Bezeichnung dieses Punktes (dargestellt durch die Farbe des Kreuzes). Anstatt nur den nächstgelegenen Nachbarn zu berücksichtigen, kann auch eine beliebige Anzahl k parametriert werden. Wenn mehr als ein Nachbar in Betracht gezogen wird, wird das vorhergesagte Label per Mehrheitsentscheid (Voting) bestimmt. Es wird für jeden Testpunkt ermittelt, wie viele Nachbarn mit bestimmter Klassenzugehörigkeit in der parametrierten Nachbarschaft vorhanden sind. In der nachfolgenden Abbildung 2-5 wird die Anzahl der nächsten Nachbarn mit k = 3 parametriert.
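Der Mehrheitsentscheid selbst lässt sich isoliert skizzieren (vereinfachte Skizze; die Labels der k nächsten Nachbarn sind hier frei gewählte Beispielwerte):
###Code
from collections import Counter

# assumed labels of the k = 3 nearest neighbors of one test point
neighbor_labels = [1, 0, 1]

# the predicted class is the most common label among the neighbors
prediction = Counter(neighbor_labels).most_common(1)[0][0]
print(prediction)
###Output
1
###Markdown
Die folgende Visualisierung zeigt das Ergebnis für k = 3 auf dem Forge-Datensatz: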
###Code
mglearn.plots.plot_knn_classification(n_neighbors=3)
###Output
_____no_output_____
###Markdown
Abbildung 2-5. Vorhersagen des Three-Nearest-Neighbors-Modells auf dem Forge-Datensatz Die Vorhersage des 3-NN-Klassifikators für den neuen Datenpunkt oben links in der Abbildung ist nun nicht mehr dieselbe wie beim 1-NN-Klassifikator. Während es sich bei dieser Darstellung um ein binäres Klassifikationsproblem handelt, kann diese Methode auf Datensätze mit beliebig vielen Klassen angewendet werden. Die Voting-Strategie der Multi-Klassen-k-NN-Klassifikation funktioniert analog zur Binärklassifikation. Im Weiteren soll die k-NN-Klassifikation mit Scikit-Learn durchgeführt werden. Dazu werden die Daten zunächst in einen Trainings- bzw. Testdatensatz aufgeteilt. Anschließend soll die Generalisierungsleistung bewertet werden.
###Code
from sklearn.model_selection import train_test_split
X, y = mglearn.datasets.make_forge()
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
###Output
_____no_output_____
###Markdown
Als Nächstes wird die Klasse importiert und instanziiert. Hierbei können auch die Parameter eingestellt werden, wie z.B. die Anzahl der zu verwendenden Nachbarn k.
###Code
from sklearn.neighbors import KNeighborsClassifier
clf = KNeighborsClassifier(n_neighbors=3)
###Output
_____no_output_____
###Markdown
Nun wird der Klassifikator mit dem Trainingsset trainiert. Der KNeighborsClassifier speichert hierzu lediglich den Datensatz, sodass bei der Vorhersage die nächsten Nachbarn berechnet werden können:
###Code
clf.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Um Vorhersagen über die Testdaten zu treffen, wird die predict-Methode aufgerufen. Für jeden Datenpunkt im Testset berechnet diese seine nächsten Nachbarn im Trainingsset und ermittelt die häufigste Klasse unter diesen:
###Code
print("Test set predictions: {}".format(clf.predict(X_test)))
###Output
Test set predictions: [1 0 1 0 1 0 0]
###Markdown
Um zu beurteilen, wie gut das Modell generalisiert, kann die score-Methode mit den Testdaten und den Testlabels aufgerufen werden:
###Code
print("Test set accuracy: {:.2f}".format(clf.score(X_test, y_test)))
###Output
Test set accuracy: 0.86
###Markdown
Das Modell erzielt eine Genauigkeit von 86%. Das bedeutet, dass das Modell die Klasse für 86% der Datenpunkte des Testdatensatzes korrekt vorhergesagt hat. Analyse des KNeighborsClassifiers Für zweidimensionale Datensätze kann die Vorhersage für alle möglichen Testpunkte in der xy-Ebene dargestellt werden. Die Ebene wird entsprechend der Klasse eingefärbt, die einem Punkt in dieser Region zugeordnet wird. Dies ermöglicht es, die Entscheidungsgrenze zu betrachten. Die Entscheidungsgrenze beschreibt die Trennlinie zwischen der Zuordnung zu Klasse 0 bzw. zu Klasse 1 durch den Algorithmus. Der folgende Code erzeugt die Visualisierungen der Entscheidungsgrenzen für einen, drei und neun Nachbarn, wie in Abbildung 2-6 dargestellt:
###Code
fig, axes = plt.subplots(1, 3, figsize=(10, 3))
for n_neighbors, ax in zip([1, 3, 9], axes):
# the fit method returns the object self, so we can instantiate
# and fit in one line
clf = KNeighborsClassifier(n_neighbors=n_neighbors).fit(X, y)
mglearn.plots.plot_2d_separator(clf, X, fill=True, eps=0.5, ax=ax, alpha=.4)
mglearn.discrete_scatter(X[:, 0], X[:, 1], y, ax=ax)
ax.set_title("{} neighbor(s)".format(n_neighbors))
ax.set_xlabel("feature 0")
ax.set_ylabel("feature 1")
axes[0].legend(loc=3)
###Output
_____no_output_____
###Markdown
Abbildung 2-6. Entscheidungsgrenzen, die durch das k-NN Modell für Werte von k={1,3,9} erstellt wurden Wie links in der Abbildung 2-6 dargestellt, führt die Verwendung eines einzelnen Nachbarn zu einer Entscheidungsgrenze, die den Trainingsdaten genau folgt. Die Berücksichtigung von mehr Nachbarn führt zu einer glatteren Entscheidungsgrenze. Eine glattere Grenze entspricht einem einfacheren Modell. Mit anderen Worten: Die Verwendung weniger Nachbarn entspricht einer hohen Modellkomplexität, und die Verwendung vieler Nachbarn entspricht einer geringen Modellkomplexität. Betrachtet man den Extremfall, dass die Anzahl der Nachbarn der Anzahl aller Datenpunkte im Trainingsset entspricht, so hätte jeder Testpunkt genau die gleichen Nachbarn (alle Trainingspunkte) und alle Vorhersagen wären gleich: die Klasse, die im Trainingsset am häufigsten ist. Im Weiteren soll der Zusammenhang zwischen Modellkomplexität und Generalisierungsleistung untersucht werden. Dies wird auf der Grundlage des realen Open-Source-Datensatzes Breast-Cancer durchgeführt. Erneut wird der Datensatz in ein Trainings- und ein Testset aufgeteilt. Danach wird die Genauigkeit auf dem Trainings- und dem Testset für eine unterschiedliche Anzahl von Nachbarn ausgewertet. Die Ergebnisse sind in Abbildung 2-7 dargestellt:
###Code
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
cancer.data, cancer.target, stratify=cancer.target, random_state=66)
training_accuracy = []
test_accuracy = []
# try n_neighbors from 1 to 10
neighbors_settings = range(1, 11)
for n_neighbors in neighbors_settings:
# build the model
clf = KNeighborsClassifier(n_neighbors=n_neighbors)
clf.fit(X_train, y_train)
# record training set accuracy
training_accuracy.append(clf.score(X_train, y_train))
# record generalization accuracy
test_accuracy.append(clf.score(X_test, y_test))
plt.plot(neighbors_settings, training_accuracy, label="training accuracy")
plt.plot(neighbors_settings, test_accuracy, label="test accuracy")
plt.ylabel("Accuracy")
plt.xlabel("n_neighbors")
plt.legend()
###Output
_____no_output_____
###Markdown
Die Darstellung zeigt die Genauigkeit auf dem Trainings- und Testset auf der y-Achse und die Anzahl der Nachbarn k auf der x-Achse. In der Abbildung können einige der Merkmale von Over- und Underfitting erkannt werden. Es ist zu beachten, dass die Berücksichtigung von weniger Nachbarn einem komplexeren Modell entspricht. Unter Einsatz eines 1-NN Klassifikators ist die Vorhersage auf dem Trainingsset perfekt. Werden mehr Nachbarn berücksichtigt, wird das Modell einfacher und die Trainingsgenauigkeit sinkt. Die Genauigkeit des Testsets für die Verwendung eines einzelnen Nachbarn ist geringer als bei der Verwendung mehrerer Nachbarn. Dies deutet darauf hin, dass die Verwendung des einzelnen nächsten Nachbarn zu einem zu komplexen Modell führt. Werden allerdings 10 Nachbarn betrachtet, ist das Modell zu einfach und die Modellleistung sinkt. Die beste Modellleistung befindet sich mit etwa sechs Nachbarn in der Mitte. **k-Neighbors Regression** Nun wird die Regressionsvariante des k-Nearest-Neighbors Algorithmus vorgestellt. Auch hier wird zunächst mit der 1-NN Variante begonnen und der wave-Datensatz eingesetzt. In der Abbildung werden drei Testdatenpunkte (grüne Sterne auf der x-Achse) hinzugefügt. Die Vorhersage mit einem einzelnen Nachbarn ist der Zielwert des nächsten Nachbarn. Diese Vorhersagen sind in Abbildung 2-8 als blaue Sterne dargestellt:
###Code
mglearn.plots.plot_knn_regression(n_neighbors=1)
###Output
_____no_output_____
###Markdown
Abbildung 2-8. Vorhersagen der One-Neighbor-Regression auf dem wave-Datensatz Auch hier können mehrere nächste Nachbarn für die Regression verwendet werden. Bei der Verwendung mehrerer nächster Nachbarn entspricht die Vorhersage dem Mittelwert der Zielwerte der relevanten Nachbarn (vgl. Abbildung 2-9).
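Dieser Mittelwert lässt sich direkt mit NumPy skizzieren (hypothetische Skizze mit frei gewählten Zielwerten der k = 3 nächsten Nachbarn):
###Code
import numpy as np

# assumed target values of the k = 3 nearest neighbors of one test point
neighbor_targets = np.array([-1.2, -0.8, -1.0])

# the k-NN regression prediction is simply their mean
print(neighbor_targets.mean())
###Output
-1.0
###Markdown
Abbildung 2-9 zeigt dieses Vorgehen auf dem wave-Datensatz: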
###Code
mglearn.plots.plot_knn_regression(n_neighbors=3)
###Output
_____no_output_____
###Markdown
Abbildung 2-9. Vorhersagen der Regression von drei Nachbarn auf dem Wellendatensatz Der k-NN Algorithmus für die Regression ist in der KNeighbors Regressor Klasse in scikit-learn implementiert. Dieser wird ähnlich wie der KNeighborsClassifier verwendet:
###Code
from sklearn.neighbors import KNeighborsRegressor
X, y = mglearn.datasets.make_wave(n_samples=40)
# split the wave dataset into a training and a test set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# instantiate the model and set the number of neighbors to consider to 3
reg = KNeighborsRegressor(n_neighbors=3)
# fit the model using the training data and training targets
reg.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Nun können Vorhersagen am Testdatensatz durchgeführt werden.
###Code
print("Test set predictions:\n{}".format(reg.predict(X_test)))
###Output
Test set predictions:
[-0.05396539 0.35686046 1.13671923 -1.89415682 -1.13881398 -1.63113382
0.35686046 0.91241374 -0.44680446 -1.13881398]
###Markdown
Das Modell kann außerdem mit der score-Methode bewertet werden, die für Regressoren den sogenannten R²-Score zurückgibt. Der R²-Score, auch bekannt als Determinationskoeffizient, ist ein Maß für die Güte einer Vorhersage eines Regressionsmodells und ergibt in der Regel einen Wert zwischen 0 und 1. Ein Wert von 1 entspricht einer perfekten Vorhersage, und ein Wert von 0 entspricht einem konstanten Modell, das nur den Mittelwert der Trainingszielwerte y_train voraussagt.
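Zur Veranschaulichung lässt sich der R²-Score zunächst direkt aus seiner Definition berechnen (Skizze; verwendet werden die bereits vorhandenen Variablen reg, X_test und y_test):
###Code
import numpy as np

y_pred = reg.predict(X_test)
# residual sum of squares and total sum of squares
ss_res = np.sum((y_test - y_pred) ** 2)
ss_tot = np.sum((y_test - np.mean(y_test)) ** 2)
# coefficient of determination: 1 - SS_res / SS_tot
print("R^2: {:.2f}".format(1 - ss_res / ss_tot))
###Output
R^2: 0.83
###Markdown
Die score-Methode liefert denselben Wert direkt: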
###Code
print("Test set R^2: {:.2f}".format(reg.score(X_test, y_test)))
###Output
Test set R^2: 0.83
###Markdown
Hier liegt der R²-Wert bei 0,83, was auf eine relativ gute Modellgüte hinweist. Analysieren des KNeighborsRegressors Für einen eindimensionalen Datensatz ist ersichtlich, wie die Prognosen für verschiedene Merkmalswerte aussehen (Abbildung 2-10). Dazu wird ein Testdatensatz erstellt, der aus vielen Punkten auf einer Linie besteht:
###Code
fig, axes = plt.subplots(1, 3, figsize=(15, 4))
# create 1,000 data points, evenly spaced between -3 and 3
line = np.linspace(-3, 3, 1000).reshape(-1, 1)
for n_neighbors, ax in zip([1, 3, 9], axes):
# make predictions using 1, 3, or 9 neighbors
reg = KNeighborsRegressor(n_neighbors=n_neighbors)
reg.fit(X_train, y_train)
ax.plot(line, reg.predict(line))
ax.plot(X_train, y_train, '^', c=mglearn.cm2(0), markersize=8)
ax.plot(X_test, y_test, 'v', c=mglearn.cm2(1), markersize=8)
ax.set_title(
"{} neighbor(s)\n train score: {:.2f} test score: {:.2f}".format(
n_neighbors, reg.score(X_train, y_train),
reg.score(X_test, y_test)))
ax.set_xlabel("Feature")
ax.set_ylabel("Target")
axes[0].legend(["Model predictions", "Training data/target",
"Test data/target"], loc="best")
###Output
_____no_output_____
###Markdown
Abbildung 2-10. Vergleich von Vorhersagen der Regression des k-NN Algorithmus für verschiedene k-Werte Wie aus dem Diagramm ersichtlich, hat mit nur einem einzigen Nachbarn jeder Punkt im Trainingsset einen offensichtlichen Einfluss auf die Vorhersagen, und die vorhergesagten Werte durchlaufen alle Datenpunkte. Dies führt zu einer sehr instabilen Vorhersage. Die Berücksichtigung von mehr Nachbarn führt zu glatteren Vorhersagen, die jedoch nicht mehr so gut zu den Trainingsdaten passen. **Stärken, Schwächen und Parameter** Grundsätzlich gibt es zwei wichtige Parameter für den KNeighbors-Klassifikator: die Anzahl der Nachbarn k und die Wahl des Abstandsmaßes zwischen den Punkten. In der Praxis funktioniert die Verwendung einer kleinen Anzahl von Nachbarn wie drei oder fünf häufig gut, jedoch ist es stets ratsam, die Parameter zu variieren. Die Wahl des optimalen Abstandsmaßes ist nicht trivial. Standardmäßig wird die euklidische Distanz verwendet, was meist gut funktioniert. Eine der Stärken von k-NN ist, dass das Modell sehr einfach zu verstehen ist und oft eine vernünftige Leistung ohne viele Anpassungen erzielt wird. Die Verwendung dieses Algorithmus ist eine gute Methode, um einen Datensatz zu bearbeiten, bevor fortgeschrittenere Techniken in Betracht gezogen werden. Die Bildung eines k-NN Modells ist grundsätzlich nicht rechenintensiv. Wird das Trainingsset jedoch groß, kann die Vorhersagedauer hoch sein. Bei der Verwendung des k-NN-Algorithmus ist es wichtig, dass die Daten vorverarbeitet werden. Dieser Ansatz funktioniert oft unzureichend bei Datensätzen mit vielen Features (Hunderte oder mehr) und bei Datensätzen, bei denen die meisten Feature-Werte nahe 0 sind (sogenannte spärliche Datensätze). Obwohl der k-NN Algorithmus leicht zu verstehen ist, wird er in der Praxis nicht oft verwendet. Dies hat den Grund, dass die Vorhersage zeitaufwändig ist und der Algorithmus nicht in der Lage ist, eine große Zahl an verschiedenen Features zu handhaben. Der Algorithmus, der im nächsten Schritt thematisiert wird, hat keinen dieser Nachteile. 2.4.2. Lineare Modelle Lineare Modelle sind eine Klasse von Modellen, die in der Praxis weit verbreitet sind und in den letzten Jahrzehnten intensiv untersucht wurden. Lineare Modelle tätigen eine Vorhersage mit einer linearen Funktion der Eingangsmerkmale. **Lineare Modelle für die Regression** Für die Regression sieht die allgemeine Prädiktionsgleichung für ein lineares Modell wie folgt aus: ŷ = w[0] * x[0] + w[1] * x[1] + ... + w[p] * x[p] + b Hier bezeichnen x[0] bis x[p] die Merkmale (Anzahl der Merkmale: p+1) eines einzelnen Datenpunktes, w und b sind Parameter des erlernten Modells, und ŷ ist die Vorhersage des Modells. Für einen Datensatz mit einem einzelnen Merkmal ergibt sich der folgende Zusammenhang: ŷ = w[0] * x[0] + b Dies entspricht einer klassischen Geradengleichung. Hierbei ist w[0] die Steigung und b der y-Achsenabschnitt. Für weitere Merkmale enthält w die Steigungen entlang der jeweiligen Merkmalsachse. Die Vorhersage des Modells kann auch als die gewichtete Summe der Eingabemerkmale betrachtet werden, wobei die Gewichte (die negativ sein können) durch die Einträge von w gegeben sind. Nun werden die Parameter w[0] und b auf dem bereits bekannten eindimensionalen wave-Datensatz trainiert. Dies kann durch den folgenden Befehl geschehen (vgl. Abbildung 2-11).
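Zuvor eine kurze Skizze der Prädiktionsgleichung als Skalarprodukt in NumPy (hypothetische, frei gewählte Beispielwerte für w, b und x):
###Code
import numpy as np

# assumed learned parameters and one data point with three features
w = np.array([0.5, -1.0, 2.0])
b = 0.1
x = np.array([1.0, 2.0, 0.5])

# y_hat = w[0]*x[0] + w[1]*x[1] + w[2]*x[2] + b
y_hat = np.dot(w, x) + b
print("y_hat = {:.2f}".format(y_hat))
###Output
y_hat = -0.40
###Markdown
Nun das Training und die Visualisierung auf dem wave-Datensatz: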
###Code
mglearn.plots.plot_linear_regression_wave()
###Output
w[0]: 0.393906 b: -0.031804
###Markdown
Abbildung 2-11. Vorhersage eines linearen Modells auf dem wave-Datensatz In der Abbildung 2-11 beträgt die Steigung w[0] etwa 0,4. Der y-Achsenabschnitt liegt mit -0,032 knapp unter dem Nullpunkt des Koordinatensystems. Lineare Modelle für die Regression können als Regressionsmodelle charakterisiert werden, bei denen die Vorhersage eine Linie für ein einzelnes Merkmal, eine Ebene bei Verwendung von zwei Merkmalen oder eine Hyperebene in höheren Dimensionen (d.h. bei Verwendung von mehr Merkmalen) ist. Es gibt verschiedene lineare Modelle für die Regression. Der Unterschied zwischen diesen Modellen liegt in der Art und Weise, wie die Modellparameter w und b aus den Trainingsdaten gebildet werden, und darin, wie die Modellkomplexität kontrolliert werden kann. Im Weiteren werden einige beliebte lineare Modelle für die Regression vorgestellt. **Lineare Regression** Die lineare Regression (auch bekannt als Methode der gewöhnlichen kleinsten Quadrate, engl. Ordinary Least Squares, OLS) ist die einfachste lineare Methode der Regression. Die lineare Regression findet die Parameter w und b, die den mittleren quadratischen Fehler zwischen den Vorhersagen und den wahren Regressionszielen y auf dem Trainingsset minimieren. Der mittlere quadratische Fehler ist der Mittelwert der quadrierten Differenzen zwischen den Vorhersagen und den wahren Werten. Die lineare Regression hat keine einstellbaren Hyperparameter. Folglich besteht keine Möglichkeit, die Komplexität des Modells zu kontrollieren. Unten folgt der Code, der das Modell erzeugt. Die Abbildung 2-11 enthält den entsprechenden Plot.
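Der mittlere quadratische Fehler (MSE), der hier minimiert wird, lässt sich mit NumPy skizzieren (hypothetische Beispielwerte für wahre Zielwerte und Vorhersagen):
###Code
import numpy as np

# assumed true targets and model predictions
y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 1.5, 2.5])

# mean of the squared differences
mse = np.mean((y_true - y_pred) ** 2)
print(mse)
###Output
0.25
###Markdown
Nun wird das Modell mit scikit-learn trainiert: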
###Code
from sklearn.linear_model import LinearRegression
X, y = mglearn.datasets.make_wave(n_samples=60)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
lr = LinearRegression().fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Die Steigungs-Parameter (w), auch Gewichte oder Koeffizienten genannt, werden im Attribut coef_ gespeichert, während der Offset oder Intercept (b) im Attribut intercept_ gespeichert wird:
###Code
print("lr.coef_: {}".format(lr.coef_))
print("lr.intercept_: {}".format(lr.intercept_))
###Output
lr.coef_: [0.39390555]
lr.intercept_: -0.031804343026759746
###Markdown
Das intercept_-Attribut ist stets ein Skalar, während das coef_-Attribut ein NumPy-Array mit einem Eintrag pro Eingabefeature ist. Da im wave-Datensatz nur ein Eingabefeature existiert, hat lr.coef_ nur einen Eintrag. Weiter wird die Modellgüte (R²-Score) auf dem Trainings- und Testset bestimmt:
###Code
print("Training set score: {:.2f}".format(lr.score(X_train, y_train)))
print("Test set score: {:.2f}".format(lr.score(X_test, y_test)))
###Output
Training set score: 0.67
Test set score: 0.66
###Markdown
Der ermittelte R²-Wert von etwa 0,66 ist nicht besonders gut. Die Ergebnisse auf dem Trainings- und dem Testset liegen jedoch sehr eng beieinander. Dies deutet auf ein Underfitting des Modells hin. Für diesen eindimensionalen Datensatz besteht kaum die Gefahr von Overfitting, da das Modell sehr einfach (oder eingeschränkt) ist. Bei höherdimensionalen Datensätzen (d.h. Datensätzen mit einer großen Anzahl von Features) werden lineare Modelle leistungsfähiger, und es besteht die Gefahr von Overfitting. Im Weiteren soll das Verhalten von linearen Regressoren bei höherdimensionalen Datensätzen, wie dem Boston-Housing-Datensatz, untersucht werden. Der Datensatz enthält 506 Samples und 105 abgeleitete Features. Zunächst wird der Datensatz geladen und in ein Trainings- und ein Testset aufgeteilt. Danach wird das lineare Regressionsmodell trainiert:
###Code
X, y = mglearn.datasets.load_extended_boston()
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
lr = LinearRegression().fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Beim Vergleich des R²-Scores des Regressionsmodells auf dem Trainings- und dem Testset ist erkennbar, dass die Vorhersage auf dem Trainingsset hervorragend ist. Der R²-Wert auf dem Testset ist mit 0,61 deutlich schlechter:
###Code
print("Training set score: {:.2f}".format(lr.score(X_train, y_train)))
print("Test set score: {:.2f}".format(lr.score(X_test, y_test)))
###Output
Training set score: 0.95
Test set score: 0.61
###Markdown
Diese Diskrepanz zwischen der Modellleistung auf dem Trainingsset bzw. Testset ist ein deutliches Zeichen für Overfitting. Um dies zu vermeiden, sollte ein Modell angewandt werden, das in der Lage ist, die Modellkomplexität zu kontrollieren. Eine der am häufigsten verwendeten Alternativen zur herkömmlichen linearen Regression ist die Ridge-Regression, die im Folgenden untersucht wird. **Ridge Regression** Die Ridge-Regression ist ebenfalls ein lineares Modell für die Regression. Es gilt folglich derselbe mathematische Zusammenhang wie für die gewöhnlichen kleinsten Quadrate (OLS). Bei der Ridge-Regression werden die Koeffizienten (w) so trainiert, dass sie die folgenden beiden Kriterien erfüllen: - eine gute Vorhersageleistung auf dem Trainingsset - die skalaren Werte der Koeffizienten (w) sollen möglichst nahe Null sein Dieses Vorgehen nennt man Regularisierung. Intuitiv bedeutet dies, dass jedes Merkmal so wenig Einfluss wie möglich auf das Ergebnis haben sollte, während dennoch eine gute Vorhersage erzielt wird. Bei der Regularisierung wird ein Modell explizit eingeschränkt, um Overfitting zu vermeiden. Die von der Ridge-Regression verwendete Form ist auch als L2-Regularisierung bekannt. Die Ridge-Regression ist in linear_model.Ridge implementiert. Nachfolgend wird die Ridge-Regression auf das Boston-Housing-Datenset angewendet.
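Das Optimierungskriterium der Ridge-Regression lässt sich als MSE plus L2-Strafterm skizzieren (hypothetische Beispielwerte; alpha gewichtet den Strafterm):
###Code
import numpy as np

# assumed coefficients, targets and predictions of some linear model
w = np.array([0.5, -2.0, 1.5])
y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.2, 1.8, 3.1])
alpha = 1.0

mse = np.mean((y_true - y_pred) ** 2)
l2_penalty = alpha * np.sum(w ** 2)
# ridge minimizes the sum of both terms
print("{:.2f}".format(mse + l2_penalty))
###Output
6.53
###Markdown
In scikit-learn übernimmt die Klasse Ridge diese Optimierung: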
###Code
from sklearn.linear_model import Ridge
ridge = Ridge().fit(X_train, y_train)
print("Training set score (Ridge): {:.2f}".format(ridge.score(X_train, y_train)))
print("Test set score(Ridge): {:.2f}".format(ridge.score(X_test, y_test)))
###Output
Training set score (Ridge): 0.89
Test set score(Ridge): 0.75
###Markdown
Der R²-Score von Ridge auf dem Trainingsset ist mit 0,89 niedriger als bei LinearRegression, wohingegen der R²-Score auf dem Testset mit 0,75 höher ist. Die verbesserte Generalisierungsleistung auf die unbekannten Daten des Testsets wird durch die Einschränkung des Modells erzielt. Ein weniger komplexes Modell bedeutet eine schlechtere Modellleistung auf dem Trainingsset, resultiert jedoch in einer besseren Generalisierungsleistung. Da die Generalisierungsleistung eines Modells ein sehr wichtiges Kriterium darstellt, sollten regularisierende Modelle bevorzugt werden. Das Ridge-Modell beschreibt einen Kompromiss zwischen der Einfachheit des Modells (Koeffizienten nahe Null) und der Leistung auf dem Trainingsset. Die Gewichtung der Einfachheit des Modells im Vergleich zur Leistung auf dem Trainingsset kann durch den Parameter alpha festgelegt werden. Im vorherigen Beispiel wurde der Standardparameter alpha=1.0 verwendet. Die Einstellung des Parameters alpha muss unter Berücksichtigung des Datensatzes geschehen. Die Erhöhung des alpha-Parameters resultiert in kleineren Koeffizienten (näher an Null, also einem einfacheren Modell). Dies verringert zwar die Leistung des Regressors auf dem Trainingsset, kann jedoch die Generalisierungsleistung verbessern.
###Code
ridge10 = Ridge(alpha=10).fit(X_train, y_train)
print("Training set score: {:.2f}".format(ridge10.score(X_train, y_train)))
print("Test set score: {:.2f}".format(ridge10.score(X_test, y_test)))
###Output
Training set score: 0.79
Test set score: 0.64
###Markdown
Durch die Verringerung von alpha werden die Koeffizienten weniger eingeschränkt. Geht alpha gegen Null sind die Koeffizienten kaum eingeschränkt, und das Ergebnis ähnelt der Linear-Regression:
###Code
ridge01 = Ridge(alpha=0.1).fit(X_train, y_train)
print("Training set score: {:.2f}".format(ridge01.score(X_train, y_train)))
print("Test set score: {:.2f}".format(ridge01.score(X_test, y_test)))
###Output
Training set score: 0.93
Test set score: 0.77
###Markdown
Für alpha=0.1 wird eine gute Generalisierungsleistung erzielt. Einen qualitativen Einblick, wie der alpha-Parameter das Modell verändert, liefert das coef_-Attribut von Modellen mit unterschiedlichen alpha-Werten. Ein höherer alpha-Wert bedeutet ein stärker eingeschränktes Modell; daher ist zu erwarten, dass die Einträge von coef_ bei einem hohen alpha-Wert kleinere Beträge haben als bei einem niedrigen alpha-Wert. Dies wird im Diagramm in Abbildung 2-12 bestätigt:
###Code
plt.plot(ridge.coef_, 's', label="Ridge alpha=1")
plt.plot(ridge10.coef_, '^', label="Ridge alpha=10")
plt.plot(ridge01.coef_, 'v', label="Ridge alpha=0.1")
plt.plot(lr.coef_, 'o', label="LinearRegression")
plt.xlabel("Coefficient index")
plt.ylabel("Coefficient magnitude")
plt.hlines(0, 0, len(lr.coef_))
plt.ylim(-25, 25)
plt.legend()
###Output
_____no_output_____
###Markdown
Abbildung 2-12. Vergleich von Koeffizientengrößen für die Ridge-Regression mit verschiedenen Werten der Alpha- und linearen Regression In der Abbildung 2-12 beschreibt die x-Achse die Einträge von coef_: x=0 zeigt den dem ersten Merkmal zugeordneten Koeffizienten, x=1 den dem zweiten Merkmal zugeordneten Koeffizienten usw. Die y-Achse beschreibt die Werte der Koeffizienten. Die Abbildun 2-12 zeigt, dass für alpha=10 die Koeffizienten meist zwischen etwa -3 und 3 liegen. Die Koeffizienten für das Ridge-Modell mit alpha=1 sind etwas größer. Die Punkte, die Alpha=0,1 zugeordnet werden können,sind noch größer. Viele der Punkte, die einer linearen Regression ohne Regularisierung entsprechen (was Alpha=0 wäre), haben eine Größenordnung, die außerhalb des Plots liegen. **Lasso Regression** Eine Alternative zu Ridge zur Regularisierung der linearen Regression ist Lasso. Wie bei der Ridge-Regression beschränkt auch die Verwendung von Lasso die Koeffizienten auf nahezu Null. Lasso wird auch als L1-Regulierung bezeichnet. Bei der Verwendung der Lasso-Regulierung, kommt es vor, dass einige Koeffizienten genau Null sind. Somit werden einige Features vom Modell vollständig ignoriert. Dies kann als eine Form der automatischen Merkmalsauswahl angesehen werden. Wenn einige Koeffizienten genau Null sind, ist ein Modell oft leichter zu interpretieren und kann die wichtigsten Merkmale Ihres Modells offenbaren. Nachfolgend wird Lasso auf den erweiterten Boston-Housing-Datensatz angewandt:
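Zuvor der Unterschied im Strafterm als Skizze: Lasso bestraft die Summe der Absolutbeträge der Koeffizienten (L1), Ridge die Quadratsumme (L2). Die folgenden Werte sind frei gewählte, hypothetische Beispielkoeffizienten:
###Code
import numpy as np

# assumed coefficient vector of a linear model
w = np.array([0.5, 0.0, -2.0, 0.0, 1.5])
alpha = 1.0

l1_penalty = alpha * np.sum(np.abs(w))   # used by Lasso
l2_penalty = alpha * np.sum(w ** 2)      # used by Ridge
print(l1_penalty, l2_penalty)
###Output
4.0 6.5
###Markdown
Nun die Anwendung von Lasso auf den erweiterten Boston-Housing-Datensatz: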
###Code
from sklearn.linear_model import Lasso
lasso = Lasso().fit(X_train, y_train)
print("Training set score (Lasso): {:.2f}".format(lasso.score(X_train, y_train)))
print("Test set score (Lasso): {:.2f}".format(lasso.score(X_test, y_test)))
print("Number of features used: {}".format(np.sum(lasso.coef_ != 0)))
###Output
Training set score (Lasso): 0.29
Test set score (Lasso): 0.21
Number of features used: 4
###Markdown
Die Modellleistung der Lasso-Regression ist mit R²-Scores von 0,29 und 0,21 auf dem Trainings- bzw. Testset schlecht. Dies deutet auf ein Underfitting hin. Es wurden außerdem lediglich 4 der 105 Features einbezogen. Ähnlich wie Ridge hat auch Lasso einen Regularisierungsparameter alpha, der steuert, wie stark die Koeffizienten gegen Null gedrückt werden. Auch in diesem Beispiel kann alpha verringert werden, um Underfitting zu vermeiden. Dazu muss jedoch auch die Standardeinstellung von max_iter (die maximale Anzahl der auszuführenden Iterationen) erhöht werden:
###Code
# we increase the default setting of "max_iter",
# otherwise the model would warn us that we should increase max_iter.
lasso001 = Lasso(alpha=0.01, max_iter=100000).fit(X_train, y_train)
print("Training set score: {:.2f}".format(lasso001.score(X_train, y_train)))
print("Test set score: {:.2f}".format(lasso001.score(X_test, y_test)))
print("Number of features used: {}".format(np.sum(lasso001.coef_ != 0)))
###Output
Training set score: 0.90
Test set score: 0.77
Number of features used: 33
###Markdown
Ein geringerer Wert für alpha erlaubt es, ein komplexeres Modell anzuwenden. Das komplexere Modell erzielt eine bessere Modellleistung auf dem Trainings- und Testset. Die Modellleistung ist geringfügig besser als mit Ridge. Außerdem werden lediglich 33 der 105 Features benötigt. Dies macht dieses Modell potenziell leichter verständlich. Bei einem zu klein gewählten alpha findet jedoch kaum noch Regularisierung statt und Overfitting tritt auf (mit Ergebnissen ähnlich der linearen Regression):
###Code
lasso00001 = Lasso(alpha=0.0001, max_iter=100000).fit(X_train, y_train)
print("Training set score: {:.2f}".format(lasso00001.score(X_train, y_train)))
print("Test set score: {:.2f}".format(lasso00001.score(X_test, y_test)))
print("Number of features used: {}".format(np.sum(lasso00001.coef_ != 0)))
###Output
Training set score: 0.95
Test set score: 0.64
Number of features used: 96
###Markdown
Auch hier können die Koeffizienten der verschiedenen Modelle visualisiert werden. Das Ergebnis ist in Abbildung 2-14 dargestellt:
###Code
plt.plot(lasso.coef_, 's', label="Lasso alpha=1")
plt.plot(lasso001.coef_, '^', label="Lasso alpha=0.01")
plt.plot(lasso00001.coef_, 'v', label="Lasso alpha=0.0001")
plt.plot(ridge01.coef_, 'o', label="Ridge alpha=0.1")
plt.legend(ncol=2, loc=(0, 1.05))
plt.ylim(-25, 25)
plt.xlabel("Coefficient index")
plt.ylabel("Coefficient magnitude")
###Output
_____no_output_____
###Markdown
Abbildung 2-14. Vergleich von Koeffizientengrößen für die Lasso-Regression mit verschiedenen Werten der Alpha- und Ridge-Regression Für alpha=1 ist ein Großteil Koeffizienten nahezu Null. Wird alpha auf 0,01 reduziert, erhalten wir die Lösung, die als grüne Punkte dargestellt wird, was dazu führt, dass die meisten Funktionen genau Null sind. Mit alpha=0.00001 ergibt sich ein unregularisiertes Modell, bei dem die meisten Koeffizienten groß und ungleich Null sind. Zum Vergleich: Die beste Lösung für das Ridge-Modell wird mit alpha=0,1 erzielt. Dieses hat eine vergleichbare Prädiktionsleistungwie das Lasso-Modell mit alpha=0,01. Bei Ridge sind jedoch alle Koeefizienten ungleich Null. In der Praxis ist die Ridge-Regression in der Regel die erste Wahl zwischen den beiden Regularisierungs-Modellen. Bei einer hohen Feature-Zahl, oder wenn ein leicht zu interpretierendes Modell gewünscht ist, kann Lasso jedoch die bessere Wahl sein. Scikit-learn bietet mit der ElasticNet-Klasse, eine hybride variante von Lasso und Ridge. In der Praxis funktioniert diese Kombination am besten, allerdings müssen hierbei zwei Parameter eingestellt werden (jeweils ein Parameter für die L1-Regulierung bzw. L2-Regulierung). **Lineare Modelle zur Klassifikation** Lineare Modelle werden auch häufig zur Klassifikation verwendet. Zunächst wird die binäre Klassifikation betrachtet. In diesem Fall wird eine Vorhersage mit der folgenden Formel durchgeführt: ŷ = w[0] * x[0] + w[1] * x[1] + ... + w[p] * x[p] + b > 0 Die Formel weist Ähnlichkeiten zur linearen Regressionsformelauf, enthält jedoch einen Grenzwert (>0). Wenn die Funktion ŷ ein Ergebnis kleiner als Null ergibt, prognostizieren der Klassifikator die Klasse -1. Ist das Ergebnis größer Null wird die Klasse +1 prädiziert. Dies ist das grundlegende Vorgehen für alle linearen Modelle zur Klassifikation. Auch hier gibt es viele verschiedene Möglichkeiten, die Koeffizienten (w) und den Achsabschnitt (b) zu berechnen.Bei linearen Modellen für die Regression ist die Ausgabe ŷ eine lineare Funktion der Features: Eine Linie, Ebene oder Hyperebene (in höheren Dimensionen). Bei linearen Modellen zur Klassifikation ist die Entscheidungsgrenze eine lineare Funktion des Eingangs. Ein (binärer) linearer Klassifikator ist ein Klassifikator, der zwei Klassen durch eine Linie, eine Ebene oder eine Hyperebene trennt. Es existieren viele Algorithmen zum Erlernen linearer Modelle. Diese Algorithmen unterscheiden sich alle in den folgenden zwei Punkten: - Die Berechnung der Modellgüte (Loss-function, Verlustfunktion)- Ob und welche Art von Regularisierung verwendet wird Aus mathematischen Gründen ist es nicht möglich, w und b so zu wählen, dass die Anzahl der Fehlklassifikationen minimiert wird. Die beiden gebräuchlichsten linearen Klassifikationsalgorithmen sind die logistische Regression (implementiert in linear_model.LogisticRegression) und lineare Support-Vector-Machines (lineare SVMs) (implementiert in svm.LinearSVC) Dabei steht SVC für Support-Vector-Classifier. Trotz ihres Namens ist LogisticRegression ein Klassifikationsalgorithmus und kein Regressionsalgorithmus und sollte nicht mit LinearRegression verwechselt werden. Wir können die Modelle LogisticRegression und LinearSVC auf den Forge-Datensatz anwenden und die Entscheidungsgrenze visualisieren, wie sie von den linearen Modellen gefunden wird (Abbildung 2-15):
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
X, y = mglearn.datasets.make_forge()
fig, axes = plt.subplots(1, 2, figsize=(10, 3))
for model, ax in zip([LinearSVC(), LogisticRegression()], axes):
clf = model.fit(X, y)
mglearn.plots.plot_2d_separator(clf, X, fill=False, eps=0.5,
ax=ax, alpha=.7)
mglearn.discrete_scatter(X[:, 0], X[:, 1], y, ax=ax)
ax.set_title("{}".format(clf.__class__.__name__))
ax.set_xlabel("Feature 0")
ax.set_ylabel("Feature 1")
axes[0].legend()
###Output
/home/ralfi/.local/lib/python3.6/site-packages/sklearn/svm/base.py:929: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.
"the number of iterations.", ConvergenceWarning)
/home/ralfi/.local/lib/python3.6/site-packages/sklearn/linear_model/logistic.py:432: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
###Markdown
Abbildung 2-15. Entscheidungsgrenzen einer linearen SVM und der logistischen Regression auf dem Forge-Datensatz mit den Standard-Parametern In obiger Abbildung wird das erste Feature des Forge-Datensatzes auf der x-Achse und das zweite Feature auf der y-Achse dargestellt. Die jeweils von LinearSVC bzw. LogisticRegression identifizierten Entscheidungsgrenzen sind als Linien in dem Plot dargestellt. Die Entscheidungsgrenzen trennen den Bereich der Klasse 1 von dem Bereich der Klasse 0. In anderen Worten: Jeder neue Datenpunkt, der oberhalb der schwarzen Linie liegt, wird vom jeweiligen Klassifikator in die Klasse 1 eingestuft, während Punkte, die unterhalb der schwarzen Linie liegen, der Klasse 0 zugeordnet werden. Die beiden Modelle haben ähnliche Entscheidungsgrenzen. Auffällig ist, dass beide Modelle zwei der Punkte falsch klassifiziert haben. Standardmäßig wenden beide Modelle eine L2-Regularisierung an. Zur Steuerung der Regularisierung existiert bei LogisticRegression und LinearSVC der Trade-off-Parameter C. Kleine Werte für C bedeuten dabei eine starke Regularisierung (und umgekehrt). Die Verwendung niedriger C-Werte führt dazu, dass sich die Klassifikatoren nach dem Großteil der Datenpunkte richten, wohingegen ein großer C-Wert die korrekte Klassifikation jedes einzelnen Datenpunkts betont. Der Zusammenhang ist für eine LinearSVC in Abbildung 2-16 dargestellt:
###Code
mglearn.plots.plot_linear_svc_regularization()
###Output
_____no_output_____
###Markdown
Abbildung 2-16. Entscheidungsgrenzen eines linearen SVM auf dem Forge-Datensatz für verschiedene Werte von C Im linken Plot (C=0,01) ergibt sich eine starke Regularisierung. Das stark regulierte Modell wählt eine relativ horizontale Linie und klassifiziert zwei Punkte falsch. Im mittleren Plot ist C etwas höher, was einer geringeren Regularisierung entspricht. Die Entscheidungsgrenze ist auf Grund der beiden falsch klassifizierten Punkte stärker geneigt.Im rechten Plot (C=1000) ist die Entscheidungsgrenze deutlich stärker geneigt. Das Modell klassifiziert nun alle Punkte in der Klasse 0 korrekt. Ein Punkt der Klasse 1 wird noch immer falsch klassifiziert. Es sei darauf hingewiesen, dass für das vorliegende Datenset keine fehlerfreie lineare Klassifikation möglich ist. Das Modell für C=1000 weist Overfitting auf. Ähnlich wie bei der Regression können lineare Modelle zur Klassifikation in niedrigdimensionalen Räumen sehr restriktiv erscheinen und nur geradlinige Entscheidungsgrenzen oder in der Form von Ebenen berücksichtigen. In höheren Dimensionen werden lineare Modelle zur Klassifikation sehr leistungsfähig. Dabei bekommt die Vermediung von Overfitting eine hohe Bedeutung. Im weiteren wird LinearLogistic im Detail anhand des Breast-Cancer-Datensatzes analysiert:
###Code
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
cancer.data, cancer.target, stratify=cancer.target, random_state=42)
logreg = LogisticRegression().fit(X_train, y_train)
print("Training set score: {:.3f}".format(logreg.score(X_train, y_train)))
print("Test set score: {:.3f}".format(logreg.score(X_test, y_test)))
###Output
Training set score: 0.953
Test set score: 0.958
###Markdown
Der Standardwert C=1 erzielt eine gute Modellleistung mit einer Genauigkeit von 95% sowohl auf dem Trainings- als auch auf dem Testset. Da die Leistung auf Trainings- und Testset jedoch sehr nahe beieinander liegt, ist es wahrscheinlich, dass Underfitting vorliegt. Weiter soll C erhöht werden, um ein flexibleres Modell zu erhalten:
###Code
logreg100 = LogisticRegression(C=100).fit(X_train, y_train)
print("Training set score: {:.3f}".format(logreg100.score(X_train, y_train)))
print("Test set score: {:.3f}".format(logreg100.score(X_test, y_test)))
###Output
Training set score: 0.967
Test set score: 0.965
###Markdown
Im Weiteren können der Abbildung 2-17 die Koeffizienten der Modelle für drei verschiedene Werte des Parameters C entnommen werden. Da bisher nur die Modelle für C=1 und C=100 trainiert wurden, wird im folgenden Code zusätzlich ein stark reguliertes Modell mit kleinem C trainiert:
###Code
# train an additional, strongly regularized model so that all three values of C can be compared
logreg001 = LogisticRegression(C=0.001).fit(X_train, y_train)
plt.plot(logreg.coef_.T, 'o', label="C=1")
plt.plot(logreg100.coef_.T, '^', label="C=100")
plt.plot(logreg001.coef_.T, 'v', label="C=0.001")
plt.xticks(range(cancer.data.shape[1]), cancer.feature_names, rotation=90)
plt.hlines(0, 0, cancer.data.shape[1])
plt.ylim(-5, 5)
plt.xlabel("Coefficient index")
plt.ylabel("Coefficient magnitude")
plt.legend()
###Output
_____no_output_____
###Markdown
Abbildung 2-17. Koeffizienten, die durch logistische Regression auf dem Breast-Cancer Datensatz für verschiedene Werte von C gelernt wurden Stärken, Schwächen und Parameter Der wichtigste Parameter von linearen Modellen ist der Regularisierungsparameter: - bei Regressionsmodellen: alpha - bei LinearSVC und LogisticRegression: C Große Werte für alpha oder kleine Werte für C bedeuten einfache Modelle. Insbesondere für die Regressionsmodelle ist die Einstellung dieser Parameter sehr wichtig. Normalerweise werden C und alpha auf einer logarithmischen Skala ermittelt. Weiter muss die Art der Regularisierung (L1 oder L2) gewählt werden. L1 sollte verwendet werden, wenn nur wenige der Features tatsächlich relevant sind oder die Interpretierbarkeit des Modells wichtig ist. Andernfalls sollte L2 verwendet werden. Lineare Modelle sind berechnungseffizient in Bezug auf das Training und die Prädiktion. Sie skalieren auf sehr große Datensätze und arbeiten gut mit spärlichen Daten. Nachteilig ist jedoch, dass die Nachvollziehbarkeit der Koeffizienten oft nicht gegeben ist. Dies gilt insbesondere, wenn der Datensatz korrelierte Features aufweist. In diesen Fällen können die Koeffizienten schwer interpretierbar sein. 2.4.3. Decision Trees Entscheidungsbäume (Decision Trees) sind weit verbreitete Modelle für Klassifikations- und Regressionsaufgaben. Im Wesentlichen lernen sie eine Hierarchie von Bedingungen (if/else-Abfragen), die zu einer finalen Entscheidung führt. **Decision Trees zur Klassifikation** Beispiel-Szenario: Es soll zwischen den folgenden vier Tier-Arten unterschieden werden: Bären, Falken, Pinguine und Delfine. Das Ziel ist es, die richtige Antwort zu finden, indem so wenige Bedingungen (if/else-Abfragen) wie möglich angewendet werden. Eine mögliche erste Bedingung ist die Frage, ob das Tier Federn hat. Diese Bedingung reduziert die Menge der möglichen Tiere auf zwei (Falken und Pinguine). Auf die erste Bedingung folgen weitere Bedingungen, jeweils eine für den Pfad der erfüllten und der nicht erfüllten Bedingung "Hat Federn". Um weiter zwischen Pinguinen und Falken zu unterscheiden, kann die Bedingung "Kann fliegen" angewendet werden. Hat ein Tier keine Federn, kann nachfolgend die Bedingung "Hat Flossen" abgefragt werden. Diese Reihe von Bedingungen kann als Entscheidungsbaum formuliert werden, wie in Abbildung 2-22 dargestellt. Abbildung 2-22. Ein Entscheidungsbaum zur Unterscheidung mehrerer Tiere Im Weiteren soll der Aufbau eines Entscheidungsbaums auf dem in Abbildung 2-23 dargestellten 2D-Klassifikationsdatensatz betrachtet werden. Der Datensatz heißt Two-Moons-Datensatz und besteht aus zwei Halbmondformen, wobei jede Klasse aus 75 Datenpunkten besteht. Einen Entscheidungsbaum zu bilden bedeutet, diejenige Abfolge von if/else-Bedingungen zu lernen, die mit der geringsten Anzahl an Bedingungen alle Daten klassifizieren kann. Solche Bedingungen werden häufig auch als Tests bezeichnet. In der Praxis sind Tests häufig relationale Bedingungen, bei denen kontinuierliche Werte verglichen werden (vgl. Abbildung 2-24). Abbildung 2-23. Zwei-Monde-Datensatz, auf dem der Entscheidungsbaum aufgebaut wird. Um einen Baum zu erstellen, evaluiert der Algorithmus alle möglichen Tests und findet denjenigen, der die größte Aussagekraft über die Zielvariable hat. Die Abbildung 2-24 zeigt den ersten Test. 
Die vertikale Aufteilung des Datensatzes bei x[1]=0,0596 resultiert in dem höchsten Informationsgewinn; sie trennt die Punkte der Klasse 0 optimal von den Punkten der Klasse 1. Der oberste Knoten des Entscheidungsbaums, auch Wurzel genannt, repräsentiert den gesamten Datensatz. Dieser besteht aus 75 Punkten der Klasse 0 und 75 Punkten der Klasse 1. Die Aufteilung erfolgt durch Prüfung, ob x[1] <= 0,0596, dargestellt durch eine schwarze Linie im Diagramm. Wenn der Test wahr ist, wird der Punkt dem linken Knoten zugewiesen, der 2 Punkte der Klasse 0 und 32 Punkte der Klasse 1 enthält. Andernfalls wird der Punkt dem rechten Knoten zugeordnet, der 48 Punkte der Klasse 0 und 18 Punkte der Klasse 1 enthält. Diese beiden Knoten entsprechen den in Abbildung 2-24 dargestellten oberen und unteren Bereichen. Obwohl die erste Aufteilung die beiden Klassen bereits gut trennt, enthält der untere Bereich noch Punkte der Klasse 0 und der obere Bereich noch Punkte der Klasse 1. Ein genaueres Modell kann erstellt werden, indem der Suchprozess nach dem optimalen Test in beiden Sub-Regionen wiederholt wird. Die Abbildung 2-25 zeigt, dass die aussagekräftigste nächste Aufteilung für den linken und rechten Bereich auf x[0] basiert. Abbildung 2-24. Entscheidungsgrenze des Baumes mit Tiefe 1 (links) und entsprechender Baum (rechts) Abbildung 2-25. Entscheidungsgrenze des Baumes mit Tiefe 2 (links) und entsprechender Entscheidungsbaum (rechts) Dieser rekursive Prozess liefert einen binären Entscheidungsbaum, wobei jeder Knoten einen Test enthält. Ein Test kann alternativ auch als eine Teilung der aktuell betrachteten Daten aufgefasst werden. Dies ergibt eine Sicht auf den Algorithmus als Aufbau einer hierarchischen Partition. Da jeder Test nur ein einzelnes Feature betrifft, weisen die Bereiche in der resultierenden Partition immer achsparallele Grenzen auf. Die rekursive Partitionierung der Daten wird so lange wiederholt, bis jeder Bereich in der Partition (jedes Blatt im Entscheidungsbaum) nur noch einen einzigen Zielwert (eine einzelne Klasse oder einen einzelnen Regressionswert) enthält. Ein Blatt des Baumes, das nur Datenpunkte des gleichen Zielwerts bzw. der gleichen Klasse enthält, wird als rein bezeichnet. Die endgültige Partitionierung für diesen Datensatz ist in Abbildung 2-26 dargestellt. Abbildung 2-26. Entscheidungsgrenze des Baumes mit der Tiefe 9 (links) und ein Teil des entsprechenden Baumes (rechts); der gesamte Baum ist ziemlich groß und schwer zu visualisieren. Eine Vorhersage für einen neuen Datenpunkt wird durchgeführt, indem überprüft wird, in welchem Bereich der Partition des Merkmalsraums der Punkt liegt. Vorhergesagt wird das Mehrheitsziel (oder das einzelne Ziel im Falle reiner Blätter) in diesem Bereich. Der Bereich kann gefunden werden, indem man den Baum von der Wurzel aus durchläuft und jeweils der abgehenden Kante des Knotens folgt, je nachdem, ob der Test erfüllt ist. Kontrolle der Komplexität von Entscheidungsbäumen Die rekursive Ausführung des Decision-Tree-Algorithmus bis zur Reinheit aller Blätter führt zu sehr komplexen Modellen. Das Vorhandensein von ausschließlich reinen Blättern bedeutet, dass der Entscheidungsbaum das Trainingsset perfekt abbildet und deshalb Overfitting auftritt. Das Overfitting ist links in Abbildung 2-26 zu sehen: Inmitten der Punkte der Klasse 0 existieren Regionen, die der Klasse 1 zugeordnet werden. Außerdem gibt es einen kleinen Streifen, der um den ganz rechts liegenden Punkt der Klasse 0 herum als Klasse 0 vorhergesagt wird. 
Entscheidungsbäume sind folglich anfällig für einzelne Ausreißerpunkte (Punkte, die weit von den anderen Punkten der Klasse entfernt sind). Es gibt zwei Strategien, um Overfitting zu verhindern: - das frühzeitige Stoppen der Baumerstellung (auch Pre-Pruning genannt) - das vollständige Erstellen des Baums und anschließende Entfernen oder Zusammenlegen von Knoten, die wenig Information enthalten (auch als Post-Pruning oder einfach nur Pruning bezeichnet) Mögliche Kriterien für das Pre-Pruning sind die Begrenzung der maximalen Tiefe des Baumes, die Begrenzung der maximalen Anzahl von Blättern oder die Anforderung einer minimalen Anzahl von Punkten in einem Knoten, damit dieser weiter geteilt wird. Entscheidungsbäume sind in scikit-learn in den Klassen DecisionTreeRegressor und DecisionTreeClassifier implementiert. Scikit-learn implementiert nur Pre-Pruning, nicht aber Post-Pruning. Der Effekt des Pre-Pruning soll nun im Detail auf dem Breast-Cancer-Datensatz betrachtet werden. Zunächst wird der Datensatz in ein Trainings- und ein Testset aufgeteilt. Danach wird das Modell mit Standardparametern trainiert. Hierbei wird die vollständige Entwicklung des Baums betrachtet (Wachsen des Baums, bis alle Blätter rein sind).
###Code
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
cancer.data, cancer.target, stratify=cancer.target, random_state=42)
tree = DecisionTreeClassifier(random_state=0)
tree.fit(X_train, y_train)
print("Accuracy on training set: {:.3f}".format(tree.score(X_train, y_train)))
print("Accuracy on test set: {:.3f}".format(tree.score(X_test, y_test)))
###Output
Accuracy on training set: 1.000
Accuracy on test set: 0.937
###Markdown
Wie erwartet beträgt die Genauigkeit auf dem Trainingsset 100%. Da alle Blätter rein sind, wurde der Baum so tief trainiert, dass er alle Labels der Trainingsdaten perfekt vorhersagen kann. Wenn die Tiefe eines Entscheidungsbaums nicht beschnitten (pruned) wird, kann der Baum beliebig tief und komplex werden. Nicht beschnittene Bäume sind daher anfällig für Overfitting und generalisieren nicht gut auf neue Daten. Im Weiteren soll Pre-Pruning auf den Baum angewendet werden, wodurch die Entwicklung des Baumes begrenzt wird. Eine Möglichkeit besteht darin, die Entwicklung des Baumes nach Erreichen einer definierten Tiefe zu stoppen. Hierzu wird der Parameter max_depth=4 gesetzt, d.h. es können nur vier aufeinanderfolgende Bedingungen abgefragt werden (vgl. Abbildungen 2-24 und 2-26). Die Begrenzung der Tiefe des Baumes verringert die Überanpassung. Dies führt zu einer geringeren Genauigkeit auf dem Trainingsset, aber zu einer Verbesserung der Modellleistung auf dem Testset:
###Code
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(X_train, y_train)
print("Accuracy on training set: {:.3f}".format(tree.score(X_train, y_train)))
print("Accuracy on test set: {:.3f}".format(tree.score(X_test, y_test)))
###Output
Accuracy on training set: 0.988
Accuracy on test set: 0.951
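###Markdown
Neben max_depth existieren weitere Pre-Pruning-Parameter wie max_leaf_nodes und min_samples_leaf. Die folgende Skizze zeigt deren Verwendung (die konkreten Werte sind frei gewählte Beispielwerte, keine optimierten Einstellungen):
###Code
# alternative pre-pruning: limit the number of leaves and require a minimum leaf size
tree_pruned = DecisionTreeClassifier(max_leaf_nodes=10, min_samples_leaf=5, random_state=0)
tree_pruned.fit(X_train, y_train)
print("Accuracy on training set: {:.3f}".format(tree_pruned.score(X_train, y_train)))
print("Accuracy on test set: {:.3f}".format(tree_pruned.score(X_test, y_test)))
###Output
_____no_output_____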
###Markdown
Analyse von Entscheidungsbäumen Ein Entscheidungsbaum kann mit der Funktion export_graphviz aus dem tree-Modul visualisiert werden. Dabei wird eine Datei im .dot-Dateiformat gespeichert, die anschließend visualisiert werden kann.
###Code
from sklearn.tree import export_graphviz
export_graphviz(tree, out_file="tree.dot", class_names=["malignant", "benign"], feature_names=cancer.feature_names, impurity=False, filled=True)
###Output
_____no_output_____
###Markdown
Die Datei kann mit dem Modul graphviz oder einem beliebigen Programm, das .dot-Dateien verarbeiten kann, gelesen und visualisiert werden (vgl. Abbildung 2-27).
###Code
import graphviz
with open("tree.dot") as f:
dot_graph = f.read()
graphviz.Source(dot_graph)
###Output
_____no_output_____
###Markdown
Abbildung 2-27. Visualisierung des Entscheidungsbaums, der auf dem Breast-Cancer-Datensatz gelernt wurde. Die Visualisierung des Baumes bietet eine gute Interpretierbarkeit; tiefere Bäume (eine Tiefe von 10 ist nicht ungewöhnlich) sind allerdings nur noch schwer zu erfassen. Wichtigkeit von Features in Decision Trees Um die Funktionsweise eines Entscheidungsbaums zusammenzufassen, wird die Wichtigkeit der Features (Feature Importance) ermittelt. Die Feature-Wichtigkeit gibt an, wie wichtig jedes Feature für die Entscheidung ist, die der Algorithmus trifft. Sie wird durch einen skalaren Wert zwischen 0 und 1 für jedes Merkmal angegeben, wobei 0 "nicht verwendet" und 1 "sagt das Ziel perfekt voraus" bedeutet. Die Summe aller Feature-Wichtigkeiten ergibt 1:
###Code
print("Feature importances:\n{}".format(tree.feature_importances_))
###Output
Feature importances:
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0.01019737 0.04839825
0. 0. 0.0024156 0. 0. 0.
0. 0. 0.72682851 0.0458159 0. 0.
0.0141577 0. 0.018188 0.1221132 0.01188548 0. ]
###Markdown
In der Abbildung 2-28 wird die Feature-Wichtigkeit der einzelnen Features abgebildet.
###Code
def plot_feature_importances_cancer(model):
n_features = cancer.data.shape[1]
plt.barh(range(n_features), model.feature_importances_, align='center')
plt.yticks(np.arange(n_features), cancer.feature_names)
plt.xlabel("Feature importance")
plt.ylabel("Feature")
plot_feature_importances_cancer(tree)
###Output
_____no_output_____
###Markdown
Abbildung 2-28. Feature-Wichtigkeiten, die aus einem Entscheidungsbaum berechnet wurden, der auf dem Breast-Cancer-Datensatz gelernt wurde. Das Merkmal, das im obersten Split verwendet wird ("worst radius"), ist bei weitem das wichtigste Merkmal. Dies bestätigt die Beobachtung bei der Analyse des Baumes, dass die erste Ebene die beiden Klassen bereits recht gut trennt. Wenn ein Merkmal jedoch eine geringe feature_importance hat, bedeutet das nicht, dass dieses Merkmal nicht informativ ist. Es bedeutet lediglich, dass das Merkmal nicht vom Algorithmus ausgewählt wurde. Dies ist beispielsweise der Fall, wenn ein anderes Merkmal die gleichen Informationen kodiert. Im Gegensatz zu den Koeffizienten in linearen Modellen sind Feature-Wichtigkeiten immer positiv und kodieren nicht, welche Klasse ein Feature anzeigt. **Decision Trees für die Regression** Alle genannten Eigenschaften der Klassifikations-Variante des Decision Trees gelten auch für die Regressions-Variante. Die Verwendung und Analyse von Regressionsbäumen ist sehr ähnlich wie bei Klassifikationsbäumen. Es gibt jedoch eine besondere Eigenschaft baumbasierter Modelle für die Regression, die nachfolgend hervorgehoben wird: Der DecisionTreeRegressor (und alle anderen baumbasierten Regressionsmodelle) ist nicht in der Lage zu extrapolieren, also Vorhersagen außerhalb des Bereichs der Trainingsdaten zu treffen. Diese Eigenschaft soll anhand eines historischen Datensatzes mit Preisen für Computerspeicher (RAM) untersucht werden. Die Abbildung 2-31 zeigt den Datensatz mit dem Datum auf der x-Achse und dem Preis eines Megabytes RAM im entsprechenden Jahr auf der y-Achse:
###Code
import pandas as pd
ram_prices = pd.read_csv("data/ram_price.csv")
plt.semilogy(ram_prices.date, ram_prices.price)
plt.xlabel("Year")
plt.ylabel("Price in $/Mbyte")
###Output
_____no_output_____
###Markdown
Abbildung 2-31. Historische Entwicklung des RAM-Preises, dargestellt auf einer logarithmischen Skala Bei der logarithmischen Darstellung erscheint die Beziehung linear und sollte daher, bis auf einige Unebenheiten, relativ einfach vorherzusagen sein. Es wird eine Prognose für die Jahre nach 2000 auf der Grundlage der historischen Daten bis zu diesem Zeitpunkt erstellt, wobei das Datum das einzige Feature ist. Es werden zwei einfache Modelle verglichen: ein DecisionTreeRegressor und eine LinearRegression. Die Preise werden logarithmisch transformiert, sodass die Beziehung annähernd linear ist. Dies macht für den DecisionTreeRegressor keinen Unterschied, für die LinearRegression jedoch einen großen. Nachdem die Modelle trainiert und Vorhersagen getroffen worden sind, wird die Exponentialfunktion angewandt, um die Logarithmus-Transformation rückgängig zu machen.
###Code
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression
# use historical data to forecast prices after the year 2000
data_train = ram_prices[ram_prices.date < 2000]
data_test = ram_prices[ram_prices.date >= 2000]
# predict prices based on date
X_train = data_train.date[:, np.newaxis]
# we use a log-transform to get a simpler relationship of data to target
y_train = np.log(data_train.price)
tree = DecisionTreeRegressor().fit(X_train, y_train)
linear_reg = LinearRegression().fit(X_train, y_train)
# predict on all data
X_all = ram_prices.date[:, np.newaxis]
pred_tree = tree.predict(X_all)
pred_lr = linear_reg.predict(X_all)
# undo log-transform
price_tree = np.exp(pred_tree)
price_lr = np.exp(pred_lr)
###Output
_____no_output_____
###Markdown
Abbildung 2-32. Vergleich von Vorhersagen des Entscheidungsbaums und des linearen Regressionsmodells mit ground-truth:
###Code
plt.semilogy(data_train.date, data_train.price, label="Training data")
plt.semilogy(data_test.date, data_test.price, label="Test data")
plt.semilogy(ram_prices.date, price_tree, label="Tree prediction")
plt.semilogy(ram_prices.date, price_lr, label="Linear prediction")
plt.legend()
###Output
_____no_output_____
###Markdown
Abbildung 2-32. Vergleich von Vorhersagen eines linearen Modells und Vorhersagen eines Regressionsbaums auf den RAM-Preisdaten Der Unterschied zwischen den Modellen ist eindeutig. Das lineare Modell liefert eine gute Prognose für die Testdaten (die Jahre nach 2000). Das Baummodell hingegen liefert perfekte Vorhersagen über die Trainingsdaten. Da die Komplexität des Baums nicht beschränkt wurde, lernt er die Trainingsdaten vollständig auswendig (Overfitting). Sobald jedoch der bekannte Datenbereich verlassen wird, prognostiziert das Modell einfach fortlaufend den letzten bekannten Punkt. Sämtliche baumbasierten Modelle sind nicht in der Lage zu extrapolieren, d.h. "neue" Vorhersagen für Daten außerhalb des Trainingsbereichs zu generieren. Stärken, Schwächen und Parameter Bei Entscheidungsbäumen existieren Parameter zur Steuerung der Modellkomplexität, also Pre-Pruning-Parameter, die die Entwicklung des Baumes vorzeitig abbrechen. Normalerweise wird eine der folgenden Strategien gewählt: - max_depth - max_leaf_nodes - min_samples_leaf Die Wahl einer dieser Strategien ist in der Regel ausreichend, um Overfitting zu vermeiden. Entscheidungsbäume haben gegenüber vielen der bisher diskutierten Algorithmen zwei Vorteile: Das resultierende Modell kann leicht visualisiert und interpretiert werden (zumindest für kleinere Bäume), und die Algorithmen sind völlig unabhängig von der Skalierung der Daten. Da jedes Merkmal separat verarbeitet wird und die möglichen Aufteilungen der Daten nicht von der Skalierung abhängen, ist für Entscheidungsbaumalgorithmen keine Vorverarbeitung wie Normalisierung oder Standardisierung der Merkmale erforderlich. Entscheidungsbäume funktionieren insbesondere dann gut, wenn Features mit völlig unterschiedlichen Skalen vorliegen. Ebenso ist eine Mischung aus binären und kontinuierlichen Features problemlos handhabbar. Der größte Nachteil von Entscheidungsbäumen ist, dass sie selbst bei Verwendung von Pre-Pruning zu Overfitting neigen und dann eine schlechte Generalisierungsleistung erzielen. Außerdem besitzen Decision Trees nicht die Fähigkeit der Extrapolation. 3. Vorbereitungsaufgabe Arbeiten Sie das Jupyter Notebook "Praktikumsskript Uebung 2" sorgfältig durch. Führen Sie sämtliche Code-Zellen aus (inklusive der unten stehenden Zelle). Schicken Sie das Notebook als .ipynb oder .pdf vor dem Uebungstermin an [email protected]
###Code
name = ""
print("Mein Name lautet: ", name)
###Output
Mein Name lautet:
|
Module 3/utf-8''Required_Code_MOD3_IntroPy.ipynb
|
###Markdown
Module 3 Required Coding Activity Introduction to Python (Unit 2) Fundamentals **This Activity is intended to be completed in the jupyter notebook, Required_Code_MOD3_IntroPy.ipynb, and then pasted into the assessment page that follows.** All course .ipynb Jupyter Notebooks are available from the project files download topic in Module 1, Section 1.This is an activity from the Jupyter Notebook **`Practice_MOD03_IntroPy.ipynb`** which you may have already completed.| Assignment Requirements | |:-------------------------------| | **NOTE:** This program requires **`print`** output and using code syntax used in module 3: **`if`**, **`input`**, **`def`**, **`return`**, **`for`**/**`in`** keywords, **`.lower()`** and **`.upper()`** method, **`.append`**, **`.pop`**, **`.split`** methods, **`range`** and **`len`** functions | Program: poem mixer This program takes string input and then prints out a mixed order version of the string **Program Parts** - **program flow** gathers the word list, modifies the case and order, and prints - get string input, input like a poem, verse or saying - split the string into a list of individual words - determine the length of the list - Loop the length of the list by index number and for each list index: - if a word is short (3 letters or less) make the word in the list lowercase - if a word is long (7 letters or more) make the word in the list uppercase - **call the word_mixer** function with the modified list - print the return value from the word_mixer function - **word_mixer** Function has 1 argument: an original list of string words, containing greater than 5 words and the function returns a new list. - sort the original list - create a new list - Loop while the list is longer than 5 words: - *in each loop pop a word from the sorted original list and append to the new list* - pop the word 5th from the end of the list and append to the new list - pop the first word in the list and append to the new list - pop the last word in the list and append to the new list - **return** the new list on exiting the loop **input example** *(beginning of William Blake poem, "The Fly")* >enter a saying or poem: `Little fly, Thy summer’s play My thoughtless hand Has brushed away. Am not I A fly like thee? Or art not thou A man like me?` **output example** >`or BRUSHED thy not Little thou me? SUMMER’S thee? like THOUGHTLESS play i a not hand a my fly am man`**alternative output** in each loop in the function that creates the new list add a "\\n" to the list ``` or BRUSHED thy not Little thou me? SUMMER’S thee? like THOUGHTLESS play i a not hand a my fly am man```
###Code
# [] create poem mixer
# [] copy and paste in edX assignment page
###Output
_____no_output_____
|
CodingBat-Python.ipynb
|
###Markdown
CodingBat - Code Practice **Warmup-1-> Sleep_in** The parameter weekday is True if it is a weekday, and the parameter vacation is True if we are on vacation. We sleep in if it is not a weekday or we're on vacation. Return True if we sleep in. **Test Data:** *sleep_in(False, False) → True* *sleep_in(True, False) → False* *sleep_in(False, True) → True*
###Code
def sleep_in(weekday, vacation):
if (not weekday and vacation):
return True
elif (weekday and not vacation):
return False
elif (not weekday and not vacation):
return True
else:
return True
sleep_in(False,False)
sleep_in(True,False)
sleep_in(False,True)
sleep_in(True,True)
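# Added note (not part of the exercise): the same truth table can be written in
# one line, reading straight from the problem statement ("we sleep in if it is
# not a weekday or we're on vacation").
def sleep_in_short(weekday, vacation):
    return not weekday or vacation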
###Output
_____no_output_____
###Markdown
**Warmup-1 > monkey_trouble**We have two monkeys, a and b, and the parameters a_smile and b_smile indicate if each is smiling. We are in trouble if they are both smiling or if neither of them is smiling. Return True if we are in trouble.**Test data:***monkey_trouble(True, True) → True**monkey_trouble(False, False) → True**monkey_trouble(True, False) → False*
###Code
def monkey_trouble(a_smile, b_smile):
if (a_smile and b_smile) or (not a_smile and not b_smile):
return True
else:
return False
monkey_trouble(True, True)
monkey_trouble(False, False)
monkey_trouble(True, False)
monkey_trouble(False, True)
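# Added note (not part of the exercise): an equivalent one-liner, since we are
# in trouble exactly when both smile values match.
def monkey_trouble_short(a_smile, b_smile):
    return a_smile == b_smile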
###Output
_____no_output_____
###Markdown
**Warmup-1 > sum_double**Given two int values, return their sum. Unless the two values are the same, then return double their sum.sum_double(1, 2) → 3sum_double(3, 2) → 5sum_double(2, 2) → 8
###Code
def sum_double(a, b):
# Store the sum in a local variable
sum = a + b
# Double it if a and b are the same
if a == b:
sum = sum * 2
return sum
sum_double(1,2)
sum_double(3, 2)
sum_double(2, 2)
###Output
_____no_output_____
|
deep_learning_place_recognition_project.ipynb
|
###Markdown
Deep Learning Features at Scale for Visual Place Recognition Implementation in python using keras
###Code
#Importing Libraries
from PIL import Image # used for loading images
import numpy as np
import os # used for navigating to image path
from random import shuffle
import matplotlib.pyplot as plt
import tensorflow as tf
# keras imports for the dataset and building our neural network
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPool2D
from keras.layers.normalization import BatchNormalization
from keras.utils import np_utils
# splitting function from sklearn
from sklearn.model_selection import train_test_split
###Output
_____no_output_____
###Markdown
Function to load image dataset
###Code
DIR = 'C:../Pictures/smalldataset/'
img_type = os.listdir(DIR)
IMG_SIZE = 227
def load_data():
data = []
for typ in img_type:
path_typ = os.path.join(DIR, typ)
for img in os.listdir(path_typ):
label = typ
path = os.path.join(path_typ, img)
img = Image.open(path)
img = img.convert('L')
img = img.resize((IMG_SIZE, IMG_SIZE), Image.ANTIALIAS)
data.append([np.array(img), label])
shuffle(data)
return data
data = load_data()
plt.imshow(data[13][0], cmap = 'gist_gray')
###Output
_____no_output_____
###Markdown
Splitting the data set into train and test sets
###Code
X = np.array([i[0] for i in data]).reshape(-1, IMG_SIZE, IMG_SIZE, 1)
y = np.array([i[1] for i in data])
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size=0.2,
random_state=42)
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
###Output
(1024, 227, 227, 1)
(256, 227, 227, 1)
(1024,)
(256,)
###Markdown
Normalizing and one-hot encoding the data set
###Code
# normalizing the data to help with the training
#X_train /= 255
#X_test /= 255
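# Added note: the in-place division above is commented out because it fails on
# the integer (uint8) image arrays; if normalization is re-enabled, cast first:
# X_train = X_train.astype('float32') / 255.0
# X_test = X_test.astype('float32') / 255.0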
# one-hot encoding using keras' numpy-related utilities
#n_classes = 2543
n_classes = 3
print("Shape before one-hot encoding: ", y_train.shape)
y_train = np_utils.to_categorical(y_train, n_classes)
print("Shape before one-hot encoding: ", y_test.shape)
y_test = np_utils.to_categorical(y_test, n_classes)
# building a linear stack of layers with the sequential model
model = Sequential()
# convolutional layer 1
model.add(Conv2D(96, kernel_size=(11,11), strides=4, activation='relu', input_shape=(227, 227, 1)))
model.add(MaxPool2D(pool_size=(3,3)))
model.add(BatchNormalization())
# convolutional layer 2
model.add(Conv2D(256, kernel_size=(5, 5), strides=(1, 1), padding='same', activation='relu'))
model.add(MaxPool2D(pool_size=(3,3)))
#model.add(Dropout(0.25))
model.add(BatchNormalization())
# convolutional layer 3
model.add(Conv2D(384, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu'))
#model.add(MaxPool2D(pool_size=(3,3)))
#model.add(Dropout(0.25))
model.add(BatchNormalization())
# convolutional layer 4
model.add(Conv2D(384, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu'))
#model.add(MaxPool2D(pool_size=(3,3)))
#model.add(Dropout(0.25))
model.add(BatchNormalization())
# convolutional layer 5
model.add(Conv2D(256, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu'))
#model.add(MaxPool2D(pool_size=(3,3)))
#model.add(Dropout(0.25))
model.add(BatchNormalization())
# convolutional layer 6
model.add(Conv2D(256, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu'))
model.add(MaxPool2D(pool_size=(3,3)))
#model.add(Dropout(0.25))
model.add(BatchNormalization())
# flatten output of conv
model.add(Flatten())
# hidden layer
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
# output layer
model.add(Dense(3, activation='softmax'))
#model.add(Dropout(0.3))
#Optimizer used is sgd
opt = keras.optimizers.SGD(lr=0.01, momentum=0.9, decay= 0.005, nesterov=False)
# compiling the sequential model
model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer=opt)
# training the model for 3 epochs
history = model.fit(X_train, y_train, batch_size=50, epochs=3)
###Output
Epoch 1/3
1024/1024 [==============================] - 47s 46ms/step - loss: 0.0739 - accuracy: 0.9688
Epoch 2/3
1024/1024 [==============================] - 50s 49ms/step - loss: 0.0249 - accuracy: 0.9951
Epoch 3/3
1024/1024 [==============================] - 42s 41ms/step - loss: 0.0101 - accuracy: 0.9961
###Markdown
Model Evaluation
###Code
loss, acc = model.evaluate(X_test,y_test, verbose = 0)
print(acc * 100)
###Output
_____no_output_____
|
Convolutional_Filters_Edge_Detection/6_2. Hough circles, agriculture.ipynb
|
###Markdown
Hough Circle Detection
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
image = cv2.imread('images/round_farms.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
# Gray and blur
gray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)
gray_blur = cv2.GaussianBlur(gray, (3, 3), 0)
plt.imshow(gray_blur, cmap='gray')
###Output
_____no_output_____
###Markdown
HoughCircles function`HoughCircles` takes in a few things as its arguments:* an input image, detection method (Hough gradient), resolution factor between the detection and image (1),* minDist - the minimum distance between circles* param1 - the higher value for performing Canny edge detection* param2 - threshold for circle detection, a smaller value --> more circles will be detected* min/max radius for detected circlesThe variables you should change are the last two: min/max radius for detected circles. Take a look at the image above and estimate how many pixels the average circle is in diameter; use this estimate to provide values for the min/max arguments. You may also want to see what happens if you change minDist.
###Code
# for drawing circles on
circles_im = np.copy(image)
## TODO: use HoughCircles to detect circles
# right now there are too many, large circles being detected
# try changing the value of maxRadius, minRadius, and minDist
circles = cv2.HoughCircles(gray_blur, cv2.HOUGH_GRADIENT, 1,
minDist=45,
param1=70,
param2=11,
minRadius=20,
maxRadius=40)
# convert circles into expected type
circles = np.uint16(np.around(circles))
# draw each one
for i in circles[0,:]:
# draw the outer circle
cv2.circle(circles_im,(i[0],i[1]),i[2],(0,255,0),2)
# draw the center of the circle
cv2.circle(circles_im,(i[0],i[1]),2,(0,0,255),3)
plt.imshow(circles_im)
print('Circles shape: ', circles.shape)
###Output
Circles shape: (1, 165, 3)
|
notebooks/examples/widgets.ipynb
|
###Markdown
Widgets Code Exercise Markdown syntax```p1 = 0.01p3 = 3 * p1**2 * (1-p1) + p1**3 probability of 2 or 3 errorsprint('Probability of a single reply being garbled: {}'.format(p1))print('Probability of a the majority of three replies being garbled: {:.4f}'.format(p3))`````` this is a grader codep1 = 0.01p3 = 3 * p1**2 * (1-p1) + p1**3 probability of 2 or 3 errorsprint('Probability of a single reply being garbled: {}'.format(p1))print('Probability of a the majority of three replies being garbled: {:.4f}'.format(p3))``` Notebook code cell
###Code
# this is a grader code
P1 = 0.01
P3 = 3 * P1**2 * (1-P1) + P1**3 # probability of 2 or 3 errors
print(f'Single reply being garbled: {P1}')
print(f'The majority of three replies being garbled: {P3:.4f}')
###Output
_____no_output_____
###Markdown
Widgets Circuit sandbox Circuit sandbox q-circuit-sandbox-widget(goal="circuit-sandbox") .availableGates H X Z Y T S .instructions Text here establishing what this widget is, how the user can use it, and what we intend them to use it for. Lorem ipsum dolor sit amet, consectetur adipiscing elit, lorem ipsum dolor sit amet, consectetur. .explanation Text here to explain the difference between the matrix view and the state vector view Code Exercise Markdown syntax```p1 = 0.01p3 = 3 * p1**2 * (1-p1) + p1**3 probability of 2 or 3 errorsprint('Probability of a single reply being garbled: {}'.format(p1))print('Probability of a the majority of three replies being garbled: {:.4f}'.format(p3))`````` this is a grader codep1 = 0.01p3 = 3 * p1**2 * (1-p1) + p1**3 probability of 2 or 3 errorsprint('Probability of a single reply being garbled: {}'.format(p1))print('Probability of a the majority of three replies being garbled: {:.4f}'.format(p3))``` Notebook code cell
###Code
# this is a grader code
P1 = 0.01
P3 = 3 * P1**2 * (1-P1) + P1**3 # probability of 2 or 3 errors
print(f'Single reply being garbled: {P1}')
print(f'The majority of three replies being garbled: {P3:.4f}')
###Output
_____no_output_____
###Markdown
Widgets Code Exercise Markdown syntax```p1 = 0.01p3 = 3 * p1**2 * (1-p1) + p1**3 probability of 2 or 3 errorsprint('Probability of a single reply being garbled: {}'.format(p1))print('Probability of a the majority of three replies being garbled: {:.4f}'.format(p3))`````` this is a grader codep1 = 0.01p3 = 3 * p1**2 * (1-p1) + p1**3 probability of 2 or 3 errorsprint('Probability of a single reply being garbled: {}'.format(p1))print('Probability of a the majority of three replies being garbled: {:.4f}'.format(p3))``` Notebook code cell
###Code
# this is a grader code
P1 = 0.01
P3 = 3 * P1**2 * (1-P1) + P1**3 # probability of 2 or 3 errors
print(f'Single reply being garbled: {P1}')
print(f'The majority of three replies being garbled: {P3:.4f}')
###Output
_____no_output_____
|
notebooks/.ipynb_checkpoints/SED_MAKER-checkpoint.ipynb
|
###Markdown
Initial Setup
###Code
# Initial setup...
import numpy as np
import pandas as pd
from astropy.io import fits
from astropy.table import Table
import fitsio
from scipy import interpolate
import glob
import math
import os
import sys
import os.path
from pathlib import Path
import h5py
import bisect
from astropy.cosmology import FlatLambdaCDM
import argparse
import os.path
from os import path
from astropy.io.fits.verify import VerifyWarning
import warnings
warnings.simplefilter('ignore', category=VerifyWarning)
c = 2.99e10 # speed of light in cm/sec...
# v. 2.6.1 (which works with python 2.7) installed from
# https://imageio.readthedocs.io/en/stable/installation.html
import imageio
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
User input
###Code
# Redshift of kilonova...
z_kn = 0.0099
# If debug=True, just plot/calculate synthetic mags for the 4.00days past merger KN
debug=False
#These are the Kasen models we are using.
#This one is fblue
kasen2 ='/Users/mwiesner/Kasen_Kilonova_Models_2017/systematic_kilonova_model_grid/knova_d1_n10_m0.025_vk0.30_fd1.0_Xlan1e-4.0.h5'
#This one is fred
kasen1 = '/Users/mwiesner/Kasen_Kilonova_Models_2017/kilonova_models/knova_d1_n10_m0.040_vk0.15_Xlan1e-1.5.h5'
kasen2_check = True
#fblue = '/Users/mwiesner/KDC/kasen/knova_d1_n10_m0.030_vk0.05_fd1.0_Xlan1e-3.0.h5'
# Kasen (2017) models directory name:
#kasen_dirname='/data/des40.a/data/dtucker/DESGW_analysis/Kasen_Kilonova_Models_2017/kilonova_models'
#kasen_dirname= kasen_filename
#kasen_filename = "/Users/mwiesner/KDC/notebooks/output/blue_red.csv"
#fblue='/Users/mwiesner/Kasen_Kilonova_Models_2017/systematic_kilonova_model_grid/knova_d1_n10_m0.025_vk0.30_fd1.0_Xlan1e-4.0.h5'
#fred = '/Users/mwiesner/Kasen_Kilonova_Models_2017/kilonova_models/knova_d1_n10_m0.040_vk0.15_Xlan1e-1.5.h5'
#fblue='/Users/mwiesner/Kasen_Kilonova_Models_2017/systematic_kilonova_model_grid/knova_d1_n10_m0.025_vk0.30_fd1.0_Xlan1e-4.0.h5'
#fred = '/Users/mwiesner/Kasen_Kilonova_Models_2017/kilonova_models/knova_d1_n10_m0.040_vk0.15_Xlan1e-1.5.h5'
#a 'blue' kilonova (light r-process ejecta with M = 0.025 M_sun, vk = 0.3c
#and Xlan = 1e-4
#) plus a 'red' kilonova (heavy r-process ejecta with M = 0.04 M_sun,
#vk = 0.15c, and Xlan = 1e-1.5).
#Exponent of inner density profile = 1
#Exponent of outer density profile = 10
#Ejecta mass is 0.030 solar masses
#kinetic velocity is 0.05c
#
#Lanthanide fraction 3.0
#naming convention
#d = exponent of inner density profile
#n - exponent of
#Units of the spectra are ergs/sec/Hz
#divide by 4piD^2 to get an observed flux (ergs/sec/Hz/cm^2)
#kasen_pathname=os.path.join(kasen_dirname,kasen_filename)
# CSV file containing bandpass:
# (Need to add LSST passbands...)
bandpassFile = '/Users/mwiesner/KDC/kasen/DES_STD_BANDPASSES_Y3A2_ugrizY.test.csv'
#bandpassFile = '/Users/mwiesner/KDC/notebooks/input/LSST.dat'
# Comma-separated list with no spaces of passbands to be used...
bandList = ['u','g','r','i','z','Y']
# bandpass color palette:
bandpassColors_dict = {'u':'#56b4e9',
'g':'#008060',
'r':'#ff4000',
'i':'#850000',
'z':'#6600cc',
'y':'#000000',
'Y':'#000000'
}
# Verbosity level of output to screen (0,1,2,...)
verbose = 0
# Output directory...
output_dirname = '/Users/mwiesner/KDC/kasen'
sed_dirname = '/Users/mwiesner/KDC/kasen/SEDs'
png_dirname = '/Users/mwiesner/KDC/kasen/PNGs'
###Output
_____no_output_____
###Markdown
Synthetic Photometry MethodsBased on /data/des40.a/data/dtucker/Y6A1_abscal/calc_abmag.py, which itself made use of some variations of code developed by Keith Bechtol and Eli Rykoff for DES Y3 interstellar reddening corrections...
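For reference (this equation is added here and is not part of the original script), `calc_abmag_value` below evaluates the wavelength form of the Fukugita et al. (1996) AB magnitude,

$$ m_{\rm AB} = -2.5\,\log_{10}\left(\frac{\int f_{\nu}(\lambda)\,R(\lambda)\,d\lambda/\lambda}{\int R(\lambda)\,d\lambda/\lambda}\right) - 48.60 , $$

which the code approximates with discrete sums on a 1 Angstrom wavelength grid.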
###Code
# Create an argparse Namespace and run "calc_abmag(args)"...
# (e.g., run_calc_abmag('u,g,r,i,z,Y', bandpassFile, tempFile, 'LAMBDA', 'Flam', 'Flam', verbose) )
def run_calc_abmag(bandList, bandpassFile, spectrumFile, colname_wave, colname_flam, flux_type, verbose):
args = argparse.Namespace(bandList = bandList,
bandpassFile = bandpassFile,
spectrumFile = spectrumFile,
colname_wave = colname_wave,
colname_flam = colname_flam,
flux_type = flux_type,
verbose = verbose)
if args.verbose > 0: print(args)
status = calc_abmag(args)
return status
#parser.add_argument('--bandList', help='comma-separated list with no spaces', default='g,r,i,z,Y')
#parser.add_argument('--bandpassFile', help='name of the input plan file', default='DES_STD_BANDPASSES_Y3A2_ugrizY.test.csv')
#parser.add_argument('--spectrumFile', help='name of the input plan file (can be CSV file or a synphot-style FITS file')
#parser.add_argument('--colname_wave', help='name of the wavelength column (in case of a CSV spectrumFile)', default='wave')
#parser.add_argument('--colname_flux', help='name of the flux column (in case of a CSV spectrumFile)', default='flux')
#parser.add_argument('--flux_type', help='type of flux (Flam [ergs/sec/cm**2/Angstrom] or Fnu [ergs/sec/cm**2/Hz])? ', default='Flam')
#parser.add_argument('--verbose', help='verbosity level of output to screen (0,1,2,...)', default=0, type=int)
# Main method for calculating synthetic AB magnitudes...
def calc_abmag(args):
# Extract the bandList...
bandList = args.bandList
bandList = bandList.split(',')
if args.verbose > 0:
print('bandList: ', bandList)
# Extract the name of the bandpassFile...
bandpassFile = args.bandpassFile
if os.path.isfile(bandpassFile)==False:
print("""bandpassFile %s does not exist...""" % (bandpassFile))
print('Returning with error code 1 now...')
return 1
if args.verbose > 0:
print('bandpassFile: ', bandpassFile)
# Extract the name of the spectrum file...
spectrumFile = args.spectrumFile
if os.path.isfile(spectrumFile)==False:
print("""spectrumFile %s does not exist...""" % (spectrumFile))
print('Returning with error code 1 now...')
return 1
if args.verbose > 0:
print('spectrumFile: ', spectrumFile)
# Try to determine spectrumFile type (FITS file or CSV file)...
spectrumType = 'Unknown'
try:
hdulist = fits.open(spectrumFile)
hdulist.close()
spectrumType = 'FITS'
except IOError:
if args.verbose > 2:
print("""spectrumFile %s is not a FITS file...""" % (spectrumFile))
try:
df_test = pd.read_csv(spectrumFile)
spectrumType = 'CSV'
except IOError:
if args.verbose > 2:
print("""spectrumFile %s is not a CSV file...""" % (spectrumFile))
# Read in spectrumFile and create a SciPy interpolated function of the spectrum...
if spectrumType == 'FITS':
flux,wave_lo,wave_hi = getSpectrumSynphot(spectrumFile, fluxFactor=1.0)
elif spectrumType == 'CSV':
flux,wave_lo,wave_hi = getSpectrumCSV(spectrumFile,
colname_wave=args.colname_wave, colname_flam=args.colname_flam,
fluxFactor=1.0)
else:
print("""Spectrum file %s is of unknown type...""" % (spectrumFile))
print('Returning with error code 1 now...')
return 1
# Read the bandpassFile into a Pandas DataFrame...
df_resp = pd.read_csv(bandpassFile, comment='#')
# Check to make sure the spectrumFile covers at least the same wavelength range
# as the bandpassFile...
if ( (wave_lo > df_resp['LAMBDA'].min()) or (wave_hi < df_resp['LAMBDA'].max()) ):
print("""WARNING: %s does not cover the full wavelength range of %s""" % (spectrumFile, bandpassFile))
print('Returning with error code 1 now...')
return 1
# Create wavelength_array and flux_array...
delta_wavelength = 1.0 # angstroms
wavelength_array = np.arange(wave_lo, wave_hi, delta_wavelength)
flux_array = flux(wavelength_array)
# If needed, convert flux from flam to fnu...
if args.flux_type == 'Fnu':
fnu_array = flux_array
elif args.flux_type == 'Flam':
c_kms = 299792.5 # speed of light in km/s
c_ms = 1000.*c_kms # speed of light in m/s
c_as = (1.000e10)*c_ms # speed of light in Angstroms/sec
fnu_array = flux_array * wavelength_array * wavelength_array / c_as
else:
print("""Flux type %s is unknown...""" % (args.flux_type))
print('Returning with error code 1 now...')
return 1
# Print out header...
outputLine = ''
for band in bandList:
outputLine = """%s,%s""" % (outputLine, band)
# print(outputLine[1:])
outputLine = ''
for band in bandList:
response = interpolate.interp1d(df_resp['LAMBDA'], df_resp[band],
bounds_error=False, fill_value=0.,
kind='linear')
response_array = response(wavelength_array)
try:
abmag = calc_abmag_value(wavelength_array, response_array, fnu_array)
except Exception:
abmag = -9999.
outputLine = """%s,%.4f""" % (outputLine, abmag)
# print(outputLine[1:])
f.write(str(outputLine[1:])+","+"{:.2f}".format(t_kn)+"\n")
return 0
# Calculate abmag using the wavelength version of the Fukugita et al. (1996) equation...
def calc_abmag_value(wavelength_array, response_array, fnu_array):
# Calculate the abmag...
numerator = np.sum(fnu_array * response_array / wavelength_array)
denominator = np.sum(response_array / wavelength_array)
abmag_value = -2.5*math.log10(numerator/denominator) - 48.60
return abmag_value
# Return a SciPy interpolation function of a Synphot-style FITS spectrum...
# (Based on code from Keith Bechtol's synthesize_locus.py.)
# Unless otherwise noted, fluxes are assumed to be Flam and wavelengths
# are assumed to be in Angstroms...
def getSpectrumSynphot(synphotFileName, fluxFactor=1.0):
try:
hdulist = fits.open(synphotFileName)
t = Table.read(hdulist[1])
hdulist.close()
except IOError:
print("""Could not read %s""" % synphotFileName)
sys.exit(1)
wave = t['WAVELENGTH'].data.tolist()
wave_lo = min(wave)
wave_hi = max(wave)
t['FLUX'] = fluxFactor*t['FLUX']
    flam = t['FLUX'].data.tolist()
data = {'wavelength': wave, 'flux': flam}
f = interpolate.interp1d(data['wavelength'], data['flux'],
bounds_error=True,
kind='linear')
return f,wave_lo,wave_hi
# Return a SciPy interpolation function of a CSV-style spectrum...
# (Based on code from Keith Bechtol's synthesize_locus.py.)
# Unless otherwise noted, fluxes are assumed to be Flam and wavelengths
# are assumed to be in Angstroms...
def getSpectrumCSV(csvFileName, colname_wave='wave', colname_flam='flux', fluxFactor=1.0):
try:
df = pd.read_csv(csvFileName)
except IOError:
print("""Could not read %s""" % csvFileName)
sys.exit(1)
columnNameList = df.columns.tolist()
if colname_wave not in columnNameList:
print("""Column %s not in %s""" % (colname_wave, csvFileName))
sys.exit(1)
if colname_flam not in columnNameList:
print("""Column %s not in %s""" % (colname_wave, csvFileName))
sys.exit(1)
wave = df[colname_wave].tolist()
wave_lo = min(wave)
wave_hi = max(wave)
df[colname_flam] = fluxFactor*df[colname_flam]
flam = df[colname_flam].tolist()
data = {'wavelength': wave, 'flux': flam}
f = interpolate.interp1d(data['wavelength'], data['flux'],
bounds_error=True,
kind='linear')
return f,wave_lo,wave_hi
###Output
_____no_output_____
###Markdown
Setting up the cosmology
###Code
# Redshift to luminosity distance...
# Default values of H0 and Omega0 are from Bennett et al. (2014)...
def zToDlum(z, H0=69.6, Om0=0.286):
from astropy.cosmology import FlatLambdaCDM
cosmo = FlatLambdaCDM(H0=H0, Om0=Om0)
# comoving distance...
Dcom = cosmo.comoving_distance(z)
# luminosity distance...
Dlum = (1.+z)*Dcom
return Dlum
# Mpc_to_cm...
def Mpc_to_cm(Dmpc):
Dcm = Dmpc*1.00e6*3.086e+18
return Dcm
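# Quick illustrative check (added, not in the original notebook): the luminosity
# distance for the kilonova redshift z_kn defined above.
print('Luminosity distance for z_kn:', zToDlum(z_kn))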
###Output
_____no_output_____
###Markdown
Read in bandpass file
###Code
df_band = pd.read_csv(bandpassFile, comment='#')
###Output
_____no_output_____
###Markdown
Read in and Plot Kilonova SED...
###Code
#This turns on plotting if set equal to True
plotter = False
Dlum = zToDlum(z_kn)
Dlum_cm = Mpc_to_cm(Dlum)
#This is the output light curve file
myfile = Path("output/light_curve.txt")
#This is the output file that contains the time after GW event
#myfile2 = Path("output/time.txt")
#I did this so i
if myfile.exists():
!rm output/light_curve.txt
#if myfile2.exists():
# !rm output/time.txt
f = open("output/light_curve.txt", "w")
f.write('u,g,r,i,z,y,time\n')
#f2 = open("output/time.txt", "w")
#f2.write("day_after \n")
# open Kasen model file
#fin = h5py.File(kasen_pathname,'r')
#RED FILE
# open model file
fin = h5py.File(kasen1,'r')
# frequency in Hz
nu = np.array(fin['nu'],dtype='d')
# array of time in seconds
times = np.array(fin['time'])
# convert time to days
times = times/3600.0/24.0
# specific luminosity (ergs/s/Hz)
# this is a 2D array, Lnu[times][nu]
Lnu_all = np.array(fin['Lnu'],dtype='d')
#_____________________________________________
#I put this trigger here so if there is only 1 Kasen model, it still runs
if kasen2_check == True:
#BLUE FILE
# open model file
fin2 = h5py.File(kasen2,'r')
# frequency in Hz
nu2 = np.array(fin2['nu'],dtype='d')
# array of time in seconds
times2 = np.array(fin2['time'])
# convert time to days
times2 = times2/3600.0/24.0
# specific luminosity (ergs/s/Hz)
# this is a 2D array, Lnu[times][nu]
Lnu_all2 = np.array(fin2['Lnu'],dtype='d')
fine = np.where((times > 0.05) & (times < 9.05))
timer = times[fine]
for t_kn in timer:
# if t_kn < 0.00: continue
if debug == True: t_kn = 4.0
# print('working on ',"{:.2f}".format(t_kn))
# f2.write(str(t_kn)+" \n")
try:
# index corresponding to t_kn
it = bisect.bisect(times,t_kn)
# spectrum at this epoch
Lnu = Lnu_all[it,:]
Lnu2 = Lnu_all2[it,:]
except:
print("Index out of bounds?")
continue
# Convert to in Llam (ergs/s/Angstrom)...
lam0 = c/nu*1e8 # rest-frame wavelength
lam = lam0*(1+z_kn) # redshifted wavelength
Llam = Lnu*nu**2.0/c/1e8 # Llam
#Makes a dataframe of rest-frame wavelength, redshifted wavelength and flux
df_model_red = pd.DataFrame({'LAMBDA0':lam0, 'LAMBDA':lam, 'Llam':Llam})
#This is where we add the second model and combine it with the first, if there is a second model.
if (kasen2_check == True): # and (t_kn < 2.05):
print('working on ',"{:.2f}".format(t_kn), ' with blue and red models')
# Convert to in Llam (ergs/s/Angstrom)...
lam02 = c/nu2*1e8 # rest-frame wavelength
lam2 = lam02*(1+z_kn) # redshifted wavelength
Llam2 = Lnu2*nu2**2.0/c/1e8 # Llam
df_model_blue = pd.DataFrame({'LAMBDA0':lam02, 'LAMBDA':lam2, 'Llam':Llam2})
df_model_comb = df_model_blue.copy()
df_model_comb['Llam'] = df_model_blue['Llam'] + df_model_red['Llam']
df_model_combined = pd.DataFrame({"LAMBDA":df_model_red['LAMBDA'], "Llam":df_model_comb['Llam']})
else:
df_model_combined = df_model_red
print('working on ',"{:.2f}".format(t_kn), ' with red model only')
# bothfile = """test/both_"""+str(t_kn)
# redfile = """test/red_"""+str(t_kn)
# bluefile = """test/blue_"""+str(t_kn)
# df_model_combined.to_csv(bothfile)
# df_model_red.to_csv(redfile)
# df_model_blue.to_csv(bluefile)
# df_model_combined.plot('LAMBDA','Llam', color='blue', xlim=[3000.,10000.], title=str(t_kn))
# plt.show()
# plt.annotate('t = '+str(t_kn)+' day', (28000, this), horizontalalignment='right', fontsize=22)
#print min(lam), max(lam)
wavelength_array = np.arange(min(lam0), max(lam0), 1.0)
spec_flux_model = interpolate.interp1d(df_model_combined.LAMBDA, df_model_combined.Llam,bounds_error=False, fill_value=0.,kind='linear')
spec_flux_model_array = spec_flux_model(wavelength_array)
df_model_new = pd.DataFrame({'LAMBDA':wavelength_array, 'Llam':spec_flux_model_array})
#norm = df_model_new['Llam'].median()
norm = df_model_new[( (df_model_new.LAMBDA > 3000.) & (df_model_new.LAMBDA < 12000.) )].Llam.max()
df_model_new['normLlam'] = df_model_new['Llam'] / norm
# spec_flux_model = interpolate.interp1d(df_model.LAMBDA0, df_model.Llam,bounds_error=False, fill_value=0.,kind='linear')
# spec_flux_model_array = spec_flux_model(wavelength_array)
# df_model_new = pd.DataFrame({'LAMBDA':wavelength_array, 'Llam':spec_flux_model_array})
#norm = df_model_new['Llam'].median()
# norm = df_model_new[( (df_model_new.LAMBDA > 0.) & (df_model_new.LAMBDA < 12000.) )].Llam.max()
# df_model_new['normLlam'] = df_model_new['Llam']/norm
#ax = df_model_new.plot('LAMBDA','normLlam', c='#000000', label='Model', fontsize=18)
# ax = df_model_new.plot('LAMBDA','normLlam', c='#888888', label='Model')
# Distance to KN in cm...
#HERE IS WHERE WE PRINT THE SEDs
flux_array = spec_flux_model_array / ((4.*np.pi*(Dlum_cm)**2.))#*(1E-17)) #Flam [ergs/s/cm2/Angstrom]
nm_array = wavelength_array / 10.
df_model_final = pd.DataFrame({'LAMBDA':nm_array, 'Flam':flux_array})
outputFile1 = 'sed_'+"{:.2f}".format(z_kn)+"_"+"{:.2f}".format(t_kn)+'.spec'
outputFile1 = os.path.join(sed_dirname, outputFile1)
# print(outputFile1)
# print(df_model_new['LAMBDA'])
df_model_final.to_csv(outputFile1,mode='w', index = False)
#THIS PART MAKES PLOTS.
#Plots are of the filter bandpasses and the kasen spectrum.
#It also plots the different parts of the Kasen model.
#You can turn it on by setting plotter = True at the beginning.
if plotter == True:
ax = df_model_new.plot('LAMBDA','normLlam', c='#888888', label='Model')
for band in bandList:
#print band
df_band.plot('LAMBDA', band, c=bandpassColors_dict[band], ax=ax)
title = """%s""" % (fblue)
plt.title(title, fontsize=16)
plt.xlim([3000., 11000.])
#plt.ylim([0.,1.1])
#ax.legend(loc='upper right', fontsize=14, framealpha=0.5)
#plt.legend(bbox_to_anchor=(1.05, 1.0), loc='upper left')
ax.set_xlabel('wavelength (observed frame) [$\\AA$]',fontsize=16)
ax.set_ylabel('Relative $F_{\lambda}$',fontsize=16)
ax.tick_params(axis = 'both', which = 'major', labelsize = 16)
#textstr = """$z$=%.3f\n$t$=%.2f days""" % (z_kn, t_kn)
# props = dict(boxstyle='round', facecolor='wheat', alpha=0.5)
#ax.text(0.05, 0.95, textstr, transform=ax.transAxes, fontsize=14, verticalalignment='top', bbox=props)
ax.grid(False)
plt.show()
ax2 = df_model_blue.plot('LAMBDA','Llam', color='blue', label='light r-process component',xlim=[3000.,30000.])
# df_model_blue.plot('LAMBDA', 'Llam', ax=ax, xlim=[3000.,30000.], label='light r-process component')
df_model_red.plot('LAMBDA', 'Llam', color='red', ax=ax2, label='heavy r-process component')
df_model_comb.plot('LAMBDA', 'Llam', color='black', ax=ax2, label = 'composite')
plt.show()
#outputFile = """KNspectrum.%s_z%.3f_a%05.2f.png""" % (fblue, z_kn, t_kn)
outputFile = """KNspectrum._z%.3f_a%05.2f.png""" % (z_kn, t_kn)
#outputFile = 'KNspectrum.z'+str(z_kn)+'_'+str(t_kn)+'.png'
outputFile = os.path.join(png_dirname, outputFile)
plt.tight_layout()
plt.savefig(outputFile)
# Calculate Flam [ergs/s/cm**2/Angstrom] from Llam [ergs/s/Angstrom] and Dlum_cm [cm]...
#Flam is flux and Llam is luminosity.
df_model_new['Flam'] = df_model_new['Llam']/(4*np.pi*Dlum_cm*Dlum_cm)
# Write df_model_new to a temporary output file...
tempFile = os.path.join(output_dirname,'temp_KNSpectrum.csv')
df_model_new.to_csv(tempFile, index=False)
run_calc_abmag('u,g,r,i,z,Y', bandpassFile, tempFile, 'LAMBDA', 'Flam', 'Flam', verbose)
if debug == True: break
print("Yay! All done.")
f.close()
#f2.close()  # f2 is never opened above (its creation is commented out)
###Output
_____no_output_____
###Markdown
Create Animated GifSee https://stackoverflow.com/questions/753190/programmatically-generate-video-or-animated-gif-in-python, as suggested by S. Allam.
###Code
# Define name of the animated GIF to be output...
kasen_filename = 'blue_red'
outputAnimGif = """KNspectrum.%s_z%.3f.gif""" % (kasen_filename, t_kn)
outputAnimGif = os.path.join(png_dirname, outputAnimGif)
# Identify input png files for the animated GIF...
filenameTemplate = """KNspectrum._z%.3f_a??.??.png""" % (z_kn)
filenameTemplate = os.path.join(png_dirname, filenameTemplate)
#print filenameTemplate
filenames = glob.glob(filenameTemplate)
#for filename in filenames:
# print filename
# Generate animated GIF...
with imageio.get_writer(outputAnimGif, mode='I', duration=0.1) as writer:
for filename in filenames:
print(filename)
image = imageio.imread(filename)
writer.append_data(image)
print("""Animated GIF can be found here: %s""" % (outputAnimGif))
###Output
Animated GIF can be found here: /Users/mwiesner/KDC/kasen/PNGs/KNspectrum.blue_red_z0.010.gif
###Markdown
Display Animated GifSee https://github.com/ipython/ipython/issues/10045#issuecomment-608641627
###Code
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
from IPython import display
# Display GIF in Jupyter, CoLab, IPython
with open(outputAnimGif,'rb') as f:
display.Image(data=f.read(), format='png')
###Output
_____no_output_____
|
Batch_6_January2022/Week4/homework/recommender_systems_homework.ipynb
|
###Markdown
Recommender Systems Homework* This notebook is for the recommender systems homework of Applied AI. \* The dataset used for this homework is [The Movies Dataset](https://www.kaggle.com/rounakbanik/the-movies-dataset/data) **Dataset Description** This dataset includes 45k movies with their features, such as the kind of movie or the crew of the movie. Ratings of these movies are also included as a User-Movie interaction table.**Tables in Dataset:*** movies_metadata : Features belonging to the movies (~45k)* keywords : Keywords extracted from the plots of the movies* credits : Cast and crew information* links : TMDB and IMDB IDs of all movies* ratings : User-Movie interactions **Task Description**You are supposed to build a **recommendation system** which recommends movies to the user. The input of the system is a movie and the output is a recommendation list consisting of movies similar to the given movie.* This task's approach to recommender systems is the **Content Based** approach.* Similarities between movies can be found by looking at their **common** **cast**.* Other movie features can be added to the system as you wish. **What will you report?*** There is no limitation or scoring function for this task. * You can look at the distances between similar movies for comparison.* Recommend movies to yourselves using your system and evaluate yourselves fairly 😀--- Preparation * Mount Drive first
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
Mounted at /content/drive
###Markdown
* Import libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
* Read the _credits_ file
###Code
credits = pd.read_csv('/content/drive/MyDrive/applied_ai_enes_safak/recommender_systems/MovieLens/credits.csv', low_memory=False)
###Output
_____no_output_____
###Markdown
Extracting Cast Names Example
###Code
toy_story_names = []
movie_id = 0
for i in credits['cast'][movie_id].split("'name': '")[1:]:
toy_story_names.append(i.split("'")[0])
toy_story_names
###Output
_____no_output_____
###Markdown
Recommendation System
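A possible content-based sketch, added for illustration only and not the official solution: represent each movie by the set of its cast names (reusing the simple string split shown above) and rank other movies by cast overlap (Jaccard similarity). The helper names below are made up, and `credits` is the DataFrame loaded earlier.
```
# Hypothetical sketch: rank movies by shared cast members (Jaccard similarity)
def extract_cast_names(cast_field):
    return {part.split("'")[0] for part in str(cast_field).split("'name': '")[1:]}

def cast_similarity(cast_a, cast_b):
    union = cast_a | cast_b
    return len(cast_a & cast_b) / len(union) if union else 0.0

def most_similar_by_cast(movie_index, n=5):
    cast_sets = credits['cast'].apply(extract_cast_names)
    target = cast_sets[movie_index]
    scores = cast_sets.apply(lambda cast: cast_similarity(target, cast))
    return scores.drop(index=movie_index).sort_values(ascending=False).head(n)

# Example call, using index 0 (the Toy Story row shown above):
# most_similar_by_cast(0, n=5)
```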
###Code
###Output
_____no_output_____
###Markdown
Recommendation Function
###Code
def recommend_movie(movie_name, n_recommendations=5):
    # parameter names are placeholders; adapt them to your implementation
    '''
    - takes a movie name as a parameter and returns the given number of similar movies
    '''
###Output
_____no_output_____
|
Basic Python/2-Built-in-functions-finished.ipynb
|
###Markdown
Built-in Functions Python comes with a wide range of functions. However, many of these are part of standard libraries like the `math` library rather than built-in. Converting valuesConversion from hexadecimal to decimal is done by adding the prefix **0x** to the hexadecimal value, or vice versa by using the built-in **hex( )**; octal to decimal by adding the prefix **0o** to the octal value, or vice versa by using the built-in function **oct( )**.
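A small added example (not in the original notebook) showing the octal and binary counterparts of the hexadecimal conversion demonstrated below:
```
# Octal and binary analogues of hex()/0x
print(oct(150), 0o226)          # octal literals use the 0o prefix in Python 3
print(bin(150), 0b10010110)     # binary literals use the 0b prefix
print(int('226', 8), int('10010110', 2))
```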
###Code
hex(150)
0x96
###Output
_____no_output_____
###Markdown
**int( )** converts a number to an integer. This can be decimal numbers, integer or a string. For strings the base can optionally be specified:
###Code
print(int(9.9) , int('1001',2) , int('9') )
###Output
9 9 9
###Markdown
**str()** function can be used to convert almost anything to a string.
###Code
print(str(True))
print(str(1.234))
print(str(-2))
###Output
True
1.234
-2
###Markdown
Mathematical functionsMathematical functions include the usual suspects like logarithms, trigonometric functions, the constant $\pi$ and so on.
###Code
import math
math.sin(math.pi/2)
from math import * # avoid having to put a math. in front of every mathematical function
sin(pi/2) # equivalent to the statement above
###Output
_____no_output_____
###Markdown
Simplifying Arithmetic Operations **round( )** function rounds the input value to a specified number of places or to the nearest integer.
###Code
print( round(5.6231) )
print( round(4.55892, 2) )
###Output
6
4.56
###Markdown
complex( ) is used to define a complex number and abs( ) outputs the absolute value of the same.
###Code
c =complex('5+2j')
print( abs(c) )
###Output
5.385164807134504
###Markdown
**divmod(x,y)** function outputs the quotient and the remainder in a tuple in the format (quotient, remainder).
###Code
divmod(9,3)
###Output
_____no_output_____
###Markdown
Accepting User Inputs **input(prompt)**, prompts for and returns input as a string. A useful function to use in conjunction with this is **eval()** which takes a string and evaluates it as a python expression.
###Code
abc = input("abc = ")
a=eval(input('some expression: '))
print(abc , a)
###Output
abc = 1+3
some expression: 2+4
1+3 6
###Markdown
The **print()** function prints all of its arguments as strings, separated by spaces and followed by a linebreak: - print("Hello World") - print("Hello",'World') - print("Hello", )Note that **print** is different in old versions of Python (2.7), where it was a statement and did not need parentheses around its arguments.
###Code
print("Hello","World")
###Output
Hello World
|
1-Python/1-Basics/Teoria/Python Basics II.ipynb
|
###Markdown
 Python Basics IIYa hemos visto cómo declarar variables, qué tipos hay, y otras funcionalidades importantes de Python como sus flujos de ejecución o las formas que tenemos de comentar el código. En este Notebook aprenderás a realizar **operaciones con tus variables** y descubrirás las colecciones mediante uno de los objetos más usados en Python: **las listas**.1. [Operaciones aritméticas](1.-Operaciones-aritméticas)2. [Operaciones comparativas](2.-Operaciones-comparativas)3. [Operaciones con booleanos](3.-Operaciones-con-booleanos)4. [Funciones *Built-in*](4.-Funciones-Built-in)5. [Métodos](5.-Métodos)6. [Listas](6.-Listas)7. [Resumen](7.-Resumen) 1. Operaciones aritméticasEn el Notebook *Python Basics I* ya vimos por encima las principales operaciones aritméticas en Python. Las recordamos:* Sumar: `+`* Restar: `-`* Multiplicar: `*`* Dividir: `/`* Elevar: `**`* Cociente division: `//`* Resto de la división: `%` Ejercicio de operaciones aritméticas Declara una variable int Declara otra variable float. Suma ambas variables. ¿Qué tipo de dato es el resultado? Multiplica ambas variables Eleva una variable a la potencia de la otra Calcula el resto de dividir 12/5
###Code
var_int = 10
var_float = 10.
var_sum = var_int + var_float
print(var_sum)
# el resultado es float
print(var_sum)
var_prod = var_int * var_float
print(var_prod)
var_pot = var_int ** var_float
print(var_pot)
print( 12 % 5)
###Output
20.0
20.0
100.0
10000000000.0
2
###Markdown
Propiedad conmutativa, asociativa, distributiva y el paréntesisSi queremos concatenar varias operaciones, ten siempre en cuenta las propiedades matemáticas de la multiplicación
###Code
print("Conmutativa")
print(2 * 3)
print(3 * 2)
print("\nAsociativa") # Recuerda que "\n" se usa para que haya un salto de linea en el output.
print(2 * (3 + 5))
print(2 * 3 + 2 * 5)
print("\nDistributiva")
print((3 * 2) * 5)
print(3 * (2 * 5))
print("\nEl Orden de operaciones se mantiene. Siempre podemos usar paréntesis")
print(2 * (2 + 3) * 5)
print((2 * 2 + 3 * 5)/(4 + 7))
###Output
Conmutativa
6
6
Asociativa
16
16
Distributiva
30
30
El Orden de operaciones se mantiene. Siempre podemos usar paréntesis
50
1.7272727272727273
###Markdown
Operaciones más complejasSi salimos de las operaciones básicas de Python, tendremos que importar módulos con más funcionalidades en nuestro código. Esto lo haremos mediante la sentencia `import math`. `math` es un módulo con funciones ya predefinidas, que no vienen por defecto en el núcleo de Python. De esta forma será posible hacer cálculos más complejos como:* Raíz cuadrada* Seno/Coseno* Valor absoluto*...El módulo es completísimo y si estás buscando alguna operación matemática, lo más seguro es que ya esté implementada. Te dejo por aquí el [link a la documentación del módulo.](https://docs.python.org/3/library/math.html).
###Code
import math
print(math.sqrt(25))
print(math.fabs(-4))
print(math.acos(0))
###Output
5.0
4.0
1.5707963267948966
###Markdown
Como en todos los lenguajes de programación, suele haber una serie de componentes básicos (variables, operaciones aritméticas, tipos de datos...) con los que podemos hacer muchas cosas. Ahora bien, si queremos ampliar esas funcionalidades, se suelen importar nuevos módulos, con funciones ya hechas de otros usuarios, como en el caso del módulo `math`. Veremos esto de los módulos más adelante. ERRORES Dividir por cero Cuidado cuando operamos con 0s. Las indeterminaciones y valores infinitos suponen errores en el código. Por suerte, la descripción de estos errores es bastante explícita, obteniendo un error de tipo `ZeroDivisionError`
###Code
4/0
# Hay valores que se salen del dominio de algunas funciones matemáticas, como es el caso de las raices de números negativos
math.sqrt(-10)
###Output
_____no_output_____
###Markdown
Ejercicio de operaciones con mathConsulta la documentación de math para resolver este ejercicio Calcula el valor absoluto de -25. Usa fabs Redondea 4.7 a su entero más bajo. Usa floor Redondea 4.3 a su entero más alto. Usa ceil El número pi ¿Cuál es el área de un círculo de radio 3?
###Code
print(math.fabs(-25))
print(math.floor(4.7))
print(math.ceil(4.3))
print(math.pi)
print(math.pi * 3**2)
###Output
25.0
4
5
3.141592653589793
28.274333882308138
###Markdown
2. Operaciones comparativasEs bastante intuitivo comparar valores en Python. La sintaxis es la siguiente:* `==`: Igualdad. No es un `=`. Hay que diferenciar entre una comparativa, y una asignación de valores* `!=`: Desigualdad* `>`: Mayor que* `<`: Menor que* `>=`: Mayor o igual que* `<=`: Menor o igual que
###Code
# asignación
asign = 1
print(asign)
# comparacion
print(asign == 5)
print(asign)
###Output
1
False
1
###Markdown
En la asignación estamos diciéndole a Python que la variable `asign` vale 1, mientras que en la comparación estamos preguntando a Python si `asign` equivale a 5. Como vale 1, nos devuelve un `False`
###Code
print("AAA" == "BBB")
print("AAA" == "AAA")
print(1 == 1)
print(1 == 1.0)
print (67 != 93)
print(67 > 93)
print(67 >= 93)
print(True == 1)
print(True == 5)
###Output
False
True
True
True
True
False
False
True
False
###Markdown
ERRORES en comparativas Este tipo de errores son muy comunes, pues es muy habitual comparar peras con manzanas. Cuando se trata de una igualdad (`==`), no suele haber problemas, ya que si las variables son de distinto tipo, simplemente es `False`. Lo ideal sería que Python nos avisase de estas cosas porque realmente lo estamos haciendo mal, no estamos comparando cosas del mismo tipo
###Code
print (True == 6)
print(True == "verdadero")
print(6 == "cadena")
# Obtenemos un TypeError cuando la comparativa es de > o <
1.0 > "texto"
###Output
_____no_output_____
###Markdown
3. Operaciones con booleanosTodas las operaciones que realizábamos en el apartado anterior devolvían un tipo de dato concreto: un booleano. `True` o `False`. Pero ¿cómo harías si se tienen que cumplir 3 condiciones, o solo una de esas tres, o que no se cumplan 5 condiciones?Para este tipo de operaciones recurrimos al [*Álgebra de Boole*](https://es.wikipedia.org/wiki/%C3%81lgebra_de_Boole#:~:text=El%20%C3%A1lgebra%20de%20Boole%2C%20tambi%C3%A9n,que%20esquematiza%20las%20operaciones%20l%C3%B3gicas.). Se trata de una rama del álgebra que se utiliza en electrónica, pero que tiene un sin fin de aplicaciones, no solo técnicas, sino aplicables a la vida cotidiana. Estas matemáticas pueden llegar a ser muy complejas aun utilizando únicamente dos valores: `True` y `False`. Las operaciones más comunes son **AND, OR, NOT**.En las siguientes tablas tienes todos los posibles resultados de las puertas AND, OR, NOT, dependiendo de sus inputs.Puede parecer complejo pero a efectos prácticos, y sin meternos con otro tipo de puertas lógicas, te recomiendo seguir estas reglas:* **AND**: Se tienen que cumplir ambas condiciones para que sea un `True`* **OR**: Basta que se cumpla al menos una condición para que sea `True`* **NOT**: Lo contrario de lo que hayaVeamos un ejemplo práctico para aclarar estos conceptos. Imaginemos que queremos comprar un ordenador, pero nos cuesta decidirnos. Eso sí, tenemos claras las siguientes condiciones a la hora de elegir:* La RAM me vale que tenga 16, 32 o 64 GB* En cuanto al procesador y disco duro, la combinación que mejor me viene es un i3 con 500GB de disco.* Precio: que no pase de los 800 €
###Code
# Primer ordenador
ram1 = 32
process1 = "i5"
disco1 = 500
precio1 = 850
# Segundo ordenador
ram2 = 8
process2 = "i5"
disco2 = 500
precio2 = 600
# Tercer ordenador
ram3 = 32
process3 = "i3"
disco3 = 500
precio3 = 780
###Output
_____no_output_____
###Markdown
Veamos cómo implemento esto mediante operaciones booleanas
###Code
# Primero, calculamos el valor de estas condiciones por separado
cond_ram1 = (ram1 == 16 or ram1 == 32 or ram1 == 64) # OR: me vale al menos un True para que se cumpla esta condicion
cond_process1 = (process1 == "i3" and disco1 == 500) # AND: se tienen que cumplir ambas
cond_precio1 = (precio1 <= 800)
print(cond_ram1)
print(cond_process1)
print(cond_precio1)
# todo en una línea
cond_tot1 = cond_ram1 and cond_process1 and cond_precio1
print("Resultado de si me encaja el ordenador 1:", cond_tot1)
###Output
True
False
False
Resultado de si me encaja el ordenador 1: False
###Markdown
El primer ordenador cumple el requisito de ram, pero no los de precio y procesador/disco. Veamos los otros dos si los cumplen
###Code
cond_tot2 = (ram2 == 16 or ram2 == 32 or ram2 == 64) and (process2 == "i3" and disco2 == 500) and (precio2 <= 800)
cond_tot3 = (ram3 == 16 or ram3 == 32 or ram3 == 64) and (process3 == "i3" and disco3 == 500) and (precio3 <= 800)
print("Resultado de si me encaje el ordenador 2: ", cond_tot2)
print("Resultado de si me encaje el ordenador 3: ", cond_tot3)
###Output
Resultado de si me encaje el ordenador 2: False
Resultado de si me encaje el ordenador 3: True
###Markdown
¡Bingo! El tercer ordenador cumple todas las condiciones para ser mi futura compra. Verás en próximos notebooks que esto se puede hacer todavía más sencillo mediante bucles y funciones.Si quieres aprender más sobre el **Álgebra de Boole**, te recomiendo [esta página](https://ryanstutorials.net/boolean-algebra-tutorial/) ERRORES varios ¡No me vas a creer cuando te diga que lo mejor que te puede pasar es que te salten errores por pantalla! Si, estos son los errores más fáciles de detectar y puede que también fáciles de corregir ya que tienes la ayuda del descriptivo del error. El problema gordo viene cuando no saltan errores y ves que tu código no lo está haciendo bien. Para ello tendremos que debugear el código y ver paso a paso que está pasando. Lo veremos en notebooks posteriores. De momento corregiremos el código revisandolo a ojo.Como ves en el siguiente ejemplo, el resultado del ordenador 3 es `False` cuando debería ser `True`. ¿Por qué?
###Code
cond_tot3 = (ram3 == 16 or ram3 == 32 or ram3 == 64) and (process3 == "i3" and disco3 == 500) and (precio3 >= 800)
print("Resultado de si me encaja el ordenador 3: ", cond_tot3)
###Output
Resultado de si me encaja el ordenador 3: False
###Markdown
Cuidado cuando tenemos sentencias muy largas, ya que nos puede bailar perfectamente un paréntesis, un `>`, un `and` por un `or`... Hay que andarse con mil ojos.Y sobretodo, cuidado con el *copy paste*. Muchas veces, por ahorrar tiempo, copiamos código ya escrito para cambiar pequeñas cosas y hay veces que se nos olvida cambiar otras. Pensamos que está bien, ejecutamos, y saltan errores. Copiar código no es una mala práctica, es más, muchas veces evitamos errores con los nombres de las variables, pero hay que hacerlo con cabeza. Ejercicio de operaciones con booleanosSin escribir código, ¿Qué valor devuelve cada una de las siguientes operaciones? not (True and False) False or False or False or False or False or False or True or False or False or False True or True or True or True or True or False or True or True or True or True (False and True and True) or (True and True) 1. True2. True3. True4. True 4. Funciones *Built in*Hay una serie de funciones internas, que vienen en el intérprete de Python. Algunas de las más comunes son:* **Tipos**: `bool()`, `str()`, `int()`, `float()`* **Min/Max**: `min()`, `max()`* **print()*** **type()*** **range()*** **zip()*** **len()*** ...La sintaxis de la función es:```Pythonnombre_funcion(argumentos)```Algunas ya las hemos visto. Sin embargo, hay unas cuantas que las iremos descubriendo a lo largo de estos notebooks. Para más detalle, tienes [aquí](https://docs.python.org/3/library/functions.html) todas las funciones *built-in* de la documentación.De momento, en lo que se refiere a funciones, vamos a ir trabajando con funciones ya hechas, pero más adelante crearemos nuestras propias funciones.
###Code
# Len se usa para calcular la longitud de una variable. Ya veras que lo usaremos mucho en colecciones
print(len("Este string")) # devuelve el número de caracteres
# Funcion max. Tiene tantos argumentos como cantidad de números entre los cuales queramos sacar su valor máximo.
print(max(1,2,3,4))
###Output
11
4
###Markdown
Ejercicio de funciones built-inBusca en la documentación una función que te sirva para ordenar de manera descendente la siguiente lista
###Code
temperaturas_de_hoy = [17, 22, 26, 18, 21, 21, 25, 29]
sorted(temperaturas_de_hoy, reverse = True)
###Output
_____no_output_____
###Markdown
5. MétodosSe trata de una propiedad MUY utilizada en programación. Son funciones propias de las variables/objetos, y que nos permiten modificarlos u obtener más información de los mismos. Dependiendo del tipo de objeto, tendremos unos métodos disponibles diferentes.Para usar un método se usa la sintaxis `objeto.metodo()`. Ponemos un punto entre el nombre del objeto y el del metodo, y unos paréntesis por si el método necesita de algunos argumentos. **Aunque no necesite de argumentos, los paréntesis hay que ponerlos igualmente.**Veamos algunos ejemplos StringUna variable de tipo string, tiene una serie de métodos que permiten sacarle jugo a la cadena de texto. [Aquí](https://docs.python.org/2.5/lib/string-methods.html) tienes todos los métodos que podemos usar en cadenas de texto
###Code
string_ejemplo = "string en mayusculas"
# Para poner un string todo en mayusculas
print("Todo mayusculas:", string_ejemplo.upper())
# Para poner un string todo en minusculas
print("Todo minusculas:", string_ejemplo.lower())
# Para sustituir caracteres. Dos argumentos (busca este string, sustituyelo por este otro)
print("Sustituir m por M:", string_ejemplo.replace("m", "M"))
# El replace también es muy útil cuando queremos eliminar caracteres. Sustituimos por vacío
print("Eliminar m:", string_ejemplo.replace("m", ""))
# Divide el string por un caracter en una LISTA
print("Separalo segun el numero de espacios:", string_ejemplo.split(" "))
# Devuelve la posicion del caracter que le pongamos como argumento
print("'y' está en la posición:", string_ejemplo.index("y"))
###Output
Todo mayusculas: STRING EN MAYUSCULAS
Todo minusculas: string en mayusculas
Sustituir m por M: string en Mayusculas
Eliminar m: string en ayusculas
Separalo segun el numero de espacios: ['string', 'en', 'mayusculas']
'y' está en la posición: 12
###Markdown
Como ves, se pueden hacer muchas cosas en los Strings gracias a sus métodos. Ya verás cómo la cosa se pone más interesante cuando los tipos de los datos sean todavía más complejos.Los métodos son una manera de abstraernos de cierta operativa. Convertir todos los caracteres de una cadena a minuscula, puede ser un poco tedioso si no existiese el método `lower()`. Tendríamos que acudir a bucles o programación funcional. ERRORES en métodos
###Code
# Cuando un método necesita ciertos argumentos, y no se los proporcionamos
string_ejemplo = "string en mayusculas"
string_ejemplo.replace()
###Output
_____no_output_____
###Markdown
6. ListasSe trata de otro de los tipos de datos de Python más usados. Dentro de las colecciones, que veremos más adelante, la lista es la colección que normalmente se le da más uso. **Nos permiten almacenar conjuntos de variables u objetos**, y son elementos de lo más versátiles puesto que podemos almacenar objetos de distintos tipos, modificarlos, eliminarlos, meter listas dentro de listas... Sus dos características principales son:* **Mutables**: una vez se ha creado la lista, se puede modificar* **Ordenada**: Los elementos tienen un cierto orden, lo que nos permite acceder al elemento que queramos teniendo en cuenta tal ordenEn cuanto a su sintaxis, cuando declaremos la lista simplemente hay que separar cada elemento con comas, y rodearlo todo con corchetes.
###Code
numeros_favoritos = [3, 6, 1]
print(numeros_favoritos)
string_favoritos = ["ponme", "otra"]
bools_favoritos = [True, True, not False, True or False]
mix_list = ["texto", 1, 7, 55.7, True, False]
list_list = [[], 4, "diez", [True, 43, "mas texto"]]
# concatenamos
lista_a = ['a', 'A']
lista_b = ['b', 'B']
lista_ab = lista_a + lista_b
print(lista_ab)
###Output
['a', 'A', 'b', 'B']
###Markdown
**NOTA**: ¿Ves por qué los decimales en Python siempre van con puntos y no con comas? Con las colecciones el intérprete de Python se volvería loco.Podemos ver tambien el tipo de la lista
###Code
type(list_list)
###Output
_____no_output_____
###Markdown
Calcular la longitud de la misma mediante el método *built-in* ya visto: `len()`
###Code
lista_len = [1,2,3,4]
len(lista_len)
###Output
_____no_output_____
###Markdown
Accedemos a los elementos de la lista mediante corchetes `[]`**Importante**. El primer elemento es el 0
###Code
lista_index = [4, 5, 6, 7,8]
print(lista_index[0])
print(lista_index[1])
print(lista_index[2])
print(lista_index[3])
###Output
4
5
6
7
###Markdown
Métodos en ListasPara el tipo de objeto lista, también hay una serie de métodos característicos que nos permiten operar con ellas: añadir valores, quitarlos, indexado, filtrado, etc... En [este enlace](https://www.w3schools.com/python/python_ref_list.asp) puedes encontrar todos los métodos que podrás usar con listas.
###Code
asignaturas = ['Mates', 'Fisica', 'ingles']
print(asignaturas)
asignaturas.append("quimica")
print(asignaturas)
print(asignaturas.index("Fisica"))
asignaturas.clear()
print(asignaturas.clear())
print(asignaturas)
###Output
['Mates', 'Fisica', 'ingles']
['Mates', 'Fisica', 'ingles', 'quimica']
1
None
[]
###Markdown
Ejercicio de listas Crea una lista con tus películas favoritas. No te pases de larga! Imprime por pantalla la longitud de la lista Añade a esta lista otra lista con tus series favoritas
###Code
pelis = ["Piratas del Caribe", "Blancanieves", "Regreso al futuro", "La vida de Brian"]
print(len(pelis))
series = ["Dark", "Big Bang", "Los soprano"]
print( pelis + series)
pelis.append(series)
print(pelis)
###Output
4
['Piratas del Caribe', 'Blancanieves', 'Regreso al futuro', 'La vida de Brian', 'Dark', 'Big Bang', 'Los soprano']
['Piratas del Caribe', 'Blancanieves', 'Regreso al futuro', 'La vida de Brian', ['Dark', 'Big Bang', 'Los soprano']]
###Markdown
7. Resumen
###Code
# Operaciones matemáticas
print("Operaciones matemáticas")
print(4 + 6)
print(9*2)
print(2 * (3 + 5))
print(10/5)
print(10 % 3)
print(2**10)
# Funciones matemáticas más complejas
import math
print(math.sqrt(25))
# Operaciones comparativas
print("\nOperaciones comparativas")
print("AAA" == "BBB")
print("AAA" == "AAA")
print(1 == 1)
print(1 == 1.0)
print(67 != 93)
print(67 > 93)
print(67 >= 93)
# Operaciones con booleanos
print("\nOperaciones con booleanos")
print(True and True and False)
print(True or True or False)
print(not False)
# Funciones builtin
print("\nFunciones built-in")
string_builtin = "Fin del notebook"
print(string_builtin.upper())
print(string_builtin.lower())
print( string_builtin.replace("o", "O"))
print(string_builtin.replace("o", ""))
# Listas
print("\nListas")
musica = ["AC/DC", "Metallica", "Nirvana"]
musica.append("Queen")
print(musica)
###Output
Operaciones matemáticas
10
18
16
2.0
1
1024
5.0
Operaciones comparativas
False
True
True
True
True
False
False
Operaciones con booleanos
False
True
True
Funciones built-in
FIN DEL NOTEBOOK
fin del notebook
Fin del nOtebOOk
Fin del ntebk
Listas
['AC/DC', 'Metallica', 'Nirvana', 'Queen']
|
module2-survival-analysis/LS_DS_232_Survival_Analysis-Assignment.ipynb
|
###Markdown
Lambda School Data Science - Survival Analysishttps://xkcd.com/881/The aim of survival analysis is to analyze the effect of different risk factors and use them to predict the duration of time between one event ("birth") and another ("death"). Assignment - Customer ChurnTreselle Systems, a data consulting service, [analyzed customer churn data using logistic regression](http://www.treselle.com/blog/customer-churn-logistic-regression-with-r/). For simply modeling whether or not a customer left this can work, but if we want to model the actual tenure of a customer, survival analysis is more appropriate.The "tenure" feature represents the duration that a given customer has been with them, and "churn" represents whether or not that customer left (i.e. the "event", from a survival analysis perspective). So, any situation where churn is "no" means that a customer is still active, and so from a survival analysis perspective the observation is censored (we have their tenure up to now, but we don't know their *true* duration until event).Your assignment is to [use their data](https://github.com/treselle-systems/customer_churn_analysis) to fit a survival model, and answer the following questions:- What features best model customer churn?- What would you characterize as the "warning signs" that a customer may discontinue service?- What actions would you recommend to this business to try to improve their customer retention?Please create at least *3* plots or visualizations to support your findings, and in general write your summary/results targeting an "interested layperson" (e.g. your hypothetical business manager) as your audience.This means that, as is often the case in data science, there isn't a single objective right answer - your goal is to *support* your answer, whatever it is, with data and reasoning.Good luck!
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import lifelines
pd.set_option('display.max_columns', None) # Unlimited columns
# Loading the data to get you started
churn_data = pd.read_csv(
'https://raw.githubusercontent.com/treselle-systems/'
'customer_churn_analysis/master/WA_Fn-UseC_-Telco-Customer-Churn.csv')
churn_data.head()
churn_data.info() # A lot of these are "object" - some may need to be fixed...
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 7043 entries, 0 to 7042
Data columns (total 21 columns):
customerID 7043 non-null object
gender 7043 non-null object
SeniorCitizen 7043 non-null int64
Partner 7043 non-null object
Dependents 7043 non-null object
tenure 7043 non-null int64
PhoneService 7043 non-null object
MultipleLines 7043 non-null object
InternetService 7043 non-null object
OnlineSecurity 7043 non-null object
OnlineBackup 7043 non-null object
DeviceProtection 7043 non-null object
TechSupport 7043 non-null object
StreamingTV 7043 non-null object
StreamingMovies 7043 non-null object
Contract 7043 non-null object
PaperlessBilling 7043 non-null object
PaymentMethod 7043 non-null object
MonthlyCharges 7043 non-null float64
TotalCharges 7043 non-null object
Churn 7043 non-null object
dtypes: float64(1), int64(2), object(18)
memory usage: 1.1+ MB
###Markdown
Data Cleaning
###Code
# Some columns we just don't need.
churn2 = churn_data.drop(columns = 'customerID')
# The two numerical columns at the end should really be numerical
churn3 = churn2
churn3['MonthlyCharges'] = pd.to_numeric(churn3['MonthlyCharges'])
churn3['TotalCharges'] = pd.to_numeric(churn3['TotalCharges'],
errors='coerce')
# There are 11 nulls in TotalCharges, which I'll replace with the mean
# value for that column
TotalCharges_mean = churn3['TotalCharges'].mean()
churn3['TotalCharges'].fillna(TotalCharges_mean, inplace=True)
# Lifelines requires numerical yes and no values.
churn4 = churn3.replace({'Yes':1, 'No':0})
# I'll one-hot encode all the categorical columns
numerical_columns = ['tenure','MonthlyCharges','TotalCharges','Churn']
churn5 = churn4.drop(columns=numerical_columns)
churn6 = pd.get_dummies(churn5)
churn7 = churn4[numerical_columns].join(churn6)
print(churn7.shape)
churn7.head()
###Output
(7043, 42)
###Markdown
Snapshot of customer retention
###Code
# I'll first plot the lifelines of a sample of 80 customers
churn_sample = churn7.sample(80)
time = churn_sample.tenure.values
event = churn_sample.Churn.values
ax = lifelines.plotting.plot_lifetimes(time, event_observed=event)
# ax.set_xlim(0, 40)
ax.grid(axis='x')
ax.set_xlabel("Time in Months")
ax.set_title("Customer Retention");
plt.plot();
###Output
_____no_output_____
###Markdown
This is a small sample of the customer population, showing how long customers (individual lines) have been with the company (length of lines). Red lines end when a customer is lost. Blue lines show a customer that is still with the company. Survival estimate curve
###Code
time = churn7.tenure.values;
event = churn7.Churn.values;
kmf = lifelines.KaplanMeierFitter();
kmf.fit(time, event_observed=event);
kmf.plot();
plt.title('Probability of still having a customer after x months');
print(f'Median time before losing a customer: {kmf.median_} months')
###Output
0 0.5
dtype: float64
Median time before losing a customer: inf months
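###Markdown
As an optional extra (a minimal sketch, assuming the one-hot encoded `churn7` frame and `lifelines` from above are still in memory), we can also compare Kaplan-Meier curves between covariate groups, e.g. month-to-month vs. two-year contracts, to see which customers are retained longer.
###Code
# Hedged sketch: Kaplan-Meier retention curves per contract type
# (column names taken from the one-hot encoded churn7 frame above)
kmf_grouped = lifelines.KaplanMeierFitter()
ax = None
for label, mask in [('Month-to-month', churn7['Contract_Month-to-month'] == 1),
                    ('Two year', churn7['Contract_Two year'] == 1)]:
    kmf_grouped.fit(churn7.loc[mask, 'tenure'], churn7.loc[mask, 'Churn'], label=label)
    ax = kmf_grouped.plot(ax=ax)
ax.set_title('Retention probability by contract type');
###Output
_____no_output_____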
###Markdown
Survival Regression
###Code
# I use the Cox Proportional Hazards model for a survival regression
# The design matrix is highly collinear (many redundant one-hot columns), so we add a
# small penalizer (a ridge-style penalty on the coefficients) to stabilize the estimates.
cph = lifelines.CoxPHFitter(penalizer=0.01)
cph.fit(churn7, 'tenure', event_col='Churn')
cph.print_summary()
###Output
<lifelines.CoxPHFitter: fitted with 7043 observations, 5174 censored>
duration col = 'tenure'
event col = 'Churn'
number of subjects = 7043
number of events = 1869
log-likelihood = -12659.69
time fit was run = 2019-01-23 03:52:56 UTC
---
coef exp(coef) se(coef) z p log(p) lower 0.95 upper 0.95
MonthlyCharges 0.01 1.01 0.02 0.57 0.57 -0.57 -0.03 0.06
TotalCharges -0.00 1.00 0.00 -39.16 <0.005 -inf -0.00 -0.00 ***
SeniorCitizen 0.03 1.04 0.06 0.61 0.54 -0.61 -0.08 0.15
Partner -0.18 0.84 0.06 -3.23 <0.005 -6.71 -0.29 -0.07 *
Dependents -0.09 0.91 0.07 -1.31 0.19 -1.66 -0.23 0.05
PhoneService 0.40 1.49 24.86 0.02 0.99 -0.01 -48.33 49.13
PaperlessBilling 0.15 1.16 0.06 2.65 0.01 -4.82 0.04 0.26 *
gender_Female 0.02 1.02 14.14 0.00 1.00 -0.00 -27.70 27.74
gender_Male -0.02 0.98 14.14 -0.00 1.00 -0.00 -27.74 27.70
MultipleLines_0 0.03 1.03 13.64 0.00 1.00 -0.00 -26.71 26.77
MultipleLines_1 0.11 1.12 13.64 0.01 0.99 -0.01 -26.62 26.85
MultipleLines_No phone service -0.40 0.67 24.86 -0.02 0.99 -0.01 -49.13 48.33
InternetService_0 -0.32 0.73 22.96 -0.01 0.99 -0.01 -45.33 44.69
InternetService_DSL -0.42 0.66 14.28 -0.03 0.98 -0.02 -28.41 27.57
InternetService_Fiber optic 0.60 1.82 14.28 0.04 0.97 -0.03 -27.39 28.59
OnlineSecurity_0 0.21 1.24 14.54 0.01 0.99 -0.01 -28.28 28.71
OnlineSecurity_1 0.00 1.00 14.54 0.00 1.00 -0.00 -28.50 28.50
OnlineSecurity_No internet service -0.32 0.73 22.96 -0.01 0.99 -0.01 -45.33 44.69
OnlineBackup_0 0.14 1.15 14.28 0.01 0.99 -0.01 -27.84 28.12
OnlineBackup_1 0.09 1.09 14.28 0.01 1.00 -0.00 -27.89 28.07
OnlineBackup_No internet service -0.32 0.73 22.96 -0.01 0.99 -0.01 -45.33 44.69
DeviceProtection_0 0.07 1.08 14.28 0.01 1.00 -0.00 -27.91 28.06
DeviceProtection_1 0.16 1.17 14.28 0.01 0.99 -0.01 -27.83 28.15
DeviceProtection_No internet service -0.32 0.73 22.96 -0.01 0.99 -0.01 -45.33 44.69
TechSupport_0 0.16 1.17 14.52 0.01 0.99 -0.01 -28.30 28.61
TechSupport_1 0.07 1.07 14.52 0.00 1.00 -0.00 -28.38 28.53
TechSupport_No internet service -0.32 0.73 22.96 -0.01 0.99 -0.01 -45.33 44.69
StreamingTV_0 -0.03 0.97 14.21 -0.00 1.00 -0.00 -27.89 27.83
StreamingTV_1 0.25 1.29 14.21 0.02 0.99 -0.01 -27.61 28.12
StreamingTV_No internet service -0.32 0.73 22.96 -0.01 0.99 -0.01 -45.33 44.69
StreamingMovies_0 -0.03 0.97 14.21 -0.00 1.00 -0.00 -27.89 27.83
StreamingMovies_1 0.26 1.29 14.21 0.02 0.99 -0.01 -27.60 28.12
StreamingMovies_No internet service -0.32 0.73 22.96 -0.01 0.99 -0.01 -45.33 44.69
Contract_Month-to-month 1.49 4.42 12.96 0.11 0.91 -0.10 -23.91 26.88
Contract_One year 0.22 1.25 12.96 0.02 0.99 -0.01 -25.17 25.62
Contract_Two year -2.21 0.11 12.96 -0.17 0.86 -0.15 -27.61 23.18
PaymentMethod_Bank transfer (automatic) -0.24 0.79 11.62 -0.02 0.98 -0.02 -23.02 22.55
PaymentMethod_Credit card (automatic) -0.25 0.78 11.62 -0.02 0.98 -0.02 -23.03 22.53
PaymentMethod_Electronic check 0.15 1.16 11.62 0.01 0.99 -0.01 -22.63 22.93
PaymentMethod_Mailed check 0.27 1.32 11.62 0.02 0.98 -0.02 -22.51 23.06
---
Signif. codes: 0 '***' 0.0001 '**' 0.001 '*' 0.01 '.' 0.05 ' ' 1
Concordance = 0.93
Likelihood ratio test = 5986.69 on 40 df, log(p)=-inf
###Markdown
After fitting the model, the following variables have coefficients significantly different from zero: TotalCharges, Partner, and PaperlessBilling
###Code
# I plot out the log(Hazard Ratio) for each variable, to show which variables
# are close to being significant.
fig, ax = plt.subplots(figsize=(10,20));
cph.plot(ax=ax);
###Output
_____no_output_____
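###Markdown
To make the earlier statement about significant coefficients reproducible, here is a minimal sketch (assuming the fitted `cph` from above, and that this lifelines version exposes a `summary` DataFrame with a `p` column) that filters the summary table directly:
###Code
# Hedged sketch: list covariates whose p-value is below 0.05 in the fitted Cox model
summary_df = cph.summary
print(summary_df[summary_df['p'] < 0.05][['coef', 'exp(coef)', 'p']])
###Output
_____no_output_____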
###Markdown
Best predictors of churn? I will now plot predictions for several covariate groups to see if they look any different.
###Code
cph.plot_covariate_groups('Partner', [0,1]);
cph.plot_covariate_groups('PaperlessBilling', [0,1]);
cph.plot_covariate_groups('MonthlyCharges', [10,20,30,60,80,100,110,120]);
###Output
_____no_output_____
###Markdown
MonthlyCharges makes a big difference!! Customers with higher monthly charges start disappearing sooner. What about the other most significant variable in the model, TotalCharges?
###Code
cph.plot_covariate_groups('TotalCharges', [100,1000,3000,5000,8000]);
###Output
_____no_output_____
###Markdown
This one is even starker! But something is strange, because higher charges push a customer towards later and later disappearances. Ah. It must be that the two are heavily correlated (because longer-tenured customers have accumulated higher TotalCharges). Sure enough, as the following graph demonstrates:
###Code
fig, ax = plt.subplots();
ax.scatter(churn7.tenure, churn7.TotalCharges);
# What if I remove that variable and run the model again?
churn8 = churn7.drop(columns='TotalCharges')
cph = lifelines.CoxPHFitter(penalizer=0.01)
cph.fit(churn8, 'tenure', event_col='Churn')
cph.print_summary()
###Output
<lifelines.CoxPHFitter: fitted with 7043 observations, 5174 censored>
duration col = 'tenure'
event col = 'Churn'
number of subjects = 7043
number of events = 1869
log-likelihood = -13884.60
time fit was run = 2019-01-23 05:14:37 UTC
---
coef exp(coef) se(coef) z p log(p) lower 0.95 upper 0.95
MonthlyCharges -0.01 0.99 0.02 -0.35 0.73 -0.32 -0.05 0.04
SeniorCitizen -0.07 0.93 0.06 -1.26 0.21 -1.57 -0.18 0.04
Partner -0.52 0.60 0.06 -9.40 <0.005 -46.65 -0.63 -0.41 ***
Dependents -0.05 0.95 0.07 -0.78 0.43 -0.83 -0.19 0.08
PhoneService 0.02 1.02 24.86 0.00 1.00 -0.00 -48.71 48.75
PaperlessBilling 0.18 1.20 0.06 3.20 <0.005 -6.60 0.07 0.29 *
gender_Female 0.04 1.04 14.14 0.00 1.00 -0.00 -27.67 27.76
gender_Male -0.04 0.96 14.14 -0.00 1.00 -0.00 -27.76 27.67
MultipleLines_0 0.21 1.23 13.64 0.02 0.99 -0.01 -26.53 26.95
MultipleLines_1 -0.21 0.81 13.64 -0.02 0.99 -0.01 -26.95 26.53
MultipleLines_No phone service -0.02 0.98 24.86 -0.00 1.00 -0.00 -48.75 48.71
InternetService_0 -0.08 0.93 22.96 -0.00 1.00 -0.00 -45.09 44.93
InternetService_DSL -0.28 0.76 14.28 -0.02 0.98 -0.02 -28.27 27.71
InternetService_Fiber optic 0.31 1.36 14.28 0.02 0.98 -0.02 -27.69 28.30
OnlineSecurity_0 0.30 1.36 14.54 0.02 0.98 -0.02 -28.19 28.80
OnlineSecurity_1 -0.31 0.73 14.54 -0.02 0.98 -0.02 -28.81 28.19
OnlineSecurity_No internet service -0.08 0.93 22.96 -0.00 1.00 -0.00 -45.09 44.93
OnlineBackup_0 0.32 1.38 14.28 0.02 0.98 -0.02 -27.66 28.30
OnlineBackup_1 -0.29 0.75 14.28 -0.02 0.98 -0.02 -28.27 27.69
OnlineBackup_No internet service -0.08 0.93 22.96 -0.00 1.00 -0.00 -45.09 44.93
DeviceProtection_0 0.16 1.18 14.28 0.01 0.99 -0.01 -27.82 28.15
DeviceProtection_1 -0.12 0.89 14.28 -0.01 0.99 -0.01 -28.11 27.86
DeviceProtection_No internet service -0.08 0.93 22.96 -0.00 1.00 -0.00 -45.09 44.93
TechSupport_0 0.20 1.22 14.52 0.01 0.99 -0.01 -28.26 28.65
TechSupport_1 -0.17 0.84 14.52 -0.01 0.99 -0.01 -28.63 28.28
TechSupport_No internet service -0.08 0.93 22.96 -0.00 1.00 -0.00 -45.09 44.93
StreamingTV_0 0.01 1.01 14.21 0.00 1.00 -0.00 -27.85 27.87
StreamingTV_1 0.05 1.05 14.21 0.00 1.00 -0.00 -27.81 27.91
StreamingTV_No internet service -0.08 0.93 22.96 -0.00 1.00 -0.00 -45.09 44.93
StreamingMovies_0 0.05 1.06 14.21 0.00 1.00 -0.00 -27.80 27.91
StreamingMovies_1 -0.00 1.00 14.21 -0.00 1.00 -0.00 -27.86 27.86
StreamingMovies_No internet service -0.08 0.93 22.96 -0.00 1.00 -0.00 -45.09 44.93
Contract_Month-to-month 1.44 4.22 12.96 0.11 0.91 -0.09 -23.95 26.83
Contract_One year -0.18 0.84 12.96 -0.01 0.99 -0.01 -25.57 25.22
Contract_Two year -1.79 0.17 12.96 -0.14 0.89 -0.12 -27.19 23.60
PaymentMethod_Bank transfer (automatic) -0.29 0.75 11.62 -0.03 0.98 -0.02 -23.07 22.49
PaymentMethod_Credit card (automatic) -0.38 0.69 11.62 -0.03 0.97 -0.03 -23.16 22.40
PaymentMethod_Electronic check 0.29 1.34 11.62 0.03 0.98 -0.02 -22.49 23.07
PaymentMethod_Mailed check 0.27 1.31 11.62 0.02 0.98 -0.02 -22.51 23.06
---
Signif. codes: 0 '***' 0.0001 '**' 0.001 '*' 0.01 '.' 0.05 ' ' 1
Concordance = 0.87
Likelihood ratio test = 3536.88 on 39 df, log(p)=-inf
###Markdown
Now Partner and PaperlessBilling are the best predictors!
###Code
cph.plot_covariate_groups('Partner', [0,1]);
cph.plot_covariate_groups('PaperlessBilling', [0,1]);
###Output
_____no_output_____
|
workspace-1622692512 3/home/project_1_starter.ipynb
|
###Markdown
Project 1: Trading with Momentum InstructionsEach problem consists of a function to implement and instructions on how to implement the function. The parts of the function that need to be implemented are marked with a ` TODO` comment. After implementing the function, run the cell to test it against the unit tests we've provided. For each problem, we provide one or more unit tests from our `project_tests` package. These unit tests won't tell you if your answer is correct, but will warn you of any major errors. Your code will be checked for the correct solution when you submit it to Udacity. PackagesWhen you implement the functions, you'll only need to use the packages you've used in the classroom, like [Pandas](https://pandas.pydata.org/) and [Numpy](http://www.numpy.org/). These packages will be imported for you. We recommend you don't add any import statements, otherwise the grader might not be able to run your code. The other packages that we're importing are `helper`, `project_helper`, and `project_tests`. These are custom packages built to help you solve the problems. The `helper` and `project_helper` modules contain utility functions and graph functions. The `project_tests` module contains the unit tests for all the problems. Install Packages
###Code
import sys
!{sys.executable} -m pip install -r requirements.txt
###Output
Requirement already satisfied: colour==0.1.5 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 1)) (0.1.5)
Collecting cvxpy==1.0.3 (from -r requirements.txt (line 2))
[?25l Downloading https://files.pythonhosted.org/packages/a1/59/2613468ffbbe3a818934d06b81b9f4877fe054afbf4f99d2f43f398a0b34/cvxpy-1.0.3.tar.gz (880kB)
[K 100% |████████████████████████████████| 880kB 7.7MB/s eta 0:00:01 33% |██████████▉ | 296kB 7.6MB/s eta 0:00:01 83% |██████████████████████████▉ | 737kB 9.5MB/s eta 0:00:01
[?25hRequirement already satisfied: cycler==0.10.0 in /opt/conda/lib/python3.6/site-packages/cycler-0.10.0-py3.6.egg (from -r requirements.txt (line 3)) (0.10.0)
Collecting numpy==1.13.3 (from -r requirements.txt (line 4))
[?25l Downloading https://files.pythonhosted.org/packages/57/a7/e3e6bd9d595125e1abbe162e323fd2d06f6f6683185294b79cd2cdb190d5/numpy-1.13.3-cp36-cp36m-manylinux1_x86_64.whl (17.0MB)
[K 100% |████████████████████████████████| 17.0MB 2.3MB/s eta 0:00:01 11% |███▊ | 2.0MB 15.4MB/s eta 0:00:01 39% |████████████▌ | 6.7MB 24.5MB/s eta 0:00:01 46% |███████████████ | 7.9MB 26.0MB/s eta 0:00:01
[?25hCollecting pandas==0.21.1 (from -r requirements.txt (line 5))
[?25l Downloading https://files.pythonhosted.org/packages/3a/e1/6c514df670b887c77838ab856f57783c07e8760f2e3d5939203a39735e0e/pandas-0.21.1-cp36-cp36m-manylinux1_x86_64.whl (26.2MB)
[K 100% |████████████████████████████████| 26.2MB 1.9MB/s eta 0:00:01 0% |▏ | 102kB 20.2MB/s eta 0:00:02 18% |██████ | 4.9MB 25.2MB/s eta 0:00:01 28% |█████████ | 7.4MB 24.9MB/s eta 0:00:01 46% |██████████████▉ | 12.2MB 25.8MB/s eta 0:00:01 50% |████████████████▎ | 13.4MB 24.3MB/s eta 0:00:01 59% |███████████████████▏ | 15.7MB 23.5MB/s eta 0:00:01 68% |██████████████████████ | 18.1MB 23.9MB/s eta 0:00:01 86% |███████████████████████████▊ | 22.7MB 23.5MB/s eta 0:00:01 98% |███████████████████████████████▌| 25.8MB 24.6MB/s eta 0:00:01
[?25hCollecting plotly==2.2.3 (from -r requirements.txt (line 6))
[?25l Downloading https://files.pythonhosted.org/packages/99/a6/8214b6564bf4ace9bec8a26e7f89832792be582c042c47c912d3201328a0/plotly-2.2.3.tar.gz (1.1MB)
[K 100% |████████████████████████████████| 1.1MB 13.6MB/s ta 0:00:01 74% |████████████████████████ | 808kB 23.6MB/s eta 0:00:01
[?25hRequirement already satisfied: pyparsing==2.2.0 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 7)) (2.2.0)
Requirement already satisfied: python-dateutil==2.6.1 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 8)) (2.6.1)
Requirement already satisfied: pytz==2017.3 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 9)) (2017.3)
Requirement already satisfied: requests==2.18.4 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 10)) (2.18.4)
Collecting scipy==1.0.0 (from -r requirements.txt (line 11))
[?25l Downloading https://files.pythonhosted.org/packages/d8/5e/caa01ba7be11600b6a9d39265440d7b3be3d69206da887c42bef049521f2/scipy-1.0.0-cp36-cp36m-manylinux1_x86_64.whl (50.0MB)
[K 100% |████████████████████████████████| 50.0MB 817kB/s eta 0:00:01 9% |███ | 4.7MB 21.8MB/s eta 0:00:03 11% |███▊ | 5.8MB 21.7MB/s eta 0:00:03 17% |█████▊ | 9.0MB 22.9MB/s eta 0:00:02 19% |██████▍ | 9.9MB 19.5MB/s eta 0:00:03 30% |█████████▊ | 15.2MB 21.7MB/s eta 0:00:02 32% |██████████▎ | 16.1MB 18.1MB/s eta 0:00:02 36% |███████████▋ | 18.1MB 22.9MB/s eta 0:00:02 38% |████████████▎ | 19.3MB 22.6MB/s eta 0:00:02 42% |█████████████▊ | 21.4MB 22.4MB/s eta 0:00:02 45% |██████████████▍ | 22.6MB 21.2MB/s eta 0:00:02 51% |████████████████▍ | 25.7MB 22.1MB/s eta 0:00:02 53% |█████████████████▏ | 26.8MB 21.2MB/s eta 0:00:02 54% |█████████████████▌ | 27.4MB 3.8MB/s eta 0:00:06 58% |██████████████████▉ | 29.4MB 21.8MB/s eta 0:00:01 61% |███████████████████▌ | 30.5MB 22.2MB/s eta 0:00:01 63% |████████████████████▎ | 31.7MB 27.0MB/s eta 0:00:01 65% |█████████████████████ | 32.8MB 20.1MB/s eta 0:00:01 67% |█████████████████████▋ | 33.7MB 22.0MB/s eta 0:00:01 69% |██████████████████████▎ | 34.8MB 23.8MB/s eta 0:00:01 73% |███████████████████████▋ | 36.9MB 18.6MB/s eta 0:00:01 76% |████████████████████████▍ | 38.1MB 24.1MB/s eta 0:00:01 84% |███████████████████████████ | 42.3MB 24.5MB/s eta 0:00:01 86% |███████████████████████████▊ | 43.3MB 19.3MB/s eta 0:00:01 90% |█████████████████████████████ | 45.3MB 26.8MB/s eta 0:00:01 96% |███████████████████████████████ | 48.3MB 24.5MB/s eta 0:00:01 98% |███████████████████████████████▋| 49.3MB 20.2MB/s eta 0:00:01
[?25hRequirement already satisfied: scikit-learn==0.19.1 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 12)) (0.19.1)
Requirement already satisfied: six==1.11.0 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 13)) (1.11.0)
Collecting tqdm==4.19.5 (from -r requirements.txt (line 14))
[?25l Downloading https://files.pythonhosted.org/packages/71/3c/341b4fa23cb3abc335207dba057c790f3bb329f6757e1fcd5d347bcf8308/tqdm-4.19.5-py2.py3-none-any.whl (51kB)
[K 100% |████████████████████████████████| 61kB 11.0MB/s ta 0:00:01
[?25hCollecting osqp (from cvxpy==1.0.3->-r requirements.txt (line 2))
[?25l Downloading https://files.pythonhosted.org/packages/76/82/b0693a167e4b9b5e94f4988f6df3d7866e9e41a316a58f1053dd21370f1a/osqp-0.6.2.post0-cp36-cp36m-manylinux1_x86_64.whl (211kB)
[K 100% |████████████████████████████████| 215kB 14.7MB/s ta 0:00:01
[?25hCollecting ecos>=2 (from cvxpy==1.0.3->-r requirements.txt (line 2))
[?25l Downloading https://files.pythonhosted.org/packages/55/ed/d131ff51f3a8f73420eb1191345eb49f269f23cadef515172e356018cde3/ecos-2.0.7.post1-cp36-cp36m-manylinux1_x86_64.whl (147kB)
[K 100% |████████████████████████████████| 153kB 15.3MB/s ta 0:00:01
[?25hCollecting scs>=1.1.3 (from cvxpy==1.0.3->-r requirements.txt (line 2))
[?25l Downloading https://files.pythonhosted.org/packages/12/bd/1ab6a3b3f2791741e6e7c142f932ea1808277e92167e322dec43271b2225/scs-2.1.3.tar.gz (147kB)
[K 100% |████████████████████████████████| 153kB 16.0MB/s ta 0:00:01
[?25hCollecting multiprocess (from cvxpy==1.0.3->-r requirements.txt (line 2))
[?25l Downloading https://files.pythonhosted.org/packages/8f/dc/426a82723c460cfab653ebb717590103d6e38cebc9d1f599b0898915ac1d/multiprocess-0.70.11.1-py36-none-any.whl (101kB)
[K 100% |████████████████████████████████| 102kB 10.3MB/s a 0:00:01
[?25hRequirement already satisfied: fastcache in /opt/conda/lib/python3.6/site-packages (from cvxpy==1.0.3->-r requirements.txt (line 2)) (1.0.2)
Requirement already satisfied: toolz in /opt/conda/lib/python3.6/site-packages (from cvxpy==1.0.3->-r requirements.txt (line 2)) (0.8.2)
Requirement already satisfied: decorator>=4.0.6 in /opt/conda/lib/python3.6/site-packages (from plotly==2.2.3->-r requirements.txt (line 6)) (4.0.11)
Requirement already satisfied: nbformat>=4.2 in /opt/conda/lib/python3.6/site-packages (from plotly==2.2.3->-r requirements.txt (line 6)) (4.4.0)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /opt/conda/lib/python3.6/site-packages (from requests==2.18.4->-r requirements.txt (line 10)) (3.0.4)
Requirement already satisfied: idna<2.7,>=2.5 in /opt/conda/lib/python3.6/site-packages (from requests==2.18.4->-r requirements.txt (line 10)) (2.6)
Requirement already satisfied: urllib3<1.23,>=1.21.1 in /opt/conda/lib/python3.6/site-packages (from requests==2.18.4->-r requirements.txt (line 10)) (1.22)
Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.6/site-packages (from requests==2.18.4->-r requirements.txt (line 10)) (2019.11.28)
###Markdown
Load Packages
###Code
import pandas as pd
import numpy as np
import helper
import project_helper
import project_tests
###Output
_____no_output_____
###Markdown
Market Data Load DataThe data we use for most of the projects is end of day data. This contains data for many stocks, but we'll be looking at stocks in the S&P 500. We also made things a little easier to run by narrowing down the time period instead of using all of the data.
###Code
df = pd.read_csv('../../data/project_1/eod-quotemedia.csv', parse_dates=['date'], index_col=False)
close = df.reset_index().pivot(index='date', columns='ticker', values='adj_close')
print('Loaded Data')
###Output
Loaded Data
###Markdown
View DataRun the cell below to see what the data looks like for `close`.
###Code
project_helper.print_dataframe(close)
###Output
_____no_output_____
###Markdown
Stock ExampleLet's see what a single stock looks like from the closing prices. For this example and future display examples in this project, we'll use Apple's stock (AAPL). If we tried to graph all the stocks, it would be too much information.
###Code
apple_ticker = 'AAPL'
project_helper.plot_stock(close[apple_ticker], '{} Stock'.format(apple_ticker))
###Output
_____no_output_____
###Markdown
Resample Adjusted PricesThe trading signal you'll develop in this project does not need to be based on daily prices, for instance, you can use month-end prices to perform trading once a month. To do this, you must first resample the daily adjusted closing prices into monthly buckets, and select the last observation of each month.Implement the `resample_prices` to resample `close_prices` at the sampling frequency of `freq`.
###Code
def resample_prices(close_prices, freq='M'):
"""
Resample close prices for each ticker at specified frequency.
Parameters
----------
close_prices : DataFrame
Close prices for each ticker and date
freq : str
What frequency to sample at
For valid freq choices, see http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases
Returns
-------
prices_resampled : DataFrame
Resampled prices for each ticker and date
"""
# TODO: Implement Function
return close_prices.resample(freq).last()
project_tests.test_resample_prices(resample_prices)
###Output
Tests Passed
###Markdown
View DataLet's apply this function to `close` and view the results.
###Code
monthly_close = resample_prices(close)
project_helper.plot_resampled_prices(
monthly_close.loc[:, apple_ticker],
close.loc[:, apple_ticker],
'{} Stock - Close Vs Monthly Close'.format(apple_ticker))
###Output
_____no_output_____
###Markdown
Compute Log ReturnsCompute log returns ($R_t$) from prices ($P_t$) as your primary momentum indicator:$$R_t = log_e(P_t) - log_e(P_{t-1})$$Implement the `compute_log_returns` function below, such that it accepts a dataframe (like one returned by `resample_prices`), and produces a similar dataframe of log returns. Use Numpy's [log function](https://docs.scipy.org/doc/numpy/reference/generated/numpy.log.html) to help you calculate the log returns.
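For example, if a price moves from 100 to 105 over one period, the log return is $R_t = log_e(105) - log_e(100) \approx 0.0488$.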
###Code
def compute_log_returns(prices):
"""
Compute log returns for each ticker.
Parameters
----------
prices : DataFrame
Prices for each ticker and date
Returns
-------
log_returns : DataFrame
Log returns for each ticker and date
"""
# TODO: Implement Function
return np.log(prices) - np.log(prices.shift(1))
project_tests.test_compute_log_returns(compute_log_returns)
###Output
Tests Passed
###Markdown
View DataUsing the same data returned from `resample_prices`, we'll generate the log returns.
###Code
monthly_close_returns = compute_log_returns(monthly_close)
project_helper.plot_returns(
monthly_close_returns.loc[:, apple_ticker],
'Log Returns of {} Stock (Monthly)'.format(apple_ticker))
###Output
_____no_output_____
###Markdown
Shift ReturnsImplement the `shift_returns` function to shift the log returns to the previous or future returns in the time series. For example, the parameter `shift_n` is 2 and `returns` is the following:``` Returns A B C D2013-07-08 0.015 0.082 0.096 0.020 ...2013-07-09 0.037 0.095 0.027 0.063 ...2013-07-10 0.094 0.001 0.093 0.019 ...2013-07-11 0.092 0.057 0.069 0.087 ...... ... ... ... ...```the output of the `shift_returns` function would be:``` Shift Returns A B C D2013-07-08 NaN NaN NaN NaN ...2013-07-09 NaN NaN NaN NaN ...2013-07-10 0.015 0.082 0.096 0.020 ...2013-07-11 0.037 0.095 0.027 0.063 ...... ... ... ... ...```Using the same `returns` data as above, the `shift_returns` function should generate the following with `shift_n` as -2:``` Shift Returns A B C D2013-07-08 0.094 0.001 0.093 0.019 ...2013-07-09 0.092 0.057 0.069 0.087 ...... ... ... ... ... ...... ... ... ... ... ...... NaN NaN NaN NaN ...... NaN NaN NaN NaN ...```_Note: The "..." represents data points we're not showing._
###Code
def shift_returns(returns, shift_n):
"""
Generate shifted returns
Parameters
----------
returns : DataFrame
Returns for each ticker and date
shift_n : int
Number of periods to move, can be positive or negative
Returns
-------
shifted_returns : DataFrame
Shifted returns for each ticker and date
"""
# TODO: Implement Function
return returns.shift(shift_n)
project_tests.test_shift_returns(shift_returns)
###Output
Tests Passed
###Markdown
View DataLet's get the previous month's and next month's returns.
###Code
prev_returns = shift_returns(monthly_close_returns, 1)
lookahead_returns = shift_returns(monthly_close_returns, -1)
project_helper.plot_shifted_returns(
prev_returns.loc[:, apple_ticker],
monthly_close_returns.loc[:, apple_ticker],
'Previous Returns of {} Stock'.format(apple_ticker))
project_helper.plot_shifted_returns(
lookahead_returns.loc[:, apple_ticker],
monthly_close_returns.loc[:, apple_ticker],
'Lookahead Returns of {} Stock'.format(apple_ticker))
###Output
_____no_output_____
###Markdown
Generate Trading SignalA trading signal is a sequence of trading actions, or results that can be used to take trading actions. A common form is to produce a "long" and "short" portfolio of stocks on each date (e.g. end of each month, or whatever frequency you desire to trade at). This signal can be interpreted as rebalancing your portfolio on each of those dates, entering long ("buy") and short ("sell") positions as indicated.Here's a strategy that we will try:> For each month-end observation period, rank the stocks by _previous_ returns, from the highest to the lowest. Select the top performing stocks for the long portfolio, and the bottom performing stocks for the short portfolio.Implement the `get_top_n` function to get the top performing stock for each month. Get the top performing stocks from `prev_returns` by assigning them a value of 1. For all other stocks, give them a value of 0. For example, using the following `prev_returns`:``` Previous Returns A B C D E F G2013-07-08 0.015 0.082 0.096 0.020 0.075 0.043 0.0742013-07-09 0.037 0.095 0.027 0.063 0.024 0.086 0.025... ... ... ... ... ... ... ...```The function `get_top_n` with `top_n` set to 3 should return the following:``` Previous Returns A B C D E F G2013-07-08 0 1 1 0 1 0 02013-07-09 0 1 0 1 0 1 0... ... ... ... ... ... ... ...```*Note: You may have to use Pandas' [`DataFrame.iterrows`](https://pandas.pydata.org/pandas-docs/version/0.21/generated/pandas.DataFrame.iterrows.html) with [`Series.nlargest`](https://pandas.pydata.org/pandas-docs/version/0.21/generated/pandas.Series.nlargest.html) in order to implement the function. This is one of those cases where creating a vectorized solution is too difficult.*
###Code
def get_top_n(prev_returns, top_n):
"""
Select the top performing stocks
Parameters
----------
prev_returns : DataFrame
Previous shifted returns for each ticker and date
top_n : int
The number of top performing stocks to get
Returns
-------
top_stocks : DataFrame
Top stocks for each ticker and date marked with a 1
"""
# TODO: Implement Function
    signals = prev_returns.copy()
    for index, row in signals.iterrows():
        # Tickers with the top_n largest previous returns on this date
        top_tickers = row.nlargest(top_n).index
        if len(top_tickers) > 0:
            for ticker in signals.columns:
                signals.loc[index, ticker] = 1 if ticker in top_tickers else 0
signals.fillna(0, inplace = True)
for col in signals.columns:
signals[col] = signals[col].astype('Int64')
return signals
project_tests.test_get_top_n(get_top_n)
###Output
Tests Passed
###Markdown
View DataWe want to get the best performing and worst performing stocks. To get the best performing stocks, we'll use the `get_top_n` function. To get the worst performing stocks, we'll also use the `get_top_n` function. However, we pass in `-1*prev_returns` instead of just `prev_returns`. Multiplying by negative one will flip all the positive returns to negative and negative returns to positive. Thus, it will return the worst performing stocks.
###Code
top_bottom_n = 50
df_long = get_top_n(prev_returns, top_bottom_n)
df_short = get_top_n(-1*prev_returns, top_bottom_n)
project_helper.print_top(df_long, 'Longed Stocks')
project_helper.print_top(df_short, 'Shorted Stocks')
###Output
10 Most Longed Stocks:
INCY, AMD, AVGO, NFX, SWKS, NFLX, ILMN, UAL, NVDA, MU
10 Most Shorted Stocks:
RRC, FCX, CHK, MRO, GPS, WYNN, DVN, FTI, SPLS, TRIP
###Markdown
Projected ReturnsIt's now time to check if your trading signal has the potential to become profitable!We'll start by computing the net returns this portfolio would return. For simplicity, we'll assume every stock gets an equal dollar amount of investment. This makes it easier to compute a portfolio's returns as the simple arithmetic average of the individual stock returns.Implement the `portfolio_returns` function to compute the expected portfolio returns. Using `df_long` to indicate which stocks to long and `df_short` to indicate which stocks to short, calculate the returns using `lookahead_returns`. To help with calculation, we've provided you with `n_stocks` as the number of stocks we're investing in a single period.
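Written out, the equal-weight net return for a single date is $r_p = \frac{1}{n}\big(\sum_{i \in \text{long}} r_i - \sum_{j \in \text{short}} r_j\big)$, where $n$ is `n_stocks` and $r_i$ are the lookahead returns.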
###Code
def portfolio_returns(df_long, df_short, lookahead_returns, n_stocks):
"""
Compute expected returns for the portfolio, assuming equal investment in each long/short stock.
Parameters
----------
df_long : DataFrame
Top stocks for each ticker and date marked with a 1
df_short : DataFrame
Bottom stocks for each ticker and date marked with a 1
lookahead_returns : DataFrame
Lookahead returns for each ticker and date
n_stocks: int
        The number of stocks chosen for each month
Returns
-------
portfolio_returns : DataFrame
Expected portfolio returns for each ticker and date
"""
# TODO: Implement Function
lookahead_returns = lookahead_returns.copy()
df_long_ = df_long * lookahead_returns/n_stocks
df_short_ = df_short * -lookahead_returns/n_stocks
return df_long_ + df_short_
project_tests.test_portfolio_returns(portfolio_returns)
###Output
Tests Passed
###Markdown
View DataTime to see how the portfolio did.
###Code
expected_portfolio_returns = portfolio_returns(df_long, df_short, lookahead_returns, 2*top_bottom_n)
project_helper.plot_returns(expected_portfolio_returns.T.sum(), 'Portfolio Returns')
###Output
_____no_output_____
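###Markdown
A small optional sketch (using only the `expected_portfolio_returns` defined above): since these are monthly log returns, their cumulative sum over dates gives the portfolio's cumulative log return.
###Code
# Hedged sketch: cumulative log return of the equal-weighted long/short portfolio
cumulative_log_return = expected_portfolio_returns.T.sum().cumsum()
cumulative_log_return.plot(title='Cumulative portfolio log return');
###Output
_____no_output_____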
###Markdown
Statistical Tests Annualized Rate of Return
###Code
expected_portfolio_returns_by_date = expected_portfolio_returns.T.sum().dropna()
portfolio_ret_mean = expected_portfolio_returns_by_date.mean()
portfolio_ret_ste = expected_portfolio_returns_by_date.sem()
portfolio_ret_annual_rate = (np.exp(portfolio_ret_mean * 12) - 1) * 100
print("""
Mean: {:.6f}
Standard Error: {:.6f}
Annualized Rate of Return: {:.2f}%
""".format(portfolio_ret_mean, portfolio_ret_ste, portfolio_ret_annual_rate))
###Output
Mean: 0.003253
Standard Error: 0.002203
Annualized Rate of Return: 3.98%
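###Markdown
As a quick arithmetic check of the annualization used above: with a mean monthly log return of $0.003253$, the annualized rate is $(e^{12 \times 0.003253} - 1) \times 100\% \approx 3.98\%$, which matches the printed value.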
###Markdown
The annualized rate of return allows you to compare the rate of return from this strategy to other quoted rates of return, which are usually quoted on an annual basis. T-TestOur null hypothesis ($H_0$) is that the actual mean return from the signal is zero. We'll perform a one-sample, one-sided t-test on the observed mean return, to see if we can reject $H_0$.We'll need to first compute the t-statistic, and then find its corresponding p-value. The p-value will indicate the probability of observing a t-statistic equally or more extreme than the one we observed if the null hypothesis were true. A small p-value means that the chance of observing the t-statistic we observed under the null hypothesis is small, and thus casts doubt on the null hypothesis. It's good practice to set a desired level of significance or alpha ($\alpha$) _before_ computing the p-value, and then reject the null hypothesis if $p < \alpha$.For this project, we'll use $\alpha = 0.05$, since it's a common value to use.Implement the `analyze_alpha` function to perform a t-test on the sample of portfolio returns. We've imported the `scipy.stats` module for you to perform the t-test.Note: [`scipy.stats.ttest_1samp`](https://docs.scipy.org/doc/scipy-1.0.0/reference/generated/scipy.stats.ttest_1samp.html) performs a two-sided test, so divide the p-value by 2 to get 1-sided p-value
###Code
from scipy import stats
def analyze_alpha(expected_portfolio_returns_by_date):
"""
Perform a t-test with the null hypothesis being that the expected mean return is zero.
Parameters
----------
expected_portfolio_returns_by_date : Pandas Series
Expected portfolio returns for each date
Returns
-------
t_value
T-statistic from t-test
p_value
Corresponding p-value
"""
# TODO: Implement Function
hypothesis = 0.00
t_test, p_value = stats.ttest_1samp(expected_portfolio_returns_by_date,
hypothesis)
return t_test, p_value/2
project_tests.test_analyze_alpha(analyze_alpha)
###Output
Tests Passed
###Markdown
View DataLet's see what values we get with our portfolio. After you run this, make sure to answer the question below.
###Code
t_value, p_value = analyze_alpha(expected_portfolio_returns_by_date)
print("""
Alpha analysis:
t-value: {:.3f}
p-value: {:.6f}
""".format(t_value, p_value))
###Output
Alpha analysis:
t-value: 1.476
p-value: 0.073359
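###Markdown
Note that with the chosen significance level $\alpha = 0.05$, this p-value of about 0.073 is not small enough to reject the null hypothesis that the mean return is zero.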
|
Regression/Linear Models/Lars_StandardScaler_QuantileTransformer.ipynb
|
###Markdown
**Least Angle Regression with StandardScaler and Quantile Transformer** This code template performs regression analysis with the LARS regressor, using StandardScaler rescaling and a QuantileTransformer feature transformation in a pipeline. **Required Packages**
###Code
import warnings
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import QuantileTransformer,StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
from sklearn.linear_model import Lars
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
**Initialization**Filepath of CSV file
###Code
file_path= ""
###Output
_____no_output_____
###Markdown
List of features which are required for model training .
###Code
features = []
###Output
_____no_output_____
###Markdown
Target feature for prediction.
###Code
target = ''
###Output
_____no_output_____
###Markdown
**Dataset Overview**Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools. We will use the pandas library to read the CSV file from its storage path, and the head function to display the first few rows.
###Code
df=pd.read_csv(file_path)
df.head()
###Output
_____no_output_____
###Markdown
**Dataset Information**Print a concise summary of a DataFrame. We will use the info() method to print information about the DataFrame, including the index dtype and columns, non-null values and memory usage.
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1338 entries, 0 to 1337
Data columns (total 7 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 age 1338 non-null int64
1 sex 1338 non-null object
2 bmi 1338 non-null float64
3 children 1338 non-null int64
4 smoker 1338 non-null object
5 region 1338 non-null object
6 charges 1338 non-null float64
dtypes: float64(2), int64(2), object(3)
memory usage: 73.3+ KB
###Markdown
**Dataset Describe**Generate descriptive statistics. Descriptive statistics include those that summarize the central tendency, dispersion and shape of a dataset's distribution, excluding NaN values. We will analyze both numeric and object series, as well as DataFrame column sets of mixed data types.
###Code
df.describe()
###Output
_____no_output_____
###Markdown
**Feature Selection**Feature selection is the process of reducing the number of input variables when developing a predictive model, both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model. We will assign all the required input features to X and the target/outcome to Y.
###Code
X=df[features]
Y=df[target]
###Output
_____no_output_____
###Markdown
**Data Preprocessing**Since we do not know the number of null values in each column, we print the per-column null counts arranged in descending order.
###Code
print(df.isnull().sum().sort_values(ascending=False))
###Output
age 0
sex 0
bmi 0
children 0
smoker 0
region 0
charges 0
dtype: int64
###Markdown
Since the majority of the machine learning models in the sklearn library don't handle string category data and null values, we have to explicitly remove or replace null values. The snippet below defines functions that remove/impute null values if any exist, and that convert the string categories in the dataset by encoding them (via one-hot encoding).
###Code
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
###Output
_____no_output_____
###Markdown
Calling preprocessing functions on the feature and target set.
###Code
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
###Output
_____no_output_____
###Markdown
**Correlation Map**In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
###Code
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
###Output
_____no_output_____
###Markdown
**Data Splitting**The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
###Code
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
###Output
_____no_output_____
###Markdown
**Quantile Transformer****sklearn.preprocessing.QuantileTransformer()**This method transforms the features to follow a uniform or a normal distribution. Therefore, for a given feature, this transformation tends to spread out the most frequent values. It also reduces the impact of (marginal) outliers: this is therefore a robust preprocessing scheme. **Model**Least-angle regression (LARS) is a regression algorithm for high-dimensional data, developed by Bradley Efron, Trevor Hastie, Iain Johnstone and Robert Tibshirani. LARS is similar to forward stepwise regression. At each step, it finds the feature most correlated with the target. When there are multiple features having equal correlation, instead of continuing along the same feature, it proceeds in a direction equiangular between the features. **Model Tuning Parameters**> jitter -> Upper bound on a uniform noise parameter to be added to the y values, to satisfy the model's assumption of one-at-a-time computations. Might help with stability.> eps -> The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the tol parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.> n_nonzero_coefs -> Target number of non-zero coefficients. Use np.inf for no limit.> precompute -> Whether to use a precomputed Gram matrix to speed up calculations.
###Code
model = make_pipeline(StandardScaler(),QuantileTransformer(),Lars(random_state=123))
model.fit(x_train,y_train)
###Output
_____no_output_____
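###Markdown
The tuning parameters listed above are not set in this template (the defaults are used). As a minimal sketch only — the parameter values here are illustrative assumptions, not recommendations — they could be passed to `Lars` inside the same pipeline:
###Code
# Hedged sketch: same pipeline, but with some of the Lars parameters listed above set explicitly
tuned_model = make_pipeline(StandardScaler(),
                            QuantileTransformer(),
                            Lars(n_nonzero_coefs=5, eps=1e-6, precompute='auto', random_state=123))
# tuned_model.fit(x_train, y_train)  # would be fit and scored the same way as `model`
###Output
_____no_output_____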
###Markdown
**Model Accuracy**We will use the trained model to make predictions on the test set, then use the predicted values to measure the accuracy of our model. score: The score function returns the coefficient of determination R2 of the prediction.
###Code
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
###Output
Accuracy score 8.47 %
###Markdown
> r2_score: The r2_score function computes the coefficient of determination, i.e. the proportion of the variance in the target that is explained by the model.> mae: The mean absolute error function calculates the total error as the average absolute distance between the real data and the predicted data.> mse: The mean squared error function squares the errors, penalizing the model more heavily for large errors.
###Code
y_pred=model.predict(x_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
###Output
R2 Score: 8.47 %
Mean Absolute Error 9256.95
Mean Squared Error 139947858.16
###Markdown
**Prediction Plot**> We plot the first 20 actual target values from the test set (green) together with the corresponding model predictions (red) against the record number.
###Code
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(x_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
###Output
_____no_output_____
|
Data Science and Machine Learning/Thorough Python Data Science Topics/Python Classification and Feature Extraction for Timbl.ipynb
|
###Markdown
Python Classification Feature Extraction for Timbl **(C) 2017 by [Damir Cavar](http://cavar.me/damir/)** **License:** [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/) ([CA BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)) This is a tutorial related to the discussion of feature extraction for classification and clustering in the textbook [Machine Learning: The Art and Science of Algorithms that Make Sense of Data](https://www.cs.bris.ac.uk/~flach/mlbook/) by [Peter Flach](https://www.cs.bris.ac.uk/~flach/). This tutorial was developed as part of my course material for the course Machine Learning for Computational Linguistics in the [Computational Linguistics Program](http://cl.indiana.edu/) of the [Department of Linguistics](http://www.indiana.edu/~lingdept/) at [Indiana University](https://www.indiana.edu/). Feature Extraction
###Code
from nltk import word_tokenize
text1 = """The city will pay for it by taxes on properties selling for more
than $5 million.
The real estate transfer tax, as it's called, was increased last year for both
residential and commercial properties. The hike was approved by voters in
November.
Powered by SmartAsset.com
SmartAsset.com
The tax starts at 2.25% and goes up to 3% for properties worth at least $25
million. It's expected to bring in an average of $45 million a year, according
to the city controller. But the money goes into the city's general fund and is
also expected to be used for affordable housing and senior support services.
The free tuition plan is expected to impact about 28,000 residents who currently
take classes at City College of San Francisco and encourage more people to sign
up. Chancellor Susan Lamb said the school has the capacity for 85,000 students.
It's difficult to predict how many more people will enroll, and how much the
free-tuition plan will end up costing. San Francisco has committed $5.4 million
a year for the next two years, and then will have to reassess. That includes a
one-time $500,000 stipend to City College to help handle an influx of students.
Related: Why New York's 'tuition-free colleges' still cost $14K
San Francisco's tuition-free plan is more progressive than others round the
country. First, everyone is eligible as long as they have resided in San Francisco
for at least one year.
It covers the $46 cost per credit no matter how rich you are, "even to the
children of the founders of Facebook," said city lawmaker Jane Kim.
You don't have to be enrolled full-time or be a recent high school graduate.
This means that people who are seeking job retraining or want to take a few
foreign language courses won't have to pay for the cost of the credits.
Related: Rhode Island governor wants to make college free, too
Students will still be on the hook for the mandatory $17 per semester fee at
City College and the cost of books, so college won't necessarily be free.
What also sets apart San Francisco's plan is that it offers the poorest students
additional money to help pay for these other expenses. An individual has to earn
less than $17,000 a year to qualify for the aid, or less than $37,000 for a
family of four. Eligible full-time students will get $500 a year and part-time
students will get $200 a year.
"We have the fastest growing income gap than any city across the nation," Kim said
on Monday at a press conference.
"Making city college free is going to provide greater opportunities for more San
Franciscans to enter into the middle class and more San Franciscans to stay in the
middle class if they currently are," she said.
The push for free tuition is gaining support across the country. Tennessee started
offering free community college to residents in 2015, and will expand the program
this year to include adults returning to school. Lawmakers in New York are
discussing a program that would make four-year and two-year public colleges
tuition-free for residents who earn less than $125,000 a year. And Rhode Island's
governor is pushing for two free years at public colleges for recent high school
graduates."""
tokens1 = word_tokenize(text1.lower())
from collections import Counter
fp = Counter(tokens1)
print(fp)
model = [ (i, fp[i], len(i)) for i in fp ]
print(model)
for x in model:
print( "\t".join( (str(x[1]), str(x[2]), x[0]) ) )
from nltk.corpus import stopwords
stopw = stopwords.words("english")
stopw.append("us")
def isStopword(word):
if word in stopw:
return(1)
return(0)
for x in model:
print( "\t".join( (str(x[1]), str(x[2]), x[0], str(isStopword(x[0]))) ) )
from nltk import pos_tag
tokens1 = word_tokenize(text1)  # keep the original casing for POS tagging
posTokens = pos_tag(tokens1)
tags = list( set( [ x[1][0] for x in posTokens ] ) )
print(tags)
from collections import defaultdict
leftOfToken = defaultdict(Counter)
rightOfToken = defaultdict(Counter)
for i in range(len(posTokens)):
tag = posTokens[i][1][0]
token = (posTokens[i][0], tag)
if i > 0:
ltag = posTokens[i - 1][1][0]
leftOfToken[token][ltag] += 1
if i < len(posTokens) - 1:
rtag = posTokens[i + 1][1][0]
rightOfToken[token][rtag] += 1
for token in leftOfToken.keys():
leftVector = []
rightVector = []
for tag in tags:
leftVector.append(leftOfToken[token][tag])
rightVector.append(rightOfToken[token][tag])
print(" ".join([ str(x) for x in leftVector ]), " ".join([ str(x) for x in rightVector ]), token[0], token[1])
text2 = """A flight out of Austin, Texas, was delayed after
a pilot behaved in a way that caused passengers to believe
she was mentally unstable, a United Airlines spokesman said Sunday.
The pilot, whom CNN is not naming, boarded the plane in street
clothes and began speaking to passengers over the intercom,
spokesman Charlie Hobart said.
Passengers on Saturday's San Francisco-bound flight took to
social media to express concerns after the pilot spoke to
them about her divorce and the presidential election, among
other issues. """
tokens2 = word_tokenize(text2)
posTokens2 = pos_tag(tokens2)
leftOfToken2 = defaultdict(Counter)
rightOfToken2 = defaultdict(Counter)
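# Repeat the left/right POS-context counting for the tokens of the second text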
for i in range(len(posTokens2)):
token = (posTokens2[i][0], posTokens2[i][1][0])
if i > 0:
ltag = posTokens2[i - 1][1][0]
leftOfToken2[token][ltag] += 1
if i < len(posTokens2) - 1:
rtag = posTokens2[i + 1][1][0]
rightOfToken2[token][rtag] += 1
for token in leftOfToken2.keys():
leftVector = []
rightVector = []
for tag in tags:
leftVector.append(leftOfToken2[token][tag])
rightVector.append(rightOfToken2[token][tag])
print(" ".join([ str(x) for x in leftVector ]), " ".join([ str(x) for x in rightVector ]), token[0], token[1])
###Output
0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 caused V
0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 unstable J
0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 flight N
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 Hobart N
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 clothes N
0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 presidential J
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 spoke V
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 behaved V
0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 Texas N
0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 believe V
0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 passengers N
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 Airlines N
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 her P
0 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 2 0 0 0 0 pilot N
0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 in I
0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 plane N
0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 CNN N
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 Charlie N
0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 she P
0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 said V
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 took V
0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 about I
0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 way N
0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 San N
0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 whom W
0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 divorce N
0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 United N
0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 social J
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 street N
0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 Sunday N
0 0 0 0 0 0 0 5 0 2 0 0 0 0 0 0 0 0 1 0 0 0 1 0 2 0 0 0 0 2 0 0 1 0 , ,
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 Francisco-bound N
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 over I
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 on I
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 out I
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 Austin N
0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 media N
0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 them P
0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 mentally R
0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 not R
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 other J
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 that W
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 's P
0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 and C
0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 The D
0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 concerns N
0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 delayed V
0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 naming J
1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 2 0 0 0 0 0 0 0 0 3 0 1 0 0 0 0 0 0 0 the D
0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 spokesman N
0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 Passengers N
0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 express V
0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 election N
0 0 0 0 0 0 0 2 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 . .
0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 among I
0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 intercom N
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 began V
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 Saturday N
0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 after I
0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 3 0 0 0 0 0 0 0 0 0 a D
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 of I
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 is V
0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 speaking V
0 0 0 0 0 0 0 2 0 0 0 0 3 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 2 0 0 0 0 to T
0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 was V
0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 issues N
0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 boarded V
|
nltk_pos_tag.ipynb
|
###Markdown
https://www.guru99.com/pos-tagging-chunking-nltk.htm
###Code
import nltk
nltk.download('averaged_perceptron_tagger')
nltk.download('punkt')
from nltk import pos_tag
from nltk import RegexpParser
text ="learn php from guru99 and make study easy".split()
print("After Split:",text)
tokens_tag = pos_tag(text)
print("After Token:",tokens_tag)
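# Chunk grammar: group sequences of nouns, past-tense verbs, adjectives and an
# optional coordinating conjunction into a "mychunk" phrase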
patterns= """mychunk:{<NN.?>*<VBD.?>*<JJ.?>*<CC>?}"""
chunker = RegexpParser(patterns)
print("After Regex:",chunker)
output = chunker.parse(tokens_tag)
print("After Chunking",output)
# Tokenize text (word_tokenize)
# apply pos_tag to above step that is nltk.pos_tag(tokenize_text)
from nltk.tokenize import word_tokenize
text = "God is Great! I won a lottery."
text ="learn php from guru99 and make study easy"
print("After Split:",text)
print(word_tokenize(text))
tokens_tag = pos_tag(word_tokenize(text))
print("After Token:",tokens_tag)
patterns= """mychunk:{<NN.?>*<VBD.?>*<JJ.?>*<CC>?}"""
chunker = RegexpParser(patterns)
print("After Regex:",chunker)
output = chunker.parse(tokens_tag)
print("After Chunking",output)
import nltk
text = "learn php from guru99"
tokens = nltk.word_tokenize(text)
print(tokens)
tag = nltk.pos_tag(tokens)
print(tag)
grammar = "NP: {<DT>?<JJ>*<NN>}"
cp =nltk.RegexpParser(grammar)
result = cp.parse(tag)
print(result)
# result.draw() # It will draw the pattern graphically which can be seen in Noun Phrase chunking #
from collections import Counter
import nltk
text = " Guru99 is one of the best sites to learn WEB, SAP, Ethical Hacking and much more online."
lower_case = text.lower()
tokens = nltk.word_tokenize(lower_case)
tags = nltk.pos_tag(tokens)
print(tags)
counts = Counter( tag for word, tag in tags)
print(counts)
import nltk
a = "Guru99 is the site where you can find the best tutorials for Software Testing Tutorial, SAP Course for Beginners. Java Tutorial for Beginners and much more. Please visit the site guru99.com and much more."
words = nltk.tokenize.word_tokenize(a)
fd = nltk.FreqDist(words)
fd.plot()
from sklearn.feature_extraction.text import CountVectorizer
vectorizer=CountVectorizer()
data_corpus=["guru99 is the best sitefor online tutorials. I love to visit guru99."]
vocabulary=vectorizer.fit(data_corpus)
X= vectorizer.transform(data_corpus)
print(X.toarray())
print(vocabulary.get_feature_names())
from __future__ import unicode_literals, print_function, division
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import numpy as np
import pandas as pd
import os
import re
import random
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
###Output
cpu
|
Examples/01-CFopendata.ipynb
|
###Markdown
Downloading CrossFit Open data. Author: [Ray Bell](https://github.com/raybellwaves). The class `Cfopendata` is used to download CrossFit Open data from the [main site](https://games.crossfit.com/leaderboard/open/2017?division=1&region=0&scaled=0&sort=0&occupation=0&page=1).
###Code
import cfanalytics as cfa
###Output
_____no_output_____
###Markdown
The parameters for `Cfopendata` are as follows (in bold): __year__ : 2011 - 2018. Let's grab this year's results (only tested on 2018, and on 2017 to a limited degree):
###Code
year = 2018
print(year)
###Output
2018
###Markdown
__division__ The CrossFit Open has divisions which are assigned numerical values:
1. = Men
2. = Women
3. = Men 45-49
4. = Women 45-49
5. = Men 50-54
6. = Women 50-54
7. = Men 55-59
8. = Women 55-59
9. = Men 60+
10. = Women 60+
11. = Team
12. = Men 40-44
13. = Women 40-44
14. = Boys 14-15
15. = Girls 14-15
16. = Boys 16-17
17. = Girls 16-17
18. = Men 35-39
19. = Women 35-39

Let's grab the division with the least number of entries (Girls 14-15) to make sure we can download some data.
###Code
division = 15
print(division)
###Output
15
###Markdown
__scaled__ : 0 or 1. This indicates whether the workout result is prescribed (Rx, 0) or scaled (Sc, 1). Let's grab the Rx data.
###Code
scaled = 0
print(scaled)
###Output
0
###Markdown
These parameters are used to obtain the data that is uploaded to the CrossFit Games server. To look at the raw JSON data for these parameters, take a look at this [url](https://games.crossfit.com/competitions/api/v1/competitions/open/2018/leaderboards?page=1&competition=1&year=2018&division=15&scaled=0&sort=0&fittest=1&fittest1=0&occupation=0), which corresponds to what is displayed in a legible format at the [main website](https://games.crossfit.com/leaderboard/open/2018?division=15&region=0&scaled=0&sort=0&occupation=0&page=1). __ddir__ is the data directory we want the data downloaded to. It is encouraged to create a folder named *Data* and store the data there; in a later example, code looks for files in the *Data* directory.
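To double-check which raw JSON the parameters above point to, the leaderboard URL can also be assembled by hand. This is a minimal sketch, not part of `cfanalytics`; the query fields and their default values (`page`, `sort`, `fittest`, `fittest1`, `occupation`) are simply copied from the example URL above.
```python
# Rebuild the example leaderboard API URL from the parameters chosen above.
# Fields not exposed as parameters are kept at the defaults from the example URL.
base = f"https://games.crossfit.com/competitions/api/v1/competitions/open/{year}/leaderboards"
url = (base
       + f"?page=1&competition=1&year={year}&division={division}"
       + f"&scaled={scaled}&sort=0&fittest=1&fittest1=0&occupation=0")
print(url)
```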
###Code
import os
ddir = os.getcwd()+'/Data'
if not os.path.isdir(ddir):
os.makedirs(ddir)
print(ddir)
###Output
/Volumes/SAMSUNG/WORK/CFanalytics_2017/GitHub_folder/cfanalytics/examples/Data
|