Dataset columns:

markdown: string, lengths 0 to 1.02M
code: string, lengths 0 to 832k
output: string, lengths 0 to 1.02M
license: string, lengths 3 to 36
path: string, lengths 6 to 265
repo_name: string, lengths 6 to 127
There's not a "title" column in the comments dataframe, so how is the comment tied to the original post?
# View the first entry in the dataframe and see if you can find that answer
# permalink?
blueorigin_comments.iloc[0]
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
In the EDA below, we find: "We have empty values in 'body' in many rows. It's likely that all of those are postings, not comments, and we should map the posting text into 'body' before merging the dataframes."
def strip_and_rep(word):
    if len(str(word).strip().replace(" ", "")) < 1:
        return 'replace_me'
    else:
        return word

blueorigin['selftext'] = blueorigin['selftext'].map(strip_and_rep)
spacex['selftext'] = spacex['selftext'].map(strip_and_rep)

spacex.selftext.isna().sum()
blueorigin.selftext.isna().sum()
blueorigin.selftext.head()
spacex.iloc[2300:2320]

blo_coms = blueorigin_comments[['subreddit', 'body', 'permalink']]
blo_posts = blueorigin[['subreddit', 'selftext', 'permalink']].copy()
spx_coms = spacex_comments[['subreddit', 'body', 'permalink']]
spx_posts = spacex[['subreddit', 'selftext', 'permalink']].copy()

# blueorigin['selftext'][len(blueorigin['selftext']) > 0]
type(blueorigin.selftext.iloc[1])

blo_posts.rename(columns={'selftext': 'body'}, inplace=True)
spx_posts.rename(columns={'selftext': 'body'}, inplace=True)

# result = pd.concat(frames)
space_wars_2 = pd.concat([blo_coms, blo_posts, spx_coms, spx_posts])
space_wars_2.shape
space_wars_2.head()

dude.show_details(space_wars_2)
Accessing quick look at {dataframe}

{dataframe}.head(2) >>>
    subreddit                                               body  \
0  BlueOrigin  I don't know why they would want to waste prop...
1  BlueOrigin          Haha what if we stole one of his houses?

                                           permalink
0  /r/BlueOrigin/comments/hoc1in/is_there_a_blue_...
1  /r/BlueOrigin/comments/hus5ng/just_posted_a_ne...

{dataframe}.isna().sum() >>>
subreddit     0
body         40
permalink     0
dtype: int64

<class 'pandas.core.frame.DataFrame'>
Int64Index: 14244 entries, 0 to 4999
Data columns (total 3 columns):
 #   Column     Non-Null Count  Dtype
---  ------     --------------  -----
 0   subreddit  14244 non-null  object
 1   body       14204 non-null  object
 2   permalink  14244 non-null  object
dtypes: object(3)
memory usage: 445.1+ KB

{dataframe}.info() >>> None

{dataframe}.shape (14244, 3)
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
We have empty values in 'body' in many rows. It's likely that all of those are postings, not comments, and we should map the posting text into 'body' before merging the dataframes. However, when trying that above, we ended up with more null values. Mapping 'replace_me' into empty fields kept the number of null values low. We'll add that token to our stop_words list when creating the BOW from this corpus.
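As a quick illustration of why the placeholder is harmless (a minimal sketch on toy strings, not the project data), once 'replace_me' is listed as a stop word, CountVectorizer never puts it in the vocabulary:

```python
# Toy sketch: a placeholder token listed in stop_words never reaches the bag of words.
from sklearn.feature_extraction.text import CountVectorizer

toy_docs = ["replace_me", "blue origin launches rocket", "replace_me spacex lands booster"]
cv = CountVectorizer(stop_words=["replace_me"])
bow = cv.fit_transform(toy_docs)

print(sorted(cv.vocabulary_))  # no 'replace_me' in the vocabulary
print(bow.toarray())           # the all-placeholder document becomes an all-zero row
```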
space_wars_2.dropna(inplace=True)
space_wars_2.isna().sum()

space_wars.to_csv('./data/betaset.csv', index=False)
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
Before we split up the training and testing sets, establish our X and y. If you need to reset the dataframe, run the next cell FIRST. Keyword = RESET
space_wars_2 = pd.read_csv('./data/betaset.csv')
space_wars_2.columns
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
I believe that the 'permalink' will be almost as indicative as the 'subreddit' that we are trying to predict, so the X will only include the words...
space_wars_2.head()
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
Convert the target column to binary before moving forward. We want to predict whether a post is SpaceX (1) or is not SpaceX (0).
space_wars_2['subreddit'].value_counts()
space_wars_2['subreddit'] = space_wars_2['subreddit'].map({'spacex': 1, 'BlueOrigin': 0})
space_wars_2['subreddit'].value_counts()

X = space_wars_2.body
y = space_wars_2.subreddit
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
Calculate our baseline split
space_wars_2.subreddit.value_counts(normalize=True)

base_set = space_wars_2.subreddit.value_counts(normalize=True)
baseline = 0.0
if base_set[0] > base_set[1]:
    baseline = base_set[0]
else:
    baseline = base_set[1]
baseline
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
Before we sift out stopwords, etc, let's just run a logistic regression on the words, as well as a decision tree:
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV, train_test_split, cross_val_score
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
Before we can fit the models, we need to convert the text data to numbers; we can use CountVectorizer or TF-IDF for this.
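To make the two options concrete, here is a hedged toy example (three made-up sentences, not the Reddit data): CountVectorizer stores raw counts, while TfidfVectorizer down-weights words that show up in most documents.

```python
# Toy comparison of the two vectorizers on three tiny documents.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = ["rocket launch today", "rocket landing", "launch delayed today"]

cv = CountVectorizer().fit(docs)
tv = TfidfVectorizer().fit(docs)

print(pd.DataFrame(cv.transform(docs).toarray(), columns=sorted(cv.vocabulary_)))
print(pd.DataFrame(tv.transform(docs).toarray().round(2), columns=sorted(tv.vocabulary_)))
```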
# from https://stackoverflow.com/questions/5511708/adding-words-to-nltk-stoplist
# add certain words to the stop_words library
import nltk
from sklearn.feature_extraction.text import CountVectorizer  # needed for cnt_vec below

stopwords = nltk.corpus.stopwords.words('english')
new_words = ('replace_me', 'removed', 'deleted', '0', '1', '2', '3', '4', '5',
             '6', '7', '8', '9', '00', '000')
for i in new_words:
    stopwords.append(i)
print(stopwords)

space_wars_2.isna().sum()
space_wars_2.dropna(inplace=True)

# This section, next number of cells, borrowed from Noelle's lesson on NLP EDA
# Instantiate the "CountVectorizer" object, which is sklearn's
# bag of words tool.
cnt_vec = CountVectorizer(analyzer="word",
                          tokenizer=None,
                          preprocessor=None,
                          stop_words=stopwords,
                          max_features=5000)

X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                    test_size=.20,
                                                    random_state=42,
                                                    stratify=y)
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
Keyword = CHANGELING
y_test

# This section, next number of cells, borrowed from Noelle's lesson on NLP EDA
# fit_transform() does two things: First, it fits the model and
# learns the vocabulary; second, it transforms our training data
# into feature vectors. The input to fit_transform should be a
# list of strings.
train_data_features = cnt_vec.fit_transform(X_train, y_train)
test_data_features = cnt_vec.transform(X_test)

train_data_features.shape
train_data_df = pd.DataFrame(train_data_features)
test_data_features.shape
test_data_df = pd.DataFrame(test_data_features)
test_data_df['subreddit']

lr = LogisticRegression(max_iter=10_000)
lr.fit(train_data_features, y_train)
train_data_features.shape

dt = DecisionTreeClassifier()
dt.fit(train_data_features, y_train)

print('Logistic Regression without doing anything, really:', lr.score(train_data_features, y_train))
print('Decision Tree without doing anything, really:', dt.score(train_data_features, y_train))
print('*'*80)
print('Logistic Regression Test Score without doing anything, really:', lr.score(test_data_features, y_test))
print('Decision Tree Test Score without doing anything, really:', dt.score(test_data_features, y_test))
print('*'*80)
print(f'The baseline split is {baseline}')
Logistic Regression without doing anything, really: 0.8318177373618852
Decision Tree without doing anything, really: 0.8375867800919136
********************************************************************************
Logistic Regression Test Score without doing anything, really: 0.7876417676965194
Decision Tree Test Score without doing anything, really: 0.7442315213140399
********************************************************************************
The baseline split is 0.5682102628285357
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
So we see that we are above our baseline of 57% accuracy, which is what we'd get by always guessing the majority subreddit without trying to predict. We also see that our initial runs without any GridSearch or hyperparameter tuning give us a fairly overfit model in either case. **Let's see next what happens when we sift through our data with stopwords, etc., to really clean up the dataset, and also do some comparative EDA, including comparing lengths of posts. Finally, we can create a separate dataframe with engineered features and try running a Logistic Regression model using only descriptors in the dataset such as post length, word length, most common words, etc.**

Deep EDA of our words
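The descriptor-only model described above never gets built later in this notebook, so here is a hedged sketch of what it might look like once the length features exist (it assumes space_wars already has the word_count and post_length columns engineered below and the 0/1 subreddit target used elsewhere):

```python
# Hypothetical sketch: Logistic Regression on engineered descriptors only, no text features.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X_desc = space_wars[['word_count', 'post_length']]   # assumes these engineered columns exist
y_desc = space_wars['subreddit']                     # assumes the 0/1 mapping applied above

Xd_train, Xd_test, yd_train, yd_test = train_test_split(
    X_desc, y_desc, test_size=0.2, random_state=42, stratify=y_desc)

lr_desc = LogisticRegression(max_iter=10_000)
lr_desc.fit(Xd_train, yd_train)
print('train accuracy:', lr_desc.score(Xd_train, yd_train))
print('test accuracy: ', lr_desc.score(Xd_test, yd_test))
```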
space_wars.shape
space_wars.describe()
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
Feature Engineering: map word count and character length functions onto the 'body' column to see the difference in each.
def word_count(string):
    '''
    returns the number of words or tokens in a string literal, splitting on
    spaces, regardless of word length. This function will include
    space-separated punctuation as a word, such as " : " where the colon
    would be counted

    string, a string
    '''
    str_list = string.split()
    return len(str_list)

def count_chars(string):
    '''
    returns the total number of characters including spaces in a string literal

    string, a string
    '''
    count = 0
    for s in string:
        count += 1
    return count

import lebowski as dude

space_wars['word_count'] = space_wars['body'].map(word_count)
space_wars['word_count'].value_counts().head()

# code from https://stackoverflow.com/questions/39132742/groupby-value-counts-on-the-dataframe-pandas
# df.groupby(['id', 'group', 'term']).size().unstack(fill_value=0)
space_wars.groupby(['subreddit', 'word_count']).size().head()

space_wars['post_length'] = space_wars['body'].map(count_chars)
space_wars['post_length'].value_counts().head()

space_wars.columns

import seaborn as sns
import matplotlib.pyplot as plt

sns.distplot(space_wars['word_count'])

# Borrowing from Noelle's nlp II lesson, import the following,
# and think about what you want to use in the presentation

# imports
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import confusion_matrix, plot_confusion_matrix

# Import CountVectorizer and TFIDFVectorizer from feature_extraction.text.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
Text Feature Extraction: follow along in the NLP EDA II video and do some analysis.
X_train_df = pd.DataFrame(train_data_features.toarray(),
                          columns=cnt_vec.get_feature_names())
X_train_df
X_train_df['subreddit']

# get count of top-occurring words

# empty dictionary
top_words = {}

# loop through columns
for i in X_train_df.columns:
    # save sum of each column in dictionary
    top_words[i] = X_train_df[i].sum()

# top_words to dataframe sorted by highest occurance
most_freq = pd.DataFrame(sorted(top_words.items(), key=lambda x: x[1], reverse=True))
most_freq.head()

# Make a different CountVectorizer
count_v = CountVectorizer(analyzer='word',
                          stop_words=stopwords,
                          max_features=1_000,
                          min_df=50,
                          max_df=.80,
                          ngram_range=(2, 3),
                          )

# Redefine the training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                    test_size=.1,
                                                    stratify=y,
                                                    random_state=42)
baseline
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
Implement Naive Bayes because it's in the project instructions. Multinomial Naive Bayes often outperforms other models on text even though word occurrences are not truly independent.
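For context on why count features suit this model, the standard multinomial Naive Bayes decision rule (textbook form, not taken from the notebook) scores each subreddit $y$ by its prior times the word likelihoods raised to the observed counts:

$$P(y \mid d) \;\propto\; P(y)\prod_{i=1}^{V} P(w_i \mid y)^{x_i}$$

where $x_i$ is how many times word $w_i$ appears in post $d$; the CountVectorizer step in the pipeline supplies exactly those counts.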
pipe = Pipeline([
    ('count_v', CountVectorizer()),
    ('nb', MultinomialNB())
])

pipe_params = {
    'count_v__max_features': [2000, 5000, 9000],
    'count_v__stop_words': [stopwords],
    'count_v__min_df': [2, 3, 10],
    'count_v__max_df': [.9, .8, .7],
    'count_v__ngram_range': [(1, 1), (1, 2)]
}

gs = GridSearchCV(pipe,
                  pipe_params,
                  cv=5,
                  n_jobs=6)

%%time
gs.fit(X_train, y_train)

gs.best_params_
print(gs.best_score_)
gs.score(X_train, y_train)
gs.score(X_test, y_test)
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
So far, the Multinomial Naive Bayes algorithm is the top performer at 79.28% accuracy. The confusion matrix below is very similar to that of the other models.
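For reference, the specificity printed below comes straight from the confusion-matrix cells: the share of actual negative-class posts (BlueOrigin, label 0) that the model gets right,

$$\text{Specificity} = \frac{TN}{TN + FP}$$

so the roughly 0.567 reported here means about 57% of BlueOrigin posts are correctly identified.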
# Get predictions
preds = gs.predict(X_test)

# Save confusion matrix values
tn, fp, fn, tp = confusion_matrix(y_test, preds).ravel()

# View confusion matrix
plot_confusion_matrix(gs, X_test, y_test, cmap='Blues', values_format='d');

# Calculate the specificity
spec = tn / (tn + fp)
print('Specificity:', spec)
Specificity: 0.5670289855072463
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
None of the 1,620 different models we tried in this pipeline performed noticeably better than the thrown-together Logistic Regression classifier that we started out with. Let's try TF-IDF, then Random Cut Forest, and finally Support Vector Machines. Our last run brought the best accuracy score to 79.3%.

TF-IDF
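As a reminder of what TfidfVectorizer actually computes (scikit-learn's default smoothed form, stated here for reference rather than taken from the notebook), each term weight is the raw count scaled by an inverse document frequency,

$$\text{tf-idf}(t, d) = \text{tf}(t, d)\cdot\left(\ln\frac{1 + n}{1 + \text{df}(t)} + 1\right)$$

with $n$ the number of documents and $\text{df}(t)$ the number of documents containing term $t$; each document vector is then L2-normalized, so words that appear in nearly every post contribute very little.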
# Redefine the training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                    test_size=.1,
                                                    stratify=y,
                                                    random_state=42)

tvec = TfidfVectorizer(stop_words=stopwords)

df = pd.DataFrame(tvec.fit_transform(X_train).toarray(),
                  columns=tvec.get_feature_names())
df.head()

# get count of top-occurring words
top_words_tf = {}
for i in df.columns:
    top_words_tf[i] = df[i].sum()

# top_words to dataframe sorted by highest occurance
most_freq_tf = pd.DataFrame(sorted(top_words_tf.items(), key=lambda x: x[1], reverse=True))

plt.figure(figsize=(10, 5))

# visualize top 10 words
plt.bar(most_freq_tf[0][:10], most_freq_tf[1][:10]);

pipe_tvec = Pipeline([
    ('tvec', TfidfVectorizer()),
    ('nb', MultinomialNB())
])

pipe_params_tvec = {
    'tvec__max_features': [2000, 9000],
    'tvec__stop_words': [None, stopwords],
    'tvec__ngram_range': [(1, 1), (1, 2)]
}

gs_tvec = GridSearchCV(pipe_tvec, pipe_params_tvec, cv=5)

%%time
gs_tvec.fit(X_train, y_train)

gs_tvec.best_params_
gs_tvec.score(X_train, y_train)
gs_tvec.score(X_test, y_test)

# Get predictions
preds = gs_tvec.predict(X_test)

# Save confusion matrix values
tn, fp, fn, tp = confusion_matrix(y_test, preds).ravel()

# View confusion matrix
plot_confusion_matrix(gs_tvec, X_test, y_test, cmap='Blues', values_format='d');

# Calculate the specificity
spec = tn / (tn + fp)
print('Specificity:', spec)
Specificity: 0.5489130434782609
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
Random Cut Forest, Bagging, and Support Vector Machines
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
Before we run the decision tree model or RandomForestClassifier(), we need to convert all of the data to numeric data
rf = RandomForestClassifier()
et = ExtraTreesClassifier()

cross_val_score(rf, train_data_features, X_train_df['subreddit']).mean()
cross_val_score(et, train_data_features, X_train_df['subreddit']).mean()
# cross_val_score(rf, test_data_features, y_test).mean()
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
Make sure that we are using X and y data that are completely numeric and free of nulls
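A small hedged check for that requirement (illustrative only, assuming X and y are still the body text and the 0/1 target defined earlier; X stays as raw text because the pipelines below vectorize it themselves):

```python
# Quick sanity check before the tree-based pipelines: no nulls, numeric target.
print('nulls in X:', X.isna().sum())
print('nulls in y:', y.isna().sum())
print('y dtype:   ', y.dtype)             # expect an integer dtype after the 0/1 mapping
print('y values:  ', sorted(y.unique()))  # expect [0, 1]
```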
space_wars.head(1)
space_wars.shape

pipe_rf = Pipeline([
    ('count_v', CountVectorizer()),
    ('rf', RandomForestClassifier()),
])

pipe_ef = Pipeline([
    ('count_v', CountVectorizer()),
    ('ef', ExtraTreesClassifier()),
])

pipe_params = {
    'count_v__max_features': [2000, 5000, 9000],
    'count_v__stop_words': [stopwords],
    'count_v__min_df': [2, 3, 10],
    'count_v__max_df': [.9, .8, .7],
    'count_v__ngram_range': [(1, 1), (1, 2)]
}

%%time
gs_rf = GridSearchCV(pipe_rf,
                     pipe_params,
                     cv=5,
                     n_jobs=6)
gs_rf.fit(X_train, y_train)
print(gs_rf.best_score_)
gs_rf.best_params_

gs_rf.score(X_train, y_train)
gs_rf.score(X_test, y_test)

# %%time
# gs_ef = GridSearchCV(pipe_ef,
#                      pipe_params,
#                      cv = 5,
#                      n_jobs=6)
# gs_ef.fit(X_train, y_train)
# print(gs_ef.best_score_)
# gs_ef.best_params_

# gs_ef.score(X_train, y_train)
# gs_ef.score(X_test, y_test)
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
Now run through Gradient Boosting and SVM
from sklearn.ensemble import GradientBoostingClassifier, AdaBoostClassifier, VotingClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
Using samples from Riley's Lessons:
AdaBoostClassifier()
GradientBoostingClassifier()
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
Use the CountVectorizer to convert the data to numeric data prior to running it through the below VotingClassifier
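The cell below defines the voting ensemble but never actually attaches the CountVectorizer, so here is a hedged sketch (not the notebook's exact code) of how the vectorizer could sit in front of the ensemble. The parameter values are reused from the earlier grid search, and StandardScaler needs with_mean=False to accept the sparse matrix the vectorizer produces:

```python
# Sketch: vectorize the text inside the same pipeline that feeds the VotingClassifier.
vote_pipe = Pipeline([
    ('count_v', CountVectorizer(stop_words=stopwords, max_features=9000,
                                min_df=2, max_df=0.9, ngram_range=(1, 1))),
    ('vote', VotingClassifier([
        ('ada', AdaBoostClassifier(base_estimator=DecisionTreeClassifier())),
        ('grad_boost', GradientBoostingClassifier()),
        ('tree', DecisionTreeClassifier()),
        ('knn', Pipeline([('ss', StandardScaler(with_mean=False)),
                          ('knn', KNeighborsClassifier())])),
    ])),
])
vote_pipe.fit(X_train, y_train)
print(vote_pipe.score(X_test, y_test))
```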
# CountVectorizer parameters noted from the earlier grid search; commented out
# so this cell runs:
# 'count_v__max_df': 0.9,
# 'count_v__max_features': 9000,
# 'count_v__min_df': 2,
# 'count_v__ngram_range': (1, 1),

knn_pipe = Pipeline([
    ('ss', StandardScaler()),
    ('knn', KNeighborsClassifier())
])

%%time
vote = VotingClassifier([
    ('ada', AdaBoostClassifier(base_estimator=DecisionTreeClassifier())),
    ('grad_boost', GradientBoostingClassifier()),
    ('tree', DecisionTreeClassifier()),
    ('knn_pipe', knn_pipe)
])

params = {}
# 'ada__n_estimators': [50, 51],
# 'grad_boost__n_estimators': [10, 11],
# 'knn_pipe__knn__n_neighbors': [5],
# 'ada__base_estimator__max_depth': [1, 2],
# 'weights': [[.25] * 4, [.3, .3, .3, .1]]
# }

gs = GridSearchCV(vote, param_grid=params, cv=3)
gs.fit(X_train, y_train)
print(gs.best_score_)
gs.best_params_
_____no_output_____
CC0-1.0
code/sandbox-Blue-O.ipynb
MattPat1981/new_space_race_nlp
Uses Fine-Tuned BERT network to classify biomechanics papers from PubMed
# Check date !rm /etc/localtime !ln -s /usr/share/zoneinfo/America/Los_Angeles /etc/localtime !date # might need to restart runtime if timezone didn't change ## Install & load libraries !pip install tensorflow==2.7.0 try: from official.nlp import optimization except: !pip install -q -U tf-models-official==2.4.0 from official.nlp import optimization try: from Bio import Entrez except: !pip install -q -U biopython from Bio import Entrez try: import tensorflow_text as text except: !pip install -q -U tensorflow_text==2.7.3 import tensorflow_text as text import pandas as pd import numpy as np import nltk nltk.download('stopwords') from nltk.corpus import stopwords import tensorflow as tf # probably have to lock version import string import datetime from bs4 import BeautifulSoup from sklearn.preprocessing import LabelEncoder from tensorflow.keras.models import load_model import tensorflow_hub as hub from google.colab import drive import datetime as dt #Define date range today = dt.date.today() yesterday = today - dt.timedelta(days=1) week_ago = yesterday - dt.timedelta(days=7) # ensure overlap in pubmed search days_ago_6 = yesterday - dt.timedelta(days=6) # for text output # Mount Google Drive for model and csv up/download drive.mount('/content/gdrive') print(today) # Define Search Criteria ---- def search(query): Entrez.email = '[email protected]' handle = Entrez.esearch(db='pubmed', sort='most recent', retmax='5000', retmode='xml', datetype='pdat', # pdat is published date, edat is entrez date. # reldate=7, # only within n days from now mindate= min_date, maxdate= max_date, # for searching date range term=query) results = Entrez.read(handle) return results # Perform Search and Pull Paper Titles ---- def fetch_details(ids): Entrez.email = '[email protected]' handle = Entrez.efetch(db='pubmed', retmode='xml', id=ids) results = Entrez.read(handle) return results # Make the stop words for string cleaning ---- def html_strip(text): text = BeautifulSoup(text, 'lxml').text text = text.replace('[','').replace(']','') return text def clean_str(text, stops): text = BeautifulSoup(text, 'lxml').text text = text.split() return ' '.join([word for word in text if word not in stops]) stop = list(stopwords.words('english')) stop_c = [string.capwords(word) for word in stop] for word in stop_c: stop.append(word) new_stop = ['The', 'An', 'A', 'Do', 'Is', 'In', 'StringElement', 'NlmCategory', 'Label', 'attributes', 'INTRODUCTION', 'METHODS', 'BACKGROUND', 'RESULTS', 'CONCLUSIONS'] for s in new_stop: stop.append(s) # Search terms (can test string with Pubmed Advanced Search) ---- # search_results = search('(Biomech*[Title/Abstract] OR locomot*[Title/Abstract])') min_date = week_ago.strftime('%m/%d/%Y') max_date = yesterday.strftime('%m/%d/%Y') search_results = search('(biomech*[Title/Abstract] OR locomot*[Title/Abstract] NOT opiod*[Title/Abstract] NOT pharm*[Journal] NOT mouse[Title/Abstract] NOT drosophil*[Title/Abstract] NOT mice[Title/Abstract] NOT rats*[Title/Abstract] NOT elegans[Title/Abstract])') id_list = search_results['IdList'] papers = fetch_details(id_list) print(len(papers['PubmedArticle']), 'Papers found') titles, full_titles, keywords, authors, links, journals, abstracts = ([] for i in range(7)) for paper in papers['PubmedArticle']: # clean and store titles, abstracts, and links t = clean_str(paper['MedlineCitation']['Article']['ArticleTitle'], stop).replace('[','').replace(']','').capitalize() # rm brackets that survived beautifulsoup, sentence case titles.append(t) 
full_titles.append(paper['MedlineCitation']['Article']['ArticleTitle']) pmid = paper['MedlineCitation']['PMID'] links.append('[URL="https://www.ncbi.nlm.nih.gov/pubmed/{0}"]{1}[/URL]'.format(pmid, html_strip(paper['MedlineCitation']['Article']['ArticleTitle']))) try: abstracts.append(clean_str(paper['MedlineCitation']['Article']['Abstract']['AbstractText'][0], stop).replace('[','').replace(']','').capitalize()) # rm brackets that survived beautifulsoup, sentence case except: abstracts.append('') # clean and store authors auths = [] try: for auth in paper['MedlineCitation']['Article']['AuthorList']: try: # see if there is a last name and initials auth_name = [auth['LastName'], auth['Initials'] + ','] auth_name = ' '.join(auth_name) auths.append(auth_name) except: if 'LastName' in auth.keys(): # maybe they don't have initials auths.append(auth['LastName'] + ',') else: # no last name auths.append('') print(paper['MedlineCitation']['Article']['ArticleTitle'], 'has an issue with an author name:') except: auths.append('AUTHOR NAMES ERROR') print(paper['MedlineCitation']['Article']['ArticleTitle'], 'has no author list?') # compile authors authors.append(' '.join(auths).replace('[','').replace(']','')) # rm brackets in names # journal names journals.append(paper['MedlineCitation']['Article']['Journal']['Title'].replace('[','').replace(']','')) # rm brackets # store keywords if paper['MedlineCitation']['KeywordList'] != []: kwds = [] for kw in paper['MedlineCitation']['KeywordList'][0]: kwds.append(kw[:]) keywords.append(', '.join(kwds).lower()) else: keywords.append('') # Put Titles, Abstracts, Authors, Journal, and Keywords into dataframe papers_df = pd.DataFrame({'title': titles, 'keywords': keywords, 'abstract': abstracts, 'authors': authors, 'journal': journals, 'links': links, 'raw_title': full_titles, 'mindate': min_date, 'maxdate': max_date}) # remove papers with no title or no authors for index, row in papers_df.iterrows(): if row['title'] == '' or row['authors'] == 'AUTHOR NAMES ERROR': papers_df.drop(index, inplace=True) papers_df.reset_index(drop=True, inplace=True) # join titles and abstract papers_df['BERT_input'] = pd.DataFrame(papers_df['title'] + ' ' + papers_df['abstract']) # Load Fine-Tuned BERT Network ---- model = tf.saved_model.load('/content/gdrive/My Drive/BiomchBERT/Data/BiomchBERT/') print('Loaded model from disk') # Load Label Encoder ---- le = LabelEncoder() le.classes_ = np.load('/content/gdrive/My Drive/BiomchBERT/Data/BERT_label_encoder.npy') print('Loaded Label Encoder') # Predict Paper Topic ---- predicted_topic = model(papers_df['BERT_input'], training=False) # will run out of GPU memory (14GB) if predicting more than ~2000 title+abstracts at once # Determine Publications that BiomchBERT is unsure about ---- topics, pred_val_str = ([] for i in range(2)) for pred_prob in predicted_topic: pred_val = np.max(pred_prob) if pred_val > 1.5 * np.sort(pred_prob)[-2]: # Is top confidence score more than 1.5x the second best confidence score? 
topics.append(le.inverse_transform([np.argmax(pred_prob)])[0]) top1 = le.inverse_transform([np.argmax(pred_prob)])[0] top2 = le.inverse_transform([list(pred_prob).index([np.sort(pred_prob)[-2]])])[0] # pred_val_str.append(pred_val * 100) # just report top category pred_val_str.append(str(np.round(pred_val * 100, 1)) + '% ' + str(top1) + '; ' + str( np.round(np.sort(pred_prob)[-2] * 100, 1)) + '% ' + str(top2)) # report top 2 categories else: topics.append('UNKNOWN') top1 = le.inverse_transform([np.argmax(pred_prob)])[0] top2 = le.inverse_transform([list(pred_prob).index([np.sort(pred_prob)[-2]])])[0] pred_val_str.append(str(np.round(pred_val * 100, 1)) + '% ' + str(top1) + '; ' + str( np.round(np.sort(pred_prob)[-2] * 100, 1)) + '% ' + str(top2)) papers_df['topic'] = topics papers_df['pred_val'] = pred_val_str print('BiomchBERT is unsure about {0} papers\n'.format(len(papers_df[papers_df['topic'] == 'UNKNOWN']))) # Prompt User to decide for BiomchBERT ---- unknown_papers = papers_df[papers_df['topic'] == 'UNKNOWN'] for indx, paper in unknown_papers.iterrows(): print(paper['raw_title']) print(paper['journal']) print(paper['pred_val']) print() splt_str = paper['pred_val'].split(';') options = [str for pred_cls in splt_str for str in le.classes_ if (str in pred_cls)] choice = input('(1)st topic, (2)nd topic, (o)ther topic, or (r)emove paper? ') print() if choice == '1': papers_df.iloc[indx]['topic'] = str(options[0]) elif choice == '2': papers_df.iloc[indx]['topic'] = str(options[1]) elif choice == 'o': # print all categories so you can select for i in zip(range(len(le.classes_)),le.classes_): print(i) new_cat = input('Enter number of new class or type "r" to remove paper: ') print() if new_cat == 'r': papers_df.iloc[indx]['topic'] = '_REMOVE_' # not deleted, but withheld from text file output else: papers_df.iloc[indx]['topic'] = le.classes_[int(new_cat)] elif choice == 'r': papers_df.iloc[indx]['topic'] = '_REMOVE_' # not deleted, but withheld from text file output print('Removing {0} papers\n'.format(len(papers_df[papers_df['topic'] == '_REMOVE_']))) # Double check that none of these papers were included in past literature updates ---- # load prior papers # papers_df.to_csv('/content/gdrive/My Drive/BiomchBERT/Updates/prior_papers.csv', index=False) # run ONLY if there are no prior papers prior_papers = pd.read_csv('/content/gdrive/My Drive/BiomchBERT/Updates/prior_papers.csv') prior_papers.dropna(subset=['title'], inplace=True) prior_papers.reset_index(drop=True, inplace=True) # NEED TO DO: find matching papers between current week and prior papers using Pubmed ID since titles can change from ahead of print to final version. # match = papers_df['links'].split(']')[0].isin(prior_papers['links'].split(']')[0]) match = papers_df['title'].isin(prior_papers['title']) # boolean print('Removing {0} papers found in prior literature updates\n'.format(sum(match))) # filter and check if everything accidentally was removed filtered_papers_df = papers_df.drop(papers_df[match].index) if filtered_papers_df.shape[0] < 1: raise ValueError('might have removed all the papers for some reason. 
') else: papers_df = filtered_papers_df papers_df.reset_index(drop=True, inplace=True) updated_prior_papers = pd.concat([prior_papers, papers_df], axis=0) updated_prior_papers.reset_index(drop=True, inplace=True) updated_prior_papers.to_csv('/content/gdrive/My Drive/BiomchBERT/Updates/prior_papers.csv', index=False) # Create Text File for Biomch-L ---- # Compile papers grouped by topic txtname = '/content/gdrive/My Drive/BiomchBERT/Updates/' + today.strftime("%Y-%m-%d") + '-litupdate.txt' txt = open(txtname, 'w', encoding='utf-8') txt.write('[SIZE=16px][B]LITERATURE UPDATE[/B][/SIZE]\n') txt.write(days_ago_6.strftime("%b %d, %Y") + ' - '+ yesterday.strftime("%b %d, %Y")+'\n') # a week ago from yesterday. txt.write( """ Literature search terms: biomech* & locomot* Publications are classified by [URL="https://www.ryan-alcantara.com/projects/p88_BiomchBERT/"]BiomchBERT[/URL], a neural network trained on past Biomch-L Literature Updates. BiomchBERT is managed by [URL="https://jouterleys.github.io"]Jereme Outerleys[/URL], a Doctoral Student at Queen's University. Each publication has a score (out of 100%) reflecting how confident BiomchBERT is that the publication belongs in a particular category (top 2 shown). If something doesn't look right, email jereme.outerleys[at]queensu.ca. Twitter: [URL="https://www.twitter.com/jouterleys"]@jouterleys[/URL]. """ ) # Write papers to text file grouped by topic ---- topic_list = np.unique(papers_df.sort_values('topic')['topic']) for topic in topic_list: papers_subset = pd.DataFrame(papers_df[papers_df.topic == topic].reset_index(drop=True)) txt.write('\n') # TOPIC NAME (with some cleaning) if topic == '_REMOVE_': continue elif topic == 'UNKNOWN': txt.write('[SIZE=16px][B]*Papers BiomchBERT is unsure how to classify*[/B][/SIZE]\n') elif topic == 'CARDIOVASCULAR/CARDIOPULMONARY': topic = 'CARDIOVASCULAR/PULMONARY' txt.write('[SIZE=16px][B]*%s*[/B][/SIZE]\n' % topic) elif topic == 'CELLULAR/SUBCELLULAR': topic = 'CELLULAR' txt.write('[SIZE=16px][B]*%s*[/B][/SIZE]\n' % topic) elif topic == 'ORTHOPAEDICS/SURGERY': topic = 'ORTHOPAEDICS (SURGERY)' txt.write('[SIZE=16px][B]*%s*[/B][/SIZE]\n' % topic) elif topic == 'ORTHOPAEDICS/SPINE': topic = 'ORTHOPAEDICS (SPINE)' txt.write('[SIZE=16px][B]*%s*[/B][/SIZE]\n' % topic) else: txt.write('[SIZE=16px][B]*%s*[/B][/SIZE]\n' % topic) # HYPERLINKED PAPERS, AUTHORS, JOURNAL NAME for i, paper in enumerate(papers_subset['links']): txt.write('[B]%s[/B] ' % paper) txt.write('%s ' % papers_subset['authors'][i]) txt.write('[I]%s[/I]. ' % papers_subset['journal'][i]) # CONFIDENCE SCORE (BERT softmax categorical crossentropy) try: txt.write('(%.1f%%) \n\n' % papers_subset['pred_val'][i]) except: txt.write('(%s)\n\n' % papers_subset['pred_val'][i]) txt.write('[SIZE=16px][B]*PICK OF THE WEEK*[/B][/SIZE]\n') txt.close() print('Literature Update Exported for Biomch-L') print('Location:', txtname)
_____no_output_____
Apache-2.0
classify_papers.ipynb
jouterleys/BiomchBERT
Wilcoxon and Chi Squared
import numpy as np import pandas as pd df = pd.read_csv("prepared_neuror2_data.csv") def stats_for_neuror2_range(lo, hi): admissions = df[df.NR2_Score.between(lo, hi)] total_patients = admissions.shape[0] readmits = admissions[admissions.UnplannedReadmission] total_readmits = readmits.shape[0] return (total_readmits, total_patients, "%.1f" % (total_readmits/total_patients*100,)) mayo_davis = [] for (expected, (lo, hi)) in [(1.4, (0, 0)), (4, (1, 4)), (5.6, (5, 8)), (14.2, (9, 13)), (33.0, (14, 19)), (0.0, (20, 22))]: (total_readmits, total_patients, readmit_percent) = stats_for_neuror2_range(lo, hi) mayo_davis.append([lo, hi, expected, readmit_percent, total_readmits, total_patients]) title="Davis and Mayo Populations by NeuroR2 Score" print(title) print("-" * len(title)) print(pd.DataFrame(mayo_davis, columns=["Low", "High", "Mayo %", "Davis %", "Readmits", "Total"]).to_string(index=False)) # Continuous variables were compared using wilcoxon from scipy.stats import ranksums as wilcoxon def create_samples(col_name): unplanned = df[df.UnplannedReadmission][col_name].values planned = df[~df.UnplannedReadmission][col_name].values return (unplanned, planned) continous_vars = ["AdmissionAgeYears", "LengthOfStay", "NR2_Score"]#, "MsDrgWeight"] for var in continous_vars: (unplanned, planned) = create_samples(var) (stat, p) = wilcoxon(unplanned, planned) print ("%30s" % (var,), "p-value %f" % (p,)) unplanned, planned = create_samples("LengthOfStay") print(pd.DataFrame(unplanned, columns=["Unplanned Readmission"]).describe()) print(pd.DataFrame(planned, columns=[" Index Only Admission"]).describe()) # Categorical variables were compared using chi squared from scipy.stats import chi2, chi2_contingency from IPython.core.display import display, HTML # Collect all the categorical features cols = sorted([col for col in df.columns if "_" in col]) for var in continous_vars: try: cols.remove(var) except: pass index_only = df[~df.UnplannedReadmission].shape[0] unplanned_readmit = df[df.UnplannedReadmission].shape[0] html = "<table><tr>" for th in ["Characteristic", "Index admission only</br>(n=%d)" % (index_only,), "Unplanned readmission</br>(n = %d)" % (unplanned_readmit,),"<i>p</i> Value"]: html += "<th>%s</th>" % (th,) html += "</tr>" start_row = "<tr><td>%s</td>" end_row = "<td>%d (%.1f)</td><td>%d (%.1f)</td><td></td></tr>" pval_str = lambda p: "<0.001" if p<0.001 else "%.3f" % p col_str = lambda col, p: "<b><i>%s</i></b>" % (col,) if p < 0.05 else col for col in sorted(cols): table = pd.crosstab(df[col], df.UnplannedReadmission) stat, p, dof, expected = chi2_contingency(table) html += "<tr><td>%s</td><td></td><td></td><td>%s</td></tr>" % (col_str(col,p), pval_str(p)) html += start_row % ("No",) html += end_row % (table.values[0][0], expected[0][0], table.values[0][1], expected[0][1]) try: html += start_row % ("Yes",) html += end_row % (table.values[1][0], expected[1,0], table.values[1][1], expected[1][1]) except IndexError: html += "<td>-</td><td>-</td><td></td></tr>" html += "</table>" display(HTML(html))
_____no_output_____
BSD-2-Clause
Wilcoxon and Chi Squared.ipynb
massie/readmission-study
Note: This notebook was executed on Google Colab Pro.
!pip3 install pytorch-lightning --quiet

from google.colab import drive
drive.mount('/content/drive')

import os
os.chdir('/content/drive/MyDrive/Colab Notebooks/atmacup11/experiments')
_____no_output_____
MIT
experiments/exp027.ipynb
Quvotha/atmacup11
Settings
EXP_NO = 27
SEED = 1
N_SPLITS = 5
TARGET = 'target'
GROUP = 'art_series_id'
REGRESSION = False
assert((TARGET, REGRESSION) in (('target', True), ('target', False), ('sorting_date', True)))
MODEL_NAME = 'resnet'
BATCH_SIZE = 512
NUM_EPOCHS = 500
_____no_output_____
MIT
experiments/exp027.ipynb
Quvotha/atmacup11
Library
from collections import defaultdict
from functools import partial
import gc
import glob
import json
from logging import getLogger, StreamHandler, FileHandler, DEBUG, Formatter
import pickle
import os
import sys
import time

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.metrics import confusion_matrix, mean_squared_error, cohen_kappa_score
# from sklearnex import patch_sklearn
from pytorch_lightning import seed_everything
import torch
import torch.nn as nn
import torch.optim
from torch.utils.data import DataLoader
from torchvision import transforms

SCRIPTS_DIR = os.path.join('..', 'scripts')
assert(os.path.isdir(SCRIPTS_DIR))
if SCRIPTS_DIR not in sys.path:
    sys.path.append(SCRIPTS_DIR)

from cross_validation import load_cv_object_ids
from dataset import load_csvfiles, load_photofile, load_photofiles, AtmaImageDatasetV02
from folder import experiment_dir_of
from models import initialize_model
from utils import train_model, predict_by_model

pd.options.display.float_format = '{:.5f}'.format

DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
DEVICE
_____no_output_____
MIT
experiments/exp027.ipynb
Quvotha/atmacup11
Prepare directory
output_dir = experiment_dir_of(EXP_NO)
output_dir
_____no_output_____
MIT
experiments/exp027.ipynb
Quvotha/atmacup11
Prepare logger
logger = getLogger(__name__)

'''Reference
https://docs.python.org/ja/3/howto/logging-cookbook.html
'''
logger.setLevel(DEBUG)

# create file handler which logs even debug messages
fh = FileHandler(os.path.join(output_dir, 'log.log'))
fh.setLevel(DEBUG)

# create console handler with a higher log level
ch = StreamHandler()
ch.setLevel(DEBUG)

# create formatter and add it to the handlers
formatter = Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
ch.setFormatter(formatter)

# add the handlers to the logger
logger.addHandler(fh)
logger.addHandler(ch)
len(logger.handlers)

logger.info('Experiment no: {}'.format(EXP_NO))
logger.info('CV: StratifiedGroupKFold')
logger.info('SEED: {}'.format(SEED))
logger.info('REGRESSION: {}'.format(REGRESSION))
2021-07-21 19:41:13,190 - __main__ - INFO - Experiment no: 27
2021-07-21 19:41:13,192 - __main__ - INFO - CV: StratifiedGroupKFold
2021-07-21 19:41:13,194 - __main__ - INFO - SEED: 1
2021-07-21 19:41:13,197 - __main__ - INFO - REGRESSION: False
MIT
experiments/exp027.ipynb
Quvotha/atmacup11
Load csv files
SINCE = time.time()

logger.debug('Start loading csv files ({:.3f} seconds passed)'.format(time.time() - SINCE))
train, test, materials, techniques, sample_submission = load_csvfiles()
logger.debug('Complete loading csv files ({:.3f} seconds passed)'.format(time.time() - SINCE))

train
test
_____no_output_____
MIT
experiments/exp027.ipynb
Quvotha/atmacup11
Cross validation
seed_everything(SEED) train.set_index('object_id', inplace=True) fold_object_ids = load_cv_object_ids() for i, (train_object_ids, valid_object_ids) in enumerate(zip(fold_object_ids[0], fold_object_ids[1])): assert(set(train_object_ids) & set(valid_object_ids) == set()) num_fold = i + 1 logger.debug('Start fold {} ({:.3f} seconds passed)'.format(num_fold, time.time() - SINCE)) # Separate dataset into training/validation fold y_train = train.loc[train_object_ids, TARGET].values y_valid = train.loc[valid_object_ids, TARGET].values torch.cuda.empty_cache() # Training logger.debug('Start training model ({:.3f} seconds passed)'.format(time.time() - SINCE)) ## Prepare model num_classes = len(set(list(y_train))) model, input_size = initialize_model(MODEL_NAME, num_classes) model.to(DEVICE) ## Prepare transformers train_transformer = transforms.Compose([ transforms.RandomResizedCrop(input_size), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]) val_transformer = transforms.Compose([ transforms.Resize(input_size), transforms.CenterCrop(input_size), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]) # Prepare dataset if not REGRESSION: # label should be one-hot style y_train = np.identity(num_classes)[y_train].astype('int') y_valid = np.identity(num_classes)[y_valid].astype('int') train_dataset = AtmaImageDatasetV02(train_object_ids, train_transformer, y_train) val_dataset = AtmaImageDatasetV02(valid_object_ids, val_transformer, y_valid) # Prepare dataloader dataloaders = { 'train': DataLoader(dataset=train_dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=os.cpu_count()), 'val': DataLoader(dataset=val_dataset, batch_size=BATCH_SIZE, shuffle=False, num_workers=os.cpu_count()), } ## train estimator estimator, train_losses, valid_losses = train_model( model, dataloaders, criterion=nn.BCEWithLogitsLoss(), num_epochs=NUM_EPOCHS, device=DEVICE, optimizer=torch.optim.Adam(model.parameters()), log_func=logger.debug, is_inception=MODEL_NAME == 'inception') logger.debug('Complete training ({:.3f} seconds passed)'.format(time.time() - SINCE)) ## Visualize training loss plt.plot(train_losses, label='train') plt.plot(valid_losses, label='valid') plt.legend(loc='upper left', bbox_to_anchor=[1., 1.]) plt.title(f'Fold{num_fold}') plt.show() # Save model and prediction ## Prediction predictions = {} for fold_, object_ids_ in zip(['train', 'val', 'test'], [train_object_ids, valid_object_ids, test['object_id']]): # Prepare transformer transformer_ = transforms.Compose([ transforms.Resize(input_size), transforms.CenterCrop(input_size), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]) # Prepare dataset dataset_ = AtmaImageDatasetV02(object_ids_, transformer_) # Prepare dataloader dataloader_ = DataLoader(dataset=dataset_, batch_size=BATCH_SIZE, shuffle=False, num_workers=os.cpu_count()) # Prediction predictions[fold_] = predict_by_model(estimator, dataloader_, DEVICE) logger.debug('Complete prediction for {} fold ({:.3f} seconds passed)' \ .format(fold_, time.time() - SINCE)) if REGRESSION: pred_train = pd.DataFrame(data=predictions['train'], columns=['pred']) pred_valid = pd.DataFrame(data=predictions['val'], columns=['pred']) pred_test = pd.DataFrame(data=predictions['test'], columns=['pred']) else: columns = list(range(num_classes)) pred_train = pd.DataFrame(data=predictions['train'], columns=columns) pred_valid = 
pd.DataFrame(data=predictions['val'], columns=columns) pred_test = pd.DataFrame(data=predictions['test'], columns=columns) # else: # Do not come here! # raise NotImplemented # try: # pred_train = pd.DataFrame(data=estimator.predict_proba(X_train), # columns=estimator.classes_) # pred_valid = pd.DataFrame(data=estimator.predict_proba(X_valid), # columns=estimator.classes_) # pred_test = pd.DataFrame(data=estimator.predict_proba(X_test), # columns=estimator.classes_) # except AttributeError: # pred_train = pd.DataFrame(data=estimator.decision_function(X_train), # columns=estimator.classes_) # pred_valid = pd.DataFrame(data=estimator.decision_function(X_valid), # columns=estimator.classes_) # pred_test = pd.DataFrame(data=estimator.decision_function(X_test), # columns=estimator.classes_) ## Training set pred_train['object_id'] = train_object_ids filepath_fold_train = os.path.join(output_dir, f'cv_fold{num_fold}_training.csv') pred_train.to_csv(filepath_fold_train, index=False) logger.debug('Save training fold to {} ({:.3f} seconds passed)' \ .format(filepath_fold_train, time.time() - SINCE)) ## Validation set pred_valid['object_id'] = valid_object_ids filepath_fold_valid = os.path.join(output_dir, f'cv_fold{num_fold}_validation.csv') pred_valid.to_csv(filepath_fold_valid, index=False) logger.debug('Save validation fold to {} ({:.3f} seconds passed)' \ .format(filepath_fold_valid, time.time() - SINCE)) ## Test set pred_test['object_id'] = test['object_id'].values filepath_fold_test = os.path.join(output_dir, f'cv_fold{num_fold}_test.csv') pred_test.to_csv(filepath_fold_test, index=False) logger.debug('Save test result {} ({:.3f} seconds passed)' \ .format(filepath_fold_test, time.time() - SINCE)) ## Model filepath_fold_model = os.path.join(output_dir, f'cv_fold{num_fold}_model.torch') torch.save(estimator.state_dict(), filepath_fold_model) # with open(filepath_fold_model, 'wb') as f: # pickle.dump(estimator, f) logger.debug('Save model {} ({:.3f} seconds passed)'.format(filepath_fold_model, time.time() - SINCE)) # Save memory del (estimator, y_train, y_valid, pred_train, pred_valid, pred_test) gc.collect() logger.debug('Complete fold {} ({:.3f} seconds passed)'.format(num_fold, time.time() - SINCE))
2021-07-21 19:41:14,941 - __main__ - DEBUG - Start fold 1 (1.737 seconds passed) 2021-07-21 19:41:14,948 - __main__ - DEBUG - Start training model (1.744 seconds passed) 2021-07-21 19:41:21,378 - __main__ - DEBUG - Epoch 0/499 /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.) return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode) 2021-07-21 19:46:23,273 - __main__ - DEBUG - train Loss: 0.6660 2021-07-21 19:48:28,084 - __main__ - DEBUG - val Loss: 29.5672 2021-07-21 19:48:28,094 - __main__ - DEBUG - Epoch 1/499 2021-07-21 19:48:36,328 - __main__ - DEBUG - train Loss: 0.5293 2021-07-21 19:48:38,931 - __main__ - DEBUG - val Loss: 1.9881 2021-07-21 19:48:38,940 - __main__ - DEBUG - Epoch 2/499 2021-07-21 19:48:47,096 - __main__ - DEBUG - train Loss: 0.5137 2021-07-21 19:48:49,716 - __main__ - DEBUG - val Loss: 0.7746 2021-07-21 19:48:49,725 - __main__ - DEBUG - Epoch 3/499 2021-07-21 19:48:58,048 - __main__ - DEBUG - train Loss: 0.5090 2021-07-21 19:49:00,838 - __main__ - DEBUG - val Loss: 0.5369 2021-07-21 19:49:00,847 - __main__ - DEBUG - Epoch 4/499 2021-07-21 19:49:09,254 - __main__ - DEBUG - train Loss: 0.5060 2021-07-21 19:49:12,055 - __main__ - DEBUG - val Loss: 0.5659 2021-07-21 19:49:12,056 - __main__ - DEBUG - Epoch 5/499 2021-07-21 19:49:20,634 - __main__ - DEBUG - train Loss: 0.5033 2021-07-21 19:49:23,357 - __main__ - DEBUG - val Loss: 0.5304 2021-07-21 19:49:23,366 - __main__ - DEBUG - Epoch 6/499 2021-07-21 19:49:31,648 - __main__ - DEBUG - train Loss: 0.5015 2021-07-21 19:49:34,271 - __main__ - DEBUG - val Loss: 0.5245 2021-07-21 19:49:34,280 - __main__ - DEBUG - Epoch 7/499 2021-07-21 19:49:42,482 - __main__ - DEBUG - train Loss: 0.4997 2021-07-21 19:49:45,132 - __main__ - DEBUG - val Loss: 0.5221 2021-07-21 19:49:45,141 - __main__ - DEBUG - Epoch 8/499 2021-07-21 19:49:53,365 - __main__ - DEBUG - train Loss: 0.5030 2021-07-21 19:49:55,987 - __main__ - DEBUG - val Loss: 0.5143 2021-07-21 19:49:55,996 - __main__ - DEBUG - Epoch 9/499 2021-07-21 19:50:04,363 - __main__ - DEBUG - train Loss: 0.4967 2021-07-21 19:50:07,148 - __main__ - DEBUG - val Loss: 0.5456 2021-07-21 19:50:07,150 - __main__ - DEBUG - Epoch 10/499 2021-07-21 19:50:15,648 - __main__ - DEBUG - train Loss: 0.4901 2021-07-21 19:50:18,438 - __main__ - DEBUG - val Loss: 0.6672 2021-07-21 19:50:18,440 - __main__ - DEBUG - Epoch 11/499 2021-07-21 19:50:26,876 - __main__ - DEBUG - train Loss: 0.4877 2021-07-21 19:50:29,464 - __main__ - DEBUG - val Loss: 0.6374 2021-07-21 19:50:29,465 - __main__ - DEBUG - Epoch 12/499 2021-07-21 19:50:37,714 - __main__ - DEBUG - train Loss: 0.4942 2021-07-21 19:50:40,335 - __main__ - DEBUG - val Loss: 0.5209 2021-07-21 19:50:40,337 - __main__ - DEBUG - Epoch 13/499 2021-07-21 19:50:48,549 - __main__ - DEBUG - train Loss: 0.4861 2021-07-21 19:50:51,206 - __main__ - DEBUG - val Loss: 0.4929 2021-07-21 19:50:51,217 - __main__ - DEBUG - Epoch 14/499 2021-07-21 19:50:59,393 - __main__ - DEBUG - train Loss: 0.4823 2021-07-21 19:51:02,006 - __main__ - DEBUG - val Loss: 0.4895 2021-07-21 19:51:02,015 - __main__ - DEBUG - Epoch 15/499 2021-07-21 19:51:10,370 - __main__ - DEBUG - train Loss: 0.4791 2021-07-21 19:51:13,172 - __main__ - DEBUG - val Loss: 0.5717 2021-07-21 19:51:13,174 - __main__ - DEBUG - 
Epoch 16/499 2021-07-21 19:51:21,631 - __main__ - DEBUG - train Loss: 0.4846 2021-07-21 19:51:24,422 - __main__ - DEBUG - val Loss: 0.5962 2021-07-21 19:51:24,424 - __main__ - DEBUG - Epoch 17/499 2021-07-21 19:51:32,866 - __main__ - DEBUG - train Loss: 0.4832 2021-07-21 19:51:35,495 - __main__ - DEBUG - val Loss: 0.5160 2021-07-21 19:51:35,497 - __main__ - DEBUG - Epoch 18/499 2021-07-21 19:51:43,801 - __main__ - DEBUG - train Loss: 0.4800 2021-07-21 19:51:46,602 - __main__ - DEBUG - val Loss: 0.5324 2021-07-21 19:51:46,603 - __main__ - DEBUG - Epoch 19/499 2021-07-21 19:51:55,063 - __main__ - DEBUG - train Loss: 0.4860 2021-07-21 19:51:57,835 - __main__ - DEBUG - val Loss: 0.5257 2021-07-21 19:51:57,837 - __main__ - DEBUG - Epoch 20/499 2021-07-21 19:52:06,336 - __main__ - DEBUG - train Loss: 0.4800 2021-07-21 19:52:08,943 - __main__ - DEBUG - val Loss: 0.4967 2021-07-21 19:52:08,945 - __main__ - DEBUG - Epoch 21/499 2021-07-21 19:52:17,229 - __main__ - DEBUG - train Loss: 0.4754 2021-07-21 19:52:19,831 - __main__ - DEBUG - val Loss: 0.5236 2021-07-21 19:52:19,833 - __main__ - DEBUG - Epoch 22/499 2021-07-21 19:52:28,046 - __main__ - DEBUG - train Loss: 0.4702 2021-07-21 19:52:30,621 - __main__ - DEBUG - val Loss: 0.7657 2021-07-21 19:52:30,623 - __main__ - DEBUG - Epoch 23/499 2021-07-21 19:52:38,781 - __main__ - DEBUG - train Loss: 0.4771 2021-07-21 19:52:41,425 - __main__ - DEBUG - val Loss: 0.5258 2021-07-21 19:52:41,427 - __main__ - DEBUG - Epoch 24/499 2021-07-21 19:52:49,780 - __main__ - DEBUG - train Loss: 0.4785 2021-07-21 19:52:52,581 - __main__ - DEBUG - val Loss: 0.5838 2021-07-21 19:52:52,582 - __main__ - DEBUG - Epoch 25/499 2021-07-21 19:53:01,027 - __main__ - DEBUG - train Loss: 0.4716 2021-07-21 19:53:03,804 - __main__ - DEBUG - val Loss: 0.5328 2021-07-21 19:53:03,805 - __main__ - DEBUG - Epoch 26/499 2021-07-21 19:53:12,204 - __main__ - DEBUG - train Loss: 0.4756 2021-07-21 19:53:14,800 - __main__ - DEBUG - val Loss: 0.4866 2021-07-21 19:53:14,810 - __main__ - DEBUG - Epoch 27/499 2021-07-21 19:53:23,079 - __main__ - DEBUG - train Loss: 0.4719 2021-07-21 19:53:25,702 - __main__ - DEBUG - val Loss: 0.4860 2021-07-21 19:53:25,712 - __main__ - DEBUG - Epoch 28/499 2021-07-21 19:53:33,937 - __main__ - DEBUG - train Loss: 0.4692 2021-07-21 19:53:36,519 - __main__ - DEBUG - val Loss: 0.5204 2021-07-21 19:53:36,521 - __main__ - DEBUG - Epoch 29/499 2021-07-21 19:53:44,728 - __main__ - DEBUG - train Loss: 0.4676 2021-07-21 19:53:47,384 - __main__ - DEBUG - val Loss: 0.6347 2021-07-21 19:53:47,386 - __main__ - DEBUG - Epoch 30/499 2021-07-21 19:53:55,706 - __main__ - DEBUG - train Loss: 0.4624 2021-07-21 19:53:58,531 - __main__ - DEBUG - val Loss: 0.5099 2021-07-21 19:53:58,533 - __main__ - DEBUG - Epoch 31/499 2021-07-21 19:54:07,009 - __main__ - DEBUG - train Loss: 0.4610 2021-07-21 19:54:09,778 - __main__ - DEBUG - val Loss: 0.5070 2021-07-21 19:54:09,780 - __main__ - DEBUG - Epoch 32/499 2021-07-21 19:54:18,213 - __main__ - DEBUG - train Loss: 0.4653 2021-07-21 19:54:20,824 - __main__ - DEBUG - val Loss: 0.5578 2021-07-21 19:54:20,826 - __main__ - DEBUG - Epoch 33/499 2021-07-21 19:54:29,107 - __main__ - DEBUG - train Loss: 0.4667 2021-07-21 19:54:31,772 - __main__ - DEBUG - val Loss: 0.5599 2021-07-21 19:54:31,774 - __main__ - DEBUG - Epoch 34/499 2021-07-21 19:54:40,000 - __main__ - DEBUG - train Loss: 0.4706 2021-07-21 19:54:42,560 - __main__ - DEBUG - val Loss: 0.5165 2021-07-21 19:54:42,562 - __main__ - DEBUG - Epoch 35/499 2021-07-21 19:54:50,828 - __main__ - DEBUG - 
train Loss: 0.4660 2021-07-21 19:54:53,462 - __main__ - DEBUG - val Loss: 0.4866 2021-07-21 19:54:53,464 - __main__ - DEBUG - Epoch 36/499 2021-07-21 19:55:01,718 - __main__ - DEBUG - train Loss: 0.4794 2021-07-21 19:55:04,562 - __main__ - DEBUG - val Loss: 0.5003 2021-07-21 19:55:04,564 - __main__ - DEBUG - Epoch 37/499 2021-07-21 19:55:13,021 - __main__ - DEBUG - train Loss: 0.4659 2021-07-21 19:55:15,792 - __main__ - DEBUG - val Loss: 0.6377 2021-07-21 19:55:15,794 - __main__ - DEBUG - Epoch 38/499 2021-07-21 19:55:24,230 - __main__ - DEBUG - train Loss: 0.4629 2021-07-21 19:55:26,857 - __main__ - DEBUG - val Loss: 0.5010 2021-07-21 19:55:26,859 - __main__ - DEBUG - Epoch 39/499 2021-07-21 19:55:35,030 - __main__ - DEBUG - train Loss: 0.4628 2021-07-21 19:55:37,684 - __main__ - DEBUG - val Loss: 0.6017 2021-07-21 19:55:37,686 - __main__ - DEBUG - Epoch 40/499 2021-07-21 19:55:45,924 - __main__ - DEBUG - train Loss: 0.4599 2021-07-21 19:55:48,566 - __main__ - DEBUG - val Loss: 0.4994 2021-07-21 19:55:48,568 - __main__ - DEBUG - Epoch 41/499 2021-07-21 19:55:56,760 - __main__ - DEBUG - train Loss: 0.4653 2021-07-21 19:55:59,377 - __main__ - DEBUG - val Loss: 0.4944 2021-07-21 19:55:59,378 - __main__ - DEBUG - Epoch 42/499 2021-07-21 19:56:07,769 - __main__ - DEBUG - train Loss: 0.4592 2021-07-21 19:56:10,548 - __main__ - DEBUG - val Loss: 0.5708 2021-07-21 19:56:10,549 - __main__ - DEBUG - Epoch 43/499 2021-07-21 19:56:19,025 - __main__ - DEBUG - train Loss: 0.4603 2021-07-21 19:56:21,796 - __main__ - DEBUG - val Loss: 0.5421 2021-07-21 19:56:21,797 - __main__ - DEBUG - Epoch 44/499 2021-07-21 19:56:30,181 - __main__ - DEBUG - train Loss: 0.4655 2021-07-21 19:56:32,768 - __main__ - DEBUG - val Loss: 0.5487 2021-07-21 19:56:32,770 - __main__ - DEBUG - Epoch 45/499 2021-07-21 19:56:41,050 - __main__ - DEBUG - train Loss: 0.4623 2021-07-21 19:56:43,858 - __main__ - DEBUG - val Loss: 0.5050 2021-07-21 19:56:43,860 - __main__ - DEBUG - Epoch 46/499 2021-07-21 19:56:52,286 - __main__ - DEBUG - train Loss: 0.4548 2021-07-21 19:56:55,063 - __main__ - DEBUG - val Loss: 0.6900 2021-07-21 19:56:55,064 - __main__ - DEBUG - Epoch 47/499 2021-07-21 19:57:03,475 - __main__ - DEBUG - train Loss: 0.4575 2021-07-21 19:57:06,066 - __main__ - DEBUG - val Loss: 0.4961 2021-07-21 19:57:06,068 - __main__ - DEBUG - Epoch 48/499 2021-07-21 19:57:14,347 - __main__ - DEBUG - train Loss: 0.4496 2021-07-21 19:57:16,958 - __main__ - DEBUG - val Loss: 0.4835 2021-07-21 19:57:16,968 - __main__ - DEBUG - Epoch 49/499 2021-07-21 19:57:25,134 - __main__ - DEBUG - train Loss: 0.4471 2021-07-21 19:57:27,751 - __main__ - DEBUG - val Loss: 0.5383 2021-07-21 19:57:27,752 - __main__ - DEBUG - Epoch 50/499 2021-07-21 19:57:35,928 - __main__ - DEBUG - train Loss: 0.4487 2021-07-21 19:57:38,555 - __main__ - DEBUG - val Loss: 0.5361 2021-07-21 19:57:38,557 - __main__ - DEBUG - Epoch 51/499 2021-07-21 19:57:46,837 - __main__ - DEBUG - train Loss: 0.4499 2021-07-21 19:57:49,625 - __main__ - DEBUG - val Loss: 0.5852 2021-07-21 19:57:49,627 - __main__ - DEBUG - Epoch 52/499 2021-07-21 19:57:58,091 - __main__ - DEBUG - train Loss: 0.4511 2021-07-21 19:58:00,847 - __main__ - DEBUG - val Loss: 0.5596 2021-07-21 19:58:00,849 - __main__ - DEBUG - Epoch 53/499 2021-07-21 19:58:09,294 - __main__ - DEBUG - train Loss: 0.4473 2021-07-21 19:58:11,891 - __main__ - DEBUG - val Loss: 0.5320 2021-07-21 19:58:11,892 - __main__ - DEBUG - Epoch 54/499 2021-07-21 19:58:20,145 - __main__ - DEBUG - train Loss: 0.4491 2021-07-21 19:58:22,752 - __main__ - 
DEBUG - val Loss: 0.5050 2021-07-21 19:58:22,754 - __main__ - DEBUG - Epoch 55/499 2021-07-21 19:58:30,937 - __main__ - DEBUG - train Loss: 0.4516 2021-07-21 19:58:33,520 - __main__ - DEBUG - val Loss: 0.5315 2021-07-21 19:58:33,522 - __main__ - DEBUG - Epoch 56/499 2021-07-21 19:58:41,708 - __main__ - DEBUG - train Loss: 0.4537 2021-07-21 19:58:44,383 - __main__ - DEBUG - val Loss: 0.5311 2021-07-21 19:58:44,385 - __main__ - DEBUG - Epoch 57/499 2021-07-21 19:58:52,756 - __main__ - DEBUG - train Loss: 0.4443 2021-07-21 19:58:55,519 - __main__ - DEBUG - val Loss: 0.4704 2021-07-21 19:58:55,529 - __main__ - DEBUG - Epoch 58/499 2021-07-21 19:59:03,950 - __main__ - DEBUG - train Loss: 0.4418 2021-07-21 19:59:06,734 - __main__ - DEBUG - val Loss: 0.4765 2021-07-21 19:59:06,736 - __main__ - DEBUG - Epoch 59/499 2021-07-21 19:59:15,155 - __main__ - DEBUG - train Loss: 0.4430 2021-07-21 19:59:17,766 - __main__ - DEBUG - val Loss: 0.5835 2021-07-21 19:59:17,768 - __main__ - DEBUG - Epoch 60/499 2021-07-21 19:59:26,001 - __main__ - DEBUG - train Loss: 0.4437 2021-07-21 19:59:28,642 - __main__ - DEBUG - val Loss: 0.6347 2021-07-21 19:59:28,644 - __main__ - DEBUG - Epoch 61/499 2021-07-21 19:59:36,845 - __main__ - DEBUG - train Loss: 0.4517 2021-07-21 19:59:39,502 - __main__ - DEBUG - val Loss: 0.4941 2021-07-21 19:59:39,504 - __main__ - DEBUG - Epoch 62/499 2021-07-21 19:59:47,744 - __main__ - DEBUG - train Loss: 0.4495 2021-07-21 19:59:50,337 - __main__ - DEBUG - val Loss: 0.5197 2021-07-21 19:59:50,339 - __main__ - DEBUG - Epoch 63/499 2021-07-21 19:59:58,636 - __main__ - DEBUG - train Loss: 0.4450 2021-07-21 20:00:01,402 - __main__ - DEBUG - val Loss: 0.6089 2021-07-21 20:00:01,404 - __main__ - DEBUG - Epoch 64/499 2021-07-21 20:00:09,916 - __main__ - DEBUG - train Loss: 0.4399 2021-07-21 20:00:12,691 - __main__ - DEBUG - val Loss: 0.5423 2021-07-21 20:00:12,692 - __main__ - DEBUG - Epoch 65/499 2021-07-21 20:00:21,089 - __main__ - DEBUG - train Loss: 0.4401 2021-07-21 20:00:23,691 - __main__ - DEBUG - val Loss: 0.4724 2021-07-21 20:00:23,693 - __main__ - DEBUG - Epoch 66/499 2021-07-21 20:00:31,961 - __main__ - DEBUG - train Loss: 0.4406 2021-07-21 20:00:34,630 - __main__ - DEBUG - val Loss: 0.5328 2021-07-21 20:00:34,632 - __main__ - DEBUG - Epoch 67/499 2021-07-21 20:00:42,844 - __main__ - DEBUG - train Loss: 0.4427 2021-07-21 20:00:45,485 - __main__ - DEBUG - val Loss: 0.4897 2021-07-21 20:00:45,488 - __main__ - DEBUG - Epoch 68/499 2021-07-21 20:00:53,717 - __main__ - DEBUG - train Loss: 0.4463 2021-07-21 20:00:56,345 - __main__ - DEBUG - val Loss: 0.4767 2021-07-21 20:00:56,347 - __main__ - DEBUG - Epoch 69/499 2021-07-21 20:01:04,737 - __main__ - DEBUG - train Loss: 0.4377 2021-07-21 20:01:07,512 - __main__ - DEBUG - val Loss: 0.6272 2021-07-21 20:01:07,514 - __main__ - DEBUG - Epoch 70/499 2021-07-21 20:01:15,955 - __main__ - DEBUG - train Loss: 0.4476 2021-07-21 20:01:18,721 - __main__ - DEBUG - val Loss: 0.4830 2021-07-21 20:01:18,724 - __main__ - DEBUG - Epoch 71/499 2021-07-21 20:01:27,117 - __main__ - DEBUG - train Loss: 0.4481 2021-07-21 20:01:29,716 - __main__ - DEBUG - val Loss: 0.5387 2021-07-21 20:01:29,718 - __main__ - DEBUG - Epoch 72/499 2021-07-21 20:01:37,909 - __main__ - DEBUG - train Loss: 0.4436 2021-07-21 20:01:40,526 - __main__ - DEBUG - val Loss: 0.5136 2021-07-21 20:01:40,527 - __main__ - DEBUG - Epoch 73/499 2021-07-21 20:01:48,737 - __main__ - DEBUG - train Loss: 0.4286 2021-07-21 20:01:51,367 - __main__ - DEBUG - val Loss: 0.5946 2021-07-21 20:01:51,369 - 
__main__ - DEBUG - Epoch 74/499 2021-07-21 20:01:59,667 - __main__ - DEBUG - train Loss: 0.4318 2021-07-21 20:02:02,338 - __main__ - DEBUG - val Loss: 0.5065 2021-07-21 20:02:02,340 - __main__ - DEBUG - Epoch 75/499 2021-07-21 20:02:10,622 - __main__ - DEBUG - train Loss: 0.4376 2021-07-21 20:02:13,350 - __main__ - DEBUG - val Loss: 0.4697 2021-07-21 20:02:13,362 - __main__ - DEBUG - Epoch 76/499 2021-07-21 20:02:21,822 - __main__ - DEBUG - train Loss: 0.4400 2021-07-21 20:02:24,562 - __main__ - DEBUG - val Loss: 0.5548 2021-07-21 20:02:24,564 - __main__ - DEBUG - Epoch 77/499 2021-07-21 20:02:33,007 - __main__ - DEBUG - train Loss: 0.4413 2021-07-21 20:02:35,632 - __main__ - DEBUG - val Loss: 0.5041 2021-07-21 20:02:35,633 - __main__ - DEBUG - Epoch 78/499 2021-07-21 20:02:43,939 - __main__ - DEBUG - train Loss: 0.4339 2021-07-21 20:02:46,697 - __main__ - DEBUG - val Loss: 0.5462 2021-07-21 20:02:46,699 - __main__ - DEBUG - Epoch 79/499 2021-07-21 20:02:55,151 - __main__ - DEBUG - train Loss: 0.4280 2021-07-21 20:02:57,976 - __main__ - DEBUG - val Loss: 0.4559 2021-07-21 20:02:57,990 - __main__ - DEBUG - Epoch 80/499 2021-07-21 20:03:06,388 - __main__ - DEBUG - train Loss: 0.4232 2021-07-21 20:03:09,030 - __main__ - DEBUG - val Loss: 0.4905 2021-07-21 20:03:09,031 - __main__ - DEBUG - Epoch 81/499 2021-07-21 20:03:17,238 - __main__ - DEBUG - train Loss: 0.4218 2021-07-21 20:03:19,871 - __main__ - DEBUG - val Loss: 0.5359 2021-07-21 20:03:19,873 - __main__ - DEBUG - Epoch 82/499 2021-07-21 20:03:28,035 - __main__ - DEBUG - train Loss: 0.4409 2021-07-21 20:03:30,704 - __main__ - DEBUG - val Loss: 0.5545 2021-07-21 20:03:30,706 - __main__ - DEBUG - Epoch 83/499 2021-07-21 20:03:38,928 - __main__ - DEBUG - train Loss: 0.4381 2021-07-21 20:03:41,557 - __main__ - DEBUG - val Loss: 0.5519 2021-07-21 20:03:41,558 - __main__ - DEBUG - Epoch 84/499 2021-07-21 20:03:49,901 - __main__ - DEBUG - train Loss: 0.4238 2021-07-21 20:03:52,658 - __main__ - DEBUG - val Loss: 0.5609 2021-07-21 20:03:52,660 - __main__ - DEBUG - Epoch 85/499 2021-07-21 20:04:01,131 - __main__ - DEBUG - train Loss: 0.4288 2021-07-21 20:04:03,881 - __main__ - DEBUG - val Loss: 0.4923 2021-07-21 20:04:03,883 - __main__ - DEBUG - Epoch 86/499 2021-07-21 20:04:12,265 - __main__ - DEBUG - train Loss: 0.4238 2021-07-21 20:04:14,919 - __main__ - DEBUG - val Loss: 0.4963 2021-07-21 20:04:14,921 - __main__ - DEBUG - Epoch 87/499 2021-07-21 20:04:23,169 - __main__ - DEBUG - train Loss: 0.4244 2021-07-21 20:04:25,799 - __main__ - DEBUG - val Loss: 0.5751 2021-07-21 20:04:25,801 - __main__ - DEBUG - Epoch 88/499 2021-07-21 20:04:33,992 - __main__ - DEBUG - train Loss: 0.4200 2021-07-21 20:04:36,578 - __main__ - DEBUG - val Loss: 0.4822 2021-07-21 20:04:36,580 - __main__ - DEBUG - Epoch 89/499 2021-07-21 20:04:44,767 - __main__ - DEBUG - train Loss: 0.4193 2021-07-21 20:04:47,437 - __main__ - DEBUG - val Loss: 0.4927 2021-07-21 20:04:47,440 - __main__ - DEBUG - Epoch 90/499 2021-07-21 20:04:55,759 - __main__ - DEBUG - train Loss: 0.4197 2021-07-21 20:04:58,512 - __main__ - DEBUG - val Loss: 0.4789 2021-07-21 20:04:58,514 - __main__ - DEBUG - Epoch 91/499 2021-07-21 20:05:06,931 - __main__ - DEBUG - train Loss: 0.4148 2021-07-21 20:05:09,653 - __main__ - DEBUG - val Loss: 0.5120 2021-07-21 20:05:09,654 - __main__ - DEBUG - Epoch 92/499 2021-07-21 20:05:18,114 - __main__ - DEBUG - train Loss: 0.4213 2021-07-21 20:05:20,742 - __main__ - DEBUG - val Loss: 0.4933 2021-07-21 20:05:20,745 - __main__ - DEBUG - Epoch 93/499 2021-07-21 20:05:28,994 - 
__main__ - DEBUG - train Loss: 0.4180 2021-07-21 20:05:31,636 - __main__ - DEBUG - val Loss: 0.4979 2021-07-21 20:05:31,639 - __main__ - DEBUG - Epoch 94/499 2021-07-21 20:05:39,874 - __main__ - DEBUG - train Loss: 0.4224 2021-07-21 20:05:42,487 - __main__ - DEBUG - val Loss: 0.5177 2021-07-21 20:05:42,489 - __main__ - DEBUG - Epoch 95/499 2021-07-21 20:05:50,741 - __main__ - DEBUG - train Loss: 0.4177 2021-07-21 20:05:53,378 - __main__ - DEBUG - val Loss: 0.5166 2021-07-21 20:05:53,380 - __main__ - DEBUG - Epoch 96/499 2021-07-21 20:06:01,745 - __main__ - DEBUG - train Loss: 0.4193 2021-07-21 20:06:04,548 - __main__ - DEBUG - val Loss: 0.5801 2021-07-21 20:06:04,550 - __main__ - DEBUG - Epoch 97/499 2021-07-21 20:06:13,014 - __main__ - DEBUG - train Loss: 0.4167 2021-07-21 20:06:15,741 - __main__ - DEBUG - val Loss: 0.5017 2021-07-21 20:06:15,743 - __main__ - DEBUG - Epoch 98/499 2021-07-21 20:06:24,186 - __main__ - DEBUG - train Loss: 0.4245 2021-07-21 20:06:26,781 - __main__ - DEBUG - val Loss: 0.5806 2021-07-21 20:06:26,783 - __main__ - DEBUG - Epoch 99/499 2021-07-21 20:06:34,990 - __main__ - DEBUG - train Loss: 0.4187 2021-07-21 20:06:37,632 - __main__ - DEBUG - val Loss: 0.4704 2021-07-21 20:06:37,633 - __main__ - DEBUG - Epoch 100/499 2021-07-21 20:06:45,855 - __main__ - DEBUG - train Loss: 0.4169 2021-07-21 20:06:48,462 - __main__ - DEBUG - val Loss: 0.4901 2021-07-21 20:06:48,464 - __main__ - DEBUG - Epoch 101/499 2021-07-21 20:06:56,734 - __main__ - DEBUG - train Loss: 0.4158 2021-07-21 20:06:59,309 - __main__ - DEBUG - val Loss: 0.4950 2021-07-21 20:06:59,310 - __main__ - DEBUG - Epoch 102/499 2021-07-21 20:07:07,600 - __main__ - DEBUG - train Loss: 0.4170 2021-07-21 20:07:10,347 - __main__ - DEBUG - val Loss: 0.4729 2021-07-21 20:07:10,348 - __main__ - DEBUG - Epoch 103/499 2021-07-21 20:07:18,774 - __main__ - DEBUG - train Loss: 0.4097 2021-07-21 20:07:21,522 - __main__ - DEBUG - val Loss: 0.5407 2021-07-21 20:07:21,524 - __main__ - DEBUG - Epoch 104/499 2021-07-21 20:07:29,910 - __main__ - DEBUG - train Loss: 0.4088 2021-07-21 20:07:32,520 - __main__ - DEBUG - val Loss: 0.4948 2021-07-21 20:07:32,521 - __main__ - DEBUG - Epoch 105/499 2021-07-21 20:07:40,804 - __main__ - DEBUG - train Loss: 0.4009 2021-07-21 20:07:43,615 - __main__ - DEBUG - val Loss: 0.4878 2021-07-21 20:07:43,617 - __main__ - DEBUG - Epoch 106/499 2021-07-21 20:07:52,022 - __main__ - DEBUG - train Loss: 0.3969 2021-07-21 20:07:54,777 - __main__ - DEBUG - val Loss: 0.5315 2021-07-21 20:07:54,779 - __main__ - DEBUG - Epoch 107/499 2021-07-21 20:08:03,170 - __main__ - DEBUG - train Loss: 0.4026 2021-07-21 20:08:05,777 - __main__ - DEBUG - val Loss: 0.4986 2021-07-21 20:08:05,778 - __main__ - DEBUG - Epoch 108/499 2021-07-21 20:08:14,053 - __main__ - DEBUG - train Loss: 0.4094 2021-07-21 20:08:16,652 - __main__ - DEBUG - val Loss: 0.6988 2021-07-21 20:08:16,654 - __main__ - DEBUG - Epoch 109/499 2021-07-21 20:08:24,830 - __main__ - DEBUG - train Loss: 0.4105 2021-07-21 20:08:27,395 - __main__ - DEBUG - val Loss: 0.5153 2021-07-21 20:08:27,396 - __main__ - DEBUG - Epoch 110/499 2021-07-21 20:08:35,530 - __main__ - DEBUG - train Loss: 0.4099 2021-07-21 20:08:38,133 - __main__ - DEBUG - val Loss: 0.5246 2021-07-21 20:08:38,134 - __main__ - DEBUG - Epoch 111/499 2021-07-21 20:08:46,363 - __main__ - DEBUG - train Loss: 0.4116 2021-07-21 20:08:49,156 - __main__ - DEBUG - val Loss: 0.5546 2021-07-21 20:08:49,158 - __main__ - DEBUG - Epoch 112/499 2021-07-21 20:08:57,612 - __main__ - DEBUG - train Loss: 0.4005 
2021-07-21 20:09:00,334 - __main__ - DEBUG - val Loss: 0.5666 2021-07-21 20:09:00,336 - __main__ - DEBUG - Epoch 113/499 2021-07-21 20:09:08,804 - __main__ - DEBUG - train Loss: 0.4022 2021-07-21 20:09:11,406 - __main__ - DEBUG - val Loss: 0.4990 2021-07-21 20:09:11,408 - __main__ - DEBUG - Epoch 114/499 2021-07-21 20:09:19,695 - __main__ - DEBUG - train Loss: 0.4061 2021-07-21 20:09:22,389 - __main__ - DEBUG - val Loss: 0.5179 2021-07-21 20:09:22,390 - __main__ - DEBUG - Epoch 115/499 2021-07-21 20:09:30,594 - __main__ - DEBUG - train Loss: 0.3996 2021-07-21 20:09:33,208 - __main__ - DEBUG - val Loss: 0.4999 2021-07-21 20:09:33,210 - __main__ - DEBUG - Epoch 116/499 2021-07-21 20:09:41,400 - __main__ - DEBUG - train Loss: 0.4055 2021-07-21 20:09:43,994 - __main__ - DEBUG - val Loss: 0.5155 2021-07-21 20:09:43,996 - __main__ - DEBUG - Epoch 117/499 2021-07-21 20:09:52,204 - __main__ - DEBUG - train Loss: 0.4031 2021-07-21 20:09:54,999 - __main__ - DEBUG - val Loss: 0.5037 2021-07-21 20:09:55,001 - __main__ - DEBUG - Epoch 118/499 2021-07-21 20:10:03,521 - __main__ - DEBUG - train Loss: 0.4036 2021-07-21 20:10:06,352 - __main__ - DEBUG - val Loss: 0.5519 2021-07-21 20:10:06,354 - __main__ - DEBUG - Epoch 119/499 2021-07-21 20:10:14,766 - __main__ - DEBUG - train Loss: 0.3960 2021-07-21 20:10:17,323 - __main__ - DEBUG - val Loss: 0.4741 2021-07-21 20:10:17,325 - __main__ - DEBUG - Epoch 120/499 2021-07-21 20:10:25,518 - __main__ - DEBUG - train Loss: 0.4064 2021-07-21 20:10:28,194 - __main__ - DEBUG - val Loss: 0.4859 2021-07-21 20:10:28,196 - __main__ - DEBUG - Epoch 121/499 2021-07-21 20:10:36,328 - __main__ - DEBUG - train Loss: 0.4001 2021-07-21 20:10:38,948 - __main__ - DEBUG - val Loss: 0.5070 2021-07-21 20:10:38,950 - __main__ - DEBUG - Epoch 122/499 2021-07-21 20:10:47,112 - __main__ - DEBUG - train Loss: 0.3854 2021-07-21 20:10:49,750 - __main__ - DEBUG - val Loss: 0.4955 2021-07-21 20:10:49,752 - __main__ - DEBUG - Epoch 123/499 2021-07-21 20:10:58,025 - __main__ - DEBUG - train Loss: 0.3852 2021-07-21 20:11:00,773 - __main__ - DEBUG - val Loss: 0.5777 2021-07-21 20:11:00,774 - __main__ - DEBUG - Epoch 124/499 2021-07-21 20:11:09,214 - __main__ - DEBUG - train Loss: 0.3969 2021-07-21 20:11:11,977 - __main__ - DEBUG - val Loss: 0.6686 2021-07-21 20:11:11,979 - __main__ - DEBUG - Epoch 125/499 2021-07-21 20:11:20,425 - __main__ - DEBUG - train Loss: 0.3990 2021-07-21 20:11:23,030 - __main__ - DEBUG - val Loss: 0.5622 2021-07-21 20:11:23,031 - __main__ - DEBUG - Epoch 126/499 2021-07-21 20:11:31,261 - __main__ - DEBUG - train Loss: 0.3904 2021-07-21 20:11:33,899 - __main__ - DEBUG - val Loss: 0.4785 2021-07-21 20:11:33,900 - __main__ - DEBUG - Epoch 127/499 2021-07-21 20:11:42,057 - __main__ - DEBUG - train Loss: 0.3853 2021-07-21 20:11:44,649 - __main__ - DEBUG - val Loss: 0.6093 2021-07-21 20:11:44,651 - __main__ - DEBUG - Epoch 128/499 2021-07-21 20:11:52,879 - __main__ - DEBUG - train Loss: 0.3901 2021-07-21 20:11:55,476 - __main__ - DEBUG - val Loss: 0.4859 2021-07-21 20:11:55,479 - __main__ - DEBUG - Epoch 129/499 2021-07-21 20:12:03,723 - __main__ - DEBUG - train Loss: 0.3890 2021-07-21 20:12:06,556 - __main__ - DEBUG - val Loss: 0.5002 2021-07-21 20:12:06,558 - __main__ - DEBUG - Epoch 130/499 2021-07-21 20:12:15,028 - __main__ - DEBUG - train Loss: 0.3872 2021-07-21 20:12:17,796 - __main__ - DEBUG - val Loss: 0.5427 2021-07-21 20:12:17,798 - __main__ - DEBUG - Epoch 131/499 2021-07-21 20:12:26,222 - __main__ - DEBUG - train Loss: 0.3901 2021-07-21 20:12:28,829 - __main__ - 
DEBUG - val Loss: 0.5499 2021-07-21 20:12:28,830 - __main__ - DEBUG - Epoch 132/499 2021-07-21 20:12:36,995 - __main__ - DEBUG - train Loss: 0.3945 2021-07-21 20:12:39,679 - __main__ - DEBUG - val Loss: 0.5409 2021-07-21 20:12:39,680 - __main__ - DEBUG - Epoch 133/499 2021-07-21 20:12:47,823 - __main__ - DEBUG - train Loss: 0.3851 2021-07-21 20:12:50,449 - __main__ - DEBUG - val Loss: 0.4935 2021-07-21 20:12:50,451 - __main__ - DEBUG - Epoch 134/499 2021-07-21 20:12:58,650 - __main__ - DEBUG - train Loss: 0.3849 2021-07-21 20:13:01,251 - __main__ - DEBUG - val Loss: 0.5270 2021-07-21 20:13:01,252 - __main__ - DEBUG - Epoch 135/499 2021-07-21 20:13:09,502 - __main__ - DEBUG - train Loss: 0.3738 2021-07-21 20:13:12,299 - __main__ - DEBUG - val Loss: 0.6596 2021-07-21 20:13:12,301 - __main__ - DEBUG - Epoch 136/499 2021-07-21 20:13:20,715 - __main__ - DEBUG - train Loss: 0.3740 2021-07-21 20:13:23,442 - __main__ - DEBUG - val Loss: 0.4937 2021-07-21 20:13:23,444 - __main__ - DEBUG - Epoch 137/499 2021-07-21 20:13:31,887 - __main__ - DEBUG - train Loss: 0.3675 2021-07-21 20:13:34,474 - __main__ - DEBUG - val Loss: 0.5034 2021-07-21 20:13:34,475 - __main__ - DEBUG - Epoch 138/499 2021-07-21 20:13:42,691 - __main__ - DEBUG - train Loss: 0.3894 2021-07-21 20:13:45,452 - __main__ - DEBUG - val Loss: 0.5025 2021-07-21 20:13:45,454 - __main__ - DEBUG - Epoch 139/499 2021-07-21 20:13:53,870 - __main__ - DEBUG - train Loss: 0.3801 2021-07-21 20:13:56,635 - __main__ - DEBUG - val Loss: 0.4824 2021-07-21 20:13:56,637 - __main__ - DEBUG - Epoch 140/499 2021-07-21 20:14:05,031 - __main__ - DEBUG - train Loss: 0.3824 2021-07-21 20:14:07,677 - __main__ - DEBUG - val Loss: 0.5749 2021-07-21 20:14:07,678 - __main__ - DEBUG - Epoch 141/499 2021-07-21 20:14:15,869 - __main__ - DEBUG - train Loss: 0.3765 2021-07-21 20:14:18,475 - __main__ - DEBUG - val Loss: 0.5110 2021-07-21 20:14:18,476 - __main__ - DEBUG - Epoch 142/499 2021-07-21 20:14:26,621 - __main__ - DEBUG - train Loss: 0.3665 2021-07-21 20:14:29,219 - __main__ - DEBUG - val Loss: 0.4913 2021-07-21 20:14:29,222 - __main__ - DEBUG - Epoch 143/499 2021-07-21 20:14:37,432 - __main__ - DEBUG - train Loss: 0.3673 2021-07-21 20:14:40,035 - __main__ - DEBUG - val Loss: 0.4918 2021-07-21 20:14:40,037 - __main__ - DEBUG - Epoch 144/499 2021-07-21 20:14:48,235 - __main__ - DEBUG - train Loss: 0.3718 2021-07-21 20:14:51,000 - __main__ - DEBUG - val Loss: 0.6820 2021-07-21 20:14:51,001 - __main__ - DEBUG - Epoch 145/499 2021-07-21 20:14:59,463 - __main__ - DEBUG - train Loss: 0.3729 2021-07-21 20:15:02,200 - __main__ - DEBUG - val Loss: 0.4613 2021-07-21 20:15:02,202 - __main__ - DEBUG - Epoch 146/499 2021-07-21 20:15:10,619 - __main__ - DEBUG - train Loss: 0.3763 2021-07-21 20:15:13,269 - __main__ - DEBUG - val Loss: 0.5138 2021-07-21 20:15:13,271 - __main__ - DEBUG - Epoch 147/499 2021-07-21 20:15:21,511 - __main__ - DEBUG - train Loss: 0.3747 2021-07-21 20:15:24,160 - __main__ - DEBUG - val Loss: 0.5858 2021-07-21 20:15:24,162 - __main__ - DEBUG - Epoch 148/499 2021-07-21 20:15:32,355 - __main__ - DEBUG - train Loss: 0.3702 2021-07-21 20:15:34,986 - __main__ - DEBUG - val Loss: 0.6189 2021-07-21 20:15:34,988 - __main__ - DEBUG - Epoch 149/499 2021-07-21 20:15:43,193 - __main__ - DEBUG - train Loss: 0.3695 2021-07-21 20:15:45,843 - __main__ - DEBUG - val Loss: 0.5755 2021-07-21 20:15:45,845 - __main__ - DEBUG - Epoch 150/499 2021-07-21 20:15:54,136 - __main__ - DEBUG - train Loss: 0.3721 2021-07-21 20:15:56,924 - __main__ - DEBUG - val Loss: 0.5198 2021-07-21 
20:15:56,926 - __main__ - DEBUG - Epoch 151/499 2021-07-21 20:16:05,389 - __main__ - DEBUG - train Loss: 0.3639 2021-07-21 20:16:08,218 - __main__ - DEBUG - val Loss: 0.4767 2021-07-21 20:16:08,219 - __main__ - DEBUG - Epoch 152/499 2021-07-21 20:16:16,708 - __main__ - DEBUG - train Loss: 0.3622 2021-07-21 20:16:19,355 - __main__ - DEBUG - val Loss: 0.5455 2021-07-21 20:16:19,357 - __main__ - DEBUG - Epoch 153/499 2021-07-21 20:16:27,486 - __main__ - DEBUG - train Loss: 0.3618 2021-07-21 20:16:30,112 - __main__ - DEBUG - val Loss: 0.5550 2021-07-21 20:16:30,114 - __main__ - DEBUG - Epoch 154/499 2021-07-21 20:16:38,328 - __main__ - DEBUG - train Loss: 0.3618 2021-07-21 20:16:40,936 - __main__ - DEBUG - val Loss: 0.5522 2021-07-21 20:16:40,938 - __main__ - DEBUG - Epoch 155/499 2021-07-21 20:16:49,166 - __main__ - DEBUG - train Loss: 0.3654 2021-07-21 20:16:51,802 - __main__ - DEBUG - val Loss: 0.6095 2021-07-21 20:16:51,804 - __main__ - DEBUG - Epoch 156/499 2021-07-21 20:16:59,995 - __main__ - DEBUG - train Loss: 0.3592 2021-07-21 20:17:02,765 - __main__ - DEBUG - val Loss: 0.5349 2021-07-21 20:17:02,767 - __main__ - DEBUG - Epoch 157/499 2021-07-21 20:17:11,238 - __main__ - DEBUG - train Loss: 0.3657 2021-07-21 20:17:14,009 - __main__ - DEBUG - val Loss: 0.5453 2021-07-21 20:17:14,010 - __main__ - DEBUG - Epoch 158/499 2021-07-21 20:17:22,471 - __main__ - DEBUG - train Loss: 0.3652 2021-07-21 20:17:25,133 - __main__ - DEBUG - val Loss: 0.4869 2021-07-21 20:17:25,135 - __main__ - DEBUG - Epoch 159/499 2021-07-21 20:17:33,348 - __main__ - DEBUG - train Loss: 0.3658 2021-07-21 20:17:35,978 - __main__ - DEBUG - val Loss: 0.6156 2021-07-21 20:17:35,980 - __main__ - DEBUG - Epoch 160/499 2021-07-21 20:17:44,197 - __main__ - DEBUG - train Loss: 0.3641 2021-07-21 20:17:46,863 - __main__ - DEBUG - val Loss: 0.4780 2021-07-21 20:17:46,864 - __main__ - DEBUG - Epoch 161/499 2021-07-21 20:17:55,051 - __main__ - DEBUG - train Loss: 0.3649 2021-07-21 20:17:57,686 - __main__ - DEBUG - val Loss: 0.5176 2021-07-21 20:17:57,688 - __main__ - DEBUG - Epoch 162/499 2021-07-21 20:18:05,942 - __main__ - DEBUG - train Loss: 0.3628 2021-07-21 20:18:08,713 - __main__ - DEBUG - val Loss: 0.4872 2021-07-21 20:18:08,715 - __main__ - DEBUG - Epoch 163/499 2021-07-21 20:18:17,103 - __main__ - DEBUG - train Loss: 0.3550 2021-07-21 20:18:19,876 - __main__ - DEBUG - val Loss: 0.5916 2021-07-21 20:18:19,877 - __main__ - DEBUG - Epoch 164/499 2021-07-21 20:18:28,366 - __main__ - DEBUG - train Loss: 0.3489 2021-07-21 20:18:30,969 - __main__ - DEBUG - val Loss: 0.5172 2021-07-21 20:18:30,971 - __main__ - DEBUG - Epoch 165/499 2021-07-21 20:18:39,201 - __main__ - DEBUG - train Loss: 0.3475 2021-07-21 20:18:42,054 - __main__ - DEBUG - val Loss: 0.5257 2021-07-21 20:18:42,056 - __main__ - DEBUG - Epoch 166/499 2021-07-21 20:18:50,510 - __main__ - DEBUG - train Loss: 0.3334 2021-07-21 20:18:53,302 - __main__ - DEBUG - val Loss: 0.5475 2021-07-21 20:18:53,303 - __main__ - DEBUG - Epoch 167/499 2021-07-21 20:19:01,782 - __main__ - DEBUG - train Loss: 0.3469 2021-07-21 20:19:04,388 - __main__ - DEBUG - val Loss: 0.7318 2021-07-21 20:19:04,390 - __main__ - DEBUG - Epoch 168/499 2021-07-21 20:19:12,637 - __main__ - DEBUG - train Loss: 0.3541 2021-07-21 20:19:15,265 - __main__ - DEBUG - val Loss: 0.5272 2021-07-21 20:19:15,267 - __main__ - DEBUG - Epoch 169/499 2021-07-21 20:19:23,475 - __main__ - DEBUG - train Loss: 0.3636 2021-07-21 20:19:26,079 - __main__ - DEBUG - val Loss: 0.5108 2021-07-21 20:19:26,080 - __main__ - DEBUG - Epoch 
170/499 2021-07-21 20:19:34,264 - __main__ - DEBUG - train Loss: 0.3488 2021-07-21 20:19:36,880 - __main__ - DEBUG - val Loss: 0.5908 2021-07-21 20:19:36,882 - __main__ - DEBUG - Epoch 171/499 2021-07-21 20:19:45,124 - __main__ - DEBUG - train Loss: 0.3593 2021-07-21 20:19:47,880 - __main__ - DEBUG - val Loss: 0.5713 2021-07-21 20:19:47,881 - __main__ - DEBUG - Epoch 172/499 2021-07-21 20:19:56,316 - __main__ - DEBUG - train Loss: 0.3390 2021-07-21 20:19:59,112 - __main__ - DEBUG - val Loss: 0.5112 2021-07-21 20:19:59,114 - __main__ - DEBUG - Epoch 173/499 2021-07-21 20:20:07,561 - __main__ - DEBUG - train Loss: 0.3412 2021-07-21 20:20:10,180 - __main__ - DEBUG - val Loss: 0.5806 2021-07-21 20:20:10,182 - __main__ - DEBUG - Epoch 174/499 2021-07-21 20:20:18,391 - __main__ - DEBUG - train Loss: 0.3455 2021-07-21 20:20:21,083 - __main__ - DEBUG - val Loss: 0.5144 2021-07-21 20:20:21,085 - __main__ - DEBUG - Epoch 175/499 2021-07-21 20:20:29,300 - __main__ - DEBUG - train Loss: 0.3341 2021-07-21 20:20:31,898 - __main__ - DEBUG - val Loss: 0.5369 2021-07-21 20:20:31,900 - __main__ - DEBUG - Epoch 176/499 2021-07-21 20:20:40,075 - __main__ - DEBUG - train Loss: 0.3499 2021-07-21 20:20:42,689 - __main__ - DEBUG - val Loss: 0.6134 2021-07-21 20:20:42,691 - __main__ - DEBUG - Epoch 177/499 2021-07-21 20:20:50,910 - __main__ - DEBUG - train Loss: 0.3450 2021-07-21 20:20:53,687 - __main__ - DEBUG - val Loss: 0.5693 2021-07-21 20:20:53,689 - __main__ - DEBUG - Epoch 178/499 2021-07-21 20:21:02,155 - __main__ - DEBUG - train Loss: 0.3422 2021-07-21 20:21:04,979 - __main__ - DEBUG - val Loss: 0.6029 2021-07-21 20:21:04,981 - __main__ - DEBUG - Epoch 179/499 2021-07-21 20:21:13,489 - __main__ - DEBUG - train Loss: 0.3313 2021-07-21 20:21:16,147 - __main__ - DEBUG - val Loss: 0.6410 2021-07-21 20:21:16,149 - __main__ - DEBUG - Epoch 180/499 2021-07-21 20:21:24,366 - __main__ - DEBUG - train Loss: 0.3356 2021-07-21 20:21:27,012 - __main__ - DEBUG - val Loss: 0.5680 2021-07-21 20:21:27,014 - __main__ - DEBUG - Epoch 181/499 2021-07-21 20:21:35,181 - __main__ - DEBUG - train Loss: 0.3420 2021-07-21 20:21:37,777 - __main__ - DEBUG - val Loss: 0.5432 2021-07-21 20:21:37,779 - __main__ - DEBUG - Epoch 182/499 2021-07-21 20:21:45,999 - __main__ - DEBUG - train Loss: 0.3402 2021-07-21 20:21:48,640 - __main__ - DEBUG - val Loss: 0.6301 2021-07-21 20:21:48,642 - __main__ - DEBUG - Epoch 183/499 2021-07-21 20:21:56,843 - __main__ - DEBUG - train Loss: 0.3478 2021-07-21 20:21:59,667 - __main__ - DEBUG - val Loss: 0.5517 2021-07-21 20:21:59,669 - __main__ - DEBUG - Epoch 184/499 2021-07-21 20:22:08,136 - __main__ - DEBUG - train Loss: 0.3459 2021-07-21 20:22:10,939 - __main__ - DEBUG - val Loss: 0.5787 2021-07-21 20:22:10,940 - __main__ - DEBUG - Epoch 185/499 2021-07-21 20:22:19,331 - __main__ - DEBUG - train Loss: 0.3445 2021-07-21 20:22:21,972 - __main__ - DEBUG - val Loss: 0.6333 2021-07-21 20:22:21,974 - __main__ - DEBUG - Epoch 186/499 2021-07-21 20:22:30,169 - __main__ - DEBUG - train Loss: 0.3439 2021-07-21 20:22:32,792 - __main__ - DEBUG - val Loss: 0.5235 2021-07-21 20:22:32,793 - __main__ - DEBUG - Epoch 187/499 2021-07-21 20:22:41,060 - __main__ - DEBUG - train Loss: 0.3368 2021-07-21 20:22:43,700 - __main__ - DEBUG - val Loss: 0.5019 2021-07-21 20:22:43,702 - __main__ - DEBUG - Epoch 188/499 2021-07-21 20:22:51,958 - __main__ - DEBUG - train Loss: 0.3464 2021-07-21 20:22:54,653 - __main__ - DEBUG - val Loss: 0.5108 2021-07-21 20:22:54,655 - __main__ - DEBUG - Epoch 189/499 2021-07-21 20:23:02,964 - 
__main__ - DEBUG - train Loss: 0.3494 2021-07-21 20:23:05,773 - __main__ - DEBUG - val Loss: 0.6541 2021-07-21 20:23:05,775 - __main__ - DEBUG - Epoch 190/499 2021-07-21 20:23:14,268 - __main__ - DEBUG - train Loss: 0.3334 2021-07-21 20:23:17,084 - __main__ - DEBUG - val Loss: 0.4633 2021-07-21 20:23:17,086 - __main__ - DEBUG - Epoch 191/499 2021-07-21 20:23:25,655 - __main__ - DEBUG - train Loss: 0.3209 2021-07-21 20:23:28,284 - __main__ - DEBUG - val Loss: 0.5079 2021-07-21 20:23:28,287 - __main__ - DEBUG - Epoch 192/499 2021-07-21 20:23:36,605 - __main__ - DEBUG - train Loss: 0.3162 2021-07-21 20:23:39,254 - __main__ - DEBUG - val Loss: 0.5408 2021-07-21 20:23:39,256 - __main__ - DEBUG - Epoch 193/499 2021-07-21 20:23:47,622 - __main__ - DEBUG - train Loss: 0.3432 2021-07-21 20:23:50,254 - __main__ - DEBUG - val Loss: 0.5864 2021-07-21 20:23:50,255 - __main__ - DEBUG - Epoch 194/499 2021-07-21 20:23:58,520 - __main__ - DEBUG - train Loss: 0.3226 2021-07-21 20:24:01,226 - __main__ - DEBUG - val Loss: 0.5458 2021-07-21 20:24:01,228 - __main__ - DEBUG - Epoch 195/499 2021-07-21 20:24:09,542 - __main__ - DEBUG - train Loss: 0.3172 2021-07-21 20:24:12,369 - __main__ - DEBUG - val Loss: 0.4888 2021-07-21 20:24:12,371 - __main__ - DEBUG - Epoch 196/499 2021-07-21 20:24:20,910 - __main__ - DEBUG - train Loss: 0.3219 2021-07-21 20:24:23,689 - __main__ - DEBUG - val Loss: 0.6979 2021-07-21 20:24:23,691 - __main__ - DEBUG - Epoch 197/499 2021-07-21 20:24:32,205 - __main__ - DEBUG - train Loss: 0.3216 2021-07-21 20:24:34,863 - __main__ - DEBUG - val Loss: 0.5263 2021-07-21 20:24:34,865 - __main__ - DEBUG - Epoch 198/499 2021-07-21 20:24:43,168 - __main__ - DEBUG - train Loss: 0.3221 2021-07-21 20:24:45,981 - __main__ - DEBUG - val Loss: 0.6137 2021-07-21 20:24:45,984 - __main__ - DEBUG - Epoch 199/499 2021-07-21 20:24:54,515 - __main__ - DEBUG - train Loss: 0.3171 2021-07-21 20:24:57,328 - __main__ - DEBUG - val Loss: 0.5743 2021-07-21 20:24:57,330 - __main__ - DEBUG - Epoch 200/499 2021-07-21 20:25:05,874 - __main__ - DEBUG - train Loss: 0.3271 2021-07-21 20:25:08,518 - __main__ - DEBUG - val Loss: 0.5958 2021-07-21 20:25:08,520 - __main__ - DEBUG - Epoch 201/499 2021-07-21 20:25:16,845 - __main__ - DEBUG - train Loss: 0.3309 2021-07-21 20:25:19,490 - __main__ - DEBUG - val Loss: 0.6098 2021-07-21 20:25:19,492 - __main__ - DEBUG - Epoch 202/499 2021-07-21 20:25:27,757 - __main__ - DEBUG - train Loss: 0.3295 2021-07-21 20:25:30,505 - __main__ - DEBUG - val Loss: 0.5212 2021-07-21 20:25:30,507 - __main__ - DEBUG - Epoch 203/499 2021-07-21 20:25:38,900 - __main__ - DEBUG - train Loss: 0.3197 2021-07-21 20:25:41,550 - __main__ - DEBUG - val Loss: 0.5129 2021-07-21 20:25:41,552 - __main__ - DEBUG - Epoch 204/499 2021-07-21 20:25:49,894 - __main__ - DEBUG - train Loss: 0.3120 2021-07-21 20:25:52,697 - __main__ - DEBUG - val Loss: 0.5657 2021-07-21 20:25:52,699 - __main__ - DEBUG - Epoch 205/499 2021-07-21 20:26:01,226 - __main__ - DEBUG - train Loss: 0.3194 2021-07-21 20:26:04,024 - __main__ - DEBUG - val Loss: 0.5907 2021-07-21 20:26:04,026 - __main__ - DEBUG - Epoch 206/499 2021-07-21 20:26:12,521 - __main__ - DEBUG - train Loss: 0.3091 2021-07-21 20:26:15,235 - __main__ - DEBUG - val Loss: 0.5573 2021-07-21 20:26:15,237 - __main__ - DEBUG - Epoch 207/499 2021-07-21 20:26:23,617 - __main__ - DEBUG - train Loss: 0.3192 2021-07-21 20:26:26,327 - __main__ - DEBUG - val Loss: 0.5896 2021-07-21 20:26:26,329 - __main__ - DEBUG - Epoch 208/499 2021-07-21 20:26:34,602 - __main__ - DEBUG - train Loss: 0.3079 
2021-07-21 20:26:37,252 - __main__ - DEBUG - val Loss: 0.5461 2021-07-21 20:26:37,254 - __main__ - DEBUG - Epoch 209/499 2021-07-21 20:26:45,561 - __main__ - DEBUG - train Loss: 0.3002 2021-07-21 20:26:48,240 - __main__ - DEBUG - val Loss: 0.6271 2021-07-21 20:26:48,242 - __main__ - DEBUG - Epoch 210/499 2021-07-21 20:26:56,592 - __main__ - DEBUG - train Loss: 0.3016 2021-07-21 20:26:59,412 - __main__ - DEBUG - val Loss: 0.7629 2021-07-21 20:26:59,414 - __main__ - DEBUG - Epoch 211/499 2021-07-21 20:27:07,935 - __main__ - DEBUG - train Loss: 0.3079 2021-07-21 20:27:10,763 - __main__ - DEBUG - val Loss: 0.7392 2021-07-21 20:27:10,765 - __main__ - DEBUG - Epoch 212/499 2021-07-21 20:27:19,328 - __main__ - DEBUG - train Loss: 0.3117 2021-07-21 20:27:21,964 - __main__ - DEBUG - val Loss: 0.6046 2021-07-21 20:27:21,966 - __main__ - DEBUG - Epoch 213/499 2021-07-21 20:27:30,337 - __main__ - DEBUG - train Loss: 0.3229 2021-07-21 20:27:33,065 - __main__ - DEBUG - val Loss: 0.5931 2021-07-21 20:27:33,067 - __main__ - DEBUG - Epoch 214/499 2021-07-21 20:27:41,563 - __main__ - DEBUG - train Loss: 0.3113 2021-07-21 20:27:44,315 - __main__ - DEBUG - val Loss: 0.5669 2021-07-21 20:27:44,316 - __main__ - DEBUG - Epoch 215/499 2021-07-21 20:27:52,591 - __main__ - DEBUG - train Loss: 0.3098 2021-07-21 20:27:55,288 - __main__ - DEBUG - val Loss: 0.8357 2021-07-21 20:27:55,291 - __main__ - DEBUG - Epoch 216/499 2021-07-21 20:28:03,834 - __main__ - DEBUG - train Loss: 0.2904 2021-07-21 20:28:06,651 - __main__ - DEBUG - val Loss: 0.5543 2021-07-21 20:28:06,653 - __main__ - DEBUG - Epoch 217/499 2021-07-21 20:28:15,216 - __main__ - DEBUG - train Loss: 0.2992 2021-07-21 20:28:18,023 - __main__ - DEBUG - val Loss: 0.5817 2021-07-21 20:28:18,025 - __main__ - DEBUG - Epoch 218/499 2021-07-21 20:28:26,618 - __main__ - DEBUG - train Loss: 0.3089 2021-07-21 20:28:29,355 - __main__ - DEBUG - val Loss: 0.6606 2021-07-21 20:28:29,356 - __main__ - DEBUG - Epoch 219/499 2021-07-21 20:28:37,611 - __main__ - DEBUG - train Loss: 0.3004 2021-07-21 20:28:40,294 - __main__ - DEBUG - val Loss: 0.6560 2021-07-21 20:28:40,296 - __main__ - DEBUG - Epoch 220/499 2021-07-21 20:28:48,644 - __main__ - DEBUG - train Loss: 0.3037 2021-07-21 20:28:51,355 - __main__ - DEBUG - val Loss: 0.6510 2021-07-21 20:28:51,357 - __main__ - DEBUG - Epoch 221/499 2021-07-21 20:28:59,706 - __main__ - DEBUG - train Loss: 0.3079 2021-07-21 20:29:02,411 - __main__ - DEBUG - val Loss: 0.6042 2021-07-21 20:29:02,413 - __main__ - DEBUG - Epoch 222/499 2021-07-21 20:29:11,016 - __main__ - DEBUG - train Loss: 0.3104 2021-07-21 20:29:13,859 - __main__ - DEBUG - val Loss: 0.5764 2021-07-21 20:29:13,862 - __main__ - DEBUG - Epoch 223/499 2021-07-21 20:29:22,416 - __main__ - DEBUG - train Loss: 0.2997 2021-07-21 20:29:25,275 - __main__ - DEBUG - val Loss: 0.6142 2021-07-21 20:29:25,277 - __main__ - DEBUG - Epoch 224/499 2021-07-21 20:29:33,763 - __main__ - DEBUG - train Loss: 0.2985 2021-07-21 20:29:36,418 - __main__ - DEBUG - val Loss: 0.5204 2021-07-21 20:29:36,420 - __main__ - DEBUG - Epoch 225/499 2021-07-21 20:29:44,947 - __main__ - DEBUG - train Loss: 0.2929 2021-07-21 20:29:47,811 - __main__ - DEBUG - val Loss: 0.5537 2021-07-21 20:29:47,813 - __main__ - DEBUG - Epoch 226/499 2021-07-21 20:29:56,369 - __main__ - DEBUG - train Loss: 0.3054 2021-07-21 20:29:59,183 - __main__ - DEBUG - val Loss: 0.6361 2021-07-21 20:29:59,185 - __main__ - DEBUG - Epoch 227/499 2021-07-21 20:30:07,618 - __main__ - DEBUG - train Loss: 0.2967 2021-07-21 20:30:10,368 - __main__ - 
DEBUG - val Loss: 0.7319 2021-07-21 20:30:10,370 - __main__ - DEBUG - Epoch 228/499 2021-07-21 20:30:18,696 - __main__ - DEBUG - train Loss: 0.2915 2021-07-21 20:30:21,298 - __main__ - DEBUG - val Loss: 0.5815 2021-07-21 20:30:21,300 - __main__ - DEBUG - Epoch 229/499 2021-07-21 20:30:29,560 - __main__ - DEBUG - train Loss: 0.3089 2021-07-21 20:30:32,233 - __main__ - DEBUG - val Loss: 0.5950 2021-07-21 20:30:32,235 - __main__ - DEBUG - Epoch 230/499 2021-07-21 20:30:40,530 - __main__ - DEBUG - train Loss: 0.2908 2021-07-21 20:30:43,335 - __main__ - DEBUG - val Loss: 0.6695 2021-07-21 20:30:43,337 - __main__ - DEBUG - Epoch 231/499 2021-07-21 20:30:51,985 - __main__ - DEBUG - train Loss: 0.2913 2021-07-21 20:30:54,808 - __main__ - DEBUG - val Loss: 0.5716 2021-07-21 20:30:54,810 - __main__ - DEBUG - Epoch 232/499 2021-07-21 20:31:03,412 - __main__ - DEBUG - train Loss: 0.2942 2021-07-21 20:31:06,252 - __main__ - DEBUG - val Loss: 0.5891 2021-07-21 20:31:06,254 - __main__ - DEBUG - Epoch 233/499 2021-07-21 20:31:14,637 - __main__ - DEBUG - train Loss: 0.2968 2021-07-21 20:31:17,370 - __main__ - DEBUG - val Loss: 0.5838 2021-07-21 20:31:17,372 - __main__ - DEBUG - Epoch 234/499 2021-07-21 20:31:25,718 - __main__ - DEBUG - train Loss: 0.2824 2021-07-21 20:31:28,341 - __main__ - DEBUG - val Loss: 0.5770 2021-07-21 20:31:28,343 - __main__ - DEBUG - Epoch 235/499 2021-07-21 20:31:36,619 - __main__ - DEBUG - train Loss: 0.2887 2021-07-21 20:31:39,221 - __main__ - DEBUG - val Loss: 0.6019 2021-07-21 20:31:39,223 - __main__ - DEBUG - Epoch 236/499 2021-07-21 20:31:47,534 - __main__ - DEBUG - train Loss: 0.2870 2021-07-21 20:31:50,346 - __main__ - DEBUG - val Loss: 0.5778 2021-07-21 20:31:50,348 - __main__ - DEBUG - Epoch 237/499 2021-07-21 20:31:58,862 - __main__ - DEBUG - train Loss: 0.2935 2021-07-21 20:32:01,731 - __main__ - DEBUG - val Loss: 0.5843 2021-07-21 20:32:01,733 - __main__ - DEBUG - Epoch 238/499 2021-07-21 20:32:10,261 - __main__ - DEBUG - train Loss: 0.3023 2021-07-21 20:32:13,115 - __main__ - DEBUG - val Loss: 0.5628 2021-07-21 20:32:13,117 - __main__ - DEBUG - Epoch 239/499 2021-07-21 20:32:21,389 - __main__ - DEBUG - train Loss: 0.3038 2021-07-21 20:32:24,036 - __main__ - DEBUG - val Loss: 0.6136 2021-07-21 20:32:24,038 - __main__ - DEBUG - Epoch 240/499 2021-07-21 20:32:32,364 - __main__ - DEBUG - train Loss: 0.2996 2021-07-21 20:32:35,063 - __main__ - DEBUG - val Loss: 0.6337 2021-07-21 20:32:35,065 - __main__ - DEBUG - Epoch 241/499 2021-07-21 20:32:43,330 - __main__ - DEBUG - train Loss: 0.2952 2021-07-21 20:32:46,016 - __main__ - DEBUG - val Loss: 0.6673 2021-07-21 20:32:46,017 - __main__ - DEBUG - Epoch 242/499 2021-07-21 20:32:54,395 - __main__ - DEBUG - train Loss: 0.2930 2021-07-21 20:32:57,220 - __main__ - DEBUG - val Loss: 0.5919 2021-07-21 20:32:57,221 - __main__ - DEBUG - Epoch 243/499 2021-07-21 20:33:05,821 - __main__ - DEBUG - train Loss: 0.2876 2021-07-21 20:33:08,633 - __main__ - DEBUG - val Loss: 0.6276 2021-07-21 20:33:08,635 - __main__ - DEBUG - Epoch 244/499 2021-07-21 20:33:17,220 - __main__ - DEBUG - train Loss: 0.2865 2021-07-21 20:33:19,989 - __main__ - DEBUG - val Loss: 0.5775 2021-07-21 20:33:19,991 - __main__ - DEBUG - Epoch 245/499 2021-07-21 20:33:28,263 - __main__ - DEBUG - train Loss: 0.2851 2021-07-21 20:33:30,979 - __main__ - DEBUG - val Loss: 0.5618 2021-07-21 20:33:30,981 - __main__ - DEBUG - Epoch 246/499 2021-07-21 20:33:39,358 - __main__ - DEBUG - train Loss: 0.2806 2021-07-21 20:33:42,094 - __main__ - DEBUG - val Loss: 0.6023 2021-07-21 
20:33:42,096 - __main__ - DEBUG - Epoch 247/499 2021-07-21 20:33:50,413 - __main__ - DEBUG - train Loss: 0.2788 2021-07-21 20:33:53,092 - __main__ - DEBUG - val Loss: 0.5500 2021-07-21 20:33:53,095 - __main__ - DEBUG - Epoch 248/499 2021-07-21 20:34:01,406 - __main__ - DEBUG - train Loss: 0.2603 2021-07-21 20:34:04,208 - __main__ - DEBUG - val Loss: 0.6558 2021-07-21 20:34:04,210 - __main__ - DEBUG - Epoch 249/499 2021-07-21 20:34:12,722 - __main__ - DEBUG - train Loss: 0.2673 2021-07-21 20:34:15,520 - __main__ - DEBUG - val Loss: 0.6333 2021-07-21 20:34:15,521 - __main__ - DEBUG - Epoch 250/499 2021-07-21 20:34:24,048 - __main__ - DEBUG - train Loss: 0.2661 2021-07-21 20:34:26,808 - __main__ - DEBUG - val Loss: 1.2307 2021-07-21 20:34:26,810 - __main__ - DEBUG - Epoch 251/499 2021-07-21 20:34:35,018 - __main__ - DEBUG - train Loss: 0.2697 2021-07-21 20:34:37,696 - __main__ - DEBUG - val Loss: 0.6750 2021-07-21 20:34:37,698 - __main__ - DEBUG - Epoch 252/499 2021-07-21 20:34:46,031 - __main__ - DEBUG - train Loss: 0.2725 2021-07-21 20:34:48,732 - __main__ - DEBUG - val Loss: 0.5750 2021-07-21 20:34:48,734 - __main__ - DEBUG - Epoch 253/499 2021-07-21 20:34:57,013 - __main__ - DEBUG - train Loss: 0.2591 2021-07-21 20:34:59,711 - __main__ - DEBUG - val Loss: 0.5694 2021-07-21 20:34:59,713 - __main__ - DEBUG - Epoch 254/499 2021-07-21 20:35:08,032 - __main__ - DEBUG - train Loss: 0.2580 2021-07-21 20:35:10,858 - __main__ - DEBUG - val Loss: 0.6123 2021-07-21 20:35:10,859 - __main__ - DEBUG - Epoch 255/499 2021-07-21 20:35:19,441 - __main__ - DEBUG - train Loss: 0.2530 2021-07-21 20:35:22,284 - __main__ - DEBUG - val Loss: 0.6661 2021-07-21 20:35:22,286 - __main__ - DEBUG - Epoch 256/499 2021-07-21 20:35:30,824 - __main__ - DEBUG - train Loss: 0.2690 2021-07-21 20:35:33,569 - __main__ - DEBUG - val Loss: 0.6892 2021-07-21 20:35:33,571 - __main__ - DEBUG - Epoch 257/499 2021-07-21 20:35:41,905 - __main__ - DEBUG - train Loss: 0.2646 2021-07-21 20:35:44,769 - __main__ - DEBUG - val Loss: 0.6833 2021-07-21 20:35:44,771 - __main__ - DEBUG - Epoch 258/499 2021-07-21 20:35:53,399 - __main__ - DEBUG - train Loss: 0.2530 2021-07-21 20:35:56,311 - __main__ - DEBUG - val Loss: 0.5608 2021-07-21 20:35:56,313 - __main__ - DEBUG - Epoch 259/499 2021-07-21 20:36:04,893 - __main__ - DEBUG - train Loss: 0.2487 2021-07-21 20:36:07,707 - __main__ - DEBUG - val Loss: 0.5700 2021-07-21 20:36:07,708 - __main__ - DEBUG - Epoch 260/499 2021-07-21 20:36:16,026 - __main__ - DEBUG - train Loss: 0.2499 2021-07-21 20:36:18,686 - __main__ - DEBUG - val Loss: 0.5872 2021-07-21 20:36:18,687 - __main__ - DEBUG - Epoch 261/499 2021-07-21 20:36:27,060 - __main__ - DEBUG - train Loss: 0.2544 2021-07-21 20:36:29,705 - __main__ - DEBUG - val Loss: 0.6045 2021-07-21 20:36:29,706 - __main__ - DEBUG - Epoch 262/499 2021-07-21 20:36:38,048 - __main__ - DEBUG - train Loss: 0.2665 2021-07-21 20:36:40,681 - __main__ - DEBUG - val Loss: 0.7628 2021-07-21 20:36:40,683 - __main__ - DEBUG - Epoch 263/499 2021-07-21 20:36:49,015 - __main__ - DEBUG - train Loss: 0.2717 2021-07-21 20:36:51,827 - __main__ - DEBUG - val Loss: 0.6828 2021-07-21 20:36:51,829 - __main__ - DEBUG - Epoch 264/499 2021-07-21 20:37:00,278 - __main__ - DEBUG - train Loss: 0.2664 2021-07-21 20:37:03,177 - __main__ - DEBUG - val Loss: 0.5842 2021-07-21 20:37:03,179 - __main__ - DEBUG - Epoch 265/499 2021-07-21 20:37:11,694 - __main__ - DEBUG - train Loss: 0.2798 2021-07-21 20:37:14,295 - __main__ - DEBUG - val Loss: 0.6068 2021-07-21 20:37:14,297 - __main__ - DEBUG - Epoch 
266/499 2021-07-21 20:37:22,638 - __main__ - DEBUG - train Loss: 0.2598 2021-07-21 20:37:25,280 - __main__ - DEBUG - val Loss: 0.5900 2021-07-21 20:37:25,282 - __main__ - DEBUG - Epoch 267/499 2021-07-21 20:37:33,520 - __main__ - DEBUG - train Loss: 0.2545 2021-07-21 20:37:36,145 - __main__ - DEBUG - val Loss: 0.5676 2021-07-21 20:37:36,147 - __main__ - DEBUG - Epoch 268/499 2021-07-21 20:37:44,511 - __main__ - DEBUG - train Loss: 0.2560 2021-07-21 20:37:47,190 - __main__ - DEBUG - val Loss: 0.6845 2021-07-21 20:37:47,191 - __main__ - DEBUG - Epoch 269/499 2021-07-21 20:37:55,588 - __main__ - DEBUG - train Loss: 0.2531 2021-07-21 20:37:58,395 - __main__ - DEBUG - val Loss: 0.8293 2021-07-21 20:37:58,397 - __main__ - DEBUG - Epoch 270/499 2021-07-21 20:38:06,935 - __main__ - DEBUG - train Loss: 0.2695 2021-07-21 20:38:09,753 - __main__ - DEBUG - val Loss: 0.6507 2021-07-21 20:38:09,754 - __main__ - DEBUG - Epoch 271/499 2021-07-21 20:38:18,341 - __main__ - DEBUG - train Loss: 0.2556 2021-07-21 20:38:20,973 - __main__ - DEBUG - val Loss: 0.5408 2021-07-21 20:38:20,975 - __main__ - DEBUG - Epoch 272/499 2021-07-21 20:38:29,277 - __main__ - DEBUG - train Loss: 0.2608 2021-07-21 20:38:31,960 - __main__ - DEBUG - val Loss: 0.6709 2021-07-21 20:38:31,962 - __main__ - DEBUG - Epoch 273/499 2021-07-21 20:38:40,293 - __main__ - DEBUG - train Loss: 0.2520 2021-07-21 20:38:42,911 - __main__ - DEBUG - val Loss: 0.5867 2021-07-21 20:38:42,913 - __main__ - DEBUG - Epoch 274/499 2021-07-21 20:38:51,184 - __main__ - DEBUG - train Loss: 0.2509 2021-07-21 20:38:53,844 - __main__ - DEBUG - val Loss: 0.6472 2021-07-21 20:38:53,846 - __main__ - DEBUG - Epoch 275/499 2021-07-21 20:39:02,197 - __main__ - DEBUG - train Loss: 0.2501 2021-07-21 20:39:05,074 - __main__ - DEBUG - val Loss: 0.6576 2021-07-21 20:39:05,076 - __main__ - DEBUG - Epoch 276/499 2021-07-21 20:39:13,597 - __main__ - DEBUG - train Loss: 0.2500 2021-07-21 20:39:16,433 - __main__ - DEBUG - val Loss: 0.7051 2021-07-21 20:39:16,435 - __main__ - DEBUG - Epoch 277/499 2021-07-21 20:39:24,940 - __main__ - DEBUG - train Loss: 0.2575 2021-07-21 20:39:27,671 - __main__ - DEBUG - val Loss: 0.6352 2021-07-21 20:39:27,673 - __main__ - DEBUG - Epoch 278/499 2021-07-21 20:39:36,073 - __main__ - DEBUG - train Loss: 0.2540 2021-07-21 20:39:38,764 - __main__ - DEBUG - val Loss: 0.5794 2021-07-21 20:39:38,765 - __main__ - DEBUG - Epoch 279/499 2021-07-21 20:39:47,143 - __main__ - DEBUG - train Loss: 0.2484 2021-07-21 20:39:49,821 - __main__ - DEBUG - val Loss: 0.6683 2021-07-21 20:39:49,822 - __main__ - DEBUG - Epoch 280/499 2021-07-21 20:39:58,118 - __main__ - DEBUG - train Loss: 0.2323 2021-07-21 20:40:00,824 - __main__ - DEBUG - val Loss: 0.5921 2021-07-21 20:40:00,826 - __main__ - DEBUG - Epoch 281/499 2021-07-21 20:40:09,236 - __main__ - DEBUG - train Loss: 0.2649 2021-07-21 20:40:12,019 - __main__ - DEBUG - val Loss: 0.7006 2021-07-21 20:40:12,021 - __main__ - DEBUG - Epoch 282/499 2021-07-21 20:40:20,536 - __main__ - DEBUG - train Loss: 0.2419 2021-07-21 20:40:23,330 - __main__ - DEBUG - val Loss: 0.7426 2021-07-21 20:40:23,332 - __main__ - DEBUG - Epoch 283/499 2021-07-21 20:40:31,844 - __main__ - DEBUG - train Loss: 0.2650 2021-07-21 20:40:34,537 - __main__ - DEBUG - val Loss: 0.6561 2021-07-21 20:40:34,538 - __main__ - DEBUG - Epoch 284/499 2021-07-21 20:40:42,983 - __main__ - DEBUG - train Loss: 0.2471 2021-07-21 20:40:45,845 - __main__ - DEBUG - val Loss: 0.6553 2021-07-21 20:40:45,847 - __main__ - DEBUG - Epoch 285/499 2021-07-21 20:40:54,384 - 
__main__ - DEBUG - train Loss: 0.2376 2021-07-21 20:40:57,182 - __main__ - DEBUG - val Loss: 0.7177 2021-07-21 20:40:57,184 - __main__ - DEBUG - Epoch 286/499 2021-07-21 20:41:05,770 - __main__ - DEBUG - train Loss: 0.2309 2021-07-21 20:41:08,478 - __main__ - DEBUG - val Loss: 0.6248 2021-07-21 20:41:08,479 - __main__ - DEBUG - Epoch 287/499 2021-07-21 20:41:16,824 - __main__ - DEBUG - train Loss: 0.2313 2021-07-21 20:41:19,530 - __main__ - DEBUG - val Loss: 0.6535 2021-07-21 20:41:19,532 - __main__ - DEBUG - Epoch 288/499 2021-07-21 20:41:27,881 - __main__ - DEBUG - train Loss: 0.2398 2021-07-21 20:41:30,588 - __main__ - DEBUG - val Loss: 0.5976 2021-07-21 20:41:30,590 - __main__ - DEBUG - Epoch 289/499 2021-07-21 20:41:38,868 - __main__ - DEBUG - train Loss: 0.2345 2021-07-21 20:41:41,502 - __main__ - DEBUG - val Loss: 0.6227 2021-07-21 20:41:41,504 - __main__ - DEBUG - Epoch 290/499 2021-07-21 20:41:50,004 - __main__ - DEBUG - train Loss: 0.2384 2021-07-21 20:41:52,766 - __main__ - DEBUG - val Loss: 0.5890 2021-07-21 20:41:52,768 - __main__ - DEBUG - Epoch 291/499 2021-07-21 20:42:01,321 - __main__ - DEBUG - train Loss: 0.2389 2021-07-21 20:42:04,171 - __main__ - DEBUG - val Loss: 0.6106 2021-07-21 20:42:04,173 - __main__ - DEBUG - Epoch 292/499 2021-07-21 20:42:12,645 - __main__ - DEBUG - train Loss: 0.2373 2021-07-21 20:42:15,333 - __main__ - DEBUG - val Loss: 0.7296 2021-07-21 20:42:15,335 - __main__ - DEBUG - Epoch 293/499 2021-07-21 20:42:23,729 - __main__ - DEBUG - train Loss: 0.2525 2021-07-21 20:42:26,386 - __main__ - DEBUG - val Loss: 0.6527 2021-07-21 20:42:26,388 - __main__ - DEBUG - Epoch 294/499 2021-07-21 20:42:34,681 - __main__ - DEBUG - train Loss: 0.2498 2021-07-21 20:42:37,341 - __main__ - DEBUG - val Loss: 0.6828 2021-07-21 20:42:37,343 - __main__ - DEBUG - Epoch 295/499 2021-07-21 20:42:45,611 - __main__ - DEBUG - train Loss: 0.2476 2021-07-21 20:42:48,266 - __main__ - DEBUG - val Loss: 0.7436 2021-07-21 20:42:48,268 - __main__ - DEBUG - Epoch 296/499 2021-07-21 20:42:56,840 - __main__ - DEBUG - train Loss: 0.2456 2021-07-21 20:42:59,701 - __main__ - DEBUG - val Loss: 0.6463 2021-07-21 20:42:59,703 - __main__ - DEBUG - Epoch 297/499 2021-07-21 20:43:08,251 - __main__ - DEBUG - train Loss: 0.2352 2021-07-21 20:43:11,143 - __main__ - DEBUG - val Loss: 0.7020 2021-07-21 20:43:11,145 - __main__ - DEBUG - Epoch 298/499 2021-07-21 20:43:19,538 - __main__ - DEBUG - train Loss: 0.2268 2021-07-21 20:43:22,256 - __main__ - DEBUG - val Loss: 0.7418 2021-07-21 20:43:22,258 - __main__ - DEBUG - Epoch 299/499 2021-07-21 20:43:30,554 - __main__ - DEBUG - train Loss: 0.2352 2021-07-21 20:43:33,219 - __main__ - DEBUG - val Loss: 0.6732 2021-07-21 20:43:33,221 - __main__ - DEBUG - Epoch 300/499 2021-07-21 20:43:41,511 - __main__ - DEBUG - train Loss: 0.2481 2021-07-21 20:43:44,160 - __main__ - DEBUG - val Loss: 0.6101 2021-07-21 20:43:44,162 - __main__ - DEBUG - Epoch 301/499 2021-07-21 20:43:52,465 - __main__ - DEBUG - train Loss: 0.2323 2021-07-21 20:43:55,095 - __main__ - DEBUG - val Loss: 0.5886 2021-07-21 20:43:55,097 - __main__ - DEBUG - Epoch 302/499 2021-07-21 20:44:03,714 - __main__ - DEBUG - train Loss: 0.2297 2021-07-21 20:44:06,512 - __main__ - DEBUG - val Loss: 0.6009 2021-07-21 20:44:06,514 - __main__ - DEBUG - Epoch 303/499 2021-07-21 20:44:15,013 - __main__ - DEBUG - train Loss: 0.2156 2021-07-21 20:44:17,827 - __main__ - DEBUG - val Loss: 0.6664 2021-07-21 20:44:17,828 - __main__ - DEBUG - Epoch 304/499 2021-07-21 20:44:26,194 - __main__ - DEBUG - train Loss: 0.2144 
2021-07-21 20:44:28,883 - __main__ - DEBUG - val Loss: 0.6875 2021-07-21 20:44:28,884 - __main__ - DEBUG - Epoch 305/499 2021-07-21 20:44:37,188 - __main__ - DEBUG - train Loss: 0.2155 2021-07-21 20:44:39,845 - __main__ - DEBUG - val Loss: 0.6150 2021-07-21 20:44:39,847 - __main__ - DEBUG - Epoch 306/499 2021-07-21 20:44:48,131 - __main__ - DEBUG - train Loss: 0.2079 2021-07-21 20:44:50,826 - __main__ - DEBUG - val Loss: 0.6467 2021-07-21 20:44:50,828 - __main__ - DEBUG - Epoch 307/499 2021-07-21 20:44:59,134 - __main__ - DEBUG - train Loss: 0.2322 2021-07-21 20:45:01,846 - __main__ - DEBUG - val Loss: 0.6044 2021-07-21 20:45:01,848 - __main__ - DEBUG - Epoch 308/499 2021-07-21 20:45:10,355 - __main__ - DEBUG - train Loss: 0.2294 2021-07-21 20:45:13,165 - __main__ - DEBUG - val Loss: 0.7746 2021-07-21 20:45:13,168 - __main__ - DEBUG - Epoch 309/499 2021-07-21 20:45:21,721 - __main__ - DEBUG - train Loss: 0.2246 2021-07-21 20:45:24,556 - __main__ - DEBUG - val Loss: 0.7321 2021-07-21 20:45:24,558 - __main__ - DEBUG - Epoch 310/499 2021-07-21 20:45:32,851 - __main__ - DEBUG - train Loss: 0.2251 2021-07-21 20:45:35,503 - __main__ - DEBUG - val Loss: 0.6353 2021-07-21 20:45:35,505 - __main__ - DEBUG - Epoch 311/499 2021-07-21 20:45:43,845 - __main__ - DEBUG - train Loss: 0.2299 2021-07-21 20:45:46,522 - __main__ - DEBUG - val Loss: 0.7244 2021-07-21 20:45:46,524 - __main__ - DEBUG - Epoch 312/499 2021-07-21 20:45:54,909 - __main__ - DEBUG - train Loss: 0.2161 2021-07-21 20:45:57,545 - __main__ - DEBUG - val Loss: 0.7419 2021-07-21 20:45:57,547 - __main__ - DEBUG - Epoch 313/499 2021-07-21 20:46:05,806 - __main__ - DEBUG - train Loss: 0.2075 2021-07-21 20:46:08,623 - __main__ - DEBUG - val Loss: 0.6372 2021-07-21 20:46:08,625 - __main__ - DEBUG - Epoch 314/499 2021-07-21 20:46:17,259 - __main__ - DEBUG - train Loss: 0.2078 2021-07-21 20:46:20,073 - __main__ - DEBUG - val Loss: 0.7573 2021-07-21 20:46:20,075 - __main__ - DEBUG - Epoch 315/499 2021-07-21 20:46:28,617 - __main__ - DEBUG - train Loss: 0.2102 2021-07-21 20:46:31,501 - __main__ - DEBUG - val Loss: 0.6528 2021-07-21 20:46:31,503 - __main__ - DEBUG - Epoch 316/499 2021-07-21 20:46:40,115 - __main__ - DEBUG - train Loss: 0.2177 2021-07-21 20:46:42,905 - __main__ - DEBUG - val Loss: 0.6523 2021-07-21 20:46:42,907 - __main__ - DEBUG - Epoch 317/499 2021-07-21 20:46:51,444 - __main__ - DEBUG - train Loss: 0.2115 2021-07-21 20:46:54,262 - __main__ - DEBUG - val Loss: 0.8199 2021-07-21 20:46:54,264 - __main__ - DEBUG - Epoch 318/499 2021-07-21 20:47:02,778 - __main__ - DEBUG - train Loss: 0.2143 2021-07-21 20:47:05,550 - __main__ - DEBUG - val Loss: 0.8887 2021-07-21 20:47:05,552 - __main__ - DEBUG - Epoch 319/499 2021-07-21 20:47:13,859 - __main__ - DEBUG - train Loss: 0.2318 2021-07-21 20:47:16,487 - __main__ - DEBUG - val Loss: 0.6540 2021-07-21 20:47:16,489 - __main__ - DEBUG - Epoch 320/499 2021-07-21 20:47:24,788 - __main__ - DEBUG - train Loss: 0.2357 2021-07-21 20:47:27,436 - __main__ - DEBUG - val Loss: 0.6616 2021-07-21 20:47:27,438 - __main__ - DEBUG - Epoch 321/499 2021-07-21 20:47:35,742 - __main__ - DEBUG - train Loss: 0.2099 2021-07-21 20:47:38,360 - __main__ - DEBUG - val Loss: 0.6025 2021-07-21 20:47:38,362 - __main__ - DEBUG - Epoch 322/499 2021-07-21 20:47:46,639 - __main__ - DEBUG - train Loss: 0.2046 2021-07-21 20:47:49,433 - __main__ - DEBUG - val Loss: 0.6163 2021-07-21 20:47:49,435 - __main__ - DEBUG - Epoch 323/499 2021-07-21 20:47:57,985 - __main__ - DEBUG - train Loss: 0.2039 2021-07-21 20:48:00,773 - __main__ - 
DEBUG - val Loss: 0.6068 2021-07-21 20:48:00,775 - __main__ - DEBUG - Epoch 324/499 2021-07-21 20:48:09,307 - __main__ - DEBUG - train Loss: 0.2084 2021-07-21 20:48:12,132 - __main__ - DEBUG - val Loss: 0.6466 2021-07-21 20:48:12,134 - __main__ - DEBUG - Epoch 325/499 2021-07-21 20:48:20,493 - __main__ - DEBUG - train Loss: 0.2141 2021-07-21 20:48:23,212 - __main__ - DEBUG - val Loss: 0.6229 2021-07-21 20:48:23,214 - __main__ - DEBUG - Epoch 326/499 2021-07-21 20:48:31,538 - __main__ - DEBUG - train Loss: 0.2061 2021-07-21 20:48:34,176 - __main__ - DEBUG - val Loss: 0.6555 2021-07-21 20:48:34,178 - __main__ - DEBUG - Epoch 327/499 2021-07-21 20:48:42,519 - __main__ - DEBUG - train Loss: 0.1976 2021-07-21 20:48:45,173 - __main__ - DEBUG - val Loss: 0.6620 2021-07-21 20:48:45,175 - __main__ - DEBUG - Epoch 328/499 2021-07-21 20:48:53,503 - __main__ - DEBUG - train Loss: 0.2051 2021-07-21 20:48:56,346 - __main__ - DEBUG - val Loss: 0.6814 2021-07-21 20:48:56,348 - __main__ - DEBUG - Epoch 329/499 2021-07-21 20:49:04,893 - __main__ - DEBUG - train Loss: 0.2064 2021-07-21 20:49:07,710 - __main__ - DEBUG - val Loss: 0.6405 2021-07-21 20:49:07,712 - __main__ - DEBUG - Epoch 330/499 2021-07-21 20:49:16,282 - __main__ - DEBUG - train Loss: 0.2191 2021-07-21 20:49:18,983 - __main__ - DEBUG - val Loss: 0.7247 2021-07-21 20:49:18,986 - __main__ - DEBUG - Epoch 331/499 2021-07-21 20:49:27,316 - __main__ - DEBUG - train Loss: 0.2160 2021-07-21 20:49:29,964 - __main__ - DEBUG - val Loss: 0.7504 2021-07-21 20:49:29,966 - __main__ - DEBUG - Epoch 332/499 2021-07-21 20:49:38,319 - __main__ - DEBUG - train Loss: 0.2109 2021-07-21 20:49:40,988 - __main__ - DEBUG - val Loss: 0.7068 2021-07-21 20:49:40,989 - __main__ - DEBUG - Epoch 333/499 2021-07-21 20:49:49,297 - __main__ - DEBUG - train Loss: 0.2013 2021-07-21 20:49:51,988 - __main__ - DEBUG - val Loss: 0.8111 2021-07-21 20:49:51,990 - __main__ - DEBUG - Epoch 334/499 2021-07-21 20:50:00,351 - __main__ - DEBUG - train Loss: 0.2125 2021-07-21 20:50:03,215 - __main__ - DEBUG - val Loss: 0.7056 2021-07-21 20:50:03,217 - __main__ - DEBUG - Epoch 335/499 2021-07-21 20:50:11,785 - __main__ - DEBUG - train Loss: 0.2115 2021-07-21 20:50:14,607 - __main__ - DEBUG - val Loss: 0.7033 2021-07-21 20:50:14,609 - __main__ - DEBUG - Epoch 336/499 2021-07-21 20:50:23,153 - __main__ - DEBUG - train Loss: 0.1942 2021-07-21 20:50:25,799 - __main__ - DEBUG - val Loss: 0.8294 2021-07-21 20:50:25,801 - __main__ - DEBUG - Epoch 337/499 2021-07-21 20:50:34,059 - __main__ - DEBUG - train Loss: 0.1844 2021-07-21 20:50:36,691 - __main__ - DEBUG - val Loss: 0.6374 2021-07-21 20:50:36,693 - __main__ - DEBUG - Epoch 338/499 2021-07-21 20:50:45,018 - __main__ - DEBUG - train Loss: 0.1857 2021-07-21 20:50:47,683 - __main__ - DEBUG - val Loss: 0.7689 2021-07-21 20:50:47,685 - __main__ - DEBUG - Epoch 339/499 2021-07-21 20:50:55,989 - __main__ - DEBUG - train Loss: 0.1956 2021-07-21 20:50:58,599 - __main__ - DEBUG - val Loss: 0.7678 2021-07-21 20:50:58,601 - __main__ - DEBUG - Epoch 340/499 2021-07-21 20:51:06,924 - __main__ - DEBUG - train Loss: 0.2012 2021-07-21 20:51:09,735 - __main__ - DEBUG - val Loss: 0.8389 2021-07-21 20:51:09,737 - __main__ - DEBUG - Epoch 341/499 2021-07-21 20:51:18,344 - __main__ - DEBUG - train Loss: 0.2047 2021-07-21 20:51:21,306 - __main__ - DEBUG - val Loss: 0.7374 2021-07-21 20:51:21,308 - __main__ - DEBUG - Epoch 342/499 2021-07-21 20:51:29,977 - __main__ - DEBUG - train Loss: 0.2110 2021-07-21 20:51:32,844 - __main__ - DEBUG - val Loss: 0.8270 2021-07-21 
20:51:32,846 - __main__ - DEBUG - Epoch 343/499 2021-07-21 20:51:41,166 - __main__ - DEBUG - train Loss: 0.2015 2021-07-21 20:51:44,011 - __main__ - DEBUG - val Loss: 0.7179 2021-07-21 20:51:44,013 - __main__ - DEBUG - Epoch 344/499 2021-07-21 20:51:52,546 - __main__ - DEBUG - train Loss: 0.1960 2021-07-21 20:51:55,350 - __main__ - DEBUG - val Loss: 0.8712 2021-07-21 20:51:55,352 - __main__ - DEBUG - Epoch 345/499 2021-07-21 20:52:03,923 - __main__ - DEBUG - train Loss: 0.2005 2021-07-21 20:52:06,639 - __main__ - DEBUG - val Loss: 0.7855 2021-07-21 20:52:06,641 - __main__ - DEBUG - Epoch 346/499 2021-07-21 20:52:14,965 - __main__ - DEBUG - train Loss: 0.1978 2021-07-21 20:52:17,691 - __main__ - DEBUG - val Loss: 0.8043 2021-07-21 20:52:17,693 - __main__ - DEBUG - Epoch 347/499 2021-07-21 20:52:25,968 - __main__ - DEBUG - train Loss: 0.1941 2021-07-21 20:52:28,636 - __main__ - DEBUG - val Loss: 0.6839 2021-07-21 20:52:28,638 - __main__ - DEBUG - Epoch 348/499 2021-07-21 20:52:37,002 - __main__ - DEBUG - train Loss: 0.1914 2021-07-21 20:52:39,694 - __main__ - DEBUG - val Loss: 0.6572 2021-07-21 20:52:39,696 - __main__ - DEBUG - Epoch 349/499 2021-07-21 20:52:48,056 - __main__ - DEBUG - train Loss: 0.1823 2021-07-21 20:52:50,874 - __main__ - DEBUG - val Loss: 0.6472 2021-07-21 20:52:50,876 - __main__ - DEBUG - Epoch 350/499 2021-07-21 20:52:59,409 - __main__ - DEBUG - train Loss: 0.1857 2021-07-21 20:53:02,217 - __main__ - DEBUG - val Loss: 0.6588 2021-07-21 20:53:02,219 - __main__ - DEBUG - Epoch 351/499 2021-07-21 20:53:10,780 - __main__ - DEBUG - train Loss: 0.1801 2021-07-21 20:53:13,575 - __main__ - DEBUG - val Loss: 0.6908 2021-07-21 20:53:13,576 - __main__ - DEBUG - Epoch 352/499 2021-07-21 20:53:21,904 - __main__ - DEBUG - train Loss: 0.2082 2021-07-21 20:53:24,593 - __main__ - DEBUG - val Loss: 0.6821 2021-07-21 20:53:24,594 - __main__ - DEBUG - Epoch 353/499 2021-07-21 20:53:32,911 - __main__ - DEBUG - train Loss: 0.1877 2021-07-21 20:53:35,571 - __main__ - DEBUG - val Loss: 0.6791 2021-07-21 20:53:35,572 - __main__ - DEBUG - Epoch 354/499 2021-07-21 20:53:43,811 - __main__ - DEBUG - train Loss: 0.1814 2021-07-21 20:53:46,486 - __main__ - DEBUG - val Loss: 0.8051 2021-07-21 20:53:46,488 - __main__ - DEBUG - Epoch 355/499 2021-07-21 20:53:54,983 - __main__ - DEBUG - train Loss: 0.1851 2021-07-21 20:53:57,845 - __main__ - DEBUG - val Loss: 0.7503 2021-07-21 20:53:57,847 - __main__ - DEBUG - Epoch 356/499 2021-07-21 20:54:06,384 - __main__ - DEBUG - train Loss: 0.1835 2021-07-21 20:54:09,204 - __main__ - DEBUG - val Loss: 0.7995 2021-07-21 20:54:09,207 - __main__ - DEBUG - Epoch 357/499 2021-07-21 20:54:17,747 - __main__ - DEBUG - train Loss: 0.1895 2021-07-21 20:54:20,389 - __main__ - DEBUG - val Loss: 0.8240 2021-07-21 20:54:20,391 - __main__ - DEBUG - Epoch 358/499 2021-07-21 20:54:28,756 - __main__ - DEBUG - train Loss: 0.1931 2021-07-21 20:54:31,495 - __main__ - DEBUG - val Loss: 0.7655 2021-07-21 20:54:31,496 - __main__ - DEBUG - Epoch 359/499 2021-07-21 20:54:39,848 - __main__ - DEBUG - train Loss: 0.1900 2021-07-21 20:54:42,456 - __main__ - DEBUG - val Loss: 0.6751 2021-07-21 20:54:42,458 - __main__ - DEBUG - Epoch 360/499 2021-07-21 20:54:50,765 - __main__ - DEBUG - train Loss: 0.1952 2021-07-21 20:54:53,446 - __main__ - DEBUG - val Loss: 0.7949 2021-07-21 20:54:53,449 - __main__ - DEBUG - Epoch 361/499 2021-07-21 20:55:01,899 - __main__ - DEBUG - train Loss: 0.1992 2021-07-21 20:55:04,740 - __main__ - DEBUG - val Loss: 0.7651 2021-07-21 20:55:04,742 - __main__ - DEBUG - Epoch 
362/499 2021-07-21 20:55:13,319 - __main__ - DEBUG - train Loss: 0.2119 2021-07-21 20:55:16,141 - __main__ - DEBUG - val Loss: 0.6916 2021-07-21 20:55:16,143 - __main__ - DEBUG - Epoch 363/499 2021-07-21 20:55:24,659 - __main__ - DEBUG - train Loss: 0.2030 2021-07-21 20:55:27,312 - __main__ - DEBUG - val Loss: 0.7267 2021-07-21 20:55:27,314 - __main__ - DEBUG - Epoch 364/499 2021-07-21 20:55:35,621 - __main__ - DEBUG - train Loss: 0.1933 2021-07-21 20:55:38,248 - __main__ - DEBUG - val Loss: 0.7149 2021-07-21 20:55:38,250 - __main__ - DEBUG - Epoch 365/499 2021-07-21 20:55:46,542 - __main__ - DEBUG - train Loss: 0.1913 2021-07-21 20:55:49,191 - __main__ - DEBUG - val Loss: 0.7755 2021-07-21 20:55:49,193 - __main__ - DEBUG - Epoch 366/499 2021-07-21 20:55:57,589 - __main__ - DEBUG - train Loss: 0.1822 2021-07-21 20:56:00,257 - __main__ - DEBUG - val Loss: 0.6533 2021-07-21 20:56:00,259 - __main__ - DEBUG - Epoch 367/499 2021-07-21 20:56:08,799 - __main__ - DEBUG - train Loss: 0.1809 2021-07-21 20:56:11,625 - __main__ - DEBUG - val Loss: 0.7266 2021-07-21 20:56:11,629 - __main__ - DEBUG - Epoch 368/499 2021-07-21 20:56:20,229 - __main__ - DEBUG - train Loss: 0.1769 2021-07-21 20:56:23,024 - __main__ - DEBUG - val Loss: 0.7307 2021-07-21 20:56:23,026 - __main__ - DEBUG - Epoch 369/499 2021-07-21 20:56:31,575 - __main__ - DEBUG - train Loss: 0.1614 2021-07-21 20:56:34,314 - __main__ - DEBUG - val Loss: 0.6398 2021-07-21 20:56:34,315 - __main__ - DEBUG - Epoch 370/499 2021-07-21 20:56:43,030 - __main__ - DEBUG - train Loss: 0.1532 2021-07-21 20:56:45,930 - __main__ - DEBUG - val Loss: 0.7411 2021-07-21 20:56:45,932 - __main__ - DEBUG - Epoch 371/499 2021-07-21 20:56:54,504 - __main__ - DEBUG - train Loss: 0.1529 2021-07-21 20:56:57,305 - __main__ - DEBUG - val Loss: 0.7158 2021-07-21 20:56:57,307 - __main__ - DEBUG - Epoch 372/499 2021-07-21 20:57:05,885 - __main__ - DEBUG - train Loss: 0.1671 2021-07-21 20:57:08,624 - __main__ - DEBUG - val Loss: 0.7599 2021-07-21 20:57:08,626 - __main__ - DEBUG - Epoch 373/499 2021-07-21 20:57:16,995 - __main__ - DEBUG - train Loss: 0.1810 2021-07-21 20:57:19,667 - __main__ - DEBUG - val Loss: 0.8080 2021-07-21 20:57:19,669 - __main__ - DEBUG - Epoch 374/499 2021-07-21 20:57:27,929 - __main__ - DEBUG - train Loss: 0.1912 2021-07-21 20:57:30,626 - __main__ - DEBUG - val Loss: 0.8501 2021-07-21 20:57:30,628 - __main__ - DEBUG - Epoch 375/499 2021-07-21 20:57:38,905 - __main__ - DEBUG - train Loss: 0.1791 2021-07-21 20:57:41,624 - __main__ - DEBUG - val Loss: 0.8045 2021-07-21 20:57:41,626 - __main__ - DEBUG - Epoch 376/499 2021-07-21 20:57:50,229 - __main__ - DEBUG - train Loss: 0.1855 2021-07-21 20:57:53,127 - __main__ - DEBUG - val Loss: 0.7174 2021-07-21 20:57:53,129 - __main__ - DEBUG - Epoch 377/499 2021-07-21 20:58:01,727 - __main__ - DEBUG - train Loss: 0.1824 2021-07-21 20:58:04,534 - __main__ - DEBUG - val Loss: 0.7334 2021-07-21 20:58:04,535 - __main__ - DEBUG - Epoch 378/499 2021-07-21 20:58:12,777 - __main__ - DEBUG - train Loss: 0.1833 2021-07-21 20:58:15,534 - __main__ - DEBUG - val Loss: 0.7145 2021-07-21 20:58:15,536 - __main__ - DEBUG - Epoch 379/499 2021-07-21 20:58:23,852 - __main__ - DEBUG - train Loss: 0.1793 2021-07-21 20:58:26,573 - __main__ - DEBUG - val Loss: 0.6800 2021-07-21 20:58:26,575 - __main__ - DEBUG - Epoch 380/499 2021-07-21 20:58:34,899 - __main__ - DEBUG - train Loss: 0.1691 2021-07-21 20:58:37,581 - __main__ - DEBUG - val Loss: 0.7067 2021-07-21 20:58:37,583 - __main__ - DEBUG - Epoch 381/499 2021-07-21 20:58:45,892 - 
__main__ - DEBUG - train Loss: 0.1596 2021-07-21 20:58:48,785 - __main__ - DEBUG - val Loss: 0.7738 2021-07-21 20:58:48,787 - __main__ - DEBUG - Epoch 382/499 2021-07-21 20:58:57,376 - __main__ - DEBUG - train Loss: 0.1730 2021-07-21 20:59:00,179 - __main__ - DEBUG - val Loss: 0.9183 2021-07-21 20:59:00,182 - __main__ - DEBUG - Epoch 383/499 2021-07-21 20:59:08,714 - __main__ - DEBUG - train Loss: 0.1831 2021-07-21 20:59:11,597 - __main__ - DEBUG - val Loss: 0.9309 2021-07-21 20:59:11,599 - __main__ - DEBUG - Epoch 384/499 2021-07-21 20:59:19,934 - __main__ - DEBUG - train Loss: 0.1874 2021-07-21 20:59:22,638 - __main__ - DEBUG - val Loss: 0.8295 2021-07-21 20:59:22,640 - __main__ - DEBUG - Epoch 385/499 2021-07-21 20:59:31,003 - __main__ - DEBUG - train Loss: 0.1820 2021-07-21 20:59:33,736 - __main__ - DEBUG - val Loss: 0.8723 2021-07-21 20:59:33,738 - __main__ - DEBUG - Epoch 386/499 2021-07-21 20:59:42,032 - __main__ - DEBUG - train Loss: 0.1841 2021-07-21 20:59:44,702 - __main__ - DEBUG - val Loss: 0.7032 2021-07-21 20:59:44,704 - __main__ - DEBUG - Epoch 387/499 2021-07-21 20:59:53,074 - __main__ - DEBUG - train Loss: 0.1707 2021-07-21 20:59:55,917 - __main__ - DEBUG - val Loss: 0.7629 2021-07-21 20:59:55,918 - __main__ - DEBUG - Epoch 388/499 2021-07-21 21:00:04,492 - __main__ - DEBUG - train Loss: 0.1779 2021-07-21 21:00:07,293 - __main__ - DEBUG - val Loss: 0.7083 2021-07-21 21:00:07,295 - __main__ - DEBUG - Epoch 389/499 2021-07-21 21:00:15,911 - __main__ - DEBUG - train Loss: 0.1955 2021-07-21 21:00:18,697 - __main__ - DEBUG - val Loss: 0.7326 2021-07-21 21:00:18,699 - __main__ - DEBUG - Epoch 390/499 2021-07-21 21:00:26,946 - __main__ - DEBUG - train Loss: 0.1814 2021-07-21 21:00:29,672 - __main__ - DEBUG - val Loss: 0.6677 2021-07-21 21:00:29,674 - __main__ - DEBUG - Epoch 391/499 2021-07-21 21:00:38,072 - __main__ - DEBUG - train Loss: 0.1895 2021-07-21 21:00:40,736 - __main__ - DEBUG - val Loss: 0.7881 2021-07-21 21:00:40,738 - __main__ - DEBUG - Epoch 392/499 2021-07-21 21:00:49,045 - __main__ - DEBUG - train Loss: 0.1820 2021-07-21 21:00:51,795 - __main__ - DEBUG - val Loss: 0.7696 2021-07-21 21:00:51,797 - __main__ - DEBUG - Epoch 393/499 2021-07-21 21:01:00,098 - __main__ - DEBUG - train Loss: 0.1858 2021-07-21 21:01:02,947 - __main__ - DEBUG - val Loss: 0.7760 2021-07-21 21:01:02,949 - __main__ - DEBUG - Epoch 394/499 2021-07-21 21:01:11,452 - __main__ - DEBUG - train Loss: 0.1658 2021-07-21 21:01:14,243 - __main__ - DEBUG - val Loss: 0.7122 2021-07-21 21:01:14,245 - __main__ - DEBUG - Epoch 395/499 2021-07-21 21:01:22,782 - __main__ - DEBUG - train Loss: 0.1672 2021-07-21 21:01:25,532 - __main__ - DEBUG - val Loss: 0.7156 2021-07-21 21:01:25,534 - __main__ - DEBUG - Epoch 396/499 2021-07-21 21:01:33,881 - __main__ - DEBUG - train Loss: 0.1619 2021-07-21 21:01:36,542 - __main__ - DEBUG - val Loss: 0.6904 2021-07-21 21:01:36,544 - __main__ - DEBUG - Epoch 397/499 2021-07-21 21:01:44,891 - __main__ - DEBUG - train Loss: 0.1619 2021-07-21 21:01:47,611 - __main__ - DEBUG - val Loss: 0.6996 2021-07-21 21:01:47,613 - __main__ - DEBUG - Epoch 398/499 2021-07-21 21:01:55,990 - __main__ - DEBUG - train Loss: 0.1833 2021-07-21 21:01:58,715 - __main__ - DEBUG - val Loss: 0.7707 2021-07-21 21:01:58,717 - __main__ - DEBUG - Epoch 399/499 2021-07-21 21:02:07,143 - __main__ - DEBUG - train Loss: 0.1780 2021-07-21 21:02:09,974 - __main__ - DEBUG - val Loss: 0.7628 2021-07-21 21:02:09,976 - __main__ - DEBUG - Epoch 400/499 2021-07-21 21:02:18,636 - __main__ - DEBUG - train Loss: 0.1705 
2021-07-21 21:02:21,461 - __main__ - DEBUG - val Loss: 0.7354 2021-07-21 21:02:21,463 - __main__ - DEBUG - Epoch 401/499 2021-07-21 21:02:30,008 - __main__ - DEBUG - train Loss: 0.1579 2021-07-21 21:02:32,750 - __main__ - DEBUG - val Loss: 0.6263 2021-07-21 21:02:32,752 - __main__ - DEBUG - Epoch 402/499 2021-07-21 21:02:41,136 - __main__ - DEBUG - train Loss: 0.1495 2021-07-21 21:02:44,043 - __main__ - DEBUG - val Loss: 0.6559 2021-07-21 21:02:44,046 - __main__ - DEBUG - Epoch 403/499 2021-07-21 21:02:52,654 - __main__ - DEBUG - train Loss: 0.1585 2021-07-21 21:02:55,514 - __main__ - DEBUG - val Loss: 0.6773 2021-07-21 21:02:55,516 - __main__ - DEBUG - Epoch 404/499 2021-07-21 21:03:04,098 - __main__ - DEBUG - train Loss: 0.1774 2021-07-21 21:03:06,795 - __main__ - DEBUG - val Loss: 0.7621 2021-07-21 21:03:06,796 - __main__ - DEBUG - Epoch 405/499 2021-07-21 21:03:15,133 - __main__ - DEBUG - train Loss: 0.1703 2021-07-21 21:03:17,841 - __main__ - DEBUG - val Loss: 0.7763 2021-07-21 21:03:17,844 - __main__ - DEBUG - Epoch 406/499 2021-07-21 21:03:26,130 - __main__ - DEBUG - train Loss: 0.1598 2021-07-21 21:03:28,843 - __main__ - DEBUG - val Loss: 0.7846 2021-07-21 21:03:28,851 - __main__ - DEBUG - Complete training (4935.647 seconds passed)
MIT
experiments/exp027.ipynb
Quvotha/atmacup11
Evaluation
rmse = partial(mean_squared_error, squared=False) # qwk = partial(cohen_kappa_score, labels=np.sort(train['target'].unique()), weights='quadratic') @np.vectorize def predict(proba_0: float, proba_1: float, proba_2: float, proba_3: float) -> int: return np.argmax((proba_0, proba_1, proba_2, proba_3)) metrics = defaultdict(list)
_____no_output_____
MIT
experiments/exp027.ipynb
Quvotha/atmacup11
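As a quick illustration with made-up probabilities (not values from this experiment), the `@np.vectorize`-decorated `predict` helper above broadcasts over arrays of per-class probabilities and returns one argmax label per row, which is exactly how it is applied to the fold predictions below:
import numpy as np

proba = np.array([[0.1, 0.2, 0.3, 0.4],   # row 0: class 3 has the largest probability
                  [0.7, 0.1, 0.1, 0.1]])  # row 1: class 0 has the largest probability
labels = predict(proba[:, 0], proba[:, 1], proba[:, 2], proba[:, 3])
print(labels)  # expected: [3 0]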
Training set
pred_train_dfs = [] for i in range(N_SPLITS): num_fold = i + 1 logger.debug('Evaluate cv result (training set) Fold {}'.format(num_fold)) # Read cv result filepath_fold_train = os.path.join(output_dir, f'cv_fold{num_fold}_training.csv') pred_train_df = pd.read_csv(filepath_fold_train) pred_train_df['actual'] = train.loc[pred_train_df['object_id'], TARGET].values if REGRESSION: if TARGET == 'target': pred_train_df['pred'].clip(lower=0, upper=3, inplace=True) else: pred_train_df['pred'] = np.vectorize(soring_date2target)(pred_train_df['pred']) pred_train_df['actual'] = np.vectorize(soring_date2target)(pred_train_df['actual']) else: pred_train_df['pred'] = predict(pred_train_df['0'], pred_train_df['1'], pred_train_df['2'], pred_train_df['3']) if not (REGRESSION and TARGET == 'target'): print(confusion_matrix(pred_train_df['actual'], pred_train_df['pred'], labels=np.sort(train['target'].unique()))) loss = rmse(pred_train_df['actual'], pred_train_df['pred']) # score = qwk(pred_train_df['actual'], pred_train_df['pred']) logger.debug('Loss: {}'.format(loss)) # logger.debug('Score: {}'.format(score)) metrics['train_losses'].append(loss) # metrics['train_scores'].append(score) pred_train_dfs.append(pred_train_df) metrics['train_losses_avg'] = np.mean(metrics['train_losses']) metrics['train_losses_std'] = np.std(metrics['train_losses']) # metrics['train_scores_avg'] = np.mean(metrics['train_scores']) # metrics['train_scores_std'] = np.std(metrics['train_scores']) pred_train = pd.concat(pred_train_dfs).groupby('object_id').sum() pred_train = pred_train / N_SPLITS if not REGRESSION: pred_train['pred'] = predict(pred_train['0'], pred_train['1'], pred_train['2'], pred_train['3']) pred_train['actual'] = train.loc[pred_train.index, TARGET].values if REGRESSION and TARGET == 'sorting_date': pred_train['actual'] = np.vectorize(soring_date2target)(pred_train['actual']) # for c in ('pred', 'actual'): # pred_train[c] = pred_train[c].astype('int') pred_train if not (REGRESSION and TARGET == 'target'): print(confusion_matrix(pred_train['actual'], pred_train['pred'], labels=np.sort(train['target'].unique()))) loss = rmse(pred_train['actual'], pred_train['pred']) # score = qwk(pred_train['actual'], pred_train['pred']) metrics['train_loss'] = loss # metrics['train_score'] = score logger.info('Training loss: {}'.format(loss)) # logger.info('Training score: {}'.format(score)) pred_train.to_csv(os.path.join(output_dir, 'prediction_train.csv')) logger.debug('Write cv result to {}'.format(os.path.join(output_dir, 'prediction_train.csv')))
2021-07-22 02:18:12,072 - __main__ - DEBUG - Write cv result to ../scripts/../experiments/exp027/prediction_train.csv
MIT
experiments/exp027.ipynb
Quvotha/atmacup11
Validation set
pred_valid_dfs = [] for i in range(N_SPLITS): num_fold = i + 1 logger.debug('Evaluate cv result (validation set) Fold {}'.format(num_fold)) # Read cv result filepath_fold_valid = os.path.join(output_dir, f'cv_fold{num_fold}_validation.csv') pred_valid_df = pd.read_csv(filepath_fold_valid) pred_valid_df['actual'] = train.loc[pred_valid_df['object_id'], TARGET].values if REGRESSION: if TARGET == 'target': pred_valid_df['pred'].clip(lower=0, upper=3, inplace=True) else: pred_valid_df['pred'] = np.vectorize(soring_date2target)(pred_valid_df['pred']) pred_valid_df['actual'] = np.vectorize(soring_date2target)(pred_valid_df['actual']) else: pred_valid_df['pred'] = predict(pred_valid_df['0'], pred_valid_df['1'], pred_valid_df['2'], pred_valid_df['3']) if not (REGRESSION and TARGET == 'target'): print(confusion_matrix(pred_valid_df['actual'], pred_valid_df['pred'], labels=np.sort(train['target'].unique()))) loss = rmse(pred_valid_df['actual'], pred_valid_df['pred']) # score = qwk(pred_valid_df['actual'], pred_valid_df['pred']) logger.debug('Loss: {}'.format(loss)) # logger.debug('Score: {}'.format(score)) metrics['valid_losses'].append(loss) # metrics['valid_scores'].append(score) pred_valid_dfs.append(pred_valid_df) metrics['valid_losses_avg'] = np.mean(metrics['valid_losses']) metrics['valid_losses_std'] = np.std(metrics['valid_losses']) # metrics['valid_scores_avg'] = np.mean(metrics['valid_scores']) # metrics['valid_scores_std'] = np.std(metrics['valid_scores']) pred_valid = pd.concat(pred_valid_dfs).groupby('object_id').sum() pred_valid = pred_valid / N_SPLITS if not REGRESSION: pred_valid['pred'] = predict(pred_valid['0'], pred_valid['1'], pred_valid['2'], pred_valid['3']) pred_valid['actual'] = train.loc[pred_valid.index, TARGET].values if REGRESSION and TARGET == 'sorting_date': pred_valid['actual'] = np.vectorize(soring_date2target)(pred_valid['actual']) # for c in ('pred', 'actual'): # pred_valid[c] = pred_valid[c].astype('int') pred_valid if not REGRESSION: print(confusion_matrix(pred_valid['actual'], pred_valid['pred'], labels=np.sort(train['target'].unique()))) loss = rmse(pred_valid['actual'], pred_valid['pred']) # score = qwk(pred_valid['actual'], pred_valid['pred']) metrics['valid_loss'] = loss # metrics['valid_score'] = score logger.info('Validatino loss: {}'.format(loss)) # logger.info('Validatino score: {}'.format(score)) pred_valid.to_csv(os.path.join(output_dir, 'prediction_valid.csv')) logger.debug('Write cv result to {}'.format(os.path.join(output_dir, 'prediction_valid.csv'))) with open(os.path.join(output_dir, 'metrics.json'), 'w') as f: json.dump(dict(metrics), f) logger.debug('Write metrics to {}'.format(os.path.join(output_dir, 'metrics.json')))
2021-07-22 02:18:12,298 - __main__ - DEBUG - Write metrics to ../scripts/../experiments/exp027/metrics.json
MIT
experiments/exp027.ipynb
Quvotha/atmacup11
Prediction
pred_test_dfs = [] for i in range(N_SPLITS): num_fold = i + 1 # Read cv result filepath_fold_test = os.path.join(output_dir, f'cv_fold{num_fold}_test.csv') pred_test_df = pd.read_csv(filepath_fold_test) pred_test_dfs.append(pred_test_df) pred_test = pd.concat(pred_test_dfs).groupby('object_id').sum() pred_test = pred_test / N_SPLITS if REGRESSION: if TARGET == 'target': pred_test['pred'].clip(lower=0, upper=3, inplace=True) else: pred_test['pred'] = np.vectorize(soring_date2target)(pred_test['pred']) else: pred_test['pred'] = predict(pred_test['0'], pred_test['1'], pred_test['2'], pred_test['3']) pred_test test['target'] = pred_test.loc[test['object_id'], 'pred'].values test = test[['target']] test sample_submission test.to_csv(os.path.join(output_dir, f'{str(EXP_NO).zfill(3)}_submission.csv'), index=False) logger.debug('Write submission to {}'.format(os.path.join(output_dir, f'{str(EXP_NO).zfill(3)}_submission.csv'))) fig = plt.figure() if not (REGRESSION and TARGET == 'target'): sns.countplot(data=test, x='target') else: sns.histplot(data=test, x='target') sns.despine() fig.savefig(os.path.join(output_dir, 'prediction.png')) logger.debug('Write figure to {}'.format(os.path.join(output_dir, 'prediction.png'))) logger.debug('Complete ({:.3f} seconds passed)'.format(time.time() - SINCE))
2021-07-22 02:18:12,639 - __main__ - DEBUG - Complete (23819.435 seconds passed)
MIT
experiments/exp027.ipynb
Quvotha/atmacup11
Qonto - Get statement aggregated by date **Tags:** qonto bank statement naas_drivers Input Import library
from naas_drivers import qonto
_____no_output_____
BSD-3-Clause
Qonto/Qonto_Get_statement_aggregated_by_date.ipynb
Charles-de-Montigny/awesome-notebooks
Get your Qonto credentials. How to get your credentials?
QONTO_USER_ID = 'YOUR_USER_ID' QONTO_SECRET_KEY = 'YOUR_SECRET_KEY'
_____no_output_____
BSD-3-Clause
Qonto/Qonto_Get_statement_aggregated_by_date.ipynb
Charles-de-Montigny/awesome-notebooks
Parameters
# Date to start extraction, format: "YYYY-MM-DD", example: "2021-01-01" date_from = None # Date to end extraction, format: "YYYY-MM-DD", example: "2021-01-01", default = now date_to = None
_____no_output_____
BSD-3-Clause
Qonto/Qonto_Get_statement_aggregated_by_date.ipynb
Charles-de-Montigny/awesome-notebooks
Model Get statement aggregated by date
df_statement = qonto.connect(QONTO_USER_ID, QONTO_SECRET_KEY).statement.aggregated(date_from, date_to)
_____no_output_____
BSD-3-Clause
Qonto/Qonto_Get_statement_aggregated_by_date.ipynb
Charles-de-Montigny/awesome-notebooks
Output Display result
df_statement
_____no_output_____
BSD-3-Clause
Qonto/Qonto_Get_statement_aggregated_by_date.ipynb
Charles-de-Montigny/awesome-notebooks
Essential ObjectsThis tutorial covers several object types that are foundational to much of what pyGSTi does: [circuits](circuits), [processor specifications](pspecs), [models](models), and [data sets](datasets). Our objective is to explain what these objects are and how they relate to one another at a high level while providing links to other notebooks that cover details we skip over here.
import pygsti from pygsti.circuits import Circuit from pygsti.models import Model from pygsti.data import DataSet
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb
maij/pyGSTi
CircuitsThe `Circuit` object encapsulates a quantum circuit as a sequence of *layers*, each of which contains zero or more non-identity *gates*. A `Circuit` has some number of labeled *lines* and each gate label is assigned to one or more lines. Line labels can be integers or strings. Gate labels have two parts: a `str`-type name and a tuple of line labels. A gate name typically begins with 'G' because this is expected when we parse circuits from text files.For example, `('Gx',0)` is a gate label that means "do the Gx gate on qubit 0", and `('Gcnot',(2,3))` means "do the Gcnot gate on qubits 2 and 3".A `Circuit` can be created from a list of gate labels:
c = Circuit( [('Gx',0),('Gcnot',0,1),(),('Gy',3)], line_labels=[0,1,2,3]) print(c)
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb
maij/pyGSTi
If you want multiple gates in a single layer, just put those gate labels in their own nested list:
c = Circuit( [('Gx',0),[('Gcnot',0,1),('Gy',3)],()] , line_labels=[0,1,2,3]) print(c)
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb
maij/pyGSTi
We distinguish three basic types of circuit layers. We call layers containing quantum gates *operation layers*. All the circuits we've seen so far just have operation layers. It's also possible to have a *preparation layer* at the beginning of a circuit and a *measurement layer* at the end of a circuit. There can also be a fourth type of layer called an *instrument layer* which we discuss in a separate [tutorial on Instruments](objects/advanced/Instruments.ipynb). Assuming that `'rho'` labels an (n-qubit) state preparation and `'Mz'` labels an (n-qubit) measurement, here's a circuit with all three types of layers:
c = Circuit( ['rho',('Gz',1),[('Gswap',0,1),('Gy',2)],'Mz'] , line_labels=[0,1,2]) print(c)
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb
maij/pyGSTi
Finally, when dealing with small systems (e.g. 1 or 2 qubits), we typically just use a `str`-type label (without any line-labels) to denote every possible layer. In this case, all the labels operate on the entire state space so we don't need the notion of 'lines' in a `Circuit`. When there are no line-labels, a `Circuit` assumes a single default **'\*'-label**, which you can usually just ignore:
c = Circuit( ['Gx','Gy','Gi'] ) print(c)
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb
maij/pyGSTi
Pretty simple, right? The `Circuit` object allows you to easily manipulate its labels (similar to a NumPy array) and even perform some basic operations like depth reduction and simple compiling. For lots more details on how to create, modify, and use circuit objects see the [circuit tutorial](objects/Circuit.ipynb). Processor SpecificationsA processor specification describes the interface that a quantum processor exposes to the outside world. Actual quantum processors often have a "native" interface associated with them, but can also be viewed as implementing various other derived interfaces. For example, while a 1-qubit quantum processor may natively implement the $X(\pi/2)$ and $Z(\pi/2)$ gates, it can also implement the set of all 1-qubit Clifford gates. Both of these interfaces would correspond to a processor specification in pyGSTi.Currently pyGSTi only supports processor specifications having an integral number of qubits. The `QubitProcessorSpec` object describes the number of qubits and what gates are available on them. For example,
pspec = pygsti.processors.QubitProcessorSpec(num_qubits=2, gate_names=['Gxpi2', 'Gypi2', 'Gcnot'], geometry="line") print("Qubit labels are", pspec.qubit_labels) print("X(pi/2) gates on qubits: ", pspec.resolved_availability('Gxpi2')) print("CNOT gates on qubits: ", pspec.resolved_availability('Gcnot'))
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb
maij/pyGSTi
creates a processor specification for two qubits with $X(\pi/2)$, $Y(\pi/2)$, and CNOT gates. Setting the geometry to `"line"` causes 1-qubit gates to be available on each qubit and the CNOT between the two qubits (in either control/target direction). Processor specifications are used to build experiment designs and models, and so defining or importing an appropriate processor specification is often the first step in many analyses. To learn more about processor specification objects, see the [processor specification tutorial](objects/ProcessorSpec.ipynb). Models. An instance of the `Model` class represents something that can predict the outcome probabilities of quantum circuits. We define any such thing to be a "QIP model", or just a "model", as these probabilities define the behavior of some real or virtual QIP. Because there are so many types of models, the `Model` class in pyGSTi is just a base class and is never instantiated directly. Classes `ExplicitOpModel` and `ImplicitOpModel` (subclasses of `Model`) define two broad categories of models, both of which sequentially operate on circuit *layers* (the "Op" in the class names is short for "layer operation"). Explicit layer-operation models. An `ExplicitOpModel` is a container object. Its `.preps`, `.povms`, and `.operations` members are essentially dictionaries of state preparation, measurement, and layer-operation objects, respectively. How to create these objects and build up explicit models from scratch is a central capability of pyGSTi and a topic of the [explicit-model tutorial](objects/ExplicitModel.ipynb). Presently, we'll create a 2-qubit model using the processor specification above via the `create_explicit_model` function:
mdl = pygsti.models.create_explicit_model(pspec)
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb
maij/pyGSTi
This creates an `ExplicitOpModel` with a default preparation (prepares all qubits in the zero-state) labeled `'rho0'`, a default measurement labeled `'Mdefault'` in the Z-basis and with 5 layer-operations given by the labels in the 2nd argument (the first argument is akin to a circuit's line labels and the third argument contains special strings that the function understands):
print("Preparations: ", ', '.join(map(str,mdl.preps.keys()))) print("Measurements: ", ', '.join(map(str,mdl.povms.keys()))) print("Layer Ops: ", ', '.join(map(str,mdl.operations.keys())))
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb
maij/pyGSTi
We can now use this model to do what models were made to do: compute the outcome probabilities of circuits.
c = Circuit( [('Gxpi2',0),('Gcnot',0,1),('Gypi2',1)] , line_labels=[0,1]) print(c) mdl.probabilities(c) # Compute the outcome probabilities of circuit `c`
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb
maij/pyGSTi
An `ExplicitOpModel` only "knows" how to operate on circuit layers it explicitly contains in its dictionaries, so, for example, a circuit layer with two X gates in parallel (layer-label = `[('Gxpi2',0),('Gxpi2',1)]`) cannot be used with our model until we explicitly associate an operation with the layer-label `[('Gxpi2',0),('Gxpi2',1)]`:
import numpy as np c = Circuit( [[('Gxpi2',0),('Gxpi2',1)],('Gxpi2',1)] , line_labels=[0,1]) print(c) try: p = mdl.probabilities(c) except KeyError as e: print("!!KeyError: ",str(e)) #Create an operation for two parallel X-gates & rerun (now it works!) mdl.operations[ [('Gxpi2',0),('Gxpi2',1)] ] = np.dot(mdl.operations[('Gxpi2',0)].to_dense(), mdl.operations[('Gxpi2',1)].to_dense()) p = mdl.probabilities(c) print("Probability_of_outcome(00) = ", p['00']) # p is like a dictionary of outcomes mdl.probabilities((('Gxpi2',0),('Gcnot',0,1)))
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb
maij/pyGSTi
Implicit layer-operation models. In the above example, you saw how it is possible to manually add a layer-operation to an `ExplicitOpModel` based on its other, more primitive layer operations. This often works fine for a few qubits, but can quickly become tedious as the number of qubits increases (since the number of potential layers that involve a given set of gates grows exponentially with qubit number). This is where `ImplicitOpModel` objects come into play: these models contain rules for building up arbitrary layer-operations based on more primitive operations. PyGSTi offers several "built-in" types of implicit models and a rich set of tools for building your own custom ones. See the [tutorial on implicit models](objects/ImplicitModel.ipynb) for details. Data Sets. The `DataSet` object is a container for tabulated outcome counts. It behaves like a dictionary whose keys are `Circuit` objects and whose values are dictionaries that associate *outcome labels* with (usually) integer counts. There are two primary ways you go about getting a `DataSet`. The first is by reading in a simply formatted text file:
dataset_txt = \ """## Columns = 00 count, 01 count, 10 count, 11 count {} 100 0 0 0 Gxpi2:0 55 5 40 0 Gxpi2:0Gypi2:1 20 27 23 30 Gxpi2:0^4 85 3 10 2 Gxpi2:0Gcnot:0:1 45 1 4 50 [Gxpi2:0Gxpi2:1]Gypi2:0 25 32 17 26 """ with open("tutorial_files/Example_Short_Dataset.txt","w") as f: f.write(dataset_txt) ds = pygsti.io.read_dataset("tutorial_files/Example_Short_Dataset.txt")
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb
maij/pyGSTi
The second is by simulating a `Model` and thereby generating "fake data". This essentially calls `mdl.probabilities(c)` for each circuit in a given list, and samples from the output probability distribution to obtain outcome counts:
circuit_list = pygsti.circuits.to_circuits([ (), (('Gxpi2',0),), (('Gxpi2',0),('Gypi2',1)), (('Gxpi2',0),)*4, (('Gxpi2',0),('Gcnot',0,1)), ((('Gxpi2',0),('Gxpi2',1)),('Gxpi2',0)) ], line_labels=(0,1)) ds_fake = pygsti.data.simulate_data(mdl, circuit_list, num_samples=100, sample_error='multinomial', seed=8675309)
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb
maij/pyGSTi
Outcome counts are accessible by indexing a `DataSet` as if it were a dictionary with `Circuit` keys:
c = Circuit( (('Gxpi2',0),('Gypi2',1)), line_labels=(0,1) ) print(ds[c]) # index using a Circuit print(ds[ [('Gxpi2',0),('Gypi2',1)] ]) # or with something that can be converted to a Circuit
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb
maij/pyGSTi
Because `DataSet` objects can also store *timestamped* data (see the [time-dependent data tutorial](objects/advanced/TimestampedDataSets.ipynb)), the values or "rows" of a `DataSet` aren't simple dictionary objects. When you'd like a `dict` of counts, use the `.counts` member of a data set row:
row = ds[c] row['00'] # this is ok for outlbl, cnt in row.counts.items(): # Note: `row` doesn't have .items(), need ".counts" print(outlbl, cnt)
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb
maij/pyGSTi
Another thing to note is that `DataSet` objects are "sparse" in that 0-counts are not typically stored:
c = Circuit([('Gxpi2',0)], line_labels=(0,1)) print("No 01 or 11 outcomes here: ",ds_fake[c]) for outlbl, cnt in ds_fake[c].counts.items(): print("Item: ",outlbl, cnt) # Note: this loop never loops over 01 or 11!
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb
maij/pyGSTi
Hacker Factory Cyber Hackathon Solution by Team Jugaad (Abhiraj Singh Rajput, Deepanshu Gupta, Manuj Mehrotra) We are a team that is NOT moved by buzzwords like Machine Learning, Data Science, or AI. We are a team of people who get an adrenaline rush from seeking the solution to a problem, and the approach we take to solve it is never a constraint for us. Keeping our heads down, we tried our best to solve the problem of "Preventive analytics with AI – How to use AI to predict probability of occurrence of a crime." Formally, our team members: Abhiraj Singh Rajput (BI Engineer, Emp ID 1052530), Deepanshu Gupta (Performance Test Engineer, Emp ID 1048606), Manuj Mehrotra (Analyst-Data Science, Emp ID 1061322). Preventive analytics with AI – How to use AI to predict probability of occurrence of a crime. Context: We built a classification ML model to analyze the data points and used it to forecast the occurrence of a malware attack. For the study we took two separate (bifurcated) datasets, split on the basis of static and dynamic features (Sources: Ref[3]). Scope of the solution covered: the model considers nearly 350 features (331 static features and 13 dynamic features) for predicting the attack, so it is robust and scales easily. The objective behind building this predictive model was to forecast the attack of a malicious app by capturing these features and hence prevent it from attacking the device. Solution Architecture. Additional Information – how it can be enhanced further: The dataset used for static analysis has just 398 data points, which is comparatively too few to generalize a statistical model to the population. We haven't tuned all the hyperparameters of the ML models, but we did consider the important ones while model building, e.g. tuning K in K-NN. We analyzed the static and dynamic features separately; a more robust model would analyze both feature sets together, provided we have a sufficient number of data points. Stacking or ensembling the ML models from both datasets could make the model more robust, provided we capture both the static and dynamic features of an application (a hedged sketch of such an ensemble appears after the dynamic-analysis results below). Dynamic features like duracion, avg_local_pkt_rate and avg_remote_pkt_rate were not captured, which degrades the model quality somewhat. Proof Of Concept. Static Analysis: analysing the application we want to study without executing it, e.g. looking at its resources, app permissions, etc.
import pandas as pd df = pd.read_csv("train.csv", sep=";") df.head() df.columns df.shape
_____no_output_____
MIT
Cyber Hackathon.ipynb
MANUJMEHROTRA/CyberHacathon
Let's get the top 10 permissions used by our samples. Malicious:
series = pd.Series.sort_values(df[df.type==1].sum(axis=0), ascending=False)[1:11] series pd.Series.sort_values(df[df.type==0].sum(axis=0), ascending=False)[:10] import matplotlib.pyplot as plt fig, axs = plt.subplots(nrows=2, sharex=True) pd.Series.sort_values(df[df.type==0].sum(axis=0), ascending=False)[:10].plot.bar(ax=axs[0], color="green") pd.Series.sort_values(df[df.type==1].sum(axis=0), ascending=False)[1:11].plot.bar(ax=axs[1], color="red")
_____no_output_____
MIT
Cyber Hackathon.ipynb
MANUJMEHROTRA/CyberHacathon
Now we will try to predict with the existing data set, i.e. model creation. Machine Learning Models
from sklearn.naive_bayes import GaussianNB, BernoulliNB from sklearn.metrics import accuracy_score, classification_report,roc_auc_score from sklearn.ensemble import BaggingClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.linear_model import SGDClassifier from sklearn.model_selection import train_test_split from sklearn.metrics import cohen_kappa_score from sklearn.metrics import confusion_matrix from sklearn.ensemble import RandomForestClassifier from sklearn import preprocessing #import torch from sklearn import svm from sklearn import tree import pandas as pd from sklearn.externals import joblib import pickle import numpy as np import seaborn as sns y = df["type"] X = df.drop("type", axis=1) X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.33,random_state=7) # Naive Bayes algorithm gnb = GaussianNB() gnb.fit(X_train, y_train) # pred pred = gnb.predict(X_test) # accuracy accuracy = accuracy_score(pred, y_test) print("naive_bayes") print(accuracy) print(classification_report(pred, y_test, labels=None)) for i in range(3,15,3): neigh = KNeighborsClassifier(n_neighbors=i) neigh.fit(X_train, y_train) pred = neigh.predict(X_test) # accuracy accuracy = accuracy_score(pred, y_test) print("kneighbors {}".format(i)) print(accuracy) print(classification_report(pred, y_test, labels=None)) print("") clf = tree.DecisionTreeClassifier() clf.fit(X_train, y_train) # Read the csv test file pred = clf.predict(X_test) # accuracy accuracy = accuracy_score(pred, y_test) print(clf) print(accuracy) print(classification_report(pred, y_test, labels=None))
DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=None, max_features=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, presort=False, random_state=None, splitter='best') 0.8560606060606061 precision recall f1-score support 0 0.80 0.90 0.85 59 1 0.91 0.82 0.86 73 avg / total 0.86 0.86 0.86 132
MIT
Cyber Hackathon.ipynb
MANUJMEHROTRA/CyberHacathon
Dynamic Analysis. For this approach, we used a set of pcap files from the DroidCollector project, comprising 4705 benign and 7846 malicious applications. All of the files were processed by our feature extractor script (a result from [4]). The idea of this analysis is to answer the following question: the static analysis above showed that a lot of applications use a network connection, in other words they try to communicate or transmit information, so is it possible to distinguish between malware and benign applications using network traffic?
import pandas as pd data = pd.read_csv("android_traffic.csv", sep=";") data.head() data.columns data.shape data.type.value_counts() data.isna().sum() data = data.drop(['duracion','avg_local_pkt_rate','avg_remote_pkt_rate'], axis=1).copy() data.describe() sns.pairplot(data) data.loc[data.tcp_urg_packet > 0].shape[0] data = data.drop(columns=["tcp_urg_packet"], axis=1).copy() data.shape data=data[data.tcp_packets<20000].copy() data=data[data.dist_port_tcp<1400].copy() data=data[data.external_ips<35].copy() data=data[data.vulume_bytes<2000000].copy() data=data[data.udp_packets<40].copy() data=data[data.remote_app_packets<15000].copy() data[data.duplicated()].sum() data=data.drop('source_app_packets.1',axis=1).copy() scaler = preprocessing.RobustScaler() scaledData = scaler.fit_transform(data.iloc[:,1:11]) scaledData = pd.DataFrame(scaledData, columns=['tcp_packets','dist_port_tcp','external_ips','vulume_bytes','udp_packets','source_app_packets','remote_app_packets',' source_app_bytes','remote_app_bytes','dns_query_times']) X_train, X_test, y_train, y_test = train_test_split(scaledData.iloc[:,0:10], data.type.astype("str"), test_size=0.25, random_state=45) gnb = GaussianNB() gnb.fit(X_train, y_train) pred = gnb.predict(X_test) ## accuracy accuracy = accuracy_score(y_test,pred) print("naive_bayes") print(accuracy) print(classification_report(y_test,pred, labels=None)) print("cohen kappa score") print(cohen_kappa_score(y_test, pred)) for i in range(3,15,3): neigh = KNeighborsClassifier(n_neighbors=i) neigh.fit(X_train, y_train) pred = neigh.predict(X_test) # accuracy accuracy = accuracy_score(pred, y_test) print("kneighbors {}".format(i)) print(accuracy) print(classification_report(pred, y_test, labels=None)) print("cohen kappa score") print(cohen_kappa_score(y_test, pred)) print("") rdF=RandomForestClassifier(n_estimators=250, max_depth=50,random_state=45) rdF.fit(X_train,y_train) pred=rdF.predict(X_test) cm=confusion_matrix(y_test, pred) accuracy = accuracy_score(y_test,pred) print(rdF) print(accuracy) print(classification_report(y_test,pred, labels=None)) print("cohen kappa score") print(cohen_kappa_score(y_test, pred)) print(cm) from lightgbm import LGBMClassifier rdF=LGBMClassifier(n_estimators=250, max_depth=50,random_state=45) rdF.fit(X_train,y_train) pred=rdF.predict(X_test) cm=confusion_matrix(y_test, pred) accuracy = accuracy_score(y_test,pred) print(rdF) print(accuracy) print(classification_report(y_test,pred, labels=None)) print("cohen kappa score") print(cohen_kappa_score(y_test, pred)) print(cm) import pandas as pd feature_importances = pd.DataFrame(rdF.feature_importances_,index = X_train.columns,columns=['importance']).sort_values('importance',ascending=False) feature_importances x= feature_importances.index y=feature_importances["importance"] plt.figure(figsize=(6,4)) sns.barplot(x=y,y=x)
_____no_output_____
MIT
Cyber Hackathon.ipynb
MANUJMEHROTRA/CyberHacathon
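The enhancement list at the top of this notebook mentions stacking or ensembling the models built on the static and dynamic feature sets (see the forward reference there). Below is a minimal, hedged sketch using scikit-learn's VotingClassifier; X_combined_train/y_combined_train are hypothetical arrays that would join both feature sets for the same applications and are not built in this notebook:
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Soft voting averages the predicted class probabilities of the base models.
ensemble = VotingClassifier(
    estimators=[('rf', RandomForestClassifier(n_estimators=250, random_state=45)),
                ('knn', KNeighborsClassifier(n_neighbors=9)),
                ('nb', GaussianNB())],
    voting='soft')
# ensemble.fit(X_combined_train, y_combined_train)   # hypothetical combined feature matrix
# pred = ensemble.predict(X_combined_test)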
Salary Data
import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.linear_model import LinearRegression from sklearn.model_selection import train_test_split import seaborn as sns salary = pd.read_csv("Salary_Data.csv") salary.head() salary.info() salary.describe() X = salary['YearsExperience'].values y = salary['Salary'].values y minimum = salary['Salary'].min() middle = salary['Salary'].median() maximum = salary['Salary'].max() print(middle) print(minimum) print(maximum) X=X.reshape(-1,1) y=y.reshape(-1,1) x_train, x_test, y_train, y_test = train_test_split(X,y,train_size=0.8,test_size=0.2,random_state=100) print(f"X_train shape {x_train.shape}") print(f"y_train shape {y_train.shape}") print(f"X_test shape {x_test.shape}") print(f"y_test shape {y_test.shape}") print(y_test) print(x_test) %matplotlib inline plt.scatter(x_train,y_train,color='red') plt.xlabel('Year of Experience') plt.ylabel('Salary') plt.title('Salary Data') plt.show() sns.set() plt.figure(figsize=(12,6),dpi=100) sns.regplot(x='YearsExperience', y='Salary', data=salary, order=1)
_____no_output_____
MIT
salary-data.ipynb
JCode1986/data_analysis
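The cell above imports LinearRegression and builds an 80/20 train/test split but stops at plotting. A minimal continuation, our assumption about the intended next step rather than part of the original notebook, would fit and score the model:
model = LinearRegression()          # already imported above
model.fit(x_train, y_train)
print("slope:", model.coef_, "intercept:", model.intercept_)
print("R^2 on the held-out 20%:", model.score(x_test, y_test))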
Day and Night Image Classifier --- The day/night image dataset consists of 200 RGB color images in two categories: day and night. There are equal numbers of each example: 100 day images and 100 night images. We'd like to build a classifier that can accurately label these images as day or night, and that relies on finding distinguishing features between the two types of images! *Note: All images come from the [AMOS dataset](http://cs.uky.edu/~jacobs/datasets/amos/) (Archive of Many Outdoor Scenes).* Import resources. Before you get started on the project code, import the libraries and resources that you'll need.
import cv2 # computer vision library import helpers import numpy as np import matplotlib.pyplot as plt import matplotlib.image as mpimg %matplotlib inline
_____no_output_____
MIT
1_1_Image_Representation/6_2. Standardizing the Data.ipynb
georgiagn/CVND_Exercises
Training and Testing Data. The 200 day/night images are separated into training and testing datasets. * 60% of these images are training images, for you to use as you create a classifier. * 40% are test images, which will be used to test the accuracy of your classifier. First, we set some variables to keep track of where our images are stored: image_dir_training: the directory where our training image data is stored; image_dir_test: the directory where our test image data is stored
# Image data directories image_dir_training = "day_night_images/training/" image_dir_test = "day_night_images/test/"
_____no_output_____
MIT
1_1_Image_Representation/6_2. Standardizing the Data.ipynb
georgiagn/CVND_Exercises
Load the datasets. These first few lines of code will load the training day/night images and store all of them in a variable, `IMAGE_LIST`. This list contains the images and their associated label ("day" or "night"). For example, the first image-label pair in `IMAGE_LIST` can be accessed by index: ``` IMAGE_LIST[0][:]```.
# Using the load_dataset function in helpers.py # Load training data IMAGE_LIST = helpers.load_dataset(image_dir_training)
_____no_output_____
MIT
1_1_Image_Representation/6_2. Standardizing the Data.ipynb
georgiagn/CVND_Exercises
--- 1. Visualize the input images
# Print out 1. The shape of the image and 2. The image's label # Select an image and its label by list index image_index = 0 selected_image = IMAGE_LIST[image_index][0] selected_label = IMAGE_LIST[image_index][1] # Display image and data about it plt.imshow(selected_image) print("Shape: "+str(selected_image.shape)) print("Label: " + str(selected_label))
Shape: (458, 800, 3) Label: day
MIT
1_1_Image_Representation/6_2. Standardizing the Data.ipynb
georgiagn/CVND_Exercises
2. Pre-process the Data. After loading in each image, you have to standardize the input and output. Solution code. You are encouraged to try to complete this code on your own, but if you are struggling or want to make sure your code is correct, there is solution code in the `helpers.py` file in this directory. You can look at that python file to see complete `standardize_input` and `encode` function code. For this day and night challenge, you can often jump one notebook ahead to see the solution code for a previous notebook! --- Input. It's important to make all your images the same size so that they can be sent through the same pipeline of classification steps! Every input image should be in the same format, of the same size, and so on. TODO: Standardize the input images: * Resize each image to the desired input size: 600x1100px (hxw).
# This function should take in an RGB image and return a new, standardized version def standardize_input(image): ## Resize image so that all "standard" images are the same size 600x1100 (hxw); ## cv2.resize expects (width, height), so pass (1100, 600) standard_im = cv2.resize(image, (1100, 600)) return standard_im
_____no_output_____
MIT
1_1_Image_Representation/6_2. Standardizing the Data.ipynb
georgiagn/CVND_Exercises
TODO: Standardize the output. With each loaded image, you also need to specify the expected output. For this, use binary numerical values 0/1 = night/day.
# Examples: # encode("day") should return: 1 # encode("night") should return: 0 def encode(label): numerical_val = 0 ## TODO: complete the code to produce a numerical label if label == "day": numerical_val = 1 return numerical_val
_____no_output_____
MIT
1_1_Image_Representation/6_2. Standardizing the Data.ipynb
georgiagn/CVND_Exercises
Construct a `STANDARDIZED_LIST` of input images and output labels. This function takes in a list of image-label pairs and outputs a **standardized** list of resized images and numerical labels. This uses the functions you defined above to standardize the input and output, so those functions must be complete for this standardization to work!
def standardize(image_list): # Empty image data array standard_list = [] # Iterate through all the image-label pairs for item in image_list: image = item[0] label = item[1] # Standardize the image standardized_im = standardize_input(image) # Create a numerical label binary_label = encode(label) # Append the image, and it's one hot encoded label to the full, processed list of image data standard_list.append((standardized_im, binary_label)) return standard_list # Standardize all training images STANDARDIZED_LIST = standardize(IMAGE_LIST)
_____no_output_____
MIT
1_1_Image_Representation/6_2. Standardizing the Data.ipynb
georgiagn/CVND_Exercises
Visualize the standardized data. Display a standardized image from STANDARDIZED_LIST.
# Display a standardized image and its label # Select an image by index image_num = 0 selected_image = STANDARDIZED_LIST[image_num][0] selected_label = STANDARDIZED_LIST[image_num][1] # Display image and data about it ## TODO: Make sure the images have numerical labels and are of the same size plt.imshow(selected_image) print("Shape: "+str(selected_image.shape)) print("Label [1 = day, 0 = night]: " + str(selected_label))
Shape: (458, 800, 3) Label [1 = day, 0 = night]: 1
MIT
1_1_Image_Representation/6_2. Standardizing the Data.ipynb
georgiagn/CVND_Exercises
Generate random data
# Use the default parameters: centroids drawn from a uniform distribution # TODO: the task says we can try interesting patterns; we could manually specify the centroids and then generate points around them, see the gen.py docs centroids, points, N = gen_data() y_true = np.repeat(np.arange(len(N)),N) len(y_true) len(points) # Quick plot plt.figure(figsize=(10,10)) plot_generated_data(centroids, points, N) len(points)
_____no_output_____
Apache-2.0
optimize.ipynb
QSCTech-Sange/Optimization_Project
AGM Sample
lbd = 0.05 delta = 1e-3 n = len(points) step = step_size(n,lbd,delta) grad = lambda X,B,D: grad_hub_matrix(X,delta,points,lbd,B,D) ans,AGM_loss = AGM(grad,points,step,0.005) groups = get_group(ans, tol=1.5) groups purity_score(y_true,groups) plt.figure(figsize=(10,10)) plot_res_data(points,ans,groups) plt.figure(figsize=(10,10)) plot_res_data(points,ans,groups,way='ans') plt.figure(figsize=(10,10)) plot_res_data(points,ans,groups,way='points') plt.plot(np.log(AGM_loss))
_____no_output_____
Apache-2.0
optimize.ipynb
QSCTech-Sange/Optimization_Project
GM Sample
lbd = 0.05 delta = 1e-3 func = lambda X,B: loss_func(X,points,lbd,delta,B) grad = lambda X,B,D: grad_hub_matrix(X,delta,points,lbd,B,D) ans2,GM_loss = GM(points,func,grad,1e-2) len(GM_loss) groups = get_group(ans2, tol=2) plt.figure(figsize=(10,10)) plot_res_data(points,ans2,groups,way='points') plt.rc_context({'axes.edgecolor':'orange', 'xtick.color':'green', 'ytick.color':'green', 'figure.facecolor':'white'}) plt.plot(np.log(GM_loss - GM_loss[len(GM_loss)-1])) plt.ylabel("Loss: |f(xk) - f(x*)|") plt.xlabel("Iters")
_____no_output_____
Apache-2.0
optimize.ipynb
QSCTech-Sange/Optimization_Project
GM_BB Sample
lbd = 0.05 delta = 1e-3 func = lambda X,B: loss_func(X,points,lbd,delta,B) grad = lambda X,B,D: grad_hub_matrix(X,delta,points,lbd,B,D) ans_BB,GM_BB_loss = GM_BB(points,func,grad,1e-5) len(GM_BB_loss) groups = get_group(ans_BB, tol=2) plt.figure(figsize=(10,10)) plot_res_data(points,ans_BB,groups,way='points') plt.rc_context({'axes.edgecolor':'black', 'xtick.color':'black', 'ytick.color':'black', 'figure.facecolor':'white'}) plt.figure(figsize=(8,8)) plt.ylabel("Loss: log(|f(xk) - f(x*)|)") plt.plot(np.log(GM_BB_loss - GM_BB_loss[len(GM_BB_loss)-1]),label="GM_BB") plt.plot(np.log(GM_loss - GM_loss[len(GM_loss)-1]),color="green",label="GM") plt.legend() plt.savefig("D:\Study\MDS\Term 1\Optimization\Final\Figure\BB_GM_Loss") plt.show()
_____no_output_____
Apache-2.0
optimize.ipynb
QSCTech-Sange/Optimization_Project
BFGS: tol=0.03 is already close to the minimum; with a smaller tolerance, s.y becomes too small and 1/(s.y) evaluates to nan (a hedged safeguard sketch follows this cell's results).
lbd = 0.05 delta = 1e-3 func = lambda X,B: loss_func(X,points,lbd,delta,B) grad = lambda X,B,D: grad_hub_matrix(X,delta,points,lbd,B,D) ans_BFGS,BFGS_loss = BFGS(points,func,grad,0.003) groups = get_group(ans_BFGS, tol=2) plt.figure(figsize=(10,10)) plot_res_data(points,ans_BFGS,groups) plt.rc_context({'axes.edgecolor':'orange', 'xtick.color':'green', 'ytick.color':'green', 'figure.facecolor':'white'}) plt.figure(figsize=(5,5)) plt.ylabel("Loss") plt.plot(np.log(BFGS_loss - BFGS_loss[len(BFGS_loss)-1])) plt.show()
_____no_output_____
Apache-2.0
optimize.ipynb
QSCTech-Sange/Optimization_Project
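The tolerance note above points out that when s.y becomes very small, 1/(s.y) blows up to nan. A common safeguard, shown here as a sketch and not as the implementation inside this project's BFGS function, is to skip the inverse-Hessian update whenever the curvature condition s.y > eps fails:
import numpy as np

def bfgs_inverse_update(H, s, y, eps=1e-10):
    """Standard BFGS inverse-Hessian update with a curvature guard."""
    sy = float(s @ y)
    if sy <= eps:                      # curvature too small: keep the previous H
        return H
    rho = 1.0 / sy
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)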
LBFGS: tol=0.03 is already close to the minimum; with a smaller tolerance, s.y becomes too small and 1/(s.y) evaluates to nan.
lbd = 0.05 delta = 1e-3 func = lambda X,B: loss_func(X,points,lbd,delta,B) grad = lambda X,B,D: grad_hub_matrix(X,delta,points,lbd,B,D) ans_LBFGS,LBFGS_loss = LBFGS(points,func,grad,0.003,1,5) groups = get_group(ans_LBFGS, tol=2) plt.figure(figsize=(10,10)) plot_res_data(points,ans_LBFGS,groups) plt.rc_context({'axes.edgecolor':'orange', 'xtick.color':'green', 'ytick.color':'green', 'figure.facecolor':'white'}) plt.figure(figsize=(5,5)) plt.ylabel("Loss") plt.plot(np.log(LBFGS_loss - LBFGS_loss[len(LBFGS_loss)-1])) plt.show()
_____no_output_____
Apache-2.0
optimize.ipynb
QSCTech-Sange/Optimization_Project
Compute the Hessian
from itertools import combinations def huber(x, delta): ''' Args: x: input that has been norm2ed (n*(n-1)/2,) delta: threshold Output: (n*(n-1)/2,) ''' return np.where(x > delta ** 2, np.sqrt(x) - delta / 2, x / (2 * delta)) def pair_col_diff_norm2(x, idx): ''' compute norm2 of pairwise column difference Args: x: (d, n) idx: (n*(n - 1)/2, 2), used to indexing pairwise column combinations Output: (n*(n-1)/2,) ''' x = x[:, idx] # (d, n*(n - 1)/2, 2) x = np.diff(x, axis=-1).squeeze() # (d, n*(n-1)/2) x = np.sum(x ** 2, axis=0) # (n*(n-1)/2,) return x def pair_col_diff_sum(x, t, idx): ''' compute sum of pairwise column difference Args: x: (d, n) t: (d, n) idx: (n*(n - 1)/2, 2), used to indexing pairwise column combinations Output: (n*(n-1)/2,) ''' x = np.diff(x[:, idx], axis=-1).squeeze() # (d, n*(n-1)/2) t = np.diff(t[:, idx], axis=-1).squeeze() # (d, n*(n-1)/2) return np.sum(x * t, axis=0) # (n*(n-1)/2,) class OBJ: def __init__(self, d, n, delta): ''' a: training data samples of shape (d, n) ''' self.d = d self.n = n self.delta = delta self.idx = np.array(list(combinations(list(range(n)), 2))) self.triu_idx = np.triu_indices(self.n, 1) def __call__(self, x, a, lamb): ''' Args: x: (d, n) a: (d, n) lamb: control effect of regularization Output: scalar ''' v = np.sum((x - a) ** 2) / 2 v += lamb * np.sum(huber(pair_col_diff_norm2(x, self.idx), self.delta)) return v def grad(self, x, a, lamb): ''' gradient Output: (d, n) ''' g = x - a diff_norm2 = pair_col_diff_norm2(x, self.idx) # (n*(n-1)/2,) tmp = np.zeros((self.n, self.n)) tmp[self.triu_idx] = diff_norm2 tmp += tmp.T # (n, n) mask = (tmp > self.delta ** 2) tmp = np.where(mask, np.divide(1, np.sqrt(tmp), where=mask), 0) x = x.T g = g + lamb * (tmp.sum(axis=1, keepdims=True) * x - tmp @ x).T tmp = 1 - mask g = g + lamb * (tmp.sum(axis=1, keepdims=True) * x - tmp @ x).T / self.delta return g.flatten() def hessiant(self, x, t, lamb): ''' returns the result of hessian matrix dot product a vector t Args: t: (d, n) Output: (d, n) ''' ht = 0 ht += t diff_norm2 = pair_col_diff_norm2(x, self.idx) # (n*(n-1)/2,) diff_sum = pair_col_diff_sum(x, t, self.idx) tmp = np.zeros((self.n, self.n)) tmp[self.triu_idx] = diff_norm2 tmp += tmp.T mask = (tmp > self.delta ** 2) tmp = np.where(mask, np.divide(1, np.sqrt(tmp), where=mask), 0) t = t.T x = x.T ht += (lamb * (tmp.sum(axis=1, keepdims=True) * t - tmp @ t).T) # tmp1 = np.where(tmp1 > 0, tmp1 ** 3, 0) tmp = tmp ** 3 tmp[self.triu_idx] *= diff_sum tmp[(self.triu_idx[1], self.triu_idx[0])] *= diff_sum ht -= lamb * (tmp.sum(axis=1, keepdims=True) * x - tmp @ x).T tmp = 1 - mask ht += (lamb * (tmp.sum(axis=1, keepdims=True) * t - tmp @ t).T / self.delta) return ht.flatten() import numpy as np # from numpy.lib.function_base import _delete_dispatcher def Hessian_hub(X, p, delta, B): n = X.shape[0]; d = X.shape[1] res = np.zeros(n*d).reshape((n*d,1)) for i in range(n): H_tmp = Hessian_rows(i,n,d,delta,B,X) res[i*d:(i+1)*d] = H_tmp.dot(p).reshape((d,1)) return np.array(res) def Hessian_rows(i,n,d,delta,B,X):#i从0开始 I = np.identity(d) DF = np.tile((-1/delta) * np.identity(d), n) choose_BX = (B.T[i] != 0) #choose material Xi-Xk, n-1 in total DBX = B.dot(X)[choose_BX] DBX[:i,:] = -DBX[:i,:] mask = np.linalg.norm(DBX, axis=1) > delta #find ||Xi-Xk|| which is greater than delta mask2 = np.tile(mask,(d,1)).T.reshape(1,-1)[0] #will be further use norm = np.linalg.norm(DBX[mask], axis=1) #Calculate the norm which is greater than delta (prepare for the left part of tmp) #prepare for the right part of tmp row = 
np.repeat(np.arange(n-1),d) col = np.arange((n-1)*d) DBX_trans = np.array(sps.csr_matrix((DBX.flatten(),(row,col)),shape=((n-1),(n-1)*d)).todense()) tmp = -(np.tile(I,(1,len(norm)))/np.repeat(norm,d)) + \ (DBX[mask].T.dot(DBX_trans[:,mask2]))/np.repeat(norm**3,d) #change the values of items whose norm are greater than delta DF[:,:i*d][:, mask2[:i*d]] = tmp[:,:i*d] DF[:,(i+1)*d:][:,mask2[i*d:]] = tmp[:,i*d:] z = np.zeros((d,d)) DF[:,i*d:(i+1)*d] = z I_tmp = np.tile(I,(n,1)) iblock_tmp = -DF.dot(I_tmp) DF[:,i*d:(i+1)*d] = iblock_tmp return np.array(DF) i=1 n, d = 5,3 delta = 0.1 B = gen_B(n, sparse=False) X = np.arange(n*d).reshape(n,d) DBX = B.T.dot(B.dot(X)) p = np.arange(n*d).reshape((n*d,1)) # Hessian_hub(X, p, 0.1, B) Hdn = Hessian_rows(i,n,d,delta,B,X) Hdn[0].dot(p)
_____no_output_____
Apache-2.0
optimize.ipynb
QSCTech-Sange/Optimization_Project
Test the Hessian
n, d = 4,2 test = OBJ(d,n,0.1) X = np.arange(n*d).reshape(n,d) t = np.arange(n*d).reshape((d,n)).astype(float) test.hessiant(X.T, t, 0.1) test.hessiant(X.T, t, 0.1) n, d = 4,2 delta = 0.1 X = np.arange(n*d).reshape(n,d) B = gen_B(n, sparse=False) p = np.arange(n*d).reshape((n*d,1)) Hessian_hub(X, p, delta, B) i=1 B = gen_B(n, sparse=False) X = np.random.randn(n,d) DBX = B.T.dot(B.dot(X)) i = 0 n, d = 5,3 delta = 0.1 B = gen_B(n, sparse=False) X = np.random.randn(n,d) DBX = B.T.dot(B.dot(X)) I = np.identity(d) DF = np.tile((-1/delta) * np.identity(d), n) choose_BX = (B.T[i] != 0) #choose material Xi-Xk, n-1 in total DBX = B.dot(X)[choose_BX] mask = np.linalg.norm(DBX, axis=1) > delta #find ||Xi-Xk|| which is greater than delta mask2 = np.tile(mask,(d,1)).T.reshape(1,-1)[0] #will be further use norm = np.linalg.norm(DBX[mask], axis=1) #Calculate the norm which is greater than delta (prepare for the left part of tmp) #prepare for the right part of tmp row = np.repeat(np.arange(n-1),d) col = np.arange((n-1)*d) DBX_trans = np.array(sps.csr_matrix((DBX.flatten(),(row,col)),shape=((n-1),(n-1)*d)).todense()) tmp = -(np.tile(I,(1,len(norm)))/np.repeat(norm,d)) + \ (DBX[mask].T.dot(DBX_trans[:,mask2]))/np.repeat(norm**3,d) #change the values of items whose norm are greater than delta DF[:,:i*d][:, mask2[:i*d]] = tmp[:,:i*d] DF[:,(i+1)*d:][:,mask2[i*d:]] = tmp[:,i*d:] z = np.zeros((d,d)) DF[:,i*d:(i+1)*d] = z I_tmp = np.tile(-I,(n,1)) iblock_tmp = DF.dot(I_tmp) DF[:,i*d:(i+1)*d] = iblock_tmp DF
_____no_output_____
Apache-2.0
optimize.ipynb
QSCTech-Sange/Optimization_Project
Copyright 2018 The AdaNet Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
_____no_output_____
Apache-2.0
adanet/examples/tutorials/customizing_adanet.ipynb
xhlulu/adanet
Customizing AdaNet. Oftentimes, as a researcher or machine learning practitioner, you will have some prior knowledge about a dataset. Ideally you should be able to encode that knowledge into your machine learning algorithm. With `adanet`, you can do so by defining the *neural architecture search space* that the AdaNet algorithm should explore. In this tutorial, we will explore the flexibility of the `adanet` framework, and create a custom search space for an image-classification dataset using high-level TensorFlow libraries like `tf.layers`.
from __future__ import absolute_import from __future__ import division from __future__ import print_function import functools import adanet from adanet.examples import simple_dnn import tensorflow as tf # The random seed to use. RANDOM_SEED = 42
_____no_output_____
Apache-2.0
adanet/examples/tutorials/customizing_adanet.ipynb
xhlulu/adanet
Fashion MNIST dataset. In this example, we will use the Fashion MNIST dataset [[Xiao et al., 2017](https://arxiv.org/abs/1708.07747)] for classifying fashion apparel images into one of ten categories: 1. T-shirt/top 2. Trouser 3. Pullover 4. Dress 5. Coat 6. Sandal 7. Shirt 8. Sneaker 9. Bag 10. Ankle boot ![Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist/blob/master/doc/img/fashion-mnist-sprite.png?raw=true) Download the data. Conveniently, the data is available via Keras:
(x_train, y_train), (x_test, y_test) = ( tf.keras.datasets.fashion_mnist.load_data())
_____no_output_____
Apache-2.0
adanet/examples/tutorials/customizing_adanet.ipynb
xhlulu/adanet
Supply the data in TensorFlow. Our first task is to supply the data in TensorFlow. Using the tf.estimator.Estimator convention, we will define a function that returns an `input_fn` which returns feature and label `Tensors`. We will also use the `tf.data.Dataset` API to feed the data into our models.
FEATURES_KEY = "images" def generator(images, labels): """Returns a generator that returns image-label pairs.""" def _gen(): for image, label in zip(images, labels): yield image, label return _gen def preprocess_image(image, label): """Preprocesses an image for an `Estimator`.""" # First let's scale the pixel values to be between 0 and 1. image = image / 255. # Next we reshape the image so that we can apply a 2D convolution to it. image = tf.reshape(image, [28, 28, 1]) # Finally the features need to be supplied as a dictionary. features = {FEATURES_KEY: image} return features, label def input_fn(partition, training, batch_size): """Generate an input_fn for the Estimator.""" def _input_fn(): if partition == "train": dataset = tf.data.Dataset.from_generator( generator(x_train, y_train), (tf.float32, tf.int32), ((28, 28), ())) else: dataset = tf.data.Dataset.from_generator( generator(x_test, y_test), (tf.float32, tf.int32), ((28, 28), ())) # We call repeat after shuffling, rather than before, to prevent separate # epochs from blending together. if training: dataset = dataset.shuffle(10 * batch_size, seed=RANDOM_SEED).repeat() dataset = dataset.map(preprocess_image).batch(batch_size) iterator = dataset.make_one_shot_iterator() features, labels = iterator.get_next() return features, labels return _input_fn
_____no_output_____
Apache-2.0
adanet/examples/tutorials/customizing_adanet.ipynb
xhlulu/adanet
Establish baselines. The next task should be to get some baselines to see how our model performs on this dataset. Let's define some information to share with all our `tf.estimator.Estimators`:
# The number of classes. NUM_CLASSES = 10 # We will average the losses in each mini-batch when computing gradients. loss_reduction = tf.losses.Reduction.SUM_OVER_BATCH_SIZE # A `Head` instance defines the loss function and metrics for `Estimators`. head = tf.contrib.estimator.multi_class_head( NUM_CLASSES, loss_reduction=loss_reduction) # Some `Estimators` use feature columns for understanding their input features. feature_columns = [ tf.feature_column.numeric_column(FEATURES_KEY, shape=[28, 28, 1]) ] # Estimator configuration. config = tf.estimator.RunConfig( save_checkpoints_steps=50000, save_summary_steps=50000, tf_random_seed=RANDOM_SEED)
_____no_output_____
Apache-2.0
adanet/examples/tutorials/customizing_adanet.ipynb
xhlulu/adanet
Let's start simple, and train a linear model:
#@test {"skip": true} #@title Parameters LEARNING_RATE = 0.001 #@param {type:"number"} TRAIN_STEPS = 5000 #@param {type:"integer"} BATCH_SIZE = 64 #@param {type:"integer"} estimator = tf.estimator.LinearClassifier( feature_columns=feature_columns, n_classes=NUM_CLASSES, optimizer=tf.train.RMSPropOptimizer(learning_rate=LEARNING_RATE), loss_reduction=loss_reduction, config=config) results, _ = tf.estimator.train_and_evaluate( estimator, train_spec=tf.estimator.TrainSpec( input_fn=input_fn("train", training=True, batch_size=BATCH_SIZE), max_steps=TRAIN_STEPS), eval_spec=tf.estimator.EvalSpec( input_fn=input_fn("test", training=False, batch_size=BATCH_SIZE), steps=None)) print("Accuracy:", results["accuracy"]) print("Loss:", results["average_loss"])
Accuracy: 0.8413 Loss: 0.464809
Apache-2.0
adanet/examples/tutorials/customizing_adanet.ipynb
xhlulu/adanet
The linear model with default parameters achieves about **84.13% accuracy**. Let's see if we can do better with the `simple_dnn` AdaNet:
#@test {"skip": true} #@title Parameters LEARNING_RATE = 0.003 #@param {type:"number"} TRAIN_STEPS = 5000 #@param {type:"integer"} BATCH_SIZE = 64 #@param {type:"integer"} ADANET_ITERATIONS = 2 #@param {type:"integer"} estimator = adanet.Estimator( head=head, subnetwork_generator=simple_dnn.Generator( feature_columns=feature_columns, optimizer=tf.train.RMSPropOptimizer(learning_rate=LEARNING_RATE), seed=RANDOM_SEED), max_iteration_steps=TRAIN_STEPS // ADANET_ITERATIONS, evaluator=adanet.Evaluator( input_fn=input_fn("train", training=False, batch_size=BATCH_SIZE), steps=None), config=config) results, _ = tf.estimator.train_and_evaluate( estimator, train_spec=tf.estimator.TrainSpec( input_fn=input_fn("train", training=True, batch_size=BATCH_SIZE), max_steps=TRAIN_STEPS), eval_spec=tf.estimator.EvalSpec( input_fn=input_fn("test", training=False, batch_size=BATCH_SIZE), steps=None)) print("Accuracy:", results["accuracy"]) print("Loss:", results["average_loss"])
Accuracy: 0.8566 Loss: 0.408646
Apache-2.0
adanet/examples/tutorials/customizing_adanet.ipynb
xhlulu/adanet
The `simple_dnn` AdaNet model with default parameters achieves about **85.66% accuracy**. This improvement can be attributed to `simple_dnn` searching over fully-connected neural networks which have more expressive power than the linear model due to their non-linear activations. Fully-connected layers are permutation invariant to their inputs, meaning that if we consistently swapped two pixels before training, the final model would perform identically. However, there is spatial and locality information in images that we should try to capture. Applying a few convolutions to our inputs will allow us to do so, and that will require defining a custom `adanet.subnetwork.Builder` and `adanet.subnetwork.Generator`. Define a convolutional AdaNet model. Creating a new search space for AdaNet to explore is straightforward. There are two abstract classes you need to extend: 1. `adanet.subnetwork.Builder` 2. `adanet.subnetwork.Generator` Similar to the tf.estimator.Estimator `model_fn`, `adanet.subnetwork.Builder` allows you to define your own TensorFlow graph for creating a neural network, and specify the training operations. Below we define one that applies a 2D convolution, max-pooling, and then a fully-connected layer to the images:
class SimpleCNNBuilder(adanet.subnetwork.Builder): """Builds a CNN subnetwork for AdaNet.""" def __init__(self, learning_rate, max_iteration_steps, seed): """Initializes a `SimpleCNNBuilder`. Args: learning_rate: The float learning rate to use. max_iteration_steps: The number of steps per iteration. seed: The random seed. Returns: An instance of `SimpleCNNBuilder`. """ self._learning_rate = learning_rate self._max_iteration_steps = max_iteration_steps self._seed = seed def build_subnetwork(self, features, logits_dimension, training, iteration_step, summary, previous_ensemble=None): """See `adanet.subnetwork.Builder`.""" images = features.values()[0] kernel_initializer = tf.keras.initializers.he_normal(seed=self._seed) x = tf.layers.conv2d( images, filters=16, kernel_size=3, padding="same", activation="relu", kernel_initializer=kernel_initializer) x = tf.layers.max_pooling2d(x, pool_size=2, strides=2) x = tf.layers.flatten(x) x = tf.layers.dense( x, units=64, activation="relu", kernel_initializer=kernel_initializer) # The `Head` passed to adanet.Estimator will apply the softmax activation. logits = tf.layers.dense( x, units=10, activation=None, kernel_initializer=kernel_initializer) # Use a constant complexity measure, since all subnetworks have the same # architecture and hyperparameters. complexity = tf.constant(1) return adanet.Subnetwork( last_layer=x, logits=logits, complexity=complexity, persisted_tensors={}) def build_subnetwork_train_op(self, subnetwork, loss, var_list, labels, iteration_step, summary, previous_ensemble=None): """See `adanet.subnetwork.Builder`.""" # Momentum optimizer with cosine learning rate decay works well with CNNs. learning_rate = tf.train.cosine_decay( learning_rate=self._learning_rate, global_step=iteration_step, decay_steps=self._max_iteration_steps) optimizer = tf.train.MomentumOptimizer(learning_rate, .9) # NOTE: The `adanet.Estimator` increments the global step. return optimizer.minimize(loss=loss, var_list=var_list) def build_mixture_weights_train_op(self, loss, var_list, logits, labels, iteration_step, summary): """See `adanet.subnetwork.Builder`.""" return tf.no_op("mixture_weights_train_op") @property def name(self): """See `adanet.subnetwork.Builder`.""" return "simple_cnn"
_____no_output_____
Apache-2.0
adanet/examples/tutorials/customizing_adanet.ipynb
xhlulu/adanet
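The `build_subnetwork_train_op` above pairs a momentum optimizer with a cosine learning-rate schedule. As a rough illustration (not part of the original tutorial), the sketch below plots the same decay shape in plain NumPy, assuming the default `alpha=0` of `tf.train.cosine_decay` and hypothetical values for the learning rate and step count:

```python
import numpy as np
import matplotlib.pyplot as plt

initial_lr = 0.05     # hypothetical, for illustration only
decay_steps = 2500    # e.g. the steps in one AdaNet iteration

steps = np.arange(decay_steps + 1)
# cosine decay with alpha=0: lr * 0.5 * (1 + cos(pi * step / decay_steps))
lrs = initial_lr * 0.5 * (1 + np.cos(np.pi * steps / decay_steps))

plt.plot(steps, lrs)
plt.xlabel("iteration step")
plt.ylabel("learning rate")
plt.title("Cosine learning-rate decay (illustrative)")
plt.show()
```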
Next, we extend an `adanet.subnetwork.Generator`, which defines the search space of candidate `SimpleCNNBuilders` to consider including in the final network. It can create one or more at each iteration with different parameters, and the AdaNet algorithm will select the candidate that best improves the overall neural network's `adanet_loss` on the training set. The one below is very simple: it always creates the same architecture, but gives it a different random seed at each iteration:
class SimpleCNNGenerator(adanet.subnetwork.Generator): """Generates a `SimpleCNN` at each iteration. """ def __init__(self, learning_rate, max_iteration_steps, seed=None): """Initializes a `Generator` that builds `SimpleCNNs`. Args: learning_rate: The float learning rate to use. max_iteration_steps: The number of steps per iteration. seed: The random seed. Returns: An instance of `Generator`. """ self._seed = seed self._dnn_builder_fn = functools.partial( SimpleCNNBuilder, learning_rate=learning_rate, max_iteration_steps=max_iteration_steps) def generate_candidates(self, previous_ensemble, iteration_number, previous_ensemble_reports, all_reports): """See `adanet.subnetwork.Generator`.""" seed = self._seed # Change the seed according to the iteration so that each subnetwork # learns something different. if seed is not None: seed += iteration_number return [self._dnn_builder_fn(seed=seed)]
_____no_output_____
Apache-2.0
adanet/examples/tutorials/customizing_adanet.ipynb
xhlulu/adanet
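As a quick sanity check (not part of the original tutorial), the generator can be exercised on its own before handing it to the estimator. This sketch assumes the `SimpleCNNGenerator` and `SimpleCNNBuilder` classes above, along with the notebook's `adanet`, `tf`, and `functools` imports:

```python
# Ask the generator for its candidates at iteration 1 and confirm the name
gen = SimpleCNNGenerator(learning_rate=0.05, max_iteration_steps=100, seed=42)
candidates = gen.generate_candidates(
    previous_ensemble=None,
    iteration_number=1,
    previous_ensemble_reports=[],
    all_reports=[])
print([candidate.name for candidate in candidates])  # expected: ['simple_cnn']
```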
With these defined, we pass them into a new `adanet.Estimator`:
#@title Parameters LEARNING_RATE = 0.05 #@param {type:"number"} TRAIN_STEPS = 5000 #@param {type:"integer"} BATCH_SIZE = 64 #@param {type:"integer"} ADANET_ITERATIONS = 2 #@param {type:"integer"} max_iteration_steps = TRAIN_STEPS // ADANET_ITERATIONS estimator = adanet.Estimator( head=head, subnetwork_generator=SimpleCNNGenerator( learning_rate=LEARNING_RATE, max_iteration_steps=max_iteration_steps, seed=RANDOM_SEED), max_iteration_steps=max_iteration_steps, evaluator=adanet.Evaluator( input_fn=input_fn("train", training=False, batch_size=BATCH_SIZE), steps=None), adanet_loss_decay=.99, config=config) results, _ = tf.estimator.train_and_evaluate( estimator, train_spec=tf.estimator.TrainSpec( input_fn=input_fn("train", training=True, batch_size=BATCH_SIZE), max_steps=TRAIN_STEPS), eval_spec=tf.estimator.EvalSpec( input_fn=input_fn("test", training=False, batch_size=BATCH_SIZE), steps=None)) print("Accuracy:", results["accuracy"]) print("Loss:", results["average_loss"])
Accuracy: 0.9041 Loss: 0.26544
Apache-2.0
adanet/examples/tutorials/customizing_adanet.ipynb
xhlulu/adanet
CW Attack Example

TJ Kim, 1.28.21

Summary: Implement the CW attack on the toy network example given in the README of the GitHub repo: https://github.com/tj-kim/pytorch-cw2?organization=tj-kim&organization=tj-kim

A dummy network is made using the CIFAR example: https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html

Build Dummy Pytorch Network
import torch import torchvision import torchvision.transforms as transforms
_____no_output_____
MIT
Run Example.ipynb
tj-kim/pytorch-cw2
Download the CIFAR-10 dataset (all ten classes) and wrap the train and test sets in data loaders.
batch_size = 10 transform = transforms.Compose( [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=2) testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size, shuffle=False, num_workers=2) classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
Files already downloaded and verified Files already downloaded and verified
MIT
Run Example.ipynb
tj-kim/pytorch-cw2
Show a few images from the dataset.
import matplotlib.pyplot as plt
import numpy as np

# function to show an image
def imshow(img):
    img = img / 2 + 0.5     # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()

# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)

# show images
imshow(torchvision.utils.make_grid(images))
# print a label for every image in the batch (batch_size images are shown above)
print(' '.join('%5s' % classes[labels[j]] for j in range(batch_size)))
_____no_output_____
MIT
Run Example.ipynb
tj-kim/pytorch-cw2
Define a neural network.
import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(3, 6, 5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(6, 16, 5) self.fc1 = nn.Linear(16 * 5 * 5, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = x.view(-1, 16 * 5 * 5) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x
_____no_output_____
MIT
Run Example.ipynb
tj-kim/pytorch-cw2
Define loss and optimizer
import torch.optim as optim net = Net() criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
_____no_output_____
MIT
Run Example.ipynb
tj-kim/pytorch-cw2
Train the Network
train_flag = False PATH = './cifar_net.pth' if train_flag: for epoch in range(2): # loop over the dataset multiple times running_loss = 0.0 for i, data in enumerate(trainloader, 0): # get the inputs; data is a list of [inputs, labels] inputs, labels = data # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() # print statistics running_loss += loss.item() if i % 2000 == 1999: # print every 2000 mini-batches print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000)) running_loss = 0.0 print('Finished Training') else: net.load_state_dict(torch.load(PATH))
_____no_output_____
MIT
Run Example.ipynb
tj-kim/pytorch-cw2
Save Existing Network.
if train_flag: torch.save(net.state_dict(), PATH)
_____no_output_____
MIT
Run Example.ipynb
tj-kim/pytorch-cw2
Test accuracy.
correct = 0 total = 0 with torch.no_grad(): for data in testloader: images, labels = data outputs = net(images) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 10000 test images: %d %%' % ( 100 * correct / total))
Accuracy of the network on the 10000 test images: 52 %
MIT
Run Example.ipynb
tj-kim/pytorch-cw2
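For a slightly finer view of the clean model (not in the original notebook), the same loop can be extended to a per-class accuracy breakdown. This sketch assumes the `net`, `testloader`, and `classes` defined above:

```python
# Count correct predictions per class on the test set
class_correct = [0] * 10
class_total = [0] * 10

with torch.no_grad():
    for images, labels in testloader:
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        for label, pred in zip(labels, predicted):
            idx = label.item()
            class_total[idx] += 1
            class_correct[idx] += int(pred.item() == idx)

for name, correct, total in zip(classes, class_correct, class_total):
    print('Accuracy of %5s : %.1f %%' % (name, 100 * correct / total))
```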
C&W Attack

Perform the attack on the toy network. Before running the example code, we have to set the following parameters:

- dataloader
- mean
- std

The mean and std have one value per channel of the input.
dataloader = trainloader mean = (0.5,0.5,0.5) std = (0.5,0.5,0.5) import torch import cw inputs_box = (min((0 - m) / s for m, s in zip(mean, std)), max((1 - m) / s for m, s in zip(mean, std))) """ # an untargeted adversary adversary = cw.L2Adversary(targeted=False, confidence=0.0, search_steps=10, box=inputs_box, optimizer_lr=5e-4) inputs, targets = next(iter(dataloader)) adversarial_examples = adversary(net, inputs, targets, to_numpy=False) assert isinstance(adversarial_examples, torch.FloatTensor) assert adversarial_examples.size() == inputs.size() """ # a targeted adversary adversary = cw.L2Adversary(targeted=True, confidence=0.0, search_steps=10, box=inputs_box, optimizer_lr=5e-4) inputs, orig_label = next(iter(dataloader)) # a batch of any attack targets attack_targets = torch.ones(inputs.size(0), dtype = torch.long) * 3 adversarial_examples = adversary(net, inputs, attack_targets, to_numpy=False) assert isinstance(adversarial_examples, torch.FloatTensor) assert adversarial_examples.size() == inputs.size() # Obtain the outputs of the adversarial perturbations vs. original print("attacked:", torch.argmax(net(adversarial_examples),dim=1)) print("original:", orig_label)
attacked: tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3]) original: tensor([6, 9, 6, 2, 6, 9, 2, 3, 8, 5])
MIT
Run Example.ipynb
tj-kim/pytorch-cw2
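Since this is the L2 variant of the attack, a natural follow-up (not in the original notebook) is to measure how large the perturbations actually are. This sketch assumes the `inputs` and `adversarial_examples` tensors from the cell above:

```python
# Per-example L2 norm of the perturbation, measured in normalized pixel space
perturbation = (adversarial_examples - inputs).view(inputs.size(0), -1)
l2_norms = perturbation.norm(p=2, dim=1)

print("L2 norm per example:", l2_norms)
print("Mean L2 norm:", l2_norms.mean().item())
```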
Singular Value Decomposition
import numpy as np from sklearn.datasets import fetch_20newsgroups from sklearn import decomposition from scipy import linalg categories = ['alt.atheism', 'talk.religion.misc', 'comp.graphics', 'sci.space'] remove = ('headers', 'footers', 'quotes') newsgroups_train = fetch_20newsgroups(subset='train', categories=categories, remove=remove) newsgroups_test = fetch_20newsgroups(subset='test', categories=categories, remove=remove) first_3_text = newsgroups_train.data[:3] first_3_label = newsgroups_train.target[:3] for text,label in zip(first_3_text, first_3_label): print(f'{text}') print(f'topic: {label}') newsgroups_train.target_names from sklearn.feature_extraction.text import CountVectorizer vectorizer = CountVectorizer(stop_words='english') vectors = vectorizer.fit_transform(newsgroups_train.data).todense() vectors.shape len(newsgroups_train.data) vocab = np.array(vectorizer.get_feature_names()) vocab.shape # Usinf svd to decompose term document matrix U, s, Vh = linalg.svd(vectors, full_matrices=False) U.shape, s.shape, Vh.shape num_top_words=8 def show_topics(a): top_words = lambda t: [vocab[i] for i in np.argsort(t)[:-num_top_words-1:-1]] topic_words = ([top_words(t) for t in a]) return [' '.join(t) for t in topic_words] show_topics(Vh[383:384]) np.argmax(U[0]) newsgroups_train.data[7] np.argmax(U[7]) show_topics(Vh[431:432])
_____no_output_____
MIT
Topic_modelling_with_svd_and_nmf.ipynb
AdityaVarmaUddaraju/Topic_Modelling
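A quick way to confirm that the factorization is exact (not part of the original notebook) is to rebuild a few rows of the term-document matrix from the factors. This sketch assumes the `U`, `s`, `Vh`, and `vectors` computed above:

```python
# Singular values come back sorted in decreasing order
assert np.allclose(s, np.sort(s)[::-1])

# Reconstruct the first 10 documents from the factors and compare to the original
reconstructed = U[:10] @ np.diag(s) @ Vh
assert np.allclose(reconstructed, vectors[:10])
print("SVD reconstruction matches the original matrix")
```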
Non-negative Matrix Factorization
clf = decomposition.NMF(n_components=5, random_state=1) W1 = clf.fit_transform(vectors) H1 = clf.components_ show_topics(H1) W1[0]
_____no_output_____
MIT
Topic_modelling_with_svd_and_nmf.ipynb
AdityaVarmaUddaraju/Topic_Modelling
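Unlike SVD, NMF gives only an approximate factorization, but both factors are guaranteed non-negative. A small check (not part of the original notebook), assuming the `clf`, `W1`, `H1`, and `vectors` objects from above:

```python
# Both factors are non-negative by construction
assert (W1 >= 0).all() and (H1 >= 0).all()

# Frobenius-norm error of the rank-5 approximation; should be close to
# the value sklearn stores on the fitted model
error = np.linalg.norm(vectors - W1 @ H1)
print("Reconstruction error:", error)
print("sklearn reconstruction_err_:", clf.reconstruction_err_)
```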
Truncated SVD
!pip install fbpca import fbpca %time u, s, v = np.linalg.svd(vectors, full_matrices=False) %time u, s, v = decomposition.randomized_svd(vectors, 10) %time u, s, v = fbpca.pca(vectors, 10) show_topics(v)
_____no_output_____
MIT
Topic_modelling_with_svd_and_nmf.ipynb
AdityaVarmaUddaraju/Topic_Modelling
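The three timing calls above each overwrite `u, s, v`, so `s` ends up holding fbpca's approximate top-10 singular values. As a rough accuracy check (not in the original notebook), they can be compared against the exact values, assuming `vectors` is still in memory:

```python
# Exact singular values only (no singular vectors needed)
s_exact = np.linalg.svd(vectors, compute_uv=False)[:10]

print("exact     :", np.round(s_exact, 2))
print("randomized:", np.round(s, 2))
```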
Simple RNN

In this notebook, we're going to train a simple RNN to do **time-series prediction**. Given some set of input data, it should be able to generate a prediction for the next time step!

> * First, we'll create our data
> * Then, define an RNN in PyTorch
> * Finally, we'll train our network and see how it performs

Import resources and create data
import torch from torch import nn import numpy as np import matplotlib.pyplot as plt %matplotlib inline plt.figure(figsize=(8,5)) # how many time steps/data pts are in one batch of data seq_length = 20 # generate evenly spaced data pts time_steps = np.linspace(0, np.pi, seq_length + 1) data = np.sin(time_steps) data.resize((seq_length + 1, 1)) # size becomes (seq_length+1, 1), adds an input_size dimension x = data[:-1] # all but the last piece of data y = data[1:] # all but the first # display the data plt.plot(time_steps[1:], x, 'r.', label='input, x') # x plt.plot(time_steps[1:], y, 'b.', label='target, y') # y plt.legend(loc='best') plt.show()
_____no_output_____
MIT
recurrent-neural-networks/time-series/Simple_RNN.ipynb
johnsonjoseph37/deep-learning-v2-pytorch
---

Define the RNN

Next, we define an RNN in PyTorch. We'll use `nn.RNN` to create an RNN layer, then we'll add a last, fully-connected layer to get the output size that we want. An RNN takes in a number of parameters:

* **input_size** - the size of the input
* **hidden_dim** - the number of features in the RNN output and in the hidden state
* **n_layers** - the number of layers that make up the RNN, typically 1-3; greater than 1 means that you'll create a stacked RNN
* **batch_first** - whether or not the input/output of the RNN will have the batch_size as the first dimension (batch_size, seq_length, hidden_dim)

Take a look at the [RNN documentation](https://pytorch.org/docs/stable/nn.html#rnn) to read more about recurrent layers.
class RNN(nn.Module): def __init__(self, input_size, output_size, hidden_dim, n_layers): super(RNN, self).__init__() self.hidden_dim=hidden_dim # define an RNN with specified parameters # batch_first means that the first dim of the input and output will be the batch_size self.rnn = nn.RNN(input_size, hidden_dim, n_layers, batch_first=True) # last, fully-connected layer self.fc = nn.Linear(hidden_dim, output_size) def forward(self, x, hidden): # x (batch_size, seq_length, input_size) # hidden (n_layers, batch_size, hidden_dim) # r_out (batch_size, seq_length, hidden_dim) batch_size = x.size(0) # get RNN outputs r_out, hidden = self.rnn(x, hidden) # shape output to be (batch_size*seq_length, hidden_dim) r_out = r_out.view(-1, self.hidden_dim) # get final output output = self.fc(r_out) return output, hidden
_____no_output_____
MIT
recurrent-neural-networks/time-series/Simple_RNN.ipynb
johnsonjoseph37/deep-learning-v2-pytorch
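To see the `batch_first` and `hidden_dim` conventions in action, here is a small shape check (a sketch, assuming the `RNN` class and the `data` array created earlier):

```python
# Instantiate a small model and push one batch of the sine data through it
test_rnn = RNN(input_size=1, output_size=1, hidden_dim=32, n_layers=1)

test_input = torch.Tensor(data[:-1]).unsqueeze(0)   # shape: (1, seq_length, 1)
print('Input size:       ', test_input.size())

test_out, test_h = test_rnn(test_input, None)        # None -> zero initial hidden state
print('Output size:      ', test_out.size())         # (seq_length, 1) after the reshape in forward
print('Hidden state size:', test_h.size())           # (n_layers, batch_size, hidden_dim)
```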