### Applying functions
This step allows for the application of a function to each group independently. There are many types of operations we can perform here.

#### Aggregation
This involves generating a descriptive statistic for each of the groups.
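Throughout this section, `grouped` refers to a *GroupBy* object created earlier in the notebook. As a minimal sketch of the assumed setup (the data here is hypothetical, chosen only so that the column names `'B'`, `'C'` and the numeric column `'D'` used below exist):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': ['foo', 'bar', 'foo', 'bar', 'foo', 'bar'],
                   'B': ['one', 'one', 'two', 'two', 'one', 'two'],
                   'C': ['x', 'y', 'x', 'y', 'z', 'z'],
                   'D': [1.0, 2.0, 3.0, 4.0, np.nan, 6.0]})
grouped = df.groupby('A')  # group the rows by the values in column 'A'
```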
grouped.agg(np.mean)  # the mean value of each column (only relevant for one column)
grouped.agg(len)      # how many samples does each group have
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
We can even select a **different** aggregation function for each column.
grouped.agg({'B': len,                        # number of values in each group
             'C': lambda x: len(x.unique()),  # unique values in each group
             'D': np.sum})                    # sum the values of each group
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
#### Transformation
This involves changing some values in the data (each group's values are changed in a different manner). For example:
grouped.transform(lambda x: (x - x.min()) / (x.max() - x.min()))  # normalize values in each group
grouped.transform(lambda x: (x - x.mean()) / x.std())             # standardize values in each group
grouped.transform(lambda x: x.fillna(x.mean()))                   # replace nan values with the mean of each group
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
All the above operations are relevant only for column `'D'` (since it is the only one containing numeric values) and are **not** performed in place.

#### Filtering
This operation filters groups based on some condition.
grouped.filter(lambda x: x['D'].sum() > 15) # keep only groups that have a sum of values in column 'D' greater than 15
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
#### Regular `.apply()`
All of the above three effects can be accomplished through `.apply()`.
# Aggregation:
grouped['D'].apply(np.sum)
# same as: grouped.apply(lambda x: x['D'].sum())

# Transformation:
grouped['D'].apply(lambda x: (x - x.min()) / (x.max() - x.min()))
# equivalent with: grouped.apply(lambda x: (x['D'] - x['D'].min()) / (x['D'].max() - x['D'].min()))

# Filtering:
grouped.apply(lambda x: x if x['D'].sum() > 15 else None)
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
The "group by" process can be done on multiple indices. However, we won't go more details about this.
df2.groupby(['A','B']).sum() # roughly equivalent to the pivot_table we did previously
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
### Shape manipulation
Unlike *numpy* arrays, *DataFrames* usually aren't made to be reshaped. Nevertheless, *pandas* offers support for stacking and unstacking.
# The stack() method “compresses” a level in the DataFrame’s columns.
stk = df.stack()
print(stk)

# The inverse operation is unstack()
stk.unstack()
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
### Input / output operations
The most common format associated with *DataFrames* is CSV.

#### CSV
Writing a *DataFrame* to a CSV file can be accomplished with a single line.
df.to_csv('tmp/my_dataframe.csv') # writes df to file 'my_dataframe.csv' in folder 'tmp'
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
`DataFrame.to_csv()` by default stores **both row and column labels**. Usually we don't want to write the row labels and sometimes we might not even want to write the column labels. This can be accomplished with the following arguments:
df.to_csv('tmp/my_dataframe.csv', header=False, index=False)
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
To load a csv into a *DataFrame* we can use `pd.read_csv()`.
tmp = pd.read_csv('tmp/my_dataframe.csv')
tmp
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
As you can see, by default, *pandas* uses the first line of the csv as its column names. If this isn't desirable, we can use the `header` argument.
tmp = pd.read_csv('tmp/my_dataframe.csv', header=None)
tmp
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
#### MS Excel
Pandas can read and write Excel files through two simple functions: `pd.read_excel('file.xlsx')` and `DataFrame.to_excel('file.xlsx')`. Note that this requires an extra library (`xlrd`); a short sketch of the round trip follows at the end of this cell.

#### Other options
Other options include pickle, JSON, SQL databases, the clipboard, URLs and even integration with the Google Analytics API.

### Exploratory Data Analysis
We've only scratched the surface of the capabilities of the *pandas* library. In order to get a better understanding of the library and how it's used, we'll attempt to perform an exploratory data analysis on the adult income dataset. When doing Exploratory Data Analysis (EDA), we want to observe and summarize our data through descriptive statistics so that we have a better understanding of it.
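Before loading the dataset, here is the Excel round trip mentioned above; a minimal sketch (the file path is our own illustrative choice, and an Excel engine such as `xlrd`/`openpyxl` is assumed to be installed):

```python
df.to_excel('tmp/my_dataframe.xlsx', sheet_name='Sheet1')          # write the DataFrame to an .xlsx file
tmp = pd.read_excel('tmp/my_dataframe.xlsx', sheet_name='Sheet1')  # read it back into a DataFrame
```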
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data'
data = pd.read_csv(url, header=None)
data.columns = ['age', 'workclass', 'fnlwgt', 'education', 'education-num', 'marital-status',
                'occupation', 'relationship', 'race', 'sex', 'capital-gain', 'capital-loss',
                'hours-per-week', 'native-country', 'income']
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
The first thing we want to do is inspect the shape of the *DataFrame*.
data.shape
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Our data contains 32561 rows and 15 columns. If we take a look at the [description](https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.names) of the dataset, we see that it contains both continuous-valued variables (age, working hours etc.) and categorical ones (sex, relationship etc.). When performing data analysis it is important to know what each variable represents. The next thing we'll do is look at a sample of the dataset.
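Before that, a quick aside (ours, not part of the original walkthrough): one way to check which columns *pandas* parsed as numeric and which as strings is

```python
data.dtypes  # int64 columns are the continuous variables, object columns the categorical ones
```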
data.head()
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
For each variable we'll see what values it can take.
print('minimum:', data['age'].min())
print('maximum:', data['age'].max())
print('mean:   ', data['age'].mean())
minimum: 17
maximum: 90
mean:    38.58164675532078
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
`age` is a numeric variable with a minimum value of $17$ and a maximum of $90$. While we can compute any descriptive statistic on this variable, to get a complete picture we should visualize it (see a later tutorial on how to do so).
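Although visualization is covered in a later tutorial, a minimal sketch of what such a plot might look like (using *matplotlib*; this is our own illustrative addition) is:

```python
import matplotlib.pyplot as plt

data['age'].hist(bins=20)  # pandas wraps matplotlib for quick histograms
plt.xlabel('age')
plt.ylabel('count')
plt.show()
```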
data['workclass'].value_counts()
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Here we find our first occurrence of missing values. In this dataset, these are represented by question marks (`?`).
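As a hedged aside, one way to count these placeholders (note that in this file the categorical values carry a leading space, so the literal is `' ?'`):

```python
print((data['workclass'] == ' ?').sum())                      # missing entries in 'workclass'
print((data.select_dtypes(include=['object']) == ' ?').sum()) # per-column counts of '?' placeholders
```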
print('minimum:', data['fnlwgt'].min())
print('maximum:', data['fnlwgt'].max())
print('mean:   ', data['fnlwgt'].mean())
minimum: 12285
maximum: 1484705
mean:    189778.36651208502
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
This variable is continuous-valued and represents the demographics of the individual.

> **Description of fnlwgt (final weight)**
>
> The weights on the CPS files are controlled to independent estimates of the civilian noninstitutional population of the US. These are prepared monthly for us by Population Division here at the Census Bureau. We use 3 sets of controls. These are:
> 1. A single cell estimate of the population 16+ for each state.
> 2. Controls for Hispanic Origin by age and sex.
> 3. Controls by Race, age and sex.
>
> We use all three sets of controls in our weighting program and "rake" through them 6 times so that by the end we come back to all the controls we used.
>
> The term estimate refers to population totals derived from CPS by creating "weighted tallies" of any specified socio-economic characteristics of the population.
>
> People with similar demographic characteristics should have similar weights. There is one important caveat to remember about this statement. That is that since the CPS sample is actually a collection of 51 state samples, each with its own probability of selection, the statement only applies within state.
data['education'].value_counts()
data['education-num'].value_counts()
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
The latter is simply an encoded version of the first.
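A quick, hedged way to confirm this one-to-one relationship between the two columns:

```python
# each education level should map to exactly one education-num code
data.groupby('education')['education-num'].nunique()
```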
data['marital-status'].value_counts()
data['occupation'].value_counts()
data['relationship'].value_counts()
data['race'].value_counts()
data['sex'].value_counts()

print(len(data[data['capital-gain'] == 0]))
print(len(data[data['capital-gain'] != 0]))
print(len(data[data['capital-loss'] == 0]))
print(len(data[data['capital-loss'] != 0]))

print('minimum:', data['hours-per-week'].min())
print('maximum:', data['hours-per-week'].max())
print('mean:   ', data['hours-per-week'].mean())

data['native-country'].value_counts()
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
### Data Preparation
Next, we'll look at several ways we might have to manipulate our data, including data cleaning, imputing and transforming. Because the unknown values are represented as question marks in this dataset, we need to handle them. Example: fill the missing `occupation` values with the most frequent element.
most_freq = data['occupation'].mode()[0]  # find the most common element
data['occupation'] = data['occupation'].apply(lambda x: most_freq if x == ' ?' else x)
# the line above first keeps just the column that represents the occupations from the dataframe
# then it applies a function which checks if those values are question marks and changes them to the most common element
# finally it replaces the original occupations with the new ones
data['occupation'].value_counts()
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Notice that all elements are preceded by a whitespace? Can we remove it and clean our data?
print('Before cleaning: `{}`'.format(data.occupation[0]))
data.occupation = data['occupation'].apply(lambda x: x.strip())
print('After cleaning: `{}`'.format(data.occupation[0]))
Before cleaning: ` Adm-clerical`
After cleaning: `Adm-clerical`
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
### Exercise 2
Fill the missing values of the DataFrame's `native-country` column with whatever strategy you wish.

### Solution
This time we'll drop the rows containing missing values.
print('DataFrame length:', len(data))
print('missing: ', len(data[data['native-country'] == ' ?']))

data = data.drop(data[data['native-country'] == ' ?'].index)

print('DataFrame length:', len(data))
print('missing: ', len(data[data['native-country'] == ' ?']))
DataFrame length: 32561
missing:  583
DataFrame length: 31978
missing:  0
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Finally, let's try to **encode** our data. To illustrate how these operations are performed in *pandas*: we will first encode the `education` variable, preserving its sequential nature. Next, we will perform a custom encoding on the `marital-status` variable so that we keep only two categories (i.e. currently married or not). Finally, we will one-hot encode all the remaining categorical variables in the dataset. First, `education`.
data['education'] = data['education'].apply(lambda x: x.strip())  # clean whitespace on category

# Create a dictionary mapping the categories to their encodings.
# This has to be done manually as the exact sequence has to be taken into consideration.
mappings = {'Preschool': 1, '1st-4th': 2, '5th-6th': 3, '7th-8th': 4, '9th': 5, '10th': 6,
            '11th': 7, '12th': 8, 'HS-grad': 9, 'Some-college': 10, 'Assoc-voc': 11,
            'Assoc-acdm': 12, 'Bachelors': 13, 'Masters': 14, 'Prof-school': 15, 'Doctorate': 16}

data['education'] = data['education'].map(mappings)  # encode categorical variable with custom mapping
# another way to do this would be: data.replace(mappings, inplace=True)
# another way this could be done would be through data.education.astype('category'),
# this however would prevent us from choosing the mapping scheme
data['education'].value_counts()
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Next, `marital-status`.
data['marital-status'] = np.where(data["marital-status"] == ' Married-civ-spouse', 1, 0)
# the above function replaces ' Married-civ-spouse' with 1 and all the rest with 0
data['marital-status'].value_counts()
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
After this, we'll one-hot encode all the remaining categorical variables. Note that we haven't dealt with all missing values yet (in a real scenario we should).
print('Before one-hot encoding:', data.shape)
data = pd.get_dummies(data)  # one-hot encode all categorical variables
print('After one-hot encoding: ', data.shape)
Before one-hot encoding: (31978, 15)
After one-hot encoding:  (31978, 87)
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Finally, we'll see how we can split numerical values into separate bins, in order to convert them into categorical ones. This time we won't replace the numerical data but will create a new variable instead.
data['age_categories'] = pd.cut(data.age, 3, labels=['young', 'middle aged', 'old'])
data['age_categories'].value_counts()
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
When binning, we would usually want each bin to have the same number of samples. In order to do this we need to manually find where to cut each bin and pass the *cut points* instead of the number of bins we want. But we'll leave that up to you!

Bonus material on data wrangling:
- [extended data wrangling tutorial](http://nbviewer.jupyter.org/github/fonnesbeck/Bios8366/blob/master/notebooks/Section2_2-Data-Wrangling-with-Pandas.ipynb)

### Dealing with inconsistent text data
One of the most common problems when dealing with text data is inconsistency. This may occur due to spelling errors, differences when multiple people perform the data entry, etc.
df6 = pd.DataFrame({'fname': ['George', 'george ', 'GEORGIOS', 'Giorgos', ' Peter', 'Petet'],
                    'sname': ['Papadopoulos', 'alexakos ', 'Georgiou', 'ANTONOPOULOS', ' Anastasiou', 'Κ'],
                    'age': [46, 34, 75, 24, 54, 33]})
df6
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
When looking at the example above, several inconsistencies become apparent. The first thing we want to do when dealing with strings is to convert them all to lowercase (or uppercase depending on preference) and remove preceding and succeeding whitespace.
def clean_text(text):
    text = text.strip()  # strip whitespace
    text = text.lower()  # convert to lowercase
    return text

df6['fname'] = df6['fname'].apply(clean_text)
df6['sname'] = df6['sname'].apply(clean_text)
# same could be done through a lambda function
# df.fname.apply(lambda x: x.strip().lower())
df6
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Another problem originates from the way each name was entered. There are two ways to deal with this: one is to manually look for and correct errors, and the other is to compare the strings to find differences. We are going to try the second, through the python package [fuzzywuzzy](https://github.com/seatgeek/fuzzywuzzy).
from fuzzywuzzy import process

process.extract('george', df6['fname'], limit=None)
c:\users\thano\appdata\local\programs\python\python36\lib\site-packages\fuzzywuzzy\fuzz.py:35: UserWarning: Using slow pure-python SequenceMatcher. Install python-Levenshtein to remove this warning
  warnings.warn('Using slow pure-python SequenceMatcher. Install python-Levenshtein to remove this warning')
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
*Fuzzywuzzy* compares strings and outputs a score depending on how close they are. Let's replace the close ones:
def replace_matches_in_column(df, column, target_string, min_ratio=50):
    # find unique elements in specified column
    strings = df[column].unique()

    # see how close these elements are to the target string
    matches = process.extract(target_string, strings, limit=None)

    # keep only the closest ones
    close_matches = [matches[0] for matches in matches if matches[1] >= min_ratio]

    # get the rows of all the close matches in our dataframe
    rows_with_matches = df[column].isin(close_matches)

    # replace all rows with close matches with the input matches
    df.loc[rows_with_matches, column] = target_string

replace_matches_in_column(df6, 'fname', 'george')
replace_matches_in_column(df6, 'fname', 'peter')
df6
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Processing and parsing the data from Semantic Hub. The cells that display information about the data provided by Semantic Hub have been cleared.
import pandas as pd
import os
from tqdm import tqdm
import re
import json
from collections import Counter
import ast
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
import nltk
_____no_output_____
MIT
preprocessing_pipeline.ipynb
toskn/diploma
Go to the folder with the received data and get the list of files.
# path to the folder with all the files
path = '/Users/egor/Desktop/SemanticHub/diploma/relatives_data/jsons'

list_of_files = os.listdir(path)  # list of file names
_____no_output_____
MIT
preprocessing_pipeline.ipynb
toskn/diploma
Build a list of JSON objects that are convenient to work with.
list_full_file_json = []  # list in which each element is the contents of one file

# loop over all the files and collect their contents
for filename in tqdm(list_of_files):
    # absolute path to a single file
    path_single = path + '/' + filename
    # open it
    with open(path_single) as file:
        file = json.load(file)
        list_full_file_json.append(file)

# example of convenient access
list_full_file_json[90564]['text']
_____no_output_____
MIT
preprocessing_pipeline.ipynb
toskn/diploma
Prepare the *text* field for processing.
# для этого удалим все, кроме текста самого сообщения # вообще можно будет пытаться еще пол вытягивать и всякие другие данные типа даты, города, имени # 1 # текст окружен двумя пробелами странной длины, это нам поможет pattern = re.compile(' (.*) ') # это будет список номеров тех сообщений, которые не прошли по первому паттерну i = 0 list_err = [] for message in tqdm(list_full_file_json): try: message['text'] = pattern.search(message['text']).group(1) i += 1 except AttributeError: list_err.append(i) i += 1 list_err[:10] # 2 # ненужный текст начиается с хмл, а заканчивается годом, после нее идет само сообщение # возможно такое, что в сообщении есть упоминание года, поэтому оно будет урезано, но иначе работать сложно, # потому что встречается много вариантов времени и других вещей перед годом: # наличие/отсутсвие часа, секунд, UTC и т.п. # # upd: посмотрел глазами, некрасивых вариантов особо не заметил pattern = re.compile('xml.*\d\d\d\d(.*)') for number in tqdm(list_err): if pattern.search(list_full_file_json[number]['text']).group(1) != ' ': list_full_file_json[number]['text'] = pattern.search(list_full_file_json[number]['text']).group(1) for number in list_err[50:60]: print(list_full_file_json[number]['text']) print('//////////////////////////////////////') # уберем форумные смайлики из сообщений pattern = re.compile(':[a-zA-Z]+:') for message in tqdm(list_full_file_json): message['text'] = pattern.sub('', message['text']) # заменим множественные пробелы на единичные, # уберем пробелы в начале и в конце предложения # паттерн для поиска пробелов pattern = re.compile('\s+') for message in tqdm(list_full_file_json): message['text'] = pattern.sub(' ', message['text']) # убираю пробел в начале if message['text'][0] == ' ': message['text'] = message['text'][1:] # убираю пробел в конце if message['text'][-1:] == ' ': message['text'] = message['text'][:len(message['text'])-1] # добавляю знак препинания в конце, если его нет. if message['text'][-1:] != '.' and message['text'][-1:] != '!' and message['text'][-1:] != '?' and message['text'][-1:] != '…': message['text'] = message['text'] + '.' list_full_file_json[90564]['text'] list_full_file_json[0]['annotation_sets']
_____no_output_____
MIT
preprocessing_pipeline.ipynb
toskn/diploma
The **text** field is now prepared, so we can carry out a quantitative analysis and build various visualizations.

Analysis of the textual content (the *text* field). For the analysis, we will fill a pandas DataFrame with the characteristics that matter for the analysis:
+ number of words in each document
+ number of sentences in each document
+ number of coreference chains in each document
+ number of tagged words in each document
+ total number of coreference chains
+ total number of tagged words
+ total number of documents

Distribution of the number of words in each document.
# список количеств слов words_list = [] # паттерн для деления по пробелам/переносам pattern = re.compile('\s+') for message in tqdm(list_full_file_json): words_message = len(pattern.split(message['text'])) words_list.append(words_message) # посмотрим на самые частые количества слов на одно сообщение c = Counter(words_list).most_common(10) c # построим по полученным данным нормализованную гистограмму c = Counter(words_list) keys = list(c.keys()) values = list(c.values()) #n, bins, patches = sns.histplot(x=keys, weights=values, discrete=True, , bins=90, facecolor='#2ab0ff', edgecolor='#e0e0e0', linewidth=0.5, alpha=0.7) n, bins, patches = plt.hist(x = keys, weights=values, bins=np.arange(len(keys))-0.5, facecolor='#2ab0ff', alpha=0.9) n = n.astype('int') for i in range(len(patches)): patches[i].set_facecolor(plt.cm.viridis(n[i]/max(n))) #plt.style.use('seaborn-whitegrid') plt.xticks(np.arange(min(keys), max(keys)+1, 30.0)) plt.xlim(0, 700) plt.xlabel('Words', fontsize=20) plt.ylabel('Documents', fontsize=20) fig = plt.gcf() fig.set_size_inches(18.5, 10.5) #fig.savefig('test2png.png', dpi=100) plt.show()
_____no_output_____
MIT
preprocessing_pipeline.ipynb
toskn/diploma
Distribution of the number of sentences in each document.
# список количеств слов sentences_list = [] # паттерн для деления на предложения, объяснение:(https://regex101.com/r/he9d1P/1) # почти всегда делит правильно, в предлоежниях с именем и отчеством через точку ошибается на +1, что # для выборки в 400к незначительно pattern = re.compile('.*?(\.|\?|\!|…)(?= *[А-Я]|$)') for message in tqdm(list_full_file_json): sentences_message = len(pattern.findall(message['text'])) sentences_list.append(sentences_message) # посмотрим на самые частые количества предложений на одно сообщение c = Counter(sentences_list).most_common(15) c # построим по полученным данным нормализованную гистограмму c = Counter(sentences_list) keys = list(c.keys()) values = list(c.values()) #n, bins, patches = sns.histplot(x=keys, weights=values, discrete=True, , bins=90, facecolor='#2ab0ff', edgecolor='#e0e0e0', linewidth=0.5, alpha=0.7) n, bins, patches = plt.hist(x = keys, weights=values, bins=np.arange(len(keys))-0.5, facecolor='#2ab0ff', edgecolor='#e0e0e0', alpha=0.7) n = n.astype('int') for i in range(len(patches)): patches[i].set_facecolor(plt.cm.viridis(n[i]/max(n))) #plt.style.use('seaborn-whitegrid') plt.xticks(np.arange(min(keys), max(keys)+1, 3.0)) plt.xlim(0, 61) patches[5].set_fc('#FDEE70') # Set color plt.xlabel('Sentences', fontsize=20) plt.ylabel('Documents', fontsize=20) fig = plt.gcf() fig.set_size_inches(18.5, 10.5) #fig.savefig('test2png.png', dpi=100) plt.show()
_____no_output_____
MIT
preprocessing_pipeline.ipynb
toskn/diploma
Number of coreference chains in each document.
# скрипт для приведения каждого размеченного для кореференции элемента к JSON виду # для удобного обращения к параметрам files_chain_list = [] for message in tqdm(list_full_file_json): list_coref_ent = message['annotation_sets']['']['annotations'] message_chain_list = [] for entity in list_coref_ent: entity = str(entity)[13:-1] # адхок подрезка формата SH entity = ast.literal_eval(entity) # превращаю строку в виде дикта в дикт entity = json.dumps(entity) # делаем джсон из дикта message_chain_list.append(json.loads(entity)) # парсим джсон в список files_chain_list.append(message_chain_list) # в этом списке каждый файл идет отдельным списком-джсоном # теперь составим список, где каждый элемент будет представлять количество цепочек в документе chain_amount_list = [] for json_item in tqdm(files_chain_list): chain_num_list = [] for word in json_item: chain_num_list.append(word['antecedent_id']) # через каунтер приведем все к списку туплов и посчитаем количество туплов c = Counter(chain_num_list) chain_amount_list.append(len(c)) c = Counter(chain_amount_list).most_common(15) c # построим по полученным данным нормализованную гистограмму c = Counter(chain_amount_list) keys = list(c.keys()) values = list(c.values()) #n, bins, patches = sns.histplot(x=keys, weights=values, discrete=True, , bins=90, facecolor='#2ab0ff', edgecolor='#e0e0e0', linewidth=0.5, alpha=0.7) n, bins, patches = plt.hist(x = keys, weights=values, bins=np.arange(len(keys))-0.5, facecolor='#2ab0ff', edgecolor='#e0e0e0', alpha=0.7) n = n.astype('int') for i in range(len(patches)): patches[i].set_facecolor(plt.cm.viridis(n[i]/max(n))) #plt.style.use('seaborn-whitegrid') plt.xticks(np.arange(min(keys), max(keys)+1, 3.0)) plt.xlim(0, 24) plt.xlabel('Coreference Chains', fontsize=20) plt.ylabel('Documents', fontsize=20) fig = plt.gcf() fig.set_size_inches(18.5, 10.5) #fig.savefig('test2png.png', dpi=100) plt.show()
_____no_output_____
MIT
preprocessing_pipeline.ipynb
toskn/diploma
Distribution of the number of words tagged as relevant for coreference.
# сделаем список, каждый элемент которого - количество выделенных сущностей на файл chain_entity_count_list = [] for chain_list in tqdm(files_chain_list): file_chain = len(chain_list) chain_entity_count_list.append(file_chain) # посмотрим на самые частые количества выделенных слов на одно сообщение c = Counter(chain_entity_count_list).most_common(10) c # построим по полученным данным нормализованную гистограмму c = Counter(chain_entity_count_list) keys = list(c.keys()) values = list(c.values()) #n, bins, patches = sns.histplot(x=keys, weights=values, discrete=True, , bins=90, facecolor='#2ab0ff', edgecolor='#e0e0e0', linewidth=0.5, alpha=0.7) n, bins, patches = plt.hist(x = keys, weights=values, bins=np.arange(len(keys))-0.5, facecolor='#2ab0ff', edgecolor='#e0e0e0', alpha=0.7) n = n.astype('int') for i in range(len(patches)): patches[i].set_facecolor(plt.cm.viridis(n[i]/max(n))) #plt.style.use('seaborn-whitegrid') plt.xticks(np.arange(min(keys), max(keys)+1, 3.0)) plt.xlim(0, 51) plt.xlabel('Tagged Words', fontsize=20) plt.ylabel('Documents', fontsize=20) fig = plt.gcf() fig.set_size_inches(18.5, 10.5) #fig.savefig('test2png.png', dpi=100) plt.show()
_____no_output_____
MIT
preprocessing_pipeline.ipynb
toskn/diploma
Total number of coreference chains.
# list of chain counts per message
c = Counter(chain_amount_list).most_common()

# chain counter
counter = 0
for pair in tqdm(c):
    counter += pair[0] * pair[1]
counter
100%|██████████| 64/64 [00:00<00:00, 292413.35it/s]
MIT
preprocessing_pipeline.ipynb
toskn/diploma
Total number of elements in the coreference chains.
# list of tagged-word counts per message
c = Counter(chain_entity_count_list).most_common()

# chain counter
counter = 0
for pair in tqdm(c):
    counter += pair[0] * pair[1]
counter
100%|██████████| 136/136 [00:00<00:00, 245028.07it/s]
MIT
preprocessing_pipeline.ipynb
toskn/diploma
Ratio of the number of words in a message to the number of chains.
# количество слов по сообщениям уже есть # количество цепочек по сообщениям уже есть # осталось посчитать количество каждых возможных пар word_chain_count_list = [] for word_count, chain_count in zip(words_list, chain_amount_list): word_chain_count_list.append((word_count, chain_count, )) c = Counter(word_chain_count_list) c ser = pd.Series(list(dict(c).values()), index=pd.MultiIndex.from_tuples(dict(c).keys())) df = ser.unstack().fillna(0) df.shape sns.heatmap(df) plt.xlim(0, 29) plt.ylim(0, 295) plt.xlabel('Coreference Chains', fontsize=20) plt.ylabel('Words', fontsize=20) fig = plt.gcf() fig.set_size_inches(18.5, 10.5) #fig.savefig('test2png.png', dpi=100) plt.show()
_____no_output_____
MIT
preprocessing_pipeline.ipynb
toskn/diploma
Converting the data to the CoNLL format. Variables available so far:
+ number of words
+ number of sentences
+ number of coreference chains
+ number of tagged words
+ list of annotation JSONs
+ list of full JSONs
words_list[:10] sentences_list[:10] files_chain_list[:1] list_full_file_json[8]['text'] conll_df = pd.DataFrame({'doc_name': [], 'zeros': [], 'word_num_sent': [], 'sent_entry': [], 'pos': [], 'star1': [], 'empty1': [], 'empty2': [],'empty3': [],'empty4': [],'empty5': [], 'star2': [], 'coref': [],}) conll_df.head() name_ptt = re.compile('[^/]+$') # а был же уже файл ну ладно и так можно.... sentence_ptt = re.compile('.*?(?:\.|\?|\!|…)(?= *[А-Я]|$)') # создадим все нужные параметры, чтобы потом построчно записывать в датафрейм document_name = '' sentences_list = [] names_list = [] for document in tqdm(list_full_file_json): # имя документа document_name = name_ptt.search(document['features']['path']).group(0) # nedug-ru_734132-0000.xml # предложения документа document_sentences = sentence_ptt.findall(document['text']) sentences_list.append(document_sentences) names_list.append(document_name) sentences_list[0][0] sentences_list[0][0][0] tagged_sentences_list = [] for sentences_document in tqdm(sentences_list): tagged_sentences_doc = [] for sentence in sentences_document: # слова в предложениях + pos tokens = nltk.word_tokenize(sentence) tagged_sentences_doc.append(nltk.pos_tag(tokens, lang = 'rus')) tagged_sentences_list.append(tagged_sentences_doc) tagged_sentences_list[1] #[0]#[0]#[1] # осталось только сделать кореференцию # получим список неочищенных файлов, чтобы обращаться к кореференциальным # цепочкам по их положению в тексте, # а не по написанию, таким образом ничего не перепутается list_full_file_json_dirty = [] # список, в котором каждый элемент - содержимое одного файла # цикл для прохода по всем файлам и сбора их содержимого for filename in tqdm(list_of_files): # абсолютный путь к единичному файлу path_single = path + '/' + filename # открываем with open(path_single) as file: file = json.load(file) list_full_file_json_dirty.append(file) type(list_full_file_json_dirty[1]['annotation_sets']['']['annotations'])#[1]['features']['_string'] print(len(names_list), len(tagged_sentences_list), len(list_full_file_json_dirty)) # количество элементов во всех списках одинаковое, поэтому можно сделать zip, а не map for filename, tagged_sentences_list_small, annotations_text in zip(names_list, tagged_sentences_list, list_full_file_json_dirty): # кореференция # отсюда буду получать номера начала и конца кореференциального участника annotations = annotations_text['annotation_sets']['']['annotations'].copy() # здесь буду искать по номерам начала и конца кусочек текста - на всякий случай text = annotations_text['text'] # список, который станет строчками датафрейма list_of_rows = [] for sentences in tagged_sentences_list_small: word_position_counter = 0 for sentence in sentences: row = [] #первый столбец имя файла row.append(filename) # строка нулей row.append(0) # номер слова row.append(word_position_counter) word_position_counter += 1 # единица предложения row.append(sentence[0]) # pos-tag для единицы row.append(sentence[1]) # строка для дерева row.append('-') # строка row.append('-') # строка row.append('-') # строка row.append('-') # строка row.append('-') # строка row.append('__') # строка row.append('*') # реверс нужен для удаления for features_list in reversed(annotations): # проходим по списку фичерсов и смотрим: если совпадает sentence[0][0] со _string, #то добавляем слову кореф string_from_txt = features_list['features']['_string'] if sentence[0] == string_from_txt: row.append('(' + str(features_list['features']['antecedent_id']) + ')') # удаляем использованный тег кореференции 
annotations.remove(features_list) break else: row.append('-') list_of_rows.append(row) conll_df = conll_df.append(pd.DataFrame(list_of_rows, columns=conll_df.columns)) #conll_df = conll_df.iloc[0:0] # лучше дропа conll_df.info() def getSizeOfNestedList(listOfElem): ''' Get number of elements in a nested list''' count = 0 # итерируем по списку for elem in listOfElem: # проверяем тип элемента if type(elem) == list: # и ркекурсивно вызываем снова эту функцию для подсчета count += getSizeOfNestedList(elem) else: count += 1 return count getSizeOfNestedList(tagged_sentences_list)
_____no_output_____
MIT
preprocessing_pipeline.ipynb
toskn/diploma
Results of 180 hours of running the program: 11,932,678 out of 64,851,795 elements were processed and written to the DataFrame, leaving 52,919,117 elements to process. At this stage it is reasonable to stop here and dedicate the available computing power to running the next stages of the pipeline.
# we need a function for adding a sentence-number column to the pandas DataFrame
sentence_number_list = []
i = 0
for document in tqdm(tagged_sentences_list):
    for sentence in document:
        for word in sentence:
            sentence_number_list.append(i)
        i += 1

len(sentence_number_list)
sentence_number_list_180h = sentence_number_list[:11932678]
len(sentence_number_list_180h)
len(conll_df)

conll_df['sentence_number'] = sentence_number_list_180h
conll_df['zeros'] = conll_df['zeros'].astype(np.int64)
conll_df['word_num_sent'] = conll_df['word_num_sent'].astype(np.int64)
conll_df.info()
conll_df.head()
_____no_output_____
MIT
preprocessing_pipeline.ipynb
toskn/diploma
Replace the dashes with two underscores (__).
def remean_points(cell):
    cell = "__"
    return cell

conll_df.empty5 = conll_df.empty5.apply(remean_points)
conll_df.head()
_____no_output_____
MIT
preprocessing_pipeline.ipynb
toskn/diploma
The replacement was successful. Now replace the double quotes.
changed_cells = []

def remean_points(cell):
    if str(cell) == '"':
        cell = "'"
        changed_cells.append(1)
    else:
        changed_cells.append(0)
    return cell

conll_df.sent_entry.apply(remean_points)
len(changed_cells)
[i for i, e in enumerate(changed_cells) if e == 1]
_____no_output_____
MIT
preprocessing_pipeline.ipynb
toskn/diploma
There are no double quotes, so the deletion can safely proceed.
# it makes sense to fix the tags only after getting results on the existing tagset
_____no_output_____
MIT
preprocessing_pipeline.ipynb
toskn/diploma
Preparing the document.
conll_df['join'] = (conll_df['doc_name'] + ' ' + conll_df['zeros'].astype(str) + ' '
                    + conll_df['word_num_sent'].astype(str) + ' ' + conll_df['sent_entry'] + ' '
                    + conll_df['pos'] + ' ' + conll_df['star1'] + ' ' + conll_df['empty1'] + ' '
                    + conll_df['empty2'] + ' ' + conll_df['empty3'] + ' ' + conll_df['empty4'] + ' '
                    + conll_df['empty5'] + ' ' + conll_df['star2'] + ' ' + conll_df['coref'])

for i, g in tqdm(conll_df.groupby('sentence_number')['join']):
    out = g.append(pd.Series({'new': np.nan}))
    out.to_csv('diploma_data_12m.txt', index=False, header=None, mode='a')
Exception ignored in: <function tqdm.__del__ at 0x7fba6d99a4d0> Traceback (most recent call last): File "/Users/egor/opt/anaconda3/lib/python3.7/site-packages/tqdm/std.py", line 1145, in __del__ self.close() File "/Users/egor/opt/anaconda3/lib/python3.7/site-packages/tqdm/std.py", line 1274, in close if self.last_print_t < self.start_t + self.delay: AttributeError: 'tqdm' object has no attribute 'last_print_t' Exception ignored in: <function tqdm.__del__ at 0x7fba6d99a4d0> Traceback (most recent call last): File "/Users/egor/opt/anaconda3/lib/python3.7/site-packages/tqdm/std.py", line 1145, in __del__ self.close() File "/Users/egor/opt/anaconda3/lib/python3.7/site-packages/tqdm/std.py", line 1274, in close if self.last_print_t < self.start_t + self.delay: AttributeError: 'tqdm' object has no attribute 'last_print_t' Exception ignored in: <function tqdm.__del__ at 0x7fba6d99a4d0> Traceback (most recent call last): File "/Users/egor/opt/anaconda3/lib/python3.7/site-packages/tqdm/std.py", line 1145, in __del__ self.close() File "/Users/egor/opt/anaconda3/lib/python3.7/site-packages/tqdm/std.py", line 1274, in close if self.last_print_t < self.start_t + self.delay: AttributeError: 'tqdm' object has no attribute 'last_print_t' 0%| | 0/748278 [00:00<?, ?it/s]/Users/egor/opt/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:3: FutureWarning: The signature of `Series.to_csv` was aligned to that of `DataFrame.to_csv`, and argument 'header' will change its default value from False to True: please pass an explicit value to suppress this warning. This is separate from the ipykernel package so we can avoid doing imports until 100%|██████████| 748278/748278 [29:25<00:00, 423.76it/s]
MIT
preprocessing_pipeline.ipynb
toskn/diploma
Splitting into subsets for the BERT tasks.
from sklearn.model_selection import train_test_split

RANDOM_SEED = 1
np.random.seed(RANDOM_SEED)

!pwd
/Users/egor/Desktop/SemanticHub/diploma
MIT
preprocessing_pipeline.ipynb
toskn/diploma
Unfortunately the notebook crashed, so the document has to be loaded again from scratch.
path = 'diploma_data_12m.txt'
with open(path) as f:
    contents = f.readlines()

contents[:5]
len(contents)

# removal
# pattern = re.compile('\"')
conll = []
for row in tqdm(contents):
    row = re.sub(r'"', '', row)
    conll.append(row)

len(conll)
_____no_output_____
MIT
preprocessing_pipeline.ipynb
toskn/diploma
The extra artifact quotes have been cleaned out.
# file with all the entries
with open('conll_clear_12m.txt', 'w') as f:
    f.writelines(conll)
_____no_output_____
MIT
preprocessing_pipeline.ipynb
toskn/diploma
In the cells below I will fill in the **conll[ ]** slice indices so that the train/test/dev files can be created manually and quickly; this way we avoid cutting sentences in half. train = 95%, dev = 2.5%, test = 2.5%.
# файл test # group 1 паттерна будет имя документа, ноль и пробелы для уверенности, но матч должен и без них работать pattern = re.compile(r'(.*xml) 0') # заполним позицию предыдущего файла первым файлом в списке для удобства. previous_filename = pattern.match(conll[0]).group(1) # счетчик документов doc_num = 1 with open('test.russian_12m_comments.v4_gold_conll', 'w') as f: # запишем первую строку f.write('#begin document (%d); part 0\n' % doc_num) for row in tqdm(conll[:317027]): # если строка просто разделитель предложений, пропускаем ее if row == '\n': f.write(row) continue # получаем имя документа, на строку которого сейчас смотрим now_filename = pattern.match(row).group(1) if now_filename == previous_filename: f.write(row) previous_filename = now_filename # вроде неопасно, копирование для строк не нужно else: f.write('#end document\n') doc_num += 1 f.write('#begin document (%d); part 0\n' % doc_num) f.write(row) previous_filename = now_filename f.write('#end document\n') # файл dev # group 1 паттерна будет имя документа, ноль и пробелы для уверенности, но матч должен и без них работать pattern = re.compile(r'(.*xml) 0') # заполним позицию предыдущего файла первым файлом в списке для удобства, # позиция +1 от предыдущей для пропуска \n previous_filename = pattern.match(conll[317028]).group(1) # счетчик документов doc_num = 1 with open('dev.russian_12m_comments.v4_gold_conll', 'w') as f: # запишем первую строку f.write('#begin document (%d); part 0\n' % doc_num) for row in tqdm(conll[317028:634212]): # если строка просто разделитель предложений, пропускаем ее if row == '\n': f.write(row) continue # получаем имя документа, на строку которого сейчас смотрим now_filename = pattern.match(row).group(1) if now_filename == previous_filename: f.write(row) previous_filename = now_filename # вроде неопасно, копирование для строк не нужно else: f.write('#end document\n') doc_num += 1 f.write('#begin document (%d); part 0\n' % doc_num) f.write(row) previous_filename = now_filename # файл train # group 1 паттерна будет имя документа, ноль и пробелы для уверенности, но матч должен и без них работать pattern = re.compile(r'(.*xml) 0') # заполним позицию предыдущего файла первым файлом в списке для удобства. previous_filename = pattern.match(conll[634213]).group(1) # счетчик документов doc_num = 1 with open('train.russian_12m_comments.v4_gold_conll', 'w') as f: # запишем первую строку f.write('#begin document (%d); part 0\n' % doc_num) for row in tqdm(conll[634213:]): # если строка просто разделитель предложений, пропускаем ее if row == '\n': f.write(row) continue # получаем имя документа, на строку которого сейчас смотрим now_filename = pattern.match(row).group(1) if now_filename == previous_filename: f.write(row) previous_filename = now_filename # вроде неопасно, копирование для строк не нужно else: f.write('#end document\n') doc_num += 1 f.write('#begin document (%d); part 0\n' % doc_num) f.write(row) previous_filename = now_filename col_names=['doc_name', 'zeros', 'word_num_sent', 'sent_entry', 'pos', 'star1', 'empty1', 'empty2', 'empty3', 'empty4', 'empty5', 'star2', 'coref'] conll_df = pd.read_csv('conll_clear_12m.txt', sep=' ', engine='python', names=col_names) # пока поделим выборку просто по численным размерам df_train, df_test = train_test_split(conll_df, test_size=0.1, random_state=RANDOM_SEED, shuffle=False) df_val, df_test = train_test_split(df_test, test_size=0.5, random_state=RANDOM_SEED, shuffle=False) df_train.shape, df_val.shape, df_test.shape
_____no_output_____
MIT
preprocessing_pipeline.ipynb
toskn/diploma
### DBT
1. Activate the virtual environment, because airflow and dbt both have a lot of dependencies.
source /home/flo/dbt-env/bin/activate
_____no_output_____
MIT
notebooks/bash.ipynb
elcolumbio/waipawama
2. Compile your SQL model into a runnable query; it's nice for debugging.
dbt compile --vars 'timespan: 2018-10'
_____no_output_____
MIT
notebooks/bash.ipynb
elcolumbio/waipawama
3. If you only want to run one model at a time:
dbt run --models bankkonto_monthly --vars 'timespan: 2018-01'
_____no_output_____
MIT
notebooks/bash.ipynb
elcolumbio/waipawama
Config
class Config:
    n_folds = 10
    random_state = 42
    tbs = 1024
    vbs = 512
    data_path = "data"
    result_path = "results"
    models_path = "models"
_____no_output_____
MIT
work2.ipynb
Mo5mami/kinasa_2nd_place
plot and util
def write_to_txt(file_name, column):
    with open(file_name, 'w') as f:
        for item in column:
            f.write("%s\n" % item)
_____no_output_____
MIT
work2.ipynb
Mo5mami/kinasa_2nd_place
Load data
train = pd.read_csv(os.path.join(Config.data_path, "train.csv"))
test = pd.read_csv(os.path.join(Config.data_path, "test.csv"))
aae = pd.read_csv(os.path.join(Config.data_path, "amino_acid_embeddings.csv"))
submission = pd.read_csv(os.path.join(Config.data_path, "SampleSubmission.csv"))
_____no_output_____
MIT
work2.ipynb
Mo5mami/kinasa_2nd_place
Prepare and split data
train["Sequence_len"]=train["Sequence"].apply(lambda x : len(x)) test["Sequence_len"]=test["Sequence"].apply(lambda x : len(x)) max_seq_length = 550 # max seq length in this data set is 550 #stratified k fold train["folds"]=-1 kf = StratifiedKFold(n_splits=Config.n_folds, random_state=Config.random_state, shuffle=True) for fold, (_, val_index) in enumerate(kf.split(train,train["target"])): train.loc[val_index, "folds"] = fold train.head() # reduce seq length if max_seq_length>550 : train["Sequence"] = train["Sequence"].apply(lambda x: "".join(list(x)[0:max_seq_length])) test["Sequence"] = test["Sequence"].apply(lambda x: "".join(list(x)[0:max_seq_length])) voc_set = set(['P', 'V', 'I', 'K', 'N', 'B', 'F', 'Y', 'E', 'W', 'R', 'D', 'X', 'S', 'C', 'U', 'Q', 'A', 'M', 'H', 'L', 'G', 'T']) voc_set_map = { k:v for k , v in zip(voc_set,range(1,len(voc_set)+1))} number_of_class = train["target"].nunique() def encode(text_tensor, label): encoded_text = [ voc_set_map[e] for e in list(text_tensor.numpy().decode())] return encoded_text, label def encode_map_fn(text, label): # py_func doesn't set the shape of the returned tensors. encoded_text, label = tf.py_function(encode, inp=[text, label], Tout=(tf.int64, tf.int64)) encoded_text.set_shape([None]) label=tf.one_hot(label,number_of_class) label.set_shape([number_of_class]) return encoded_text, label def get_data_loader(file,batch_size,labels): label_data=tf.data.Dataset.from_tensor_slices(labels) data_set=tf.data.TextLineDataset(file) data_set=tf.data.Dataset.zip((data_set,label_data)) data_set=data_set.repeat() data_set = data_set.shuffle(len(labels)) data_set=data_set.map(encode_map_fn,tf.data.experimental.AUTOTUNE) data_set=data_set.padded_batch(batch_size) data_set = data_set.prefetch(buffer_size=tf.data.experimental.AUTOTUNE) return data_set def get_data_loader_test(file,batch_size,labels): label_data=tf.data.Dataset.from_tensor_slices(labels.target) data_set=tf.data.TextLineDataset(file) data_set=tf.data.Dataset.zip((data_set,label_data)) data_set=data_set.map(encode_map_fn,tf.data.experimental.AUTOTUNE) data_set=data_set.padded_batch(batch_size) data_set = data_set.prefetch(buffer_size=tf.data.experimental.AUTOTUNE) return data_set
_____no_output_____
MIT
work2.ipynb
Mo5mami/kinasa_2nd_place
Model
def model(): name = "seq" dropout_rate = 0.1 learning_rate = 0.001 sequnce = Input([None],name="sequnce") EMB_layer = Embedding(input_dim = len(voc_set)+1, output_dim = 64, name = "emb_layer") GRU_layer_2 = GRU(units=256, name = "gru_2", return_sequences = False) BIDIR_layer_2 = Bidirectional(GRU_layer_2, name="bidirectional_2") Dens_layer_1 = Dense(units=512, activation=relu, kernel_regularizer=None, bias_regularizer=None, name=name+"_dense_layer_1") Dens_layer_2 = Dense(units=256, activation=relu, kernel_regularizer=None, bias_regularizer=None, name=name+"_dense_layer_2") output = Dense(units=number_of_class, activation=softmax, kernel_regularizer=None, bias_regularizer=None, name=name+"_dense_layer_output") dropout_1 = Dropout(dropout_rate) emb_layer = EMB_layer(sequnce) logits = output(Dens_layer_2(dropout_1(Dens_layer_1(BIDIR_layer_2(emb_layer))))) model = tf.keras.Model(inputs={"sequnce":sequnce, },outputs=logits) optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate) #loss= tfa.losses.SigmoidFocalCrossEntropy(reduction=tf.keras.losses.Reduction.AUTO) loss=CategoricalCrossentropy() model.compile(optimizer=optimizer, loss=loss, metrics=[tf.keras.metrics.CategoricalAccuracy(name="Acc")]) model.summary() return model
_____no_output_____
MIT
work2.ipynb
Mo5mami/kinasa_2nd_place
training
def trainn(fold): model_path=f"model_{fold}.h5" df_train = train[train["folds"] != fold].reset_index(drop=True) df_valid = train[train["folds"] == fold].reset_index(drop=True) write_to_txt(f"data/train_{fold}.txt",df_train.Sequence) write_to_txt(f"data/valid_{fold}.txt",df_valid.Sequence) train_label=df_train["target"] valid_label=df_valid["target"] train_dl = get_data_loader(f"data/train_{fold}.txt",Config.tbs,train_label) valid_dl = get_data_loader(f"data/valid_{fold}.txt",Config.vbs,valid_label) checkpoint = tf.keras.callbacks.ModelCheckpoint(filepath=os.path.join(Config.models_path,model_path), save_weights_only=True,monitor = 'val_loss', save_best_only=True,mode="min", verbose=1) callbacks=[checkpoint] my_model = model() history = my_model.fit(train_dl, validation_data=valid_dl, epochs=15, verbose=1, batch_size=Config.tbs, validation_batch_size=Config.vbs, validation_steps=len(df_valid)//Config.vbs, steps_per_epoch=len(df_train)/Config.tbs, callbacks=callbacks ) def predict(fold): model_path=f"model_{fold}.h5" write_to_txt(f"data/test_{fold}.txt",test.Sequence) test["target"]=0 test_label=test["target"] test_dl = get_data_loader_test(f"data/test_{fold}.txt",Config.vbs,test) my_model = model() my_model.load_weights(os.path.join(Config.models_path,model_path)) prediction=my_model.predict(test_dl) return prediction trainn(2) p=predict(2) sub=test[["ID"]].copy() for i in range(number_of_class): sub["target_{}".format(i)]=p[:,i] sub.head() sub.to_csv(os.path.join(Config.result_path,"sub_p2_epoch15.csv"),index=False)
_____no_output_____
MIT
work2.ipynb
Mo5mami/kinasa_2nd_place
Dependencies
import json, warnings, shutil, glob
from jigsaw_utility_scripts import *
from scripts_step_lr_schedulers import *
from transformers import TFXLMRobertaModel, XLMRobertaConfig
from tensorflow.keras.models import Model
from tensorflow.keras import optimizers, metrics, losses, layers

SEED = 0
seed_everything(SEED)
warnings.filterwarnings("ignore")
_____no_output_____
MIT
Model backlog/Train/75-jigsaw-fold1-xlm-roberta-large-cls-tail.ipynb
dimitreOliveira/Jigsaw-Multilingual-Toxic-Comment-Classification
TPU configuration
strategy, tpu = set_up_strategy()
print("REPLICAS: ", strategy.num_replicas_in_sync)
AUTO = tf.data.experimental.AUTOTUNE
Running on TPU grpc://10.0.0.2:8470 REPLICAS: 8
MIT
Model backlog/Train/75-jigsaw-fold1-xlm-roberta-large-cls-tail.ipynb
dimitreOliveira/Jigsaw-Multilingual-Toxic-Comment-Classification
Load data
database_base_path = '/kaggle/input/jigsaw-data-split-roberta-192-ratio-2-upper/'
k_fold = pd.read_csv(database_base_path + '5-fold.csv')
valid_df = pd.read_csv("/kaggle/input/jigsaw-multilingual-toxic-comment-classification/validation.csv",
                       usecols=['comment_text', 'toxic', 'lang'])

print('Train samples: %d' % len(k_fold))
display(k_fold.head())
print('Validation samples: %d' % len(valid_df))
display(valid_df.head())

base_data_path = 'fold_1/'

# Unzip files
!tar -xvf /kaggle/input/jigsaw-data-split-roberta-192-ratio-2-upper/fold_1.tar.gz
Train samples: 400830
MIT
Model backlog/Train/75-jigsaw-fold1-xlm-roberta-large-cls-tail.ipynb
dimitreOliveira/Jigsaw-Multilingual-Toxic-Comment-Classification
Model parameters
base_path = '/kaggle/input/jigsaw-transformers/XLM-RoBERTa/'

config = {
    "MAX_LEN": 192,
    "BATCH_SIZE": 128,
    "EPOCHS": 4,
    "LEARNING_RATE": 1e-5,
    "ES_PATIENCE": None,
    "base_model_path": base_path + 'tf-xlm-roberta-large-tf_model.h5',
    "config_path": base_path + 'xlm-roberta-large-config.json'
}

with open('config.json', 'w') as json_file:
    json.dump(json.loads(json.dumps(config)), json_file)
_____no_output_____
MIT
Model backlog/Train/75-jigsaw-fold1-xlm-roberta-large-cls-tail.ipynb
dimitreOliveira/Jigsaw-Multilingual-Toxic-Comment-Classification
Learning rate schedule
lr_min = 1e-7
lr_start = 1e-7
lr_max = config['LEARNING_RATE']
step_size = len(k_fold[k_fold['fold_1'] == 'train']) // config['BATCH_SIZE']
total_steps = config['EPOCHS'] * step_size
hold_max_steps = 0
warmup_steps = step_size * 1
decay = .9997

rng = [i for i in range(0, total_steps, config['BATCH_SIZE'])]
y = [exponential_schedule_with_warmup(tf.cast(x, tf.float32), warmup_steps, hold_max_steps,
                                      lr_start, lr_max, lr_min, decay) for x in rng]

sns.set(style="whitegrid")
fig, ax = plt.subplots(figsize=(20, 6))
plt.plot(rng, y)
print("Learning rate schedule: {:.3g} to {:.3g} to {:.3g}".format(y[0], max(y), y[-1]))
Learning rate schedule: 1e-07 to 9.84e-06 to 1.06e-06
MIT
Model backlog/Train/75-jigsaw-fold1-xlm-roberta-large-cls-tail.ipynb
dimitreOliveira/Jigsaw-Multilingual-Toxic-Comment-Classification
Model
module_config = XLMRobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False)

def model_fn(MAX_LEN):
    input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
    attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')

    base_model = TFXLMRobertaModel.from_pretrained(config['base_model_path'], config=module_config)
    last_hidden_state, _ = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})

    cls_token = last_hidden_state[:, 0, :]

    output = layers.Dense(1, activation='sigmoid', name='output')(cls_token)

    model = Model(inputs=[input_ids, attention_mask], outputs=output)

    return model
_____no_output_____
MIT
Model backlog/Train/75-jigsaw-fold1-xlm-roberta-large-cls-tail.ipynb
dimitreOliveira/Jigsaw-Multilingual-Toxic-Comment-Classification
Train
# Load data x_train = np.load(base_data_path + 'x_train.npy') y_train = np.load(base_data_path + 'y_train_int.npy').reshape(x_train.shape[1], 1).astype(np.float32) x_valid_ml = np.load(database_base_path + 'x_valid.npy') y_valid_ml = np.load(database_base_path + 'y_valid.npy').reshape(x_valid_ml.shape[1], 1).astype(np.float32) #################### ADD TAIL #################### x_train = np.hstack([x_train, np.load(base_data_path + 'x_train_tail.npy')]) y_train = np.vstack([y_train, y_train]) step_size = x_train.shape[1] // config['BATCH_SIZE'] valid_step_size = x_valid_ml.shape[1] // config['BATCH_SIZE'] # Build TF datasets train_dist_ds = strategy.experimental_distribute_dataset(get_training_dataset(x_train, y_train, config['BATCH_SIZE'], AUTO, seed=SEED)) valid_dist_ds = strategy.experimental_distribute_dataset(get_validation_dataset(x_valid_ml, y_valid_ml, config['BATCH_SIZE'], AUTO, repeated=True, seed=SEED)) train_data_iter = iter(train_dist_ds) valid_data_iter = iter(valid_dist_ds) # Step functions @tf.function def train_step(data_iter): def train_step_fn(x, y): with tf.GradientTape() as tape: probabilities = model(x, training=True) loss = loss_fn(y, probabilities) grads = tape.gradient(loss, model.trainable_variables) optimizer.apply_gradients(zip(grads, model.trainable_variables)) train_auc.update_state(y, probabilities) train_loss.update_state(loss) for _ in tf.range(step_size): strategy.experimental_run_v2(train_step_fn, next(data_iter)) @tf.function def valid_step(data_iter): def valid_step_fn(x, y): probabilities = model(x, training=False) loss = loss_fn(y, probabilities) valid_auc.update_state(y, probabilities) valid_loss.update_state(loss) for _ in tf.range(valid_step_size): strategy.experimental_run_v2(valid_step_fn, next(data_iter)) # Train model with strategy.scope(): model = model_fn(config['MAX_LEN']) optimizer = optimizers.Adam(learning_rate=lambda: exponential_schedule_with_warmup(tf.cast(optimizer.iterations, tf.float32), warmup_steps, hold_max_steps, lr_start, lr_max, lr_min, decay)) loss_fn = losses.binary_crossentropy train_auc = metrics.AUC() valid_auc = metrics.AUC() train_loss = metrics.Sum() valid_loss = metrics.Sum() metrics_dict = {'loss': train_loss, 'auc': train_auc, 'val_loss': valid_loss, 'val_auc': valid_auc} history = custom_fit(model, metrics_dict, train_step, valid_step, train_data_iter, valid_data_iter, step_size, valid_step_size, config['BATCH_SIZE'], config['EPOCHS'], config['ES_PATIENCE'], save_last=False) # model.save_weights('model.h5') # Make predictions # x_train = np.load(base_data_path + 'x_train.npy') # x_valid = np.load(base_data_path + 'x_valid.npy') x_valid_ml_eval = np.load(database_base_path + 'x_valid.npy') # train_preds = model.predict(get_test_dataset(x_train, config['BATCH_SIZE'], AUTO)) # valid_preds = model.predict(get_test_dataset(x_valid, config['BATCH_SIZE'], AUTO)) valid_ml_preds = model.predict(get_test_dataset(x_valid_ml_eval, config['BATCH_SIZE'], AUTO)) # k_fold.loc[k_fold['fold_1'] == 'train', 'pred_1'] = np.round(train_preds) # k_fold.loc[k_fold['fold_1'] == 'validation', 'pred_1'] = np.round(valid_preds) valid_df['pred_1'] = valid_ml_preds # Fine-tune on validation set #################### ADD TAIL #################### x_valid_ml_tail = np.hstack([x_valid_ml, np.load(database_base_path + 'x_valid_tail.npy')]) y_valid_ml_tail = np.vstack([y_valid_ml, y_valid_ml]) valid_step_size_tail = x_valid_ml_tail.shape[1] // config['BATCH_SIZE'] # Build TF datasets train_ml_dist_ds = 
strategy.experimental_distribute_dataset(get_training_dataset(x_valid_ml_tail, y_valid_ml_tail, config['BATCH_SIZE'], AUTO, seed=SEED)) train_ml_data_iter = iter(train_ml_dist_ds) history_ml = custom_fit(model, metrics_dict, train_step, valid_step, train_ml_data_iter, valid_data_iter, valid_step_size_tail, valid_step_size, config['BATCH_SIZE'], 1, config['ES_PATIENCE'], save_last=False) model.save_weights('model_ml.h5') # Make predictions valid_ml_preds = model.predict(get_test_dataset(x_valid_ml_eval, config['BATCH_SIZE'], AUTO)) valid_df['pred_ml_1'] = valid_ml_preds ### Delete data dir shutil.rmtree(base_data_path)
Train for 5010 steps, validate for 62 steps

EPOCH 1/4
time: 1715.0s loss: 0.2442 auc: 0.9590 val_loss: 0.2856 val_auc: 0.9211
EPOCH 2/4
time: 1520.0s loss: 0.1623 auc: 0.9816 val_loss: 0.2865 val_auc: 0.9164
EPOCH 3/4
time: 1519.8s loss: 0.1449 auc: 0.9852 val_loss: 0.3106 val_auc: 0.9086
EPOCH 4/4
time: 1520.0s loss: 0.1406 auc: 0.9860 val_loss: 0.2875 val_auc: 0.9180
Training finished

Train for 125 steps, validate for 62 steps

EPOCH 1/1
time: 1623.4s loss: 7.3732 auc: 0.9554 val_loss: 0.1360 val_auc: 0.9772
Training finished
MIT
Model backlog/Train/75-jigsaw-fold1-xlm-roberta-large-cls-tail.ipynb
dimitreOliveira/Jigsaw-Multilingual-Toxic-Comment-Classification
Model loss graph
plot_metrics(history)

# ML fine-tunned preds
plot_metrics(history_ml)
_____no_output_____
MIT
Model backlog/Train/75-jigsaw-fold1-xlm-roberta-large-cls-tail.ipynb
dimitreOliveira/Jigsaw-Multilingual-Toxic-Comment-Classification
Model evaluation
# display(evaluate_model(k_fold, 1, label_col='toxic_int').style.applymap(color_map))
_____no_output_____
MIT
Model backlog/Train/75-jigsaw-fold1-xlm-roberta-large-cls-tail.ipynb
dimitreOliveira/Jigsaw-Multilingual-Toxic-Comment-Classification
Confusion matrix
# train_set = k_fold[k_fold['fold_1'] == 'train'] # validation_set = k_fold[k_fold['fold_1'] == 'validation'] # plot_confusion_matrix(train_set['toxic_int'], train_set['pred_1'], # validation_set['toxic_int'], validation_set['pred_1'])
_____no_output_____
MIT
Model backlog/Train/75-jigsaw-fold1-xlm-roberta-large-cls-tail.ipynb
dimitreOliveira/Jigsaw-Multilingual-Toxic-Comment-Classification
Model evaluation by language
display(evaluate_model_lang(valid_df, 1).style.applymap(color_map))

# ML fine-tuned preds
display(evaluate_model_lang(valid_df, 1, pred_col='pred_ml').style.applymap(color_map))
_____no_output_____
MIT
Model backlog/Train/75-jigsaw-fold1-xlm-roberta-large-cls-tail.ipynb
dimitreOliveira/Jigsaw-Multilingual-Toxic-Comment-Classification
Visualize predictions
pd.set_option('max_colwidth', 120) print('English validation set') display(k_fold[['comment_text', 'toxic'] + [c for c in k_fold.columns if c.startswith('pred')]].head(10)) print('Multilingual validation set') display(valid_df[['comment_text', 'toxic'] + [c for c in valid_df.columns if c.startswith('pred')]].head(10))
English validation set
MIT
Model backlog/Train/75-jigsaw-fold1-xlm-roberta-large-cls-tail.ipynb
dimitreOliveira/Jigsaw-Multilingual-Toxic-Comment-Classification
Test set predictions
x_test = np.load(database_base_path + 'x_test.npy') test_preds = model.predict(get_test_dataset(x_test, config['BATCH_SIZE'], AUTO)) submission = pd.read_csv('/kaggle/input/jigsaw-multilingual-toxic-comment-classification/sample_submission.csv') submission['toxic'] = test_preds submission.to_csv('submission.csv', index=False) display(submission.describe()) display(submission.head(10))
_____no_output_____
MIT
Model backlog/Train/75-jigsaw-fold1-xlm-roberta-large-cls-tail.ipynb
dimitreOliveira/Jigsaw-Multilingual-Toxic-Comment-Classification
Sparse GP Regression

14th January 2014 James Hensman

29th September 2014 Neil Lawrence (added sub-titles, notes and some references).

This example shows the variational compression effect of so-called 'sparse' Gaussian processes. In particular we show how using the variational free energy framework of [Titsias, 2009](http://jmlr.csail.mit.edu/proceedings/papers/v5/titsias09a/titsias09a.pdf) we can compress a Gaussian process fit. First we set up the notebook with a fixed random seed, and import GPy.
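For reference (this note is not part of the original notebook), the collapsed variational bound of Titsias (2009) that underlies the sparse fits below can be written, with $\mathbf{Q}_{nn} = \mathbf{K}_{nm}\mathbf{K}_{mm}^{-1}\mathbf{K}_{mn}$, as

$$\log p(\mathbf{y}) \;\ge\; \log \mathcal{N}\!\left(\mathbf{y}\,\middle|\,\mathbf{0},\;\mathbf{Q}_{nn} + \sigma^2 \mathbf{I}\right) \;-\; \frac{1}{2\sigma^2}\,\mathrm{tr}\!\left(\mathbf{K}_{nn} - \mathbf{Q}_{nn}\right).$$

The trace term penalises inducing-point configurations that summarise the training inputs poorly, which is the effect explored in the fits below.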
%matplotlib inline %config InlineBackend.figure_format = 'svg' import GPy import numpy as np np.random.seed(101)
_____no_output_____
BSD-3-Clause
GPy/sparse_gp_regression.ipynb
olumighty1/notebook
Sample Function

Now we'll sample a Gaussian process regression problem directly from a Gaussian process prior. We'll use an exponentiated quadratic covariance function with a lengthscale and variance of 1 and sample 50 equally spaced points.
N = 50 noise_var = 0.05 X = np.linspace(0,10,50)[:,None] k = GPy.kern.RBF(1) y = np.random.multivariate_normal(np.zeros(N),k.K(X)+np.eye(N)*np.sqrt(noise_var)).reshape(-1,1)
_____no_output_____
BSD-3-Clause
GPy/sparse_gp_regression.ipynb
olumighty1/notebook
Full Gaussian Process Fit

Now we use GPy to optimize the parameters of a Gaussian process given the sampled data. Here there are no approximations; we simply fit the full Gaussian process.
m_full = GPy.models.GPRegression(X,y)
m_full.optimize('bfgs')
m_full.plot()
print(m_full)
Name : GP regression Objective : 50.0860723468 Number of Parameters : 3 Number of Optimization Parameters : 3 Updates : True Parameters: GP_regression.  | value | constraints | priors rbf.variance  | 1.65824860473 | +ve | rbf.lengthscale  | 1.11215383162 | +ve | Gaussian_noise.variance | 0.236134236859 | +ve |
BSD-3-Clause
GPy/sparse_gp_regression.ipynb
olumighty1/notebook
A Poor `Sparse' GP Fit

Now we construct a sparse Gaussian process. This model uses the inducing variable approximation and initialises the inducing variables in two 'clumps'. Our initial fit uses the *correct* covariance function parameters, but a badly placed set of inducing points.
Z = np.hstack((np.linspace(2.5,4.,3),np.linspace(7,8.5,3)))[:,None]
m = GPy.models.SparseGPRegression(X,y,Z=Z)
m.likelihood.variance = noise_var
m.plot()
print(m)
Name : sparse gp Objective : 260.809828016 Number of Parameters : 9 Number of Optimization Parameters : 9 Updates : True Parameters: sparse_gp.  | value | constraints | priors inducing inputs  | (6, 1) | | rbf.variance  | 1.0 | +ve | rbf.lengthscale  | 1.0 | +ve | Gaussian_noise.variance | 0.05 | +ve |
BSD-3-Clause
GPy/sparse_gp_regression.ipynb
olumighty1/notebook
Notice how the fit is reasonable where there are inducing points, but bad elsewhere.

Optimizing Covariance Parameters

Next, we will try to find the optimal covariance function parameters, given that the inducing inputs are held in their current location.
m.inducing_inputs.fix()
m.optimize('bfgs')
m.plot()
print(m)
Name : sparse gp Objective : 53.9735537142 Number of Parameters : 9 Number of Optimization Parameters : 3 Updates : True Parameters: sparse_gp.  | value | constraints | priors inducing inputs  | (6, 1) | fixed | rbf.variance  | 1.73905117564 | +ve | rbf.lengthscale  | 3.02312650701 | +ve | Gaussian_noise.variance | 0.372990010041 | +ve |
BSD-3-Clause
GPy/sparse_gp_regression.ipynb
olumighty1/notebook
The poor location of the inducing inputs causes the model to 'underfit' the data: the lengthscale is much longer than that of the full GP, and the noise variance is larger. This is because the Kullback-Leibler term in the objective free energy dominates, and a longer lengthscale improves the quality of the approximation at the badly placed inducing inputs.

Optimizing Inducing Inputs

First we try optimizing the location of the inducing inputs to fix the problem; however, we still get a larger lengthscale than the Gaussian process we sampled from (or the full GP fit we did at the beginning).
m.randomize() m.Z.unconstrain() m.optimize('bfgs') m.plot()
_____no_output_____
BSD-3-Clause
GPy/sparse_gp_regression.ipynb
olumighty1/notebook
The inducing points spread out to cover the data space, but the fit isn't quite there. We can try increasing the number of inducing points.

Train with More Inducing Points

Now we try 12 inducing points, rather than the original six. We then compare with the full Gaussian process likelihood.
Z = np.random.rand(12,1)*12
m = GPy.models.SparseGPRegression(X,y,Z=Z)
m.optimize('bfgs')
m.plot()
m_full.plot()
print(m.log_likelihood(), m_full.log_likelihood())
[[-50.09844715]] -50.0860723468
BSD-3-Clause
GPy/sparse_gp_regression.ipynb
olumighty1/notebook
1. Experimentally prove weak law of large numbers. $$\lim_{n\to \infty} P[\vert M_n-m_x\vert>\epsilon]=0$$ Where, $M_n$ is the sample mean, $m_x$ is the actual mean, $\epsilon$ is a small positive number and $n$ is the number of sample points.
# Weak law of large numbers:
# estimate P[|Mn - mx| < epsilon] for n = 1000 samples of N(0, 1), where mx = 0
import numpy as np  # assumed imported earlier in the lab; harmless to repeat

n_trials = 1000
epsilon = 0.1
count = 0
for i in range(n_trials):
    sample = np.random.normal(0, 1, 1000)   # n = 1000
    meanv = np.mean(sample)                  # sample mean Mn
    if abs(meanv - 0) < epsilon:             # |Mn - mx| < epsilon
        count += 1

prob = count / n_trials
print(prob)   # close to 1, so P[|Mn - mx| > epsilon] is close to 0
0.997
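To make the limit over $n$ explicit, here is a small extension (my sketch, not part of the original lab) that estimates $P[\vert M_n-m_x\vert>\epsilon]$ for several sample sizes and shows it shrinking towards zero:

```
# Sketch: P[|Mn - mx| > epsilon] as a function of n for N(0, 1) samples (mx = 0)
import numpy as np

epsilon = 0.05
for n in [10, 100, 1000, 10000]:
    means = np.random.normal(0, 1, (2000, n)).mean(axis=1)   # 2000 independent sample means
    print(n, np.mean(np.abs(means) > epsilon))               # estimated exceedance probability
```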
MIT
CT/314_statistical_communication_theory/mv_joshi/2019/lab_sessions/lab8.ipynb
u-square2/AcadVault
2. Experimentally prove strong law of large numbers. $$P[\lim_{n\to \infty} M_n=m_x]=1,$$ Where, $M_n$ is the sample mean, $m_x$ is the actual mean, and $n$ is the number of sample points.
# Strong law of large numbers:
# the running sample mean Mn converges to the actual mean (0 for N(0, 1))
import numpy as np                 # assumed imported earlier in the lab
import matplotlib.pyplot as plt    # needed for the plots below

n = 10000
itr = np.arange(n)
actual_mean = np.zeros(n)
sample = np.random.normal(0, 1, n)

Mn = np.zeros(sample.shape)
for i in range(n):
    Mn[i] = np.sum(sample[0:i + 1]) / (i + 1)   # mean of the first i + 1 samples

plt.figure()
plt.plot(itr, actual_mean, label='actual_mean')
plt.plot(itr, Mn, label='Mn')
plt.legend()

plt.figure()
plt.plot(itr, Mn - actual_mean, label='error')
plt.legend()
_____no_output_____
MIT
CT/314_statistical_communication_theory/mv_joshi/2019/lab_sessions/lab8.ipynb
u-square2/AcadVault
3. Experimentally prove central limit theorem. $$\frac{S_N-E[S_N]}{\sqrt{var(S_N)}}=\frac{\sum_{i=1}^{N}{X_i}-\sum_{i=1}^{N}{E[X_i]}}{\sqrt{\sum_{i=1}^{N}{var[X_i]}}}\to N(0,1)$$,Where, $X_1, X_2, . . , X_N$ are random variables with mean $E[X_i]$ and variance $var[X_i]$, $i=1,2,…, N$.
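For the uniform samples used in the cell below, the quantities in the statement can be made explicit (a worked instance, with $X_i \sim \mathrm{Uniform}(0,10)$ i.i.d. and $N=100$):

$$E[X_i]=5, \qquad var[X_i]=\frac{(10-0)^2}{12}=\frac{100}{12},$$

$$\frac{S_N-E[S_N]}{\sqrt{var(S_N)}}=\frac{S_N-5N}{\sqrt{100N/12}}\to N(0,1).$$

The code estimates $E[S_N]$ and $var(S_N)$ empirically instead of plugging in these theoretical values, which gives the same standardised histogram.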
N = 100      # number of random variables X_1, ..., X_N summed in each realisation
M = 10000    # number of independent realisations

# M x N matrix: each row is one realisation of X_1, ..., X_N ~ Uniform(0, 10)
x = np.random.uniform(0, 10, [M, N])

plt.figure()
plt.hist(x[:, 0], density=True)           # a single X_i is uniform

Sn = np.sum(x, axis=1)                     # S_N for each of the M realisations
Sn_mean = np.mean(Sn)
sig = np.var(Sn)
lim = (Sn - Sn_mean) / np.sqrt(sig)        # standardised sum

plt.figure()
plt.hist(lim, density=True)                # approaches the N(0, 1) density

t = np.linspace(-4, 4, 1000)
ft = (1 / np.sqrt(2 * np.pi)) * np.exp(-t**2 / 2)   # standard normal pdf
plt.plot(t, ft)
_____no_output_____
MIT
CT/314_statistical_communication_theory/mv_joshi/2019/lab_sessions/lab8.ipynb
u-square2/AcadVault
Generate a NumPy array containing 301 values from 0 to 3 and assign it to x. Then transform x by applying the function `y = -(x^2) + 3x - 1` and assign the resulting array of transformed values to y.
import numpy as np

def transform(x):
    return -(x**2) + (3 * x) - 1

x = np.linspace(0, 3, 301)   # 301 values from 0 to 3
y = transform(x)             # y = -(x^2) + 3x - 1
_____no_output_____
Unlicense
dataquest/notebooks/lesson_linear_nonlinear_functions/Lesson - Linear and Nonlinear Functions.ipynb
monocongo/datascience_portfolio
Plot the X vs. Y using a simple matplotlib line plot:
%matplotlib inline plt.plot(x, y)
_____no_output_____
Unlicense
dataquest/notebooks/lesson_linear_nonlinear_functions/Lesson - Linear and Nonlinear Functions.ipynb
monocongo/datascience_portfolio
Hash Tables

**Attendance code: 4482**

Arrays have O(1) data retrieval _if you have the index_.

If you have to search for the data/index, arrays are O(n). That's a bummer.

What if we had a magic function that would tell you the index for a given "key"?

Store data as _key/value pairs_.

With `dict`s:

```
d = {}
d["key"] = value
```

Operations on Hash Tables

* GET - retrieve a value from the table
* PUT - store a value in the table
* DELETE - delete a value from the table

Should all be O(1) over the number of keys/values.

Structure

We'll have an array to hold the data. Values will be at specific indexes in the array.

We'll have something called a _hashing function_ that takes a key and turns it into an index. This tells us where to look in the array for the value.

This function is _deterministic_, meaning that the same key will always produce the same index.

Operations Part II

```
GET(key):
  index = hashing_function(key)
  return table[index]
```

```
PUT(key, value):
  index = hashing_function(key)
  table[index] = value
```

Hashing Function

Need some way to map from a string to a number. Preferably a unique-randomish number.
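The class in the next cell deliberately uses a naive hashing function; as a point of comparison, here is a sketch of DJB2, one of the "real" hash functions it name-drops (the 32-bit mask is my addition):

```
def djb2(key: str) -> int:
    """DJB2 string hash (Dan Bernstein): h = h * 33 + char code."""
    h = 5381
    for c in key:
        h = ((h << 5) + h) + ord(c)   # equivalent to h * 33 + ord(c)
    return h & 0xFFFFFFFF             # keep it in 32 bits

# Turning the hash into an index for a table of size 8, as below:
print(djb2("goatcount") % 8)
```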
d = {}

d["goatcount"] = 9
d["key"] = 'value'

print(d)

print(d["key"])  # should print 'value', should also take O(1) time over the number of keys

class HashTable:
    def __init__(self):
        self.table = [None] * 8  # Build an array of 8 elements to hold values

    def hashing_function(self, key):
        """
        Naive hashing function

        use a real one like DJB2 or FNV1
        """
        bignum = ""

        # O(n) over the length of the key
        # O(1) over the number of values in the table
        for c in key:
            bignum += str(ord(c))

        bignum = int(bignum)

        return bignum % len(self.table)

    def put(self, key, value):
        index = self.hashing_function(key)
        print(index)
        self.table[index] = value

    def get(self, key):
        index = self.hashing_function(key)
        return self.table[index]

ht = HashTable()

#print(ht.hashing_function("goatcount"))
#print(ht.hashing_function("hello, world"))

ht.put("goatcount", 9)
ht.put("hello!", "foo")
#ht.put("test", "bar")   # Causes a collision with "goatcount"

print(ht.table)

print(f"Value for goatcount: {ht.get('goatcount')}")  # Prints 9
print(f"Value for hello!: {ht.get('hello!')}")        # Prints "foo"
_____no_output_____
MIT
CS42_DS_&_A_1_M2_Hash_Tables_I.ipynb
juancaruizc/CS42-DS-A-1-M2-Hash-Tables-I
Applications of Hash Tables

Going to use `dict` for these.

```
d = {}

# PUT
d["key"] = value

# GET
print(d["key"])
```

Counting Items
#%%timeit a = [1,6,7,9,5,3,3,5,7,8,8,6,5,4,3,4,6,7,8,8,5,4,6,7,8,9,7] * 70 def counter1(): # O(n^2) for e in a: count = 0 for e2 in a: if e == e2: count += 1 #print(e,count) def counter2(): # O(n) count = {} for e in a: if e not in count: # Finding key `in` dictionary is O(1) count[e] = 0 count[e] += 1 print(count) counter2() a = [1,6,7,9,5,3,3,5,7,8,8,6,5,4,3,4,6,7,8,8,5,4,6,7,8,9,7] * 70 def counter2(): # O(n) count = {} for e in a: if e not in count: # Finding key `in` dictionary is O(1) count[e] = 0 count[e] += 1 # If you want to sort, first use dict.items() print(count) # sort by key sorted_count = sorted(count.items()) print(list(count.items())) for k, v in sorted_count: print(f"{k}: {v}") print("------------") # Sort by value """ def sort_by(e): return e[1] sorted_count = sorted(count.items(), key=sort_by) """ sorted_count = sorted(count.items(), key=lambda e: e[1]) # Same as above for k, v in sorted_count: print(f"{v:>3}: {k}") counter2() d = {} d["hi"] = 12 d["hi"] = 22 print(d["hi"])
22
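For reference, the standard library already packages this counting pattern; a minimal sketch with `collections.Counter`, equivalent to the `counter2` approach above:

```
from collections import Counter

a = [1,6,7,9,5,3,3,5,7,8,8,6,5,4,3,4,6,7,8,8,5,4,6,7,8,9,7] * 70

count = Counter(a)              # O(n) counting, backed by a hash table
print(count)
print(count.most_common(3))     # three most frequent items with their counts
```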
MIT
CS42_DS_&_A_1_M2_Hash_Tables_I.ipynb
juancaruizc/CS42-DS-A-1-M2-Hash-Tables-I
Your first neural network

In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
%matplotlib inline %config InlineBackend.figure_format = 'retina' import numpy as np import pandas as pd import matplotlib.pyplot as plt
_____no_output_____
MIT
nd101/p1-bike-sharing/DLND Your first neural network.ipynb
julianogalgaro/udacity
Load and prepare the data

A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
data_path = 'Bike-Sharing-Dataset/hour.csv' rides = pd.read_csv(data_path) rides.head()
_____no_output_____
MIT
nd101/p1-bike-sharing/DLND Your first neural network.ipynb
julianogalgaro/udacity
Checking out the data

This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the `cnt` column. You can see the first few rows of the data above.

Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
rides[:24*10].plot(x='dteday', y='cnt')
_____no_output_____
MIT
nd101/p1-bike-sharing/DLND Your first neural network.ipynb
julianogalgaro/udacity
Dummy variables

Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to `get_dummies()`.
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday'] for each in dummy_fields: dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False) rides = pd.concat([rides, dummies], axis=1) fields_to_drop = ['instant', 'dteday', 'season', 'weathersit', 'weekday', 'atemp', 'mnth', 'workingday', 'hr'] data = rides.drop(fields_to_drop, axis=1) data.head()
_____no_output_____
MIT
nd101/p1-bike-sharing/DLND Your first neural network.ipynb
julianogalgaro/udacity
Scaling target variables

To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.

The scaling factors are saved so we can go backwards when we use the network for predictions.
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed'] # Store scalings in a dictionary so we can convert back later scaled_features = {} for each in quant_features: mean, std = data[each].mean(), data[each].std() scaled_features[each] = [mean, std] data.loc[:, each] = (data[each] - mean)/std
_____no_output_____
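Going backwards is just the inverse of the same affine map; a one-line sketch using the `scaled_features` dictionary built above (the prediction cell near the end of the notebook does exactly this):

```
# Undo the standardisation for a column, e.g. the ride counts:
mean, std = scaled_features['cnt']
original_cnt = data['cnt'] * std + mean   # inverse of (x - mean) / std
```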
MIT
nd101/p1-bike-sharing/DLND Your first neural network.ipynb
julianogalgaro/udacity
Splitting the data into training, testing, and validation sets

We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
# Save data for approximately the last 21 days test_data = data[-21*24:] # Now remove the test data from the data set data = data[:-21*24] # Separate the data into features and targets target_fields = ['cnt', 'casual', 'registered'] features, targets = data.drop(target_fields, axis=1), data[target_fields] test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
_____no_output_____
MIT
nd101/p1-bike-sharing/DLND Your first neural network.ipynb
julianogalgaro/udacity
We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
# Hold out the last 60 days or so of the remaining data as a validation set train_features, train_targets = features[:-60*24], targets[:-60*24] val_features, val_targets = features[-60*24:], targets[-60*24:]
_____no_output_____
MIT
nd101/p1-bike-sharing/DLND Your first neural network.ipynb
julianogalgaro/udacity
Time to build the network

Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.

The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression; the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called *forward propagation*.

We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called *backpropagation*.

> **Hint:** You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.

Below, you have these tasks:

1. Implement the sigmoid function to use as the activation function. Set `self.activation_function` in `__init__` to your sigmoid function.
2. Implement the forward pass in the `train` method.
3. Implement the backpropagation algorithm in the `train` method, including calculating the output error.
4. Implement the forward pass in the `run` method.
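For reference (notation mine, not from the project template), the gradient expressions the completed implementation below ends up using are the standard ones: with $\sigma(x)=\frac{1}{1+e^{-x}}$ we have $\sigma'(x)=\sigma(x)\,(1-\sigma(x))$, and for the output activation $f(x)=x$ the derivative is simply $f'(x)=1$, so

$$\delta_o = y - \hat{y}, \qquad \delta_h = \left(W_{h\to o}^{\top}\,\delta_o\right) \odot h \odot (1-h),$$

$$W_{h\to o} \leftarrow W_{h\to o} + \eta\,\delta_o\,h^{\top}, \qquad W_{i\to h} \leftarrow W_{i\to h} + \eta\,\delta_h\,x^{\top},$$

where $x$ is the input, $h$ the hidden-layer output, $\hat{y}$ the network output, and $\eta$ the learning rate.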
class NeuralNetwork(object): def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Set number of nodes in input, hidden and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes # Initialize weights self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5, (self.hidden_nodes, self.input_nodes)) self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5, (self.output_nodes, self.hidden_nodes)) self.lr = learning_rate #### TODO: Set self.activation_function to your implemented sigmoid function #### # # Note: in Python, you can define a function with a lambda expression, # as shown below. self.activation_function = lambda x : 1 / (1 + np.exp(-x)) # Replace 0 with your sigmoid calculation. ### If the lambda code above is not something you're familiar with, # You can uncomment out the following three lines and put your # implementation there instead. # #def sigmoid(x): # return 0 # Replace 0 with your sigmoid calculation here #self.activation_function = sigmoid def train(self, inputs_list, targets_list): # Convert inputs list to 2d array inputs = np.array(inputs_list, ndmin=2).T targets = np.array(targets_list, ndmin=2).T #### Implement the forward pass here #### ### Forward pass ### # TODO: Hidden layer - Replace these values with your calculations. hidden_inputs = np.dot(self.weights_input_to_hidden,inputs) # signals into hidden layer hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer # TODO: Output layer - Replace these values with your calculations. final_inputs = np.dot(self.weights_hidden_to_output,hidden_outputs) # signals into final output layer final_outputs = final_inputs # signals from final output layer #### Implement the backward pass here #### ### Backward pass ### # TODO: Output error - Replace this value with your calculations. output_errors = targets - final_outputs # Output layer error is the difference between desired target and actual output. # TODO: Backpropagated error - Replace these values with your calculations. hidden_errors = np.dot(output_errors,self.weights_hidden_to_output) hidden_grad = hidden_outputs * (1.0 - hidden_outputs) # hidden layer gradients hidden_error_term = hidden_grad * hidden_errors.T # TODO: Update the weights - Replace these values with your calculations. self.weights_hidden_to_output += self.lr * output_errors * hidden_outputs.T # update hidden-to-output weights with gradient descent step self.weights_input_to_hidden += self.lr * hidden_error_term * inputs.T # update input-to-hidden weights with gradient descent step def run(self, inputs_list): # Run a forward pass through the network inputs = np.array(inputs_list, ndmin=2).T #### Implement the forward pass here #### # TODO: Hidden layer - replace these values with the appropriate calculations. hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer # TODO: Output layer - Replace these values with the appropriate calculations. final_inputs = np.dot(self.weights_hidden_to_output,hidden_outputs) # signals into final output layer final_outputs = final_inputs # signals from final output layer return final_outputs def MSE(y, Y): return np.mean((y-Y)**2)
_____no_output_____
MIT
nd101/p1-bike-sharing/DLND Your first neural network.ipynb
julianogalgaro/udacity
Training the network

Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.

You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.

Choose the number of epochs

This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.

Choose the learning rate

This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.

Choose the number of hidden nodes

The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
import sys

### Set the hyperparameters here ###
epochs = 1500
learning_rate = 0.01
hidden_nodes = 8
output_nodes = 1

N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)

losses = {'train':[], 'validation':[]}
for e in range(epochs):
    # Go through a random batch of 128 records from the training data set
    batch = np.random.choice(train_features.index, size=128)
    for record, target in zip(train_features.loc[batch].values,    # .loc instead of the removed .ix
                              train_targets.loc[batch]['cnt']):
        network.train(record, target)

    # Printing out the training progress
    train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
    val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
    sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
                     + "% ... Training loss: " + str(train_loss)[:5] \
                     + " ... Validation loss: " + str(val_loss)[:5])

    losses['train'].append(train_loss)
    losses['validation'].append(val_loss)

plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(top=1)   # ymax= was renamed to top= in newer matplotlib
_____no_output_____
MIT
nd101/p1-bike-sharing/DLND Your first neural network.ipynb
julianogalgaro/udacity
Check out your predictions

Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
fig, ax = plt.subplots(figsize=(8,4))

mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()

dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])   # .loc instead of the removed .ix
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
_____no_output_____
MIT
nd101/p1-bike-sharing/DLND Your first neural network.ipynb
julianogalgaro/udacity
OPTIONAL: Thinking about your results

(This question will not be evaluated in the rubric.) Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?

> **Note:** You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter

Your answer below

The model predicts a good part of the data well, but it fails toward the end of the year, probably because it has not captured the holiday period.

Unit tests

Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
import unittest inputs = [0.5, -0.2, 0.1] targets = [0.4] test_w_i_h = np.array([[0.1, 0.4, -0.3], [-0.2, 0.5, 0.2]]) test_w_h_o = np.array([[0.3, -0.1]]) class TestMethods(unittest.TestCase): ########## # Unit tests for data loading ########## def test_data_path(self): # Test that file path to dataset has been unaltered self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv') def test_data_loaded(self): # Test that data frame loaded self.assertTrue(isinstance(rides, pd.DataFrame)) ########## # Unit tests for network functionality ########## def test_activation(self): network = NeuralNetwork(3, 2, 1, 0.5) # Test that the activation function is a sigmoid self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5)))) def test_train(self): # Test that weights are updated correctly on training network = NeuralNetwork(3, 2, 1, 0.5) network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() network.train(inputs, targets) self.assertTrue(np.allclose(network.weights_hidden_to_output, np.array([[ 0.37275328, -0.03172939]]))) self.assertTrue(np.allclose(network.weights_input_to_hidden, np.array([[ 0.10562014, 0.39775194, -0.29887597], [-0.20185996, 0.50074398, 0.19962801]]))) def test_run(self): # Test correctness of run method network = NeuralNetwork(3, 2, 1, 0.5) network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() self.assertTrue(np.allclose(network.run(inputs), 0.09998924)) suite = unittest.TestLoader().loadTestsFromModule(TestMethods()) unittest.TextTestRunner().run(suite)
..... ---------------------------------------------------------------------- Ran 5 tests in 0.014s OK
MIT
nd101/p1-bike-sharing/DLND Your first neural network.ipynb
julianogalgaro/udacity
Generation of Test Data

In this notebook we generate some test data for the interactive histogram. We create both unbinned energy values and binned efficiency curves.
import numpy as np import matplotlib.pyplot as plt %config InlineBackend.figure_formats = ['svg']
_____no_output_____
CC-BY-4.0
data/test_data/generate_data.ipynb
fewagner/excess
Energy Data

We generate data randomly sampled from some standard distributions for four exemplary experiments.
exp_A = np.concatenate((np.random.exponential(scale=1, size=100000), np.random.normal(loc=5,scale=0.2, size=100000))) exp_B = np.concatenate((np.random.exponential(scale=0.5, size=50000), np.random.uniform(low=0,high=15, size=20000))) exp_C = np.concatenate((np.random.exponential(scale=2, size=150000), np.random.normal(loc=10,scale=2, size=150000))) exp_D = np.concatenate((np.random.exponential(scale=5, size=50000), np.random.normal(loc=15,scale=0.7, size=4000)))
_____no_output_____
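To sanity-check the generated spectra, one could overlay histograms of the four arrays; this is a sketch and not part of the original notebook (the binning and log scale are arbitrary choices):

```
plt.figure(figsize=(8, 4))
bins = np.linspace(0, 20, 200)   # arbitrary energy binning for the check
for name, data in [('A', exp_A), ('B', exp_B), ('C', exp_C), ('D', exp_D)]:
    plt.hist(data, bins=bins, histtype='step', label=f'experiment {name}')
plt.xlabel('energy (arb. units)')
plt.ylabel('counts per bin')
plt.yscale('log')
plt.legend()
plt.show()
```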
CC-BY-4.0
data/test_data/generate_data.ipynb
fewagner/excess