6) Create A New Model

Create a new model called `second_fashion_model` in the cell below. Make some changes so it is different from `fashion_model` that you've trained above. The change could be using a different number of layers, a different number of convolutions in the layers, etc. Define the model, compile it and fit it in the cell below. See how its validation score compares to that of the original model. | # Your code below
second_fashion_model = Sequential()
second_fashion_model.add(Conv2D(16, kernel_size=3, activation='relu', input_shape=(img_rows, img_cols, 1)))
second_fashion_model.add(Conv2D(24, kernel_size=2, activation='relu'))
second_fashion_model.add(Conv2D(32, kernel_size=2, activation='relu'))
second_fashion_model.add(Conv2D(24, kernel_size=3, activation='relu'))
second_fashion_model.add(Flatten())
second_fashion_model.add(Dense(100, activation='sigmoid'))
second_fashion_model.add(Dense(num_classes, activation='softmax'))
second_fashion_model.compile(loss=keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy'])
second_fashion_model.fit(x, y, batch_size=100, epochs=6, validation_split=0.2)
# q_6.check()
second_fashion_model.summary()
# q_6.solution() | _____no_output_____ | MIT | deep_learning/07-deep-learning-from-scratch.ipynb | drakearch/kaggle-courses |
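As an aside, the exercise's note about trying "a different number of layers" or "different number of convolutions" could also be explored with other kinds of changes. The sketch below is only an illustration, not part of the original exercise; it assumes the same `img_rows`, `img_cols`, `num_classes`, `x` and `y` variables defined in earlier cells, and swaps extra convolutions for pooling and dropout:

```python
# Hypothetical variant of the exercise model: pooling + dropout instead of more conv layers
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

third_fashion_model = Sequential()
third_fashion_model.add(Conv2D(32, kernel_size=3, activation='relu',
                               input_shape=(img_rows, img_cols, 1)))
third_fashion_model.add(MaxPooling2D(pool_size=2))      # downsample feature maps
third_fashion_model.add(Conv2D(64, kernel_size=3, activation='relu'))
third_fashion_model.add(Dropout(0.25))                  # regularization
third_fashion_model.add(Flatten())
third_fashion_model.add(Dense(128, activation='relu'))
third_fashion_model.add(Dense(num_classes, activation='softmax'))
third_fashion_model.compile(loss='categorical_crossentropy',
                            optimizer='adam', metrics=['accuracy'])
third_fashion_model.fit(x, y, batch_size=100, epochs=6, validation_split=0.2)
```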
[](https://colab.sandbox.google.com/github/kornia/tutorials/blob/master/source/hello_world_tutorial.ipynb)

Hello world: Planet Kornia

Welcome to Planet Kornia: a set of tutorials to learn about **Computer Vision** in [PyTorch](https://pytorch.org). This is the first tutorial, which shows how one can simply start loading images with [Torchvision](https://pytorch.org/vision), [Kornia](https://kornia.org) and [OpenCV](https://opencv.org). | %%capture
!pip install kornia
import cv2
from matplotlib import pyplot as plt
import numpy as np
import torch
import torchvision
import kornia as K | _____no_output_____ | Apache-2.0 | source/hello_world_tutorial.ipynb | oskarflordal/tutorials |
First, download an image from the internet to work with. | %%capture
!wget https://github.com/kornia/data/raw/main/arturito.jpg | _____no_output_____ | Apache-2.0 | source/hello_world_tutorial.ipynb | oskarflordal/tutorials |
Load an image with OpenCV

We can use OpenCV to load an image. By default, OpenCV loads images in BGR format and casts them to a `numpy.ndarray` with the data layout `(H,W,C)`. However, because matplotlib displays images in RGB format, we need to convert the image from BGR to RGB so that it is displayed properly. | img_bgr: np.array = cv2.imread('arturito.jpg') # HxWxC / np.uint8
img_rgb: np.array = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)
plt.imshow(img_rgb); plt.axis('off'); | _____no_output_____ | Apache-2.0 | source/hello_world_tutorial.ipynb | oskarflordal/tutorials |
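As a small sanity check (an addition, not part of the original tutorial), the BGR to RGB conversion is just a reversal of the channel axis, which we can verify with NumPy:

```python
# cv2.COLOR_BGR2RGB only swaps the channel order, so reversing the last axis gives the same array
print(np.array_equal(img_rgb, img_bgr[..., ::-1]))  # True
```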
Load an image with Torchvision

Images can also be loaded using `torchvision`, which directly returns the image as a `torch.Tensor` in the shape `(C,H,W)`. | x_rgb: torch.tensor = torchvision.io.read_image('arturito.jpg') # CxHxW / torch.uint8
x_rgb = x_rgb.unsqueeze(0) # BxCxHxW
print(x_rgb.shape) | torch.Size([1, 3, 144, 256])
| Apache-2.0 | source/hello_world_tutorial.ipynb | oskarflordal/tutorials |
Load an image with Kornia

With Kornia we can do all of the preceding. We have a couple of utilities to cast the image to a `torch.Tensor`, to make it compliant with the other Kornia components, and to arrange the data in `(B,C,H,W)`. The utility is [`kornia.image_to_tensor`](https://kornia.readthedocs.io/en/latest/utils.html#kornia.utils.image_to_tensor), which casts a `numpy.ndarray` to a `torch.Tensor` and permutes the channels to leave the image ready to be used with any other PyTorch or Kornia component. The image is cast into a 4D `torch.Tensor` with zero-copy. | x_bgr: torch.tensor = K.image_to_tensor(img_bgr) # CxHxW / torch.uint8
x_bgr = x_bgr.unsqueeze(0) # 1xCxHxW
print(f"convert from '{img_bgr.shape}' to '{x_bgr.shape}'") | convert from '(144, 256, 3)' to 'torch.Size([1, 3, 144, 256])'
| Apache-2.0 | source/hello_world_tutorial.ipynb | oskarflordal/tutorials |
We can convert from BGR to RGB with a [`kornia.color`](https://kornia.readthedocs.io/en/latest/color.html) component. | x_rgb: torch.tensor = K.color.bgr_to_rgb(x_bgr) # 1xCxHxW / torch.uint8 | _____no_output_____ | Apache-2.0 | source/hello_world_tutorial.ipynb | oskarflordal/tutorials |
Visualize an image with Matplotlib

We will use [Matplotlib](https://matplotlib.org/) for the visualisation inside the notebook. Matplotlib requires a `numpy.ndarray` in the `(H,W,C)` format, so we go back with [`kornia.tensor_to_image`](https://kornia.readthedocs.io/en/latest/utils.html#kornia.utils.image_to_tensor), which will convert the image to the correct format. | img_bgr: np.array = K.tensor_to_image(x_bgr)
img_rgb: np.array = K.tensor_to_image(x_rgb) | _____no_output_____ | Apache-2.0 | source/hello_world_tutorial.ipynb | oskarflordal/tutorials |
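To make the two conversion utilities concrete, a quick round trip (an illustrative addition, using only the functions already imported above) shows the layouts involved:

```python
# numpy (H,W,C) -> torch (1,C,H,W) -> numpy (H,W,C); shapes should match the original image
x_tmp = K.image_to_tensor(img_rgb).unsqueeze(0)
img_back: np.array = K.tensor_to_image(x_tmp)
print(img_rgb.shape, x_tmp.shape, img_back.shape)
```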
Create a subplot to visualize the original and a modified image | fig, axs = plt.subplots(1, 2, figsize=(32, 16))
axs = axs.ravel()
axs[0].axis('off')
axs[0].imshow(img_rgb)
axs[1].axis('off')
axs[1].imshow(img_bgr)
plt.show() | _____no_output_____ | Apache-2.0 | source/hello_world_tutorial.ipynb | oskarflordal/tutorials |
Data cleaning

Goal

In this notebook, we will be taking in raw.csv and cleaning/parsing its different columns. The notebook contains the transformations below in order:

1. Read in the data
2. Removing unused columns for this analysis
3. Removing rows with certain null columns
4. Cleaning of columns
   * ad_age
   * ad_impressions
   * ad_clicks
   * ad_creation_date
   * ad_end_date
   * ad_targeting_interests
   * ad_targeting_people_who_match
5. Writing to file
6. Summary of lost rows

We first import 3 packages which will be useful for our data cleaning:

* pandas for handling csv as a table
* numpy to handle multidimensional array operations
* re to handle regular expression parsing | import pandas as pd
import numpy as np
import re
# We read in the data
ads_df = pd.read_csv('../raw_data/raw.csv')
# Output first 2 rows
ads_df.head(2) | _____no_output_____ | MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
Removing unused columns

Our first step will be to remove columns we will not be using for this analysis.

| Column name | Reason for removal |
|-------------|--------------------|
| Unnamed | Index column added by accident in the data production step. |
| ad_id | We will be using the file_name column as identifier. |
| ad_text | Although interesting its analysis is outside the scope of this project |
| ad_landing_page | This will not be useful in answering our research questions. |
| ad_targeting_location | We will not be studying the impact of location for the ads |
| ad_targeting_custom_audience | This field doesn't contain information not already in the ad_targeting_interests column |
| ad_targeting_language | This field is almost always US English and is not present on most of the dataset |
| ad_targeting_placements | We will not be studying the impact of location on the page for the ads |
 | # Columns we will not be using
columns_to_remove = ['Unnamed: 0', 'ad_id', 'ad_text', 'ad_landing_page', 'ad_targeting_location', 'ad_targeting_custom_audience', 'ad_targeting_excluded_connections', 'ad_targeting_language', 'ad_targeting_placements']
ads_df = ads_df.drop(columns=columns_to_remove)
ads_df.head(2) | _____no_output_____ | MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
Removing rows with null columns

We will be removing rows with null for: ad_creation_date, ad_spend, ad_targeting_age, ad_impressions and ad_clicks. Our first step will be to create a dictionary which can keep track of the number of rows remaining after a given operation. We will be using this dictionary when summarizing the cleaning of the dataset and its repercussions on our analysis. We then create a function which removes the null values for a column name and adds the row count after removal to our summary dictionary. | # Dictionary to keep track of row removals
cleaning_summary_format = {'before_cleaning_count': len(ads_df)}
# Function to remove null rows for a given column
def remove_nulls(ads_df, column_name):
# np.where returns a tuple, we want the first member (the indexes of rows)
null_indexes = np.where(pd.isnull(ads_df[column_name]))[0]
# We drop the list of indexes
ads_df = ads_df.drop(null_indexes)
# We add the entry to our summary
cleaning_summary_format['null_' + column_name + '_count'] = len(ads_df)
return ads_df
# We remove nulls for given columns
ads_df = remove_nulls(ads_df, 'ad_creation_date')
ads_df = remove_nulls(ads_df, 'ad_spend')
ads_df = remove_nulls(ads_df, 'ad_targeting_age')
ads_df = remove_nulls(ads_df, 'ad_impressions')
ads_df = remove_nulls(ads_df, 'ad_clicks')
print('''Before cleaning our dataset had {before_cleaning_count} rows.
After removing rows with null creation dates: {null_ad_creation_date_count} rows.
After removing rows with null ad spending: {null_ad_spend_count} rows.
After removing rows with null ad targeting age: {null_ad_targeting_age_count} rows.
After removing rows with null ad impressions: {null_ad_impressions_count} rows.
After removing rows with null ad clicks: {null_ad_clicks_count} rows.'''.format(**cleaning_summary_format)) | Before cleaning our dataset had 3517 rows.
After removing rows with null creation dates: 3497 rows.
After removing rows with null ad spending: 3497 rows.
After removing rows with null ad targeting age: 3497 rows.
After removing rows with null ad impressions: 3497 rows.
After removing rows with null ad clicks: 3497 rows.
| MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
Cleaning ad_age

First we look at the values for the field and whether we will be able to leverage them in our analysis. | ads_df.ad_targeting_age.value_counts().index | _____no_output_____ | MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
The initial parsing for this field was not perfect... Let's simplify this bucketing by removing gender information. To do so we crop the string at 8 characters. | # Crop the ad_targeting_age to 8 characters
ads_df.ad_targeting_age = ads_df.ad_targeting_age.apply(lambda s: s if len(s)<=8 else s[0:8])
# Count rows for the different values
count_table = ads_df.ad_targeting_age.value_counts().to_frame()
# Rename the column for clarity
count_table.columns = ['Ad count']
# Output top 5
count_table.head(5) | _____no_output_____ | MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
As per this table, almost all ads targeted voting-age Facebook users (18+). Bucketing the ads by age groups will not result in significant/interesting analysis. We drop the column. | ads_df = ads_df.drop(columns=['ad_targeting_age']) | _____no_output_____ | MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
Cleaning ad_impressions and ad_clicks

Both these columns are numerical and do not contain None or NaN values. [The Oxford study](https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2018/12/IRA-Report.pdf) mentions that ads without impressions or clicks were unlikely to have been shown to Facebook users.

* We will first be parsing these fields, which sometimes use . or , to separate thousands.
* We will then remove 0 values.

The studies mention that this removed quite a few entries. | # Parsing of string to integer
def format_string_to_integer(string):
# Removing dots and commas and semicolons
s = string.replace(',', '').replace('.', '').replace(';', '')
# Removing typos between o, O (lower, upper letter o) and 0 (zero digit)
s = s.replace('o', '0').replace('O', '0')
# Accidental whitespace
s = s.replace(' ', '')
# Coerce string to integer
return int(s)
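# For example (hypothetical inputs, not dataset values):
#   format_string_to_integer('1,234') -> 1234
#   format_string_to_integer('1o0')   -> 100   (stray 'o'/'O' treated as zero digits)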
# Remove rows with 0s and add the count after removal to the summary
def remove_zeros(ads_df, column_name):
ads_df = ads_df[ads_df[column_name] != 0]
cleaning_summary_format['zeros_' + column_name + '_count'] = len(ads_df)
return ads_df
# How many columns do we have before removing zeros
cleaning_summary_format['before_zeros_count'] = len(ads_df)
# Conversion to integers
ads_df['ad_clicks'] = ads_df['ad_clicks'].apply(format_string_to_integer)
ads_df['ad_impressions'] = ads_df['ad_impressions'].apply(format_string_to_integer)
# Removing zeros values
ads_df = remove_zeros(ads_df, 'ad_impressions')
ads_df = remove_zeros(ads_df, 'ad_clicks')
# Reporting
print('''Before removing 0 ad_impressions or ad_clicks our dataset had {before_zeros_count} rows.
After removing rows with 0 ad impressions: {zeros_ad_impressions_count} rows.
After removing rows with 0 ad clicks: {zeros_ad_clicks_count} rows.'''.format(**cleaning_summary_format)) | Before removing 0 ad_impressions or ad_clicks our dataset had 3497 rows.
After removing rows with 0 ad impressions: 2588 rows.
After removing rows with 0 ad clicks: 2450 rows.
| MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
Parsing creation date and end date

Creation date and end date are written in a complex format: 04/13/16 07:48:33 AM PDT. Our analysis only requires the date. In this section, we will extract the first 8 characters mm/dd/yy and convert them to a datetime object. We take a look at the entries: | ads_df['ad_creation_date'] | _____no_output_____ | MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
We find that sometimes the first few characters contain spaces. We write a regular expression for this and remove this whitespace as part of a function. We also need to complete the year to be 4 characters for later date parsing. | # We first compile our date extraction regex to improve performance
date_regex = re.compile(r'(?P<date>\d\s*\d\s*\/\s*\d\s*\d\s*\/\s*\d\s*\d)')
# Given a string beginning with mm/dd/yy we produce mm/dd/YYYY
# Function returns 'parse_error' on failure to parse and null if the
# input string was null
def extract_date_from_string(string):
matches = None
date = None
# If the string is not null attempt to find matches
if not pd.isnull(string):
matches = date_regex.search(string)
else:
# null value for string in pandas
date = np.nan
# If the ?P<date> group was found
if matches and matches.groupdict():
group_dict = matches.groupdict()
date = group_dict.get('date')
if date:
# Remove whitespace
date = date.replace(' ', '')
# We prefix '20' to the year to make 01/01/17 -> 01/01/2017
date = date[:6] + '20' + date[6:]
# We identify parsing errors with the 'parse_error'
return date if date else 'parse_error' | _____no_output_____ | MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
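To see what the helper does, here is a quick illustration on a couple of made-up strings (the first mirrors the 04/13/16 07:48:33 AM PDT format mentioned above; these calls are an added example, not cells from the original notebook):

```python
# Illustrative inputs only
print(extract_date_from_string('04/13/16 07:48:33 AM PDT'))   # -> 04/13/2016
print(extract_date_from_string('0 4/13/16 07:48:33 AM PDT'))  # -> 04/13/2016 (embedded space removed)
print(extract_date_from_string('no date here'))               # -> parse_error
```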
We apply the function to every row and create a new column: 'ad_creation_date_parsed' | ads_df['ad_creation_date_parsed'] = ads_df.ad_creation_date.apply(extract_date_from_string) | _____no_output_____ | MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
We check how many dates could not be parsed: | (ads_df['ad_creation_date_parsed'] == 'parse_error').sum() | _____no_output_____ | MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
Since only one date could not be parsed we validate its value: | row = ads_df[ads_df['ad_creation_date_parsed'] == 'parse_error']
row | _____no_output_____ | MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
In this case the date should be 02/21/2017. An l was mistaken for a 1. We replace it manually. | ads_df.loc[row.index, 'ad_creation_date_parsed'] = '02/21/2017'
ads_df.loc[row.index] | _____no_output_____ | MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
Now that all dates have been parsed, we replace the original column with the parsed one and remove the temporary parsed column. Since no columns were lost in the process we will not be adding an entry to the summary. | # Replace original column with parsed
ads_df['ad_creation_date'] = ads_df['ad_creation_date_parsed']
# Drop temporary parsed column
ads_df = ads_df.drop(columns=['ad_creation_date_parsed']) | _____no_output_____ | MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
We now execute the same steps for the end date. | ads_df['ad_end_date_parsed'] = ads_df.ad_end_date.apply(extract_date_from_string) | _____no_output_____ | MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
We check how many dates could not be parsed: | (ads_df['ad_end_date_parsed'] == 'parse_error').sum()
ads_df['ad_end_date'] = ads_df['ad_end_date_parsed']
ads_df = ads_df.drop(columns=['ad_end_date_parsed']) | _____no_output_____ | MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
Now that both ad_creation_date and ad_end_date are properly parsed strings, we can apply a pandas function to transform them into datetime objects. This will make date handling easier during our analysis. | ads_df.ad_creation_date = ads_df.ad_creation_date.apply(lambda date_string : pd.to_datetime(date_string, format='%m/%d/%Y'))
ads_df.ad_end_date = ads_df.ad_end_date.apply(lambda date_string : pd.to_datetime(date_string, format='%m/%d/%Y'))
# Output first 3 rows
ads_df.head(3) | _____no_output_____ | MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
Parsing ad_spend

Sometimes the ad_spend field contains spaces, dots instead of commas to separate thousands, and the 'RUB' currency shorthand. We use a regular expression to extract the amount from the ad_spend field. We then convert the string to a float. | ads_df['ad_spend']
# Pre compile regex for performance
amount_regex = re.compile(r'(?P<amount>([0-9]{1,3}(\.|,)?)+(\.|,)?[0-9]{2})')
# Function returns 'parse_error' on failure to parse and null if the
# input string was null or the string 'None'
def extract_amount_from_string(string):
matches = None
amount = None
# If the string is not null or 'None' search for matches
if not pd.isnull(string) and string != 'None':
matches = amount_regex.search(string)
else:
# null value for string in pandas
amount = np.nan
# If the amount was found
if matches and matches.groupdict():
group_dict = matches.groupdict()
amount = group_dict.get('amount')
if amount:
# Remove whitespace
amount = amount.replace(' ', '')
# Remove dots and commas
amount = amount.replace('.', '').replace(',', '')
# Add a dot two digits form the end
amount = amount[:-2] + '.' + amount[-2:]
# Return a parse_error if amount parsing failed
return amount if amount else 'parse_error' | _____no_output_____ | MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
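Again, a brief illustration of the helper on made-up values (added here as an example; the exact strings are hypothetical, not rows from the dataset):

```python
# Illustrative inputs only
print(extract_amount_from_string('1,000.50 RUB'))  # -> 1000.50
print(extract_amount_from_string('10.00'))         # -> 10.00
print(extract_amount_from_string('None'))          # -> nan
```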
We run the function over our dataset and output the number of parsing errors we've encountered. | ads_df['ad_spend_parsed'] = ads_df.ad_spend.apply(extract_amount_from_string)
(ads_df['ad_spend_parsed'] == 'parse_error').sum() | _____no_output_____ | MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
We validate nan values and remove them from the dataset. | cleaning_summary_format['none_ad_spend_count'] = (~pd.isnull(ads_df['ad_spend_parsed'])).sum()
print('There are a total of ' + str(pd.isnull(ads_df['ad_spend_parsed']).sum()) + ' nan values.')
ads_df[pd.isnull(ads_df['ad_spend_parsed'])]
# Remove nulls
ads_df = ads_df[~pd.isnull(ads_df['ad_spend_parsed'])]
# Replace ad_spend with the parse column
ads_df['ad_spend'] = ads_df['ad_spend_parsed']
# Drop the parsed column
ads_df = ads_df.drop(columns=['ad_spend_parsed']) | _____no_output_____ | MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
We transform the ad_spend field from string into a float. | ads_df['ad_spend'] = ads_df['ad_spend'].astype(float) | _____no_output_____ | MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
We validate that all values are positive and remove other values after validation. | print('There are ' + str((ads_df['ad_spend'] > 0).sum()) + ' positive values and a total of ' + str(len(ads_df)) + ' entries.')
cleaning_summary_format['non_positive_ad_spend_count'] = (ads_df['ad_spend'] > 0).sum()
ads_df[ads_df['ad_spend'] <= 0] | _____no_output_____ | MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
We remove the two entries with values equal to zero and print out the summary. | ads_df = ads_df[ads_df['ad_spend'] > 0]
cleaning_summary_format['before_ad_spend_count'] = cleaning_summary_format['zeros_ad_clicks_count']
# Reporting
print('''Before formatting ad_spend our dataset had {before_ad_spend_count} rows.
After removing rows with 'None': {none_ad_spend_count} rows.
After removing rows with non-positive ad spend: {non_positive_ad_spend_count} rows.'''.format(**cleaning_summary_format)) | Before formatting ad_spend our dataset had 2450 rows.
After removing rows with 'None': 2442 rows.
After removing rows with non-positive ad spend: 2440 rows.
| MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
Parsing ad_targeting_interests & ad_targeting_people_who_match

The ad_targeting_interests column is split between its own column and a portion of the ad_targeting_people_who_match column's string. To make treatment of this column simpler, our first step will be to extract the 'interest' portion of ad_targeting_people_who_match. We will then parse the ad_targeting_interests column and combine the result. First we take a look at ad_targeting_people_who_match for entries with and without 'Interests'. We will investigate those without 'Interests' first. | count_null = 0
count_interests = 0
count_other = 0
for s in ads_df['ad_targeting_people_who_match']:
if pd.isnull(s):
count_null += 1
elif 'Interests' in s:
count_interests += 1
else:
count_other +=1
print(s)
print('Null:' + str(count_null) +
' Interests: ' + str(count_interests) +
' Other: ' + str(count_other) +
' Total: ' + str(count_null + count_interests + count_other)) | Null:822 Interests: 1243 Other: 375 Total: 2440
| MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
From this printout, we see that although some rows belonging to the "Other" category have the 'interests' field missing, we can grab a proxy by taking the "like" groups. We can grab the value of the "like" groups correctly by taking the string after 'Friends of people who are connected to'. We've created the function treat_string_with_friends below to treat these entries. To treat the rows with 'Interests', we create the function treat_string_with_interest. After looking at a few of the raw files, we see that ad_targeting_people_who_match sometimes contains other fields. To identify rows which had additional fields, we looked for the number of ':' characters; we then identified patterns in those strings that didn't match the interests field. These patterns are used in the treat_string_with_interest function below. The crop_to_interest function was created to dynamically use the appropriate method depending on whether interests are present or not. | # Utility function to crop everything after a given word
def crop_everything_after(string, contains):
return string[:string.index(contains)] if contains in string else string
# Returns a string containing everything after 'Friends of people who are connected to '
def treat_string_with_friends(string):
friends_string = 'Friends of people who are connected to '
start = string.index(friends_string)
return string[start+len(friends_string):]
# Returns a string containing everything in the Interests: marker, but nothing in the other markers (see the crop_after variable)
def treat_string_with_interest(string):
# Crop everything before 'Interests'
string = string[string.index('Interests'):]
# Strings identified by visual inspections of entries
crop_after = [
'And Must Also Match',
'School:',
'Behaviors:',
'expansion:',
'Job title:',
'Multicultural Affinity:',
'Politics:',
'Employers:',
'Field of study:'
]
for to_crop in crop_after:
string = crop_everything_after(string, to_crop)
# Finally this substring had a typo
if 'Stop Racism!:.' in string:
string = string.replace('Stop Racism!:.', 'Stop Racism!!,')
return string
# If Interests is part of the string use the interest
# method otherwise use the crop to like method.
def crop_to_interest(string):
if not pd.isnull(string):
if 'Interests' in string:
string = treat_string_with_interest(string)
elif 'Friends of people who are connected to ' in string:
string = treat_string_with_friends(string)
else:
# pd.isnull or does not contain interests nor likes
string = np.nan
return string
ads_df['ad_targeting_people_who_match'] = ads_df['ad_targeting_people_who_match'].apply(crop_to_interest) | _____no_output_____ | MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
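To make the cropping behaviour easier to follow, here is an added illustration on clearly hypothetical strings (not actual ad rows):

```python
# Hypothetical examples only
print(crop_to_interest('Interests: Music School: Example High'))
# -> 'Interests: Music ' (everything from 'School:' onward is cropped)
print(crop_to_interest('Friends of people who are connected to Example Page'))
# -> 'Example Page'
print(crop_to_interest('no recognisable markers'))
# -> nan
```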
During this operation we lost a few rows that could not be parsed as they did not contain interests. | cleaning_summary_format['null_people_who_match_count'] = pd.isnull(ads_df['ad_targeting_people_who_match']).sum() - count_null
print(str(cleaning_summary_format['null_people_who_match_count']) + ' rows were lost.') | 29 rows were lost.
| MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
The last cleaning step for this field is to remove the 'Interests' keyword which is sometimes followed by a colon. We use a regular expression to replace this string. | interests_regex = re.compile(r'Interests\s*:?')
def remove_interests_marker(string):
if not pd.isnull(string):
string = interests_regex.sub('', string)
return string
ads_df['ad_targeting_people_who_match'] = ads_df['ad_targeting_people_who_match'].apply(remove_interests_marker)
ads_df.head(3) | _____no_output_____ | MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
We now do the same exercise with ad_targeting_interests. We first identify non-null rows that may contain an extra field. We do so by looking for the ':' character and printing out these rows. | non_null_interests = ads_df[~pd.isnull(ads_df['ad_targeting_interests'])]['ad_targeting_interests']
for row_with_colon in non_null_interests[non_null_interests.str.contains(':')]:
print(row_with_colon) | BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
Humanitarianism, Human rights or Humanitarian aid Behaviors: African American (US)
Black Power Behaviors: Multicultural Affinity: African American (US)
Human rights or Malcolm X Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
Humanitarianism, Human rights or Humanitarian aid Behaviors: African American (US)
Muslims Are Not Terrorists. Islamism or Muslim Brotherhood Connections: People who like United Muslims of America
History Politics: US politics (conservative)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
HuffPost Black Voices Behaviors: African American (US)
Veterans, United States Department of Veterans Affairs, Disabled American Veterans or Supporting Our Veterans Home Composition: Veterans in home
TV talkshows or Black (Color) Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
The Second Amendment, AR-15, 2nd Amendment or Guns & Ammo Politics: US politics (conservative)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
Tax Behaviors: African American (US)
Homeless shelter Politics: US politics (liberal) or US politics (moderate)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
Human rights or Malcolm X Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
Human rights, Police, Police officer or Order of Merit of the Police Forces Behaviors: Multicultural Affinity: Hispanic(US -All), Multicultural Affinity: Hispanic(US - English dominant) or Multicultural Affinity: African American (US)
Muslims Are Not Terrorists, Islamism or Muslim Brotherhood Connections: People who like United Muslims of America
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
Sports Behaviors: Facebook access (mobile): all mobile devices Job title: Combat medic, ???????????, Mercenary, Polisi militer, Engenharia militar or Soldado Generation: Millennials
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
Fitness and wellness, Sports and outdoors or Family and relationships Behaviors: Facebook access (mobile): smartphones and tablets Generation: Millennials
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
The Second Amendment, AR-15, Protect the Second Amendment, 2nd Amendment or Guns & Ammo Politics: US politics (conservative)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
BlackNews.com or HuffPost Black Voices Behaviors: African American (US)
Cop Block Behaviors: African American (US)
| MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
Most of these rows seem to contain an additional field, "Behaviors"; we will remove it from the rows as we can easily use the interest part to identify the targeted demographic. | # Removes additional section part of the ad_targeting_interests string
def treat_interest(string):
if not pd.isnull(string):
# Strings identified by visual inspections of entries
crop_after = [
'And Must Also Match',
'School:',
'Behaviors:',
'expansion:',
'Job title:',
'Multicultural Affinity:',
'Politics:',
'Employers:',
'Field of study:',
'Connections:',
'Home Composition:'
]
for to_crop in crop_after:
string = crop_everything_after(string, to_crop)
else:
# pd.isnull value for strings
string = np.nan
return string
ads_df['ad_targeting_interests'] = ads_df['ad_targeting_interests'].apply(treat_interest)
non_null_interests = ads_df[~pd.isnull(ads_df['ad_targeting_interests'])]['ad_targeting_interests']
print('After treatment, there are ' + str(non_null_interests[non_null_interests.str.contains(':')].count()) +' rows with more than one field.') | After treatment, there are 0 rows with more than one field.
| MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
Now that both ad_targeting_interests and ad_targeting_people_who_match have been cleaned, we can merge the two columns into one. First let's verify that there are no rows where both columns are non-null, and check how many rows have both columns null. | def interests_both_null(row):
return (pd.isnull(row.ad_targeting_interests) and pd.isnull(row.ad_targeting_people_who_match))
def interests_both_non_null(row):
return (not pd.isnull(row.ad_targeting_interests) and not pd.isnull(row.ad_targeting_people_who_match))
# How many rows have both columns as null
both_null_count = ads_df.apply(interests_both_null, axis=1).sum()
cleaning_summary_format['null_all_interests_columns_count'] = both_null_count
# How many rows have both columns populated
non_null_count = ads_df.apply(interests_both_non_null, axis=1).sum()
print('We have a total of ' + str(both_null_count) + ' rows with both columns null and a total of ' + str(non_null_count) + ' rows which have both values set.') | We have a total of 214 rows with both columns null and a total of 0 rows which have both values set.
| MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
We drop rows that do not contain interests information in both columns. We will merge the other rows by replacing the values of ad_targeting_interests with ad_targeting_people_who_match. | def merge_interests(row):
return row.ad_targeting_interests if not pd.isnull(row.ad_targeting_interests) else row.ad_targeting_people_who_match
# Merge interests
ads_df['ad_targeting_interests'] = ads_df.apply(merge_interests, axis=1)
# Drop 'ad_targeting_people_who_match'
ads_df = ads_df.drop(columns=['ad_targeting_people_who_match'])
# Drop null columns
ads_df = ads_df[(~pd.isnull(ads_df['ad_targeting_interests']))] | _____no_output_____ | MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
Writing to file | ads_df.head(3)
ads_df.to_csv('../clean_data/clean_data.csv', index=None, header=True) | _____no_output_____ | MIT | src/data_cleaning.ipynb | ALotOfData/data-512-a5 |
Initialize the framework

Import torch libraries and try to use the GPU device (if available) | import torch
from torch import nn
import random
# Try to use GPU device
device = "cuda" if torch.cuda.is_available() else "cpu"
print("Using %s device" %(device)) | Using cpu device
| MIT | Lyrics_generator.ipynb | NLP-Lyrics-Team/nlp-lyrics |
Mount Google Drive to load
* the lyrics dataset,
* the word2vec pretrained embedding dictionaries,
* the one hot encoding dictionary for the genres,
* the lyrics generator neural network | from google.colab import drive
drive.mount("/content/drive") | Mounted at /content/drive
| MIT | Lyrics_generator.ipynb | NLP-Lyrics-Team/nlp-lyrics |
Load the dictionaries to convert words to indices and vice versa | #import pickle
import json
FILENAME_W2I = '/content/drive/MyDrive/DM project - NLP lyrics generation/Dictionaries/words2indices'
FILENAME_I2W = '/content/drive/MyDrive/DM project - NLP lyrics generation/Dictionaries/indices2words'
# Load a dictionary from a stored file converting keys to integers if needed
def load_dictionary(filename, convert_keys=False):
#with open(filename + ".pkl", "rb") as f:
# return pickle.load(f)
with open(filename + ".json", "r") as f:
d = json.load(f)
if convert_keys:
dd = {}
for key, value in d.items():
dd[int(key)] = value
return dd
return d
words2indices = load_dictionary(FILENAME_W2I)
indices2words = load_dictionary(FILENAME_I2W, convert_keys=True) # JSON stores keys as strings while here we expect integers | _____no_output_____ | MIT | Lyrics_generator.ipynb | NLP-Lyrics-Team/nlp-lyrics |
Load the word vectors tensor (word2vec embedding) | FILENAME = '/content/drive/MyDrive/DM project - NLP lyrics generation/Dictionaries/word_vectors.pt'
word_vectors = torch.load(FILENAME, map_location=device) | _____no_output_____ | MIT | Lyrics_generator.ipynb | NLP-Lyrics-Team/nlp-lyrics |
Load the one hot encoding dictionary for the genres | FILENAME = '/content/drive/MyDrive/DM project - NLP lyrics generation/Dictionaries/one_hot_encoding_genres'
one_hot_encoding_genres = load_dictionary(FILENAME)
NUMBER_GENRES = len(one_hot_encoding_genres) | _____no_output_____ | MIT | Lyrics_generator.ipynb | NLP-Lyrics-Team/nlp-lyrics |
Define vocabulary functions | # Get word from index
def get_word_from_index(idx):
# Use get to automatically return None if the index is not present in the dictionary
return indices2words.get(idx)
# Get index from word
def get_index_from_word(word):
# Use get to automatically return None if the word is not present in the dictionary
return words2indices.get(word)
# Get word vector from word
def get_word_vector(word):
idx = get_index_from_word(word)
return word_vectors[idx] if idx != None else None | _____no_output_____ | MIT | Lyrics_generator.ipynb | NLP-Lyrics-Team/nlp-lyrics |
Define the generator neural network | class Generator(nn.Module):
def __init__(
self,
word_vectors: torch.Tensor,
lstm_hidden_size: int,
dense_size: int,
vocab_size: int
):
super().__init__()
# Embedding layer
self.embedding = torch.nn.Embedding.from_pretrained(word_vectors)
# Recurrent layer (LSTM)
self.rnn = torch.nn.LSTM(input_size=word_vectors.size(1), hidden_size=lstm_hidden_size, num_layers=1, batch_first=True)
# Dense layer
self.lin1 = torch.nn.Linear(dense_size, vocab_size) # Legacy
#self.dense = torch.nn.Linear(dense_size, vocab_size)
#torch.nn.init.uniform_(self.dense.weight)
# Dropout function
self.dropout = nn.Dropout(p=0.1)
# Loss function
self.loss = torch.nn.CrossEntropyLoss()
self.global_epoch = 0
def forward(self, x, y=None, states=None):
# Split input in lyrics and genre
lyrics = x[0]
genres = x[1]
# Embedding words from indices
out = self.embedding(lyrics)
# Recurrent layer
out, states = self.rnn(out, states)
# Duplicate the genre vector associated to a sequence for each word in the sequence
seq_length = lyrics.size()[1]
if seq_length > 1:
genres_duplicated = []
for tensor in genres:
duplicated = [list(tensor) for i in range(seq_length)]
genres_duplicated.append(duplicated)
genres = torch.tensor(genres_duplicated, device=device)
else:
# Just increment the genres vector dimension
genres = genres.unsqueeze(0)
# Concatenate the LSTM output with the encoding of genres
out = torch.cat((out, genres), dim=-1)
# Dense layer
out = self.lin1(out) # Legacy
#out = self.dense(out)
# Use the last prediction
logits = out[:, -1, :]
# Scale logits in [0,1] to avoid negative logits
logits = torch.softmax(logits, dim=-1)
# Max likelihood can return repeated sequences over and over.
# Sample from the multinomial probability distribution of 'logits' (after softmax).
# Return the index of the sample (one for each row of the input matrix)
# that corresponds to the index in the vocabulary as logits are calculated on the whole vocabulary
sampled_indices = torch.multinomial(logits, num_samples=1)
result = {'logits': logits, 'pred': sampled_indices, 'states': states}
if y is not None:
result['loss'] = self.loss(logits, y)
result['accuracy'] = self.accuracy(sampled_indices, y.unsqueeze(-1))
return result
def accuracy(self, pred, target):
return torch.sum(pred == target) / pred.size()[0] | _____no_output_____ | MIT | Lyrics_generator.ipynb | NLP-Lyrics-Team/nlp-lyrics |
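The comment in `forward` about sampling instead of taking the max likelihood deserves a tiny illustration (an added example, unrelated to the trained model): `torch.multinomial` draws an index according to the probabilities, so repeated calls do not always return the same word, whereas `argmax` always would.

```python
# Added illustration: argmax is deterministic, multinomial sampling is not
probs = torch.tensor([[0.7, 0.2, 0.1]])
print(torch.argmax(probs, dim=-1))               # always tensor([0])
print(torch.multinomial(probs, num_samples=1))   # usually 0, sometimes 1 or 2
```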
Load the generator model | PATH = '/content/drive/MyDrive/DM project - NLP lyrics generation/Models/generator_model.pt'
#PATH = '/content/drive/MyDrive/DM project - NLP lyrics generation/generator_model_GAN.pt'
gen = Generator(
word_vectors,
lstm_hidden_size=256,
dense_size=256+NUMBER_GENRES,
vocab_size=len(word_vectors))
checkpoint = torch.load(PATH, map_location=device)
gen.load_state_dict(checkpoint['model_state_dict'])
# Try to move the model on the GPU
if torch.cuda.is_available():
gen.cuda() | _____no_output_____ | MIT | Lyrics_generator.ipynb | NLP-Lyrics-Team/nlp-lyrics |
User input

Sort genres | genres = [key for key in one_hot_encoding_genres]
genres.sort() | _____no_output_____ | MIT | Lyrics_generator.ipynb | NLP-Lyrics-Team/nlp-lyrics |
Display input form | from ipywidgets import Layout, Box, Label, Dropdown, Text
print("Enter a word and a genre to generate a lyrics\n")
form_item_layout = Layout(
display='flex',
flex_flow='row',
justify_content='space-between'
)
word_widget = Text()
genres_widget = Dropdown(options=genres)
form_items = [
Box([Label(value='Word'), word_widget], layout=form_item_layout),
Box([Label(value='Genre'), genres_widget], layout=form_item_layout),
]
form = Box(form_items, layout=Layout(
display='flex',
flex_flow='column',
align_items='stretch',
width='22%'
))
form | Enter a word and a genre to generate a lyrics
| MIT | Lyrics_generator.ipynb | NLP-Lyrics-Team/nlp-lyrics |
Get user input | word = word_widget.value
genre = genres_widget.value
##@title # Insert a word and a genre to generate a lyrics
#word = "" #@param {type:"string", required: true}
#genre = "Country" #@param ["Country", "Electronic", "Folk", "Hip-Hop", "Indie", "Jazz", "Metal", "Pop", "Rock", "R&B"] | _____no_output_____ | MIT | Lyrics_generator.ipynb | NLP-Lyrics-Team/nlp-lyrics |
Preprocess the user input | # Split entered words on whitespaces to support also sequences of words
input_words = word.strip().split()
if not input_words:
raise ValueError("No word entered")
# Check if every input word is present in the vocabulary (or in lowercase form)
for word in input_words:
if word not in words2indices and word.lower() not in words2indices:
raise ValueError("The entered word is not valid") | _____no_output_____ | MIT | Lyrics_generator.ipynb | NLP-Lyrics-Team/nlp-lyrics |
Generate the lyrics | TEXT_LENGTH = 100 # Truncate the text when the goal text length has been generated (hard truncation)
LINES = random.randrange(10, 50) # Truncate the text when the goal lines number has been generated (soft truncation)
states = None
text = ""
prev_word = ""
lines = 0
generated_words = 0
word2capitalize = ["I", "I'm", "I'd"]
punctuation_subset = { '.', ',', ';', ':', '!', '?', ')', ']', '}', '$', '/', '…', '...', '..' }
# Iterate input words
for i in range(len(input_words)):
w = input_words[i]
# Check if the word is not present in the vocabulary in the current form
if w not in words2indices:
# Use the lowercase version (as it must be present in one of the two forms)
input_words[i] = w.lower()
# Check if this is the first word
if i == 0:
# Capitalize the first letter of the word
w = w[0].upper() + w[1:]
text = w
else:
text += ' ' + w
prev_word = w
# Copy user input words to allow generating multiple lyrics with the same input
input_words_ = input_words.copy()
# One hot encode the genre
input_genre = one_hot_encoding_genres[genre]
input_genre = torch.tensor(input_genre, device=device).unsqueeze(0)
def generate_next_word(input_words, states=None):
# Convert words to indices
indices = [get_index_from_word(w) for w in input_words]
indices = torch.tensor(indices, device=device).unsqueeze(0)
y = gen((indices, input_genre), states=states)
next_word_index = y['pred'].item()
#print("next_word_index:", next_word_index)
return get_word_from_index(next_word_index), y['states']
#for i in range(TEXT_LENGTH):
while lines < LINES:
# Generate next word
next_word, states = generate_next_word(input_words_, states)
# Append at the end removing the head
input_words_ = input_words_[1:]
input_words_.append(next_word)
#print("next word:", next_word)
# Check if next word must be capitalized in the output text
for word in word2capitalize:
if next_word == word.lower():
# Replace the generated word with the capitalized version
next_word = word
break
# Check if previous word is newline (i.e. the generated word belongs to a new line) or a dot
if prev_word == '\n' or prev_word == '.':
# Capitalize the first letter of the generated word
next_word = next_word[0].upper() + next_word[1:]
# Check if previous word is newline or a parenthesis or next word is newline or punctuation
if prev_word == '\n' or prev_word == '(' or next_word == '\n' or next_word in punctuation_subset:
if next_word == '\n':
# Update generated lines
lines += 1
# Check if the number of lines has been achieved
if lines == LINES:
break
# Add the generated word to the output text without prepending a space
text += next_word
else:
# Add the generated word to the output text prepending a space
text += ' ' + next_word
prev_word = next_word
generated_words += 1
print("Word:", input_words)
print("Genre:", genre)
print("\nlines:", LINES)
print("generated words:", generated_words)
print("\nLyrics:")
print(text) | Word: ['saturday']
Genre: Rock
lines: 27
generated words: 233
Lyrics:
Saturday night
In the midnight room for the street
She walked dressed to themselves
You should have took her away, in a call as she cried for me
I was given to: about your mom. Punisher's bad news
You living why a girl I told you before
Don't you think that's the world?
Black space, smells blue?
There was her name and way: to the father french, out of the albums
Bring your blood underneath your gut bare
If you move your house
Don't know me
No I don't really want I love you (just make you to catch me!)
She's in the zone
I don't need you, half souls, forgive me, or nothing
You could ever leave it
When all fired this ray right into town
I don't even dream, fuck, step in my limousine
I can swear your ways
And leave me the questions I will know you are ready
In the wonderful dream
I like my real strange skies would tell me to find that girl I laid to rest
On the telephone I just steal trouble
My whole loving life
Though it's over, I can't go back
| MIT | Lyrics_generator.ipynb | NLP-Lyrics-Team/nlp-lyrics |
core

> This is the module which provides core utilities | #hide
from nbdev.showdoc import *
#export
def say_hello():
return "Hello From Learnathon Module"
#export
def say_hello2():
return "This is a test for new function" | _____no_output_____ | Apache-2.0 | 00_core.ipynb | Rahuketu86/Learnathon |
SMA Percent Band

1. If the SPY closes above its upper band, buy.
2. If the SPY closes below its lower band, sell your long position.

Optimize: sma, percent band. | import datetime
import matplotlib.pyplot as plt
import pandas as pd
from talib.abstract import *
import pinkfish as pf
import strategy
# Format price data
pd.options.display.float_format = '{:0.2f}'.format
%matplotlib inline
# Set size of inline plots
'''note: rcParams can't be in same cell as import matplotlib
or %matplotlib inline
%matplotlib notebook: will lead to interactive plots embedded within
the notebook, you can zoom and resize the figure
%matplotlib inline: only draw static images in the notebook
'''
plt.rcParams["figure.figsize"] = (10, 7) | _____no_output_____ | MIT | examples/090.sma-percent-band/optimize.ipynb | alialamiidrissi/pinkfish |
Some global data | symbol = '^GSPC'
#symbol = 'SPY'
#symbol = 'ES=F'
#symbol = 'DIA'
#symbol = 'QQQ'
#symbol = 'IWM'
#symbol = 'TLT'
#symbol = 'GLD'
#symbol = 'AAPL'
#symbol = 'BBRY'
#symbol = 'GDX'
capital = 10000
start = datetime.datetime(1900, 1, 1)
#start = datetime.datetime(*pf.SP500_BEGIN)
end = datetime.datetime.now() | _____no_output_____ | MIT | examples/090.sma-percent-band/optimize.ipynb | alialamiidrissi/pinkfish |
Define Optimizations | # pick one
optimize_sma = True
optimize_band = False
# define SMAs ranges
if optimize_sma:
Xs = range(50, 525, 25)
Xs = [str(X) for X in Xs]
# define band ranges
elif optimize_band:
Xs = range(0, 100, 5)
Xs = [str(X) for X in Xs]
options = {
'use_adj' : True,
'use_cache' : True,
'sma' : 200,
'band' : 0.0
} | _____no_output_____ | MIT | examples/090.sma-percent-band/optimize.ipynb | alialamiidrissi/pinkfish |
Run Strategy | strategies = pd.Series(dtype=object)
for X in Xs:
print(X, end=" ")
if optimize_sma:
options['sma'] = int(X)
elif optimize_band:
options['band'] = int(X)/10
strategies[X] = strategy.Strategy(symbol, capital, start, end, options)
strategies[X].run() | 50 75 100 125 150 175 200 225 250 275 300 325 350 375 400 425 450 475 500 | MIT | examples/090.sma-percent-band/optimize.ipynb | alialamiidrissi/pinkfish |
Summarize results | metrics = ('annual_return_rate',
'max_closed_out_drawdown',
'annualized_return_over_max_drawdown',
'drawdown_recovery_period',
'expected_shortfall',
'best_month',
'worst_month',
'sharpe_ratio',
'sortino_ratio',
'monthly_std',
'pct_time_in_market',
'total_num_trades',
'pct_profitable_trades',
'avg_points')
df = pf.optimizer_summary(strategies, metrics)
df | _____no_output_____ | MIT | examples/090.sma-percent-band/optimize.ipynb | alialamiidrissi/pinkfish |
Bar graphs | pf.optimizer_plot_bar_graph(df, 'annual_return_rate')
pf.optimizer_plot_bar_graph(df, 'sharpe_ratio')
pf.optimizer_plot_bar_graph(df, 'max_closed_out_drawdown') | _____no_output_____ | MIT | examples/090.sma-percent-band/optimize.ipynb | alialamiidrissi/pinkfish |
Run Benchmark | s = strategies[Xs[0]]
benchmark = pf.Benchmark(symbol, capital, s.start, s.end)
benchmark.run() | _____no_output_____ | MIT | examples/090.sma-percent-band/optimize.ipynb | alialamiidrissi/pinkfish |
Equity curve | if optimize_sma : Y = '200'
elif optimize_band: Y = '30'
pf.plot_equity_curve(strategies[Y].dbal, benchmark=benchmark.dbal) | _____no_output_____ | MIT | examples/090.sma-percent-band/optimize.ipynb | alialamiidrissi/pinkfish |
sklearn-porter

Repository: [https://github.com/nok/sklearn-porter](https://github.com/nok/sklearn-porter)

MLPClassifier

Documentation: [sklearn.neural_network.MLPClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html) | import sys
sys.path.append('../../../../..') | _____no_output_____ | MIT | examples/estimator/classifier/MLPClassifier/js/basics_imported.pct.ipynb | karoka/sklearn-porter |
Load data | from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
iris_data = load_iris()
X = iris_data.data
y = iris_data.target
X = shuffle(X, random_state=0)
y = shuffle(y, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.4, random_state=5)
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape) | ((90, 4), (90,))
((60, 4), (60,))
| MIT | examples/estimator/classifier/MLPClassifier/js/basics_imported.pct.ipynb | karoka/sklearn-porter |
Train classifier | from sklearn.neural_network import MLPClassifier
clf = MLPClassifier(activation='relu', hidden_layer_sizes=50,
max_iter=500, alpha=1e-4, solver='sgd',
tol=1e-4, random_state=1, learning_rate_init=.1)
clf.fit(X_train, y_train) | _____no_output_____ | MIT | examples/estimator/classifier/MLPClassifier/js/basics_imported.pct.ipynb | karoka/sklearn-porter |
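Before transpiling, it can be useful to record the Python-side performance for later comparison with the JavaScript port. This snippet is an added sanity check, not part of the original example (the printed numbers depend on the trained model, so none are shown here):

```python
# Added sanity check: Python-side predictions to compare against the transpiled JS classifier
from sklearn.metrics import accuracy_score

y_pred = clf.predict(X_test)
print('Test accuracy: %.3f' % accuracy_score(y_test, y_pred))
print('First 5 predictions:', y_pred[:5])
```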
Transpile classifier | from sklearn_porter import Porter
porter = Porter(clf, language='js')
output = porter.export(export_data=True)
print(output) | if (typeof XMLHttpRequest === 'undefined') {
var XMLHttpRequest = require("xmlhttprequest").XMLHttpRequest;
}
var MLPClassifier = function(jsonFile) {
this.mdl = undefined;
var promise = new Promise(function(resolve, reject) {
var httpRequest = new XMLHttpRequest();
httpRequest.onreadystatechange = function() {
if (httpRequest.readyState === 4) {
if (httpRequest.status === 200) {
resolve(JSON.parse(httpRequest.responseText));
} else {
reject(new Error(httpRequest.status + ': ' + httpRequest.statusText));
}
}
};
httpRequest.open('GET', jsonFile, true);
httpRequest.send();
});
// Return max index:
var maxi = function(nums) {
var index = 0;
for (var i=0, l=nums.length; i < l; i++) {
index = nums[i] > nums[index] ? i : index;
}
return index;
};
// Compute the activation function:
var compute = function(activation, v) {
switch (activation) {
case 'LOGISTIC':
for (var i = 0, l = v.length; i < l; i++) {
v[i] = 1. / (1. + Math.exp(-v[i]));
}
break;
case 'RELU':
for (var i = 0, l = v.length; i < l; i++) {
v[i] = Math.max(0, v[i]);
}
break;
case 'TANH':
for (var i = 0, l = v.length; i < l; i++) {
v[i] = Math.tanh(v[i]);
}
break;
case 'SOFTMAX':
var max = Number.NEGATIVE_INFINITY;
for (var i = 0, l = v.length; i < l; i++) {
if (v[i] > max) {
max = v[i];
}
}
for (var i = 0, l = v.length; i < l; i++) {
v[i] = Math.exp(v[i] - max);
}
var sum = 0.0;
for (var i = 0, l = v.length; i < l; i++) {
sum += v[i];
}
for (var i = 0, l = v.length; i < l; i++) {
v[i] /= sum;
}
break;
}
return v;
};
this.predict = function(neurons) {
return new Promise(function(resolve, reject) {
promise.then(function(mdl) {
// Initialization:
if (typeof this.mdl === 'undefined') {
mdl.hidden_activation = mdl.hidden_activation.toUpperCase();
mdl.output_activation = mdl.output_activation.toUpperCase();
mdl.network = new Array(mdl.layers.length + 1);
for (var i = 0, l = mdl.layers.length; i < l; i++) {
mdl.network[i + 1] = new Array(mdl.layers[i]).fill(0.);
}
this.mdl = mdl;
}
// Feed forward:
this.mdl.network[0] = neurons;
for (var i = 0; i < this.mdl.network.length - 1; i++) {
for (var j = 0; j < this.mdl.network[i + 1].length; j++) {
for (var l = 0; l < this.mdl.network[i].length; l++) {
this.mdl.network[i + 1][j] += this.mdl.network[i][l] * this.mdl.weights[i][l][j];
}
this.mdl.network[i + 1][j] += this.mdl.bias[i][j];
}
if ((i + 1) < (this.mdl.network.length - 1)) {
this.mdl.network[i + 1] = compute(this.mdl.hidden_activation, this.mdl.network[i + 1]);
}
}
this.mdl.network[this.mdl.network.length - 1] = compute(this.mdl.output_activation, this.mdl.network[this.mdl.network.length - 1]);
// Return result:
if (this.mdl.network[this.mdl.network.length - 1].length == 1) {
if (this.mdl.network[this.mdl.network.length - 1][0] > .5) {
resolve(1);
}
resolve(0);
} else {
resolve(maxi(this.mdl.network[this.mdl.network.length - 1]));
}
}, function(error) {
reject(error);
});
});
};
};
if (typeof process !== 'undefined' && typeof process.argv !== 'undefined') {
if (process.argv[2].trim().endsWith('.json')) {
// Features:
var features = process.argv.slice(3);
// Parameters:
var json = process.argv[2];
// Estimator:
var clf = new MLPClassifier(json);
// Prediction:
clf.predict(features).then(function(prediction) {
console.log(prediction);
}, function(error) {
console.log(error);
});
}
}
| MIT | examples/estimator/classifier/MLPClassifier/js/basics_imported.pct.ipynb | karoka/sklearn-porter |
Run classification in JavaScript | # Save classifier:
# with open('MLPClassifier.js', 'w') as f:
# f.write(output)
# Check model data:
# $ cat data.json
# Run classification:
# if hash node 2/dev/null; then
# python -m SimpleHTTPServer 8877 & serve_pid=$!
# node MLPClassifier.js http://127.0.0.1:8877/data.json 1 2 3 4
# kill $serve_pid
# fi | _____no_output_____ | MIT | examples/estimator/classifier/MLPClassifier/js/basics_imported.pct.ipynb | karoka/sklearn-porter |
Core

> Basic functions used in the fastai library | # export
defaults = SimpleNamespace() | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |
Metaclasses | #export
class PrePostInitMeta(type):
"A metaclass that calls optional `__pre_init__` and `__post_init__` methods"
def __new__(cls, name, bases, dct):
x = super().__new__(cls, name, bases, dct)
def _pass(self, *args,**kwargs): pass
for o in ('__init__', '__pre_init__', '__post_init__'):
if not hasattr(x,o): setattr(x,o,_pass)
old_init = x.__init__
@functools.wraps(old_init)
def _init(self,*args,**kwargs):
self.__pre_init__()
old_init(self, *args,**kwargs)
self.__post_init__()
setattr(x, '__init__', _init)
return x
show_doc(PrePostInitMeta, title_level=3)
class _T(metaclass=PrePostInitMeta):
def __pre_init__(self): self.a = 0; assert self.a==0
def __init__(self): self.a += 1; assert self.a==1
def __post_init__(self): self.a += 1; assert self.a==2
t = _T()
t.a
#export
class BaseObj(metaclass=PrePostInitMeta):
"Base class that provides `PrePostInitMeta` metaclass to subclasses"
pass
class _T(BaseObj):
def __pre_init__(self): self.a = 0; assert self.a==0
def __init__(self): self.a += 1; assert self.a==1
def __post_init__(self): self.a += 1; assert self.a==2
t = _T()
t.a
#export
class NewChkMeta(PrePostInitMeta):
"Metaclass to avoid recreating object passed to constructor (plus all `PrePostInitMeta` functionality)"
def __new__(cls, name, bases, dct):
x = super().__new__(cls, name, bases, dct)
old_init,old_new = x.__init__,x.__new__
@functools.wraps(old_init)
def _new(cls, x=None, *args, **kwargs):
if x is not None and isinstance(x,cls):
x._newchk = 1
return x
res = old_new(cls)
res._newchk = 0
return res
@functools.wraps(old_init)
def _init(self,*args,**kwargs):
if self._newchk: return
old_init(self, *args, **kwargs)
x.__init__,x.__new__ = _init,_new
return x
class _T(metaclass=NewChkMeta):
"Testing"
def __init__(self, o=None): self.foo = getattr(o,'foo',0) + 1
class _T2():
def __init__(self, o): self.foo = getattr(o,'foo',0) + 1
t = _T(1)
test_eq(t.foo,1)
t2 = _T(t)
test_eq(t2.foo,1)
test_is(t,t2)
t = _T2(1)
test_eq(t.foo,1)
t2 = _T2(t)
test_eq(t2.foo,2)
test_eq(_T.__doc__, "Testing")
test_eq(str(inspect.signature(_T)), '(o=None)')
#export
class BypassNewMeta(type):
"Metaclass: casts `x` to this class, initializing with `_new_meta` if available"
def __call__(cls, x, *args, **kwargs):
if hasattr(cls, '_new_meta'): x = cls._new_meta(x, *args, **kwargs)
if cls!=x.__class__: x.__class__ = cls
return x
class T0: pass
class _T(T0, metaclass=BypassNewMeta): pass
t = T0()
t.a = 1
t2 = _T(t)
test_eq(type(t2), _T)
test_eq(t2.a,1) | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |
Foundational functions Decorators | #export
def patch_to(cls, as_prop=False):
"Decorator: add `f` to `cls`"
def _inner(f):
nf = copy(f)
        # `functools.update_wrapper` causes problems when the patched function is passed to `Pipeline`, so we copy the wrapper attributes over manually
for o in functools.WRAPPER_ASSIGNMENTS: setattr(nf, o, getattr(f,o))
nf.__qualname__ = f"{cls.__name__}.{f.__name__}"
setattr(cls, f.__name__, property(nf) if as_prop else nf)
return f
return _inner
class _T3(int): pass
@patch_to(_T3)
def func1(x, a:bool): return x+2
t = _T3(1)
test_eq(t.func1(1), 3)
#export
def patch(f):
"Decorator: add `f` to the first parameter's class (based on f's type annotations)"
cls = next(iter(f.__annotations__.values()))
return patch_to(cls)(f)
@patch
def func(x:_T3, a:bool):
"test"
return x+2
t = _T3(1)
test_eq(t.func(1), 3)
test_eq(t.func.__qualname__, '_T3.func')
#export
def patch_property(f):
"Decorator: add `f` as a property to the first parameter's class (based on f's type annotations)"
cls = next(iter(f.__annotations__.values()))
return patch_to(cls, as_prop=True)(f)
@patch_property
def prop(x:_T3): return x+1
t = _T3(1)
test_eq(t.prop, 2)
#export
def _mk_param(n,d=None): return inspect.Parameter(n, inspect.Parameter.KEYWORD_ONLY, default=d)
def test_sig(f, b): test_eq(str(inspect.signature(f)), b)
#export
def use_kwargs(names, keep=False):
"Decorator: replace `**kwargs` in signature with `names` params"
def _f(f):
sig = inspect.signature(f)
sigd = dict(sig.parameters)
k = sigd.pop('kwargs')
s2 = {n:_mk_param(n) for n in names if n not in sigd}
sigd.update(s2)
if keep: sigd['kwargs'] = k
f.__signature__ = sig.replace(parameters=sigd.values())
return f
return _f
@use_kwargs(['y', 'z'])
def foo(a, b=1, **kwargs): pass
test_sig(foo, '(a, b=1, *, y=None, z=None)')
@use_kwargs(['y', 'z'], keep=True)
def foo(a, *args, b=1, **kwargs): pass
test_sig(foo, '(a, *args, b=1, y=None, z=None, **kwargs)')
#export
def delegates(to=None, keep=False):
"Decorator: replace `**kwargs` in signature with params from `to`"
def _f(f):
if to is None: to_f,from_f = f.__base__.__init__,f.__init__
else: to_f,from_f = to,f
sig = inspect.signature(from_f)
sigd = dict(sig.parameters)
k = sigd.pop('kwargs')
s2 = {k:v for k,v in inspect.signature(to_f).parameters.items()
if v.default != inspect.Parameter.empty and k not in sigd}
sigd.update(s2)
if keep: sigd['kwargs'] = k
from_f.__signature__ = sig.replace(parameters=sigd.values())
return f
return _f
def basefoo(e, c=2): pass
@delegates(basefoo)
def foo(a, b=1, **kwargs): pass
test_sig(foo, '(a, b=1, c=2)')
@delegates(basefoo, keep=True)
def foo(a, b=1, **kwargs): pass
test_sig(foo, '(a, b=1, c=2, **kwargs)')
class BaseFoo:
def __init__(self, e, c=2): pass
@delegates()
class Foo(BaseFoo):
def __init__(self, a, b=1, **kwargs): super().__init__(**kwargs)
test_sig(Foo, '(a, b=1, c=2)')
#export
def funcs_kwargs(cls):
"Replace methods in `self._methods` with those from `kwargs`"
old_init = cls.__init__
def _init(self, *args, **kwargs):
for k in cls._methods:
arg = kwargs.pop(k,None)
if arg is not None:
if isinstance(arg,types.MethodType): arg = types.MethodType(arg.__func__, self)
setattr(self, k, arg)
old_init(self, *args, **kwargs)
functools.update_wrapper(_init, old_init)
cls.__init__ = use_kwargs(cls._methods)(_init)
return cls
#export
def method(f):
"Mark `f` as a method"
# `1` is a dummy instance since Py3 doesn't allow `None` any more
return types.MethodType(f, 1)
@funcs_kwargs
class T:
_methods=['b']
def __init__(self, f=1, **kwargs): assert not kwargs
def a(self): return 1
def b(self): return 2
t = T()
test_eq(t.a(), 1)
test_eq(t.b(), 2)
t = T(b = lambda:3)
test_eq(t.b(), 3)
test_sig(T, '(f=1, *, b=None)')
test_fail(lambda: T(a = lambda:3))
@method
def _f(self,a=1): return a+1
t = T(b = _f)
test_eq(t.b(2), 3)
class T2(T):
def __init__(self,a):
super().__init__(b = lambda:3)
self.a=a
t = T2(a=1)
test_eq(t.b(), 3)
test_sig(T2, '(a)')
def _g(a=1): return a+1
class T3(T): b = staticmethod(_g)
t = T3()
test_eq(t.b(2), 3) | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |
Type checking Runtime type checking is handy, so let's make it easy! | #export core
#NB: Please don't move this to a different line or module, since it's used in testing `get_source_link`
def chk(f): return typechecked(always=True)(f) | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |
Decorator for a function to check that type-annotated arguments receive arguments of the right type. | @chk
def test_chk(a:int=1): return a
test_eq(test_chk(2), 2)
test_eq(test_chk(), 1)
test_fail(lambda: test_chk('a'), contains='"a" must be int') | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |
Decorated functions will pickle correctly. | t = pickle.loads(pickle.dumps(test_chk))
test_eq(t(2), 2)
test_eq(t(), 1) | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |
Context managers | @contextmanager
def working_directory(path):
"Change working directory to `path` and return to previous on exit."
prev_cwd = Path.cwd()
os.chdir(path)
try: yield
finally: os.chdir(prev_cwd) | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |
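A small usage sketch (added for illustration; not a cell from the original notebook): the previous working directory is restored on exit, even if the body raises. | prev = Path.cwd()
with working_directory('..'):
    print('inside:', Path.cwd())
assert Path.cwd() == prev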
Monkey-patching | def is_listy(x): return isinstance(x,(list,tuple,Generator))
#export
def tensor(x, *rest, **kwargs):
"Like `torch.as_tensor`, but handle lists too, and can pass multiple vector elements directly."
if len(rest): x = (x,)+rest
# Pytorch bug in dataloader using num_workers>0
if isinstance(x, (tuple,list)) and len(x)==0: return tensor(0)
res = (torch.tensor(x, **kwargs) if isinstance(x, (tuple,list))
else as_tensor(x, **kwargs) if hasattr(x, '__array__')
else as_tensor(x, **kwargs) if is_listy(x)
else as_tensor(x, **kwargs) if is_iter(x)
else None)
if res is None:
res = as_tensor(array(x), **kwargs)
if res.dtype is torch.float64: return res.float()
if res.dtype is torch.int32:
warn('Tensor is int32: upgrading to int64; for better performance use int64 input')
return res.long()
return res
test_eq(tensor(array([1,2,3])), torch.tensor([1,2,3]))
test_eq(tensor(1,2,3), torch.tensor([1,2,3]))
test_eq_type(tensor(1.0), torch.tensor(1.0)) | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |
`Tensor.ndim` We add an `ndim` property to `Tensor` with same semantics as [numpy ndim](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.ndim.html), which allows tensors to be used in matplotlib and other places that assume this property exists. | test_eq(torch.tensor([1,2]).ndim,1)
test_eq(torch.tensor(1).ndim,0)
test_eq(torch.tensor([[1]]).ndim,2) | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |
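The patching cell itself is not shown in this excerpt; a minimal sketch of how such a property could be attached (hypothetical, and guarded because recent PyTorch versions already provide `ndim` natively): | # Sketch only: mirror numpy's `ndim` via `Tensor.dim()`
if not hasattr(Tensor, 'ndim'):
    setattr(Tensor, 'ndim', property(lambda x: x.dim()))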
Documentation functions | #export core
def add_docs(cls, cls_doc=None, **docs):
"Copy values from `docs` to `cls` docstrings, and confirm all public methods are documented"
if cls_doc is not None: cls.__doc__ = cls_doc
for k,v in docs.items():
f = getattr(cls,k)
if hasattr(f,'__func__'): f = f.__func__ # required for class methods
f.__doc__ = v
# List of public callables without docstring
nodoc = [c for n,c in vars(cls).items() if isinstance(c,Callable)
and not n.startswith('_') and c.__doc__ is None]
assert not nodoc, f"Missing docs: {nodoc}"
assert cls.__doc__ is not None, f"Missing class docs: {cls}"
#export core
def docs(cls):
"Decorator version of `add_docs`, using `_docs` dict"
add_docs(cls, **cls._docs)
return cls
class _T:
def f(self): pass
@classmethod
def g(cls): pass
add_docs(_T, "a", f="f", g="g")
test_eq(_T.__doc__, "a")
test_eq(_T.f.__doc__, "f")
test_eq(_T.g.__doc__, "g")
#export
def custom_dir(c, add:List):
"Implement custom `__dir__`, adding `add` to `cls`"
return dir(type(c)) + list(c.__dict__.keys()) + add
show_doc(is_iter)
assert is_iter([1])
assert not is_iter(torch.tensor(1))
assert is_iter(torch.tensor([1,2]))
assert (o for o in range(3)) | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |
GetAttr - | #export
class GetAttr(BaseObj):
"Inherit from this to have all attr accesses in `self._xtra` passed down to `self.default`"
@property
def _xtra(self): return [o for o in dir(self.default) if not o.startswith('_')]
def __getattr__(self,k):
if k in self._xtra: return getattr(self.default, k)
raise AttributeError(k)
def __dir__(self): return custom_dir(self, self._xtra)
class _C(GetAttr): default,_xtra = 'Hi',['lower']
t = _C()
test_eq(t.lower(), 'hi')
test_fail(lambda: t.upper())
assert 'lower' in dir(t)
#export
def delegate_attr(self, k, to):
"Use in `__getattr__` to delegate to attr `to` without inheriting from `GetAttr`"
if k.startswith('_') or k==to: raise AttributeError(k)
try: return getattr(getattr(self,to), k)
except AttributeError: raise AttributeError(k) from None
class _C:
f = 'Hi'
def __getattr__(self, k): return delegate_attr(self, k, 'f')
t = _C()
test_eq(t.lower(), 'hi') | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |
L - | # export
def coll_repr(c, max_n=10):
"String repr of up to `max_n` items of (possibly lazy) collection `c`"
return f'(#{len(c)}) [' + ','.join(itertools.islice(map(str,c), max_n)) + ('...'
            if len(c)>max_n else '') + ']'
test_eq(coll_repr(range(1000), 5), '(#1000) [0,1,2,3,4...]')
# export
def mask2idxs(mask):
"Convert bool mask or index list to index `L`"
mask = list(mask)
if len(mask)==0: return []
if isinstance(mask[0],bool): return [i for i,m in enumerate(mask) if m]
return [int(i) for i in mask]
test_eq(mask2idxs([False,True,False,True]), [1,3])
test_eq(mask2idxs(torch.tensor([1,2,3])), [1,2,3])
# export
def _listify(o):
if o is None: return []
if isinstance(o, list): return o
if isinstance(o, (str,np.ndarray,Tensor)): return [o]
if is_iter(o): return list(o)
return [o]
#export
class CollBase(GetAttr, metaclass=NewChkMeta):
"Base class for composing a list of `items`"
_xtra = [o for o in dir([]) if not o.startswith('_')]
def __init__(self, items): self.items = items
def __len__(self): return len(self.items)
def __getitem__(self, k): return self.items[k]
def __setitem__(self, k, v): self.items[k] = v
def __delitem__(self, i): del(self.items[i])
def __repr__(self): return self.items.__repr__()
def __iter__(self): return self.items.__iter__()
def _new(self, items, *args, **kwargs): return self.__class__(items, *args, **kwargs)
@property
def default(self): return self.items
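As a quick illustration (not part of the original notebook), a bare subclass already behaves like the list it wraps, with unknown attributes delegated to `items` through `GetAttr`: | class _CB(CollBase): pass
c = _CB([1,2,3])
test_eq(len(c), 3)
test_eq(c[1], 2)
test_eq(c.count(2), 1)  # list method reached via the `default` property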
#export
class L(CollBase):
"Behaves like a list of `items` but can also index with list of indices or masks"
def __init__(self, items=None, *rest, use_list=False, match=None):
if rest: items = (items,)+rest
if items is None: items = []
if (use_list is not None) or not isinstance(items,(Tensor,ndarray,pd.DataFrame,pd.Series)):
items = list(items) if use_list else _listify(items)
if match is not None:
if len(items)==1: items = items*len(match)
else: assert len(items)==len(match), 'Match length mismatch'
super().__init__(items)
def __getitem__(self, idx): return L(self._gets(idx), use_list=None) if is_iter(idx) else self._get(idx)
def _get(self, i): return getattr(self.items,'iloc',self.items)[i]
def _gets(self, i):
i = mask2idxs(i)
return (self.items.iloc[list(i)] if hasattr(self.items,'iloc')
else self.items.__array__()[(i,)] if hasattr(self.items,'__array__')
else [self.items[i_] for i_ in i])
def __setitem__(self, idx, o):
"Set `idx` (can be list of indices, or mask, or int) items to `o` (which is broadcast if not iterable)"
idx = idx if isinstance(idx,L) else _listify(idx)
if not is_iter(o): o = [o]*len(idx)
for i,o_ in zip(idx,o): self.items[i] = o_
def __repr__(self): return coll_repr(self)
def __eq__(self,b): return all_equal(b,self)
def __iter__(self): return (self[i] for i in range(len(self)))
def __invert__(self): return self._new(not i for i in self)
def __mul__ (a,b): return a._new(a.items*b)
def __add__ (a,b): return a._new(a.items+_listify(b))
def __radd__(a,b): return a._new(b)+a
    def __iadd__(a,b):
a.items += list(b)
return a
def sorted(self, key=None, reverse=False):
"New `L` sorted by `key`. If key is str then use `attrgetter`. If key is int then use `itemgetter`."
if isinstance(key,str): k=lambda o:getattr(o,key,0)
elif isinstance(key,int): k=itemgetter(key)
else: k=key
return self._new(sorted(self.items, key=k, reverse=reverse))
@classmethod
def range(cls, a, b=None, step=None):
"Same as builtin `range`, but returns an `L`. Can pass a collection for `a`, to use `len(a)`"
if is_coll(a): a = len(a)
return cls(range(a,b,step) if step is not None else range(a,b) if b is not None else range(a))
def itemgot(self, idx): return self.mapped(itemgetter(idx))
def attrgot(self, k, default=None): return self.mapped(lambda o:getattr(o,k,default))
def tensored(self): return self.mapped(tensor)
def stack(self, dim=0): return torch.stack(list(self.tensored()), dim=dim)
def cat (self, dim=0): return torch.cat (list(self.tensored()), dim=dim)
def cycle(self): return itertools.cycle(self) if len(self) > 0 else itertools.cycle([None])
def filtered(self, f, *args, **kwargs): return self._new(filter(partial(f,*args,**kwargs), self))
def mapped(self, f, *args, **kwargs): return self._new(map(partial(f,*args,**kwargs), self))
def mapped_dict(self, f, *args, **kwargs): return {k:f(k, *args,**kwargs) for k in self}
def starmapped(self, f, *args, **kwargs): return self._new(itertools.starmap(partial(f,*args,**kwargs), self))
def zipped(self, longest=False): return self._new((zip_longest if longest else zip)(*self))
def zipwith(self, *rest, longest=False): return self._new([self, *rest]).zipped(longest=longest)
def mapped_zip(self, f, longest=False): return self.zipped(longest=longest).starmapped(f)
def mapped_zipwith(self, f, *rest, longest=False): return self.zipwith(*rest, longest=longest).starmapped(f)
def shuffled(self):
it = copy(self.items)
random.shuffle(it)
return self._new(it)
#export
add_docs(L,
__getitem__="Retrieve `idx` (can be list of indices, or mask, or int) items",
filtered="Create new `L` filtered by predicate `f`, passing `args` and `kwargs` to `f`",
mapped="Create new `L` with `f` applied to all `items`, passing `args` and `kwargs` to `f`",
mapped_dict="Like `mapped`, but creates a dict from `items` to function results",
starmapped="Like `mapped`, but use `itertools.starmap`",
itemgot="Create new `L` with item `idx` of all `items`",
attrgot="Create new `L` with attr `k` of all `items`",
tensored="`mapped(tensor)`",
cycle="Same as `itertools.cycle`",
stack="Same as `torch.stack`",
cat="Same as `torch.cat`",
zipped="Create new `L` with `zip(*items)`",
zipwith="Create new `L` with `self` zipped with each of `*rest`",
mapped_zip="Combine `zipped` and `starmapped`",
mapped_zipwith="Combine `zipwith` and `starmapped`",
shuffled="Same as `random.shuffle`, but not inplace") | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |
You can create an `L` from an existing iterable (e.g. a list, range, etc) and access or modify it with an int list/tuple index, mask, int, or slice. All `list` methods can also be used with `L`. | t = L(range(12))
test_eq(t, list(range(12)))
test_ne(t, list(range(11)))
t.reverse()
test_eq(t[0], 11)
t[3] = "h"
test_eq(t[3], "h")
t[3,5] = ("j","k")
test_eq(t[3,5], ["j","k"])
test_eq(t, L(t))
t | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |
There are optimized indexers for arrays, tensors, and DataFrames. | arr = np.arange(9).reshape(3,3)
t = L(arr, use_list=None)
test_eq(t[1,2], arr[[1,2]])
arr = torch.arange(9).view(3,3)
t = L(arr, use_list=None)
test_eq(t[1,2], arr[[1,2]])
df = pd.DataFrame({'a':[1,2,3]})
t = L(df, use_list=None)
test_eq(t[1,2], L(pd.DataFrame({'a':[2,3]}), use_list=None)) | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |
You can also modify an `L` with `append`, `+`, and `*`. | t = L()
test_eq(t, [])
t.append(1)
test_eq(t, [1])
t += [3,2]
test_eq(t, [1,3,2])
t = t + [4]
test_eq(t, [1,3,2,4])
t = 5 + t
test_eq(t, [5,1,3,2,4])
test_eq(L(1,2,3), [1,2,3])
test_eq(L(1,2,3), L(1,2,3))
t = L(1)*5
t = t.mapped(operator.neg)
test_eq(t,[-1]*5)
test_eq(~L([True,False,False]), L([False,True,True]))
t = L(range(4))
test_eq(zip(t, L(1).cycle()), zip(range(4),(1,1,1,1)))
t = L.range(100)
t2 = t.shuffled()
test_ne(t,t2)
test_eq(L.range(100), t)
test_eq(set(t),set(t2))
def _f(x,a=0): return x+a
t = L(1)*5
test_eq(t.mapped(_f), t)
test_eq(t.mapped(_f,1), [2]*5)
test_eq(t.mapped(_f,a=2), [3]*5) | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |
An `L` can be constructed from anything iterable, although tensors and arrays will not be iterated over on construction, unless you pass `use_list` to the constructor. | test_eq(L([1,2,3]),[1,2,3])
test_eq(L(L([1,2,3])),[1,2,3])
test_ne(L([1,2,3]),[1,2,])
test_eq(L('abc'),['abc'])
test_eq(L(range(0,3)),[0,1,2])
test_eq(L(o for o in range(0,3)),[0,1,2])
test_eq(L(tensor(0)),[tensor(0)])
test_eq(L([tensor(0),tensor(1)]),[tensor(0),tensor(1)])
test_eq(L(tensor([0.,1.1]))[0],tensor([0.,1.1]))
test_eq(L(tensor([0.,1.1]), use_list=True), [0.,1.1]) # `use_list=True` to unwrap arrays/tensors | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |
If `match` is not `None` then the created list is same len as `match`, either by:- If `len(items)==1` then `items` is replicated,- Otherwise an error is raised if `match` and `items` are not already the same size. | test_eq(L(1,match=[1,2,3]),[1,1,1])
test_eq(L([1,2],match=[2,3]),[1,2])
test_fail(lambda: L([1,2],match=[1,2,3])) | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |
If you create an `L` from an existing `L` then you'll get back the original object (since `L` uses the `NewChkMeta` metaclass). | test_is(L(t), t) | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |
Methods | show_doc(L.__getitem__)
t = L(range(12))
test_eq(t[1,2], [1,2]) # implicit tuple
test_eq(t[[1,2]], [1,2]) # list
test_eq(t[:3], [0,1,2]) # slice
test_eq(t[[False]*11 + [True]], [11]) # mask
test_eq(t[tensor(3)], 3)
show_doc(L.__setitem__)
t[4,6] = 0
test_eq(t[4,6], [0,0])
t[4,6] = [1,2]
test_eq(t[4,6], [1,2])
show_doc(L.filtered)
test_eq(t.filtered(lambda o:o<5), [0,1,2,3,1,2])
show_doc(L.mapped)
test_eq(L(range(4)).mapped(operator.neg), [0,-1,-2,-3])
show_doc(L.mapped_dict)
test_eq(L(range(1,5)).mapped_dict(operator.neg), {1:-1, 2:-2, 3:-3, 4:-4})
show_doc(L.zipped)
t = L([[1,2,3],'abc'])
test_eq(t.zipped(), [(1, 'a'),(2, 'b'),(3, 'c')])
t = L([[1,2],'abc'])
test_eq(t.zipped(longest=True ), [(1, 'a'),(2, 'b'),(None, 'c')])
test_eq(t.zipped(longest=False), [(1, 'a'),(2, 'b')])
show_doc(L.mapped_zip)
t = L([1,2,3],[2,3,4])
test_eq(t.mapped_zip(operator.mul), [2,6,12])
show_doc(L.zipwith)
b = [[0],[1],[2,2]]
t = L([1,2,3]).zipwith(b)
test_eq(t, [(1,[0]), (2,[1]), (3,[2,2])])
show_doc(L.mapped_zipwith)
test_eq(L(1,2,3).mapped_zipwith(operator.mul, [2,3,4]), [2,6,12])
show_doc(L.itemgot)
test_eq(t.itemgot(1), b)
show_doc(L.attrgot)
a = [SimpleNamespace(a=3,b=4),SimpleNamespace(a=1,b=2)]
test_eq(L(a).attrgot('b'), [4,2])
show_doc(L.sorted)
test_eq(L(a).sorted('a').attrgot('b'), [2,4])
show_doc(L.range)
test_eq_type(L.range([1,1,1]), L(range(3)))
test_eq_type(L.range(5,2,2), L(range(5,2,2)))
show_doc(L.tensored) | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |
There are shortcuts for `torch.stack` and `torch.cat` if your `L` contains tensors or something convertible. You can manually convert with `tensored`. | t = L(([1,2],[3,4]))
test_eq(t.tensored(), [tensor(1,2),tensor(3,4)])
show_doc(L.stack)
test_eq(t.stack(), tensor([[1,2],[3,4]]))
show_doc(L.cat)
test_eq(t.cat(), tensor([1,2,3,4])) | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |
Utility functions Basics | # export
def ifnone(a, b):
"`b` if `a` is None else `a`"
return b if a is None else a | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |
Since `b if a is None else a` is such a common pattern, we wrap it in a function. However, be careful, because python will evaluate *both* `a` and `b` when calling `ifnone` (which it doesn't do if using the `if` version directly). | test_eq(ifnone(None,1), 1)
test_eq(ifnone(2 ,1), 2)
#export
def get_class(nm, *fld_names, sup=None, doc=None, funcs=None, **flds):
"Dynamically create a class, optionally inheriting from `sup`, containing `fld_names`"
attrs = {}
for f in fld_names: attrs[f] = None
for f in L(funcs): attrs[f.__name__] = f
for k,v in flds.items(): attrs[k] = v
sup = ifnone(sup, ())
if not isinstance(sup, tuple): sup=(sup,)
def _init(self, *args, **kwargs):
for i,v in enumerate(args): setattr(self, list(attrs.keys())[i], v)
for k,v in kwargs.items(): setattr(self,k,v)
def _repr(self):
return '\n'.join(f'{o}: {getattr(self,o)}' for o in set(dir(self))
if not o.startswith('_') and not isinstance(getattr(self,o), types.MethodType))
if not sup: attrs['__repr__'] = _repr
attrs['__init__'] = _init
res = type(nm, sup, attrs)
if doc is not None: res.__doc__ = doc
return res
_t = get_class('_t', 'a', b=2)
t = _t()
test_eq(t.a, None)
test_eq(t.b, 2)
t = _t(1, b=3)
test_eq(t.a, 1)
test_eq(t.b, 3)
t = _t(1, 3)
test_eq(t.a, 1)
test_eq(t.b, 3) | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |
Most often you'll want to call `mk_class`, since it adds the class to your module. See `mk_class` for more details and examples of use (which also apply to `get_class`). | #export
def mk_class(nm, *fld_names, sup=None, doc=None, funcs=None, mod=None, **flds):
"Create a class using `get_class` and add to the caller's module"
if mod is None: mod = inspect.currentframe().f_back.f_locals
res = get_class(nm, *fld_names, sup=sup, doc=doc, funcs=funcs, **flds)
mod[nm] = res | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |
Any `kwargs` will be added as class attributes, and `sup` is an optional (tuple of) base classes. | mk_class('_t', a=1, sup=GetAttr)
t = _t()
test_eq(t.a, 1)
assert(isinstance(t,GetAttr)) | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |
A `__init__` is provided that sets attrs for any `kwargs`, and for any `args` (matching by position to fields), along with a `__repr__` which prints all attrs. The docstring is set to `doc`. You can pass `funcs` which will be added as attrs with the function names. | def foo(self): return 1
mk_class('_t', 'a', sup=GetAttr, doc='test doc', funcs=foo)
t = _t(3, b=2)
test_eq(t.a, 3)
test_eq(t.b, 2)
test_eq(t.foo(), 1)
test_eq(t.__doc__, 'test doc')
t
#export
def wrap_class(nm, *fld_names, sup=None, doc=None, funcs=None, **flds):
"Decorator: makes function a method of a new class `nm` passing parameters to `mk_class`"
def _inner(f):
mk_class(nm, *fld_names, sup=sup, doc=doc, funcs=L(funcs)+f, mod=f.__globals__, **flds)
return f
return _inner
@wrap_class('_t', a=2)
def bar(self,x): return x+1
t = _t()
test_eq(t.a, 2)
test_eq(t.bar(3), 4)
show_doc(noop)
noop()
test_eq(noop(1),1)
show_doc(noops)
mk_class('_t', foo=noops)
test_eq(_t().foo(1),1)
#export
def set_seed(s):
"Set random seed for `random`, `torch`, and `numpy` (where available)"
try: torch.manual_seed(s)
except NameError: pass
try: np.random.seed(s%(2**32-1))
except NameError: pass
random.seed(s)
set_seed(2*33)
a1 = np.random.random()
a2 = torch.rand(())
a3 = random.random()
set_seed(2*33)
b1 = np.random.random()
b2 = torch.rand(())
b3 = random.random()
test_eq(a1,b1)
test_eq(a2,b2)
test_eq(a3,b3)
#export
def store_attr(self, nms):
"Store params named in comma-separated `nms` from calling context into attrs in `self`"
mod = inspect.currentframe().f_back.f_locals
for n in re.split(', *', nms): setattr(self,n,mod[n])
class T:
def __init__(self, a,b,c): store_attr(self, 'a,b, c')
t = T(1,c=2,b=3)
assert t.a==1 and t.b==3 and t.c==2 | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |
Subclassing `Tensor` | #export
class TensorBase(Tensor, metaclass=BypassNewMeta):
def _new_meta(self, *args, **kwargs): return tensor(self)
#export
def _patch_tb():
def get_f(fn):
def _f(self, *args, **kwargs):
cls = self.__class__
res = getattr(super(TensorBase, self), fn)(*args, **kwargs)
return cls(res) if isinstance(res,Tensor) else res
return _f
t = tensor([1])
skips = '__class__ __deepcopy__ __delattr__ __dir__ __doc__ __getattribute__ __hash__ __init__ \
__init_subclass__ __new__ __reduce__ __module__ __setstate__'.split()
for fn in dir(t):
if fn in skips: continue
f = getattr(t, fn)
if isinstance(f, (types.MethodWrapperType, types.BuiltinFunctionType, types.BuiltinMethodType, types.MethodType, types.FunctionType)):
setattr(TensorBase, fn, get_f(fn))
_patch_tb()
t = TensorBase(range(5))
test_eq_type(t[0], TensorBase(0))
test_eq_type(t[:2], TensorBase([0,1]))
test_eq_type(t+1, TensorBase(range(1,6)))
class _T(TensorBase): pass
t = _T(range(5))
test_eq_type(t[0], _T(0))
test_eq_type(t[:2], _T([0,1]))
test_eq_type(t+1, _T(range(1,6)))
#export
def retain_type(new, old=None, typ=None):
"Cast `new` to type of `old` if it's a superclass"
# e.g. old is TensorImage, new is Tensor - if not subclass then do nothing
assert old is not None or typ is not None
if typ is None:
if not isinstance(old, type(new)): return new
typ = old if isinstance(old,type) else type(old)
    # Do nothing if `new` is already an instance of the requested type (i.e. same type)
return typ(new) if typ!=NoneType and not isinstance(new, typ) else new
class _T(tuple): pass
a = _T((1,2))
b = tuple((1,2))
test_eq_type(retain_type(b, typ=_T), a)
#export
def retain_types(new, old=None, typs=None):
"Cast each item of `new` to type of matching item in `old` if it's a superclass"
assert old is not None or typs is not None
return tuple(L(new,L(old),L(typs)).mapped_zip(retain_type, longest=True))
class T(tuple): pass
t1,t2 = retain_types((tensor(1),(tensor(1),)), (TensorBase(2),T((2,))))
test_eq_type(t1, TensorBase(1))
test_eq_type(t2, T((tensor(1),))) | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |
Collection functions | #export
def tuplify(o, use_list=False, match=None):
"Make `o` a tuple"
return tuple(L(o, use_list=use_list, match=match))
test_eq(tuplify(None),())
test_eq(tuplify([1,2,3]),(1,2,3))
test_eq(tuplify(1,match=[1,2,3]),(1,1,1))
#export
def replicate(item,match):
"Create tuple of `item` copied `len(match)` times"
return (item,)*len(match)
t = [1,1]
test_eq(replicate([1,2], t),([1,2],[1,2]))
test_eq(replicate(1, t),(1,1))
#export
def uniqueify(x, sort=False, bidir=False, start=None):
"Return the unique elements in `x`, optionally `sort`-ed, optionally return the reverse correspondance."
res = list(OrderedDict.fromkeys(x).keys())
if start is not None: res = L(start)+res
if sort: res.sort()
if bidir: return res, {v:k for k,v in enumerate(res)}
return res
# test
test_eq(set(uniqueify([1,1,0,5,0,3])),{0,1,3,5})
test_eq(uniqueify([1,1,0,5,0,3], sort=True),[0,1,3,5])
v,o = uniqueify([1,1,0,5,0,3], bidir=True)
test_eq(v,[1,0,5,3])
test_eq(o,{1:0, 0: 1, 5: 2, 3: 3})
v,o = uniqueify([1,1,0,5,0,3], sort=True, bidir=True)
test_eq(v,[0,1,3,5])
test_eq(o,{0:0, 1: 1, 3: 2, 5: 3})
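The `start` parameter is not exercised above; it simply prepends the given items before the deduplicated values (illustrative check): | test_eq(uniqueify([1,1,2], start=[0]), [0,1,2])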
# export
def setify(o): return o if isinstance(o,set) else set(L(o))
# test
test_eq(setify(None),set())
test_eq(setify('abc'),{'abc'})
test_eq(setify([1,2,2]),{1,2})
test_eq(setify(range(0,3)),{0,1,2})
test_eq(setify({1,2}),{1,2})
#export
def is_listy(x):
"`isinstance(x, (tuple,list,L))`"
return isinstance(x, (tuple,list,L,slice,Generator))
assert is_listy([1])
assert is_listy(L([1]))
assert is_listy(slice(2))
assert not is_listy(torch.tensor([1]))
#export
def range_of(x):
"All indices of collection `x` (i.e. `list(range(len(x)))`)"
return list(range(len(x)))
test_eq(range_of([1,1,1,1]), [0,1,2,3])
#export
def groupby(x, key):
"Like `itertools.groupby` but doesn't need to be sorted, and isn't lazy"
res = {}
for o in x: res.setdefault(key(o), []).append(o)
return res
test_eq(groupby('aa ab bb'.split(), itemgetter(0)), {'a':['aa','ab'], 'b':['bb']})
#export
def merge(*ds):
"Merge all dictionaries in `ds`"
return {k:v for d in ds for k,v in d.items()}
test_eq(merge(), {})
test_eq(merge(dict(a=1,b=2)), dict(a=1,b=2))
test_eq(merge(dict(a=1,b=2), dict(b=3,c=4)), dict(a=1, b=3, c=4))
#export
def shufflish(x, pct=0.04):
"Randomly relocate items of `x` up to `pct` of `len(x)` from their starting location"
n = len(x)
return L(x[i] for i in sorted(range_of(x), key=lambda o: o+n*(1+random.random()*pct)))
l = list(range(100))
l2 = array(shufflish(l))
test_close(l2[:50 ].mean(), 25, eps=5)
test_close(l2[-50:].mean(), 75, eps=5)
test_ne(l,l2)
#export
class IterLen:
"Base class to add iteration to anything supporting `len` and `__getitem__`"
def __iter__(self): return (self[i] for i in range_of(self))
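A minimal illustration (added here; not in the original notebook): any class exposing `__len__` and `__getitem__` gains iteration by inheriting `IterLen`. | class _Squares(IterLen):
    def __init__(self, n): self.n = n
    def __len__(self): return self.n
    def __getitem__(self, i): return i*i
test_eq(list(_Squares(4)), [0,1,4,9])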
#export
@docs
class ReindexCollection(GetAttr, IterLen):
"Reindexes collection `coll` with indices `idxs` and optional LRU cache of size `cache`"
def __init__(self, coll, idxs=None, cache=None):
self.default,self.coll,self.idxs,self.cache = coll,coll,ifnone(idxs,L.range(coll)),cache
def _get(self, i): return self.coll[i]
self._get = types.MethodType(_get,self)
if cache is not None: self._get = functools.lru_cache(maxsize=cache)(self._get)
def __getitem__(self, i): return self._get(self.idxs[i])
def __len__(self): return len(self.coll)
def reindex(self, idxs): self.idxs = idxs
def shuffle(self): random.shuffle(self.idxs)
def cache_clear(self): self._get.cache_clear()
_docs = dict(reindex="Replace `self.idxs` with idxs",
shuffle="Randomly shuffle indices",
cache_clear="Clear LRU cache")
sz = 50
t = ReindexCollection(L.range(sz), cache=2)
test_eq(list(t), range(sz))
test_eq(t[sz-1], sz-1)
test_eq(t._get.cache_info().hits, 1)
t.shuffle()
test_eq(t._get.cache_info().hits, 1)
test_ne(list(t), range(sz))
test_eq(set(t), set(range(sz)))
t.cache_clear()
test_eq(t._get.cache_info().hits, 0)
test_eq(t.count(0), 1)
#export
def _oper(op,a,b=None): return (lambda o:op(o,a)) if b is None else op(a,b)
def _mk_op(nm, mod=None):
"Create an operator using `oper` and add to the caller's module"
if mod is None: mod = inspect.currentframe().f_back.f_locals
op = getattr(operator,nm)
def _inner(a,b=None): return _oper(op, a,b)
_inner.__name__ = _inner.__qualname__ = nm
_inner.__doc__ = f'Same as `operator.{nm}`, or returns partial if 1 arg'
mod[nm] = _inner
#export
_all_ = ['lt', 'gt', 'le', 'ge', 'eq', 'ne', 'add', 'sub', 'mul', 'truediv']
#export
for op in 'lt gt le ge eq ne add sub mul truediv'.split(): _mk_op(op) | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |
The following functions are provided matching the behavior of the equivalent versions in `operator`: - *lt gt le ge eq ne add sub mul truediv* | lt(3,5),gt(3,5) | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |
However, they also have additional functionality: if you only pass one param, they return a partial function that passes that param as the second positional parameter. | lt(5)(3),gt(5)(3)
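These partials combine naturally with the `L` helpers defined earlier (illustrative checks, not cells from the original notebook): | test_eq(L(0,1,2,3,4).filtered(lt(3)), [0,1,2])
test_eq(L(0,1,2,3,4).mapped(add(10)), [10,11,12,13,14])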
#export
class _InfMeta(type):
@property
def count(self): return itertools.count()
@property
def zeros(self): return itertools.cycle([0])
@property
def ones(self): return itertools.cycle([1])
@property
def nones(self): return itertools.cycle([None])
#export
class Inf(metaclass=_InfMeta):
"Infinite lists"
pass | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |
`Inf` defines the following properties: - `count: itertools.count()`- `zeros: itertools.cycle([0])`- `ones : itertools.cycle([1])`- `nones: itertools.cycle([None])` | test_eq([o for i,o in zip(range(5), Inf.count)],
[0, 1, 2, 3, 4])
test_eq([o for i,o in zip(range(5), Inf.zeros)],
[0, 0, 0, 0, 0])
#export
def true(*args, **kwargs):
"Predicate: always `True`"
return True
#export
def stop(e=StopIteration):
"Raises exception `e` (by default `StopException`) even if in an expression"
raise e
#export
def gen(func, seq, cond=true):
"Like `(func(o) for o in seq if cond(func(o)))` but handles `StopIteration`"
return itertools.takewhile(cond, map(func,seq))
test_eq(gen(noop, Inf.count, lt(5)),
range(5))
test_eq(gen(operator.neg, Inf.count, gt(-5)),
[0,-1,-2,-3,-4])
test_eq(gen(lambda o:o if o<5 else stop(), Inf.count),
range(5))
#export
def chunked(it, cs, drop_last=False):
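    "Yield lists of up to `cs` items from `it`; if `drop_last`, skip a final chunk shorter than `cs`"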
if not isinstance(it, Iterator): it = iter(it)
while True:
res = list(itertools.islice(it, cs))
if res and (len(res)==cs or not drop_last): yield res
if len(res)<cs: return
t = L.range(10)
test_eq(chunked(t,3), [[0,1,2], [3,4,5], [6,7,8], [9]])
test_eq(chunked(t,3,True), [[0,1,2], [3,4,5], [6,7,8], ])
t = map(lambda o:stop() if o==6 else o, Inf.count)
test_eq(chunked(t,3), [[0, 1, 2], [3, 4, 5]])
t = map(lambda o:stop() if o==7 else o, Inf.count)
test_eq(chunked(t,3), [[0, 1, 2], [3, 4, 5], [6]])
t = tensor(range(10))
test_eq(chunked(t,3), [[0,1,2], [3,4,5], [6,7,8], [9]])
test_eq(chunked(t,3,True), [[0,1,2], [3,4,5], [6,7,8], ])
#export
def concat(*ls):
"Concatenate tensors, arrays, lists, or tuples"
if not len(ls): return []
it = ls[0]
if isinstance(it,torch.Tensor): res = torch.cat(ls)
elif isinstance(it,ndarray): res = np.concatenate(ls)
else:
res = [o for x in ls for o in L(x)]
if isinstance(it,(tuple,list)): res = type(it)(res)
else: res = L(res)
return retain_type(res, it)
a,b,c = [1],[1,2],[1,1,2]
test_eq(concat(a,b), c)
test_eq_type(concat(tuple (a),tuple (b)), tuple (c))
test_eq_type(concat(array (a),array (b)), array (c))
test_eq_type(concat(tensor(a),tensor(b)), tensor(c))
test_eq_type(concat(TensorBase(a),TensorBase(b)), TensorBase(c))
test_eq_type(concat([1,1],1), [1,1,1])
test_eq_type(concat(1,1,1), L(1,1,1))
test_eq_type(concat(L(1,2),1), L(1,2,1)) | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |
Chunks - | #export
class Chunks:
"Slice and int indexing into a list of lists"
def __init__(self, chunks, lens=None):
self.chunks = chunks
self.lens = L(map(len,self.chunks) if lens is None else lens)
self.cumlens = np.cumsum(0+self.lens)
self.totlen = self.cumlens[-1]
def __getitem__(self,i):
if isinstance(i,slice): return self.getslice(i)
di,idx = self.doc_idx(i)
return self.chunks[di][idx]
def getslice(self, i):
st_d,st_i = self.doc_idx(ifnone(i.start,0))
en_d,en_i = self.doc_idx(ifnone(i.stop,self.totlen+1))
res = [self.chunks[st_d][st_i:(en_i if st_d==en_d else sys.maxsize)]]
for b in range(st_d+1,en_d): res.append(self.chunks[b])
if st_d!=en_d and en_d<len(self.chunks): res.append(self.chunks[en_d][:en_i])
return concat(*res)
def doc_idx(self, i):
if i<0: i=self.totlen+i # count from end
docidx = np.searchsorted(self.cumlens, i+1)-1
cl = self.cumlens[docidx]
return docidx,i-cl
docs = L(list(string.ascii_lowercase[a:b]) for a,b in ((0,3),(3,7),(7,8),(8,16),(16,24),(24,26)))
b = Chunks(docs)
test_eq([b[ o] for o in range(0,5)], ['a','b','c','d','e'])
test_eq([b[-o] for o in range(1,6)], ['z','y','x','w','v'])
test_eq(b[6:13], 'g,h,i,j,k,l,m'.split(','))
test_eq(b[20:77], 'u,v,w,x,y,z'.split(','))
test_eq(b[:5], 'a,b,c,d,e'.split(','))
test_eq(b[:2], 'a,b'.split(','))
t = torch.arange(26)
docs = L(t[a:b] for a,b in ((0,3),(3,7),(7,8),(8,16),(16,24),(24,26)))
b = Chunks(docs)
test_eq([b[ o] for o in range(0,5)], range(0,5))
test_eq([b[-o] for o in range(1,6)], [25,24,23,22,21])
test_eq(b[6:13], torch.arange(6,13))
test_eq(b[20:77], torch.arange(20,26))
test_eq(b[:5], torch.arange(5))
test_eq(b[:2], torch.arange(2))
docs = L(TensorBase(t[a:b]) for a,b in ((0,3),(3,7),(7,8),(8,16),(16,24),(24,26)))
b = Chunks(docs)
test_eq_type(b[:2], TensorBase(range(2)))
test_eq_type(b[:5], TensorBase(range(5)))
test_eq_type(b[9:13], TensorBase(range(9,13)))
type(b[9:13]) | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |
Functions on functions | #export
def trace(f):
"Add `set_trace` to an existing function `f`"
def _inner(*args,**kwargs):
set_trace()
return f(*args,**kwargs)
return _inner
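There is no usage cell for `trace` in this excerpt; a hedged sketch, left commented out because it drops straight into the `set_trace` debugger: | # f = trace(lambda x: x+1)
# f(1)  # opens the debugger, then returns 2 on continue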
# export
def compose(*funcs, order=None):
"Create a function that composes all functions in `funcs`, passing along remaining `*args` and `**kwargs` to all"
funcs = L(funcs)
if order is not None: funcs = funcs.sorted(order)
def _inner(x, *args, **kwargs):
for f in L(funcs): x = f(x, *args, **kwargs)
return x
return _inner
f1 = lambda o,p=0: (o*2)+p
f2 = lambda o,p=1: (o+1)/p
test_eq(f2(f1(3)), compose(f1,f2)(3))
test_eq(f2(f1(3,p=3),p=3), compose(f1,f2)(3,p=3))
test_eq(f2(f1(3, 3), 3), compose(f1,f2)(3, 3))
f1.order = 1
test_eq(f1(f2(3)), compose(f1,f2, order="order")(3))
#export
def maps(*args, retain=noop):
"Like `map`, except funcs are composed first"
f = compose(*args[:-1])
def _f(b): return retain(f(b), b)
return map(_f, args[-1])
test_eq(maps([1]), [1])
test_eq(maps(operator.neg, [1,2]), [-1,-2])
test_eq(maps(operator.neg, operator.neg, [1,2]), [1,2])
test_eq_type(list(maps(operator.neg, [TensorBase(1), 2], retain=retain_type)),
[TensorBase(-1), -2])
#export
def mapper(f):
"Create a function that maps `f` over an input collection"
return lambda o: [f(o_) for o_ in o]
func = mapper(lambda o:o*2)
test_eq(func(range(3)),[0,2,4])
#export
def partialler(f, *args, order=None, **kwargs):
"Like `functools.partial` but also copies over docstring"
fnew = partial(f,*args,**kwargs)
fnew.__doc__ = f.__doc__
if order is not None: fnew.order=order
elif hasattr(f,'order'): fnew.order=f.order
return fnew
def _f(x,a=1):
"test func"
return x+a
_f.order=1
f = partialler(_f, a=2)
test_eq(f.order, 1)
f = partialler(_f, a=2, order=3)
test_eq(f.__doc__, "test func")
test_eq(f.order, 3)
test_eq(f(3), _f(3,2))
#export
def instantiate(t):
"Instantiate `t` if it's a type, otherwise do nothing"
return t() if isinstance(t, type) else t
test_eq_type(instantiate(int), 0)
test_eq_type(instantiate(1), 1)
#export
mk_class('_Arg', 'i')
_0,_1,_2,_3,_4 = _Arg(0),_Arg(1),_Arg(2),_Arg(3),_Arg(4)
#export
class bind:
"Same as `partial`, except you can use `_0` `_1` etc param placeholders"
def __init__(self, fn, *pargs, **pkwargs):
store_attr(self, 'fn,pargs,pkwargs')
self.maxi = max((x.i for x in pargs if isinstance(x, _Arg)), default=-1)
def __call__(self, *args, **kwargs):
fargs = L(args[x.i] if isinstance(x, _Arg) else x for x in self.pargs) + args[self.maxi+1:]
return self.fn(*fargs, **{**self.pkwargs, **kwargs})
def myfn(a,b,c,d=1,e=2): return(a,b,c,d,e)
test_eq(bind(myfn, _1, 17, _0, e=3)(19,14), (14,17,19,1,3))
test_eq(bind(myfn, 17, _0, e=3)(19,14), (17,19,14,1,3))
test_eq(bind(myfn, 17, e=3)(19,14), (17,19,14,1,3))
test_eq(bind(myfn)(17,19,14), (17,19,14,1,2)) | _____no_output_____ | Apache-2.0 | dev/01_core.ipynb | nareshr8/fastai_dev |