Setting the index
- Now we need to set the index of the data-frame so that it contains the sequence of dates.
googl.set_index(pd.to_datetime(googl['Date']), inplace=True)
googl.index[0]
type(googl.index[0])
_____no_output_____
CC-BY-4.0
src/main/ipynb/pandas.ipynb
pperezgr/python-bigdata
Plotting series
- We can plot a series in a dataframe by invoking its `plot()` method.
- Here we plot a time-series of the daily traded volume:
ax = googl['Volume'].plot()
plt.show()
_____no_output_____
CC-BY-4.0
src/main/ipynb/pandas.ipynb
pperezgr/python-bigdata
Adjusted closing prices as a time series
googl['Adj Close'].plot()
plt.show()
_____no_output_____
CC-BY-4.0
src/main/ipynb/pandas.ipynb
pperezgr/python-bigdata
Slicing series using date/time stamps
- We can slice a time series by specifying a range of dates or times.
- Date and time stamps are specified as strings representing dates in the required format.
googl['Adj Close']['1-1-2016':'1-1-2017'].plot()
plt.show()
_____no_output_____
CC-BY-4.0
src/main/ipynb/pandas.ipynb
pperezgr/python-bigdata
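As a side note (not in the original notebook), pandas also supports partial-string indexing on a `DatetimeIndex`, so a whole month or year can be selected without spelling out both endpoints. A minimal sketch, assuming the `googl` data-frame defined above:

```python
# All rows from March 2016, selected with a partial date string
march_2016 = googl['Adj Close'].loc['2016-03']

# An entire year works the same way
year_2016 = googl['Adj Close'].loc['2016']
print(march_2016.head(), len(year_2016))
```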
Resampling
- We can *resample* to obtain e.g. weekly or monthly prices.
- In the example below the `'W'` denotes weekly.
- See [the documentation](http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases) for other frequencies.
- We group data into weeks, and then take the last value in each week.
- For details of other ways to resample the data, see [the documentation](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.resample.html).

Resampled time-series plot
weekly_prices = googl['Adj Close'].resample('W').last()
weekly_prices.head()
weekly_prices.plot()
plt.title('Prices for GOOGL sampled at weekly frequency')
plt.show()
_____no_output_____
CC-BY-4.0
src/main/ipynb/pandas.ipynb
pperezgr/python-bigdata
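For comparison, a short sketch of resampling to a different frequency; it assumes the `'M'` (month-end) alias from the offset-alias table linked above, and uses `.mean()` to average within each month instead of taking the last value:

```python
# Monthly average of the adjusted close, assuming googl is loaded as above
monthly_prices = googl['Adj Close'].resample('M').mean()
monthly_prices.plot()
plt.title('GOOGL adjusted close, monthly mean')
plt.show()
```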
Converting prices to log returns
weekly_rets = np.diff(np.log(weekly_prices))
plt.plot(weekly_rets)
plt.xlabel('t'); plt.ylabel('$r_t$')
plt.title('Weekly log-returns for GOOGL')
plt.show()
_____no_output_____
CC-BY-4.0
src/main/ipynb/pandas.ipynb
pperezgr/python-bigdata
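For reference, the quantity computed in the cell above is the weekly log return

$$ r_t = \log P_t - \log P_{t-1} = \log\left(\frac{P_t}{P_{t-1}}\right), $$

where $P_t$ denotes the adjusted closing price in week $t$; this is exactly what `np.diff(np.log(weekly_prices))` evaluates.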
Converting the returns to a series
- Notice that in the above plot the time axis is missing the dates.
- This is because the `np.diff()` function returns a plain NumPy array instead of a pandas Series, so the date index is lost.
type(weekly_rets)
_____no_output_____
CC-BY-4.0
src/main/ipynb/pandas.ipynb
pperezgr/python-bigdata
- We can convert it to a series thus:
weekly_rets_series = pd.Series(weekly_rets, index=weekly_prices.index[1:])
weekly_rets_series.head()
_____no_output_____
CC-BY-4.0
src/main/ipynb/pandas.ipynb
pperezgr/python-bigdata
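An equivalent, index-preserving alternative (not used in the original notebook) is to let pandas do the differencing itself, which avoids the manual re-indexing step:

```python
# np.log applied to a Series keeps its index; .diff() then differences consecutive weeks
weekly_rets_series_alt = np.log(weekly_prices).diff().dropna()
weekly_rets_series_alt.head()
```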
Plotting with the correct time axis

Now when we plot the series we will obtain the correct time axis:
plt.plot(weekly_rets_series)
plt.title('GOOGL weekly log-returns'); plt.xlabel('t'); plt.ylabel('$r_t$')
plt.show()
_____no_output_____
CC-BY-4.0
src/main/ipynb/pandas.ipynb
pperezgr/python-bigdata
Plotting a return histogram
weekly_rets_series.hist()
plt.show()
weekly_rets_series.describe()
_____no_output_____
CC-BY-4.0
src/main/ipynb/pandas.ipynb
pperezgr/python-bigdata
Scraping Fantasy Football Data

Need to scrape the following data:
- Weekly Player PPR Projections: ESPN, CBS, Fantasy Sharks, Scout Fantasy Sports (Fantasy Football Today was also tried, but it doesn't currently provide defense projections, so it is excluded)
- Previous Week Player Actual PPR Results
- Weekly FanDuel Player Salary (can manually download a CSV from a Thurs-Sun contest and then import it)
import pandas as pd
import numpy as np
import requests
# import json
# from bs4 import BeautifulSoup
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
# from selenium.common.exceptions import NoSuchElementException

#function to initialize selenium web scraper
def instantiate_selenium_driver():
    chrome_options = webdriver.ChromeOptions()
    chrome_options.add_argument('--no-sandbox')
    chrome_options.add_argument('--window-size=1420,1080')
    #chrome_options.add_argument('--headless')
    chrome_options.add_argument('--disable-gpu')
    driver = webdriver.Chrome('..\plugins\chromedriver.exe', chrome_options=chrome_options)
    return driver

#function to save dataframes to pickle archive
#file name: don't include csv in file name, function will also add a timestamp to the archive
#directory name: don't include final backslash
def save_to_pickle(df, directory_name, file_name):
    lt = time.localtime()
    full_file_name = f"{file_name}_{lt.tm_year}-{lt.tm_mon}-{lt.tm_mday}-{lt.tm_hour}-{lt.tm_min}.pkl"
    path = f"{directory_name}/{full_file_name}"
    df.to_pickle(path)
    print(f"Pickle saved to: {path}")

#remove name suffixes of II III IV or Jr. or Sr. or random * from names to more easily match other databases
#also remove periods from first names, e.g. T.J. becomes TJ (just remove periods from the whole name in the function)
def remove_suffixes_periods(name):
    #remove periods and any asterisks
    name = name.replace(".", "")
    name = name.replace("*", "")
    #remove any suffixes by splitting the name on spaces and then rebuilding the name
    #with only the first two items of the list (being first/last name)
    name_split = name.split(" ")
    name_final = " ".join(name_split[0:2])  #rebuild
    # #old suffix removal process (created some errors for someone with a last name starting with V)
    # for suffix in [" III", " II", " IV", " V", " Jr.", " Sr."]:
    #     name = name.replace(suffix, "")
    return name_final

#function to rename defense position labels so they all match
#a few players have the same name as another player, but currently none at the same position,
#so if all the defense labels are the same we can merge on both player name and position to prevent bad merges
#input of pos will be the value of the column being mapped
def convert_defense_label(pos):
    defense_labels_scraped = ['DST', 'D', 'Def', 'DEF']
    if pos in defense_labels_scraped:
        #convert defense position labels to espn format
        pos = 'D/ST'
    return pos
_____no_output_____
MIT
data/Scraping Fantasy Football Data - FINAL.ipynb
zgscherrer/Project-Fantasy-Football
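Before the scraping functions below use them, here is a quick illustration of how the helper functions defined above behave (the example names are made up for demonstration):

```python
# Name cleaning: periods, asterisks, and suffixes are stripped,
# keeping only the first two tokens (first and last name)
print(remove_suffixes_periods("T.J. Yeldon*"))    # -> "TJ Yeldon"
print(remove_suffixes_periods("Todd Gurley II"))  # -> "Todd Gurley"

# Defense labels from any source are normalised to ESPN's 'D/ST'
print(convert_defense_label("DST"))               # -> "D/ST"
print(convert_defense_label("QB"))                # -> "QB" (unchanged)
```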
Get Weekly Player Actual Fantasy PPR Points

Get from ESPN's Scoring Leaders table: http://games.espn.com/ffl/leaders?&scoringPeriodId=1&seasonId=2018&slotCategoryId=0&leagueID=0
- scoringPeriodId = week of the season
- seasonId = year
- slotCategoryId = position, where 'QB':0, 'RB':2, 'WR':4, 'TE':6, 'K':17, 'D/ST':16
- leagueID = scoring type, PPR Standard is 0
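To make the parameter mapping concrete, here is a small sketch of how one of these Scoring Leaders URLs is assembled (the same pattern the scraper below uses):

```python
# Example: week 1 of 2018, running backs (slotCategoryId=2), PPR standard scoring
week, year, pos_id = 1, 2018, 2
url = (f"http://games.espn.com/ffl/leaders?&scoringPeriodId={week}"
       f"&seasonId={year}&slotCategoryId={pos_id}&leagueID=0")
print(url)
```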
##SCRAPE ESPN SCORING LEADERS TABLE FOR ACTUAL FANTASY PPR POINTS## #input needs to be year as four digit number and week as number #returns dataframe of scraped data def scrape_actual_PPR_player_points_ESPN(week, year): #instantiate the driver driver = instantiate_selenium_driver() #initialize dataframe for all data player_actual_ppr = pd.DataFrame() #url that returns info has different code for each position position_ids = {'QB':0, 'RB':2, 'WR':4, 'TE':6, 'K':17, 'D/ST':16} #cycle through each position webpage to create comprehensive dataframe for pos, pos_id in position_ids.items(): #note leagueID=0 is for PPR standard scoring url_start_pos = f"http://games.espn.com/ffl/leaders?&scoringPeriodId={week}&seasonId={year}&slotCategoryId={pos_id}&leagueID=0" driver.get(url_start_pos) #each page only gets 50 results, so cycle through next button until next button no longer exists while True: #read in the table from ESPN, by using the class, and use the 1st row index for column header player_actual_ppr_table_page = pd.read_html(driver.page_source, attrs={'class': 'playerTableTable'}, #return only the table of this class, which has the player data header=[1])[0] #returns table in a list, so get zeroth table #easier to just assign the player position rather than try to scrape it out player_actual_ppr_table_page['POS'] = pos #replace any placeholder string -- or --/-- with None type to not confuse calculations later player_actual_ppr_table_page.replace({'--': None, '--/--': None}, inplace=True) #if want to extract more detailed data from this, can do added reformatting, etc., but not doing that for our purposes # #rename D/ST columns so don't get misassigned to wrong columns # if pos == 'D/ST': # player_actual_ppr_table_page.rename(columns={'SCK':'D/ST_Sack', # 'FR':'D/ST_FR', 'INT':'D/ST_INT', # 'TD':'D/ST_TD', 'BLK':'D/ST_BLK', 'PA':'D/ST_PA'}, # inplace=True) # #rename/recalculate Kicker columns so don't get misassigned to wrong columns # elif pos == 'K': # player_actual_ppr_table_page.rename(columns={'1-39':'KICK_FG_1-39', '40-49':'KICK_FG_40-49', # '50+':'KICK_FG_50+', 'TOT':'KICK_FG', # 'XP':'KICK_XP'}, # inplace=True) # #if wanted to use all the kicker data could fix this code snipit - erroring out because can't split None types # #just want made FG's for each bucket and overall FGAtt and XPAtt # player_actual_ppr_table_page['KICK_FGAtt'] = player_actual_ppr_table_page['KICK_FG'].map( # lambda x: x.split("/")[-1]).astype('float64') # player_actual_ppr_table_page['KICK_XPAtt'] = player_actual_ppr_table_page['KICK_XP'].map( # lambda x: x.split("/")[-1]).astype('float64') # player_actual_ppr_table_page['KICK_FG_1-39'] = player_actual_ppr_table_page['KICK_FG_1-39'].map( # lambda x: x.split("/")[0]).astype('float64') # player_actual_ppr_table_page['KICK_FG_40-49'] = player_actual_ppr_table_page['KICK_FG_40-49'].map( # lambda x: x.split("/")[0]).astype('float64') # player_actual_ppr_table_page['KICK_FG_50+'] = player_actual_ppr_table_page['KICK_FG_50+'].map( # lambda x: x.split("/")[0]).astype('float64') # player_actual_ppr_table_page['KICK_FG'] = player_actual_ppr_table_page['KICK_FG'].map( # lambda x: x.split("/")[0]).astype('float64') # player_actual_ppr_table_page['KICK_XP'] = player_actual_ppr_table_page['KICK_XP'].map( # lambda x: x.split("/")[0]).astype('float64') # player_actual_ppr_table_page['KICK_FG%'] = player_actual_ppr_table_page['KICK_FG'] / espn_proj_table_page['KICK_FGAtt'] #add page data to overall dataframe player_actual_ppr = pd.concat([player_actual_ppr, 
player_actual_ppr_table_page], ignore_index=True, sort=False) #click to next page to get next 40 results, but check that it exists try: next_button = driver.find_element_by_partial_link_text('NEXT') next_button.click() except EC.NoSuchElementException: break driver.quit() #drop any completely blank columns player_actual_ppr.dropna(axis='columns', how='all', inplace=True) #add columns that give week/season player_actual_ppr['WEEK'] = week player_actual_ppr['SEASON'] = year return player_actual_ppr ###FORMAT/EXTRACT ACTUAL PLAYER PPR DATA### #(you could make this more complex if want to extract some of the subdata) def format_extract_PPR_player_points_ESPN(df_scraped_ppr_espn): #split out player, team, position based on ESPN's formatting def split_player_team_pos_espn(play_team_pos): #incoming string for players: 'Todd Gurley II, LAR RB' or 'Drew Brees, NO\xa0QB' #incoming string for players with special designations: 'Aaron Rodgers, GB\xa0QB Q' #incoming string for D/ST: 'Jaguars D/ST\xa0D/ST' #operations if D/ST if "D/ST" in play_team_pos: player = play_team_pos.split(' D/ST\xa0')[0] team = player.split()[0] #operations for regular players else: player = play_team_pos.split(',')[0] team_pos = play_team_pos.split(',')[1] team = team_pos.split()[0] return player, team df_scraped_ppr_espn[['PLAYER', 'TEAM']] = df_scraped_ppr_espn.apply( lambda x: split_player_team_pos_espn(x['PLAYER, TEAM POS']), axis='columns', result_type='expand') #need to remove name suffixes so can match players easier to other data - see function defined above df_scraped_ppr_espn['PLAYER'] = df_scraped_ppr_espn['PLAYER'].map(remove_suffixes_periods) #convert PTS to float type (sometimes zeros have been stored as strings) df_scraped_ppr_espn['PTS'] = df_scraped_ppr_espn['PTS'].astype('float64') #for this function only extract 'PLAYER', 'POS', 'TEAM', 'PTS' df_scraped_ppr_espn = df_scraped_ppr_espn[['PLAYER', 'POS', 'TEAM', 'PTS', 'WEEK']].sort_values('PTS', ascending=False) return df_scraped_ppr_espn #CALL SCRAPE AND FORMATTING OF ACTUAL PPR WEEK 1- AND SAVE TO PICKLES FOR LATER USE #scrape data and save the messy full dataframe df_wk1_player_actual_ppr_scrape = scrape_actual_PPR_player_points_ESPN(1, 2018) save_to_pickle(df_wk1_player_actual_ppr_scrape, 'pickle_archive', 'Week1_Player_Actual_PPR_messy_scrape') #format data to extract just player pts/playr/pos/team/weel and save the data df_wk1_player_actual_ppr = format_extract_PPR_player_points_ESPN(df_wk1_player_actual_ppr_scrape) #rename PTS column to something more descriptive df_wk1_player_actual_ppr.rename(columns={'PTS':'FPTS_PPR_ACTUAL'}, inplace=True) save_to_pickle(df_wk1_player_actual_ppr, 'pickle_archive', 'Week1_Player_Actual_PPR') print(df_wk1_player_actual_ppr.shape) df_wk1_player_actual_ppr.head()
Pickle saved to: pickle_archive/Week1_Player_Actual_PPR_messy_scrape_2018-9-16-7-31.pkl Pickle saved to: pickle_archive/Week1_Player_Actual_PPR_2018-9-16-7-31.pkl (1007, 5)
MIT
data/Scraping Fantasy Football Data - FINAL.ipynb
zgscherrer/Project-Fantasy-Football
Get ESPN Player Fantasy Points Projections for Week

Get from ESPN's Projections Table: http://games.espn.com/ffl/tools/projections?&scoringPeriodId=1&seasonId=2018&slotCategoryId=0&leagueID=0
- scoringPeriodId = week of the season
- seasonId = year
- slotCategoryId = position, where 'QB':0, 'RB':2, 'WR':4, 'TE':6, 'K':17, 'D/ST':16
- leagueID = scoring type, PPR Standard is 0
##SCRAPE ESPN PROJECTIONS TABLE FOR PROJECTED FANTASY PPR POINTS## #input needs to be year as four digit number and week as number #returns dataframe of scraped data def scrape_weekly_player_projections_ESPN(week, year): #instantiate the driver on the ESPN projections page driver = instantiate_selenium_driver() #initialize dataframe for all data proj_ppr_espn = pd.DataFrame() #url that returns info has different code for each position position_ids = {'QB':0, 'RB':2, 'WR':4, 'TE':6, 'K':17, 'D/ST':16} #cycle through each position webpage to create comprehensive dataframe for pos, pos_id in position_ids.items(): #note leagueID=0 is for PPR standard scoring url_start_pos = f"http://games.espn.com/ffl/tools/projections?&scoringPeriodId={week}&seasonId={year}&slotCategoryId={pos_id}&leagueID=0" driver.get(url_start_pos) #each page only gets 50 results, so cycle through next button until next button no longer exists while True: #read in the table from ESPN, by using the class, and use the 1st row index for column header proj_ppr_espn_table_page = pd.read_html(driver.page_source, attrs={'class': 'playerTableTable'}, #return only the table of this class, which has the player data header=[1])[0] #returns table in a list, so get zeroth table #easier to just assign the player position rather than try to scrape it out proj_ppr_espn_table_page['POS'] = pos #replace any placeholder string -- or --/-- with None type to not confuse calculations later proj_ppr_espn_table_page.replace({'--': None, '--/--': None}, inplace=True) #if want to extract more detailed data from this, can do added reformatting, etc., but not doing that for our purposes # #rename D/ST columns so don't get misassigned to wrong columns # if pos == 'D/ST': # proj_ppr_espn_table_page.rename(columns={'SCK':'D/ST_Sack', # 'FR':'D/ST_FR', 'INT':'D/ST_INT', # 'TD':'D/ST_TD', 'BLK':'D/ST_BLK', 'PA':'D/ST_PA'}, # inplace=True) # #rename/recalculate Kicker columns so don't get misassigned to wrong columns # elif pos == 'K': # proj_ppr_espn_table_page.rename(columns={'1-39':'KICK_FG_1-39', '40-49':'KICK_FG_40-49', # '50+':'KICK_FG_50+', 'TOT':'KICK_FG', # 'XP':'KICK_XP'}, # inplace=True) # #if wanted to use all the kicker data could fix this code snipit - erroring out because can't split None types # #just want made FG's for each bucket and overall FGAtt and XPAtt # proj_ppr_espn_table_page['KICK_FGAtt'] = proj_ppr_espn_table_page['KICK_FG'].map( # lambda x: x.split("/")[-1]).astype('float64') # proj_ppr_espn_table_page['KICK_XPAtt'] = proj_ppr_espn_table_page['KICK_XP'].map( # lambda x: x.split("/")[-1]).astype('float64') # proj_ppr_espn_table_page['KICK_FG_1-39'] = proj_ppr_espn_table_page['KICK_FG_1-39'].map( # lambda x: x.split("/")[0]).astype('float64') # proj_ppr_espn_table_page['KICK_FG_40-49'] = proj_ppr_espn_table_page['KICK_FG_40-49'].map( # lambda x: x.split("/")[0]).astype('float64') # proj_ppr_espn_table_page['KICK_FG_50+'] = proj_ppr_espn_table_page['KICK_FG_50+'].map( # lambda x: x.split("/")[0]).astype('float64') # proj_ppr_espn_table_page['KICK_FG'] = proj_ppr_espn_table_page['KICK_FG'].map( # lambda x: x.split("/")[0]).astype('float64') # proj_ppr_espn_table_page['KICK_XP'] = proj_ppr_espn_table_page['KICK_XP'].map( # lambda x: x.split("/")[0]).astype('float64') # proj_ppr_espn_table_page['KICK_FG%'] = proj_ppr_espn_table_page['KICK_FG'] / espn_proj_table_page['KICK_FGAtt'] #add page data to overall dataframe proj_ppr_espn = pd.concat([proj_ppr_espn, proj_ppr_espn_table_page], ignore_index=True, sort=False) #click to next page 
to get next 40 results, but check that it exists try: next_button = driver.find_element_by_partial_link_text('NEXT') next_button.click() except EC.NoSuchElementException: break driver.quit() #drop any completely blank columns proj_ppr_espn.dropna(axis='columns', how='all', inplace=True) #add columns that give week/season proj_ppr_espn['WEEK'] = week proj_ppr_espn['SEASON'] = year return proj_ppr_espn #formatting/extracting function is same for ESPN Actual/PPR Projections, so don't need new function #WEEK 1 PROJECTIONS #CALL SCRAPE AND FORMATTING OF ESPN WEEKLY PROJECTIONS - AND SAVE TO PICKLES FOR LATER USE #scrape data and save the messy full dataframe df_wk1_ppr_proj_espn_scrape = scrape_weekly_player_projections_ESPN(1, 2018) save_to_pickle(df_wk1_ppr_proj_espn_scrape, 'pickle_archive', 'Week1_PPR_Projections_ESPN_messy_scrape') #format data to extract just player pts/playr/pos/team/week and save the data df_wk1_ppr_proj_espn = format_extract_PPR_player_points_ESPN(df_wk1_ppr_proj_espn_scrape) #rename PTS column to something more descriptive df_wk1_ppr_proj_espn.rename(columns={'PTS':'FPTS_PPR_ESPN'}, inplace=True) save_to_pickle(df_wk1_ppr_proj_espn, 'pickle_archive', 'Week1_PPR_Projections_ESPN') print(df_wk1_ppr_proj_espn.shape) df_wk1_ppr_proj_espn.head() #WEEK 2 PROJECTIONS #CALL SCRAPE AND FORMATTING OF ESPN WEEKLY PROJECTIONS - AND SAVE TO PICKLES FOR LATER USE #scrape data and save the messy full dataframe df_wk2_ppr_proj_espn_scrape = scrape_weekly_player_projections_ESPN(2, 2018) save_to_pickle(df_wk2_ppr_proj_espn_scrape, 'pickle_archive', 'Week2_PPR_Projections_ESPN_messy_scrape') #format data to extract just player pts/playr/pos/team/week and save the data df_wk2_ppr_proj_espn = format_extract_PPR_player_points_ESPN(df_wk2_ppr_proj_espn_scrape) #rename PTS column to something more descriptive df_wk2_ppr_proj_espn.rename(columns={'PTS':'FPTS_PPR_ESPN'}, inplace=True) save_to_pickle(df_wk2_ppr_proj_espn, 'pickle_archive', 'Week2_PPR_Projections_ESPN') print(df_wk2_ppr_proj_espn.shape) df_wk2_ppr_proj_espn.head()
Pickle saved to: pickle_archive/Week2_PPR_Projections_ESPN_messy_scrape_2018-9-16-7-35.pkl Pickle saved to: pickle_archive/Week2_PPR_Projections_ESPN_2018-9-16-7-35.pkl (1007, 5)
MIT
data/Scraping Fantasy Football Data - FINAL.ipynb
zgscherrer/Project-Fantasy-Football
Get CBS Player Fantasy Points Projections for Week

Get from CBS's Projections Table: https://www.cbssports.com/fantasy/football/stats/sortable/points/QB/ppr/projections/2018/2?&print_rows=9999
- QB is where the position goes
- 2018 is where the season goes
- 2 is where the week goes
- print_rows = 9999 gives all results in one table
##SCRAPE CBS PROJECTIONS TABLE FOR PROJECTED FANTASY PPR POINTS## #input needs to be year as four digit number and week as number #returns dataframe of scraped data def scrape_weekly_player_projections_CBS(week, year): ###GET PROJECTIONS FROM CBS### #CBS has separate tables for each position, so need to cycle through them #but url can return all list so don't need to go page by page proj_ppr_cbs = pd.DataFrame() positions = ['QB', 'RB', 'WR', 'TE', 'K', 'DST'] header_row_index = {'QB':2, 'RB':2, 'WR':2, 'TE':2, 'K':1, 'DST':1} for position in positions: #url just needs to change position url = f"https://www.cbssports.com/fantasy/football/stats/sortable/points/{position}/ppr/projections/{year}/{week}?&print_rows=9999" #read in the table from CBS by class, and use the 2nd row index for column header proj_ppr_cbs_pos = pd.read_html(url, attrs={'class': 'data'}, #return only the table of this class, which has the player data header=[header_row_index[position]])[0] #returns table in a list, so get table proj_ppr_cbs_pos['POS'] = position #add the table to the overall df proj_ppr_cbs = pd.concat([proj_ppr_cbs, proj_ppr_cbs_pos], ignore_index=True, sort=False) #some tables include the page selector as the bottom row of the table, #so need to find the index values of those rows and then drop them from the table index_pages_rows = list(proj_ppr_cbs[proj_ppr_cbs['Player'].str.contains('Pages')].index) proj_ppr_cbs.drop(index_pages_rows, axis='index', inplace=True) #add columns that give week/season proj_ppr_cbs['WEEK'] = week proj_ppr_cbs['SEASON'] = year return proj_ppr_cbs ###FORMAT/EXTRACT ACTUAL PLAYER PPR DATA### #(you could make this more complex if want to extract some of the subdata) def format_extract_PPR_player_points_CBS(df_scraped_ppr_cbs): # #could include this extra data if you want to extract it # #calculate completion percentage # df_cbs_proj['COMPLETION_PERCENTAGE'] = df_cbs_proj.CMP/df_cbs_proj.ATT # #rename some of columns so don't lose meaning # df_cbs_proj.rename(columns={'ATT':'PASS_ATT', 'CMP':'PASS_COMP', 'COMPLETION_PERCENTAGE': 'PASS_COMP_PCT', # 'YD': 'PASS_YD', 'TD':'PASS_TD', 'INT':'PASS_INT', 'RATE':'PASS_RATE', # 'ATT.1': 'RUSH_ATT', 'YD.1': 'RUSH_YD', 'AVG': 'RUSH_AVG', 'TD.1':'RUSH_TD', # 'TARGT': 'RECV_TARGT', 'RECPT': 'RECV_RECPT', 'YD.2':'RECV_YD', 'AVG.1':'RECV_AVG', 'TD.2':'RECV_TD', # 'FPTS':'PTS', # 'FG':'KICK_FG', 'FGA': 'KICK_FGAtt', 'XP':'KICK_XP', 'XPAtt':'KICK_XPAtt', # 'Int':'D/ST_INT', 'Sty':'D/ST_Sty', 'Sack':'D/ST_Sack', 'TK':'D/ST_TK', # 'DFR':'D/ST_FR', 'FF':'D/ST_FF', 'DTD':'D/ST_TD', # 'Pa':'D/ST_PtsAll', 'PaNetA':'D/ST_PaYdA', 'RuYdA':'D/ST_RuYdA', 'TyDa':'D/ST_ToYdA'}, # inplace=True) # #calculate passing, rushing, total yards/game # df_cbs_proj['D/ST_PaYd/G'] = df_cbs_proj['D/ST_PaYdA']/16 # df_cbs_proj['D/ST_RuYd/G'] = df_cbs_proj['D/ST_RuYdA']/16 # df_cbs_proj['D/ST_ToYd/G'] = df_cbs_proj['D/ST_ToYdA']/16 #rename FPTS to PTS df_scraped_ppr_cbs.rename(columns={'FPTS':'FPTS_PPR_CBS'}, inplace=True) #split out player, team def split_player_team(play_team): #incoming string for players: 'Todd Gurley, LAR' #incoming string for DST: 'Jaguars, JAC' #operations if D/ST (can tell if there is only two items in a list separated by a space, instead of three) if len(play_team.split()) == 2: player = play_team.split(',')[0] #+ ' D/ST' team = play_team.split(',')[1] #operations for regular players else: player = play_team.split(',')[0] team = play_team.split(',')[1] #remove any possible name suffixes to merge with other data better player = 
remove_suffixes_periods(player) return player, team df_scraped_ppr_cbs[['PLAYER', 'TEAM']] = df_scraped_ppr_cbs.apply( lambda x: split_player_team(x['Player']), axis='columns', result_type='expand') #convert defense position label to espn standard df_scraped_ppr_cbs['POS'] = df_scraped_ppr_cbs['POS'].map(convert_defense_label) #for this function only extract 'PLAYER', 'POS', 'TEAM', 'PTS' df_scraped_ppr_cbs = df_scraped_ppr_cbs[['PLAYER', 'POS', 'TEAM', 'FPTS_PPR_CBS', 'WEEK']].sort_values('FPTS_PPR_CBS', ascending=False) return df_scraped_ppr_cbs #WEEK 1 PROJECTIONS #CALL SCRAPE AND FORMATTING OF CBS WEEKLY PROJECTIONS - AND SAVE TO PICKLES FOR LATER USE #scrape data and save the messy full dataframe df_wk1_ppr_proj_cbs_scrape = scrape_weekly_player_projections_CBS(1, 2018) save_to_pickle(df_wk1_ppr_proj_cbs_scrape, 'pickle_archive', 'Week1_PPR_Projections_CBS_messy_scrape') #format data to extract just player pts/playr/pos/team and save the data df_wk1_ppr_proj_cbs = format_extract_PPR_player_points_CBS(df_wk1_ppr_proj_cbs_scrape) save_to_pickle(df_wk1_ppr_proj_cbs, 'pickle_archive', 'Week1_PPR_Projections_CBS') print(df_wk1_ppr_proj_cbs.shape) df_wk1_ppr_proj_cbs.head() #WEEK 2 PROJECTIONS #CALL SCRAPE AND FORMATTING OF CBS WEEKLY PROJECTIONS - AND SAVE TO PICKLES FOR LATER USE #scrape data and save the messy full dataframe df_wk2_ppr_proj_cbs_scrape = scrape_weekly_player_projections_CBS(2, 2018) save_to_pickle(df_wk2_ppr_proj_cbs_scrape, 'pickle_archive', 'Week2_PPR_Projections_CBS_messy_scrape') #format data to extract just player pts/playr/pos/team/week and save the data df_wk2_ppr_proj_cbs = format_extract_PPR_player_points_CBS(df_wk2_ppr_proj_cbs_scrape) save_to_pickle(df_wk2_ppr_proj_cbs, 'pickle_archive', 'Week2_PPR_Projections_CBS') print(df_wk2_ppr_proj_cbs.shape) df_wk2_ppr_proj_cbs.head()
Pickle saved to: pickle_archive/Week2_PPR_Projections_CBS_messy_scrape_2018-9-16-7-35.pkl Pickle saved to: pickle_archive/Week2_PPR_Projections_CBS_2018-9-16-7-35.pkl (815, 5)
MIT
data/Scraping Fantasy Football Data - FINAL.ipynb
zgscherrer/Project-Fantasy-Football
Get Fantasy Sharks Player Points Projection for Week

They have a JSON option that gets updated weekly (they don't appear to store previous week projections). The JSON defaults to PPR (which is lucky for us) and has an all-players option: https://www.fantasysharks.com/apps/Projections/WeeklyProjections.php?pos=ALL&format=json

It returns a list of players, each saved as a dictionary:

[ { "Rank": 1, "ID": "4925", "Name": "Brees, Drew", "Pos": "QB", "Team": "NOS", "Opp": "CLE", "Comp": "27.49", "PassYards": "337", "PassTD": 2.15, "Int": "0.61", "Att": "1.5", "RushYards": "0", "RushTD": 0.12, "Rec": "0", "RecYards": "0", "RecTD": 0, "FantasyPoints": 26 },

But the JSON is only for the current week; you can't get other weeks' data. So instead use this URL example: https://www.fantasysharks.com/apps/bert/forecasts/projections.php?Position=99&scoring=2&Segment=628&uid=4
- Segment is the week/season id - for 2018, week 1 starts at 628 and it adds 1 for each additional week
- Position=99 is all positions
- scoring=2 is PPR default
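The only non-obvious parameter is `Segment`; as noted above it starts at 628 for week 1 of 2018 and increments by one each week, so it can be derived directly from the week number. A small sketch mirroring the scraper below (the helper name here is made up for illustration):

```python
def sharks_segment_id(week, week1_segment=628):
    """Return the Fantasy Sharks Segment id for a given 2018 week."""
    return week1_segment + (week - 1)

print(sharks_segment_id(1))  # 628
print(sharks_segment_id(2))  # 629
```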
##SCRAPE FANTASY SHARKS PROJECTIONS TABLE FOR PROJECTED FANTASY PPR POINTS## #input needs to be week as number (year isn't used, but keep same format as others) #returns dataframe of scraped data def scrape_weekly_player_projections_Sharks(week, year): #fantasy sharks url - segment for 2018 week 1 starts at 628 and adds 1 for each additional week segment = 627 + week #Position=99 is all positions, and scoring=2 is PPR default sharks_weekly_url = f"https://www.fantasysharks.com/apps/bert/forecasts/projections.php?Position=99&scoring=2&Segment={segment}&uid=4" #since don't need to iterate over pages, can just use reqeuests instead of selenium scraper #however with requests, need to include headers because this website was rejecting the request since it knew python was running it - need to spoof a browser header #other possible headers: 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36' headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1)'} #response returns html response = requests.get(sharks_weekly_url, headers=headers) #extract the table data from the html response (call response.text) and get table with player data proj_ppr_sharks = pd.read_html(response.text, #response.text gives the html of the page request attrs={'id': 'toolData'}, #return only the table of this id, which has the player data header = 0 #header is the 0th row )[0] #pd.read_html returns a list of tables even though only one in it, select the table #the webpage uses different tiers, which add extra rows to the table - get rid of those #also sometimes repeats the column headers for readability as scrolling - get rid of those #so need to find the index values of those bad rows and then drop them from the table index_pages_rows = list(proj_ppr_sharks[proj_ppr_sharks['#'].str.contains('Tier|#')].index) proj_ppr_sharks.drop(index_pages_rows, axis='index', inplace=True) #add columns that give week/season proj_ppr_sharks['WEEK'] = week proj_ppr_sharks['SEASON'] = year return proj_ppr_sharks ###FORMAT/EXTRACT ACTUAL PLAYER PPR DATA### #(you could make this more complex if want to extract some of the subdata like opposing team (OPP) def format_extract_PPR_player_points_Sharks(df_scraped_ppr_sharks): #rename PTS to FPTS_PPR_SHARKS and a few others df_scraped_ppr_sharks.rename(columns={'Pts':'FPTS_PPR_SHARKS', 'Player': 'PLAYER', 'Tm': 'TEAM', 'Position': 'POS'}, inplace=True) #they have player name as Last Name, First Name - reorder to First Last def modify_player_name(player, pos): #incoming string for players: 'Johnson, David' Change to: 'David Johnson' #incoming string for defense: 'Lions, Detroit' Change to: 'Lions' if pos == 'D': player_formatted = player.split(', ')[0] else: player_formatted = ' '.join(player.split(', ')[::-1]) player_formatted = remove_suffixes_periods(player_formatted) #name overrides - some spelling differences from ESPN/CBS if player_formatted == 'Steven Hauschka': player_formatted = 'Stephen Hauschka' elif player_formatted == 'Josh Bellamy': player_formatted = 'Joshua Bellamy' elif player_formatted == 'Joshua Perkins': player_formatted = 'Josh Perkins' return player_formatted df_scraped_ppr_sharks['PLAYER'] = df_scraped_ppr_sharks.apply( lambda row: modify_player_name(row['PLAYER'], row['POS']), axis='columns') #convert FPTS to float type (currently stored as string) df_scraped_ppr_sharks['FPTS_PPR_SHARKS'] = df_scraped_ppr_sharks['FPTS_PPR_SHARKS'].astype('float64') #convert defense position label 
to espn standard df_scraped_ppr_sharks['POS'] = df_scraped_ppr_sharks['POS'].map(convert_defense_label) #for this function only extract 'PLAYER', 'POS', 'TEAM', 'FPTS' df_scraped_ppr_sharks = df_scraped_ppr_sharks[['PLAYER', 'POS', 'TEAM', 'FPTS_PPR_SHARKS', 'WEEK']].sort_values('FPTS_PPR_SHARKS', ascending=False) return df_scraped_ppr_sharks #WEEK 1 PROJECTIONS #CALL SCRAPE AND FORMATTING OF FANTASY SHARKS WEEKLY PROJECTIONS - AND SAVE TO PICKLES FOR LATER USE #scrape data and save the messy full dataframe df_wk1_ppr_proj_sharks_scrape = scrape_weekly_player_projections_Sharks(1, 2018) save_to_pickle(df_wk1_ppr_proj_sharks_scrape, 'pickle_archive', 'Week1_PPR_Projections_Sharks_messy_scrape') #format data to extract just player pts/playr/pos/team/week and save the data df_wk1_ppr_proj_sharks = format_extract_PPR_player_points_Sharks(df_wk1_ppr_proj_sharks_scrape) save_to_pickle(df_wk1_ppr_proj_sharks, 'pickle_archive', 'Week1_PPR_Projections_Sharks') print(df_wk1_ppr_proj_sharks.shape) df_wk1_ppr_proj_sharks.head() #WEEK 2 PROJECTIONS #CALL SCRAPE AND FORMATTING OF FANTASY SHARKS WEEKLY PROJECTIONS - AND SAVE TO PICKLES FOR LATER USE #scrape data and save the messy full dataframe df_wk2_ppr_proj_sharks_scrape = scrape_weekly_player_projections_Sharks(2, 2018) save_to_pickle(df_wk2_ppr_proj_sharks_scrape, 'pickle_archive', 'Week2_PPR_Projections_Sharks_messy_scrape') #format data to extract just player pts/playr/pos/team and save the data df_wk2_ppr_proj_sharks = format_extract_PPR_player_points_Sharks(df_wk2_ppr_proj_sharks_scrape) save_to_pickle(df_wk2_ppr_proj_sharks, 'pickle_archive', 'Week2_PPR_Projections_Sharks') print(df_wk2_ppr_proj_sharks.shape) df_wk2_ppr_proj_sharks.head()
Pickle saved to: pickle_archive/Week2_PPR_Projections_Sharks_messy_scrape_2018-9-16-7-35.pkl Pickle saved to: pickle_archive/Week2_PPR_Projections_Sharks_2018-9-16-7-35.pkl (992, 5)
MIT
data/Scraping Fantasy Football Data - FINAL.ipynb
zgscherrer/Project-Fantasy-Football
Get Scout Fantasy Sports Player Fantasy Points Projections for Week

Get from Scout Fantasy Sports Projections Table: https://fftoolbox.scoutfantasysports.com/football/rankings/?pos=rb&week=2&noppr=false
- pos is the position, with options of 'QB', 'RB', 'WR', 'TE', 'K', 'DEF'
- week is the week of the year
- noppr is set to false when you want the PPR projections
- it also returns one long table (no pagination required)
##SCRAPE Scout PROJECTIONS TABLE FOR PROJECTED FANTASY PPR POINTS## #input needs to be year as four digit number and week as number #returns dataframe of scraped data def scrape_weekly_player_projections_SCOUT(week, year): ###GET PROJECTIONS FROM SCOUT### #SCOUT has separate tables for each position, so need to cycle through them #but url can return whole list so don't need to go page by page proj_ppr_scout = pd.DataFrame() positions = ['QB', 'RB', 'WR', 'TE', 'K', 'DEF'] for position in positions: #url just needs to change position and week url = f"https://fftoolbox.scoutfantasysports.com/football/rankings/?pos={position}&week={week}&noppr=false" #response returns html response = requests.get(url, verify=False) #need verify false otherwise requests won't work on this site #extract the table data from the html response (call response.text) and get table with player data proj_ppr_scout_pos = pd.read_html(response.text, #response.text gives the html of the page request attrs={'class': 'responsive-table'}, #return only the table of this class, which has the player data header=0 #header is the 0th row )[0] #returns list of tables so get the table #add the table to the overall df proj_ppr_scout = pd.concat([proj_ppr_scout, proj_ppr_scout_pos], ignore_index=True, sort=False) #ads are included in table rows (eg 'googletag.defineSlot("/7103/SMG_FFToolBox/728x...') #so need to find the index values of those rows and then drop them from the table index_ads_rows = list(proj_ppr_scout[proj_ppr_scout['#'].str.contains('google')].index) proj_ppr_scout.drop(index_ads_rows, axis='index', inplace=True) #add columns that give week/season proj_ppr_scout['WEEK'] = week proj_ppr_scout['SEASON'] = year return proj_ppr_scout ###FORMAT/EXTRACT ACTUAL PLAYER PPR DATA### #(you could make this more complex if want to extract some of the subdata) def format_extract_PPR_player_points_SCOUT(df_scraped_ppr_scout): #rename columns df_scraped_ppr_scout.rename(columns={'Projected Pts.':'FPTS_PPR_SCOUT', 'Player':'PLAYER', 'Pos':'POS', 'Team':'TEAM'}, inplace=True) #some players (very few - mostly kickers) seem to have name as last, first instead of written out #also rename defenses from City/State to Mascot #create dictionary for geographical location to mascot (use this for some Defense renaming) based on this website's naming NFL_team_mascot = {'Arizona': 'Cardinals', 'Atlanta': 'Falcons', 'Baltimore': 'Ravens', 'Buffalo': 'Bills', 'Carolina': 'Panthers', 'Chicago': 'Bears', 'Cincinnati': 'Bengals', 'Cleveland': 'Browns', 'Dallas': 'Cowboys', 'Denver': 'Broncos', 'Detroit': 'Lions', 'Green Bay': 'Packers', 'Houston': 'Texans', 'Indianapolis': 'Colts', 'Jacksonville': 'Jaguars', 'Kansas City': 'Chiefs', #'Los Angeles': 'Rams', 'Miami': 'Dolphins', 'Minnesota': 'Vikings', 'New England': 'Patriots', 'New Orleans': 'Saints', 'New York Giants': 'Giants', 'New York Jets': 'Jets', 'Oakland': 'Raiders', 'Philadelphia': 'Eagles', 'Pittsburgh': 'Steelers', #'Los Angeles': 'Chargers', 'Seattle': 'Seahawks', 'San Francisco': '49ers', 'Tampa Bay': 'Buccaneers', 'Tennessee': 'Titans', 'Washington': 'Redskins'} #get Los Angelse defense data for assigning D's LosAngeles_defense_ranks = [int(x) for x in df_scraped_ppr_scout['#'][df_scraped_ppr_scout.PLAYER == 'Los Angeles'].tolist()] print(LosAngeles_defense_ranks) #in this function the defense rename here is SUPER GLITCHY since there are two Defenses' names 'Los Angeles', for now this code assumes the higher pts Defense is LA Rams def modify_player_name_scout(player, pos, rank): 
#defense need to change from city to mascot if pos == 'Def': #if Los Angeles is geographic location, then use minimum rank to Rams (assuming they are better defense) if player == 'Los Angeles' and int(rank) == min(LosAngeles_defense_ranks): player_formatted = 'Rams' elif player == 'Los Angeles' and int(rank) == max(LosAngeles_defense_ranks): player_formatted = 'Chargers' else: player_formatted = NFL_team_mascot.get(player) else: #if incoming string for players: 'Johnson, David' Change to: 'David Johnson' (this is rare - mostly for kickers on this site for som reason) if ',' in player: player = ' '.join(player.split(', ')[::-1]) #remove suffixes/periods for all players player_formatted = remove_suffixes_periods(player) #hard override of some player names that don't match to ESPN naming if player_formatted == 'Juju Smith-Schuster': player_formatted = 'JuJu Smith-Schuster' elif player_formatted == 'Steven Hauschka': player_formatted = 'Stephen Hauschka' return player_formatted df_scraped_ppr_scout['PLAYER'] = df_scraped_ppr_scout.apply( lambda row: modify_player_name_scout(row['PLAYER'], row['POS'], row['#']), axis='columns') #convert defense position label to espn standard df_scraped_ppr_scout['POS'] = df_scraped_ppr_scout['POS'].map(convert_defense_label) #for this function only extract 'PLAYER', 'POS', 'TEAM', 'PTS', 'WEEK' (note Team is blank because webpage uses images for teams) df_scraped_ppr_scout = df_scraped_ppr_scout[['PLAYER', 'POS', 'TEAM', 'FPTS_PPR_SCOUT', 'WEEK']].sort_values('FPTS_PPR_SCOUT', ascending=False) return df_scraped_ppr_scout #WEEK 1 PROJECTIONS #CALL SCRAPE AND FORMATTING OF SCOUT WEEKLY PROJECTIONS - AND SAVE TO PICKLES FOR LATER USE #scrape data and save the messy full dataframe df_wk1_ppr_proj_scout_scrape = scrape_weekly_player_projections_SCOUT(1, 2018) save_to_pickle(df_wk1_ppr_proj_scout_scrape, 'pickle_archive', 'Week1_PPR_Projections_SCOUT_messy_scrape') #format data to extract just player pts/playr/pos/team and save the data df_wk1_ppr_proj_scout = format_extract_PPR_player_points_SCOUT(df_wk1_ppr_proj_scout_scrape) save_to_pickle(df_wk1_ppr_proj_scout, 'pickle_archive', 'Week1_PPR_Projections_SCOUT') print(df_wk1_ppr_proj_scout.shape) df_wk1_ppr_proj_scout.head() #WEEK 2 PROJECTIONS #CALL SCRAPE AND FORMATTING OF SCOUT WEEKLY PROJECTIONS - AND SAVE TO PICKLES FOR LATER USE #scrape data and save the messy full dataframe df_wk2_ppr_proj_scout_scrape = scrape_weekly_player_projections_SCOUT(2, 2018) save_to_pickle(df_wk2_ppr_proj_scout_scrape, 'pickle_archive', 'Week2_PPR_Projections_SCOUT_messy_scrape') #format data to extract just player pts/playr/pos/team and save the data df_wk2_ppr_proj_scout = format_extract_PPR_player_points_SCOUT(df_wk2_ppr_proj_scout_scrape) save_to_pickle(df_wk2_ppr_proj_scout, 'pickle_archive', 'Week2_PPR_Projections_SCOUT') print(df_wk2_ppr_proj_scout.shape) df_wk2_ppr_proj_scout.head()
C:\Users\micha\Anaconda3\envs\PythonData\lib\site-packages\urllib3\connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings InsecureRequestWarning) C:\Users\micha\Anaconda3\envs\PythonData\lib\site-packages\urllib3\connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings InsecureRequestWarning) C:\Users\micha\Anaconda3\envs\PythonData\lib\site-packages\urllib3\connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings InsecureRequestWarning) C:\Users\micha\Anaconda3\envs\PythonData\lib\site-packages\urllib3\connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings InsecureRequestWarning) C:\Users\micha\Anaconda3\envs\PythonData\lib\site-packages\urllib3\connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings InsecureRequestWarning) C:\Users\micha\Anaconda3\envs\PythonData\lib\site-packages\urllib3\connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings InsecureRequestWarning)
MIT
data/Scraping Fantasy Football Data - FINAL.ipynb
zgscherrer/Project-Fantasy-Football
Get FanDuel Player Salaries for Week

Just import the Thurs-Mon game salaries (they differ for each game type, and note they don't include kickers in the Thurs-Mon games). Go to a FanDuel Thurs-Mon competition and download a CSV of players, which we then upload and format in Python.
###FORMAT/EXTRACT FANDUEL SALARY INFO###
def format_extract_FanDuel(df_fanduel_csv, week, year):
    #rename columns
    df_fanduel_csv.rename(columns={'Position':'POS', 'Nickname':'PLAYER',
                                   'Team':'TEAM', 'Salary':'SALARY_FANDUEL'},
                          inplace=True)
    #add week/season columns
    df_fanduel_csv['WEEK'] = week
    df_fanduel_csv['SEASON'] = year

    #fix names
    def modify_player_name_fanduel(player, pos):
        #defense comes in as 'Dallas Cowboys' or 'Tampa Bay Buccaneers' - need to split and take
        #the last word, which is the team mascot, just 'Cowboys' or 'Buccaneers'
        if pos == 'D':
            player_formatted = player.split()[-1]
        else:
            #need to remove suffixes, etc.
            player_formatted = remove_suffixes_periods(player)
            #hard override of some player names that don't match to ESPN naming
            if player_formatted == 'Josh Bellamy':
                player_formatted = 'Joshua Bellamy'
        return player_formatted

    df_fanduel_csv['PLAYER'] = df_fanduel_csv.apply(
        lambda row: modify_player_name_fanduel(row['PLAYER'], row['POS']),
        axis='columns')

    #convert defense position label to espn standard
    df_fanduel_csv['POS'] = df_fanduel_csv['POS'].map(convert_defense_label)

    #for this function only extract 'PLAYER', 'POS', 'TEAM', 'SALARY', 'WEEK' (note Team is blank because webpage uses images for teams)
    df_fanduel_csv = df_fanduel_csv[['PLAYER', 'POS', 'TEAM', 'SALARY_FANDUEL',
                                     'WEEK']].sort_values('SALARY_FANDUEL', ascending=False)
    return df_fanduel_csv


#WEEK 2 FANDUEL SALARIES
#import csv from FanDuel
df_wk2_fanduel_csv = pd.read_csv('fanduel_salaries/Week2-FanDuel-NFL-2018-09-13-28179-players-list.csv')

#format data to extract just player salary/player/pos/team and save the data
df_wk2_fanduel = format_extract_FanDuel(df_wk2_fanduel_csv, 1, 2018)
save_to_pickle(df_wk2_fanduel, 'pickle_archive', 'Week2_Salary_FanDuel')

print(df_wk2_fanduel.shape)
df_wk2_fanduel.head()
Pickle saved to: pickle_archive/Week2_Salary_FanDuel_2018-9-16-7-35.pkl (669, 5)
MIT
data/Scraping Fantasy Football Data - FINAL.ipynb
zgscherrer/Project-Fantasy-Football
!!!FFtoday apparently doesn't do weekly projections for Defenses, so don't use it for now (can check back in the future and see if updated)!!!

Get FFtoday Player Fantasy Points Projections for Week

Get from FFtoday's Projections Table: http://www.fftoday.com/rankings/playerwkproj.php?Season=2018&GameWeek=2&PosID=10&LeagueID=107644
- Season = year
- GameWeek = week
- PosID = the id for each position: 'QB':10, 'RB':20, 'WR':30, 'TE':40, 'K':80, 'DEF':99
- LeagueID = the scoring type, 107644 gives FFToday PPR scoring
# ##SCRAPE FFtoday PROJECTIONS TABLE FOR PROJECTED FANTASY PPR POINTS## # #input needs to be year as four digit number and week as number # #returns dataframe of scraped data # def scrape_weekly_player_projections_FFtoday(week, year): # #instantiate selenium driver # driver = instantiate_selenium_driver() # #initialize dataframe for all data # proj_ppr_fft = pd.DataFrame() # #url that returns info has different code for each position and also takes year variable # position_ids = {'QB':10, 'RB':20, 'WR':30, 'TE':40, 'K':80, 'DEF':99} # #cycle through each position webpage to create comprehensive dataframe # for pos, pos_id in position_ids.items(): # url_start_pos = f"http://www.fftoday.com/rankings/playerwkproj.php?Season={year}&GameWeek={week}&PosID={pos_id}&LeagueID=107644" # driver.get(url_start_pos) # #each page only gets 50 results, so cycle through next button until next button no longer exists # while True: # #read in table - no classes for tables so just need to find the right table in the list of tables from the page - 5th index # proj_ppr_fft_table_page = pd.read_html(driver.page_source, header=[1])[5] # proj_ppr_fft_table_page['POS'] = pos # #need to rename columns for different positions before concat because of differing column conventions # if pos == 'QB': # proj_ppr_fft_table_page.rename(columns={'Player Sort First: Last:':'PLAYER', # 'Comp':'PASS_COMP', 'Att': 'PASS_ATT', 'Yard':'PASS_YD', # 'TD':'PASS_TD', 'INT':'PASS_INT', # 'Att.1':'RUSH_ATT', 'Yard.1':'RUSH_YD', 'TD.1':'RUSH_TD'}, # inplace=True) # elif pos == 'RB': # proj_ppr_fft_table_page.rename(columns={'Player Sort First: Last:':'PLAYER', # 'Att': 'RUSH_ATT', 'Yard':'RUSH_YD', 'TD':'RUSH_TD', # 'Rec':'RECV_RECPT', 'Yard.1':'RECV_YD', 'TD.1':'RECV_TD'}, # inplace=True) # elif pos == 'WR': # proj_ppr_fft_table_page.rename(columns={'Player Sort First: Last:':'PLAYER', # 'Rec':'RECV_RECPT', 'Yard':'RECV_YD', 'TD':'RECV_TD', # 'Att':'RUSH_ATT', 'Yard.1':'RUSH_YD', 'TD.1':'RUSH_TD'}, # inplace=True) # elif pos == 'TE': # proj_ppr_fft_table_page.rename(columns={'Player Sort First: Last:':'PLAYER', # 'Rec':'RECV_RECPT', 'Yard':'RECV_YD', 'TD':'RECV_TD'}, # inplace=True) # elif pos == 'K': # proj_ppr_fft_table_page.rename(columns={'Player Sort First: Last:':'PLAYER', # 'FGM':'KICK_FG', 'FGA':'KICK_FGAtt', 'FG%':'KICK_FG%', # 'EPM':'KICK_XP', 'EPA':'KICK_XPAtt'}, # inplace=True) # elif pos == 'DEF': # proj_ppr_fft_table_page['PLAYER'] = proj_ppr_fft_table_page['Team'] #+ ' D/ST' #add player name with team name plus D/ST tag # proj_ppr_fft_table_page.rename(columns={'Sack':'D/ST_Sack', 'FR':'D/ST_FR', 'DefTD':'D/ST_TD', 'INT':'D/ST_INT', # 'PA':'D/ST_PtsAll', 'PaYd/G':'D/ST_PaYd/G', 'RuYd/G':'D/ST_RuYd/G', # 'Safety':'D/ST_Sty', 'KickTD':'D/ST_RET_TD'}, # inplace=True) # #add the position/page data to overall df # proj_ppr_fft = pd.concat([proj_ppr_fft, proj_ppr_fft_table_page], # ignore_index=True, # sort=False) # #click to next page to get next 50 results, but check that next button exists # try: # next_button = driver.find_element_by_link_text("Next Page") # next_button.click() # except EC.NoSuchElementException: # break # driver.quit() # #add columns that give week/season # proj_ppr_fft['WEEK'] = week # proj_ppr_fft['SEASON'] = year # return proj_ppr_fft # ###FORMAT/EXTRACT ACTUAL PLAYER PPR DATA### # #(you could make this more complex if want to extract some of the subdata) # def format_extract_PPR_player_points_FFtoday(df_scraped_ppr_fft): # # #optional data formatting for additional info # # #calculate completion 
percentage # # df_scraped_ppr_fft['PASS_COMP_PCT'] = df_scraped_ppr_fft.PASS_COMP/df_scraped_ppr_fft.PASS_ATT # # #calculate total PaYd and RuYd for season # # df_scraped_ppr_fft['D/ST_PaYdA'] = df_scraped_ppr_fft['D/ST_PaYd/G'] * 16 # # df_scraped_ppr_fft['D/ST_RuYdA'] = df_scraped_ppr_fft['D/ST_RuYd/G'] * 16 # # df_scraped_ppr_fft['D/ST_ToYd/G'] = df_scraped_ppr_fft['D/ST_PaYd/G'] + df_scraped_ppr_fft['D/ST_RuYd/G'] # # df_scraped_ppr_fft['D/ST_ToYdA'] = df_scraped_ppr_fft['D/ST_ToYd/G'] * 16 # #rename some of outstanding columns to match other dfs # df_scraped_ppr_fft.rename(columns={'Team':'TEAM', 'FPts':'FPTS_PPR_FFTODAY'}, # inplace=True) # #remove any possible name suffixes to merge with other data better # df_scraped_ppr_fft['PLAYER'] = df_scraped_ppr_fft['PLAYER'].map(remove_suffixes_periods) # #for this function only extract 'PLAYER', 'POS', 'TEAM', 'PTS' # df_scraped_ppr_fft = df_scraped_ppr_fft[['PLAYER', 'POS', 'TEAM', 'FPTS_PPR_FFTODAY', 'WEEK']].sort_values('FPTS_PPR_FFTODAY', ascending=False) # return df_scraped_ppr_fft
_____no_output_____
MIT
data/Scraping Fantasy Football Data - FINAL.ipynb
zgscherrer/Project-Fantasy-Football
Initial Database Stuff
# actual_ppr_df = pd.read_pickle('pickle_archive/Week1_Player_Actual_PPR_2018-9-13-6-41.pkl')
# espn_final_df = pd.read_pickle('pickle_archive/Week1_PPR_Projections_ESPN_2018-9-13-6-46.pkl')
# cbs_final_df = pd.read_pickle('pickle_archive/Week1_PPR_Projections_CBS_2018-9-13-17-45.pkl')
# cbs_final_df.head()

# from sqlalchemy import create_engine
# disk_engine = create_engine('sqlite:///my_lite_store.db')

# actual_ppr_df.to_sql('actual_ppr', disk_engine, if_exists='append')
# espn_final_df.to_sql('espn_final_df', disk_engine, if_exists='append')
# cbs_final_df.to_sql('cbs_final_df', disk_engine, if_exists='append')
_____no_output_____
MIT
data/Scraping Fantasy Football Data - FINAL.ipynb
zgscherrer/Project-Fantasy-Football
Tutorial 13: Skyrmion in a disk

> Interactive online tutorial:
> [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/ubermag/oommfc/master?filepath=docs%2Fipynb%2Findex.ipynb)

In this tutorial, we compute and relax a skyrmion in an interfacial-DMI material in a confined disk-like geometry.
import discretisedfield as df
import micromagneticmodel as mm
import oommfc as oc
_____no_output_____
BSD-3-Clause
docs/ipynb/13-tutorial-skyrmion.ipynb
ubermag/mumaxc
We define the mesh on a cuboid region through corner points `p1` and `p2`, and the discretisation cell size `cell`.
region = df.Region(p1=(-50e-9, -50e-9, 0), p2=(50e-9, 50e-9, 10e-9))
mesh = df.Mesh(region=region, cell=(5e-9, 5e-9, 5e-9))
_____no_output_____
BSD-3-Clause
docs/ipynb/13-tutorial-skyrmion.ipynb
ubermag/mumaxc
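As a quick sanity check (not part of the original tutorial), the number of discretisation cells follows directly from the region and cell sizes defined above:

```python
# 100 nm x 100 nm x 10 nm region divided into 5 nm x 5 nm x 5 nm cells
p1, p2, cell = (-50e-9, -50e-9, 0), (50e-9, 50e-9, 10e-9), (5e-9, 5e-9, 5e-9)
n = tuple(round((b - a) / c) for a, b, c in zip(p1, p2, cell))
print(n)  # (20, 20, 2) cells in the x, y and z directions
```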
The mesh we defined is:
%matplotlib inline
mesh.k3d()
_____no_output_____
BSD-3-Clause
docs/ipynb/13-tutorial-skyrmion.ipynb
ubermag/mumaxc
Now, we can define the system object by first setting up the Hamiltonian:
system = mm.System(name="skyrmion")
system.energy = (
    mm.Exchange(A=1.6e-11)
    + mm.DMI(D=4e-3, crystalclass="Cnv")
    + mm.UniaxialAnisotropy(K=0.51e6, u=(0, 0, 1))
    + mm.Demag()
    + mm.Zeeman(H=(0, 0, 2e5))
)
_____no_output_____
BSD-3-Clause
docs/ipynb/13-tutorial-skyrmion.ipynb
ubermag/mumaxc
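For orientation, the terms passed to `system.energy` correspond to the usual micromagnetic energy densities. Written here only as a sketch (sign and DMI conventions vary between codes and should be checked against the micromagneticmodel documentation; the non-local demagnetisation term is omitted):

$$ w = A (\nabla \mathbf{m})^2 + D \left( m_z \nabla \cdot \mathbf{m} - (\mathbf{m} \cdot \nabla) m_z \right) - K (\mathbf{m} \cdot \mathbf{u})^2 - \mu_0 M_\text{s}\, \mathbf{m} \cdot \mathbf{H}, $$

with $A = 1.6 \times 10^{-11}\,\text{J/m}$, $D = 4 \times 10^{-3}\,\text{J/m}^2$, $K = 0.51 \times 10^6\,\text{J/m}^3$, $\mathbf{u} = (0, 0, 1)$, and $\mathbf{H} = (0, 0, 2 \times 10^5)\,\text{A/m}$.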
The disk geometry is set up by defining the saturation magnetisation (the norm of the magnetisation field). For that, we define a function:
Ms = 1.1e6

def Ms_fun(pos):
    """Function to set magnitude of magnetisation: zero outside cylindric shape,
    Ms inside cylinder.

    Cylinder radius is 50nm.
    """
    x, y, z = pos
    if (x**2 + y**2) ** 0.5 < 50e-9:
        return Ms
    else:
        return 0
_____no_output_____
BSD-3-Clause
docs/ipynb/13-tutorial-skyrmion.ipynb
ubermag/mumaxc
And the second function we need is the one that defines the initial magnetisation, which is going to relax into a skyrmion.
def m_init(pos):
    """Function to set initial magnetisation direction: -z inside cylinder (r=10nm),
    +z outside cylinder.

    y-component to break symmetry.
    """
    x, y, z = pos
    if (x**2 + y**2) ** 0.5 < 10e-9:
        return (0, 0, -1)
    else:
        return (0, 0, 1)


# create system with above geometry and initial magnetisation
system.m = df.Field(mesh, dim=3, value=m_init, norm=Ms_fun)
_____no_output_____
BSD-3-Clause
docs/ipynb/13-tutorial-skyrmion.ipynb
ubermag/mumaxc
The geometry is now:
system.m.norm.k3d_nonzero()
_____no_output_____
BSD-3-Clause
docs/ipynb/13-tutorial-skyrmion.ipynb
ubermag/mumaxc
and the initial magnetisation is:
system.m.plane("z").mpl()
_____no_output_____
BSD-3-Clause
docs/ipynb/13-tutorial-skyrmion.ipynb
ubermag/mumaxc
Finally we can minimise the energy and plot the magnetisation.
# minimize the energy
md = oc.MinDriver()
md.drive(system)

# Plot relaxed configuration: vectors in z-plane
system.m.plane("z").mpl()

# Plot z-component only:
system.m.z.plane("z").mpl()

# 3d-plot of z-component
system.m.z.k3d_voxels(filter_field=system.m.norm)
_____no_output_____
BSD-3-Clause
docs/ipynb/13-tutorial-skyrmion.ipynb
ubermag/mumaxc
Finally we can sample and plot the magnetisation along the line:
system.m.z.line(p1=(-49e-9, 0, 0), p2=(49e-9, 0, 0), n=20).mpl()
_____no_output_____
BSD-3-Clause
docs/ipynb/13-tutorial-skyrmion.ipynb
ubermag/mumaxc
**Author: Avani Gupta, Roll: 2019121004**

Exercise 2

In Exercise 1, we computed the LDA for a multi-class problem, the IRIS dataset. In this exercise, we will now compare the LDA and PCA for the IRIS dataset.

To revisit, the iris dataset contains measurements for 150 iris flowers from three different species.

The three classes in the Iris dataset:
1. Iris-setosa (n=50)
2. Iris-versicolor (n=50)
3. Iris-virginica (n=50)

The four features of the Iris dataset:
1. sepal length in cm
2. sepal width in cm
3. petal length in cm
4. petal width in cm
from sklearn.datasets import make_classification
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns; sns.set();
import pandas as pd
from sklearn.model_selection import train_test_split
from numpy import pi
_____no_output_____
MIT
HW9/2019121004_LDAExcercise2.ipynb
avani17101/SMAI-Assignments
Importing the dataset
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'Class']
dataset = pd.read_csv(url, names=names)
dataset.tail()
_____no_output_____
MIT
HW9/2019121004_LDAExcercise2.ipynb
avani17101/SMAI-Assignments
Data preprocessing

Once the dataset is loaded into a pandas data frame object, the first step is to divide the dataset into features and corresponding labels, and then divide the resultant dataset into training and test sets. The following code divides the data into labels and a feature set:
X = dataset.iloc[:, 0:4].values
y = dataset.iloc[:, 4].values
_____no_output_____
MIT
HW9/2019121004_LDAExcercise2.ipynb
avani17101/SMAI-Assignments
The above script assigns the first four columns of the dataset, i.e. the feature set, to the X variable, while the values in the fifth column (labels) are assigned to the y variable.

The following code divides the data into training and test sets:
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
_____no_output_____
MIT
HW9/2019121004_LDAExcercise2.ipynb
avani17101/SMAI-Assignments
Feature Scaling

We will now perform feature scaling as part of data preprocessing too. For this task, we will be using scikit-learn's `StandardScaler`.
from sklearn.preprocessing import StandardScaler

sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
_____no_output_____
MIT
HW9/2019121004_LDAExcercise2.ipynb
avani17101/SMAI-Assignments
Write your code below

Write your code to compute the PCA and LDA on the IRIS dataset below.
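For reference, the LDA solution below is built around the within-class and between-class scatter matrices

$$ S_W = \sum_{c=1}^{3} \sum_{\mathbf{x} \in D_c} (\mathbf{x} - \mathbf{m}_c)(\mathbf{x} - \mathbf{m}_c)^{T}, \qquad S_B = \sum_{c=1}^{3} n_c\, (\mathbf{m}_c - \mathbf{m})(\mathbf{m}_c - \mathbf{m})^{T}, $$

where $\mathbf{m}_c$ and $n_c$ are the mean and size of class $c$ and $\mathbf{m}$ is the overall mean; the LDA projection directions are the leading eigenvectors of $S_W^{-1} S_B$.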
### WRITE YOUR CODE HERE #### from sklearn.preprocessing import LabelEncoder enc = LabelEncoder() label_encoder = enc.fit(y_train) y_train = label_encoder.transform(y_train) + 1 #labels are done in alphabetical order # 1: 'Iris-setosa', 2: 'Iris-versicolor', 3:'Iris-virginica' label_encoder = enc.fit(y_test) y_test = label_encoder.transform(y_test) + 1 labels = ['setosa', 'Versicolor', 'Virginica'] # LDA num_classes = 3 num_classes_plus1 = num_classes + 1 def find_mean(X_train,y_train,num_classes_plus1): mean_arr = [] for cl in range(1,num_classes_plus1): mean_arr.append(np.mean(X_train[y_train==cl], axis=0)) return mean_arr mean_arr = find_mean(X_train,y_train,num_classes_plus1) def within_classScatter(mean_arr,X_train,y_train,num_classes_plus1): S_w = np.zeros((num_classes_plus1,num_classes_plus1)) for cl, mv in zip(range(1,num_classes_plus1),mean_arr): temp_s = np.zeros((num_classes_plus1,num_classes_plus1)) for data in X_train[y_train==cl]: data, mv = data.reshape(num_classes_plus1,1), mv.reshape(num_classes_plus1,1) ### making them vertical vectors temp_s += (data-mv)@((data-mv).T) S_w += temp_s return S_w S_w = within_classScatter(mean_arr,X_train,y_train,num_classes_plus1) print("within class scatter matrix S_w:\n") print(S_w) def btw_clasScatter(mean_arr,X_train,y_train,num_classes_plus1): total_mean = np.mean(X_train, axis=0).reshape(num_classes_plus1,1) S_b = np.zeros((num_classes_plus1,num_classes_plus1)) for cl, mv in zip(range(1,num_classes_plus1), mean_arr): n = X_train[y_train==cl].shape[0] class_mean = mv.reshape(num_classes_plus1,1) S_b += n*((class_mean - total_mean)@(class_mean - total_mean).T) return S_b S_b = btw_clasScatter(mean_arr,X_train,y_train,num_classes_plus1) print("between class scatter matrix S_b:\n") print(S_b) def takeTopEigen(S_w, S_b,k): eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_w).dot(S_b)) eigs_sorted_in = np.argsort(eigen_vals)[::-1] eigen_vals = eigen_vals[eigs_sorted_in] eigen_vecs = eigen_vecs[:,eigs_sorted_in] weights = eigen_vecs[:,:k] return weights def lda_vecs(X_train, y_train,weights): Xtrain_lda = X_train@weights Xtest_lda = X_test@weights return Xtrain_lda, Xtest_lda weights = takeTopEigen(S_w, S_b,2) Xtrain_lda, Xtest_lda = lda_vecs(X_train, y_train,weights) def centroid(Xtrain_lda,y_train): centroids = [] for i in range(1,num_classes_plus1): centroids.append(np.mean(Xtrain_lda[y_train == i], axis = 0)) return centroids centroids = centroid(Xtrain_lda,y_train) def pred(X_lda,centroids): y_pred = [] for i in range(len(X_lda)): y_pred.append(np.argmin([ np.linalg.norm(centroids[0]-X_lda[i]), np.linalg.norm(centroids[1]-X_lda[i]), np.linalg.norm(centroids[2]-X_lda[i]) ])+1) return np.array(y_pred) def accuracy(X_lda,y,centroids): y_pred = pred(X_lda,centroids) err = y-y_pred accuracy = len(err[err == 0])/len(err) return accuracy*100 acc = accuracy(Xtrain_lda,y_train,centroids) print("Accuracy on train set",acc) acc = accuracy(Xtest_lda,y_test,centroids) print("Accuracy on test set:",acc) def calc_class(Xtrain_lda,centroids): x_r, y_r = np.meshgrid(np.linspace(np.min(Xtrain_lda[:,0])-0.2, np.max(Xtrain_lda[:,1])+0.2,200), np.linspace(np.min(Xtrain_lda[:,1])-0.2, np.max(Xtrain_lda[:,1])+0.2,200)) cl = np.zeros(x_r.shape) # finding which class the sample belongs to # cl is label vector of predicted class for i in range(len(x_r)): for j in range(len(y_r)): pt = [x_r[i,j], y_r[i,j]] clas = [] for l in range(3): clas.append(np.linalg.norm(centroids[l]-pt)) cl[i,j] = np.argmin(clas)+1 return cl,x_r,y_r def plot(X_lda,y,cl,title,strr): 
for clas in range(1,num_classes_plus1): plt.scatter(X_lda[y == clas][:,0],X_lda[y == clas][:,1],label=labels[clas-1]) plt.xlabel(strr+"1") plt.ylabel(strr+"2") plt.title(title) plt.legend(loc='upper right') plt.contour(x_r,y_r,cl) plt.show() z,x_r,y_r = calc_class(Xtrain_lda,centroids) plot(Xtrain_lda,y_train,z,"Training set","LDA") plot(Xtest_lda,y_test,z,"Test set","LDA") # PCA u, s, vt = np.linalg.svd(X_train, full_matrices=False) w_pca = vt.T[:,:2] Xtrain_pca = X_train@w_pca Xtest_pca = X_test@w_pca cntr = centroid(Xtrain_pca,y_train) acc = accuracy(Xtrain_pca,y_train,cntr) print("Accuracy on train set",acc) cl,x_r,y_r = calc_class(Xtrain_pca,cntr) plot(Xtrain_pca,y_train,cl,"training set","PCA") acc = accuracy(Xtest_pca,y_test,centroids) print("Accuracy on test set:",acc) plot(Xtest_pca,y_test,cl,"test set","PCA")
Accuracy on train set 85.0
MIT
HW9/2019121004_LDAExcercise2.ipynb
avani17101/SMAI-Assignments
Exercise 11 - Recurrent Neural Networks========A recurrent neural network (RNN) is a class of neural network that excels when your data can be treated as a sequence - such as text, music, speech recognition, connected handwriting, or data over a time period. RNNs can analyse or predict a word based on the previous words in a sentence - they allow a connection between previous information and current information.This exercise looks at implementing an LSTM RNN to generate new characters after learning from a large sample of text. LSTMs are a special type of RNN that dramatically improve the model’s ability to connect previous data to current data where there is a long gap.We will train an RNN model using a novel written by H. G. Wells - The Time Machine. Step 1------Let's start by loading our libraries and text file. This might take a few minutes. Run the cell below to import the necessary libraries.
%%capture # Run this! from keras.models import load_model from keras.models import Sequential from keras.layers import Dense, Activation, LSTM from keras.callbacks import LambdaCallback, ModelCheckpoint import numpy as np import random, sys, io, string
_____no_output_____
MIT
11. Recurrent Neural Networks - Python.ipynb
AnneliesseMorales/ms-learn-ml-crash-course-python
Replace the `<addFileName>` with `The Time Machine`
### # REPLACE THE <addFileName> BELOW WITH The Time Machine ### text = io.open('Data/<addFileName>.txt', encoding = 'UTF-8').read() ### # Let's have a look at some of the text print(text[0:198]) # This cuts out punctuation and make all the characters lower case text = text.lower().translate(str.maketrans("", "", string.punctuation)) # Character index dictionary charset = sorted(list(set(text))) index_from_char = dict((c, i) for i, c in enumerate(charset)) char_from_index = dict((i, c) for i, c in enumerate(charset)) print('text length: %s characters' %len(text)) print('unique characters: %s' %len(charset))
_____no_output_____
MIT
11. Recurrent Neural Networks - Python.ipynb
AnneliesseMorales/ms-learn-ml-crash-course-python
Expected output: ```The Time Traveller (for so it will be convenient to speak of him) was expounding a recondite matter to us. His pale grey eyes shone and twinkled, and his usually pale face was flushed and animated.text length: 174201 charactersunique characters: 39```Step 2-----Next we'll divide the text into sequences of 40 characters.Then for each sequence we'll make a training set - the following character will be the correct output for the test set. In the cell below replace: 1. `<sequenceLength>` with `40` 2. `<step>` with `4` and then __run the code__.
### # REPLACE <sequenceLength> WITH 40 AND <step> WITH 4 ### sequence_length = <sequenceLength> step = <step> ### sequences = [] target_chars = [] for i in range(0, len(text) - sequence_length, step): sequences.append([text[i: i + sequence_length]]) target_chars.append(text[i + sequence_length]) print('number of training sequences:', len(sequences))
_____no_output_____
MIT
11. Recurrent Neural Networks - Python.ipynb
AnneliesseMorales/ms-learn-ml-crash-course-python
Expected output:`number of training sequences: 43541` Replace `<addSequences>` with `sequences` and run the code.
# One-hot vectorise X = np.zeros((len(sequences), sequence_length, len(charset)), dtype=np.bool) y = np.zeros((len(sequences), len(charset)), dtype=np.bool) ### # REPLACE THE <addSequences> BELOW WITH sequences ### for n, sequence in enumerate(<addSequences>): ### for m, character in enumerate(list(sequence[0])): X[n, m, index_from_char[character]] = 1 y[n, index_from_char[target_chars[n]]] = 1
_____no_output_____
MIT
11. Recurrent Neural Networks - Python.ipynb
AnneliesseMorales/ms-learn-ml-crash-course-python
Step 3------Let's build our model, using a single LSTM layer of 128 units. We'll keep the model simple for now, so that training does not take too long. In the cell below replace: 1. `<addLSTM>` with `LSTM` 2. `<addLayerSize>` with `128` 3. `<addSoftMaxFunction>` with `'softmax'` and then __run the code__.
model = Sequential() ### # REPLACE THE <addLSTM> BELOW WITH LSTM (use uppercase) AND <addLayerSize> WITH 128 ### model.add(<addLSTM>(<addLayerSize>, input_shape = (X.shape[1], X.shape[2]))) ### ### # REPLACE THE <addSoftmaxFunction> with 'softmax' (INCLUDING THE QUOTES) ### model.add(Dense(y.shape[1], activation = <addSoftMaxFunction>)) ### model.compile(loss = 'categorical_crossentropy', optimizer = 'Adam')
_____no_output_____
MIT
11. Recurrent Neural Networks - Python.ipynb
AnneliesseMorales/ms-learn-ml-crash-course-python
The code below generates text at the end of an epoch (one training cycle). This allows us to see how the model is performing as it trains. If you're making a large neural network with a long training time it's useful to check in on the model and see if the text it generates is legible as it trains, as overtraining may occur and the output of the model may turn to nonsense.The code below will also save a model if it is the best performing model, so we can use it later. Run the code below, but don't change it.
# Run this, but do not edit. # It helps generate the text and save the model epochs. # Generate new text def on_epoch_end(epoch, _): diversity = 0.5 print('\n### Generating text with diversity %0.2f' %(diversity)) start = random.randint(0, len(text) - sequence_length - 1) seed = text[start: start + sequence_length] print('### Generating with seed: "%s"' %seed[:40]) output = seed[:40].lower().translate(str.maketrans("", "", string.punctuation)) print(output, end = '') for i in range(500): x_pred = np.zeros((1, sequence_length, len(charset))) for t, char in enumerate(output): x_pred[0, t, index_from_char[char]] = 1. predictions = model.predict(x_pred, verbose=0)[0] exp_preds = np.exp(np.log(np.asarray(predictions).astype('float64')) / diversity) next_index = np.argmax(np.random.multinomial(1, exp_preds / np.sum(exp_preds), 1)) next_char = char_from_index[next_index] output = output[1:] + next_char print(next_char, end = '') print() print_callback = LambdaCallback(on_epoch_end=on_epoch_end) # Save the model checkpoint = ModelCheckpoint('Models/model-epoch-{epoch:02d}.hdf5', monitor = 'loss', verbose = 1, save_best_only = True, mode = 'min')
_____no_output_____
MIT
11. Recurrent Neural Networks - Python.ipynb
AnneliesseMorales/ms-learn-ml-crash-course-python
The code below will start training the model. This may take a long time. Feel free to stop the training with the `square stop button` to the right of the `Run button` in the toolbar.Later in the exercise, we will load a pretrained model. In the cell below replace: 1. `<addPrintCallback>` with `print_callback` 2. `<addCheckpoint>` with `checkpoint` and then __run the code__.
### # REPLACE <addPrintCallback> WITH print_callback AND <addCheckpoint> WITH checkpoint ### model.fit(X, y, batch_size = 128, epochs = 3, callbacks = [<addPrintCallback>, <addCheckpoint>]) ###
_____no_output_____
MIT
11. Recurrent Neural Networks - Python.ipynb
AnneliesseMorales/ms-learn-ml-crash-course-python
The output won't appear to be very good. But then, this dataset is small, and we have trained it only for a short time using a rather small RNN. How might it look if we upscaled things?Step 5------We could improve our model by:* Having a larger training set.* Increasing the number of LSTM units.* Training it for longer.* Experimenting with different activation functions, optimization functions etc.Training this would still take far too long on most computers to see good results - so we've trained a model already for you.This model uses a different dataset - a few of the King Arthur tales pasted together. The model used:* sequences of 50 characters* Two LSTM layers (512 units each)* A dropout of 0.5 after each LSTM layer* Only 30 epochs (we'd recommend 100-200)Let's try importing this model that has already been trained (a rough sketch of this architecture is shown after the loading code below). Replace `<addLoadModel>` with `load_model` and run the code.
from keras.models import load_model print("loading model... ", end = '') ### # REPLACE <addLoadModel> BELOW WITH load_model ### model = <addLoadModel>('Models/arthur-model-epoch-30.hdf5') ### model.compile(loss = 'categorical_crossentropy', optimizer = 'Adam') ### print("model loaded")
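For reference, here is a rough sketch of what the architecture described above (two 512-unit LSTM layers, dropout of 0.5, 50-character sequences) might look like if defined directly in Keras. This is illustrative only - the pretrained file loaded above is what we actually use, `big_model` is just a hypothetical name, and `len(charset)` here still refers to the Time Machine character set (the Arthur charset is built in the next step).

```python
# Illustrative sketch only - we do NOT train this here; we use the pretrained model loaded above.
from keras.layers import Dropout

big_model = Sequential()
big_model.add(LSTM(512, return_sequences = True, input_shape = (50, len(charset))))
big_model.add(Dropout(0.5))
big_model.add(LSTM(512))
big_model.add(Dropout(0.5))
big_model.add(Dense(len(charset), activation = 'softmax'))
big_model.compile(loss = 'categorical_crossentropy', optimizer = 'Adam')
```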
_____no_output_____
MIT
11. Recurrent Neural Networks - Python.ipynb
AnneliesseMorales/ms-learn-ml-crash-course-python
Step 6-------Now let's use this model to generate some new text! Replace `<addFilePath>` with `'Data/Arthur tales.txt'`
### # REPLACE <addFilePath> BELOW WITH 'Data/Arthur tales.txt' (INCLUDING THE QUOTATION MARKS) ### text = io.open(<addFilePath>, encoding='UTF-8').read() ### # Cut out punctuation and make lower case text = text.lower().translate(str.maketrans("", "", string.punctuation)) # Character index dictionary charset = sorted(list(set(text))) index_from_char = dict((c, i) for i, c in enumerate(charset)) char_from_index = dict((i, c) for i, c in enumerate(charset)) print('text length: %s characters' %len(text)) print('unique characters: %s' %len(charset))
_____no_output_____
MIT
11. Recurrent Neural Networks - Python.ipynb
AnneliesseMorales/ms-learn-ml-crash-course-python
In the cell below replace: 1. `<sequenceLength>` with `50` 2. `<writeSentence>` with a sentence of your own, at least 50 characters long. 3. `<numCharsToGenerate>` with the number of characters you want to generate (choose a large number, like 1500) and then __run the code__.
# Generate text diversity = 0.5 print('\n### Generating text with diversity %0.2f' %(diversity)) ### # REPLACE <sequenceLength> BELOW WITH 50 ### sequence_length = <sequenceLength> ### # Next we'll make a starting point for our text generator ### # REPLACE <writeSentence> WITH A SENTENCE OF AT LEAST 50 CHARACTERS ### seed = "<writeSentence>" ### seed = seed.lower().translate(str.maketrans("", "", string.punctuation)) ### # OR, ALTERNATIVELY, UNCOMMENT THE FOLLOWING TWO LINES AND GRAB A RANDOM STRING FROM THE TEXT FILE ### #start = random.randint(0, len(text) - sequence_length - 1) #seed = text[start: start + sequence_length] ### print('### Generating with seed: "%s"' %seed[:40]) output = seed[:sequence_length].lower().translate(str.maketrans("", "", string.punctuation)) print(output, end = '') ### # REPLACE THE <numCharsToGenerate> BELOW WITH THE NUMBER OF CHARACTERS WE WISH TO GENERATE, e.g. 1500 ### for i in range(<numCharsToGenerate>): ### x_pred = np.zeros((1, sequence_length, len(charset))) for t, char in enumerate(output): x_pred[0, t, index_from_char[char]] = 1. predictions = model.predict(x_pred, verbose=0)[0] exp_preds = np.exp(np.log(np.asarray(predictions).astype('float64')) / diversity) next_index = np.argmax(np.random.multinomial(1, exp_preds / np.sum(exp_preds), 1)) next_char = char_from_index[next_index] output = output[1:] + next_char print(next_char, end = '') print()
_____no_output_____
MIT
11. Recurrent Neural Networks - Python.ipynb
AnneliesseMorales/ms-learn-ml-crash-course-python
Introduction to Python and Natural Language Technologies__Laboratory 10- NLP applications, Dependency parsing____April 22, 2021__During this laboratory you will have to implement various evaluation methods and use them to measure the performance of pretrained models.
import stanza import spacy from gensim.summarization import summarizer as gensim_summarizer from transformers import pipeline import nltk import conllu import os import numpy as np import requests stanza.download('en') stanza_nlp = stanza.Pipeline('en') spacy_nlp = spacy.load("en_core_web_sm")
_____no_output_____
MIT
assignments/10_NLP_applications_lab.ipynb
bmeaut/python_nlp_2021_spring
Let's download the UD treebanks if you do not have them already. We are going to use them for evaluations.
url = "https://lindat.mff.cuni.cz/repository/xmlui/bitstream/handle/11234/1-3424/ud-treebanks-v2.7.tgz" tgz = 'ud-treebanks-v2.7.tgz' directory = 'ud_treebanks' if not os.path.exists(directory): import tarfile response = requests.get(url, stream=True) with open(tgz, 'wb') as ud: ud.write(response.content) os.mkdir(directory) with tarfile.open(tgz, 'r:gz') as _tar: for member in _tar: if member.isdir(): continue fname = member.name.rsplit('/',1)[1] _tar.makefile(member, os.path.join(directory, fname)) data = "ud_treebanks/en_ewt-ud-train.conllu" with open(data) as conll_data: trees = conllu.parse(conll_data.read()) print(trees[0].serialize())
_____no_output_____
MIT
assignments/10_NLP_applications_lab.ipynb
bmeaut/python_nlp_2021_spring
Evaluation Methods 1. F-scoreProbably the most relevant measure we can use when we are evaluating classifiers.Implement the function below. The function takes two iterables and returns a detailed dictionary that contains the True Positive, False Positive, False Negative, Precision, Recall and F-score values for each unique class in the gold list. Additionally, the dictionary should contain the micro and macro precision, recall and F-score values as well.You can read about the F-measure [here](https://en.wikipedia.org/wiki/F-score).Help for the micro-macro averages: https://tomaxent.com/2018/04/27/Micro-and-Macro-average-of-Precision-Recall-and-F-Score/.Example:
f_dict = { 0: {'tp': 4, 'fp': 0, 'fn': 0, 'precision': 1.0, 'recall': 1.0, 'f': 1.0}, 1: {'tp': 4, 'fp': 0, 'fn': 0, 'precision': 1.0, 'recall': 1.0, 'f': 1.0}, 2: {'tp': 4, 'fp': 0, 'fn': 0, 'precision': 1.0, 'recall': 1.0, 'f': 1.0}, 'MICRO AVG': {'precision': 1.0, 'recall': 1.0, 'f': 1.0}, 'MACRO AVG': {'precision': 1.0, 'recall': 1.0, 'f': 1.0} } f_dict2 = { 0: {'tp': 3, 'fp': 1, 'fn': 1, 'precision': 0.75, 'recall': 0.75, 'f': 0.75}, 1: {'tp': 3, 'fp': 1, 'fn': 1, 'precision': 0.75, 'recall': 0.75, 'f': 0.75}, 2: {'tp': 2, 'fp': 2, 'fn': 2, 'precision': 0.5, 'recall': 0.5, 'f': 0.5}, 'MICRO AVG': {'precision': 0.6666666666666666, 'recall': 0.6666666666666666, 'f': 0.6666666666666666}, 'MACRO AVG': {'precision': 0.6666666666666666, 'recall': 0.6666666666666666, 'f': 0.6666666666666666} } def f_score(gold, predicted): raise NotImplementedError() gold = [0, 0, 1, 1, 2, 2, 0, 1, 2, 0, 1, 2] pred = [0, 2, 1, 1, 2, 0, 0, 2, 1, 0, 1, 2] assert f_dict == f_score(gold, gold) assert f_dict2 == f_score(gold, pred)
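One way `f_score` could be filled in is sketched below. This is not the reference solution; `f_score_sketch` is a hypothetical name used so it does not overwrite the stub above, and it follows the per-class tp/fp/fn layout of the example dictionaries.

```python
def f_score_sketch(gold, predicted):
    # Count tp/fp/fn per gold class, then derive precision, recall and F1
    result = {}
    classes = sorted(set(gold))
    for c in classes:
        tp = sum(1 for g, p in zip(gold, predicted) if g == c and p == c)
        fp = sum(1 for g, p in zip(gold, predicted) if g != c and p == c)
        fn = sum(1 for g, p in zip(gold, predicted) if g == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        result[c] = {'tp': tp, 'fp': fp, 'fn': fn,
                     'precision': precision, 'recall': recall, 'f': f}
    # Micro average: pool the counts over all classes, then compute the metrics once
    tp_sum = sum(result[c]['tp'] for c in classes)
    fp_sum = sum(result[c]['fp'] for c in classes)
    fn_sum = sum(result[c]['fn'] for c in classes)
    micro_p = tp_sum / (tp_sum + fp_sum) if tp_sum + fp_sum else 0.0
    micro_r = tp_sum / (tp_sum + fn_sum) if tp_sum + fn_sum else 0.0
    micro_f = 2 * micro_p * micro_r / (micro_p + micro_r) if micro_p + micro_r else 0.0
    result['MICRO AVG'] = {'precision': micro_p, 'recall': micro_r, 'f': micro_f}
    # Macro average: unweighted mean of the per-class metrics
    result['MACRO AVG'] = {'precision': sum(result[c]['precision'] for c in classes) / len(classes),
                           'recall': sum(result[c]['recall'] for c in classes) / len(classes),
                           'f': sum(result[c]['f'] for c in classes) / len(classes)}
    return result
```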
_____no_output_____
MIT
assignments/10_NLP_applications_lab.ipynb
bmeaut/python_nlp_2021_spring
1.1 Evaluate a pretrained POS tagger using the exampleChoose an existing POS tagger (eg. stanza, spacy, nltk) and predict the POS tags of the sentence given below. Compare the results to the reference below using the f_score function above. Keep in mind that there are different POS formats, and you should compare them accordingly.
sentence = trees[0].metadata["text"] upos = [token['upos'] for token in trees[0]] xpos = [token['xpos'] for token in trees[0]] print(f'{sentence}\n{upos}\n{xpos}') # Your solution here
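A possible sketch of this step with spaCy (already loaded above as `spacy_nlp`) is shown below; spaCy's `token.pos_` uses the UD tagset, so it is compared against the `upos` reference. It assumes spaCy's tokenization lines up with the treebank tokens, which a real solution should verify or align, and it assumes the `f_score` stub above has been implemented.

```python
# Sketch only: compare spaCy UPOS predictions against the treebank reference
doc = spacy_nlp(sentence)
predicted_upos = [token.pos_ for token in doc]
if len(predicted_upos) == len(upos):
    print(f_score(upos, predicted_upos))
else:
    print('Tokenization mismatch:', len(predicted_upos), 'vs', len(upos))
```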
_____no_output_____
MIT
assignments/10_NLP_applications_lab.ipynb
bmeaut/python_nlp_2021_spring
2. ROUGE-N scoreWe usually use the ROUGE score to evaluate summaries, comparing the reference summaries and the generated summaries. Write a function that gets a reference summary, a generated summary and a number N. The number represents the length of n-grams to compare. The function should return a dictionary containing the precision, recall and f-score of the ROUGE-N score. (In practice, the most important part of the ROUGE score is its recall.)\begin{equation*}Recall = \frac{overlapping\ ngrams}{all\ ngrams\ in\ the\ reference\ summary}\end{equation*}\begin{equation*}Precision = \frac{overlapping\ ngrams}{all\ ngrams\ in\ the\ generated\ summary}\end{equation*}\begin{equation*}F1 = 2 * \frac{Precision * Recall}{Precision + Recall}\end{equation*}You can read further about the ROUGE-N scoring method [here](https://www.aclweb.org/anthology/W04-1013.pdf).You are encouraged to implement and use the helper functions outlined below. You can use any tokenizer you'd like for this exercise.Example results of the rouge_n function:
n2 = {'precision': 0.75, 'recall': 0.6, 'f': 0.6666666666666665} def get_ngram(text, n): raise NotImplementedError() def rouge_n(reference, generated, n): raise NotImplementedError() reference = 'this cat is absoultely adorable today' generated = 'this cat is adorable today' assert n2 == rouge_n(reference, generated, 2)
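A minimal sketch of one possible implementation is given below, using simple whitespace tokenization; the `_sketch` names are hypothetical so they do not overwrite the stubs above. It counts overlapping n-grams by simple membership, ignoring the clipping of repeated n-grams that the full ROUGE definition applies.

```python
def get_ngram_sketch(text, n):
    # Whitespace tokenization, lowercased; returns the list of n-grams as tuples
    tokens = text.lower().split()
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def rouge_n_sketch(reference, generated, n):
    ref_ngrams = get_ngram_sketch(reference, n)
    gen_ngrams = get_ngram_sketch(generated, n)
    overlap = sum(1 for ngram in gen_ngrams if ngram in ref_ngrams)
    precision = overlap / len(gen_ngrams) if gen_ngrams else 0.0
    recall = overlap / len(ref_ngrams) if ref_ngrams else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {'precision': precision, 'recall': recall, 'f': f}
```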
_____no_output_____
MIT
assignments/10_NLP_applications_lab.ipynb
bmeaut/python_nlp_2021_spring
2.1 Evaluate a pretrained summarizer using the exampleChoose a summarizer (eg. gensim, huggingface) and summarize the following text (taken from the [CNN-Daily Mail dataset](https://cs.nyu.edu/~kcho/DMQA/)) and calculate the ROUGE-2 score of the summary.
article = """Manchester City starlet Devante Cole, son of Andy Cole, has joined Barnsley on loan until January. City have also confirmed that £3m midfielder Bruno Zuculini has joined Valencia on loan for the rest of the season.  Meanwhile Juventus and Roma remain keen on signing Matija Nastasic. On the move: Manchester City striker Devante Cole, son of Andy, has joined Barnsley on loan""" reference = """Devante Cole has joined Barnsley on loan until January. Son of Andy Cole has impressed in the City youth ranks. City have also confirmed that Bruno Zuculini has joined Valencia.""" # Your solution here
_____no_output_____
MIT
assignments/10_NLP_applications_lab.ipynb
bmeaut/python_nlp_2021_spring
3. Dependency parse evaluationWe've discussed the two methods used to evaluate dependency parsers.Reminder: - Labeled attachment score (LAS): the percentage of words that are assigned both the correct syntactic head and the correct dependency label - Unlabeled attachment score (UAS): the percentage of words that are assigned the correct syntactic head 3.1 UAS methodImplement the UAS method for evaluating graphs!The input of the function should be two graphs, both formatted in a simplified conll-dict format, where the keys are the indices of the tokens and the values are tuples consisting of the head and the dependency relation.
def convert_conllu(tree): return {token['id']: (token['head'], token['deprel']) for token in tree} reference_graph = convert_conllu(trees[0]) reference_graph pred = {1: (0, 'root'), 2: (1, 'punct'), 3: (1, 'flat'), 4: (1, 'punct'), 5: (6, 'amod'), 6: (7, 'obj'), 7: (1, 'parataxis'), 8: (7, 'obj'), 9: (8, 'flat'), 10: (8, 'flat'), 11: (8, 'punct'), 12: (8, 'flat'), 13: (8, 'punct'), 14: (15, 'det'), 15: (8, 'appos'), 16: (18, 'case'), 17: (10, 'det'), 18: (7, 'obl'), 19: (8, 'case'), 20: (21, 'det'), 21: (18, 'obl'), 22: (23, 'case'), 23: (21, 'nmod'), 24: (21, 'punct'), 25: (28, 'case'), 26: (28, 'det'), 27: (28, 'amod'), 28: (8, 'obl'), 29: (1, 'punct')} def uas(gold, predicted): raise NotImplementedError()
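A minimal sketch of one possible `uas` implementation (the `_sketch` name is hypothetical, so it does not overwrite the stub above):

```python
def uas_sketch(gold, predicted):
    # A token counts as correct if its predicted head index matches the gold head;
    # the dependency label is ignored for UAS
    correct = sum(1 for tok_id, (head, _) in gold.items()
                  if tok_id in predicted and predicted[tok_id][0] == head)
    return correct / len(gold)
```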
_____no_output_____
MIT
assignments/10_NLP_applications_lab.ipynb
bmeaut/python_nlp_2021_spring
3.2 LAS methodImplement the LAS method as well, similarly to the previous evaluation script.
def las(gold, predicted): raise NotImplementedError() assert 26/29 == uas(reference_graph, pred) assert 24/29 == las(reference_graph, pred)
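And a matching sketch for `las` (again a hypothetical `_sketch` name, not the reference solution), where both the head index and the dependency label have to agree with the gold annotation:

```python
def las_sketch(gold, predicted):
    # Both the head index and the dependency relation must match the gold tuple
    correct = sum(1 for tok_id, annotation in gold.items()
                  if predicted.get(tok_id) == annotation)
    return correct / len(gold)
```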
_____no_output_____
MIT
assignments/10_NLP_applications_lab.ipynb
bmeaut/python_nlp_2021_spring
================ PASSING LEVEL ==================== 3.3 Try out the evaluation methods with stanzaEvaluate the predictions of stanza on the given example! To do so, you will have to convert the output of stanza to be in the same format as the expected input of the uas and las methods. We recommend the stanza [documentation](https://stanfordnlp.github.io/stanza/tutorials.html) to be able to do this.
def stanza_converter(stanza_doc): raise NotImplementedError() # Your solution here
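One possible sketch of the converter is shown below; it relies on stanza word objects exposing `.id`, `.head` and `.deprel`, mirroring the conllu fields used in `convert_conllu` above, and it assumes stanza's tokenization matches the treebank's for the example sentence. The `_sketch` name and the final evaluation call (which assumes `uas` and `las` have been implemented) are illustrative.

```python
def stanza_converter_sketch(stanza_doc):
    # Only the first sentence is used, matching the single-sentence example
    sent = stanza_doc.sentences[0]
    return {word.id: (word.head, word.deprel) for word in sent.words}

stanza_pred = stanza_converter_sketch(stanza_nlp(sentence))
print(uas(reference_graph, stanza_pred), las(reference_graph, stanza_pred))
```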
_____no_output_____
MIT
assignments/10_NLP_applications_lab.ipynb
bmeaut/python_nlp_2021_spring
3.4 Compare the accuracy of stanza and spacyRun the spacy dependency parser on the same input as before and evaluate the performance. To do so you will have to implement a function that converts the output of spacy (see the [documentation](https://spacy.io/usage/linguistic-features#dependency-parse)) to the appropriate format and check the output of the las and uas methods.
def spacy_converter(spacy_doc): raise NotImplementedError() # Your solution here
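A possible sketch for spaCy is given below (illustrative, with a hypothetical `_sketch` name). spaCy marks the root by making the token its own head, which is mapped to head index 0 here to mimic the CoNLL-U convention; note that spaCy's labels (e.g. 'ROOT', 'dobj') do not always coincide with the UD labels in the treebank, so LAS will be pessimistic. The final calls assume `uas` and `las` have been implemented.

```python
def spacy_converter_sketch(spacy_doc):
    converted = {}
    for token in spacy_doc:
        # The root token is its own head in spaCy; map it to head index 0
        head = 0 if token.head == token else token.head.i + 1
        converted[token.i + 1] = (head, token.dep_)
    return converted

spacy_pred = spacy_converter_sketch(spacy_nlp(sentence))
print(uas(reference_graph, spacy_pred), las(reference_graph, spacy_pred))
```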
_____no_output_____
MIT
assignments/10_NLP_applications_lab.ipynb
bmeaut/python_nlp_2021_spring
Construction
import numpy as np import matplotlib.pyplot as plt import seaborn as sns def sign(x): return (-1)**(x < 0) def make_standard(X): means = X.mean(0) stds = X.std(0) return (X - means)/stds class RegularizedRegression: def __init__(self, name = None): self.name = name def record_info(self, X_train, y_train, lam, intercept, standardize): if standardize == True: # standardize (if specified) X_train = make_standard(X_train) if intercept == False: # add intercept (if not already included) ones = np.ones(len(X_train)).reshape(len(X_train), 1) # column of ones X_train = np.concatenate((ones, X_train), axis = 1) self.X_train = np.array(X_train) self.y_train = np.array(y_train) self.N, self.D = self.X_train.shape self.lam = lam def fit_ridge(self, X_train, y_train, lam = 0, intercept = False, standardize = False): # record data and dimensions self.record_info(X_train, y_train, lam, intercept, standardize) # estimate parameters XtX = np.dot(self.X_train.T, self.X_train) XtX_plus_lam_inverse = np.linalg.inv(XtX + self.lam*np.eye(self.D)) Xty = np.dot(self.X_train.T, self.y_train) self.beta_hats = np.dot(XtX_plus_lam_inverse, Xty) self.y_train_hat = np.dot(self.X_train, self.beta_hats) # calculate loss self.L = .5*np.sum((self.y_train - self.y_train_hat)**2) + (self.lam/2)*np.linalg.norm(self.beta_hats)**2 def fit_lasso(self, X_train, y_train, lam = 0, n_iters = 10000, lr = 0.001, intercept = False, standardize = False): # record data and dimensions self.record_info(X_train, y_train, lam, intercept, standardize) # estimate parameters beta_hats = np.random.randn(self.D) for i in range(n_iters): dL_dbeta = -self.X_train.T @ (self.y_train - (self.X_train @ beta_hats)) + self.lam*sign(beta_hats) beta_hats -= lr*dL_dbeta self.beta_hats = beta_hats self.y_train_hat = np.dot(self.X_train, self.beta_hats) # calculate loss self.L = .5*np.sum((self.y_train - self.y_train_hat)**2) + self.lam*np.sum(np.abs(self.beta_hats)) mpg = sns.load_dataset('mpg') # load mpg dataframe mpg = mpg.dropna(axis = 0).reset_index(drop = True) # drop null values mpg = mpg.loc[:,mpg.dtypes != object] # keep only numeric columns X_train = mpg.drop(columns = 'mpg') # get predictor variables y_train = mpg['mpg'] # get outcome variable lam = 10 ridge_model = RegularizedRegression() ridge_model.fit_ridge(X_train, y_train, lam) lasso_model = RegularizedRegression() lasso_model.fit_lasso(X_train, y_train, lam, standardize = True) fig, ax = plt.subplots() sns.scatterplot(ridge_model.y_train, ridge_model.y_train_hat) ax.set_xlabel(r'$y$', size = 16) ax.set_ylabel(r'$\hat{y}$', rotation = 0, size = 16, labelpad = 15) ax.set_title(r'Ridge $y$ vs. $\hat{y}$', size = 20, pad = 10) sns.despine() fig, ax = plt.subplots() sns.scatterplot(lasso_model.y_train, lasso_model.y_train_hat) ax.set_xlabel(r'$y$', size = 16) ax.set_ylabel(r'$\hat{y}$', rotation = 0, size = 16, labelpad = 15) ax.set_title(r'LASSO $y$ vs. $\hat{y}$', size = 20, pad = 10) sns.despine()
_____no_output_____
MIT
content/c2/.ipynb_checkpoints/construction-checkpoint.ipynb
JeffFessler/mlbook
PROBLEM 1 INTRODUCTION
#Say "Hello, World!" With Python print("Hello, World!") #Python If-Else #!/bin/python3 import math import os import random import re import sys if __name__ == '__main__': n = int(input().strip()) if 1 <= n <= 100: if n % 2 != 0 or (n % 2 == 0 and 6<=n<=20): print("Weird") elif n % 2 == 0 and (2<=n<=5 or n>20): print("Not Weird") #Arithmetic Operators if __name__ == '__main__': a = int(input()) b = int(input()) if 1<=a<=10**10 and 1<=b<=10**10: print(a+b) print(a-b) print(a*b) #Python: Division if __name__ == '__main__': a = int(input()) b = int(input()) print(a//b) print(a/b) #Loops if __name__ == '__main__': n = int(input()) if 1<=n<=20: for i in range(n): print(i*i) #Write a function def is_leap(year): leap = False # Write your logic here if 1900 <= year <= 10**5: if year % 4 == 0 and year % 100 == 0 and year % 400 == 0: leap = True elif year % 4 == 0 and year % 100 != 0: leap = True return leap year = int(input()) print(is_leap(year)) #Print Function if __name__ == '__main__': n = int(input()) output = "" for i in range(1,n+1): output += str(i) print(output)
_____no_output_____
MIT
scripts.ipynb
giuliacasale/homework1
BASIC DATA TYPES
# List Comprehension if __name__ == '__main__': x = int(input()) y = int(input()) z = int(input()) n = int(input()) lista = [[i,j,k] for i in range(0,x+1) for j in range(0,y+1) for k in range(0,z+1) if i+j+k != n] print(lista) #Find the runner up score! if __name__ == '__main__': n = int(input()) arr = map(int, input().split()) if 2<=n<=10: arr = list(arr) for elem in arr: if -100<=elem<=100: massimo = max(arr) runner_up = -101 for score in arr: if score > runner_up and score < massimo: runner_up = score print(runner_up) #Nested Lists lista=list() lista2=list() if __name__ == '__main__': for _ in range(int(input())): name = input() score = float(input()) lista2.append(score) lista.append([name,score]) minimo=min(lista2) while min(lista2)==minimo: lista2.remove(min(lista2)) lista.sort() nuovo_minimo = min(lista2) for name,score in lista: if score==nuovo_minimo: print(name) #Finding the percentage if __name__ == '__main__': n = int(input()) student_marks = {} for _ in range(n): name, *line = input().split() scores = list(map(float, line)) student_marks[name] = scores query_name = input() if 2<=n<=10: for key in student_marks: if key == query_name: marks = student_marks[key] total = len(marks) somma = 0 for elem in marks: somma += float(elem) average = somma/total print("%.2f" % average) #Lists if __name__ == '__main__': N = int(input()) lista = [] for n in range(N): command = input().split(" ") if command[0] == "insert": lista.insert(int(command[1]), int(command[2])) elif command[0] == "print": print(lista) elif command[0] == "remove": lista.remove(int(command[1])) elif command[0] == "append": lista.append(int(command[1])) elif command[0] == "sort": lista.sort() elif command[0] == "pop": lista.pop() elif command[0] == "reverse": lista.reverse() #Tuples if __name__ == '__main__': n = int(input()) integer_list = map(int, input().split()) tupla = tuple(integer_list) print(hash(tupla))
_____no_output_____
MIT
scripts.ipynb
giuliacasale/homework1
STRINGS
#sWAP cASE def swap_case(s): new = '' for char in s: if char.isupper(): new += char.lower() elif char.islower(): new += char.upper() else: new += char return new if __name__ == '__main__': s = input() result = swap_case(s) print(result) #String Split and Join def split_and_join(line): # write your code here new_line = '-'.join(line.split(' ')) return new_line if __name__ == '__main__': line = input() result = split_and_join(line) print(result) #What's Your Name? def print_full_name(a, b): if len(a)<=10 and len(b)<=10: print('Hello '+a+ ' ' + b + '! You just delved into python.') if __name__ == '__main__': first_name = input() last_name = input() print_full_name(first_name, last_name) #Mutations def mutate_string(string, position, character): l = list(string) l[position] = character string = ''.join(l) return string if __name__ == '__main__': s = input() i, c = input().split() s_new = mutate_string(s, int(i), c) print(s_new) #Find a string def count_substring(string, sub_string): if 1<=len(string)<=200: count = 0 for i in range(len(string)): if string[i:].startswith(sub_string): count += 1 return count if __name__ == '__main__': string = input().strip() sub_string = input().strip() count = count_substring(string, sub_string) print(count) #string validators if __name__ == '__main__': s = input() if 0<=len(s)<=1000: print(any(char.isalnum() for char in s)) print(any(char.isalpha() for char in s)) print(any(char.isdigit() for char in s)) print(any(char.islower() for char in s)) print(any(char.isupper() for char in s)) #text alignment thickness = int(input()) #This must be an odd number c = 'H' #Top Cone for i in range(thickness): print((c*i).rjust(thickness-1)+c+(c*i).ljust(thickness-1)) #Top Pillars for i in range(thickness+1): print((c*thickness).center(thickness*2)+(c*thickness).center(thickness*6)) #Middle Belt for i in range((thickness+1)//2): print((c*thickness*5).center(thickness*6)) #Bottom Pillars for i in range(thickness+1): print((c*thickness).center(thickness*2)+(c*thickness).center(thickness*6)) #Bottom Cone for i in range(thickness): print(((c*(thickness-i-1)).rjust(thickness)+c+(c*(thickness-i-1)).ljust(thickness)).rjust(thickness*6)) #Text Wrap import textwrap def wrap(string, max_width): if 0<=len(string)<=1000 and 0<=max_width<=len(string): text = textwrap.fill(string,max_width) return text if __name__ == '__main__': string, max_width = input(), int(input()) result = wrap(string, max_width) print(result) #Designer Door Mat if __name__ == '__main__': n, m = map(int, input().split(" ")) if 5<=n<=101 and 15<=m<=303: for i in range(n): if 0<=i<=(n//2-1): print(('.|.'*i).rjust(m//2-1,'-')+'.|.'+('.|.'*i).ljust(m//2-1,'-')) elif i == n//2: print('WELCOME'.center(m,'-')) else: print(('.|.'*(2*(n-i-1)+1)).center(m,'-')) #String Formatting def print_formatted(number): # your code goes here for i in range(1,n+1): decimal = str(i) octal = str(oct(i)[2:]) hexadecimal = str(hex(i)[2:]).upper() binary = str(bin(i)[2:]) width = len(bin(n)[2:]) print (decimal.rjust(width,' '),octal.rjust(width,' '),hexadecimal.rjust(width,' '),binary.rjust(width,' ')) if __name__ == '__main__': n = int(input()) print_formatted(n) #Alphabet Rangoli def print_rangoli(size): # your code goes here alphabet = ['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z'] sub_alphabet = alphabet[0:size] if 0<size<27: for i in range(1,size): row = sub_alphabet[-1:size-i:-1]+sub_alphabet[size-i:] print('-'*((size-i)*2)+ '-'.join(row)+'-'*((size-i)*2)) for i in range(size): 
first_half = '' second_half = '' for j in range(size-1-i): first_half += alphabet[size-1-j] + '-' second_half += '-'+alphabet[j+1+i] print('-'*2*i + first_half + alphabet[i]+ second_half + '-'*2*i) if __name__ == '__main__': n = int(input()) print_rangoli(n) #Capitalize! #!/bin/python3 import math import os import random import re import sys # Complete the solve function below. def solve(s): if 0<len(s)<1000: s = s.split(" ") return(" ".join(elem.capitalize() for elem in s)) if __name__ == '__main__': fptr = open(os.environ['OUTPUT_PATH'], 'w') s = input() result = solve(s) fptr.write(result + '\n') fptr.close() #The Minion Game def minion_game(string): # your code goes here vowels = 'AEIOU' score_s = 0 score_k = 0 if 0<=len(string)<=10**6: for i in range(len(string)): if string[i] in vowels: score_k += len(string)-i else: score_s += len(string)-i if score_k>score_s: print('Kevin '+str(score_k)) elif score_s>score_k: print('Stuart '+str(score_s)) else: print('Draw') if __name__ == '__main__': s = input() minion_game(s) #Merge the Tools def merge_the_tools(string, k): # your code goes here if 1<=len(string)<=10**4 and 1<=k<=len(string) and len(string)%k==0: l = [] for i in range(0, len(string),k): l.append(string[i:(i+k)]) for elem in l: l2 = [] for char in elem: if char not in l2: l2.append(char) print("".join(l2)) if __name__ == '__main__': string, k = input(), int(input()) merge_the_tools(string, k)
_____no_output_____
MIT
scripts.ipynb
giuliacasale/homework1
SETS
#Introduction to sets def average(array): # your code goes here if 1<=len(array)<=100: somma = 0 array1 = [] for elem in array: if elem not in array1: array1.append(elem) somma += elem average = somma/len(array1) return average if __name__ == '__main__': n = int(input()) arr = list(map(int, input().split())) result = average(arr) print(result) #No idea! # Enter your code here. Read input from STDIN. Print output to STDOUT n, m = map(int, input().split()) array = map(int, input().split(' ')) A = set(map(int, input().split(' '))) B = set(map(int, input().split(' '))) if 1<=n<=10**5 and 1<=m<=10**5: happiness = 0 for elem in array: if elem in A: happiness += 1 if elem in B: happiness -= 1 print(happiness) #Symmetric difference # Enter your code here. Read input from STDIN. Print output to STDOUT M=int(input()) m=input() N=int(input()) n=input() set1 = set(map(int,m.split(' '))) set2 = set(map(int,n.split(' '))) differenza1 = set1.difference(set2) differenza2 = set2.difference(set1) differenza_tot = list(differenza1.union(differenza2)) differenza_tot.sort() for i in range(len(differenza_tot)): print(int(differenza_tot[i])) #Set.add() # Enter your code here. Read input from STDIN. Print output to STDOUT N = int(input()) #total number of country stamps if 0<N<1000: country_set = set() for i in range(N): country = input() country_set.add(country) print(len(country_set)) #Set.discard(), .remove() & .pop() n = int(input()) #number of elementes in set s s = set(map(int, input().split())) N = int(input()) #number of commands if 0<n<20 and 0<N<20: for i in range(N): command = list(input().split()) if command[0] == 'pop': s.pop() elif command[0] == 'remove': s.remove(int(command[1])) elif command[0] == 'discard': s.discard(int(command[1])) print(sum(s)) #Set.union() Operation # Enter your code here. Read input from STDIN. Print output to STDOUT n = int(input()) english_newspaper = set(map(int,input().split(' '))) m = int(input()) french_newspaper = set(map(int,input().split(' '))) at_least_one = english_newspaper.union(french_newspaper) if 0<len(at_least_one)<1000: print(len(at_least_one)) #Set.intersection() Operation # Enter your code here. Read input from STDIN. Print output to STDOUT n = int(input()) english_newspaper = set(map(int, input().split())) m = int(input()) french_newspaper = set(map(int,input().split())) both_newspapers = english_newspaper.intersection(french_newspaper) if 0<len(english_newspaper.union(french_newspaper))<1000: print(len(both_newspapers)) #Set.difference() Operation # Enter your code here. Read input from STDIN. Print output to STDOUT n = int(input()) english_newspaper = set(map(int, input().split())) m = int(input()) french_newspaper = set(map(int, input().split())) only_english = english_newspaper.difference(french_newspaper) print(len(only_english)) #Set.symmetric_difference() Operation # Enter your code here. Read input from STDIN. Print output to STDOUT n = int(input()) english_newspaper = set(map(int,input().split())) m = int(input()) french_newspaper = set(map(int,input().split())) either_one = english_newspaper.symmetric_difference(french_newspaper) print(len(either_one)) #Set Mutations # Enter your code here. Read input from STDIN. 
Print output to STDOUT A = int(input()) setA = set(map(int,input().split())) N = int(input()) if 0<len(setA)<1000 and 0<N<100: for i in range(N): command = list(input().split(' ')) setB = set(map(int, input().split(' '))) if 0<len(setB)<100: if command[0] == 'update': setA.update(setB) if command[0] == 'intersection_update': setA.intersection_update(setB) if command[0] == 'difference_update': setA.difference_update(setB) if command[0] == 'symmetric_difference_update': setA.symmetric_difference_update(setB) print(sum(setA)) #The captain's Room # Enter your code here. Read input from STDIN. Print output to STDOUT K = int(input()) rooms = list(map(int, input().split())) from collections import Counter rooms = Counter(rooms) for room in rooms: if rooms[room] == 1: captain_room = room print(captain_room) #because it kept giving me error due to timeout even though the sample cases were correct, I checked the discussion page and took the idea of using Counter from collections #Check subset # Enter your code here. Read input from STDIN. Print output to STDOUT T = int(input()) if 0<T<21: for i in range(T): a = int(input()) setA = set(map(int, input().split())) b = int(input()) setB = set(map(int,input().split())) if setA.difference(setB) == set(): print(True) else: print(False) #Check strict superset # Enter your code here. Read input from STDIN. Print output to STDOUT setA = set(map(int, input().split())) n = int(input()) all_sets = [] if 0<len(setA)<501 and 0<n<21: for i in range(n): setI = set(map(int,input().split())) if 0<len(setI)<101: all_sets.append(setI) output = True for elem in all_sets: if not setA.issuperset(elem): output = False print(output)
_____no_output_____
MIT
scripts.ipynb
giuliacasale/homework1
COLLECTIONS
#collections.Counter() # Enter your code here. Read input from STDIN. Print output to STDOUT from collections import Counter X = int(input()) #number of shoes shoe_sizes = list(map(int,input().split())) N = int(input()) #number of customers shoe_sizes = Counter(shoe_sizes) total = 0 if 0<X<10**3 and 0<N<=10**3: for i in range(N): size, price = map(int,input().split()) if shoe_sizes[size]: total += price shoe_sizes[size] -= 1 print(total) #DefaultDict Tutorial # Enter your code here. Read input from STDIN. Print output to STDOUT from collections import defaultdict n, m = map(int, input().split()) groupA = [] groupB = [] if 0<=n<=10000 and 1<=m<=100: for i in range(n): wordA = input() groupA.append(wordA) for i in range(m): wordB = input() groupB.append(wordB) d = defaultdict(list) for i in range(n): d[groupA[i]].append(i+1) for i in groupB: if i in d: print(*d[i]) else: print(-1) #Collections.namedtuple() # Enter your code here. Read input from STDIN. Print output to STDOUT from collections import namedtuple n = int(input()) students = namedtuple('students',input().split()) sum_grades = 0 if 0<n<=100: for i in range(n): st = students._make(input().split()) sum_grades += float(st.MARKS) average = sum_grades/n print(average) #Collections.OrderedDict() # Enter your code here. Read input from STDIN. Print output to STDOUT import collections n = int(input()) l = collections.OrderedDict() for i in range(n): item_name = input().split(' ') item_price = int(item_name[-1]) item_name = ' '.join(item_name[:-1]) if item_name not in l: l[item_name] = item_price else: l[item_name] += item_price for item in l.items(): print(*item) #Word Order # Enter your code here. Read input from STDIN. Print output to STDOUT from collections import Counter n = int(input()) if 1<=n<=10**5: l = [] for i in range(n): word = input().lower() l.append(word) c = Counter(l) my_sum = 0 for key in c.keys(): my_sum += 1 print(my_sum) print(*c.values()) #Collections.deque() # Enter your code here. Read input from STDIN. Print output to STDOUT from collections import deque n = int(input()) d = deque() for i in range(n): command = list(input().split()) if command[0] == 'append': d.append(command[1]) if command[0] == 'appendleft': d.appendleft(command[1]) if command[0] == 'pop': d.pop() if command[0] == 'popleft': d.popleft() print(*d) #Company Logo #!/bin/python3 import math import os import random import re import sys from collections import Counter if __name__ == '__main__': s = input() if 3<=len(s)<=10**4: d = Counter(sorted(s)) for elem in d.most_common(3): print(*elem) #Piling Up! # Enter your code here. Read input from STDIN. Print output to STDOUT t = int(input()) #number of test cases if 1<=t<=5: for i in range(t): n = int(input()) if 1<=n<=10**5: cubes = list(map(int, input().split())) if cubes[0] == max(cubes) or cubes[-1] == max(cubes): print('Yes') else: print('No')
_____no_output_____
MIT
scripts.ipynb
giuliacasale/homework1
DATE AND TIME
#Calendar Module # Enter your code here. Read input from STDIN. Print output to STDOUT import calendar month,day,year = map(int,input().split()) if 2000<year<3000: weekdays = ['MONDAY','TUESDAY','WEDNESDAY','THURSDAY','FRIDAY','SATURDAY','SUNDAY'] weekday = calendar.weekday(year,month,day) print(weekdays[weekday]) #Time Delta #!/bin/python3 import math import os import random import re import sys from datetime import datetime # Complete the time_delta function below. def time_delta(t1, t2): access1 = datetime.strptime(t1,'%a %d %b %Y %H:%M:%S %z') access2 = datetime.strptime(t2,'%a %d %b %Y %H:%M:%S %z') return str(int((abs(access1-access2)).total_seconds())) if __name__ == '__main__': fptr = open(os.environ['OUTPUT_PATH'], 'w') t = int(input()) for t_itr in range(t): t1 = input() t2 = input() delta = time_delta(t1, t2) fptr.write(delta+ '\n') fptr.close()
_____no_output_____
MIT
scripts.ipynb
giuliacasale/homework1
EXCEPTIONS
#Exceptions # Enter your code here. Read input from STDIN. Print output to STDOUT t = int(input()) if 0<t<10: for i in range(t): values = list(input().split()) try: division = int(values[0])//int(values[1]) print(division) except ZeroDivisionError as e: print("Error Code:",e) except ValueError as e: print("Error Code:",e)
_____no_output_____
MIT
scripts.ipynb
giuliacasale/homework1
BUILT-INS
#Zipped! # Enter your code here. Read input from STDIN. Print output to STDOUT num_students, num_subjects = map(int,input().split()) mark_sheet = [] for i in range(num_subjects): mark_sheet.append(map(float, input().split(' '))) for grades in zip(*mark_sheet): somma = sum(grades) print(somma/num_subjects) #Athlete Sort #!/bin/python3 import math import os import random import re import sys if __name__ == '__main__': nm = input().split() n = int(nm[0]) m = int(nm[1]) arr = [] for _ in range(n): arr.append(list(map(int, input().rstrip().split()))) k = int(input()) if 1<=n<=1000 and 1<=m<=1000: l = [] for lista in arr: l.append(lista[k]) l.sort() l1 = [] for elem in l: for i in range(n): if arr[i][k] == elem and i not in l1: l1.append(i) for i in l1: print(*arr[i]) #ginortS # Enter your code here. Read input from STDIN. Print output to STDOUT string = input() if 0<len(string)<1000: l = [] for char in string: l.append(char) l1 = [] l2 = [] l3 = [] l4 = [] for elem in l: if elem.islower(): l1.append(elem) if elem.isupper(): l2.append(elem) if elem.isdigit() and int(elem)%2 != 0: l3.append(elem) if elem.isdigit() and int(elem)%2 == 0: l4.append(elem) l1.sort() l2.sort() l3.sort() l4.sort() lista = l1 + l2 + l3 + l4 print(''.join(lista))
_____no_output_____
MIT
scripts.ipynb
giuliacasale/homework1
PYTHON FUNCTIONALS
#Map and Lambda Functions cube = lambda x: x**3 # complete the lambda function def fibonacci(n): # return a list of fibonacci numbers serie = [] if 0<=n<=15: if n == 1: serie = [0] if n > 1: serie = [0, 1] for i in range(1,n-1): serie.append(serie[i]+serie[i-1]) return serie if __name__ == '__main__': n = int(input()) print(list(map(cube, fibonacci(n))))
_____no_output_____
MIT
scripts.ipynb
giuliacasale/homework1
REGEX AND PARSING
#Detect Floating Point Number # Enter your code here. Read input from STDIN. Print output to STDOUT import re t = int(input()) if 0<t<10: for i in range(t): test_case = input() print(bool(re.search(r"^[+-/.]?[0-9]*\.[0-9]+$",test_case))) #Re.split() regex_pattern = r"[,.]" # Do not delete 'r'. import re print("\n".join(re.split(regex_pattern, input()))) #Group(), Groups() & Groupdict() # Enter your code here. Read input from STDIN. Print output to STDOUT import re s = input() if 0<len(s)<100: m = re.search(r"([a-z0-9A-Z])\1+",s) if m != None: print(m.group(1)) else: print(-1) #Re.findall() & Re.finditer() # Enter your code here. Read input from STDIN. Print output to STDOUT import re s = input() consonanti ='bcdfghjklmnpqrstvwxyzBCDFGHJKLMNPQRSTVWXYZ' if 0<len(s)<100: m = re.findall(r'(?<=['+consonanti+'])([AEIOUaeiou]{2,})(?=['+consonanti+'])',s.strip()) if len(m)>0: for elem in m: print(elem) else: print(-1) #Re.start() & Re.end() # Enter your code here. Read input from STDIN. Print output to STDOUT import re s = input() k = input() if 0<len(s)<100 and 0<len(k)<len(s): for i in range(len(s)): if re.match(k,s[i:]): tupla = (i,i+len(k)-1) print(tupla) if re.search(k,s) == None: tupla = (-1, -1) print(tupla) #Regex Substitutions # Enter your code here. Read input from STDIN. Print output to STDOUT import re n = int(input()) if 0<n<100: for i in range(n): line = input() l1 = re.sub(r' &&(?= )', ' and', line) l2 = re.sub(r' \|\|(?= )',' or',l1) print(l2) #Validating Roman Numerals regex_pattern = r"^(M{0,3})(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3})$" # Do not delete 'r'. import re print(str(bool(re.match(regex_pattern, input())))) #Validating phone numbers # Enter your code here. Read input from STDIN. Print output to STDOUT import re n = int(input()) if 0<n<=10: for i in range(n): number = input().strip() if len(number)==10: if bool(re.search(r'^([789]+)([0123456789]{0,9}$)',number)) == True: print('YES') else: print('NO') else: print('NO') #Validating and Parsing Email Addressess # Enter your code here. Read input from STDIN. Print output to STDOUT import re import email.utils n = int(input()) if 0<n<100: for i in range(n): address = email.utils.parseaddr(input()) if bool(re.match(r'^([a-zA-Z]+)([a-zA-Z0-9|\-|/.|_]+)@([a-zA-Z]+)\.([a-zA-Z]){1,3}$',address[1])) == True: print(email.utils.formataddr(address)) #Hex Color Code # Enter your code here. Read input from STDIN. Print output to STDOUT import re n = int(input()) if 0<n<50: for i in range(n): line = input().split() if len(line) > 1 and ('{' or '}') not in line: line = ' '.join(line) hexs = re.findall(r'#[0-9A-Fa-f]{6}|#[0-9A-Fa-f]{3}',line) if hexs: for elem in hexs: print(str(elem)) #HTML Parser - Part 1 # Enter your code here. Read input from STDIN. 
Print output to STDOUT from html.parser import HTMLParser as hp n = int(input()) class MyHTMLParser(hp): def handle_starttag(self, tag, attrs): print ('Start :',tag) for attr in attrs: print("->", attr[0],'>',attr[1]) def handle_endtag(self, tag): print ('End :',tag) def handle_startendtag(self, tag, attrs): print ('Empty :',tag) for attr in attrs: print("->", attr[0],'>',attr[1]) parser = MyHTMLParser() for i in range(n): parser.feed(input()) #HTML Parser - Part 2 from html.parser import HTMLParser class MyHTMLParser(HTMLParser): def handle_comment(self,data): lines = len(data.split('\n')) if lines>1: print(">>> Multi-line Comment") if data.strip(): print(data) else: print(">>> Single-line Comment") if data.strip(): print(data) def handle_data(self, data): if data.strip(): print(">>> Data"+'\n'+data) html = "" for i in range(int(input())): html += input().rstrip() html += '\n' parser = MyHTMLParser() parser.feed(html) parser.close() #Detect HTML Tags, Attributes and Attribute Values # Enter your code here. Read input from STDIN. Print output to STDOUT from html.parser import HTMLParser class MyHTMLParser(HTMLParser): def handle_starttag(self,tag,attrs): print(tag) for attr in attrs: if attr: print('->',attr[0],'>',attr[1]) parser = MyHTMLParser() for i in range(int(input())): parser.feed(input()) #Validating UID # Enter your code here. Read input from STDIN. Print output to STDOUT import re t = int(input()) for i in range(t): id = input() if re.search(r'^(?!.*(.).*\1)(?=(.*[A-Z]){2,})(?=(.*[0-9]){3,})[a-zA-Z0-9]{10}$',id): print('Valid') else: print('Invalid') #Validating Credit Card Numbers # Enter your code here. Read input from STDIN. Print output to STDOUT import re n = int(input()) for i in range(n): credit_card = input() if re.match(r'^([456]{1}[0-9]{3})-?([0-9]){4}-?([0-9]){4}-?([0-9]){4}$',credit_card) and re.match(r'(([0-9])(?!\2{3})){16}',credit_card.replace('-','')): print('Valid') else: print('Invalid') #Validating Postal Codes regex_integer_in_range = r"^[1-9][0-9]{5}$" # Do not delete 'r'. regex_alternating_repetitive_digit_pair = r"([0-9])(?=.\1)" # Do not delete 'r'. import re P = input() print (bool(re.match(regex_integer_in_range, P)) and len(re.findall(regex_alternating_repetitive_digit_pair, P)) < 2) #Matrix Script #!/bin/python3 import math import os import random import re import sys first_multiple_input = input().rstrip().split() n = int(first_multiple_input[0]) m = int(first_multiple_input[1]) matrix = [] for _ in range(n): matrix_item = input() matrix.append(matrix_item) output = '' for column in range(m): for row in range(n): output += matrix[row][column] print(re.sub(r'(?<=[A-Za-z0-9])[!@#$%& ]{1,}(?=[A-Za-z0-9])',' ',output))
_____no_output_____
MIT
scripts.ipynb
giuliacasale/homework1
XML
#XML 1 - Find the score import sys import xml.etree.ElementTree as etree def get_attr_number(node): # your code goes here somma = 0 for elem in node.iter(): diz = elem.attrib somma += len(diz) return somma if __name__ == '__main__': sys.stdin.readline() xml = sys.stdin.read() tree = etree.ElementTree(etree.fromstring(xml)) root = tree.getroot() print(get_attr_number(root)) #XML 2 - Find the maximum Depth import xml.etree.ElementTree as etree maxdepth = 0 def depth(elem, level): global maxdepth # your code goes here if (level+1)>maxdepth: maxdepth = level + 1 for child in list(elem): depth(child,level+1) if __name__ == '__main__': n = int(input()) xml = "" for i in range(n): xml = xml + input() + "\n" tree = etree.ElementTree(etree.fromstring(xml)) depth(tree.getroot(), -1) print(maxdepth)
_____no_output_____
MIT
scripts.ipynb
giuliacasale/homework1
CLOSURES AND DECORATIONS
#Standardize Mobile Number Using Decorators import re def wrapper(f): def fun(l): # complete the function lista = [] for elem in l: if len(elem) == 10: lista.append('+91'+' '+str(elem[0:5]+ ' '+str(elem[5:]))) elif len(elem) == 11: lista.append('+91'+' '+str(elem[1:6]+ ' '+str(elem[6:]))) elif len(elem) == 12: lista.append('+91'+' '+str(elem[2:7]+ ' '+str(elem[7:]))) elif len(elem) == 13: lista.append('+91'+' '+str(elem[3:8]+ ' '+str(elem[8:]))) lista.sort() for elem in lista: print(elem) return fun @wrapper def sort_phone(l): print(*sorted(l), sep='\n') if __name__ == '__main__': l = [input() for _ in range(int(input()))] sort_phone(l) #Decorators 2 - Name Directory import operator def person_lister(f): def inner(people): # complete the function s = sorted(people, key = lambda x: int(x[2])) return[f(person) for person in s] return inner @person_lister def name_format(person): return ("Mr. " if person[3] == "M" else "Ms. ") + person[0] + " " + person[1] if __name__ == '__main__': people = [input().split() for i in range(int(input()))] print(*name_format(people), sep='\n')
_____no_output_____
MIT
scripts.ipynb
giuliacasale/homework1
NUMPY
#Arrays import numpy def arrays(arr): # complete this function # use numpy.array arr.reverse() return numpy.array(arr, float) arr = input().strip().split(' ') result = arrays(arr) print(result) #Shape and Reshape import numpy x = list(map(int,input().split())) my_array = numpy.array(x) print(numpy.reshape(my_array,(3,3))) #Transpose and Flatten import numpy n,m=map(int,input().split()) l = [] for i in range(n): row = list(map(int, input().split())) l.append(row) my_array = numpy.array(l) print(numpy.transpose(my_array)) print(my_array.flatten()) #Concatenate import numpy n,m,p = map(int,input().split()) l1 = [] l2 = [] for i in range(n): row = list(map(int,input().split())) l1.append(row) for j in range(m): row = list(map(int,input().split())) l2.append(row) array1 = numpy.array(l1) array2 = numpy.array(l2) print(numpy.concatenate((array1,array2),axis=0)) #Zeros and Ones import numpy shape = list(map(int,input().split())) print(numpy.zeros(shape, dtype = numpy.int)) print(numpy.ones(shape, dtype = numpy.int)) #Eye and Identity import numpy n,m = map(int,input().split()) numpy.set_printoptions(sign=' ') #I had to look at this method of formatting the answer in the discussion board #because I wasn't aware of it print(numpy.eye(n,m)) #Array Mathematics import numpy n,m = map(int,input().split()) arrayA = numpy.array([list(map(int, input().split())) for i in range(n)], int) arrayB = numpy.array([list(map(int, input().split())) for i in range(n)], int) print(numpy.add(arrayA,arrayB)) print(numpy.subtract(arrayA,arrayB)) print(numpy.multiply(arrayA,arrayB)) print(arrayA//arrayB) print(numpy.mod(arrayA,arrayB)) print(numpy.power(arrayA,arrayB)) #Floor, Ceil and Rint import numpy a = numpy.array(list(map(float,input().split()))) numpy.set_printoptions(sign=' ') print(numpy.floor(a)) print(numpy.ceil(a)) print(numpy.rint(a)) #Sum and Prod import numpy n,m = map(int,input().split()) a = numpy.array([list(map(int,input().split())) for i in range(n)],int) my_sum = numpy.sum(a,axis=0) print(numpy.prod(my_sum)) #Min and Max import numpy n,m = map(int,input().split()) a = numpy.array([list(map(int,input().split())) for i in range(n)]) minimo = numpy.min(a,axis=1) print(numpy.max(minimo)) #Mean, Var and Std import numpy n,m = map(int,input().split()) array = numpy.array([list(map(int,input().split())) for i in range(n)]) numpy.set_printoptions(legacy='1.13') #I took this line from the discussion board because it kept giving me the right answer but in the wrong format without it print(numpy.mean(array,axis=1)) print(numpy.var(array,axis=0)) print(numpy.std(array)) #Dot and Cross import numpy n = int(input()) a = numpy.array([list(map(int,input().split())) for i in range(n)]) b = numpy.array([list(map(int,input().split())) for i in range(n)]) print(numpy.dot(a,b)) #Inner and Outer import numpy a = numpy.array(list(map(int,input().split()))) b = numpy.array(list(map(int,input().split()))) print(numpy.inner(a,b)) print(numpy.outer(a,b)) #Polynomials import numpy coefficient = numpy.array(list(map(float,input().split()))) x = int(input()) print(numpy.polyval(coefficient,x)) #Linear Algebra import numpy n = int(input()) a = numpy.array([list(map(float,input().split())) for i in range(n)]) det = numpy.linalg.det(a).round(2) print(det)
_____no_output_____
MIT
scripts.ipynb
giuliacasale/homework1
PROBLEM 2
#Birthday Cake Candles #!/bin/python3 import math import os import random import re import sys # # Complete the 'birthdayCakeCandles' function below. # # The function is expected to return an INTEGER. # The function accepts INTEGER_ARRAY candles as parameter. # def birthdayCakeCandles(candles): # Write your code here tallest = candles.count(max(candles)) return tallest if __name__ == '__main__': fptr = open(os.environ['OUTPUT_PATH'], 'w') candles_count = int(input().strip()) candles = list(map(int, input().rstrip().split())) result = birthdayCakeCandles(candles) fptr.write(str(result) + '\n') fptr.close() #Number Line Jumps #!/bin/python3 import math import os import random import re import sys # Complete the kangaroo function below. def kangaroo(x1, v1, x2, v2): if 0<=x1<x2<=10000 and 1<=v1<=10000 and 0<v2<=10000: if v1<=v2: return 'NO' opt_jumps = (x2-x1)/(v1-v2) if opt_jumps%1==0: return 'YES' else: return 'NO' if __name__ == '__main__': fptr = open(os.environ['OUTPUT_PATH'], 'w') x1V1X2V2 = input().split() x1 = int(x1V1X2V2[0]) v1 = int(x1V1X2V2[1]) x2 = int(x1V1X2V2[2]) v2 = int(x1V1X2V2[3]) result = kangaroo(x1, v1, x2, v2) fptr.write(result + '\n') fptr.close() #Viral Advertising #!/bin/python3 import math import os import random import re import sys # Complete the viralAdvertising function below. def viralAdvertising(n): if 1<= n and n<= 50: people = 5 likes = 0 i = 0 for i in range(0,n): new_likes = people//2 likes += new_likes people = new_likes*3 i += 1 return likes if __name__ == '__main__': fptr = open(os.environ['OUTPUT_PATH'], 'w') n = int(input()) result = viralAdvertising(n) fptr.write(str(result) + '\n') fptr.close() #Recursive Sum Digit #!/bin/python3 import math import os import random import re import sys # Complete the superDigit function below. def superDigit(n, k): if len(n)==1 and k<=1: return int(n) else: somma=0 for i in n: somma += int(i) n = str(somma*k) return superDigit(n,1) if __name__ == '__main__': fptr = open(os.environ['OUTPUT_PATH'], 'w') nk = input().split() n = nk[0] k = int(nk[1]) result = superDigit(n, k) fptr.write(str(result) + '\n') fptr.close() #Insertion Sort - Part 1 #!/bin/python3 import math import os import random import re import sys # Complete the insertionSort1 function below. def insertionSort1(n, arr): value = arr[-1] arr.remove(value) count = 0 for i in range(n-2,-1,-1): if arr[i]>value: arr.insert(i,arr[i]) print(*arr) arr.remove(arr[i]) elif arr[i]<=value and count==0: arr.insert(i+1, value) count+=1 print(*arr) if arr[0]>value: arr.insert(0, value) print(*arr) if __name__ == '__main__': n = int(input()) arr = list(map(int, input().rstrip().split())) insertionSort1(n, arr) #Insertion Sort - Part 2 #!/bin/python3 import math import os import random import re import sys # Complete the insertionSort2 function below. def insertionSort2(n, arr): for i in range(1,n): num=arr[i] j=i-1 while j>=0 and arr[j]>num: arr[j+1]=arr[j] j=j-1 arr[j+1]=num print(' '.join(str(i) for i in arr)) if __name__ == '__main__': n = int(input()) arr = list(map(int, input().rstrip().split())) insertionSort2(n, arr)
_____no_output_____
MIT
scripts.ipynb
giuliacasale/homework1
TIME SERIES FORECASTING PRACTICE WITH TENSORFLOW 1. Load Libraries and Data Set
# Load the libraries required for the analysis
import os
import datetime

import IPython
import IPython.display
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import tensorflow as tf

mpl.rcParams['figure.figsize'] = (8, 6)
mpl.rcParams['axes.grid'] = False

# Load the dataset
zip_path = tf.keras.utils.get_file(
    origin='https://storage.googleapis.com/tensorflow/tf-keras-datasets/jena_climate_2009_2016.csv.zip',
    fname='jena_climate_2009_2016.csv.zip',
    extract=True)
csv_path, _ = os.path.splitext(zip_path)
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/jena_climate_2009_2016.csv.zip 13574144/13568290 [==============================] - 0s 0us/step
MIT
Aprendizaje_Time_Series_con_Deep_Learning.ipynb
diegojeda/AdvancedMethodsDataAnalysisClass
2. Data Cleaning and Preparation
df = pd.read_csv(csv_path)
df.head()

# Since readings are logged every 10 minutes, keep only the last reading of each hour
# so that there is a single value per hour
df = pd.read_csv(csv_path)
# slice [start:stop:step], starting from index 5 take every 6th record.
df = df[5::6]

# Convert the time column to datetime format and pop it out of the dataframe
date_time = pd.to_datetime(df.pop('Date Time'), format='%d.%m.%Y %H:%M:%S')
df.head()

# Plot the series of interest to see how they evolve over time
plot_cols = ['T (degC)', 'p (mbar)', 'rho (g/m**3)']
plot_features = df[plot_cols]
plot_features.index = date_time
_ = plot_features.plot(subplots=True)

plot_features = df[plot_cols][:744]
plot_features.index = date_time[:744]
_ = plot_features.plot(subplots=True)

# Review the data descriptively
df.describe().transpose()
_____no_output_____
MIT
Aprendizaje_Time_Series_con_Deep_Learning.ipynb
diegojeda/AdvancedMethodsDataAnalysisClass
We can see that the "wv (m/s)" and "max. wv (m/s)" variables have anomalous minimum values. These must be erroneous, so we will impute them with zero.
wv = df['wv (m/s)']
bad_wv = wv == -9999.0
wv[bad_wv] = 0.0

max_wv = df['max. wv (m/s)']
bad_max_wv = max_wv == -9999.0
max_wv[bad_max_wv] = 0.0

df['wv (m/s)'].min()
_____no_output_____
MIT
Aprendizaje_Time_Series_con_Deep_Learning.ipynb
diegojeda/AdvancedMethodsDataAnalysisClass
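An equivalent, slightly more compact way to do the same imputation is pandas' `replace()`. This is only a hedged alternative sketch, not part of the original notebook, and it assumes the same column names:

```python
# Hedged alternative to the mask-based fix above: swap the -9999.0 sentinels for 0.0 in one step
cols = ['wv (m/s)', 'max. wv (m/s)']
df[cols] = df[cols].replace(-9999.0, 0.0)

# Sanity check: the anomalous minimum should be gone
print(df[cols].min())
```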
The last variable, "wd (deg)", gives the wind direction in degrees. However, degrees are not a good input for the model. In this case the limitations are the following:

- 0° and 360° should be close to each other and wrap around, which this representation does not capture.
- If there is no wind speed, the direction should not matter.
plt.hist2d(df['wd (deg)'], df['wv (m/s)'], bins=(50, 50), vmax=400)
plt.colorbar()
plt.xlabel('Wind Direction [deg]')
plt.ylabel('Wind Velocity [m/s]');
_____no_output_____
MIT
Aprendizaje_Time_Series_con_Deep_Learning.ipynb
diegojeda/AdvancedMethodsDataAnalysisClass
To work around these difficulties, we will convert the wind direction and speed into wind vectors, a more representative encoding.
wv = df.pop('wv (m/s)')
max_wv = df.pop('max. wv (m/s)')

# Convert to radians.
wd_rad = df.pop('wd (deg)') * np.pi / 180

# Calculate the wind x and y components.
df['Wx'] = wv * np.cos(wd_rad)
df['Wy'] = wv * np.sin(wd_rad)

# Calculate the max wind x and y components.
df['max Wx'] = max_wv * np.cos(wd_rad)
df['max Wy'] = max_wv * np.sin(wd_rad)
_____no_output_____
MIT
Aprendizaje_Time_Series_con_Deep_Learning.ipynb
diegojeda/AdvancedMethodsDataAnalysisClass
Let's review the distribution of each vector's components.
plt.hist2d(df['Wx'], df['Wy'], bins=(50, 50), vmax=400)
plt.colorbar()
plt.xlabel('Wind X [m/s]')
plt.ylabel('Wind Y [m/s]')
ax = plt.gca()
ax.axis('tight')

# Convert the date to seconds to look for periodicity
timestamp_s = date_time.map(datetime.datetime.timestamp)

# Define the number of seconds in a day and in a year
day = 24*60*60
year = (365.2425)*day

# Transform the timestamps with sine and cosine functions to encode the periodicity
df['Day sin'] = np.sin(timestamp_s * (2 * np.pi / day))
df['Day cos'] = np.cos(timestamp_s * (2 * np.pi / day))
df['Year sin'] = np.sin(timestamp_s * (2 * np.pi / year))
df['Year cos'] = np.cos(timestamp_s * (2 * np.pi / year))
df.head()

# Create the figure where the plots will be drawn
fig, axarr = plt.subplots(2, 1, figsize=(10, 8))

# Draw each plot
axarr[0].plot(np.array(df['Day sin'])[:25])
axarr[0].plot(np.array(df['Day cos'])[:25])
axarr[0].set_title('Daily frequency')
axarr[1].plot(np.array(df['Year sin'])[:24*365])
axarr[1].plot(np.array(df['Year cos'])[:24*365])
axarr[1].set_title('Annual frequency');

# To double-check these frequencies, run tf.signal.rfft on the temperature over time
fft = tf.signal.rfft(df['T (degC)'])
f_per_dataset = np.arange(0, len(fft))

n_samples_h = len(df['T (degC)'])
hours_per_year = 24*365.2524
years_per_dataset = n_samples_h/(hours_per_year)

f_per_year = f_per_dataset/years_per_dataset
plt.step(f_per_year, np.abs(fft))
plt.xscale('log')
plt.ylim(0, 400000)
plt.xlim([0.1, max(plt.xlim())])
plt.xticks([1, 365.2524], labels=['1/Year', '1/day'])
_ = plt.xlabel('Frequency (log scale)')
_____no_output_____
MIT
Aprendizaje_Time_Series_con_Deep_Learning.ipynb
diegojeda/AdvancedMethodsDataAnalysisClass
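To read the dominant frequencies off numerically rather than only from the plot, a small hedged check can help. It assumes the `fft` and `f_per_year` arrays from the cell above are still in scope:

```python
# Magnitudes of the FFT components; index 0 is the constant (DC) term, so skip it
mags = np.abs(np.array(fft))
top_idx = np.argsort(mags[1:])[-5:] + 1
print("Strongest frequencies (cycles/year):", np.sort(f_per_year[top_idx]))
# If the periodicity assumption holds, these should cluster near 1 (annual) and ~365 (daily)
```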
We can see that the two peaks occur at 1/year and 1/day, which corroborates our assumptions.
df.head()
_____no_output_____
MIT
Aprendizaje_Time_Series_con_Deep_Learning.ipynb
diegojeda/AdvancedMethodsDataAnalysisClass
3. Data Split We will split the data as follows:

- Training: 70%
- Validation: 20%
- Test: 10%
column_indices = {name: i for i, name in enumerate(df.columns)}

n = len(df)
train_df = df[0:int(n*0.7)]
val_df = df[int(n*0.7):int(n*0.9)]
test_df = df[int(n*0.9):]

num_features = df.shape[1]
_____no_output_____
MIT
Aprendizaje_Time_Series_con_Deep_Learning.ipynb
diegojeda/AdvancedMethodsDataAnalysisClass
4. Data Normalization
# Normalize the training, validation, and test sets using the training statistics only
train_mean = train_df.mean()
train_std = train_df.std()

train_df = (train_df - train_mean) / train_std
val_df = (val_df - train_mean) / train_std
test_df = (test_df - train_mean) / train_std

df_std = (df - train_mean) / train_std
df_std = df_std.melt(var_name='Column', value_name='Normalized')
plt.figure(figsize=(12, 6))
ax = sns.violinplot(x='Column', y='Normalized', data=df_std)
_ = ax.set_xticklabels(df.keys(), rotation=90)
_____no_output_____
MIT
Aprendizaje_Time_Series_con_Deep_Learning.ipynb
diegojeda/AdvancedMethodsDataAnalysisClass
Classification with Python In this notebook we practice the classification algorithms learned in this course. We load a dataset using the Pandas library, apply the algorithms listed below, and find the best one for this specific dataset using accuracy evaluation methods. Let's first load the required libraries:
import itertools
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import NullFormatter
import pandas as pd
import matplotlib.ticker as ticker
from sklearn import preprocessing
%matplotlib inline
_____no_output_____
MIT
Coursera/IBM Python 01/Course02/ML Python Sharing.ipynb
brianshen1990/KeepLearning
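Since several classifiers are compared on held-out data below, a small helper keeps the evaluation consistent. This is only a sketch: the metric choices (accuracy, Jaccard, F1) are assumptions, and `jaccard_score` needs scikit-learn 0.21 or newer (older course environments expose `jaccard_similarity_score` instead):

```python
from sklearn.metrics import accuracy_score, f1_score, jaccard_score

def report_scores(name, y_true, y_pred, positive_label='PAIDOFF'):
    """Print a consistent set of hold-out metrics for one classifier."""
    print(name,
          "accuracy=%.3f" % accuracy_score(y_true, y_pred),
          "jaccard=%.3f" % jaccard_score(y_true, y_pred, pos_label=positive_label),
          "f1=%.3f" % f1_score(y_true, y_pred, pos_label=positive_label))
```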
About dataset This dataset is about past loans. The __Loan_train.csv__ data set includes details of 346 customers whose loans are already paid off or defaulted. It includes the following fields:

| Field | Description |
|----------------|---------------------------------------------------------------------------------------|
| Loan_status | Whether a loan is paid off or in collection |
| Principal | Basic principal loan amount |
| Terms | Origination terms, which can be a weekly (7 days), biweekly, or monthly payoff schedule |
| Effective_date | When the loan was originated and took effect |
| Due_date | Since it’s a one-time payoff schedule, each loan has one single due date |
| Age | Age of applicant |
| Education | Education of applicant |
| Gender | The gender of applicant |

Let's download the dataset
!wget -O loan_train.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/loan_train.csv
--2020-05-22 14:48:38-- https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/loan_train.csv Resolving s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)... 67.228.254.196 Connecting to s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)|67.228.254.196|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 23101 (23K) [text/csv] Saving to: ‘loan_train.csv’ 100%[======================================>] 23,101 --.-K/s in 0.002s 2020-05-22 14:48:38 (11.5 MB/s) - ‘loan_train.csv’ saved [23101/23101]
MIT
Coursera/IBM Python 01/Course02/ML Python Sharing.ipynb
brianshen1990/KeepLearning
Load Data From CSV File
df = pd.read_csv('loan_train.csv')
df.head()
df.shape
_____no_output_____
MIT
Coursera/IBM Python 01/Course02/ML Python Sharing.ipynb
brianshen1990/KeepLearning
Convert to date time object
df['due_date'] = pd.to_datetime(df['due_date'])
df['effective_date'] = pd.to_datetime(df['effective_date'])
df.head()
_____no_output_____
MIT
Coursera/IBM Python 01/Course02/ML Python Sharing.ipynb
brianshen1990/KeepLearning
Data visualization and pre-processing Let’s see how many of each class are in our data set
df['loan_status'].value_counts()
_____no_output_____
MIT
Coursera/IBM Python 01/Course02/ML Python Sharing.ipynb
brianshen1990/KeepLearning
260 people have paid off the loan on time, while 86 have gone into collection. Let's plot some columns to understand the data better:
# notice: installing seaborn might take a few minutes
!conda install -c anaconda seaborn -y

import seaborn as sns

bins = np.linspace(df.Principal.min(), df.Principal.max(), 10)
g = sns.FacetGrid(df, col="Gender", hue="loan_status", palette="Set1", col_wrap=2)
g.map(plt.hist, 'Principal', bins=bins, ec="k")
g.axes[-1].legend()
plt.show()

bins = np.linspace(df.age.min(), df.age.max(), 10)
g = sns.FacetGrid(df, col="Gender", hue="loan_status", palette="Set1", col_wrap=2)
g.map(plt.hist, 'age', bins=bins, ec="k")
g.axes[-1].legend()
plt.show()
_____no_output_____
MIT
Coursera/IBM Python 01/Course02/ML Python Sharing.ipynb
brianshen1990/KeepLearning
Pre-processing: Feature selection/extraction Let's look at the day of the week people get the loan
df['dayofweek'] = df['effective_date'].dt.dayofweek
bins = np.linspace(df.dayofweek.min(), df.dayofweek.max(), 10)
g = sns.FacetGrid(df, col="Gender", hue="loan_status", palette="Set1", col_wrap=2)
g.map(plt.hist, 'dayofweek', bins=bins, ec="k")
g.axes[-1].legend()
plt.show()
_____no_output_____
MIT
Coursera/IBM Python 01/Course02/ML Python Sharing.ipynb
brianshen1990/KeepLearning
We see that people who get the loan at the end of the week tend not to pay it off, so let's use feature binarization to set a threshold at day 4 (weekend = 1 if the day of week is greater than 3, else 0)
df['weekend'] = df['dayofweek'].apply(lambda x: 1 if (x > 3) else 0)
df.head()
_____no_output_____
MIT
Coursera/IBM Python 01/Course02/ML Python Sharing.ipynb
brianshen1990/KeepLearning
Convert Categorical features to numerical values Let's look at gender:
df.groupby(['Gender'])['loan_status'].value_counts(normalize=True)
_____no_output_____
MIT
Coursera/IBM Python 01/Course02/ML Python Sharing.ipynb
brianshen1990/KeepLearning
86% of females pay off their loans, while only 73% of males pay off theirs. Let's convert male to 0 and female to 1:
df['Gender'].replace(to_replace=['male', 'female'], value=[0, 1], inplace=True)
df.head()
_____no_output_____
MIT
Coursera/IBM Python 01/Course02/ML Python Sharing.ipynb
brianshen1990/KeepLearning
One Hot Encoding How about education?
df.groupby(['education'])['loan_status'].value_counts(normalize=True)
_____no_output_____
MIT
Coursera/IBM Python 01/Course02/ML Python Sharing.ipynb
brianshen1990/KeepLearning
Feature before One Hot Encoding
df[['Principal','terms','age','Gender','education']].head()
_____no_output_____
MIT
Coursera/IBM Python 01/Course02/ML Python Sharing.ipynb
brianshen1990/KeepLearning
Use the one hot encoding technique to convert categorical variables to binary variables and append them to the feature Data Frame
Feature = df[['Principal', 'terms', 'age', 'Gender', 'weekend']]
Feature = pd.concat([Feature, pd.get_dummies(df['education'])], axis=1)
Feature.drop(['Master or Above'], axis=1, inplace=True)
Feature.head()
_____no_output_____
MIT
Coursera/IBM Python 01/Course02/ML Python Sharing.ipynb
brianshen1990/KeepLearning
Feature selection Let's define the feature set, X:
X = Feature
X[0:5]
_____no_output_____
MIT
Coursera/IBM Python 01/Course02/ML Python Sharing.ipynb
brianshen1990/KeepLearning
What are our labels?
y = df['loan_status'].values
y[0:5]
_____no_output_____
MIT
Coursera/IBM Python 01/Course02/ML Python Sharing.ipynb
brianshen1990/KeepLearning
Normalize Data Data standardization gives the data zero mean and unit variance (technically, the scaler should be fit after the train/test split)
X = preprocessing.StandardScaler().fit(X).transform(X)
X[0:5]
/opt/conda/envs/Python36/lib/python3.6/site-packages/sklearn/preprocessing/data.py:645: DataConversionWarning: Data with input dtype uint8, int64 were all converted to float64 by StandardScaler. return self.partial_fit(X, y) /opt/conda/envs/Python36/lib/python3.6/site-packages/ipykernel/__main__.py:1: DataConversionWarning: Data with input dtype uint8, int64 were all converted to float64 by StandardScaler. if __name__ == '__main__':
MIT
Coursera/IBM Python 01/Course02/ML Python Sharing.ipynb
brianshen1990/KeepLearning
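As the parenthetical note above says, the scaler should really be fit on the training portion only, so the test set never leaks into the scaling statistics. A hedged sketch of that pattern, reusing the same split parameters as the next cell:

```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Split first, then fit the scaler on the training rows only and reuse it on the test rows
X_tr, X_te, y_tr, y_te = train_test_split(Feature, y, test_size=0.2, random_state=4)
scaler = StandardScaler().fit(X_tr)
X_tr = scaler.transform(X_tr)
X_te = scaler.transform(X_te)
```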
Classification Now, it is your turn: use the training set to build an accurate model, then use the test set to report the accuracy of the model. You should use the following algorithms:

- K Nearest Neighbor (KNN)
- Decision Tree
- Support Vector Machine
- Logistic Regression

__Notice:__

- You can go back and change the pre-processing, feature selection, feature extraction, and so on, to make a better model.
- You should use either scikit-learn, Scipy or Numpy libraries for developing the classification algorithms.
- You should include the code of the algorithm in the following cells.

K Nearest Neighbor (KNN) Notice: you should find the best k to build the model with the best accuracy. **Warning:** you should not use the __loan_test.csv__ for finding the best k; however, you can split your train_loan.csv into train and test to find the best __k__.

Train Test Split This will provide a more accurate evaluation of out-of-sample accuracy, because the testing dataset is not part of the data used to train the model. It is more realistic for real-world problems.
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=4)
print('Train set:', X_train.shape, y_train.shape)
print('Test set:', X_test.shape, y_test.shape)
Train set: (276, 8) (276,) Test set: (70, 8) (70,)
MIT
Coursera/IBM Python 01/Course02/ML Python Sharing.ipynb
brianshen1990/KeepLearning
Calculate the accuracy for each K from 1 to 15 and plot the results to select the best K
from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics

Ks = 15
mean_acc = np.zeros((Ks-1))
std_acc = np.zeros((Ks-1))

for n in range(1, Ks):
    # Train model with n neighbors and predict on the test set
    neigh = KNeighborsClassifier(n_neighbors=n).fit(X_train, y_train)
    yhat = neigh.predict(X_test)
    mean_acc[n-1] = metrics.accuracy_score(y_test, yhat)
    std_acc[n-1] = np.std(yhat == y_test) / np.sqrt(yhat.shape[0])

mean_acc

plt.plot(range(1, Ks), mean_acc, 'g')
plt.fill_between(range(1, Ks), mean_acc - 1 * std_acc, mean_acc + 1 * std_acc, alpha=0.10)
plt.legend(('Accuracy ', '+/- 1xstd'))
plt.ylabel('Accuracy ')
plt.xlabel('Number of Neighbors (K)')
plt.tight_layout()
plt.show()

print("The best accuracy was with", mean_acc.max(), "with k=", mean_acc.argmax()+1)
_____no_output_____
MIT
Coursera/IBM Python 01/Course02/ML Python Sharing.ipynb
brianshen1990/KeepLearning
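The next cell concludes that k = 7 works best; assuming that choice, a minimal sketch of refitting the final KNN model and reporting its test accuracy:

```python
# Refit KNN at the chosen k and score it on the held-out test set
best_k = 7
knn_final = KNeighborsClassifier(n_neighbors=best_k).fit(X_train, y_train)
print("Test accuracy (k=%d): %.3f" % (best_k, metrics.accuracy_score(y_test, knn_final.predict(X_test))))
```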
The answer It seems k = 7 gives the best accuracy. Decision Tree Train set and Test Set Just reuse the previous split, and use a Decision Tree to build the model (max_depth from 1 to 10; why? Because there are fewer than 10 attributes).
from sklearn.tree import DecisionTreeClassifier
from sklearn import metrics
import matplotlib.pyplot as plt

Ks = 10
acc = np.zeros((Ks-1))

for n in range(1, Ks):
    # Fit a tree with max_depth = n and record its test accuracy
    drugTree = DecisionTreeClassifier(criterion="entropy", max_depth=n)
    drugTree.fit(X_train, y_train)
    predTree = drugTree.predict(X_test)
    acc[n-1] = metrics.accuracy_score(y_test, predTree)

acc

plt.plot(range(1, Ks), acc, 'g')
plt.ylabel('Accuracy ')
plt.xlabel('Depth (K)')
plt.tight_layout()
plt.show()

print("The best accuracy was with", acc.max(), "with depth=", acc.argmax()+1)
_____no_output_____
MIT
Coursera/IBM Python 01/Course02/ML Python Sharing.ipynb
brianshen1990/KeepLearning
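The next cell settles on a depth of 6; assuming that choice, a short hedged sketch that refits the tree and compares train vs. test accuracy as a quick overfitting check:

```python
best_depth = 6
tree_final = DecisionTreeClassifier(criterion="entropy", max_depth=best_depth).fit(X_train, y_train)
print("Train accuracy: %.3f" % metrics.accuracy_score(y_train, tree_final.predict(X_train)))
print("Test accuracy:  %.3f" % metrics.accuracy_score(y_test, tree_final.predict(X_test)))
```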
And we will use depth 6 with the decision tree. (Why do depths 1 to 2 give the same result? Most likely because such shallow trees barely change their predictions from the majority class, so accuracy plateaus.) Support Vector Machine Data pre-processing and selection For SVM, treat the features and labels as plain numeric arrays
Feature.dtypes

feature_df = Feature[['Principal', 'terms', 'age', 'Gender', 'weekend', 'Bechalor', 'High School or Below', 'college']]
X_SVM = np.asarray(feature_df)
X_SVM[0:5]

# Encode the label as 1 for PAIDOFF and 0 otherwise
Y_Feature = [1 if i == "PAIDOFF" else 0 for i in df['loan_status'].values]
y_SVM = np.asarray(Y_Feature)
y_SVM[0:5]
_____no_output_____
MIT
Coursera/IBM Python 01/Course02/ML Python Sharing.ipynb
brianshen1990/KeepLearning
Train and Test data Split
X_train_SVM, X_test_SVM, y_train_SVM, y_test_SVM = train_test_split(X_SVM, y_SVM, test_size=0.2, random_state=4)
print('Train set:', X_train_SVM.shape, y_train_SVM.shape)
print('Test set:', X_test_SVM.shape, y_test_SVM.shape)
Train set: (276, 8) (276,) Test set: (70, 8) (70,)
MIT
Coursera/IBM Python 01/Course02/ML Python Sharing.ipynb
brianshen1990/KeepLearning
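The section stops before the SVM is actually trained. A hedged sketch of how it could continue; the RBF kernel and the reported metrics are assumptions, not part of the original notebook:

```python
from sklearn import svm
from sklearn.metrics import accuracy_score, f1_score

# Fit an SVM on the numeric feature matrix prepared above
# (scaling the features first, as done earlier, would usually help an RBF kernel)
clf = svm.SVC(kernel='rbf', gamma='auto')
clf.fit(X_train_SVM, y_train_SVM)
yhat_svm = clf.predict(X_test_SVM)

print("SVM accuracy: %.3f" % accuracy_score(y_test_SVM, yhat_svm))
print("SVM F1-score: %.3f" % f1_score(y_test_SVM, yhat_svm))
```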