markdown | code | output | license | path | repo_name
---|---|---|---|---|---
4. Alice, Bob and Carol have agreed to pool their Halloween candy and split it evenly among themselves. For the sake of their friendship, any candies left over will be smashed. For example, if they collectively bring home 91 candies, they'll take 30 each and smash 1. Write an arithmetic expression below to calculate how many candies they must smash for a given haul. | # Variables representing the number of candies collected by alice, bob, and carol
alice_candies = 121
bob_candies = 77
carol_candies = 109
# Your code goes here! Replace the right-hand side of this assignment with an expression
# involving alice_candies, bob_candies, and carol_candies
to_smash = (alice_candies + bob_candies + carol_candies) % 3
print(to_smash)
# Check your answer
q4.check()
#q4.hint()
#q4.solution() | _____no_output_____ | MIT | 1 - Python/1 - Python Syntax [exercise-syntax-variables-and-number].ipynb | AkashKumarSingh11032001/Kaggle_Course_Repository |
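A quick sketch (not part of the original exercise) of the same leftover-candy idea written as a reusable function; the name `leftover` is just for illustration.

```python
def leftover(candies, friends=3):
    """Candies that cannot be split evenly among `friends` people."""
    return sum(candies) % friends

print(leftover([121, 77, 109]))  # 307 % 3 == 1, so one candy gets smashed
```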
Assignment Data Description - COVID data of daily cumulative cases of India as reported from January 2020 to 8th August 2020 - Source: https://www.kaggle.com/sudalairajkumar/covid19-in-india Conduct the insight investigations below: 1. Find which state has the highest mean of cumulative confirmed cases since Jan 2020 - Plot a line graph of the means of the top 10 states with the highest daily confirmed cases 2. Which state has the highest death rate for the months of June, July & Aug - Plot a bar graph of death rates for the top 5 states Key steps to adopt to solve the above questions - Load Data --> Clean data / Data munging --> Grouping of Data by State --> Exploration using plots Load Packages | import pandas as pd # for cleaning and loading data from csv file
import numpy as np
from matplotlib import pyplot as plt # package for plotting graphs
import datetime
import seaborn as sns; sns.set(color_codes=True)
%matplotlib inline | _____no_output_____ | MIT | covid_data_analysis_solution.ipynb | rahulkumbhar8888/DataScience |
Load data | df = pd.read_csv("covid_19_india.csv")
df.head() # Preview first 5 rows of dataframe
# Convert Date column which is a string into datetime object
df["Date"] = pd.to_datetime(df["Date"], format = "%d/%m/%y")
df.head() | _____no_output_____ | MIT | covid_data_analysis_solution.ipynb | rahulkumbhar8888/DataScience |
Cleaning of data - The dataset consists of cumulative values; the aim is to create columns with daily reported deaths and confirmed cases. - The method below is a helper function that creates a daily-cases column from the cumulative frequency column | ex = np.unique(df['State/UnionTerritory'])
ex | _____no_output_____ | MIT | covid_data_analysis_solution.ipynb | rahulkumbhar8888/DataScience |
From the unique state values above it is clear that Telangana is represented in multiple ways. We will replace each occurrence of the Telangana state name with the standard spelling | def clean_stateName(stateName):
if stateName == 'Telangana***':
stateName = 'Telangana'
elif stateName == 'Telengana':
stateName = 'Telangana'
elif stateName == 'Telengana***':
stateName = 'Telangana'
return stateName | _____no_output_____ | MIT | covid_data_analysis_solution.ipynb | rahulkumbhar8888/DataScience |
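As an aside, the same cleanup can be sketched with a dictionary lookup instead of an if/elif chain; `telangana_variants` is a hypothetical name used only for this illustration.

```python
# Map every observed spelling variant to the canonical name,
# falling back to the original value for everything else.
telangana_variants = {
    'Telangana***': 'Telangana',
    'Telengana': 'Telangana',
    'Telengana***': 'Telangana',
}

def clean_state_name(state_name):
    return telangana_variants.get(state_name, state_name)
```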
- The apply method is used to apply either a user-defined or built-in function across every cell of a dataframe - Commonly a lambda function is passed to the apply method for each cell - A lambda function is a small anonymous function. - A lambda function can take any number of arguments, but can only have one expression. | df["State/UnionTerritory"] = df["State/UnionTerritory"].apply(lambda x: clean_stateName(x))
np.unique(df["State/UnionTerritory"]) # to identify all unique values in a column of dataframe or array
def daily_cases(dframe, stateColumn,dateColumn, cummColumn):
# Sort column containing state and then by date in ascending order
dframe.sort_values(by = [stateColumn, dateColumn], inplace = True)
newColName = 'daily_' + cummColumn
dframe[newColName] = dframe[cummColumn].diff() # diff is a pandas method to calculate the difference between consecutive values
# print(dframe.tail())
'''
The line below uses the pandas shift method to compare consecutive state names; the != comparison builds a boolean list that is True where the state name differs from the previous row and False otherwise
'''
mask = dframe[stateColumn] != dframe[stateColumn].shift(1)
dframe[newColName][mask] = np.nan # where value of mask =True the cell value will be replaced by NaN
dframe[newColName] = dframe[newColName].apply(lambda x: 0 if x < 0 else x) # replace negative values by 0
# dframe.drop('diffs',axis=1, inplace = True)
return dframe
df_new = daily_cases(dframe= df, stateColumn= 'State/UnionTerritory',dateColumn= 'Date', cummColumn= 'Confirmed')
df_new[df_new["State/UnionTerritory"]=="Maharashtra"].tail(n=5) | _____no_output_____ | MIT | covid_data_analysis_solution.ipynb | rahulkumbhar8888/DataScience |
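A minimal, self-contained toy example (made-up data, not from the COVID dataset) showing why the shift-based mask is needed: the first row of each state would otherwise be differenced against the previous state's cumulative total.

```python
import pandas as pd
import numpy as np

toy = pd.DataFrame({'state': ['A', 'A', 'A', 'B', 'B'],
                    'cum':   [1, 3, 6, 2, 5]})
toy['daily'] = toy['cum'].diff()
mask = toy['state'] != toy['state'].shift(1)  # True on the first row of each state
toy.loc[mask, 'daily'] = np.nan
print(toy)
```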
Q1. Find which state has highest mean of cummulative confirmed cases since reported from Jan 2020 | # Hint : Groupby state names to find their means for confirmed cases
df_group = df_new.groupby(["State/UnionTerritory"])['daily_Confirmed'].mean()
df_group = df_group.sort_values(ascending= False)[0:10]
df_group
df_group.index
ax = sns.lineplot(x=df_group.index, y= df_group.values)
plt.scatter(x=df_group.index, y= df_group.values, c = 'r')
ax.figure.set_figwidth(12)
ax.figure.set_figheight(4)
ax.set_ylabel("Mean of Daily Confirmed Cases") | _____no_output_____ | MIT | covid_data_analysis_solution.ipynb | rahulkumbhar8888/DataScience |
Q2. Which state has highest Death Rate for the month of June, July & Aug | # Hint - explore how a datetime column of dataframe can be filtered using specific months
df_months = df_new['Date'].apply(lambda x: x.month in [6,7,8]) # this creates a boolean mask based on comparing the month
df_final = df_new[df_months] # Filtered dataframe consisting of data from June, July & Aug
df_final.tail()
df_final['death_rate'] = df_final['Deaths'] / df_final['Confirmed'] *100
df_final.tail()
df_groups_deaths = df_final.groupby(["State/UnionTerritory"])['death_rate'].mean()
top_10_deathrates = df_groups_deaths.sort_values(ascending= False)[0:10]
fig, ax = plt.subplots()
fig.set_figwidth(15)
fig.set_figheight(6)
ax.bar(x = top_10_deathrates.index, height = top_10_deathrates.values)
ax.set_xlabel('States')
ax.set_ylabel('Death Rates %')
ax.set_title('Top 10 States with Highest Death Rate since June 2020')
for i, v in enumerate(top_10_deathrates.values):
ax.text(i, v, s = ("%.2f" % v), color='blue', fontweight='bold', fontsize = 12) # %.2f will print decimals upto 2 places
plt.xticks(rotation=45) # this line will rotate the x axis label in 45 degrees to make it more readable | _____no_output_____ | MIT | covid_data_analysis_solution.ipynb | rahulkumbhar8888/DataScience |
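A possible vectorized alternative to the apply/lambda month filter above, assuming `df_new['Date']` is already a datetime column as in the notebook.

```python
# dt.month.isin avoids calling a Python lambda on every row
summer_mask = df_new['Date'].dt.month.isin([6, 7, 8])
df_final_alt = df_new[summer_mask]
```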
Q3. Explore Trend in Confirmed Cases for the state of Maharashtra- Plot line graph with x axis as Date column and y axis as daily confirmed cases. - such a graph is also called a Time Series Plot. Hint - search Google or the matplotlib docs for plotting a time series graph from a dataframe | df_mah = df_new[df_new["State/UnionTerritory"]=='Maharashtra']
fig, ax = plt.subplots()
fig.set_figwidth(15)
fig.set_figheight(6)
ax.plot(df_mah["Date"],df_mah["daily_Confirmed"])
df_mah = df_final[df_final["State/UnionTerritory"]=='Maharashtra']
fig, ax = plt.subplots()
fig.set_figwidth(15)
fig.set_figheight(6)
ax.plot(df_mah["Date"],df_mah["death_rate"])
ax.scatter(df_mah["Date"],df_mah["death_rate"])
ax.set_xlabel('Date')
ax.set_ylabel('Death Rate')
ax.set_title('Death Rate in Maharashtra') | _____no_output_____ | MIT | covid_data_analysis_solution.ipynb | rahulkumbhar8888/DataScience |
print((4 + 8) / 2) | 6.0
| MIT | solar-learn.ipynb | anasir514/colab |
Check values before feature selection in both training and test data- nan- different enough values | import pandas as pd
import glob
import os
training_df = pd.concat(map(pd.read_csv, glob.glob(os.path.join('../Data/Train', "*.csv"))), ignore_index=True)
test_df = test_data = pd.read_csv('../data/test.csv')
training_df_shape = training_df.shape
test_df_shape = test_df.shape
all_stations = set(training_df['station'])
def nan_analysis(column_name):
training_with_null_df = training_df[training_df[column_name].isnull()]
training_nan = training_with_null_df.shape
print(f'Number of Nan for {column_name}: {training_nan} of {training_df_shape}')
test_nan = test_df[test_df[column_name].isnull()].shape
print(f'Number of Nan for {column_name}: {test_nan} of {test_df_shape}')
return training_with_null_df[['station']]
def value_analysis(column_name):
return pd.merge(training_df[[column_name]].describe(),
test_df[[column_name]].describe(),
left_index=True,
right_index=True,
suffixes=('training', 'test'))
def station_ids_for_non_nan(column_name):
training_not_null = training_df[training_df[column_name].notnull()]
training_not_null_stations = set(training_not_null['station'])
print(f'Not nan for {column_name}: {training_not_null.shape} of {training_df_shape}')
print(f'Station with only null values: {all_stations - training_not_null_stations}')
| _____no_output_____ | BSD-2-Clause | notebooks/TestAndTrainingDataForFeatureSelection.ipynb | isabelladegen/mlp-2021 |
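As a cross-check of the helper functions above, a one-line NaN summary can be produced directly with pandas; this sketch reuses `training_df` from the cell above.

```python
nan_counts = training_df.isna().sum()
print(nan_counts[nan_counts > 0].sort_values(ascending=False))
```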
Weather Data | precipitation = 'precipitation.l.m2'
precipitation_nan = nan_analysis(precipitation)
value_analysis(precipitation)
# -> Training data has no values for precipitation, so it is not a good feature
column = 'temperature.C'
temperature_nan = nan_analysis(column)
value_analysis(column)
# min temperature is quite different between training and test but there seems to be enough data
column = 'windMaxSpeed.m.s'
windmax_nan = nan_analysis(column)
value_analysis(column)
column = 'windMeanSpeed.m.s'
windmean_nan = nan_analysis(column)
value_analysis(column)
column = 'windDirection.grades'
winddir_nan = nan_analysis(column)
value_analysis(column)
column = 'relHumidity.HR'
relhum_nan = nan_analysis(column)
value_analysis(column)
column = 'airPressure.mb'
airpressure_nan = nan_analysis(column)
value_analysis(column)
# all weather measure are missing 75
diff = set(airpressure_nan.index) - set(relhum_nan.index)
diff = set(winddir_nan.index) - set(relhum_nan.index) | _____no_output_____ | BSD-2-Clause | notebooks/TestAndTrainingDataForFeatureSelection.ipynb | isabelladegen/mlp-2021 |
Is Holiday | column = 'isHoliday'
nan_analysis(column)
value_analysis(column) | Number of Nan for isHoliday: (0, 25) of (55875, 25)
Number of Nan for isHoliday: (0, 25) of (2250, 25)
| BSD-2-Clause | notebooks/TestAndTrainingDataForFeatureSelection.ipynb | isabelladegen/mlp-2021 |
Bikes Profile Data | column = 'full_profile_3h_diff_bikes'
nan_analysis(column)
station_ids_for_non_nan(column)
value_analysis(column)
# every station has non-null values!
column = 'full_profile_bikes'
nan_analysis(column)
station_ids_for_non_nan(column)
value_analysis(column)
# select the non-NaN values
column = 'short_profile_3h_diff_bikes'
nan_analysis(column)
station_ids_for_non_nan(column)
value_analysis(column)
column = 'short_profile_bikes'
nan_analysis(column)
station_ids_for_non_nan(column)
value_analysis(column) | Number of Nan for short_profile_bikes: (12600, 25) of (55875, 25)
Number of Nan for short_profile_bikes: (0, 25) of (2250, 25)
Not nan for short_profile_bikes: (43275, 25) of (55875, 25)
Station with only null values: set()
| BSD-2-Clause | notebooks/TestAndTrainingDataForFeatureSelection.ipynb | isabelladegen/mlp-2021 |
17. Module, Package, Try_except, Numpy (1_20191011_014_Day4, part 2) Magic method recap - After writing a class, magic methods can be used to add basic operator behavior to class objects; worth noting! - When adding objects, num1 + num2 works instead of calling a plus(1,2) function! - Comparison - \__eq__(==), \__ne__(!=) - \__lt__(<, less than), \__le__(<=, less than or equal) - Arithmetic - \__add__(+), \__sub__(-), \__mul__(*), \__truediv__(/) - \__floordiv__(//), \__mod__(%), \__pow__(**) - Others - \__repr__ (plain representation of the object), \__str__ (print form of the object) **----> return str( ) -----> must return string data** | # Define a class using magic methods and run operations on its objects
# e.g.) create an Integer class | _____no_output_____ | MIT | 1.Study/2. with computer/4.Programming/2.Python/3. Study/01_Python/0408_1_Lecture_python.ipynb | jskim0406/Study |
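A small sketch (not from the original lecture) of the comparison magic methods listed above; `Money` is a made-up class for illustration.

```python
class Money:
    def __init__(self, amount):
        self.amount = amount
    def __eq__(self, other):
        return self.amount == other.amount
    def __lt__(self, other):
        return self.amount < other.amount
    def __le__(self, other):
        return self.amount <= other.amount
    def __repr__(self):
        return "Money({})".format(self.amount)

print(Money(3) < Money(5), Money(3) == Money(3))  # True True
```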
class Integer: def __init__(self,number): self.num = number; def __add__(self,unit): return self.num + unit.num; def __str__(self): return str(self.num); def __repr__(self): return str(self.num); num1 = Integer(1); num2 = Integer(2); num1 + num2. If you simply write num1 + num2 you would expect + to work, since the two variables hold 1 and 2. But num1 and num2 belong to Integer, a user-defined data type, so the basic __add__ behavior is not provided automatically. Therefore, to use the basic operators like this, the magic methods have to be defined on the class. | a = 1
a.__add__(2) # ====> a.num + 2.num ==== self.num + unit.num ===== def __add__(self, unit):
num1
print(num1) | <__main__.Integer object at 0x104eb7c90>
| MIT | 1.Study/2. with computer/4.Programming/2.Python/3. Study/01_Python/0408_1_Lecture_python.ipynb | jskim0406/Study |
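To see why defining the magic method matters, here is a small counter-example sketch: a class without `__add__` raises a TypeError when `+` is used.

```python
class Bare:
    def __init__(self, num):
        self.num = num

try:
    Bare(1) + Bare(2)
except TypeError as err:
    print("TypeError:", err)
```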
1. Class exercise - build an Account (bank account) class - Variables: asset, interest (rate) - Methods: draw (withdraw), insert (deposit), add_interest - When withdrawing, you cannot withdraw more than the current asset. | class Account:
def __init__(self,asset,interest=1.05):
self.asset = asset
self.interest = interest
def draw(self,amount):
if self.asset >= amount:
self.asset -= amount
print("{}์์ด ์ธ์ถ๋์์ต๋๋ค.".format(amount))
else:
print('You are {} won short.'.format((amount-self.asset)))
def insert(self,amount):
self.asset += amount
print('{} won has been deposited.'.format(amount))
def add_interest(self):
self.asset *= self.interest
print('{} won of interest has been deposited.'.format((self.asset*(self.interest-1))))
def __repr__(self):
return "asset : {}, interest : {}".format(self.asset, self.interest)
acc1 = Account(10000)
acc1.asset
acc1
acc1.draw(12000)
acc1.draw(3000)
acc1
acc1.insert(5000)
acc1
acc1.add_interest(),1
acc1 | 630.0000000000006 won of interest has been deposited.
| MIT | 1.Study/2. with computer/4.Programming/2.Python/3. Study/01_Python/0408_1_Lecture_python.ipynb | jskim0406/Study |
Module & package * variable, function < class < module < package - Module: a file with the .py extension that gathers variables, functions, and classes (a slightly larger unit than a class) - Package: one step larger than a module; module functionality organized into directories 1. Creating a module 2. Importing a module 1. Creating a module (creating a file) | !ls
%%writefile dss.py
# ๋ชจ๋ ํ์ผ ์์ฑ (๋งค์ง ์ปค๋งจ๋ ์ฌ์ฉ)
# 1) %% -> apply writefile to everything in this cell
# 2) create a file named dss.py and save the code written here into it
# ๋ชจ๋ ์์ฑ -> ํ์ผ ์ ์ฅ
# 1. ๋ชจ๋ ์์ฑ (๋ชจ๋ = ํด๋์ค, ํจ์, ๋ณ์์ set)
num = 1234
def disp1(msg):
print("disp1", msg)
def disp2(msg):
print('disp2', msg)
class Calc:
def plus(self, *args):
return sum(args)
!ls
%reset
%whos | Variable Type Data/Info
------------------------------
dss module <module 'school.dss.data1<...>/school/dss/data1.py'>
school module <module 'school' (namespace)>
url module <module 'school.web.url' <...>/school/web/url.py'>
| MIT | 1.Study/2. with computer/4.Programming/2.Python/3. Study/01_Python/0408_1_Lecture_python.ipynb | jskim0406/Study |
2. Importing a module | # Importing a module : import (the file name without .py)
import dss
%whos
dss.num
dss.disp1('hello')
calc = dss.Calc()
calc.plus(1,2,3,4,5,6) | _____no_output_____ | MIT | 1.Study/2. with computer/4.Programming/2.Python/3. Study/01_Python/0408_1_Lecture_python.ipynb | jskim0406/Study |
3. Importing specific variables and functions from a module | # import random --> loads the random module (brings in the code written in the file random.py)
# random.randint(1,5) --> uses the randint function inside the random module.
# calc.plus --> uses the plus function from the dss module.
# ๋ชจ๋ ์์ ํน์ ํจ์, ๋ณ์, ํด๋์ค ํธ์ถ
# '๋ชจ๋.๋ณ์' ๋ก ์ ์ง ์๊ณ , '๋ชจ๋' ๋ก ๋ฐ๋ก ํธ์ถ ๊ฐ๋ฅ
from dss import num, disp2
%whos
dss.num
num | _____no_output_____ | MIT | 1.Study/2. with computer/4.Programming/2.Python/3. Study/01_Python/0408_1_Lecture_python.ipynb | jskim0406/Study |
4. Importing every variable and function from a module | %reset
from dss import *
%whos | Variable Type Data/Info
--------------------------------
Calc type <class 'dss.Calc'>
calc Calc <dss.Calc object at 0x109baed10>
disp1 function <function disp1 at 0x109a88ef0>
disp2 function <function disp2 at 0x109ab75f0>
dss module <module 'dss' from '/User<...>/dss.py'>
num int 1234
school module <module 'school' (namespace)>
url module <module 'school.web.url' <...>/school/web/url.py'>
| MIT | 1.Study/2. with computer/4.Programming/2.Python/3. Study/01_Python/0408_1_Lecture_python.ipynb | jskim0406/Study |
5. Packages - creating a package - importing a package - writing a setup.py package install file - package (directory) : module (file) 1) Create the package directories (dss / web) | # !mkdir -p ---> create the dss directory under school
!mkdir -p school/dss
# !mkdir -p ---> create the web directory under school
!mkdir -p school/web
!tree school | school
├── dss
│   ├── __init__.py
│   ├── data1.py
│   └── data2.py
└── web
    ├── __init__.py
    └── url.py
2 directories, 5 files
| MIT | 1.Study/2. with computer/4.Programming/2.Python/3. Study/01_Python/0408_1_Lecture_python.ipynb | jskim0406/Study |
Installing tree - install homebrew - homebrew : https://brew.sh/index_ko - homebrew : a package management and install tool for OSX - /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)" - brew install tree 2) Create the modules (files) | # this step is no longer needed from Python 3.8 onwards
# !touch --> create a file
!touch school/dss/__init__.py
!touch school/web/__init__.py
!tree school
%%writefile school/dss/data1.py
# add a module (file) inside the dss package
# add a module (file) inside the web directory
def plus(*args):
print('data1')
return sum(args)
%%writefile school/dss/data2.py
def plus2(*args):
print('data2')
return sum(args)
%%writefile school/web/url.py
def make(url):
return url if url[:7] == 'http://' else 'http://'+url
!tree school | school
├── dss
│   ├── __init__.py
│   ├── data1.py
│   └── data2.py
└── web
    ├── __init__.py
    └── url.py
2 directories, 5 files
| MIT | 1.Study/2. with computer/4.Programming/2.Python/3. Study/01_Python/0408_1_Lecture_python.ipynb | jskim0406/Study |
3) Drill down the package path to a module and use it | import school.dss.data1
%whos
# call: school directory -> dss directory -> data1 module -> plus function
school.dss.data1.plus(1,2,3)
# the import statement is too long: import school.dss.data1
# create a short alias
import school.dss.data1 as dss
dss.plus(1,2,3)
# school, web : directories
# url : module
from school.web import url
url.make('google.com')
# ํจํค์ง์ ์์น : ํน์ ๋๋ ํ ๋ฆฌ์ ์๋ ํจํค์ง๋ ์ด๋์์๋ import ๊ฐ๋ฅ
import random
import sys
for path in sys.path:
print(path)
# !ls /Users/kimjeongseob/opt/anaconda3/lib/python3.7
# ์๋์ ์ถ๋ ฅ ๊ฒฐ๊ณผ๋ฅผ ๋ณ์์๋ค ๋ฃ์ ์ ์์
A = !ls /Users/kimjeongseob/opt/anaconda3/lib/python3.7
len(A), A[-5:]
# write a setup.py so the package can be installed and used
# using setuptools | _____no_output_____ | MIT | 1.Study/2. with computer/4.Programming/2.Python/3. Study/01_Python/0408_1_Lecture_python.ipynb | jskim0406/Study |
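The comments above mention writing a setup.py, but the notebook stops before showing one; a minimal sketch using setuptools might look like this (the version number and options are assumptions for illustration).

```python
# setup.py (hypothetical contents)
from setuptools import setup, find_packages

setup(
    name="school",
    version="0.0.1",
    packages=find_packages(),  # picks up school, school.dss, school.web
)
```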
**Challenge 030** **Python 3 - World 1** Description: Create a program that reads an integer and shows on the screen whether it is EVEN or ODD. Link: https://www.youtube.com/watch?v=4vFCzKuHOn4&t=4s | num = int(input('Enter a number: '))
if num % 2 == 0:
print('The number is even.')
else:
print('The number is odd.') | _____no_output_____ | Apache-2.0 | Mundo01/Desafio030.ipynb | BrunaKuntz/PythonMundo01 |
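The same parity check can also be written as a single conditional expression; this is just an alternative sketch, not part of the original challenge.

```python
num = 7
print('The number is even.' if num % 2 == 0 else 'The number is odd.')
```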
Example 5 - Open-loop simulation An open-loop simulation is the case where no state-feedback control is used. It means that only time-dependent control is used, or no control at all. This kind of simulation is mainly useful for stability analysis and for checking the trimmed behavior (including perturbations around the trimmed conditions). Import atmosphere model | from pyaat.atmosphere import atmosCOESA
atm = atmosCOESA() | _____no_output_____ | MIT | examples/open-loop_simulation_example.ipynb | KenedyMatiasso/PyAAT |
Import gravity model | from pyaat.gravity import VerticalConstant
grav = VerticalConstant() | _____no_output_____ | MIT | examples/open-loop_simulation_example.ipynb | KenedyMatiasso/PyAAT |
Import Aircraft model | from pyaat.aircraft import Aircraft
airc = Aircraft() | _____no_output_____ | MIT | examples/open-loop_simulation_example.ipynb | KenedyMatiasso/PyAAT |
Import propulsion model | from pyaat.propulsion import SimpleModel
prop = SimpleModel() | _____no_output_____ | MIT | examples/open-loop_simulation_example.ipynb | KenedyMatiasso/PyAAT |
Create a system | from pyaat.system import system
Complete_system = system(atmosphere = atm, propulsion = prop, aircraft = airc, gravity = grav) | _____no_output_____ | MIT | examples/open-loop_simulation_example.ipynb | KenedyMatiasso/PyAAT |
Trim at cruise condition | Xe, Ue = Complete_system.trimmer(condition='cruize', HE = 10000., VE = 200)
Printing equilibrium states and controls | from pyaat.tools import printInfo
printInfo(Xe, Ue, frame ='body')
printInfo(Xe, Ue, frame ='aero')
printInfo(Xe, Ue, frame='controls') | --------------------------------
----------- CONTROLS -----------
--------------------------------
delta_p
34.65222851433093
-------------
delta_e
-2.208294991778133
-------------
delta_a
4.978810759532202e-22
-------------
delta_r
-8.268303092392625e-22
| MIT | examples/open-loop_simulation_example.ipynb | KenedyMatiasso/PyAAT |
Simulation The open-loop simulation is carried out using the method 'propagate'. Mandatory inputs are the time of simulation TF, the equilibrium states Xe, the equilibrium control Ue, and a boolean variable called 'perturbation' which defines whether a perturbation is applied during the simulation or not. Equilibrium simulation | solution, control = Complete_system.propagate(Xe, Ue, TF = 180, perturbation = False) | _____no_output_____ | MIT | docs/source/examples/open-loop_simulation_example.ipynb | KenedyMatiasso/PyAAT |
The outputs are two multidimensional arrays, containing the states over time and the control over time. | print('Solution')
solution
print('control')
control | control
| MIT | examples/open-loop_simulation_example.ipynb | KenedyMatiasso/PyAAT |
The time array can be accessed directly on the system. | time = Complete_system.time
time | _____no_output_____ | MIT | examples/open-loop_simulation_example.ipynb | KenedyMatiasso/PyAAT |
Check out the documentation for more information about the outputs. Plotting the results Some plots can be generated directly using the plotter utility embedded within PyAAT. | from pyaat.tools import plotter
pltr = plotter()
pltr.states = solution
pltr.time = Complete_system.time
pltr.control = control
pltr.LinVel(frame = 'body')
pltr.LinPos()
pltr.Attitude()
pltr.AngVel()
pltr.Controls()
pltr.LinPos3D() | _____no_output_____ | MIT | examples/open-loop_simulation_example.ipynb | KenedyMatiasso/PyAAT |
All states and controls remain constant over time, as expected. Out-of-equilibrium simulations Sometimes we may be interested in verifying the behavior of the aircraft out of the equilibrium states. It can be done by applying perturbations. Note that you would obtain the same result if you input a vector Xe out of equilibrium, but consider that it may cause confusion and in more advanced simulations (considering closed-loop control) it might lead to errors. Perturbation on states | solution, control = Complete_system.propagate(Xe, Ue, T0 = 0.0, TF = 30.0, dt = 0.01, perturbation = True, state = {'beta':2., 'alpha':2.})
pltr.states = solution
pltr.time = Complete_system.time
pltr.control = control
pltr.LinVel(frame = 'aero')
pltr.LinPos()
pltr.Attitude()
pltr.AngVel()
pltr.Controls() | _____no_output_____ | MIT | examples/open-loop_simulation_example.ipynb | KenedyMatiasso/PyAAT |
open-loop control Some usual control inputs are also embeeded within the toolbox, such as the doublet and step. | from pyaat.control import doublet, step | _____no_output_____ | MIT | examples/open-loop_simulation_example.ipynb | KenedyMatiasso/PyAAT |
Doublet input on elevator | doub = doublet()
doub.command = 'elevator'
doub.amplitude = 3
doub.T = 1
doub.t_init = 2 | _____no_output_____ | MIT | examples/open-loop_simulation_example.ipynb | KenedyMatiasso/PyAAT |
Step input on aileron | st =step()
st.command = 'aileron'
st.amplitude = 1
st.t_init = 2
solution, control = Complete_system.propagate(Xe, Ue, TF = 50, perturbation=True, control = [doub, st]) | _____no_output_____ | MIT | examples/open-loop_simulation_example.ipynb | KenedyMatiasso/PyAAT |
One can input as many control perturbations as desired, and they can be combined with state perturbations if desired. | pltr.states = solution
pltr.time = Complete_system.time
pltr.control = control
pltr.Controls()
pltr.LinVel(frame = 'aero')
pltr.LinPos()
pltr.Attitude()
pltr.AngVel()
pltr.LinPos3D() | _____no_output_____ | MIT | examples/open-loop_simulation_example.ipynb | KenedyMatiasso/PyAAT |
___ ___ Pandas Built-in Data Visualization In this lecture we will learn about pandas built-in capabilities for data visualization! It's built off of matplotlib, but it's baked into pandas for easier usage! Let's take a look! Imports | import numpy as np
import pandas as pd
%matplotlib inline | _____no_output_____ | Apache-2.0 | 04-Visualization-Matplotlib-Pandas/04-02-Pandas Visualization/Pandas Built-in Data Visualization.ipynb | rikimarutsui/Python-for-Finance-Repo |
The DataThere are some fake data csv files you can read in as dataframes: | df1 = pd.read_csv('df1',index_col=0)
df2 = pd.read_csv('df2') | _____no_output_____ | Apache-2.0 | 04-Visualization-Matplotlib-Pandas/04-02-Pandas Visualization/Pandas Built-in Data Visualization.ipynb | rikimarutsui/Python-for-Finance-Repo |
Style Sheets Matplotlib has [style sheets](http://matplotlib.org/gallery.html#style_sheets) you can use to make your plots look a little nicer. These style sheets include bmh, fivethirtyeight, ggplot and more. They basically create a set of style rules that your plots follow. I recommend using them; they make all your plots have the same look and feel more professional. You can even create your own if you want your company's plots to all have the same look (it is a bit tedious to create one though). Here is how to use them. **Before plt.style.use() your plots look like this:** | df1['A'].hist()
Call the style: | import matplotlib.pyplot as plt
plt.style.use('ggplot') | _____no_output_____ | Apache-2.0 | 04-Visualization-Matplotlib-Pandas/04-02-Pandas Visualization/Pandas Built-in Data Visualization.ipynb | rikimarutsui/Python-for-Finance-Repo |
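If you are unsure which style names your matplotlib build ships with, they can be listed directly; a small sketch:

```python
import matplotlib.pyplot as plt
print(plt.style.available)  # e.g. 'bmh', 'ggplot', 'fivethirtyeight', ...
```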
Now your plots look like this: | df1['A'].hist()
plt.style.use('bmh')
df1['A'].hist()
plt.style.use('dark_background')
df1['A'].hist()
plt.style.use('fivethirtyeight')
df1['A'].hist()
plt.style.use('ggplot') | _____no_output_____ | Apache-2.0 | 04-Visualization-Matplotlib-Pandas/04-02-Pandas Visualization/Pandas Built-in Data Visualization.ipynb | rikimarutsui/Python-for-Finance-Repo |
Let's stick with the ggplot style and actually show you how to utilize pandas built-in plotting capabilities! Plot TypesThere are several plot types built-in to pandas, most of them statistical plots by nature:* df.plot.area * df.plot.barh * df.plot.density * df.plot.hist * df.plot.line * df.plot.scatter* df.plot.bar * df.plot.box * df.plot.hexbin * df.plot.kde * df.plot.pieYou can also just call df.plot(kind='hist') or replace that kind argument with any of the key terms shown in the list above (e.g. 'box','barh', etc..)___ Let's start going through them! Area | df2.plot.area(alpha=0.4) | _____no_output_____ | Apache-2.0 | 04-Visualization-Matplotlib-Pandas/04-02-Pandas Visualization/Pandas Built-in Data Visualization.ipynb | rikimarutsui/Python-for-Finance-Repo |
Barplots | df2.head()
df2.plot.bar()
df2.plot.bar(stacked=True) | _____no_output_____ | Apache-2.0 | 04-Visualization-Matplotlib-Pandas/04-02-Pandas Visualization/Pandas Built-in Data Visualization.ipynb | rikimarutsui/Python-for-Finance-Repo |
Histograms | df1['A'].plot.hist(bins=50) | _____no_output_____ | Apache-2.0 | 04-Visualization-Matplotlib-Pandas/04-02-Pandas Visualization/Pandas Built-in Data Visualization.ipynb | rikimarutsui/Python-for-Finance-Repo |
Line Plots | df1.plot.line(x=df1.index,y='B',figsize=(12,3),lw=1) | _____no_output_____ | Apache-2.0 | 04-Visualization-Matplotlib-Pandas/04-02-Pandas Visualization/Pandas Built-in Data Visualization.ipynb | rikimarutsui/Python-for-Finance-Repo |
Scatter Plots | df1.plot.scatter(x='A',y='B') | _____no_output_____ | Apache-2.0 | 04-Visualization-Matplotlib-Pandas/04-02-Pandas Visualization/Pandas Built-in Data Visualization.ipynb | rikimarutsui/Python-for-Finance-Repo |
You can use c to color based off another column value. Use cmap to indicate which colormap to use. For all the colormaps, check out: http://matplotlib.org/users/colormaps.html | df1.plot.scatter(x='A',y='B',c='C',cmap='coolwarm')
Or use s to indicate size based off another column. s parameter needs to be an array, not just the name of a column: | df1.plot.scatter(x='A',y='B',s=df1['C']*200) | C:\Users\Marcial\Anaconda3\lib\site-packages\matplotlib\collections.py:877: RuntimeWarning: invalid value encountered in sqrt
scale = np.sqrt(self._sizes) * dpi / 72.0 * self._factor
| Apache-2.0 | 04-Visualization-Matplotlib-Pandas/04-02-Pandas Visualization/Pandas Built-in Data Visualization.ipynb | rikimarutsui/Python-for-Finance-Repo |
BoxPlots | df2.plot.box() # Can also pass a by= argument for groupby | _____no_output_____ | Apache-2.0 | 04-Visualization-Matplotlib-Pandas/04-02-Pandas Visualization/Pandas Built-in Data Visualization.ipynb | rikimarutsui/Python-for-Finance-Repo |
Hexagonal Bin PlotUseful for Bivariate Data, alternative to scatterplot: | df = pd.DataFrame(np.random.randn(1000, 2), columns=['a', 'b'])
df.plot.hexbin(x='a',y='b',gridsize=25,cmap='Oranges') | _____no_output_____ | Apache-2.0 | 04-Visualization-Matplotlib-Pandas/04-02-Pandas Visualization/Pandas Built-in Data Visualization.ipynb | rikimarutsui/Python-for-Finance-Repo |
____ Kernel Density Estimation plot (KDE) | df2['a'].plot.kde()
df2.plot.density() | _____no_output_____ | Apache-2.0 | 04-Visualization-Matplotlib-Pandas/04-02-Pandas Visualization/Pandas Built-in Data Visualization.ipynb | rikimarutsui/Python-for-Finance-Repo |
Amazon Shure MV7 EDA and Sentiment Analysis- toc: true- branch: master- badges: true- comments: true- categories: [Fastpages, Jupyter, Python, Selenium, Stoc]- annotations: true- hide: false- image: images/diagram.png- layout: post- search_exclude: true Required Packages: [wordcloud](https://github.com/amueller/word_cloud), [geopandas](https://geopandas.org/en/stable/getting_started/install.html), [nbformat](https://pypi.org/project/nbformat/), [seaborn](https://seaborn.pydata.org/installing.html), [scikit-learn](https://scikit-learn.org/stable/install.html) Now let's get started! First things first, you need to load all the necessary libraries: | import pandas as pd
from matplotlib import pyplot as plt
import numpy as np
from wordcloud import WordCloud
from wordcloud import STOPWORDS
import re
import plotly.graph_objects as go
import seaborn as sns | _____no_output_____ | Apache-2.0 | _notebooks/2022-02-01-EDA-test.ipynb | christopherGuan/sample-ds-blog |
Read the Data | #Import Data
df = pd.read_csv("/Users/zeyu/Desktop/DS/Ebay & Amazon/Amazon_reviews_scraping/Amazon_reviews_scraping/full_reviews.csv") | _____no_output_____ | Apache-2.0 | _notebooks/2022-02-01-EDA-test.ipynb | christopherGuan/sample-ds-blog |
Data Cleaning Step 1: - Split the Date column into Country and Date - Combine the two rating columns into one - Convert the date column from string to datetime | #Clean Data
info = []
for i in df["date"]:
x = re.sub("Reviewed in ", "", i)
x1 = re.sub(" on ", "*", x)
info.append(x1)
df["date"] = pd.DataFrame({"date": info})
df[['country','date']] = df.date.apply(
lambda x: pd.Series(str(x).split("*")))
star = []
star = df.stars1.combine_first(df.stars2)
df["star"] = pd.DataFrame({"star": star})
del df['stars1']
del df['stars2']
#Convert String to Date
df.date = pd.to_datetime(df.date) | _____no_output_____ | Apache-2.0 | _notebooks/2022-02-01-EDA-test.ipynb | christopherGuan/sample-ds-blog |
Step 2:- Two methods to verify if column "star" contain any NaN- Converted the type of column "star" from string to Int | "nan" in df['star']
df_no_star = df[df['star'].isna()]
df_no_star
#Convert 2.0 out of 5 stars to 2
df_int = []
#df_with_star["stars"] = [str(x).replace(':',' ') for x in df["stars"]]
for i in df["star"]:
x = re.sub(".0 out of 5 stars", "", i)
df_int.append(x)
df["rating"] = pd.DataFrame({"rating": df_int})
df["rating"] = df["rating"].astype(int)
del df['star'] | _____no_output_____ | Apache-2.0 | _notebooks/2022-02-01-EDA-test.ipynb | christopherGuan/sample-ds-blog |
This is what the data looks like after cleaning. EDA | temp = df['rating'].value_counts()
fig = go.Figure(go.Bar(
x=temp,
y=temp.index,
orientation='h'))
fig.show()
df_country = df['country'].value_counts()
fig = go.Figure(go.Bar(
x=df_country,
y=df_country.index,
orientation='h'))
fig.show()
mean_rating = df['rating'].mean()
mean_rating
"""fig = px.line(df, x=temp.index, y=temp.rating, title='Life expectancy in Canada')
fig.show()"""
import plotly.express as px
temp = df.groupby([df['date'].dt.date]).mean()
temp
#Average rating each month
temp = df.groupby(df['date'].dt.strftime('%B'))['rating'].mean().sort_values()
order_temp = temp.reindex(["January", "February", "March", "April", "May", "June", "July", "August", "September", "November", "December"])
order_temp.plot()
#Quantity of reviews in each month.
temp = df.groupby(df['date'].dt.strftime('%B'))['rating'].count().sort_values()
order_temp = temp.reindex(["January", "February", "March", "April", "May", "June", "July", "August", "September", "November", "December"])
order_temp.plot()
#Many words are useless so create a stopword list
stopwords = set(STOPWORDS)
stopwords.update(["Mic", "Microphone", "using","sound","use"])
def cleaned_visualise_word_map(x):
words=" "
for msg in x:
msg = str(msg).lower()
words = words+msg+" "
wordcloud = WordCloud(stopwords = stopwords, width=3000, height=2500, background_color='white').generate(words)
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 14
fig_size[1] = 7
#Display image appear more smoothly
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
plt.show(wordcloud)
cleaned_visualise_word_map(df["review"])
df = df[df['rating'] != 3]
df['sentiment'] = df['rating'].apply(lambda rating : +1 if rating > 3 else -1)
positive = df[df['sentiment'] == 1]
negative = df[df['sentiment'] == -1]
df['sentimentt'] = df['sentiment'].replace({-1 : 'negative'})
df['sentimentt'] = df['sentimentt'].replace({1 : 'positive'})
fig = px.histogram(df, x="sentimentt")
fig.update_traces(marker_color="indianred",marker_line_color='rgb(8,48,107)',
marker_line_width=1.5)
fig.update_layout(title_text='Product Sentiment')
fig.show()
stopwords = set(STOPWORDS)
#stopwords.update(["Mic", "Microphone", "using", "sound", "use"])
## good and great removed because they were included in negative sentiment
pos = " ".join(review for review in positive.title)
wordcloud2 = WordCloud(stopwords=stopwords).generate(pos)
plt.imshow(wordcloud2, interpolation='bilinear')
plt.axis("off")
plt.show()
pos = " ".join(review for review in negative.title)
wordcloud2 = WordCloud(stopwords=stopwords).generate(pos)
plt.imshow(wordcloud2, interpolation='bilinear')
plt.axis("off")
plt.show() | _____no_output_____ | Apache-2.0 | _notebooks/2022-02-01-EDA-test.ipynb | christopherGuan/sample-ds-blog |
Sentiment Analysis | def remove_punctuation(text):
final = "".join(u for u in text if u not in ("?", ".", ";", ":", "!",'"'))
return final
df['review'] = df['review'].apply(remove_punctuation)
df = df.dropna(subset=['title'])
df['title'] = df['title'].apply(remove_punctuation)
dfNew = df[['title','sentiment']]
dfNew.head()
dfLong = df[['review','sentiment']]
dfLong.head()
index = df.index
df['random_number'] = np.random.randn(len(index))
train = df[df['random_number'] <= 0.8]
test = df[df['random_number'] > 0.8]
#change df frame to a bag of words
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(token_pattern=r'\b\w+\b') | _____no_output_____ | Apache-2.0 | _notebooks/2022-02-01-EDA-test.ipynb | christopherGuan/sample-ds-blog |
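To make the bag-of-words idea concrete, here is a tiny standalone demo on two made-up reviews (it uses `get_feature_names_out`, which assumes a reasonably recent scikit-learn version).

```python
from sklearn.feature_extraction.text import CountVectorizer

toy_reviews = ["great mic great sound", "bad sound"]
cv = CountVectorizer(token_pattern=r'\b\w+\b')
bow = cv.fit_transform(toy_reviews)
print(cv.get_feature_names_out())  # the learned vocabulary
print(bow.toarray())               # word counts per review
```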
[Vectorizer](https://towardsdatascience.com/hacking-scikit-learns-vectorizers-9ef26a7170af) &[Bag-of-Words](https://towardsdatascience.com/hacking-scikit-learns-vectorizers-9ef26a7170af) | train_matrix = vectorizer.fit_transform(train['title'])
test_matrix = vectorizer.transform(test['title'])
train_matrix_l = vectorizer.fit_transform(train['review'])
test_matrix_l = vectorizer.transform(test['review'])
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
X_train = train_matrix
X_test = test_matrix
y_train = train['sentiment']
y_test = test['sentiment']
X_train_l = train_matrix_l
X_test_l = test_matrix_l
y_train_l = train['sentiment']
y_test_l = test['sentiment']
lr.fit(X_train,y_train)
lr.fit(X_train_l,y_train_l)
predictions = lr.predict(X_test)
predictions_l = lr.predict(X_test_l)
# find accuracy, precision, recall:
from sklearn.metrics import confusion_matrix,classification_report
new = np.asarray(y_test)
confusion_matrix(predictions,y_test)
long = np.asarray(y_test_l)
confusion_matrix(predictions_l,y_test_l)
print(classification_report(predictions,y_test))
#0.88 Accuracy
print(classification_report(predictions_l,y_test_l)) | precision recall f1-score support
-1 0.00 0.00 0.00 0
1 1.00 0.89 0.94 116
accuracy 0.89 116
macro avg 0.50 0.44 0.47 116
weighted avg 1.00 0.89 0.94 116
| Apache-2.0 | _notebooks/2022-02-01-EDA-test.ipynb | christopherGuan/sample-ds-blog |
**Part 1:** Event Selection Optimization 1) Make a stacked histogram plot for the feature variable: mass | fig, ax = plt.subplots(1,1)
ax.hist(higgs_events['mass'],density = True,alpha = 0.8, label = 'higgs')
ax.hist(qcd_events['mass'],density = True,alpha = 0.8, label = 'qcd')
plt.legend(fontsize = 18)
plt.show() | _____no_output_____ | MIT | Labs/Labs5-8/Lab7.ipynb | jeff-abe/PHYS434 |
The expected number of background events is 20,000 and is Poisson distributed $\cdot$ Use Poisson statistics for the significance calculation | np.random.seed(123)
dist = stats.poisson.rvs(20000, size = 10000)
plt.hist(dist,density = True, bins = np.linspace(19450,20550,50), label = 'Expected Yield Distribution')
plt.axvline(20100,color = 'red',label = 'Observed Yield')
plt.legend(fontsize = 18)
plt.show()
print('Significance of 20100 events:', np.round(stats.norm.isf(stats.poisson.sf(20100,20000)),3),'sigma') | Significance of 20100 events: 0.711 sigma
| MIT | Labs/Labs5-8/Lab7.ipynb | jeff-abe/PHYS434 |
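A quick cross-check sketch (reusing the same scipy calls as the lab) of the two numbers compared in the next paragraph: the simple s/sqrt(b) estimate versus the Poisson tail probability converted to a normal sigma.

```python
from scipy import stats
import numpy as np

n_bkg, n_obs = 20000, 20100
naive = (n_obs - n_bkg) / np.sqrt(n_bkg)                 # ~0.707
exact = stats.norm.isf(stats.poisson.sf(n_obs, n_bkg))   # ~0.711
print(round(naive, 3), round(exact, 3))
```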
$\frac{\textbf{N}_{Higgs}}{\sqrt{\textbf{N}_{QCD}}} = \frac{100}{\sqrt{20000}} = 0.707$ This value is different from the value obtained in the previous calculation. This is because $\frac{\textbf{N}_{Higgs}}{\sqrt{\textbf{N}_{QCD}}}$ is the number of standard deviations the measurement lies away from the mean, while the number from the above calculation is the probability of the background producing a value larger than the observed value, converted to the standard normal distribution's $\sigma$. | def mult_cut(qcd,higgs,features,cuts):
'''
Parameters:
qcd - qcd data dictionary
higgs - higgs data dictionary
features (list) - the features to apply cuts to
cuts (list of touples) - in format ((min,max),(min,max))
Returns:
number of qcd and higgs events
cut min and max
significance
'''
qcd_factor = 20000/len(qcd)
higgs_factor = 100/len(higgs)
mu = qcd
signal = higgs
for i in range(0,len(features)):
a = np.array(mu[features[i]])
b = np.array(signal[features[i]])
mu = mu[:][np.logical_and(a>cuts[i][0], a<cuts[i][1])]
signal = signal[:][np.logical_and(b>cuts[i][0], b<cuts[i][1])]
mu = len(mu)*qcd_factor
signal = len(signal)*higgs_factor
sig = np.round(stats.norm.isf(stats.poisson.sf(mu + signal,mu)),3)
print(features,'cuts', cuts ,'leaves',mu,'expected qcd events and',signal,'expected higgs events')
print('Significance of', mu+signal ,'events:',sig,'sigma')
print('---------------------------------------------\n') | _____no_output_____ | MIT | Labs/Labs5-8/Lab7.ipynb | jeff-abe/PHYS434 |
2) Identify mass cuts to optimize the expected significance | s = 120
for n in range(0,7):
mult_cut(qcd_dict,new_dict,['mass'],[(s,150)])
s+=1
s = 132
for n in range(0,7):
mult_cut(qcd_dict,new_dict,['mass'],[(124,s)])
s-=1 | ['mass'] cuts [(124, 132)] leaves 724.6 expected qcd events and 69.554 expected higgs events
Significance of 794.154 events: 2.563 sigma
---------------------------------------------
['mass'] cuts [(124, 131)] leaves 640.6 expected qcd events and 68.992 expected higgs events
Significance of 709.592 events: 2.682 sigma
---------------------------------------------
['mass'] cuts [(124, 130)] leaves 551.6 expected qcd events and 67.891 expected higgs events
Significance of 619.491 events: 2.842 sigma
---------------------------------------------
['mass'] cuts [(124, 129)] leaves 469.20000000000005 expected qcd events and 65.21600000000001 expected higgs events
Significance of 534.416 events: 2.956 sigma
---------------------------------------------
['mass'] cuts [(124, 128)] leaves 382.8 expected qcd events and 60.361000000000004 expected higgs events
Significance of 443.161 events: 3.034 sigma
---------------------------------------------
['mass'] cuts [(124, 127)] leaves 291.40000000000003 expected qcd events and 53.394 expected higgs events
Significance of 344.79400000000004 events: 3.032 sigma
---------------------------------------------
['mass'] cuts [(124, 126)] leaves 197.8 expected qcd events and 38.963 expected higgs events
Significance of 236.763 events: 2.68 sigma
---------------------------------------------
| MIT | Labs/Labs5-8/Lab7.ipynb | jeff-abe/PHYS434 |
Cut optimization was performed on the unsampled data in order to not overfit the cuts to the sample selected. The optimal cuts kept data with a mass between 124 and 128, and with those cuts yielded a measurement significance of 3.034 sigma. 3) Make stacked histogram plots for the rest of the features With and without optimal mass cuts | plt.rcParams["figure.figsize"] = (20,50)
fig, ((ax1,ax2),(ax3,ax4),(ax5,ax6),(ax7,ax8),(ax9,ax10),(ax11,ax12),(ax13,ax14),(ax15,ax16),(ax17,ax18),(ax19,ax20),(ax21,ax22),(ax23,ax24),(ax25,ax26),(ax27,ax28)) = plt.subplots(14,2)
axes = ((ax1,ax2),(ax3,ax4),(ax5,ax6),(ax7,ax8),(ax9,ax10),(ax11,ax12),(ax13,ax14),(ax15,ax16),(ax17,ax18),(ax19,ax20),(ax21,ax22),(ax23,ax24),(ax25,ax26),(ax27,ax28))
labels = ['pt', 'eta', 'phi', 'mass', 'ee2', 'ee3', 'd2', 'angularity', 't1', 't2', 't3', 't21', 't32', 'KtDeltaR']
a = np.array(new_dict['mass'])
b = np.array(qcd_dict['mass'])
for i in range(0,14):
axes[i][0].hist(new_dict[labels[i]],density = True, alpha = 0.7,label = 'higgs')
axes[i][0].hist(qcd_dict[labels[i]],density = True, alpha = 0.7,label = 'qcd')
axes[i][0].set_xlabel(labels[i])
axes[i][0].legend()
axes[i][1].hist(new_dict[labels[i]][np.logical_and(a<135, a>124)],density = True, alpha = 0.7,label = 'higgs with mass cuts')
axes[i][1].hist(qcd_dict[labels[i]][np.logical_and(b<135, b>124)],density = True, alpha = 0.7,label = 'qcd with mass cuts')
axes[i][1].set_xlabel(labels[i])
axes[i][1].legend()
plt.show() | _____no_output_____ | MIT | Labs/Labs5-8/Lab7.ipynb | jeff-abe/PHYS434 |
4) Optimize event selections using multiple features | mult_cut(qcd_dict,new_dict,['d2'],[(0,1.42)])
mult_cut(qcd_dict,new_dict,['t3'],[(0,0.17)])
mult_cut(qcd_dict,new_dict,['KtDeltaR'],[(0.48,0.93)])
mult_cut(qcd_dict,new_dict,['ee2'],[(0.11,0.21)])
mult_cut(qcd_dict,new_dict,['d2'],[(0,1.42)])
mult_cut(qcd_events,higgs_events,['mass','d2'],[(124,128),(0,1.42)])
mult_cut(qcd_events,higgs_events,['mass','KtDeltaR'],[(124,128),(0.48,0.93)])
mult_cut(qcd_events,higgs_events,['mass','ee2'],[(124,128),(0.11,0.21)])
mult_cut(qcd_events,higgs_events,['mass','t3'],[(124,128),(0,0.17)])
mult_cut(qcd_events,higgs_events,['mass','d2','KtDeltaR'],[(124,128),(0,1.42),(0.48,0.93)]) | ['mass', 'd2', 'KtDeltaR'] cuts [(124, 128), (0, 1.42), (0.48, 0.93)] leaves 18.0 expected qcd events and 51.0 expected higgs events
Significance of 69.0 events: 9.238 sigma
---------------------------------------------
| MIT | Labs/Labs5-8/Lab7.ipynb | jeff-abe/PHYS434 |
5) Plot 2-dimensional scatter plots of the top two most discriminative features | plt.rcParams["figure.figsize"] = (20,10)
fig, (ax1,ax2) = plt.subplots(1,2)
ax1.plot(qcd_dict['mass'],qcd_dict['d2'],color = 'red', label = 'QCD',ls='',marker='.',alpha=0.5)
ax1.plot(new_dict['mass'],qcd_dict['d2'],color = 'blue',label = 'Higgs',ls='',marker='.',alpha=0.5)
ax1.legend(fontsize = 18)
ax1.set_xlabel('mass',fontsize = 18)
ax1.set_ylabel('d2',fontsize = 18)
ax2.plot(qcd_dict['mass'],qcd_dict['KtDeltaR'],color = 'red', label = 'QCD',ls='',marker='.',alpha=0.5)
ax2.plot(new_dict['mass'],qcd_dict['KtDeltaR'],color = 'blue',label = 'Higgs',ls='',marker='.',alpha=0.5)
ax2.legend(fontsize = 18)
ax2.set_xlabel('mass',fontsize = 18)
ax2.set_ylabel('KtDeltaR',fontsize = 18)
plt.show() | _____no_output_____ | MIT | Labs/Labs5-8/Lab7.ipynb | jeff-abe/PHYS434 |
Using Machine Learning to predict | sample_train, sample_test = train_test_split(sample,test_size = 0.2)
X_train = sample_train.drop('label',axis = 1)
y_train = sample_train['label']
X_test = sample_test.drop('label',axis = 1)
y_test = sample_test['label']
mdl = MLPClassifier(hidden_layer_sizes = (8,20,20,8,8,4),max_iter=200,alpha = 10**-6,learning_rate = 'invscaling')
mdl.fit(X_train,y_train)
sum(mdl.predict(X_test) == y_test)/len(y_test)
from sklearn.metrics import confusion_matrix
conf = confusion_matrix(y_test,mdl.predict(X_test))
print([conf[1]*100/sum(y_test == 1),conf[0]*20000/sum(y_test == 0)])
true_higgs = conf[1][1]*100/sum(y_test == 1)
false_higgs = conf[0][1]*20000/sum(y_test == 0)
print(false_higgs,true_higgs)
sig = stats.norm.isf(stats.poisson.sf(k = true_higgs+false_higgs, mu = false_higgs))
print("significance using neural network is",np.round(sig,3),'sigma') | significance using neural network is 2.132 sigma
| MIT | Labs/Labs5-8/Lab7.ipynb | jeff-abe/PHYS434 |
The machine learning model chosen was less effective than the cuts that I had determined by hand. With a more optimized loss function I'm sure machine learning would outperform manually selected cuts, but in this instance it didn't. **Part 2:** Pseudo-experiment data analysis | # Defining a function to make cuts and return the cut data, not calculating significance like the previous function
def straight_cut(data,features,cuts):
for i in range(0,len(features)):
a = np.array(data[features[i]])
data = data[:][np.logical_and(a>cuts[i][0], a<cuts[i][1])]
return data | _____no_output_____ | MIT | Labs/Labs5-8/Lab7.ipynb | jeff-abe/PHYS434 |
1) High Luminosity | plt.rcParams["figure.figsize"] = (20,30)
fig, ((ax1,ax2),(ax3,ax4),(ax5,ax6)) = plt.subplots(3,2)
axes = (ax1,ax2,ax3,ax4,ax5,ax6)
features = ['mass','d2','KtDeltaR','ee2','t3','ee3']
for i in range(0,6):
counts,bins = np.histogram(new_dict[features[i]],bins = 50)
axes[i].hist(bins[:-1],bins, weights = counts*40344/100000, color = 'red',label = 'Higgs',alpha = 0.7)
counts,bins = np.histogram(qcd_dict[features[i]],bins = 50)
axes[i].hist(bins[:-1],bins, weights = counts*40344/100000, color = 'blue',label = 'QCD',alpha = 0.7)
axes[i].hist(high_lumi[features[i]], color = 'green',label = 'data', bins = 50,alpha = 0.7)
axes[i].legend()
plt.show()
plt.rcParams["figure.figsize"] = (20,30)
fig, ((ax1,ax2),(ax3,ax4),(ax5,ax6)) = plt.subplots(3,2)
axes = (ax1,ax2,ax3,ax4,ax5,ax6)
features = ['mass','d2','KtDeltaR','ee2','t3','ee3']
cut_higgs = straight_cut(new_dict,['mass','d2','KtDeltaR'],[(124,128),(0,1.42),(0.48,0.93)])
cut_qcd = straight_cut(qcd_dict,['mass','d2','KtDeltaR'],[(124,128),(0,1.42),(0.48,0.93)])
cut_high = straight_cut(high_lumi,['mass','d2','KtDeltaR'],[(124,128),(0,1.42),(0.48,0.93)])
for i in range(0,6):
counts,bins = np.histogram(cut_higgs[features[i]])
axes[i].hist(bins[:-1],bins, weights = counts*40344/100000, color = 'red',label = 'Higgs',alpha = 0.7)
counts,bins = np.histogram(cut_qcd[features[i]])
axes[i].hist(bins[:-1],bins, weights = counts*40344/100000, color = 'blue',label = 'QCD',alpha = 0.7)
axes[i].hist(cut_high[features[i]], color = 'green',label = 'data',alpha = 0.7)
axes[i].legend()
axes[i].set_yscale('log')
plt.show()
n_qcd = len(cut_qcd)*40344/100000
n_observed = len(cut_high)
sig = np.round(stats.norm.isf(stats.poisson.sf(n_observed,n_qcd)),3)
print('Significance of', n_observed ,'events:',sig,'sigma') | Significance of 128 events: 10.724 sigma
| MIT | Labs/Labs5-8/Lab7.ipynb | jeff-abe/PHYS434 |
The same cuts made on the simulated data gave a lower significance of $9.2\sigma$ 2) Low Luminosity | plt.rcParams["figure.figsize"] = (20,30)
fig, ((ax1,ax2),(ax3,ax4),(ax5,ax6)) = plt.subplots(3,2)
axes = (ax1,ax2,ax3,ax4,ax5,ax6)
features = ['mass','d2','KtDeltaR','ee2','t3','ee3']
for i in range(0,6):
counts,bins = np.histogram(new_dict[features[i]],bins = 50)
axes[i].hist(bins[:-1],bins, weights = counts*4060/100000, color = 'red',label = 'Higgs',alpha = 0.7)
counts,bins = np.histogram(qcd_dict[features[i]],bins = 50)
axes[i].hist(bins[:-1],bins, weights = counts*4060/100000, color = 'blue',label = 'QCD',alpha = 0.7)
axes[i].hist(low_lumi[features[i]], color = 'green',label = 'data', bins = 50,alpha = 0.7)
axes[i].legend()
plt.show()
plt.rcParams["figure.figsize"] = (20,30)
fig, ((ax1,ax2),(ax3,ax4),(ax5,ax6)) = plt.subplots(3,2)
axes = (ax1,ax2,ax3,ax4,ax5,ax6)
features = ['mass','d2','KtDeltaR','ee2','t3','ee3']
cut_low = straight_cut(low_lumi,['mass','d2','KtDeltaR'],[(124,128),(0,1.42),(0.48,0.93)])
for i in range(0,6):
counts,bins = np.histogram(cut_higgs[features[i]])
axes[i].hist(bins[:-1],bins, weights = counts*4060/100000, color = 'red',label = 'Higgs',alpha = 0.7)
counts,bins = np.histogram(cut_qcd[features[i]])
axes[i].hist(bins[:-1],bins, weights = counts*4060/100000, color = 'blue',label = 'QCD',alpha = 0.7)
axes[i].hist(cut_low[features[i]], color = 'green',label = 'data',alpha = 0.7)
axes[i].legend()
axes[i].set_yscale('log')
plt.show()
n_qcd = len(cut_qcd)*4060/100000
n_observed = len(cut_low)
sig = np.round(stats.norm.isf(stats.poisson.sf(n_observed,n_qcd)),3)
print('Significance of', n_observed ,'events:',sig,'sigma') | Significance of 9 events: 2.273 sigma
| MIT | Labs/Labs5-8/Lab7.ipynb | jeff-abe/PHYS434 |
3) Confidence Levels of signal yield. 95% upper limit for signal yield, low luminosity: $$\sum_{k = 9}^{\infty}P(\mu,k) = 0.95$$ $$P(\mu,k) = \frac{e^{-\mu}\mu^k}{k!}$$ $$\sum_{k = 0}^{9}\frac{e^{-\mu}\mu^k}{k!} = 0.05$$ $$\mu = 15.71$$ | print('With a true signal of 15.71, the probability seeing something stronger than 9 events is:',np.round(stats.poisson.sf(9,15.71),4))
| MIT | Labs/Labs5-8/Lab7.ipynb | jeff-abe/PHYS434 |
This means that 95% of the time we would see more than 9 events if there were a true signal strength of 15.71 events. For the low luminosity data we expected to see 4.22 events; since the data is Poisson distributed we will round up to 5 events in order to get more than 95%: $$\sum_{k = 5}^{\infty}P(\mu,k) = 0.95$$ $$P(\mu,k) = \frac{e^{-\mu}\mu^k}{k!}$$ $$\sum_{k = 0}^{5}\frac{e^{-\mu}\mu^k}{k!} = 0.05$$ $$\mu = 10.51$$ | prob = 0
mu = 128
while prob>0.05:
prob = stats.poisson.cdf(128,mu)
mu+=0.02
print(mu,prob)
print('With a true signal of 10.513, the probability seeing something stronger than 4.22 events is:',np.round(stats.poisson.sf(4.22,10.513),4)) | With a true signal of 10.513, the probability seeing something stronger than 4.22 events is: 0.9791
| MIT | Labs/Labs5-8/Lab7.ipynb | jeff-abe/PHYS434 |
Weighting in taxcalc_helpers Setup | import numpy as np
import pandas as pd
import taxcalc as tc
import microdf as mdf
tc.__version__ | _____no_output_____ | MIT | docs/weighting.ipynb | MaxGhenis/taxcalc-helpers |
Load data Start with a `DataFrame` with `nu18` and `XTOT`, and also calculate `XTOT_m`. | df = mdf.calc_df(group_vars=['nu18'], metric_vars=['XTOT'])
df.columns | _____no_output_____ | MIT | docs/weighting.ipynb | MaxGhenis/taxcalc-helpers |
From this we can calculate the number of people and tax units by the tax unit's number of children. | df.groupby('nu18')[['s006_m', 'XTOT_m']].sum() | _____no_output_____ | MIT | docs/weighting.ipynb | MaxGhenis/taxcalc-helpers |
What if we also want to calculate the total number of *children* by the tax unit's number of children?For this we can use `add_weighted_metrics`, the function called within `calc_df`. | mdf.add_weighted_metrics(df, ['nu18']) | _____no_output_____ | MIT | docs/weighting.ipynb | MaxGhenis/taxcalc-helpers |
Now we can do the same thing as before, with the new `nu18_m` column. | df.groupby('nu18')[['nu18_m']].sum() | _____no_output_____ | MIT | docs/weighting.ipynb | MaxGhenis/taxcalc-helpers |
We can also calculate weighted sums without adding the weighted metric. | total_children = mdf.weighted_sum(df, 'nu18', 's006')
# Fix this decimal.
'Total children: ' + str(round(total_children / 1e6)) + 'M.' | _____no_output_____ | MIT | docs/weighting.ipynb | MaxGhenis/taxcalc-helpers |
We can also calculate the weighted mean and median. | mdf.weighted_mean(df, 'nu18', 's006')
mdf.weighted_median(df, 'nu18', 's006') | _____no_output_____ | MIT | docs/weighting.ipynb | MaxGhenis/taxcalc-helpers |
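Roughly speaking, the weighted mean above is just a weighted average of `nu18` using the `s006` tax-unit weights; a sketch of the manual equivalent, reusing the `df` built earlier:

```python
import numpy as np
manual_mean = np.average(df['nu18'], weights=df['s006'])
print(manual_mean)
```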
We can also look at more quantiles.*Note that weighted quantiles have a different interface.* | decile_bounds = np.arange(0, 1.1, 0.1)
deciles = mdf.weighted_quantile(df, 'nu18', 's006', decile_bounds)
pd.DataFrame(deciles, index=decile_bounds) | _____no_output_____ | MIT | docs/weighting.ipynb | MaxGhenis/taxcalc-helpers |
Natural and artificial perturbations | import functools
import numpy as np
import matplotlib.pyplot as plt
plt.ion()
from astropy import units as u
from astropy.time import Time
from astropy.coordinates import solar_system_ephemeris
from poliastro.twobody.propagation import propagate, cowell
from poliastro.ephem import build_ephem_interpolant
from poliastro.core.elements import rv2coe
from poliastro.core.util import norm
from poliastro.util import time_range
from poliastro.core.perturbations import (
atmospheric_drag, third_body, J2_perturbation
)
from poliastro.bodies import Earth, Moon
from poliastro.twobody import Orbit
from poliastro.plotting import OrbitPlotter2D, OrbitPlotter3D | _____no_output_____ | MIT | docs/source/examples/Natural and artificial perturbations.ipynb | helgee/poliastro |
Atmospheric drag The poliastro package now has several commonly used natural perturbations. One of them is atmospheric drag! See how one can monitor decay of the near-Earth orbit over time using our new module poliastro.twobody.perturbations! | R = Earth.R.to(u.km).value
k = Earth.k.to(u.km**3 / u.s**2).value
orbit = Orbit.circular(Earth, 250 * u.km, epoch=Time(0.0, format='jd', scale='tdb'))
# parameters of a body
C_D = 2.2 # dimensionless (any value would do)
A = ((np.pi / 4.0) * (u.m**2)).to(u.km**2).value # km^2
m = 100 # kg
B = C_D * A / m
# parameters of the atmosphere
rho0 = Earth.rho0.to(u.kg / u.km**3).value # kg/km^3
H0 = Earth.H0.to(u.km).value
tof = (100000 * u.s).to(u.day).value
tr = time_range(0.0, periods=2000, end=tof, format='jd', scale='tdb')
cowell_with_ad = functools.partial(cowell, ad=atmospheric_drag,
R=R, C_D=C_D, A=A, m=m, H0=H0, rho0=rho0)
rr = propagate(
orbit, (tr - orbit.epoch).to(u.s), method=cowell_with_ad
)
plt.ylabel('h(t)')
plt.xlabel('t, days')
plt.plot(tr.value, rr.data.norm() - Earth.R); | _____no_output_____ | MIT | docs/source/examples/Natural and artificial perturbations.ipynb | helgee/poliastro |
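The decay above is driven by the exponential atmosphere implied by `rho0` and `H0`; a small sanity-check sketch of that density model (assuming the usual rho(h) = rho0 * exp(-h / H0) form) at the 250 km starting altitude:

```python
import numpy as np
from astropy import units as u
from poliastro.bodies import Earth

rho0 = Earth.rho0.to(u.kg / u.km**3).value  # kg/km^3
H0 = Earth.H0.to(u.km).value                # km
h = 250.0                                   # altitude in km
print(rho0 * np.exp(-h / H0), "kg/km^3")
```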
Evolution of RAAN due to the J2 perturbation We can also see how the J2 perturbation changes RAAN over time! | r0 = np.array([-2384.46, 5729.01, 3050.46]) * u.km
v0 = np.array([-7.36138, -2.98997, 1.64354]) * u.km / u.s
orbit = Orbit.from_vectors(Earth, r0, v0)
tof = 48.0 * u.h
# This will be easier with propagate
# when this is solved:
# https://github.com/poliastro/poliastro/issues/257
rr, vv = cowell(
Earth.k,
orbit.r,
orbit.v,
np.linspace(0, tof, 2000),
ad=J2_perturbation,
J2=Earth.J2.value,
R=Earth.R.to(u.km).value
)
k = Earth.k.to(u.km**3 / u.s**2).value
rr = rr.to(u.km).value
vv = vv.to(u.km / u.s).value
raans = [rv2coe(k, r, v)[3] for r, v in zip(rr, vv)]
plt.ylabel('RAAN(t)')
plt.xlabel('t, h')
plt.plot(np.linspace(0, tof, 2000), raans); | _____no_output_____ | MIT | docs/source/examples/Natural and artificial perturbations.ipynb | helgee/poliastro |
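For reference, secular J2 theory predicts a nodal regression rate $\dot{\Omega} = -\frac{3}{2}\, n\, J_2 (R_E/p)^2 \cos i$. The sketch below (our own addition, reusing `orbit` and `k` from the cell above) evaluates this for comparison with the numerical drift. | # Analytic secular J2 nodal-regression rate (sketch for comparison with the numerical RAAN drift above).
a_km = orbit.a.to(u.km).value
p = a_km * (1 - orbit.ecc.value ** 2)  # semi-latus rectum, km
n = np.sqrt(k / a_km ** 3)  # mean motion, rad/s
raan_dot = -1.5 * n * Earth.J2.value * (Earth.R.to(u.km).value / p) ** 2 * np.cos(orbit.inc.to(u.rad).value)
print("Analytic RAAN drift:", raan_dot * 3600, "rad/h") | _____no_output_____ | MIT | docs/source/examples/Natural and artificial perturbations.ipynb | helgee/poliastro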
3rd body Apart from time-independent perturbations such as atmospheric drag, J2/J3, we have time-dependent perturbations. Let's see how the Moon changes the orbit of a GEO satellite over time! | # database keeping positions of bodies in the Solar System over time
solar_system_ephemeris.set('de432s')
j_date = 2454283.0 * u.day # setting the exact event date is important
tof = (60 * u.day).to(u.s).value
# create interpolant of 3rd body coordinates (calling it on every iteration would be just too slow)
body_r = build_ephem_interpolant(Moon, 28 * u.day, (j_date, j_date + 60 * u.day), rtol=1e-2)
epoch = Time(j_date, format='jd', scale='tdb')
initial = Orbit.from_classical(Earth, 42164.0 * u.km, 0.0001 * u.one, 1 * u.deg,
0.0 * u.deg, 0.0 * u.deg, 0.0 * u.rad, epoch=epoch)
# multiply Moon gravity by 400 so that the effect is visible :)
cowell_with_3rdbody = functools.partial(cowell, rtol=1e-6, ad=third_body,
k_third=400 * Moon.k.to(u.km**3 / u.s**2).value,
third_body=body_r)
tr = time_range(j_date.value, periods=1000, end=j_date.value + 60, format='jd', scale='tdb')
rr = propagate(
initial, (tr - initial.epoch).to(u.s), method=cowell_with_3rdbody
)
frame = OrbitPlotter3D()
frame.set_attractor(Earth)
frame.plot_trajectory(rr, label='orbit influenced by Moon') | _____no_output_____ | MIT | docs/source/examples/Natural and artificial perturbations.ipynb | helgee/poliastro |
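As a rough way to quantify the (exaggerated) lunar effect, the sketch below reuses `rr` from above and looks at how much the orbital radius wanders from the nominal GEO value. | # Spread of the orbital radius under the exaggerated lunar perturbation.
r_norm = rr.data.norm()
print("Minimum radius:", r_norm.min())
print("Maximum radius:", r_norm.max())
print("Spread:", r_norm.max() - r_norm.min()) | _____no_output_____ | MIT | docs/source/examples/Natural and artificial perturbations.ipynb | helgee/poliastro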
Thrusts Apart from natural perturbations, there are artificial thrusts aimed at intentionally changing orbit parameters. One such change is a simultaneous change of eccentricity and inclination. | from poliastro.twobody.thrust import change_inc_ecc
ecc_0, ecc_f = 0.4, 0.0
a = 42164 # km
inc_0 = 0.0 # rad, baseline
inc_f = (20.0 * u.deg).to(u.rad).value # rad
argp = 0.0 # rad, the method is efficient for 0 and 180
f = 2.4e-6 # km / s2
k = Earth.k.to(u.km**3 / u.s**2).value
s0 = Orbit.from_classical(
Earth,
a * u.km, ecc_0 * u.one, inc_0 * u.deg,
0 * u.deg, argp * u.deg, 0 * u.deg,
epoch=Time(0, format='jd', scale='tdb')
)
a_d, _, _, t_f = change_inc_ecc(s0, ecc_f, inc_f, f)
cowell_with_ad = functools.partial(cowell, rtol=1e-6, ad=a_d)
tr = time_range(0.0, periods=1000, end=(t_f * u.s).to(u.day).value, format='jd', scale='tdb')
rr2 = propagate(
s0, (tr - s0.epoch).to(u.s), method=cowell_with_ad
)
frame = OrbitPlotter3D()
frame.set_attractor(Earth)
frame.plot_trajectory(rr2, label='orbit with artificial thrust') | _____no_output_____ | MIT | docs/source/examples/Natural and artificial perturbations.ipynb | helgee/poliastro |
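The guidance law also returns the time needed to complete the maneuver; as a small check (a sketch reusing `t_f` from above), we can express it in days. | # Duration of the combined eccentricity/inclination change returned by change_inc_ecc.
print("Maneuver duration:", (t_f * u.s).to(u.day)) | _____no_output_____ | MIT | docs/source/examples/Natural and artificial perturbations.ipynb | helgee/poliastro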
Reason for these tests An issue was raised in [ISSUE_1](https://github.com/frankaging/Reason-SCAN/issues/1), where the reporter found some discrepancies in split numbers. Specifically, the `test` split in our main data frame does not match up with our sub-test splits `p1`, `p2` and `p3`. The report further exposes another issue with our documentation about the splits (i.e., how we generate our splits). Thus, we use this live debug notebook to address these comments. The Issue | import os, json
p1_test_path_to_data = "../../ReaSCAN-v1.0/ReaSCAN-compositional-p1-test/data-compositional-splits.txt"
print(f"Reading dataset from file: {p1_test_path_to_data}...")
p1_test_data = json.load(open(p1_test_path_to_data, "r"))
print(len(p1_test_data["examples"]["test"]))
p2_test_path_to_data = "../../ReaSCAN-v1.0/ReaSCAN-compositional-p2-test/data-compositional-splits.txt"
print(f"Reading dataset from file: {p2_test_path_to_data}...")
p2_test_data = json.load(open(p2_test_path_to_data, "r"))
print(len(p2_test_data["examples"]["test"]))
p3_test_path_to_data = "../../ReaSCAN-v1.0/ReaSCAN-compositional-p3-test/data-compositional-splits.txt"
print(f"Reading dataset from file: {p3_test_path_to_data}...")
p3_test_data = json.load(open(p3_test_path_to_data, "r"))
print(len(p3_test_data["examples"]["test"]))
len(p1_test_data["examples"]["test"]) + len(p2_test_data["examples"]["test"]) + len(p3_test_data["examples"]["test"])
ReaSCAN_path_to_data = "../../ReaSCAN-v1.0/ReaSCAN-compositional/data-compositional-splits.txt"
print(f"Reading dataset from file: {ReaSCAN_path_to_data}...")
ReaSCAN_data = json.load(open(ReaSCAN_path_to_data, "r"))
p1_test_example_filtered = []
p2_test_example_filtered = []
p3_test_example_filtered = []
for example in ReaSCAN_data["examples"]["test"]:
if example['derivation'] == "$OBJ_0":
p1_test_example_filtered += [example]
elif example['derivation'] == "$OBJ_0 ^ $OBJ_1":
p2_test_example_filtered += [example]
elif example['derivation'] == "$OBJ_0 ^ $OBJ_1 & $OBJ_2":
p3_test_example_filtered += [example]
print(f"p1 test example count={len(p1_test_example_filtered)}")
print(f"p2 test example count={len(p2_test_example_filtered)}")
print(f"p3 test example count={len(p3_test_example_filtered)}")
len(p1_test_example_filtered) + len(p2_test_example_filtered) + len(p3_test_example_filtered) | _____no_output_____ | CC-BY-4.0 | code/dataset/verify_split_tests.ipynb | frankaging/Reason-SCAN |
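To make the discrepancy concrete (a small sketch reusing the objects loaded above), we can print the standalone sub-test split sizes next to the counts filtered from the main `test` split. | # Compare the standalone sub-test splits against the pattern-filtered counts from the main test split.
for name, standalone, filtered in [
    ("p1", p1_test_data, p1_test_example_filtered),
    ("p2", p2_test_data, p2_test_example_filtered),
    ("p3", p3_test_data, p3_test_example_filtered),
]:
    print(f"{name}: standalone={len(standalone['examples']['test'])}, filtered from main={len(filtered)}") | _____no_output_____ | CC-BY-4.0 | code/dataset/verify_split_tests.ipynb | frankaging/Reason-SCAN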
For instance, as you can see, `p1 test example count` should be equal to `921`, but it is not. However, the total number of test examples does match up. The **root cause** is potentially that our sub-test splits were created asynchronously with the test split in the main data. Before confirming the **root cause**, we first need to analyze the actual **impact** on performance numbers: does it change our results qualitatively, or just quantitatively? We come up with some tests around this issue, starting from basic and moving to more complex. Test-1: Validity We need to ensure our sub-test splits **only** contain commands that appear in the training set. Otherwise, our test splits become compositional splits. | train_command_set = set([])
for example in ReaSCAN_data["examples"]["train"]:
train_command_set.add(example["command"])
for example in p1_test_data["examples"]["test"]:
assert example["command"] in train_command_set
for example in p2_test_data["examples"]["test"]:
assert example["command"] in train_command_set
for example in p3_test_data["examples"]["test"]:
assert example["command"] in train_command_set
print("Test-1 Passed") | Test-1 Passed
| CC-BY-4.0 | code/dataset/verify_split_tests.ipynb | frankaging/Reason-SCAN |
Test-2: Overestimating? What about the shape world? Are there overlaps between train and test? | import hashlib
train_example_hash = set([])
for example in ReaSCAN_data["examples"]["train"]:
example_hash_object = hashlib.md5(json.dumps(example).encode('utf-8'))
train_example_hash.add(example_hash_object.hexdigest())
assert len(train_example_hash) == len(ReaSCAN_data["examples"]["train"])
p1_test_example_hash = set([])
for example in p1_test_data["examples"]["test"]:
example_hash_object = hashlib.md5(json.dumps(example).encode('utf-8'))
p1_test_example_hash.add(example_hash_object.hexdigest())
assert len(p1_test_example_hash) == len(p1_test_data["examples"]["test"])
p2_test_example_hash = set([])
for example in p2_test_data["examples"]["test"]:
example_hash_object = hashlib.md5(json.dumps(example).encode('utf-8'))
p2_test_example_hash.add(example_hash_object.hexdigest())
assert len(p2_test_example_hash) == len(p2_test_data["examples"]["test"])
p3_test_example_hash = set([])
for example in p3_test_data["examples"]["test"]:
example_hash_object = hashlib.md5(json.dumps(example).encode('utf-8'))
p3_test_example_hash.add(example_hash_object.hexdigest())
assert len(p3_test_example_hash) == len(p3_test_data["examples"]["test"])
p1_test_dup_count = 0
for hash_str in p1_test_example_hash:
if hash_str in train_example_hash:
p1_test_dup_count += 1
p2_test_dup_count = 0
for hash_str in p2_test_example_hash:
if hash_str in train_example_hash:
p2_test_dup_count += 1
p3_test_dup_count = 0
for hash_str in p3_test_example_hash:
if hash_str in train_example_hash:
p3_test_dup_count += 1
print(f"p1_test_dup_count={p1_test_dup_count}")
print(f"p2_test_dup_count={p2_test_dup_count}")
print(f"p3_test_dup_count={p3_test_dup_count}")
main_p1_test_example_hash = set([])
for example in p1_test_example_filtered:
example_hash_object = hashlib.md5(json.dumps(example).encode('utf-8'))
main_p1_test_example_hash.add(example_hash_object.hexdigest())
assert len(main_p1_test_example_hash) == len(p1_test_example_filtered)
main_p1_test_dup_count = 0
for hash_str in main_p1_test_example_hash:
if hash_str in train_example_hash:
main_p1_test_dup_count += 1
print(f"main_p1_test_dup_count={main_p1_test_dup_count}") | main_p1_test_dup_count=0
| CC-BY-4.0 | code/dataset/verify_split_tests.ipynb | frankaging/Reason-SCAN |
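To gauge how much this could inflate reported performance (a quick sketch using the counts computed above), we can express the duplicates as a share of each random sub-test split. | # Duplicated examples as a fraction of each random sub-test split.
print(f"p1 overlap: {p1_test_dup_count / len(p1_test_example_hash):.1%}")
print(f"p2 overlap: {p2_test_dup_count / len(p2_test_example_hash):.1%}")
print(f"p3 overlap: {p3_test_dup_count / len(p3_test_example_hash):.1%}") | _____no_output_____ | CC-BY-4.0 | code/dataset/verify_split_tests.ipynb | frankaging/Reason-SCAN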
**Conclusion**: Yes. As you can see, we have many duplicated examples in our random test splits. This means that we need to use updated test splits for evaluating performance. As a result, **Table 3** in the paper needs to be updated, since it currently overestimates model performance for the non-generalizing test splits (e.g., `p1`, `p2` and `p3`). **Action Required**: We need to re-evaluate model performance on those splits. Test-3: Does this issue affect any other generalization splits? Do our generalization splits contain duplicates? | def get_example_hash_set(split):
split_test_path_to_data = f"../../ReaSCAN-v1.0/ReaSCAN-compositional-{split}/data-compositional-splits.txt"
print(f"Reading dataset from file: {split_test_path_to_data}...")
split_test_data = json.load(open(split_test_path_to_data, "r"))
split_test_data_test_example_hash = set([])
for example in split_test_data["examples"]["test"]:
example_hash_object = hashlib.md5(json.dumps(example).encode('utf-8'))
split_test_data_test_example_hash.add(example_hash_object.hexdigest())
assert len(split_test_data_test_example_hash) == len(split_test_data["examples"]["test"])
return split_test_data_test_example_hash
a1_hash = get_example_hash_set("a1")
a2_hash = get_example_hash_set("a2")
a3_hash = get_example_hash_set("a3")
b1_hash = get_example_hash_set("b1")
b2_hash = get_example_hash_set("b2")
c1_hash = get_example_hash_set("c1")
c2_hash = get_example_hash_set("c2")
a1_dup_count = 0
for hash_str in a1_hash:
if hash_str in train_example_hash:
a1_dup_count += 1
a2_dup_count = 0
for hash_str in a2_hash:
if hash_str in train_example_hash:
a2_dup_count += 1
a3_dup_count = 0
for hash_str in a3_hash:
if hash_str in train_example_hash:
a3_dup_count += 1
print(f"a1_dup_count={a1_dup_count}")
print(f"a2_dup_count={a2_dup_count}")
print(f"a3_dup_count={a3_dup_count}")
b1_dup_count = 0
for hash_str in b1_hash:
if hash_str in train_example_hash:
b1_dup_count += 1
b2_dup_count = 0
for hash_str in b2_hash:
if hash_str in train_example_hash:
b2_dup_count += 1
print(f"b1_dup_count={b1_dup_count}")
print(f"b2_dup_count={b2_dup_count}")
c1_dup_count = 0
for hash_str in c1_hash:
if hash_str in train_example_hash:
c1_dup_count += 1
c2_dup_count = 0
for hash_str in c2_hash:
if hash_str in train_example_hash:
c2_dup_count += 1
print(f"c1_dup_count={c1_dup_count}")
print(f"c2_dup_count={c2_dup_count}") | c1_dup_count=0
c2_dup_count=0
| CC-BY-4.0 | code/dataset/verify_split_tests.ipynb | frankaging/Reason-SCAN |
**Conclusion**: No. Test-4: What about the correctness of the generalization splits in general? We see there are no duplicates, but what about general correctness? Are they created correctly? In this section, we add more sanity checks to show the correctness of each generalization split. For each split, we verify two things: (1) the generalization split can ONLY contain test examples that it is designed to test; (2) the training split DOES NOT contain examples that are aligned with the generalization split. A1: novel color modifier | split_test_path_to_data = f"../../ReaSCAN-v1.0/ReaSCAN-compositional-a1/data-compositional-splits.txt"
print(f"Reading dataset from file: {split_test_path_to_data}...")
split_test_data = json.load(open(split_test_path_to_data, "r"))
for example in split_test_data["examples"]["test"]:
assert "yellow,square" in example["command"]
for example in ReaSCAN_data["examples"]["train"]:
assert "yellow,square" not in example["command"] | _____no_output_____ | CC-BY-4.0 | code/dataset/verify_split_tests.ipynb | frankaging/Reason-SCAN |
A2: novel color attribute | # this test may be a little too weak for now. maybe improve it to verify the shape world?
split_test_path_to_data = f"../../ReaSCAN-v1.0/ReaSCAN-compositional-a2/data-compositional-splits.txt"
print(f"Reading dataset from file: {split_test_path_to_data}...")
split_test_data = json.load(open(split_test_path_to_data, "r"))
for example in ReaSCAN_data["examples"]["train"]:
assert "red,square" not in example["command"]
for example in split_test_data["examples"]["test"]:
if "red,square" not in example["command"]:
# then, some background object referred in the command needs to be a red square!!
if example["derivation"] == "$OBJ_0":
assert example['situation']['placed_objects']['0']['object']['shape'] == "square"
assert example['situation']['placed_objects']['0']['object']['color'] == "red"
elif example["derivation"] == "$OBJ_0 ^ $OBJ_1":
assert example['situation']['placed_objects']['0']['object']['shape'] == "square" or example['situation']['placed_objects']['1']['object']['shape'] == "square"
assert example['situation']['placed_objects']['0']['object']['color'] == "red" or example['situation']['placed_objects']['1']['object']['color'] == "red"
elif example["derivation"] == "$OBJ_0 ^ $OBJ_1 & $OBJ_2":
assert example['situation']['placed_objects']['0']['object']['shape'] == "square" or example['situation']['placed_objects']['1']['object']['shape'] == "square" or example['situation']['placed_objects']['2']['object']['shape'] == "square"
assert example['situation']['placed_objects']['0']['object']['color'] == "red" or example['situation']['placed_objects']['1']['object']['color'] == "red" or example['situation']['placed_objects']['2']['object']['color'] == "red"
else:
pass | _____no_output_____ | CC-BY-4.0 | code/dataset/verify_split_tests.ipynb | frankaging/Reason-SCAN |
A3: novel size attribute | # this test may be a little too weak for now. maybe improve it to verify the shape world?
split_test_path_to_data = f"../../ReaSCAN-v1.0/ReaSCAN-compositional-a3/data-compositional-splits.txt"
print(f"Reading dataset from file: {split_test_path_to_data}...")
split_test_data = json.load(open(split_test_path_to_data, "r"))
for example in split_test_data["examples"]["test"]:
assert "small,cylinder" in example['command'] or \
"small,red,cylinder" in example['command'] or \
"small,blue,cylinder" in example['command'] or \
"small,yellow,cylinder" in example['command'] or \
"small,green,cylinder" in example['command']
for example in ReaSCAN_data["examples"]["train"]:
assert not ("small,cylinder" in example['command'] or \
"small,red,cylinder" in example['command'] or \
"small,blue,cylinder" in example['command'] or \
"small,yellow,cylinder" in example['command'] or \
"small,green,cylinder" in example['command']) | _____no_output_____ | CC-BY-4.0 | code/dataset/verify_split_tests.ipynb | frankaging/Reason-SCAN |
B1: novel co-occurrence of objects | # this test may be a little too weak for now. maybe improve it to verify the shape world?
split_test_path_to_data = f"../../ReaSCAN-v1.0/ReaSCAN-compositional-b1/data-compositional-splits.txt"
print(f"Reading dataset from file: {split_test_path_to_data}...")
split_test_data = json.load(open(split_test_path_to_data, "r"))
from collections import namedtuple, OrderedDict
seen_command_structs = {}
seen_concepts = {} # add in seen concepts, so we can select concepts that are seen, but new composites!
seen_object_co = set([])
seen_rel_co = set([])
for example_selected in ReaSCAN_data["examples"]["train"]:
rel_map = OrderedDict({})
for ele in example_selected["relation_map"]:
rel_map[tuple(ele[0])] = ele[1]
example_struct = OrderedDict({
'obj_pattern_map': example_selected["object_pattern_map"],
'rel_map': rel_map,
'obj_map': example_selected["object_expression"],
'grammer_pattern': example_selected['grammer_pattern'],
'adverb': example_selected['adverb_in_command'],
'verb': example_selected['verb_in_command']
})
obj_co = []
for k, v in example_selected["object_expression"].items():
if v not in seen_concepts:
seen_concepts[v] = 1
else:
seen_concepts[v] += 1
obj_co += [v]
obj_co.sort()
seen_object_co.add(tuple(obj_co))
rel_co = []
for k, v in rel_map.items():
if v not in seen_concepts:
seen_concepts[v] = 1
else:
seen_concepts[v] += 1
rel_co += [v]
rel_co.sort()
seen_rel_co.add(tuple(rel_co))
test_seen_command_structs = {}
test_seen_concepts = {} # add in seen concepts, so we can select concepts that are seen, but new composites!
test_seen_object_co = set([])
test_seen_rel_co = set([])
for example_selected in split_test_data["examples"]["test"]:
rel_map = OrderedDict({})
for ele in example_selected["relation_map"]:
rel_map[tuple(ele[0])] = ele[1]
example_struct = OrderedDict({
'obj_pattern_map': example_selected["object_pattern_map"],
'rel_map': rel_map,
'obj_map': example_selected["object_expression"],
'grammer_pattern': example_selected['grammer_pattern'],
'adverb': example_selected['adverb_in_command'],
'verb': example_selected['verb_in_command']
})
obj_co = []
for k, v in example_selected["object_expression"].items():
if v not in test_seen_concepts:
test_seen_concepts[v] = 1
else:
test_seen_concepts[v] += 1
obj_co += [v]
obj_co.sort()
test_seen_object_co.add(tuple(obj_co))
rel_co = []
for k, v in rel_map.items():
if v not in test_seen_concepts:
test_seen_concepts[v] = 1
else:
test_seen_concepts[v] += 1
rel_co += [v]
rel_co.sort()
test_seen_rel_co.add(tuple(rel_co))
test_seen_object_co.intersection(seen_object_co) | _____no_output_____ | CC-BY-4.0 | code/dataset/verify_split_tests.ipynb | frankaging/Reason-SCAN |
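Rather than printing the raw intersection, a small summary sketch (using the sets built above) makes the B1 overlap easier to read. | # Summarize object co-occurrences seen in training vs. the B1 test split.
print(f"object co-occurrences in train: {len(seen_object_co)}")
print(f"object co-occurrences in B1 test: {len(test_seen_object_co)}")
print(f"overlap: {len(test_seen_object_co.intersection(seen_object_co))}") | _____no_output_____ | CC-BY-4.0 | code/dataset/verify_split_tests.ipynb | frankaging/Reason-SCAN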
B2: novel co-occurrence of relations | # this test may be a little too weak for now. maybe improve it to verify the shape world?
split_test_path_to_data = f"../../ReaSCAN-v1.0/ReaSCAN-compositional-b2/data-compositional-splits.txt"
print(f"Reading dataset from file: {split_test_path_to_data}...")
split_test_data = json.load(open(split_test_path_to_data, "r"))
test_seen_command_structs = {}
test_seen_concepts = {} # add in seen concepts, so we can select concepts that are seen, but new composites!
test_seen_object_co = set([])
test_seen_rel_co = set([])
for example_selected in split_test_data["examples"]["test"]:
rel_map = OrderedDict({})
for ele in example_selected["relation_map"]:
rel_map[tuple(ele[0])] = ele[1]
example_struct = OrderedDict({
'obj_pattern_map': example_selected["object_pattern_map"],
'rel_map': rel_map,
'obj_map': example_selected["object_expression"],
'grammer_pattern': example_selected['grammer_pattern'],
'adverb': example_selected['adverb_in_command'],
'verb': example_selected['verb_in_command']
})
obj_co = []
for k, v in example_selected["object_expression"].items():
if v not in test_seen_concepts:
test_seen_concepts[v] = 1
else:
test_seen_concepts[v] += 1
obj_co += [v]
obj_co.sort()
test_seen_object_co.add(tuple(obj_co))
rel_co = []
for k, v in rel_map.items():
if v not in test_seen_concepts:
test_seen_concepts[v] = 1
else:
test_seen_concepts[v] += 1
rel_co += [v]
rel_co.sort()
test_seen_rel_co.add(tuple(rel_co))
test_seen_rel_co | _____no_output_____ | CC-BY-4.0 | code/dataset/verify_split_tests.ipynb | frankaging/Reason-SCAN |
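Similarly for B2, a short summary sketch (using `seen_rel_co` from the B1 cell and the B2 sets above) shows how the relation co-occurrences compare. | # Summarize relation co-occurrences seen in training vs. the B2 test split.
print(f"relation co-occurrences in train: {len(seen_rel_co)}")
print(f"relation co-occurrences in B2 test: {len(test_seen_rel_co)}")
print(f"overlap: {len(test_seen_rel_co.intersection(seen_rel_co))}") | _____no_output_____ | CC-BY-4.0 | code/dataset/verify_split_tests.ipynb | frankaging/Reason-SCAN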
C1: novel conjunctive clause length | # this test may be a little too weak for now. maybe improve it to verify the shape world?
split_test_path_to_data = f"../../ReaSCAN-v1.0/ReaSCAN-compositional-c1/data-compositional-splits.txt"
print(f"Reading dataset from file: {split_test_path_to_data}...")
split_test_data = json.load(open(split_test_path_to_data, "r"))
for example in split_test_data["examples"]["test"]:
assert example["derivation"] == "$OBJ_0 ^ $OBJ_1 & $OBJ_2 & $OBJ_3"
assert (example["command"].count("and")) == 2 | _____no_output_____ | CC-BY-4.0 | code/dataset/verify_split_tests.ipynb | frankaging/Reason-SCAN |
C2: novel relative clauses | # this test may be a little too weak for now. maybe improve it to verify the shape world?
split_test_path_to_data = f"../../ReaSCAN-v1.0/ReaSCAN-compositional-c2/data-compositional-splits.txt"
print(f"Reading dataset from file: {split_test_path_to_data}...")
split_test_data = json.load(open(split_test_path_to_data, "r"))
for example in split_test_data["examples"]["test"]:
assert example["derivation"] == "$OBJ_0 ^ $OBJ_1 ^ $OBJ_2"
assert (example["command"].count("that,is")) == 2 | _____no_output_____ | CC-BY-4.0 | code/dataset/verify_split_tests.ipynb | frankaging/Reason-SCAN |