markdown | code | output | license | path | repo_name
---|---|---|---|---|---
Let's see what the data looks like. | bins = 200
plt.hist(exp_A, bins=200, histtype='step', label='Experiment A')
plt.hist(exp_B, bins=200, histtype='step', label='Experiment B')
plt.hist(exp_C, bins=200, histtype='step', label='Experiment C')
plt.hist(exp_D, bins=200, histtype='step', label='Experiment D')
plt.ylabel('Counts')
plt.xlabel('Energy (keV)')
plt.legend()
plt.show() | _____no_output_____ | CC-BY-4.0 | data/test_data/generate_data.ipynb | fewagner/excess |
We save the data in a simple txt format. | np.savetxt('experiment_A.txt', exp_A)
np.savetxt('experiment_B.txt', exp_B)
np.savetxt('experiment_C.txt', exp_C) | _____no_output_____ | CC-BY-4.0 | data/test_data/generate_data.ipynb | fewagner/excess |
For one of the experiments, we save the binned file only. | hist_D, bins_D = np.histogram(exp_D, bins=300, range=(0,40))
np.savetxt('experiment_D.txt', np.column_stack([bins_D[:-1], bins_D[1:], hist_D])) | _____no_output_____ | CC-BY-4.0 | data/test_data/generate_data.ipynb | fewagner/excess |
Efficiency Data We create the efficiency curves on an already binned grid. | grid = np.arange(0.002, 20, 0.002)
eff_A = (np.ones(grid.shape) - np.exp(-grid))*0.8 + 0.2
eff_B = 0.9*np.ones(grid.shape)
eff_C = (np.sqrt(grid) / np.sqrt(grid[-1]) * 0.7*np.ones(grid.shape))*0.8 + 0.2
eff_D = np.ones(grid.shape) | _____no_output_____ | CC-BY-4.0 | data/test_data/generate_data.ipynb | fewagner/excess |
Let's plot the curves. | plt.plot(grid, eff_A, label='Efficiency A')
plt.plot(grid, eff_B, label='Efficiency B')
plt.plot(grid, eff_C, label='Efficiency C')
plt.plot(grid, eff_D, label='Efficiency D')
plt.xlabel('Energy (keV)')
plt.ylabel('Survival Probability')
plt.legend()
plt.show() | _____no_output_____ | CC-BY-4.0 | data/test_data/generate_data.ipynb | fewagner/excess |
Now let's plot the re-weighted histograms. | # set the exposures
exposure_A = 1
exposure_B = 0.2
exposure_C = 15
exposure_D = np.random.uniform(size=len(hist_D)) + 1
# make histograms
hist_A, bins_A = np.histogram(exp_A, bins)
hist_B, bins_B = np.histogram(exp_B, bins)
hist_C, bins_C = np.histogram(exp_C, bins)
# reweight with efficiencies
hist_A = hist_A / np.interp(bins_A[:-1], grid, eff_A)
hist_B = hist_B / np.interp(bins_B[:-1], grid, eff_B)
hist_C = hist_C / np.interp(bins_C[:-1], grid, eff_C)
hist_D = hist_D / np.interp(bins_D[:-1], grid, eff_D)
# plot - comment the lines of experiments to not show them
plt.hist(bins_A[:-1], bins_A, weights=hist_A/exposure_A, histtype='step', label='Experiment A', color='C0')
plt.hist(bins_B[:-1], bins_B, weights=hist_B/exposure_B, histtype='step', label='Experiment B', color='C1')
plt.hist(bins_C[:-1], bins_C, weights=hist_C/exposure_C, histtype='step', label='Experiment C', color='C2')
plt.hist(bins_D[:-1], bins_D, weights=hist_D/exposure_D, histtype='step', label='Experiment D', color='C3')
plt.xlabel('Energy (keV)')
plt.ylabel('Counts')
plt.legend()
plt.show() | _____no_output_____ | CC-BY-4.0 | data/test_data/generate_data.ipynb | fewagner/excess |
And save the efficiency curves to files as well. | np.savetxt('experiment_A_eff.txt', np.column_stack([grid, eff_A]))
np.savetxt('experiment_B_eff.txt', np.column_stack([grid, eff_B]))
np.savetxt('experiment_C_eff.txt', np.column_stack([grid, eff_C]))
np.savetxt('experiment_D_eff.txt', np.column_stack([grid, eff_D])) | _____no_output_____ | CC-BY-4.0 | data/test_data/generate_data.ipynb | fewagner/excess |
Finally, write the exposures to files. | np.savetxt('experiment_A_exposure.txt', [exposure_A])
np.savetxt('experiment_B_exposure.txt', [exposure_B])
np.savetxt('experiment_C_exposure.txt', [exposure_C])
np.savetxt('experiment_D_exposure.txt', np.column_stack([(bins_D[1:] - bins_D[:-1])/2 + bins_D[:-1], exposure_D])) | _____no_output_____ | CC-BY-4.0 | data/test_data/generate_data.ipynb | fewagner/excess |
# Google Colab Instructions
from google.colab import drive
drive.mount('/content/drive')
!ls /content/drive/My\ Drive/Colab\ Notebooks
# What version of python do you have?
import sys
import tensorflow.keras
import pandas as pd
import sklearn as sk
import tensorflow as tf
print(f"Python Version: {sys.version}")
print(f"Tensorflow Version: {tf.__version__}")
print(f"Keras Version: {tensorflow.keras.__version__}")
print(f"Scikit-Learn Version: {sk.__version__}")
print("GPU is ", "Available" if tf.test.is_gpu_available() else "Not Available")
| _____no_output_____ | MIT | Utility_References.ipynb | chakra-ai/DeepNeuralNetworks |
|
Wind Statistics Introduction: The data have been modified to contain some missing values, identified by NaN. Using pandas should make this exercise easier, in particular for the bonus question. You should be able to perform all of these operations without using a for loop or other looping construct. 1. The data in 'wind.data' has the following format: | """
Yr Mo Dy RPT VAL ROS KIL SHA BIR DUB CLA MUL CLO BEL MAL
61 1 1 15.04 14.96 13.17 9.29 NaN 9.87 13.67 10.25 10.83 12.58 18.50 15.04
61 1 2 14.71 NaN 10.83 6.50 12.62 7.67 11.50 10.04 9.79 9.67 17.54 13.83
61 1 3 18.50 16.88 12.33 10.13 11.17 6.17 11.25 NaN 8.50 7.67 12.75 12.71
""" | _____no_output_____ | Apache-2.0 | pandas/06_Stats/Wind_Stats/Solutions.ipynb | eric999j/Udemy_Python_Hand_On |
The first three columns are year, month and day. The remaining 12 columns are average windspeeds in knots at 12 locations in Ireland on that day. More information about the dataset go [here](wind.desc). Step 1. Import the necessary libraries | import pandas as pd
import datetime | _____no_output_____ | Apache-2.0 | pandas/06_Stats/Wind_Stats/Solutions.ipynb | eric999j/Udemy_Python_Hand_On |
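A minimal sketch of how the file could be loaded (assuming 'wind.data' sits in the working directory and is whitespace-delimited as shown above), combining the Yr/Mo/Dy columns into a date index: | # combine the first three columns into a single date column while reading
data = pd.read_csv('wind.data', sep=r'\s+', parse_dates=[[0, 1, 2]])
# two-digit years are parsed as 20xx; shift them back one century where needed
data['Yr_Mo_Dy'] = data['Yr_Mo_Dy'].apply(lambda d: d - pd.DateOffset(years=100) if d.year > 2000 else d)
data = data.set_index('Yr_Mo_Dy')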
Agenda 1. Recap: lists and loops // Questions about assignment 2. List comprehension 3. Dictionaries 4. Pandas datatypes 5. Read data with Pandas 6. Explore data with Pandas 7. Work with missing values List comprehension | my_list = ['wordA', 'wordB']
#normal loop
new_list1 = []
for item in my_list:
new_list1.append(item.upper())
#list comprehension
new_list2 = [item.upper() for item in my_list]
print(new_list1, new_list2) | ['WORDA', 'WORDB'] ['WORDA', 'WORDB']
| MIT | lessons/Week3-lesson.ipynb | pyladiesams/Bootcamp-Data-Analysis-beginner-apr-may2020 |
Python dictionaries- Dictionary is a python datatype that is used to store key-value pairs. It enables you to quickly retrieve, add, remove, modify values using a key. Dictionary is very similar to what we call associative array or hash in other languages.- {} and seprated by ,Dictionaries and lists share the following characteristics:- Both are mutable (can be changed)- Both are dynamic. They can grow and shrink as needed.- Both can be nested. A list can contain another list. A dictionary can contain another dictionary. A dictionary can also contain a list, and vice versa.- Dictionaries differ from lists primarily in how elements are accessed:List elements are accessed by their position in the list, via indexing.Dictionary elements are accessed via keys. | mydict = {"name": "Demi",
"birth_year": 1994,
"hobby": "programming"}
print(mydict['name'])
mydict[0]  # raises a KeyError: dictionary items are accessed by key, not by position
mydict.keys()
mydict.values()
for key, value in mydict.items():
print(key.upper())
for item in mydict.values():
print(item)
#change a value
mydict['name'] = "DeeJay"
mydict.items()
# dictionaries can contain any data type
mydict = {"names": ["Demi", "DeeJay"],
"birth_year": 1994,
"hobby": ["programming", "yoga", "drinking wine"]} | _____no_output_____ | MIT | lessons/Week3-lesson.ipynb | pyladiesams/Bootcamp-Data-Analysis-beginner-apr-may2020 |
Exercise- Create a dictionary about yourself, list at least 2 hobbies- Print only your second hobby- What is your birth_year? Pandas- Pandas stands for “Python Data Analysis Library"- pandas is a fast, powerful, flexible and easy to use open source data analysis and manipulation tool, it takes data (like a CSV or TSV file, or a SQL database) and creates a Python object with rows and columns called dataframe that looks very similar to table in a statistical software (think Excel or SPSS for example). - similar to R- pandas is a libary or module, therefore if we want to use it, we need to instal and import it. You can make use of the functions that are defined in the module by calling them with . (dot), like you did with list.split() or string.strip() | # Install a conda package in the current Jupyter kernel
import sys
!conda install --yes --prefix {sys.prefix} pandas | Collecting package metadata (current_repodata.jsodone
Solving envidone
## Package Plan ##
environment location: /usr/local/Caskroom/miniconda/base/envs/testj
added / updated specs:
- pandas
The following NEW packages will be INSTALLED:
blas pkgs/main/osx-64::blas-1.0-mkl
intel-openmp pkgs/main/osx-64::intel-openmp-2020.1-216
libgfortran pkgs/main/osx-64::libgfortran-3.0.1-h93005f0_2
mkl pkgs/main/osx-64::mkl-2019.4-233
mkl-service pkgs/main/osx-64::mkl-service-2.3.0-py37hfbe908c_0
mkl_fft pkgs/main/osx-64::mkl_fft-1.0.15-py37h5e564d8_0
mkl_random pkgs/main/osx-64::mkl_random-1.1.0-py37ha771720_0
numpy pkgs/main/osx-64::numpy-1.18.1-py37h7241aed_0
numpy-base pkgs/main/osx-64::numpy-base-1.18.1-py37h6575580_1
pandas pkgs/main/osx-64::pandas-1.0.3-py37h6c726b0_0
pytz pkgs/main/noarch::pytz-2020.1-py_0
Preparing transaction:done
Verifying transact| WARNING conda.core.path_actions:verify(963): Unable to create environments file. Path not writable.
environment location: /Users/alyonagalyeva/.conda/environments.txt
done
Execut\ WARNING conda.core.envs_manager:register_env(52): Unable to register environment. Path not writable or missing.
environment location: /usr/local/Caskroom/miniconda/base/envs/testj
registry file: /Users/alyonagalyeva/.conda/environments.txt
done
| MIT | lessons/Week3-lesson.ipynb | pyladiesams/Bootcamp-Data-Analysis-beginner-apr-may2020 |
- after the installation we need to import the library; you need to do the import in every Jupyter notebook. - `as pd` is an alias; if you do not use 'as' you will have to type pandas every time. Programmers are lazy, so we use shortcuts such as pd | import pandas as pd
Pandas datatypes There are two core objects in Pandas: the DataFrame and the Series. Series A Pandas Series is a one-dimensional labeled array, capable of holding data of any type (integer, string, float, python objects, etc.). The axis labels are collectively called the index. A Pandas Series is nothing but a column in an Excel sheet. Like in Excel, every row in the sheet has - an index - a value or datapoint (if you entered a value) *img from: https://codechalleng.es/bites/251/* Did we already tell you that you can do amazing stuff with markdown? https://about.gitlab.com/handbook/markdown-guide/ | # assign the variable s to Series
s = pd.Series(data, index=index)
# lets define data
data = [2,4,6,5]
# lets try it again
s = pd.Series(data, index=index)
# we need to have the same amount of indexes as data points
my_index = [0,1,2,3]
# try to change my_index
pd.Series(data, index=my_index) | _____no_output_____ | MIT | lessons/Week3-lesson.ipynb | pyladiesams/Bootcamp-Data-Analysis-beginner-apr-may2020 |
ExerciseHow can you use python functions to define the index? Remember, you're lazy!- Hint: - Length of the data and the index needs to be the same - Have you used the range function before? DataFrameA DataFrame is a table. It contains an array of individual entries, each of which has a certain value. Each entry corresponds to a row (or record) and a column.- not limited to integers also strings ** image from = https://www.geeksforgeeks.org/ and https://www.learndatasci.com/For example, consider the following simple DataFrame | df_with_numbers = pd.DataFrame({'Yes': [53, 21], 'No': [13, 1]})
df_with_numbers
pd.DataFrame({'Bob': ['I liked it.', 'It was awful.'], 'Sue': ['Pretty good.', 'Boring.']}) | _____no_output_____ | MIT | lessons/Week3-lesson.ipynb | pyladiesams/Bootcamp-Data-Analysis-beginner-apr-may2020 |
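Returning to the Series exercise above, one lazy option (a small sketch) is to let range build the index from the length of the data: | # range(len(data)) always produces exactly as many index labels as data points
pd.Series(data, index=range(len(data)))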
Read dataBeing able to create a DataFrame or Series manually is handy. But, most of the time, we won't actually create our own data manually. Instead, we'll be working with data that already exists.Data can be stored in any number of different forms and formats. By far the most basic is a CSV file. When you open a CSV file you get something that looks like this:Product A,Product B,Product C,30,21,9,35,34,1,41,11,11Download data from Kaggle or take a look at this data descriprion:https://www.kaggle.com/kimjihoo/ds4c-what-is-this-dataset-detailed-description | # read the data and store it in df variable
path = 'data/coronavirusdataset/Case.csv'
df = pd.read_csv(path) | _____no_output_____ | MIT | lessons/Week3-lesson.ipynb | pyladiesams/Bootcamp-Data-Analysis-beginner-apr-may2020 |
Viewing and Inspecting DataNow that you’ve loaded your data, it’s time to take a look at it. How does the dataframe look like? Running the name of the data frame would give you the entire table, but you can also use functions | # get the first n rows with df.head(n), or the last n rows with df.tail(n)
df.head()
len(df)
# check the number of rows and columns
df.shape
# important to check non-null values
df.info()
# check only the columns
df.columns
df.group
#df['group']
df['city'].describe()
df['province'].unique()
# view unique values and counts for a series (like a column or a few columns)
df['city'].value_counts() | _____no_output_____ | MIT | lessons/Week3-lesson.ipynb | pyladiesams/Bootcamp-Data-Analysis-beginner-apr-may2020 |
Exercise 1. How many individual provinces does this dataset contain? 2. Display the top three MENTIONED provinces Slices | df[1:4]
cases_in_gurogu = df[df.city == 'Guro-gu']
cases_in_gurogu
df.confirmed.sum() | _____no_output_____ | MIT | lessons/Week3-lesson.ipynb | pyladiesams/Bootcamp-Data-Analysis-beginner-apr-may2020 |
Exercise1. How many confirmed cases are there in Eunpyeong-gu ? Missing dataEntries with missing values are given the value NaN, short for "Not a Number". For technical reasons these NaN values are always float64 dtype. Copying dataframeIn Pandas, indexing a DataFrame returns a reference to the initial DataFrame. By changing the subset we change the initial DataFrame. Thus, you'd want to use the copy if you want to make sure the initial DataFrame shouldn't be changed. Consider the following code: | # index, column
missing_data_df = df.copy()
missing_data_df
# create missing values
missing_data_df.at[0, 'confirmed'] = None
missing_data_df | _____no_output_____ | MIT | lessons/Week3-lesson.ipynb | pyladiesams/Bootcamp-Data-Analysis-beginner-apr-may2020 |
Pandas provides some methods specific to manipulating the missing data. To select NaN entries you can use pd.isnull() (or its companion pd.notnull()). | df[pd.isnull(df.city)]
# df.isnull().values.any()
# df.info | _____no_output_____ | MIT | lessons/Week3-lesson.ipynb | pyladiesams/Bootcamp-Data-Analysis-beginner-apr-may2020 |
Replacing missing values is a common operation. Pandas provides a really handy method for this problem: fillna(). fillna() provides a few different strategies for mitigating such data. For example, we can simply replace each NaN with an "Unknown": | # if any null values exist, fill them with "Unknown"
df.city.fillna("Unknown") | _____no_output_____ | MIT | lessons/Week3-lesson.ipynb | pyladiesams/Bootcamp-Data-Analysis-beginner-apr-may2020 |
Exercise1. fill the missing values of the confirmed cases with the average of the confirmed cases Missing values are not always NaN, they can also be ["n/a", "na", "-", ""]. If needed we can also replace these values. | # df.latitude
df.latitude.unique() # check the -
# replace values
df.latitude.replace('-', "unknown") | _____no_output_____ | MIT | lessons/Week3-lesson.ipynb | pyladiesams/Bootcamp-Data-Analysis-beginner-apr-may2020 |
Confidence Intervals Francisco A. Rodrigues, University of São Paulo. https://sites.icmc.usp.br/[email protected] This notebook accompanies the lecture: https://www.youtube.com/watch?v=AkmyfLc-EOs We can interpret the $(1-\alpha)100\%$ confidence interval through simulations. | import numpy as np
import matplotlib.pyplot as plt
n = 50 # sample size
Ns = 100 # number of intervals
mu = 2 # population mean
sigma = 2 # population standard deviation
beta = 0.95 # confidence level
zalpha = 1.96 # z value (from beta)
c = 0 # counts the number of intervals that contain the mean
plt.figure(figsize=(14,10))
for s in range(1,Ns):
    x = np.random.normal(mu, sigma, n) # draw a sample of size n
    IC1 = np.mean(x) - zalpha*sigma/np.sqrt(n) # lower limit
    IC2 = np.mean(x) + zalpha*sigma/np.sqrt(n) # upper limit
    if(mu > IC1 and mu < IC2):
        c = c + 1
        # show the interval in gray if it contains the mean
        plt.vlines(s, ymin=IC1, ymax=IC2, color = 'gray')
        plt.plot(s,np.mean(x), 'o', color = 'gray',
                 markersize=5)
    else:
        # show the interval that does not contain the mean
        plt.vlines(s, ymin=IC1, ymax=IC2, color = 'black', linestyles = 'dashed')
        plt.plot(s,np.mean(x), 'o', color = 'black',
                 markersize=5)
plt.axhline(y = mu, color = 'black') # show the population mean
plt.xlabel('Sample', fontsize=20)
plt.show()
print('Confidence level:', beta)
print('Fraction of intervals containing the mean:', c/Ns) | _____no_output_____ | CC0-1.0 | Intervalo-de-confianca.ipynb | franciscoicmc/simulacao |
Computing the Confidence Interval We can implement a function to compute the confidence interval automatically. | import scipy.stats
import numpy as np
def confident_interval(Xs, n, confidence = 0.95, sigma = -1, s = -1):
    zalpha = abs(scipy.stats.norm.ppf((1 - confidence)/2.))
    if(sigma != -1): # if the variance is known
        IC1 = Xs - zalpha*sigma/np.sqrt(n)
        IC2 = Xs + zalpha*sigma/np.sqrt(n)
    else: # if the variance is unknown
        if(n >= 50): # if the sample size is at least 50
            # use the normal distribution
            IC1 = Xs - zalpha*s/np.sqrt(n)
            IC2 = Xs + zalpha*s/np.sqrt(n)
        else: # if the sample size is smaller than 50
            # use Student's t distribution
            talpha = scipy.stats.t.ppf((1 + confidence) / 2., n-1)
            IC1 = Xs - talpha*s/np.sqrt(n)
            IC2 = Xs + talpha*s/np.sqrt(n)
    return [IC1, IC2] | _____no_output_____ | CC0-1.0 | Intervalo-de-confianca.ipynb | franciscoicmc/simulacao |
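For reference, the two branches above implement the standard interval formulas,
\begin{equation*}
\bar{X} \pm z_{\alpha/2}\,\frac{\sigma}{\sqrt{n}} \qquad \text{and} \qquad \bar{X} \pm t_{\alpha/2,\,n-1}\,\frac{S}{\sqrt{n}},
\end{equation*}
the first when the population standard deviation is known (or when $n \ge 50$, with $S$ in place of $\sigma$), the second when it is unknown and $n < 50$.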
**Example**: At an online food-delivery company, the time needed for a delivery was found to follow a normal distribution with mean $\mu = 30$ minutes and standard deviation $\sigma = 10$ minutes. In a sample of 50 delivery workers, a mean delivery time of $\bar{X}_{50} = 25$ minutes was observed. Determine the 95\% confidence interval for the mean $\mu$ of all the company's delivery workers. | Xs = 25
n = 50
confidence =0.95
sigma = 10
IC = confident_interval(Xs,n, confidence, sigma)
print('Confidence interval:', IC) | Confidence interval: [22.228192351300645, 27.771807648699355]
| CC0-1.0 | Intervalo-de-confianca.ipynb | franciscoicmc/simulacao |
**Example**: For an Internet video provider, a sample of 15 users showed a mean viewing time of $\bar{X}_{15} = 39.3$ minutes and a sample standard deviation of $S_{15} = 2.6$ minutes. Find a 90\% confidence interval for the population mean $\mu$. | Xs = 39.3
s = 2.6
n = 15
confidence =0.9
IC = confident_interval(Xs,n, confidence, -1, s)
print('Confidence interval:', IC) | Confidence interval: [38.117602363950525, 40.48239763604947]
| CC0-1.0 | Intervalo-de-confianca.ipynb | franciscoicmc/simulacao |
For a full data set, we have the function below. | import scipy.stats
import numpy as np
def confident_interval_data(X, confidence = 0.95, sigma = -1):
    def S(X): # function to compute the sample standard deviation
        s = 0
        for i in range(0,len(X)):
            s = s + (X[i] - np.mean(X))**2
        s = np.sqrt(s/(len(X)-1))
        return s
    n = len(X) # number of elements in the sample
    Xs = np.mean(X) # sample mean
    s = S(X) # sample standard deviation
    zalpha = abs(scipy.stats.norm.ppf((1 - confidence)/2))
    if(sigma != -1): # if the variance is known
        IC1 = Xs - zalpha*sigma/np.sqrt(n)
        IC2 = Xs + zalpha*sigma/np.sqrt(n)
    else: # if the variance is unknown
        if(n >= 50): # if the sample size is at least 50
            # use the normal distribution
            IC1 = Xs - zalpha*s/np.sqrt(n)
            IC2 = Xs + zalpha*s/np.sqrt(n)
        else: # if the sample size is smaller than 50
            # use Student's t distribution
            talpha = scipy.stats.t.ppf((1 + confidence) / 2., n-1)
            IC1 = Xs - talpha*s/np.sqrt(n)
            IC2 = Xs + talpha*s/np.sqrt(n)
    return [IC1, IC2] | _____no_output_____ | CC0-1.0 | Intervalo-de-confianca.ipynb | franciscoicmc/simulacao |
Running it on an example. | X = [1, 2, 3, 4, 5]
confidence = 0.95
IC = confident_interval_data(X, confidence)
print('Confidence interval:', IC) | Confidence interval: [1.0367568385224393, 4.9632431614775605]
| CC0-1.0 | Intervalo-de-confianca.ipynb | franciscoicmc/simulacao |
This notebook will help you practice some of the skills and concepts you learned in chapter 2 of the book:- Strings, Numbers- Variables- Lists, Sets, Dictionaries- Loops and list comprehensions- Control Flow- Functions- Classes- Packages/Modules- Debugging an error- Using documentation Here we have some data on the number of books read by different people who work at Bob's Book Emporium. Create Python code that loops through each of the people and prints out how many books they have read. If someone has read 0 books, print out "___ has not read any books!" instead of the number of books. | people = ['Krishnang', 'Steve', 'Jimmy', 'Mary', 'Divya', 'Robert', 'Yulia']
books_read = [12, 6, 0, 7, 4, 10, 15]
for i in range(len(people)):
if books_read[i] == 0:
        print(people[i] + " has not read any books!")
else:
print(people[i] + " has read " + str(books_read[i]) + " books!")
| Krishnang has read 12 books!
Steve has read 6 books!
Jimmy has not read any books!
Mary has read 7 books!
Divya has read 4 books!
Robert has read 10 books!
Yulia has read 15 books!
| MIT | 2-Chapter-2/Test_your_knowledge.ipynb | DiegoMerino28/Practical-Data-Science-with-Python |
There are several ways to solve this -- you could look at the `zip()` function, use `enumerate()`, use `range` and `len`, or use other methods. To print the names and values, you can use string concatenation (+), f-string formatting, or other methods. | # your code here | _____no_output_____ | MIT | 2-Chapter-2/Test_your_knowledge.ipynb | DiegoMerino28/Practical-Data-Science-with-Python |
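For instance, a small sketch of the zip-based alternative mentioned above: | # zip pairs each person with their book count, avoiding manual indexing
for person, books in zip(people, books_read):
    if books == 0:
        print(f"{person} has not read any books!")
    else:
        print(f"{person} has read {books} books!")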
Turn the loop we just created into a function that takes the two lists (books read and people) as arguments. Be sure to try out your function to make sure it works. | def people_books(people, books_read):
for i in range(len(people)):
if books_read[i] == 0:
            print(people[i] + " has not read any books!")
else:
print(people[i] + " has read " + str(books_read[i]) + " books!")
people_books(people, books_read) | Krishnang has read 12 books!
Steve has read 6 books!
Jimmy has not read any books!
Mary has read 7 books!
Divya has read 4 books!
Robert has read 10 books!
Yulia has read 15 books!
| MIT | 2-Chapter-2/Test_your_knowledge.ipynb | DiegoMerino28/Practical-Data-Science-with-Python |
Challenge: Sort the values of `books_read` from greatest to least and print the top three people with the number of books they have read. This is a tougher problem. Some possible ways to solve it include using NumPy's argsort, creating a dictionary, and creating tuples. | new_dict = {}
for i in range(len(books_read)):
new_dict[people[i]] = books_read[i]
sorted_dicctionary = sorted(new_dict.items(), key = lambda x: x[1], reverse=True)
print(sorted_dicctionary)
| [('Yulia', 15), ('Krishnang', 12), ('Robert', 10), ('Mary', 7), ('Steve', 6), ('Divya', 4), ('Jimmy', 0)]
| MIT | 2-Chapter-2/Test_your_knowledge.ipynb | DiegoMerino28/Practical-Data-Science-with-Python |
Bob's books gets a discount for every multiple of 3 books their employees buy and read. Find out how many multiples of 3 books they have read, and how many more books need to be read to get to the next multiple of 3. Python has a built-in `sum` function that may be useful here, and don't forget about the modulo operator. | sum_books = sum(books_read)
discounted = sum_books//3
remaining = sum_books % 3 | _____no_output_____ | MIT | 2-Chapter-2/Test_your_knowledge.ipynb | DiegoMerino28/Practical-Data-Science-with-Python |
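To report both numbers (a small sketch; (3 - remaining) % 3 is the count still needed, and is 0 when the total is already a multiple of 3): | # discounts earned so far and books still needed for the next one
print(f"{discounted} multiples of 3 reached from {sum_books} books read")
print(f"{(3 - remaining) % 3} more books needed for the next multiple of 3")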
Create a dictionary for the data where the keys are people's names and the values are the number of books. An advanced way to do this would be with a dictionary comprehension, but you can also use a loop. | # your code here
dicctionary = {person : books for person,books in zip(people, books_read)}
print(dicctionary) | {'Krishnang': 12, 'Steve': 6, 'Jimmy': 0, 'Mary': 7, 'Divya': 4, 'Robert': 10, 'Yulia': 15}
| MIT | 2-Chapter-2/Test_your_knowledge.ipynb | DiegoMerino28/Practical-Data-Science-with-Python |
Challenge: Use the dictionary to print out the top 3 people with the most books read. This is where Stack Overflow and searching the web might come in handy -- try searching 'sort dictionary by value in Python'. | # your code here
sorted_dicctionary = sorted(dicctionary.items(), key = lambda x:x[1], reverse = True)[:3]
sorted_dicctionary | _____no_output_____ | MIT | 2-Chapter-2/Test_your_knowledge.ipynb | DiegoMerino28/Practical-Data-Science-with-Python |
Using sets, ensure there are no duplicate names in our data. (Yes, this is trivial since our data is small and we can manually inspect it, but if we had thousands of names, we could use the same method as we do here.) | set_people = set(people)
print(set_people) | {'Yulia', 'Robert', 'Steve', 'Mary', 'Divya', 'Jimmy', 'Krishnang'}
| MIT | 2-Chapter-2/Test_your_knowledge.ipynb | DiegoMerino28/Practical-Data-Science-with-Python |
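A quick follow-up check (sketch): if the set is the same length as the original list, no names were duplicated. | print(len(set_people) == len(people))  # True means no duplicates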
Create a class for storing the books read and people's names. The class should also include a function for printing out the top three book readers. Test out your class to make sure it works. | class books_people:
def __init__(self, people, books_read):
self.people = people
self.books_read = books_read
def print_top_readers(self):
book_tuples = ((b,p) for b,p in zip(self.books_read, self.people))
for b,p in sorted(book_tuples, reverse= True)[:3]:
print(f'{p} has read {b} books!')
br = books_people(people, books_read)
br.print_top_readers() | Yulia has read 15 books!
Krishnang has read 12 books!
Robert has read 10 books!
| MIT | 2-Chapter-2/Test_your_knowledge.ipynb | DiegoMerino28/Practical-Data-Science-with-Python |
Use the time module to see how long it takes to make a new class and print out the top three readers. | import time
start = time.time()
br=books_people(people, books_read)
br.print_top_readers()
elapsed = time.time() - start
print(f'It has elapsed {elapsed} seconds') | Yulia has read 15 books!
Krishnang has read 12 books!
Robert has read 10 books!
It has elapsed 0.0005550384521484375 seconds
| MIT | 2-Chapter-2/Test_your_knowledge.ipynb | DiegoMerino28/Practical-Data-Science-with-Python |
Another way to do this is with the %%timeit magic command: The code below is throwing a few errors. Debug and correct the error so the code runs. | for b, p in list(zip(books_read, people))[:3]:
if b > 0 and b < 10:
print(p + ' has only read ' + str(b) + ' books') | Steve has only read 6 books
| MIT | 2-Chapter-2/Test_your_knowledge.ipynb | DiegoMerino28/Practical-Data-Science-with-Python |
Use the documentation (https://docs.python.org/3/library/stdtypes.htmlstring-methods) to understand how the functions `rjust` and `ljust` work, then modify the loop below so the output looks something like:```Krishnang------12 booksSteve---------- 6 booksJimmy---------- 0 booksMary----------- 7 booksDivya---------- 4 booksRobert---------10 booksYulia----------15 books``` | for b, p in zip(books_read, people):
print(f'{p.ljust(15, "-")}{str(b).rjust(2)} books') | Krishnang------12 books
Steve---------- 6 books
Jimmy---------- 0 books
Mary----------- 7 books
Divya---------- 4 books
Robert---------10 books
Yulia----------15 books
| MIT | 2-Chapter-2/Test_your_knowledge.ipynb | DiegoMerino28/Practical-Data-Science-with-Python |
A/B test 4 - loved journeys, control vs LLRThis related links B/C test (ab4) was conducted from 22nd-28th March 2019.The data used in this report are 23rd-27th Mar 2019 because the test was started partway through 22nd Ma, and ended partway through 28th Mar.The test compared the existing related links (where available) to links generated using LLR algorithm Import | %load_ext autoreload
%autoreload 2
import os
import pandas as pd
import numpy as np
import ast
import re
# z test
from statsmodels.stats.proportion import proportions_ztest
# bayesian bootstrap and vis
import matplotlib.pyplot as plt
import seaborn as sns
import bayesian_bootstrap.bootstrap as bb
from astropy.utils import NumpyRNGContext
# progress bar
from tqdm import tqdm, tqdm_notebook
from scipy import stats
from collections import Counter
import sys
sys.path.insert(0, '../../src' )
import analysis as analysis
# set up the style for our plots
sns.set(style='white', palette='colorblind', font_scale=1.3,
rc={'figure.figsize':(12,9),
"axes.facecolor": (0, 0, 0, 0)})
# instantiate progress bar goodness
tqdm.pandas(tqdm_notebook)
pd.set_option('max_colwidth',500)
# the number of bootstrap means used to generate a distribution
boot_reps = 10000
# alpha - false positive rate
alpha = 0.05
# number of tests
m = 4
# Correct alpha for multiple comparisons
alpha = alpha / m
# The Bonferroni correction can be used to adjust confidence intervals also.
# If one establishes m confidence intervals, and wishes to have an overall confidence level of 1-alpha,
# each individual confidence interval can be adjusted to the level of 1-(alpha/m).
# reproducible
seed = 1337 | _____no_output_____ | MIT | notebooks/analyses_reports/2019-03-23_to_03-27_ab4_llr_i_loved.ipynb | alphagov/govuk_ab_analysis |
File/dir locations Processed journey data | DATA_DIR = os.getenv("DATA_DIR")
filename = "full_sample_loved_947858.csv.gz"
filepath = os.path.join(
DATA_DIR, "sampled_journey", "20190323_20190327",
filename)
filepath
CONTROL_GROUP = "B"
INTERVENTION_GROUP = "C"
VARIANT_DICT = {
'CONTROL_GROUP':'B',
'INTERVENTION_GROUP':'C'
}
# read in processed sampled journey with just the cols we need for related links
df = pd.read_csv(filepath, sep ="\t", compression="gzip")
# convert from str to list
df['Event_cat_act_agg']= df['Event_cat_act_agg'].progress_apply(ast.literal_eval)
df['Page_Event_List'] = df['Page_Event_List'].progress_apply(ast.literal_eval)
df['Page_List'] = df['Page_List'].progress_apply(ast.literal_eval)
# drop dodgy rows, where page variant is not A or B.
df = df.query('ABVariant in [@CONTROL_GROUP, @INTERVENTION_GROUP]')
df[['Occurrences', 'ABVariant']].groupby('ABVariant').sum()
df['Page_List_Length'] = df['Page_List'].progress_apply(len)
| 100%|██████████| 772387/772387 [00:00<00:00, 786616.91it/s]
| MIT | notebooks/analyses_reports/2019-03-23_to_03-27_ab4_llr_i_loved.ipynb | alphagov/govuk_ab_analysis |
Nav type of page lookup - is it a finding page? if not it's a thing page | filename = "document_types.csv.gz"
# created a metadata dir in the DATA_DIR to hold this data
filepath = os.path.join(
DATA_DIR, "metadata",
filename)
print(filepath)
df_finding_thing = pd.read_csv(filepath, sep="\t", compression="gzip")
df_finding_thing.head()
thing_page_paths = df_finding_thing[
df_finding_thing['is_finding']==0]['pagePath'].tolist()
finding_page_paths = df_finding_thing[
df_finding_thing['is_finding']==1]['pagePath'].tolist() | _____no_output_____ | MIT | notebooks/analyses_reports/2019-03-23_to_03-27_ab4_llr_i_loved.ipynb | alphagov/govuk_ab_analysis |
OutliersSome rows should be removed before analysis. For example rows with journey lengths of 500 or very high related link click rates. This process might have to happen once features have been created. Derive variables journey_click_rateThere is no difference in the proportion of journeys using at least one related link (journey_click_rate) between page variant A and page variant B. \begin{equation*}\frac{\text{total number of journeys including at least one click on a related link}}{\text{total number of journeys}}\end{equation*} | # get the number of related links clicks per Sequence
df['Related Links Clicks per seq'] = df['Event_cat_act_agg'].map(analysis.sum_related_click_events)
# map across the Sequence variable, which includes pages and Events
# we want to pass all the list elements to a function one-by-one and then collect the output.
df["Has_Related"] = df["Related Links Clicks per seq"].map(analysis.is_related)
df['Related Links Clicks row total'] = df['Related Links Clicks per seq'] * df['Occurrences']
df.head(3) | _____no_output_____ | MIT | notebooks/analyses_reports/2019-03-23_to_03-27_ab4_llr_i_loved.ipynb | alphagov/govuk_ab_analysis |
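As a quick sanity check of the click rate defined above (a sketch mirroring the occurrence-weighted sums used later in this notebook): | # occurrence-weighted proportion of journeys with at least one related-link click
sum(df['Has_Related'] * df['Occurrences']) / df['Occurrences'].sum()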
count of clicks on navigation elementsThere is no statistically significant difference in the count of clicks on navigation elements per journey between page variant A and page variant B.\begin{equation*}{\text{total number of navigation element click events from content pages}}\end{equation*} Related link counts | # get the total number of related links clicks for that row (clicks per sequence multiplied by occurrences)
df['Related Links Clicks row total'] = df['Related Links Clicks per seq'] * df['Occurrences'] | _____no_output_____ | MIT | notebooks/analyses_reports/2019-03-23_to_03-27_ab4_llr_i_loved.ipynb | alphagov/govuk_ab_analysis |
Navigation events | def count_nav_events(page_event_list):
"""Counts the number of nav events from a content page in a Page Event List."""
content_page_nav_events = 0
for pair in page_event_list:
if analysis.is_nav_event(pair[1]):
if pair[0] in thing_page_paths:
content_page_nav_events += 1
return content_page_nav_events
# needs finding_thing_df read in from document_types.csv.gz
df['Content_Page_Nav_Event_Count'] = df['Page_Event_List'].progress_map(count_nav_events)
def count_search_from_content(page_list):
search_from_content = 0
for i, page in enumerate(page_list):
if i > 0:
if '/search?q=' in page:
if page_list[i-1] in thing_page_paths:
search_from_content += 1
return search_from_content
df['Content_Search_Event_Count'] = df['Page_List'].progress_map(count_search_from_content)
# count of nav or search clicks
df['Content_Nav_or_Search_Count'] = df['Content_Page_Nav_Event_Count'] + df['Content_Search_Event_Count']
# occurrences is accounted for by the group by bit in our bayesian boot analysis function
df['Content_Nav_Search_Event_Sum_row_total'] = df['Content_Nav_or_Search_Count'] * df['Occurrences']
# required for journeys with no nav later
df['Has_No_Nav_Or_Search'] = df['Content_Nav_Search_Event_Sum_row_total'] == 0 | _____no_output_____ | MIT | notebooks/analyses_reports/2019-03-23_to_03-27_ab4_llr_i_loved.ipynb | alphagov/govuk_ab_analysis |
Temporary df file in case of crash Save | df.to_csv(os.path.join(
DATA_DIR,
"ab3_loved_temp.csv.gz"), sep="\t", compression="gzip", index=False)
df = pd.read_csv(os.path.join(
DATA_DIR,
"ab3_loved_temp.csv.gz"), sep="\t", compression="gzip") | _____no_output_____ | MIT | notebooks/analyses_reports/2019-03-23_to_03-27_ab4_llr_i_loved.ipynb | alphagov/govuk_ab_analysis |
Frequentist statistics Statistical significance | # help(proportions_ztest)
has_rel = analysis.z_prop(df, 'Has_Related', VARIANT_DICT)
has_rel
has_rel['p-value'] < alpha | _____no_output_____ | MIT | notebooks/analyses_reports/2019-03-23_to_03-27_ab4_llr_i_loved.ipynb | alphagov/govuk_ab_analysis |
Practical significance - uplift | # Due to multiple testing we used the Bonferroni correction for alpha
ci_low,ci_upp = analysis.zconf_interval_two_samples(has_rel['x_a'], has_rel['n_a'],
has_rel['x_b'], has_rel['n_b'], alpha = alpha)
print(' difference in proportions = {0:.2f}%'.format(100*(has_rel['p_b']-has_rel['p_a'])))
print(' % relative change in proportions = {0:.2f}%'.format(100*((has_rel['p_b']-has_rel['p_a'])/has_rel['p_a'])))
print(' 95% Confidence Interval = ( {0:.2f}% , {1:.2f}% )'
.format(100*ci_low, 100*ci_upp)) | difference in proportions = 2.20%
% relative change in proportions = 62.91%
95% Confidence Interval = ( 2.12% , 2.28% )
| MIT | notebooks/analyses_reports/2019-03-23_to_03-27_ab4_llr_i_loved.ipynb | alphagov/govuk_ab_analysis |
Bayesian statistics Based on [this](https://medium.com/@thibalbo/coding-bayesian-ab-tests-in-python-e89356b3f4bd) blog To be developed, a Bayesian approach can provide a simpler interpretation. Bayesian bootstrap | analysis.compare_total_searches(df, VARIANT_DICT)
fig, ax = plt.subplots()
plot_df_B = df[df.ABVariant == VARIANT_DICT['INTERVENTION_GROUP']].groupby(
'Content_Nav_or_Search_Count').sum().iloc[:, 0]
plot_df_A = df[df.ABVariant == VARIANT_DICT['CONTROL_GROUP']].groupby(
'Content_Nav_or_Search_Count').sum().iloc[:, 0]
ax.set_yscale('log')
width =0.4
ax = plot_df_B.plot.bar(label='B', position=1, width=width)
ax = plot_df_A.plot.bar(label='A', color='salmon', position=0, width=width)
plt.title("loved journeys")
plt.ylabel("Log(number of journeys)")
plt.xlabel("Number of uses of search/nav elements in journey")
legend = plt.legend(frameon=True)
frame = legend.get_frame()
frame.set_facecolor('white')
plt.savefig('nav_counts_loved_bar.png', dpi = 900, bbox_inches = 'tight')
a_bootstrap, b_bootstrap = analysis.bayesian_bootstrap_analysis(df, col_name='Content_Nav_or_Search_Count', boot_reps=boot_reps, seed = seed, variant_dict=VARIANT_DICT)
np.array(a_bootstrap).mean()
np.array(a_bootstrap).mean() - (0.05 * np.array(a_bootstrap).mean())
np.array(b_bootstrap).mean()
print("A relative change of {0:.2f}% from control to intervention".format((np.array(b_bootstrap).mean()-np.array(a_bootstrap).mean())/np.array(a_bootstrap).mean()*100))
# ratio is vestigial but we keep it here for convenience
# it's actually a count but considers occurrences
ratio_stats = analysis.bb_hdi(a_bootstrap, b_bootstrap, alpha=alpha)
ratio_stats
ax = sns.distplot(b_bootstrap, label='B')
ax.errorbar(x=[ratio_stats['b_ci_low'], ratio_stats['b_ci_hi']], y=[2, 2], linewidth=5, c='teal', marker='o',
label='95% HDI B')
ax = sns.distplot(a_bootstrap, label='A', ax=ax, color='salmon')
ax.errorbar(x=[ratio_stats['a_ci_low'], ratio_stats['a_ci_hi']], y=[5, 5], linewidth=5, c='salmon', marker='o',
label='95% HDI A')
ax.set(xlabel='mean search/nav count per journey', ylabel='Density')
sns.despine()
legend = plt.legend(frameon=True, bbox_to_anchor=(0.75, 1), loc='best')
frame = legend.get_frame()
frame.set_facecolor('white')
plt.title("loved journeys")
plt.savefig('nav_counts_loved.png', dpi = 900, bbox_inches = 'tight')
# calculate the posterior for the difference between A's and B's ratio
# ypa prefix is vestigial from blog post
ypa_diff = np.array(b_bootstrap) - np.array(a_bootstrap)
# get the hdi
ypa_diff_ci_low, ypa_diff_ci_hi = bb.highest_density_interval(ypa_diff)
# the mean of the posterior
print('mean:', ypa_diff.mean())
print('low ci:', ypa_diff_ci_low, '\nhigh ci:', ypa_diff_ci_hi)
ax = sns.distplot(ypa_diff)
ax.plot([ypa_diff_ci_low, ypa_diff_ci_hi], [0, 0], linewidth=10, c='k', marker='o',
label='95% HDI')
ax.set(xlabel='Content_Nav_or_Search_Count', ylabel='Density',
title='The difference between B\'s and A\'s mean counts times occurrences')
sns.despine()
legend = plt.legend(frameon=True)
frame = legend.get_frame()
frame.set_facecolor('white')
plt.show();
# We count the number of values greater than 0 and divide by the total number
# of observations
# which returns us the the proportion of values in the distribution that are
# greater than 0, could act a bit like a p-value
(ypa_diff > 0).sum() / ypa_diff.shape[0]
# We count the number of values less than 0 and divide by the total number
# of observations
# which returns us the the proportion of values in the distribution that are
# less than 0, could act a bit like a p-value
(ypa_diff < 0).sum() / ypa_diff.shape[0]
(ypa_diff>0).sum()
(ypa_diff<0).sum() | _____no_output_____ | MIT | notebooks/analyses_reports/2019-03-23_to_03-27_ab4_llr_i_loved.ipynb | alphagov/govuk_ab_analysis |
proportion of journeys with a page sequence including content and related links onlyThere is no statistically significant difference in the proportion of journeys with a page sequence including content and related links only (including loops) between page variant A and page variant B \begin{equation*}\frac{\text{total number of journeys that only contain content pages and related links (i.e. no nav pages)}}{\text{total number of journeys}}\end{equation*} Overall | # if (Content_Nav_Search_Event_Sum == 0) that's our success
# Has_No_Nav_Or_Search == 1 is a success
# the problem is symmetrical so doesn't matter too much
sum(df.Has_No_Nav_Or_Search * df.Occurrences) / df.Occurrences.sum()
sns.distplot(df.Content_Nav_or_Search_Count.values); | _____no_output_____ | MIT | notebooks/analyses_reports/2019-03-23_to_03-27_ab4_llr_i_loved.ipynb | alphagov/govuk_ab_analysis |
Frequentist statistics Statistical significance | nav = analysis.z_prop(df, 'Has_No_Nav_Or_Search', VARIANT_DICT)
nav | _____no_output_____ | MIT | notebooks/analyses_reports/2019-03-23_to_03-27_ab4_llr_i_loved.ipynb | alphagov/govuk_ab_analysis |
Practical significance - uplift | # Due to multiple testing we used the Bonferroni correction for alpha
ci_low,ci_upp = analysis.zconf_interval_two_samples(nav['x_a'], nav['n_a'],
nav['x_b'], nav['n_b'], alpha = alpha)
diff = 100*(nav['x_b']/nav['n_b']-nav['x_a']/nav['n_a'])
print(' difference in proportions = {0:.2f}%'.format(diff))
print(' 95% Confidence Interval = ( {0:.2f}% , {1:.2f}% )'
.format(100*ci_low, 100*ci_upp))
print("There was a {0: .2f}% relative change in the proportion of journeys not using search/nav elements".format(100 * ((nav['p_b']-nav['p_a'])/nav['p_a']))) | There was a 0.29% relative change in the proportion of journeys not using search/nav elements
| MIT | notebooks/analyses_reports/2019-03-23_to_03-27_ab4_llr_i_loved.ipynb | alphagov/govuk_ab_analysis |
Average Journey Length (number of page views)There is no statistically significant difference in the average page list length of journeys (including loops) between page variant A and page variant B. | length_B = df[df.ABVariant == VARIANT_DICT['INTERVENTION_GROUP']].groupby(
'Page_List_Length').sum().iloc[:, 0]
lengthB_2 = length_B.reindex(np.arange(1, 501, 1), fill_value=0)
length_A = df[df.ABVariant == VARIANT_DICT['CONTROL_GROUP']].groupby(
'Page_List_Length').sum().iloc[:, 0]
lengthA_2 = length_A.reindex(np.arange(1, 501, 1), fill_value=0)
fig, ax = plt.subplots(figsize=(100, 30))
ax.set_yscale('log')
width = 0.4
ax = lengthB_2.plot.bar(label='B', position=1, width=width)
ax = lengthA_2.plot.bar(label='A', color='salmon', position=0, width=width)
plt.xlabel('length', fontsize=1)
legend = plt.legend(frameon=True)
frame = legend.get_frame()
frame.set_facecolor('white')
plt.show(); | _____no_output_____ | MIT | notebooks/analyses_reports/2019-03-23_to_03-27_ab4_llr_i_loved.ipynb | alphagov/govuk_ab_analysis |
Bayesian bootstrap for non-parametric hypotheses | # http://savvastjortjoglou.com/nfl-bayesian-bootstrap.html
# let's use mean journey length (could probably model parametrically but we use it for demonstration here)
# some journeys have length 500 and should probably be removed as they are likely bots or other weirdness
# exclude journeys longer than 500 as these could be automated traffic
df_short = df[df['Page_List_Length'] < 500]
print("The mean number of pages in an loved journey is {0:.3f}".format(sum(df.Page_List_Length*df.Occurrences)/df.Occurrences.sum()))
# for reproducibility, set the seed within this context
a_bootstrap, b_bootstrap = analysis.bayesian_bootstrap_analysis(df, col_name='Page_List_Length', boot_reps=boot_reps, seed = seed, variant_dict=VARIANT_DICT)
a_bootstrap_short, b_bootstrap_short = analysis.bayesian_bootstrap_analysis(df_short, col_name='Page_List_Length', boot_reps=boot_reps, seed = seed, variant_dict=VARIANT_DICT)
np.array(a_bootstrap).mean()
np.array(b_bootstrap).mean()
print("There's a relative change in page length of {0:.2f}% from A to B".format((np.array(b_bootstrap).mean()-np.array(a_bootstrap).mean())/np.array(a_bootstrap).mean()*100))
print(np.array(a_bootstrap_short).mean())
print(np.array(b_bootstrap_short).mean())
# Calculate a 95% HDI
a_ci_low, a_ci_hi = bb.highest_density_interval(a_bootstrap)
print('low ci:', a_ci_low, '\nhigh ci:', a_ci_hi)
ax = sns.distplot(a_bootstrap, color='salmon')
ax.plot([a_ci_low, a_ci_hi], [0, 0], linewidth=10, c='k', marker='o',
label='95% HDI')
ax.set(xlabel='Journey Length', ylabel='Density', title='Page Variant A Mean Journey Length')
sns.despine()
plt.legend();
# Calculate a 95% HDI
b_ci_low, b_ci_hi = bb.highest_density_interval(b_bootstrap)
print('low ci:', b_ci_low, '\nhigh ci:', b_ci_hi)
ax = sns.distplot(b_bootstrap)
ax.plot([b_ci_low, b_ci_hi], [0, 0], linewidth=10, c='k', marker='o',
label='95% HDI')
ax.set(xlabel='Journey Length', ylabel='Density', title='Page Variant B Mean Journey Length')
sns.despine()
legend = plt.legend(frameon=True)
frame = legend.get_frame()
frame.set_facecolor('white')
plt.show();
ax = sns.distplot(b_bootstrap, label='B')
ax = sns.distplot(a_bootstrap, label='A', ax=ax, color='salmon')
ax.set(xlabel='Journey Length', ylabel='Density')
sns.despine()
legend = plt.legend(frameon=True)
frame = legend.get_frame()
frame.set_facecolor('white')
plt.title("loved journeys")
plt.savefig('journey_length_loved.png', dpi = 900, bbox_inches = 'tight')
ax = sns.distplot(b_bootstrap_short, label='B')
ax = sns.distplot(a_bootstrap_short, label='A', ax=ax, color='salmon')
ax.set(xlabel='Journey Length', ylabel='Density')
sns.despine()
legend = plt.legend(frameon=True)
frame = legend.get_frame()
frame.set_facecolor('white')
plt.show(); | _____no_output_____ | MIT | notebooks/analyses_reports/2019-03-23_to_03-27_ab4_llr_i_loved.ipynb | alphagov/govuk_ab_analysis |
We can also measure the uncertainty in the difference between the Page Variants's Journey Length by subtracting their posteriors. | # calculate the posterior for the difference between A's and B's YPA
ypa_diff = np.array(b_bootstrap) - np.array(a_bootstrap)
# get the hdi
ypa_diff_ci_low, ypa_diff_ci_hi = bb.highest_density_interval(ypa_diff)
# the mean of the posterior
ypa_diff.mean()
print('low ci:', ypa_diff_ci_low, '\nhigh ci:', ypa_diff_ci_hi)
ax = sns.distplot(ypa_diff)
ax.plot([ypa_diff_ci_low, ypa_diff_ci_hi], [0, 0], linewidth=10, c='k', marker='o',
label='95% HDI')
ax.set(xlabel='Journey Length', ylabel='Density',
title='The difference between B\'s and A\'s mean Journey Length')
sns.despine()
legend = plt.legend(frameon=True)
frame = legend.get_frame()
frame.set_facecolor('white')
plt.show(); | _____no_output_____ | MIT | notebooks/analyses_reports/2019-03-23_to_03-27_ab4_llr_i_loved.ipynb | alphagov/govuk_ab_analysis |
We can actually calculate the probability that B's mean Journey Length was greater than A's mean Journey Length by measuring the proportion of values greater than 0 in the above distribution. | # We count the number of values greater than 0 and divide by the total number
# of observations
# which returns us the proportion of values in the distribution that are
# greater than 0, could act a bit like a p-value
(ypa_diff > 0).sum() / ypa_diff.shape[0]
# We count the number of values less than 0 and divide by the total number
# of observations
# which returns us the proportion of values in the distribution that are
# less than 0, could act a bit like a p-value
(ypa_diff < 0).sum() / ypa_diff.shape[0] | _____no_output_____ | MIT | notebooks/analyses_reports/2019-03-23_to_03-27_ab4_llr_i_loved.ipynb | alphagov/govuk_ab_analysis |
Some other analysis. Some of these results raised more questions, so here's some analysis (with metrics that weren't defined before looking at the other results, so they may not be statistically valid, but may be interesting nevertheless). Perhaps journey length is increasing because we're seeing fewer bouncers (journey length = 1): they are seeing a relevant link on their first page instead of giving up. Proportion of journeys that are length 1 | def is_one(x):
"""Compute whether a journey's length is 1."""
return x == 1
df['journey_length_1'] = df['Page_List_Length'].progress_apply(is_one) | 100%|██████████| 772387/772387 [00:00<00:00, 846978.07it/s]
| MIT | notebooks/analyses_reports/2019-03-23_to_03-27_ab4_llr_i_loved.ipynb | alphagov/govuk_ab_analysis |
Statistical significance | is_length_1 = analysis.z_prop(df, 'journey_length_1', VARIANT_DICT)
is_length_1
is_length_1['p-value'] < alpha | _____no_output_____ | MIT | notebooks/analyses_reports/2019-03-23_to_03-27_ab4_llr_i_loved.ipynb | alphagov/govuk_ab_analysis |
Practical significance | # Due to multiple testing we used the Bonferroni correction for alpha
ci_low,ci_upp = analysis.zconf_interval_two_samples(is_length_1['x_a'], is_length_1['n_a'],
is_length_1['x_b'], is_length_1['n_b'], alpha = alpha)
print(' difference in proportions = {0:.2f}%'.format(100*(is_length_1['p_b']-is_length_1['p_a'])))
print(' % relative change in proportions = {0:.2f}%'.format(100*((is_length_1['p_b']-is_length_1['p_a'])/is_length_1['p_a'])))
print(' 95% Confidence Interval = ( {0:.2f}% , {1:.2f}% )'
.format(100*ci_low, 100*ci_upp)) | difference in proportions = -0.31%
% relative change in proportions = -0.63%
95% Confidence Interval = ( -0.49% , -0.13% )
| MIT | notebooks/analyses_reports/2019-03-23_to_03-27_ab4_llr_i_loved.ipynb | alphagov/govuk_ab_analysis |
Average journey length where length > 1 | # for reproducibility, set the seed within this context
a_bootstrap_gt_1, b_bootstrap_gt_1 = analysis.bayesian_bootstrap_analysis(df[df['journey_length_1'] == False], col_name='Page_List_Length', boot_reps=boot_reps, seed = seed, variant_dict=VARIANT_DICT)
# a_bootstrap_short_gt_1, b_bootstrap_short_gt_1 = analysis.bayesian_bootstrap_analysis(df_short, col_name='Page_List_Length', boot_reps=boot_reps, seed = seed, variant_dict=VARIANT_DICT)
np.array(a_bootstrap_gt_1).mean()
np.array(b_bootstrap_gt_1).mean()
print("There's a relative change in page length of {0:.2f}% from A to B".format((np.array(b_bootstrap_gt_1).mean()-np.array(a_bootstrap_gt_1).mean())/np.array(a_bootstrap_gt_1).mean()*100))
# calculate the posterior for the difference between A's and B's YPA
ypa_diff = np.array(b_bootstrap_gt_1) - np.array(a_bootstrap_gt_1)
# get the hdi
ypa_diff_ci_low, ypa_diff_ci_hi = bb.highest_density_interval(ypa_diff)
print('low ci:', ypa_diff_ci_low, '\nhigh ci:', ypa_diff_ci_hi)
# We count the number of values greater than 0 and divide by the total number
# of observations
# which returns us the proportion of values in the distribution that are
# greater than 0, could act a bit like a p-value
(ypa_diff > 0).sum() / ypa_diff.shape[0]
# We count the number of values less than 0 and divide by the total number
# of observations
# which returns us the proportion of values in the distribution that are
# less than 0, could act a bit like a p-value
(ypa_diff < 0).sum() / ypa_diff.shape[0] | _____no_output_____ | MIT | notebooks/analyses_reports/2019-03-23_to_03-27_ab4_llr_i_loved.ipynb | alphagov/govuk_ab_analysis |
LeNet  | import torch
import random
import numpy as np
random.seed(0)
np.random.seed(0)
torch.manual_seed(0)
torch.cuda.manual_seed(0)
torch.backends.cudnn.deterministic = True
import torchvision.datasets
MNIST_train = torchvision.datasets.MNIST('./', download=True, train=True)
MNIST_test = torchvision.datasets.MNIST('./', download=True, train=False)
X_train = MNIST_train.train_data
y_train = MNIST_train.train_labels
X_test = MNIST_test.test_data
y_test = MNIST_test.test_labels
X_train
X_train.shape
len(y_train), len(y_test)
import matplotlib.pyplot as plt
plt.imshow(X_train[0, :, :])
plt.show()
print(y_train[0]) | _____no_output_____ | MIT | module05_mnist_conv.ipynb | YUMVOLKOVA/Neural_Networks_and_CV |
We want to pass each image as a three-dimensional tensor (channels x height x width), so we add a channel dimension. | X_train = X_train.unsqueeze(1).float()
X_test = X_test.unsqueeze(1).float()
X_train.shape
X_train
class LeNet5(torch.nn.Module):
def __init__(self):
super(LeNet5, self).__init__()
self.conv1 = torch.nn.Conv2d(
            in_channels=1, out_channels=6, kernel_size=5, padding=2) # the images are 28x28; pad so the spatial size is preserved
self.act1 = torch.nn.Tanh()
self.pool1 = torch.nn.AvgPool2d(kernel_size=2, stride=2)
self.conv2 = torch.nn.Conv2d(
in_channels=6, out_channels=16, kernel_size=5, padding=0)
self.act2 = torch.nn.Tanh()
self.pool2 = torch.nn.AvgPool2d(kernel_size=2, stride=2)
self.fc1 = torch.nn.Linear(5 * 5 * 16, 120)
self.act3 = torch.nn.Tanh()
self.fc2 = torch.nn.Linear(120, 84)
self.act4 = torch.nn.Tanh()
self.fc3 = torch.nn.Linear(84, 10)
def forward(self, x):
x = self.conv1(x)
x = self.act1(x)
x = self.pool1(x)
x = self.conv2(x)
x = self.act2(x)
x = self.pool2(x)
x = x.view(x.size(0), x.size(1) * x.size(2) * x.size(3))
x = self.fc1(x)
x = self.act3(x)
x = self.fc2(x)
x = self.act4(x)
x = self.fc3(x)
return x
lenet5 = LeNet5() | _____no_output_____ | MIT | module05_mnist_conv.ipynb | YUMVOLKOVA/Neural_Networks_and_CV |
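A quick check of the flattening step inside forward() above (a small sketch using the shapes LeNet-5 produces after the second pooling layer): | # each image ends up as a 16 x 5 x 5 feature map, so flattening gives 16*5*5 = 400 values
demo = torch.ones(4, 16, 5, 5)
print(demo.view(demo.size(0), -1).shape)  # torch.Size([4, 400])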
PyTorch tensors have a view function that reshapes a tensor to the required dimensions. The first dimension is x.size(0), the batch size, and the rest is flattened into a single dimension, so we simply multiply the remaining three sizes together, which gives the 400 here (16 * 5 * 5). | device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
lenet5 = lenet5.to(device)
loss = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(lenet5.parameters(), lr=1.0e-3)
batch_size = 100
test_accuracy_history = []
test_loss_history = []
X_test = X_test.to(device)
y_test = y_test.to(device)
for epoch in range(10000):
order = np.random.permutation(len(X_train))
for start_index in range(0, len(X_train), batch_size):
optimizer.zero_grad()
batch_indexes = order[start_index:start_index+batch_size]
X_batch = X_train[batch_indexes].to(device)
y_batch = y_train[batch_indexes].to(device)
preds = lenet5.forward(X_batch)
loss_value = loss(preds, y_batch)
loss_value.backward()
optimizer.step()
test_preds = lenet5.forward(X_test)
test_loss_history.append(loss(test_preds, y_test).data.cpu())
accuracy = (test_preds.argmax(dim=1) == y_test).float().mean().data.cpu()
test_accuracy_history.append(accuracy)
print(accuracy)
lenet5.forward(X_test)
plt.plot(test_accuracy_history);
# plt.plot(test_loss_history); | _____no_output_____ | MIT | module05_mnist_conv.ipynb | YUMVOLKOVA/Neural_Networks_and_CV |
Exercise | import torch
N = 4
C = 3
C_out = 10
H = 8
W = 16
x = torch.ones((N, C, H, W))
x.shape
# torch.Size([4, 10, 8, 16])
out1 = torch.nn.Conv2d(C, C_out, kernel_size=(3, 3), padding=1)(x)
print(out1.shape) # for self-checking
# torch.Size([4, 10, 8, 16])
out2 = torch.nn.Conv2d(C, C_out, kernel_size=(5, 5), padding=2)(x)
print(out2.shape) # for self-checking
# torch.Size([4, 10, 8, 16])
out3 = torch.nn.Conv2d(C, C_out, kernel_size=(7, 7), padding=3)(x)
print(out3.shape) # for self-checking
# torch.Size([4, 10, 8, 16])
out4 = torch.nn.Conv2d(C, C_out, kernel_size=(9, 9), padding=4)(x)
print(out4.shape) # for self-checking
# torch.Size([4, 10, 8, 16])
out5 = torch.nn.Conv2d(C, C_out, kernel_size=(3, 5), padding=(1,2))(x)
print(out5.shape) # for self-checking
# torch.Size([4, 10, 22, 30])
out6 = torch.nn.Conv2d(C, C_out, kernel_size=(3, 3), padding=(8,8))(x)
print(out6.shape) # for self-checking
# torch.Size([4, 10, 7, 15])
out7 = torch.nn.Conv2d(C, C_out, kernel_size=(4, 4), padding=1)(x)
print(out7.shape) # for self-checking
# torch.Size([4, 10, 9, 17])
out8 = torch.nn.Conv2d(C, C_out, kernel_size=(2, 2), padding=1)(x)
print(out8.shape) # for self-checking | torch.Size([4, 10, 9, 17])
| MIT | module05_mnist_conv.ipynb | YUMVOLKOVA/Neural_Networks_and_CV |
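All of the expected shapes above follow from the standard stride-1 output-size formula H_out = H + 2 * padding - kernel + 1 (and likewise for the width). For example, for out8: 8 + 2*1 - 2 + 1 = 9 and 16 + 2*1 - 2 + 1 = 17, matching torch.Size([4, 10, 9, 17]).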
In this notebook the following steps are taken: 1. Find the best hyperparameters for the estimator 2. Find the most important features using the tuned estimator 3. Compare the r2 of the tuned full model and the model with selected features 4. A further step is finding a tuned model with the selected features and comparing the hyperparameters | #import data
Data=pd.read_csv("St.Johns-Transfomed-Data.csv")
X = Data.iloc[:,:-1]
y = Data.iloc[:,-1]
#split test and training set. total number of data is 330 so the test size cannot be large
np.random.seed(60)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20,
random_state = 1000)
regressors = {}
regressors.update({"XGBoost": XGBRegressor(random_state=1000)})
FEATURE_IMPORTANCE = {"XGBoost"}
#Define range of hyperparameters for estimator
np.random.seed(60)
parameters = {}
parameters.update({"XGBoost": {
"regressor__learning_rate":[0.001,0.01,0.02,0.1,0.25,0.5,1],
"regressor__gamma":[0.001,0.01,0.02,0.1,0.25,0.5,1],
"regressor__max_depth" : [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],
"regressor__reg_alpha":[0.001,0.01,0.02,0.1],
"regressor__reg_lambda":[0.001,0.01,0.02,0.1],
"regressor__min_child_weight":[0.001,0.01,0.02,0.1]}
})
# Make correlation matrix
corr_matrix = X_train.corr(method = "spearman").abs()
# Draw the heatmap
sns.set(font_scale = 1.0)
f, ax = plt.subplots(figsize=(11, 9))
sns.heatmap(corr_matrix, cmap= "YlGnBu", square=True, ax = ax)
f.tight_layout()
plt.savefig("correlation_matrix.png", dpi = 1080)
# Select upper triangle of matrix
upper = corr_matrix.where(np.triu(np.ones(corr_matrix.shape), k = 1).astype(bool))
# Find index of feature columns with correlation greater than 0.8
to_drop = [column for column in upper.columns if any(upper[column] > 0.8)]
# Drop features
X_train = X_train.drop(to_drop, axis = 1)
X_test = X_test.drop(to_drop, axis = 1)
X_train
FEATURE_IMPORTANCE = {"XGBoost"}
selected_regressor = "XGBoost"
regressor = regressors[selected_regressor]
results = {}
for regressor_label, regressor in regressors.items():
# Print message to user
print(f"Now tuning {regressor_label}.")
scaler = StandardScaler()
steps = [("scaler", scaler), ("regressor", regressor)]
pipeline = Pipeline(steps = steps)
#Define parameters that we want to use in gridsearch cv
param_grid = parameters[selected_regressor]
# Initialize GridSearch object for estimator
gscv = RandomizedSearchCV(pipeline, param_grid, cv = 3, n_jobs= -1, verbose = 1, scoring = "r2", n_iter=20)
# Fit gscv (Tunes estimator)
print(f"Now tuning {selected_regressor}. Go grab a beer or something.")
gscv.fit(X_train, np.ravel(y_train))
#Getting the best hyperparameters
best_params = gscv.best_params_
best_params
#Getting the best score of model
best_score = gscv.best_score_
best_score
#Check overfitting of the estimator
from sklearn.model_selection import cross_val_score
mod = XGBRegressor(gamma= 0.001,
learning_rate= 0.5,
max_depth=3,
min_child_weight= 0.001,
reg_alpha=0.1,
reg_lambda = 0.1 ,random_state=10000)
scores_test = cross_val_score(mod, X_test, y_test, scoring='r2', cv=5)
scores_test
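# Added comment: the keys in best_params carry the pipeline step prefix "regressor__" (11 characters);
# item[11:] below strips that prefix so the values can be passed straight to the regressor's set_params.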
tuned_params = {item[11:]: best_params[item] for item in best_params}
regressor.set_params(**tuned_params)
#Find r2 of the model with all features (Model is tuned for all features)
results={}
model=regressor.set_params(gamma= 0.001,
learning_rate= 0.5,
max_depth=3,
min_child_weight= 0.001,
reg_alpha=0.1,
reg_lambda = 0.1 ,random_state=10000)
model.fit(X_train,y_train)
y_pred = model.predict(X_test)
R2 = metrics.r2_score(y_test, y_pred)
results = {"classifier": model,
"Best Parameters": best_params,
"Training r2": best_score*100,
"Test r2": R2*100}
results
# Select Features using RFECV
class PipelineRFE(Pipeline):
# Source: https://ramhiser.com/post/2018-03-25-feature-selection-with-scikit-learn-pipeline/
def fit(self, X, y=None, **fit_params):
super(PipelineRFE, self).fit(X, y, **fit_params)
self.feature_importances_ = self.steps[-1][-1].feature_importances_
return self
steps = [("scaler", scaler), ("regressor", regressor)]
pipe = PipelineRFE(steps = steps)
np.random.seed(60)
# Initialize RFECV object
feature_selector = RFECV(pipe, cv = 5, step = 1, verbose = 1)
# Fit RFECV
feature_selector.fit(X_train, np.ravel(y_train))
# Get selected features
feature_names = X_train.columns
selected_features = feature_names[feature_selector.support_].tolist()
performance_curve = {"Number of Features": list(range(1, len(feature_names) + 1)),
"R2": feature_selector.grid_scores_}
performance_curve = pd.DataFrame(performance_curve)
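# Added note: newer scikit-learn versions replace RFECV.grid_scores_ with cv_results_["mean_test_score"],
# so the dictionary above may need that change depending on the installed version.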
# Performance vs Number of Features
# Set graph style
sns.set(font_scale = 1.75)
sns.set_style({"axes.facecolor": "1.0", "axes.edgecolor": "0.85", "grid.color": "0.85",
"grid.linestyle": "-", 'axes.labelcolor': '0.4', "xtick.color": "0.4",
'ytick.color': '0.4'})
colors = sns.color_palette("RdYlGn", 20)
line_color = colors[3]
marker_colors = colors[-1]
# Plot
f, ax = plt.subplots(figsize=(13, 6.5))
sns.lineplot(x = "Number of Features", y = "R2", data = performance_curve,
color = line_color, lw = 4, ax = ax)
sns.regplot(x = performance_curve["Number of Features"], y = performance_curve["R2"],
color = marker_colors, fit_reg = False, scatter_kws = {"s": 200}, ax = ax)
# Axes limits
plt.xlim(0.5, len(feature_names)+0.5)
plt.ylim(0.60, 1)
# Generate a bolded horizontal line at y = 0
ax.axhline(y = 0.625, color = 'black', linewidth = 1.3, alpha = .7)
# Turn frame off
ax.set_frame_on(False)
# Tight layout
plt.tight_layout()
#Define new training and test sets based on the features selected by RFECV
X_train_rfecv = X_train[selected_features]
X_test_rfecv= X_test[selected_features]
np.random.seed(60)
regressor.fit(X_train_rfecv, np.ravel(y_train))
#Finding important features
np.random.seed(60)
feature_importance = pd.DataFrame(selected_features, columns = ["Feature Label"])
feature_importance["Feature Importance"] = regressor.feature_importances_
feature_importance = feature_importance.sort_values(by="Feature Importance", ascending=False)
feature_importance
# Initialize GridSearch object for model with selected features
np.random.seed(60)
gscv = RandomizedSearchCV(pipeline, param_grid, cv = 3, n_jobs= -1, verbose = 1, scoring = "r2", n_iter=20)
#Tuning the XGBoost regressor with the selected features
np.random.seed(60)
gscv.fit(X_train_rfecv,y_train)
#Getting the best parameters of model with selected features
best_params = gscv.best_params_
best_params
#Getting the score of model with selected features
best_score = gscv.best_score_
best_score
#Check overfitting of the tuned model with selected features
from sklearn.model_selection import cross_val_score
mod = XGBRegressor(gamma= 0.001,
learning_rate= 0.5,
max_depth=3,
min_child_weight= 0.001,
reg_alpha=0.1,
reg_lambda = 0.1 ,random_state=10000)
scores_test = cross_val_score(mod, X_test_rfecv, y_test, scoring='r2', cv=5)
scores_test
results={}
model=regressor.set_params(gamma= 0.001,
learning_rate= 0.5,
max_depth=3,
min_child_weight= 0.001,
reg_alpha=0.1,
reg_lambda = 0.1 ,random_state=10000)
model.fit(X_train_rfecv,y_train)
y_pred = model.predict(X_test_rfecv)
R2 = metrics.r2_score(y_test, y_pred)
results = {"classifier": model,
"Best Parameters": best_params,
"Training r2": best_score*100,
"Test r2": R2*100}
results | _____no_output_____ | Unlicense | XGBoost-RFECV-RoF-St.Johns.ipynb | SadafGharaati/Important-factors |
Building and submitting search queries to AGRIS. This script submits a search query to the AGRIS database and retrieves the list of URLs (or a subset of the returned URLs) pointing to the search results. The result URLs are stored in a txt file so that they can later be used for scraping the AGRIS database for relevant content (i.e., abstracts of publications available from that database) for text annotation-related purposes. The first step in submitting a search query to AGRIS and receiving the result URLs is to import the Python libraries and packages needed for this task. | import requests
from bs4 import BeautifulSoup | _____no_output_____ | MIT | Building_and_submitting_search_queries_to_AGRIS.ipynb | herculespan/customNERforAgriEntities |
The findNumOfTokens function counts the number of tokens (words) in the subject string provided for the search; this count determines how the subject part of the search query is assembled later on. | def findNumOfTokens(string):
numOfTokens = len(string.split())
return numOfTokens | _____no_output_____ | MIT | Building_and_submitting_search_queries_to_AGRIS.ipynb | herculespan/customNERforAgriEntities |
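A quick usage illustration (added example; the subject string is hypothetical): findNumOfTokens("precision agriculture") returns 2, so the query builder below treats it as a multi-token subject.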
When building a search query to submit to the AGRIS database, a list of search parameters needs to be configured, i.e., assigned the values that will be used for executing the search and retrieving the result URLs. These parameters are: subject (what the text documents/abstracts to be retrieved need to be about); result type (AGRIS distinguishes two predefined types, "Publications" and "Datasets"); start year (the year from which results should be identified and returned); end year (the year up to which results should be identified and returned); country name (the country that the content of the retrieved resources should relate to); language (the language of the content made available from the retrieved resources); and content type (the type of the content -- theses, journal papers, reports, etc. -- made available from the retrieved resources). To build the search query from the values given to these parameters (i.e., the configurable part of the query), we define and use the buildConfigurableQueryStr function, which also takes into consideration the number of tokens in the subject when constructing the value of the subject parameter. | def buildConfigurableQueryStr (subject, resultType, startYear, endYear, countryName, language, contentType):
numOfTokensInSubj = findNumOfTokens(subject)
if numOfTokensInSubj == 1:
filterString = "filterString=%2Bsubject%3A%28" + subject + "%29"
else:
filterString = ""
for subjectToken in subject.split():
filterString = filterString + "filterString=%2Bsubject%3A%28" + subjectToken + "%29"
typeresultsField = "typeresultsField=" + resultType
fromDate = "fromDate=" + str(startYear)
toDate = "toDate=" + str(endYear)
if countryName == "0":
country = "country=" + str(countryName)
else:
country = "country=" + countryName
if language == "0":
lang = "lang=" + str(0)
else:
lang = "lang=" + language
if contentType == "0":
typeToAdd = "typeToAdd=" + str(0)
else:
typeToAdd = "typeToAdd=" + contentType
configurableQueryStr = filterString + "&" + typeresultsField + "&" + fromDate + "&" + toDate + "&" + country + "&" + lang + "&" + typeToAdd
return configurableQueryStr | _____no_output_____ | MIT | Building_and_submitting_search_queries_to_AGRIS.ipynb | herculespan/customNERforAgriEntities |
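A quick illustration of the string this function produces (added example; the argument values are hypothetical):
# buildConfigurableQueryStr("wheat", "Publications", 2000, 2021, "0", "English", "0") returns
# "filterString=%2Bsubject%3A%28wheat%29&typeresultsField=Publications&fromDate=2000&toDate=2021&country=0&lang=English&typeToAdd=0"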
Apart from the configurable part of the search query to be submitted to the AGRIS database, there is also a part of the search query consisting of parameters that receive default values (more specifically, most of those parameters receive no values at all!). This part of the search query can be named as the default part of the search query. The parameters receiving no values at all or specific values by default are: (i) agrovocString; (ii) agrovocToRemove; (iii) advQuery; (iv) centerString; (v) centerToRemove; (vi) filterToRemove; (vii) typeString; (viii) typeToRemove; and (ix) filterQuery. | def AGRISqueryBuilder ():
queryStr = ""
# list of query parameters receiving no values
paramsWithNullValues = ["agrovocString=", "agrovocToRemove=", "advQuery=", "centerString=", "centerToRemove=",
"filterToRemove=", "typeString=", "typeToRemove=", "filterQuery="]
# concatenating the parameters with no values to start assembling the AGRIS query string
for param in paramsWithNullValues:
queryStr = queryStr + param + "&"
# list of query parameters with default values, such as onlyFullText, enableField and aggregatorField
# onlyFullText = false --> access resources that may not provide access to a full-text version!
# enableField = Disable --> multi-lingual search is disabled!
# aggregatorField = Disable --> include records from aggregators!
paramsWithDefaultValues = ["onlyFullText=false", "operator=Required", "field=0", "enableField=Disable",
"aggregatorField=Disable"]
for param in paramsWithDefaultValues:
queryStr = queryStr + param + "&"
return queryStr | _____no_output_____ | MIT | Building_and_submitting_search_queries_to_AGRIS.ipynb | herculespan/customNERforAgriEntities |
By calling the AGRISqueryBuilder function, we are able to create the first part of the search query that will be submitted to the AGRIS database (i.e., the default part of the search query containing the search parameters that receive default values or no value at all). | queryStr_1 = AGRISqueryBuilder() | _____no_output_____ | MIT | Building_and_submitting_search_queries_to_AGRIS.ipynb | herculespan/customNERforAgriEntities |
Assignment of values to the search parameters used for creating the configurable part of the search query. Step 1: Subject of the search query. | subject = input("Type in the subject of your search in AGRIS: ")
| MIT | Building_and_submitting_search_queries_to_AGRIS.ipynb | herculespan/customNERforAgriEntities |
Step 2: Type of the results to be retrieved (namely: "Publications", "Datasets" or both). | resultType = input("Type in the type of results (i.e., 'Publications', 'Datasets', 'Both') you are interested in: ") | Type in the type of results (i.e., 'Publications', 'Datasets', 'Both') you are interested in: Publications
| MIT | Building_and_submitting_search_queries_to_AGRIS.ipynb | herculespan/customNERforAgriEntities |
Step 3: Starting year from which results should become available. | startYear = input("Find resources that have become available from this year and on: ") | Find resources that have become available from this year and on: 2000
| MIT | Building_and_submitting_search_queries_to_AGRIS.ipynb | herculespan/customNERforAgriEntities |
Step 4: Year till which results should become available (i.e., end year). | endYear = input("Find resources that have become available up until this year: ") | Find resources that have become available up until this year: 2021
| MIT | Building_and_submitting_search_queries_to_AGRIS.ipynb | herculespan/customNERforAgriEntities |
Step 5: The name of the country that the content of the resources to be retrieved should relate to. | countryName = input("Type in the name of the country the resource's content relates to. If not relevant, provide 0 as a value: ") | Type in the name of the country the resource's content relates to. If not relevant, provide 0 as a value: 0
| MIT | Building_and_submitting_search_queries_to_AGRIS.ipynb | herculespan/customNERforAgriEntities |
Step 6: The language of the content that will become available from the resources to be retreved. | language = input("Type in the language in which content should be made available. In the case of no particular preference provide 0 as a value: ") | Type in the language in which content should be made available. In the case of no particular preference provide 0 as a value: English
| MIT | Building_and_submitting_search_queries_to_AGRIS.ipynb | herculespan/customNERforAgriEntities |
Step 7: The type of the content to be retrieved (pertinent to the "Publications" result type - potential values are: theses, journal papers, reports, etc.). | contentType = input("Provide the type of content you are interested in (applies only to Publications). If not relevant, provide 0 as a value: ")
| MIT | Building_and_submitting_search_queries_to_AGRIS.ipynb | herculespan/customNERforAgriEntities |
By calling the buildConfigurableQueryStr function, we are able to create the second part of the search query that will be submitted to the AGRIS database (i.e., the configurable part of the search query containing the values provided to the search parameters as part of the steps executed above). | queryStr_2 = buildConfigurableQueryStr(subject, resultType, startYear, endYear, countryName, language, contentType) | _____no_output_____ | MIT | Building_and_submitting_search_queries_to_AGRIS.ipynb | herculespan/customNERforAgriEntities |
The search query (i.e., the baseQueryStr) is built by concatenating the default (i.e., queryStr_1) and the configurable part (queryStr_2) of it. | baseQueryStr = queryStr_1 + queryStr_2 | _____no_output_____ | MIT | Building_and_submitting_search_queries_to_AGRIS.ipynb | herculespan/customNERforAgriEntities |
Display the search query (i.e, the baseQueryStr) to be finally submitted to the AGRIS database. | baseQueryStr | _____no_output_____ | MIT | Building_and_submitting_search_queries_to_AGRIS.ipynb | herculespan/customNERforAgriEntities |
The constructed search query gets submitted to the AGRIS database. | response = requests.get("https://agris.fao.org/agris-search/biblio.do?" + baseQueryStr) | _____no_output_____ | MIT | Building_and_submitting_search_queries_to_AGRIS.ipynb | herculespan/customNERforAgriEntities |
Printing out the status code of the response provided to the query that has been submitted in order to receive feedback on whether the query submission has been successful or not (a response value equal to 200 reveals a successful query submission attempt!). | response.status_code | _____no_output_____ | MIT | Building_and_submitting_search_queries_to_AGRIS.ipynb | herculespan/customNERforAgriEntities |
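A small defensive variant one could use here (added sketch, assuming we want to stop on a failed request; not in the original notebook):
if response.status_code != 200:
    response.raise_for_status()  # raises an HTTPError describing the failing status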
Parsing content. The retrieved AGRIS results page is parsed in order to fetch the number of search results. To do so, a parsing object (an instance of the BeautifulSoup class) is created and used to look for the element with the "pull-left grey-scale-1 last" class, which is the part of the results page where the number of search results is shown; calling the find method on the parsing object returns the record that contains this number. | soup = BeautifulSoup(response.content, "html.parser")
numOfResultsRecord = soup.find("div", class_ = "pull-left grey-scale-1 last") | _____no_output_____ | MIT | Building_and_submitting_search_queries_to_AGRIS.ipynb | herculespan/customNERforAgriEntities |
The number of search results is retrieved by splitting the respective record into pieces, taking the last piece, and converting it to an integer. A check is also made for the presence of the "," thousands separator in the number: if it is present, the "," sign is removed before the conversion; otherwise the value is converted directly. | if "," in numOfResultsRecord.find("p").find("strong").text.split()[-1]:
    numOfResults = int(numOfResultsRecord.find("p").find("strong").text.split()[-1].replace(",", ""))
else:
    numOfResults = int(numOfResultsRecord.find("p").find("strong").text.split()[-1]) | _____no_output_____ | MIT | Building_and_submitting_search_queries_to_AGRIS.ipynb | herculespan/customNERforAgriEntities |
Displaying the number of the search results that have been retrieved. | numOfResults | _____no_output_____ | MIT | Building_and_submitting_search_queries_to_AGRIS.ipynb | herculespan/customNERforAgriEntities |
A quick check is made to ensure that the search actually returned results. If the number of search results is not 0, the user is asked for the number of result URLs to keep (useful when there are too many results and not all of them are needed). | if numOfResults != 0:
numOfResultsToKeep = int(input("Type in the number of results to keep: "))
else:
print("No results have been found!") | Type in the number of results to keep: 1000
| MIT | Building_and_submitting_search_queries_to_AGRIS.ipynb | herculespan/customNERforAgriEntities |
The section of the script below calculates the number of iterations needed to skim through all the search results to be kept (based on the number provided above). This is necessary because the AGRIS database returns search results in batches of 10. The following cases are considered: the number of results to keep is exactly 10; it is more than 0 and less than 10; it is a multiple of 10; it is more than 10 but not an exact multiple of 10. | if (numOfResultsToKeep // 10 == 1):
numOfIterations = 1
elif (numOfResultsToKeep // 10 == 0) and (numOfResultsToKeep % 10 > 0 and numOfResultsToKeep % 10 < 10):
numOfIterations = 1
else:
if numOfResultsToKeep % 10 == 0:
numOfIterations = numOfResultsToKeep // 10
else:
numOfIterations = (numOfResultsToKeep // 10) + 1 | _____no_output_____ | MIT | Building_and_submitting_search_queries_to_AGRIS.ipynb | herculespan/customNERforAgriEntities |
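The same logic can be expressed as a single ceiling division; a hypothetical helper (added sketch, not in the original notebook) is shown for comparison:
def computeNumOfIterations(resultsToKeep):
    # AGRIS serves results in pages of 10, so we need ceil(resultsToKeep / 10) iterations
    return -(-resultsToKeep // 10)  # e.g. 7 -> 1, 10 -> 1, 25 -> 3, 1000 -> 100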
Printing out the number of iterations needed to retrieve the required number of search result URLs. | numOfIterations
Creating a text file to store the search result URLs. | fileName = input("Type in the name of the file to use of storing the query result URLs: ")
fullFileName = fileName + ".txt"
file = open (fullFileName, "w") | _____no_output_____ | MIT | Building_and_submitting_search_queries_to_AGRIS.ipynb | herculespan/customNERforAgriEntities |
Iterating over the search results, retrieving the search result URLs, and writing/storing the search result URLs into the text file. To execute the iteration, the index from which results should be scanned from is asked. | startIndex = int(input("Index to start the retrieval of search results from: ")) | Index to start the retrival of search results from: 0
| MIT | Building_and_submitting_search_queries_to_AGRIS.ipynb | herculespan/customNERforAgriEntities |
Iteration over the search results (from the provided index onward) and storage of the retrieved result URLs in the text file. | if numOfResultsToKeep >= 10:
if startIndex == 0:
iteration = 1
response = requests.get("https://agris.fao.org/agris-search/biblio.do?" + baseQueryStr + "&" + "startIndexSearch=")
soup = BeautifulSoup(response.content, "html.parser")
resultUrls = soup.find_all("div", class_="col-md-10 col-sm-10 col-xs-12 inner")
for resultUrl in resultUrls:
url = resultUrl.find("a")
file.write(url["href"] + "\n")
iteration +=1
while iteration <= numOfIterations:
startIndex += 10
response = requests.get("https://agris.fao.org/agris-search/biblio.do?" + baseQueryStr + "&" + "startIndexSearch=" + str(startIndex))
soup = BeautifulSoup(response.content, "html.parser")
resultUrls = soup.find_all("div", class_="col-md-10 col-sm-10 col-xs-12 inner")
for resultUrl in resultUrls:
url = resultUrl.find("a")
file.write(url["href"] + "\n")
iteration +=1
else:
iteration = 1
while iteration <= numOfIterations:
response = requests.get("https://agris.fao.org/agris-search/biblio.do?" + baseQueryStr + "&" + "startIndexSearch=" + str(startIndex))
soup = BeautifulSoup(response.content, "html.parser")
resultUrls = soup.find_all("div", class_="col-md-10 col-sm-10 col-xs-12 inner")
for resultUrl in resultUrls:
url = resultUrl.find("a")
file.write(url["href"] + "\n")
iteration += 1
startIndex +=10
else:
if startIndex == 0:
response = requests.get("https://agris.fao.org/agris-search/biblio.do?" + baseQueryStr + "&" + "startIndexSearch=")
soup = BeautifulSoup(response.content, "html.parser")
resultUrls = soup.find_all("div", class_="col-md-10 col-sm-10 col-xs-12 inner")
counter = 0
for resultUrl in resultUrls:
if counter < numOfResultsToKeep:
counter +=1
url = resultUrl.find("a")
file.write(url["href"] + "\n")
else:
break
else:
response = requests.get("https://agris.fao.org/agris-search/biblio.do?" + baseQueryStr + "&" + "startIndexSearch=" + str(startIndex))
soup = BeautifulSoup(response.content, "html.parser")
resultUrls = soup.find_all("div", class_="col-md-10 col-sm-10 col-xs-12 inner")
counter = 0
for resultUrl in resultUrls:
if counter < numOfResultsToKeep:
counter +=1
url = resultUrl.find("a")
file.write(url["href"] + "\n")
else:
break | _____no_output_____ | MIT | Building_and_submitting_search_queries_to_AGRIS.ipynb | herculespan/customNERforAgriEntities |
Closing the text file. | file.close() | _____no_output_____ | MIT | Building_and_submitting_search_queries_to_AGRIS.ipynb | herculespan/customNERforAgriEntities |
ENGR 202 Solver | # importing the needed modules
import cmath as c
import math as m | _____no_output_____ | MIT | Applications/ENGR 202 Solver.ipynb | smithrockmaker/ENGR213 |
Solve for $X_C$ | # Where f is frequency, cap is the value of the capacitor, and xcap is the capacitive reactance
f = 5*10**3
cap = 50*(10**-9)
xcap = 1/-(2*m.pi*f*cap)
print("Xc =",xcap) | Xc = -636.6197723675813
| MIT | Applications/ENGR 202 Solver.ipynb | smithrockmaker/ENGR213 |
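For reference (added worked value): $X_C = \frac{1}{2\pi f C} = \frac{1}{2\pi (5\times 10^{3})(50\times 10^{-9})} \approx 636.6\ \Omega$; the code stores it as a negative number to reflect the $-jX_C$ sign convention for capacitive reactance.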
Solve for $X_L$ | # Where f is the frequency, l is the inductor value, and xind is the inductive reactance
f = 5*10**3
l = 200*(10**-3)
xind = 2*m.pi*f*l
print("XL =",xind) | XL = 6283.185307179587
| MIT | Applications/ENGR 202 Solver.ipynb | smithrockmaker/ENGR213 |
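For reference (added worked value): $X_L = 2\pi f L = 2\pi (5\times 10^{3})(200\times 10^{-3}) \approx 6283.2\ \Omega$, matching the printed result.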
Define a complex number in rectangular form | # All values except r pulled from previous cells
# Solutions are given in Rectangular form
# Negative value for Xc already accounted for
r = 100 # Resistor value
x_c = r + 1j*(xcap)
print("For capacitor -",x_c)
x_i = r + 1j*(xind)
print("For inductor -",x_i) | For capacitor - (100-636.6197723675813j)
For inductor - (100+6283.185307179587j)
| MIT | Applications/ENGR 202 Solver.ipynb | smithrockmaker/ENGR213 |
Convert from Rectangular to Polar | # Answers are given in magnitude and radians. Convert if degrees are necessary.
y = c.polar(x_c)
print("Magnitude, radians",y)
y = c.polar(x_i)
print("Magnitude, radians",y) | Magnitude, radians (644.4258953280439, -1.414989826825355)
Magnitude, radians (6283.981031508405, 1.5548821760954434)
| MIT | Applications/ENGR 202 Solver.ipynb | smithrockmaker/ENGR213 |
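If degrees are needed, the math module can convert the angle (added illustration):
m.degrees(-1.414989826825355)  # ≈ -81.07 degrees for the capacitive case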