markdown (stringlengths 0–1.02M) | code (stringlengths 0–832k) | output (stringlengths 0–1.02M) | license (stringlengths 3–36) | path (stringlengths 6–265) | repo_name (stringlengths 6–127)
---|---|---|---|---|---
[this doc on github](https://github.com/dotnet/interactive/tree/main/samples/notebooks/powershell) Interactive Host Experience in the PowerShell notebook The PowerShell notebook provides a rich interactive experience through its host. The following are some examples. 1. _You can set the foreground and background colors for the output. The code below sets the foreground color to `Blue`, and you can see that the output is rendered in blue afterwards:_ | $host.UI.RawUI.ForegroundColor = [System.ConsoleColor]::Blue
$PSVersionTable | _____no_output_____ | MIT | samples/notebooks/powershell/Docs/Interactive-Host-Experience.ipynb | flcdrg/dotnet-interactive |
2. _You can write to the host with specified foreground and background colors_ | Write-Host "Something to think about ..." -ForegroundColor Blue -BackgroundColor Gray | _____no_output_____ | MIT | samples/notebooks/powershell/Docs/Interactive-Host-Experience.ipynb | flcdrg/dotnet-interactive |
3. _Warning, Verbose, and Debug streams are rendered with the expected color:_ | Write-Warning "Warning"
Write-Verbose "Verbose" -Verbose
Write-Debug "Debug" -Debug | _____no_output_____ | MIT | samples/notebooks/powershell/Docs/Interactive-Host-Experience.ipynb | flcdrg/dotnet-interactive |
4. _You can use `Write-Host -NoNewline` as expected:_ | Write-Host "Hello " -NoNewline -ForegroundColor Red
Write-Host "World!" -ForegroundColor Blue | _____no_output_____ | MIT | samples/notebooks/powershell/Docs/Interactive-Host-Experience.ipynb | flcdrg/dotnet-interactive |
5. _You can prompt the user for a credential:_ | $cred = Get-Credential
"$($cred.UserName), password received!" | _____no_output_____ | MIT | samples/notebooks/powershell/Docs/Interactive-Host-Experience.ipynb | flcdrg/dotnet-interactive |
6. _You can prompt the user for regular input:_ | Write-Verbose "Ask for name" -Verbose
$name = Read-Host -Prompt "What's your name? "
Write-Host "Greetings, $name!" -ForegroundColor DarkBlue | _____no_output_____ | MIT | samples/notebooks/powershell/Docs/Interactive-Host-Experience.ipynb | flcdrg/dotnet-interactive |
7. _You can prompt the user for a password:_ | Read-Host -Prompt "token? " -AsSecureString | _____no_output_____ | MIT | samples/notebooks/powershell/Docs/Interactive-Host-Experience.ipynb | flcdrg/dotnet-interactive |
8. _You can use the multi-selection when running commands:_ | Get-Command nonExist -ErrorAction Inquire | _____no_output_____ | MIT | samples/notebooks/powershell/Docs/Interactive-Host-Experience.ipynb | flcdrg/dotnet-interactive |
9. _You can use the mandatory parameter prompts:_ | Write-Output | ForEach-Object { "I received '$_'" } | _____no_output_____ | MIT | samples/notebooks/powershell/Docs/Interactive-Host-Experience.ipynb | flcdrg/dotnet-interactive |
10. _Of course, pipeline streaming works:_ | Get-Process | select -First 5 | % { start-sleep -Milliseconds 300; $_ } | _____no_output_____ | MIT | samples/notebooks/powershell/Docs/Interactive-Host-Experience.ipynb | flcdrg/dotnet-interactive |
11. _Progress bar rendering works as expected:_ | ## Demo the progress bar
For ($i=0; $i -le 100; $i++) {
Write-Progress -Id 1 -Activity "Parent work progress" -Status "Current Count: $i" -PercentComplete $i -CurrentOperation "Counting ..."
For ($j=0; $j -le 10; $j++) {
Start-Sleep -Milliseconds 5
Write-Progress -Parent 1 -Id 2 -Activity "Child work progress" -Status "Current Count: $j" -PercentComplete ($j*10) -CurrentOperation "Working ..."
}
if ($i -eq 50) {
Write-Verbose "working hard!!!" -Verbose
"Something to output"
}
} | _____no_output_____ | MIT | samples/notebooks/powershell/Docs/Interactive-Host-Experience.ipynb | flcdrg/dotnet-interactive |
Fundamentals of Data Analysis 2022.1 Assignment 01 - Regression: _Naval Propulsion Plants_ **Name:** Carolina Araújo Dias Dataset _Naval Propulsion Plants_: multiple regression (2 output variables); estimate each output variable separately: - 11934 samples; - 16 real-valued features; - 2 real-valued targets, of which only _GT Compressor decay state coefficient_ is estimated here (remove _GT Turbine decay state coefficient_). 01. Download the corresponding dataset **Link:** http://archive.ics.uci.edu/ml/datasets/condition+based+maintenance+of+naval+propulsion+plants After downloading, the data were saved to _"../data/naval_data.txt"_. Libraries | !python --version
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error | _____no_output_____ | MIT | trabalho01_regressao/trabalho01_regressao_naval_dataset.ipynb | diascarolina/fundamentos-analise-dados |
Helper Functions | def check_constant_columns(dataframe: pd.DataFrame) -> None:
"""Check whether the dataframe has constant columns
and print the name and values of those columns."""
for column in dataframe.columns:
if len(dataframe[column].unique()) == 1:
print(f'Coluna: "{column}", Valor constante: {dataframe[column].unique()}')
def add_ones_column(data_array: np.array) -> np.array:
"""Append a column of 1s to the end of an array."""
length = data_array.shape[0]
return np.c_[data_array, np.ones(length)]
def plot_data(x, y):
plt.rcParams["figure.figsize"] = (12, 8)
plt.scatter(x=x,
y=y,
alpha=0.1)
plt.axline((1, 1),
slope=1,
color='r')
rmse = round(mean_squared_error(x, y, squared=False), 5)
plt.title(f'Dados Reais vs. Dados Preditos - RMSE: {rmse}',
loc='left', fontsize=18)
plt.xlabel('Dados Reais', fontsize=12)
plt.ylabel('Dados Preditos', fontsize=12)
plt.show() | _____no_output_____ | MIT | trabalho01_regressao/trabalho01_regressao_naval_dataset.ipynb | diascarolina/fundamentos-analise-dados |
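The `add_ones_column` helper above relies on the `np.c_` column-stacking idiom; a minimal standalone check of that idiom with a toy array (the values are illustrative only):

```python
import numpy as np

# A toy 3x2 feature matrix; np.c_ appends a column of ones on the right,
# which later plays the role of the intercept term of the linear model.
X_toy = np.array([[1.0, 2.0],
                  [3.0, 4.0],
                  [5.0, 6.0]])
X_aug = np.c_[X_toy, np.ones(X_toy.shape[0])]
print(X_aug.shape)   # (3, 3)
print(X_aug[:, -1])  # [1. 1. 1.]
```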
02. Read the data | column_names = [
"Lever position",
"Ship speed",
"Gas Turbine shaft torque",
"GT rate of revolutions",
"Gas Generator rate of revolutions",
"Starboard Propeller Torque",
"Port Propeller Torque",
"Hight Pressure Turbine exit temperature",
"GT Compressor inlet air temperature",
"GT Compressor outlet air temperature",
"HP Turbine exit pressure",
"GT Compressor inlet air pressure",
"GT Compressor outlet air pressure",
"GT exhaust gas pressure",
"Turbine Injecton Control",
"Fuel flow",
"GT Compressor decay state coefficient",
"GT Turbine decay state coefficient"
]
# to read the data using read_csv
# raw_data = pd.read_csv("data/naval_data.txt", sep=" ", header=None, engine='python')
# to read the data using read_fwf
raw_data = pd.read_fwf("../data/naval_data.txt", header=None)
raw_data.columns = column_names
# check the data
raw_data.head() | _____no_output_____ | MIT | trabalho01_regressao/trabalho01_regressao_naval_dataset.ipynb | diascarolina/fundamentos-analise-dados |
Just by looking at the data we can already spot some problems with the columns. For example, the columns `Starboard Propeller Torque` and `Port Propeller Torque` appear to be identical. In addition, the columns `GT Compressor inlet air temperature` and `GT Compressor inlet air pressure` seem to hold a single constant value. Let us check whether that is true. | if raw_data['Starboard Propeller Torque'].equals(raw_data['Port Propeller Torque']):
print(f'As colunas "Starboard Propeller Torque" e "Port Propeller Torque" são iguais.')
else:
print(f'As colunas não são iguais.')
check_constant_columns(raw_data) | Coluna: "GT Compressor inlet air temperature", Valor constante: [288.]
Coluna: "GT Compressor inlet air pressure", Valor constante: [0.998]
| MIT | trabalho01_regressao/trabalho01_regressao_naval_dataset.ipynb | diascarolina/fundamentos-analise-dados |
Since we have identified these problematic columns, we will remove them next. | data = raw_data.copy()
data.drop(['GT Compressor inlet air temperature',
'GT Compressor inlet air pressure',
'Port Propeller Torque'],
axis=1,
inplace=True) | _____no_output_____ | MIT | trabalho01_regressao/trabalho01_regressao_naval_dataset.ipynb | diascarolina/fundamentos-analise-dados |
03. If necessary, split the data into training (70%) and test (30%) sets using the appropriate scikit-learn function. Four NumPy arrays must be created: X_train, y_train, X_test and y_test. | data.drop(["GT Turbine decay state coefficient"],
axis=1,
inplace=True)
print(f'Formato dos dados completos: {data.shape}')
X = data.drop(["GT Compressor decay state coefficient"],
axis=1)
y = data[["GT Compressor decay state coefficient"]]
print(f'Formato de X: {X.shape}')
print(f'Formato de y: {y.shape}')
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size=0.3,
random_state=12)
X_train = X_train.to_numpy()
X_test = X_test.to_numpy()
y_train = y_train.to_numpy()
y_test = y_test.to_numpy() | _____no_output_____ | MIT | trabalho01_regressao/trabalho01_regressao_naval_dataset.ipynb | diascarolina/fundamentos-analise-dados |
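The notebook uses scikit-learn's `train_test_split` for the 70/30 partition; the same idea can be sketched with plain NumPy (a hand-rolled illustration, not the method used above):

```python
import numpy as np

rng = np.random.default_rng(12)          # fixed seed for reproducibility
n_samples = 100
X_toy = rng.normal(size=(n_samples, 5))  # toy feature matrix
y_toy = rng.normal(size=n_samples)       # toy target

# Shuffle the sample indices, then cut at 70% for train / 30% for test
idx = rng.permutation(n_samples)
n_train = int(0.7 * n_samples)
X_tr, X_te = X_toy[idx[:n_train]], X_toy[idx[n_train:]]
y_tr, y_te = y_toy[idx[:n_train]], y_toy[idx[n_train:]]
print(X_tr.shape, X_te.shape)  # (70, 5) (30, 5)
```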
04. Append a column of 1s ([1 1 . . . 1]^T) as the last column of the training matrix X_train. Repeat the procedure for the test matrix, calling it X_test_2. [StackOverflow: How to add an extra column to a NumPy array](https://stackoverflow.com/questions/8486294/how-to-add-an-extra-column-to-a-numpy-array) | add_ones_column(X_train).shape | _____no_output_____ | MIT | trabalho01_regressao/trabalho01_regressao_naval_dataset.ipynb | diascarolina/fundamentos-analise-dados |
05. Compute the rank of the matrices X_train_2 and X_test_2. If necessary, adjust the matrices X_train_2 and X_test_2. Before removing the 3 problematic columns: | raw_data.shape
X_raw = raw_data.drop(["GT Compressor decay state coefficient",
"GT Turbine decay state coefficient"],
axis=1)
add_ones_column(X_raw).shape
np.linalg.matrix_rank(add_ones_column(X_raw)) | _____no_output_____ | MIT | trabalho01_regressao/trabalho01_regressao_naval_dataset.ipynb | diascarolina/fundamentos-analise-dados |
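The rank computed above drops when the design matrix contains a duplicated or constant column alongside the appended column of ones, which is exactly why those columns were removed; a toy illustration of that effect (all values are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))             # 3 independent random columns
dup = A[:, [0]]                          # duplicate of column 0
const = np.full((50, 1), 0.998)          # constant column (collinear with the ones column)
ones = np.ones((50, 1))

full = np.hstack([A, dup, const, ones])  # 6 columns in total
print(np.linalg.matrix_rank(full))       # 4: dup and const add no new directions

clean = np.hstack([A, ones])             # drop the redundant columns
print(np.linalg.matrix_rank(clean))      # 4: full column rank
```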
After removing the 3 problematic columns: | np.linalg.matrix_rank(add_ones_column(X_train))
np.linalg.matrix_rank(add_ones_column(X_test))
add_ones_column(X_train).shape | _____no_output_____ | MIT | trabalho01_regressao/trabalho01_regressao_naval_dataset.ipynb | diascarolina/fundamentos-analise-dados |
06. Compute the QR decomposition of the training matrix, X_train_2 = QR, using the appropriate NumPy function. | Q, R = np.linalg.qr(add_ones_column(X_train)) | _____no_output_____ | MIT | trabalho01_regressao/trabalho01_regressao_naval_dataset.ipynb | diascarolina/fundamentos-analise-dados |
Question 04 Verify numerically that $Q^TQ = I$ for this dataset. *A.* We multiply $Q^T$ by $Q$, store the result in a matrix M, and compare that matrix with an identity matrix of the same dimension. The function `np.allclose()` compares the values while accounting for floating-point round-off. | M = np.matmul(Q.T, Q)
np.allclose(M, np.eye(M.shape[0])) | _____no_output_____ | MIT | trabalho01_regressao/trabalho01_regressao_naval_dataset.ipynb | diascarolina/fundamentos-analise-dados |
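The same check holds for any thin QR factorization, not only for this dataset; a standalone sketch with a random matrix:

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.normal(size=(200, 8))        # tall random matrix, full column rank
Q, R = np.linalg.qr(A)               # thin QR: Q is 200x8, R is 8x8

# The columns of Q are orthonormal, so Q^T Q is the 8x8 identity
M = Q.T @ Q
print(np.allclose(M, np.eye(8)))     # True (up to floating-point round-off)
```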
07. Compute the coefficient vector $\mathbf{\tilde{x}}$ of Equation (1) using NumPy's `linalg.solve()` function. | coefs_lineares = np.linalg.solve(R, np.dot(Q.T, y_train)) | _____no_output_____ | MIT | trabalho01_regressao/trabalho01_regressao_naval_dataset.ipynb | diascarolina/fundamentos-analise-dados |
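Solving $R\,\tilde{x} = Q^T y$ as above yields the least-squares solution; it can be cross-checked against `np.linalg.lstsq` on toy data (an illustrative sketch, not part of the assignment):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(100, 4))                  # toy design matrix
x_true = np.array([1.0, -2.0, 0.5, 3.0])
y = A @ x_true + 0.01 * rng.normal(size=100)   # noisy observations

# Least squares via thin QR: minimize ||Ax - y|| by solving R x = Q^T y
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ y)

# Reference solution from NumPy's dedicated least-squares routine
x_ls = np.linalg.lstsq(A, y, rcond=None)[0]
print(np.allclose(x_qr, x_ls))                 # True
```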
08. Compute the model estimates for the training and test values using the coefficient vector $\mathbf{\tilde{x}}$ computed in the previous item. Training | y_train_preds = []
for i in range(len(X_train)):
y_train_preds.append(np.dot(np.squeeze(coefs_lineares), add_ones_column(X_train)[i])) | _____no_output_____ | MIT | trabalho01_regressao/trabalho01_regressao_naval_dataset.ipynb | diascarolina/fundamentos-analise-dados |
Test | y_test_preds = []
for i in range(len(X_test)):
y_test_preds.append(np.dot(np.squeeze(coefs_lineares), add_ones_column(X_test)[i])) | _____no_output_____ | MIT | trabalho01_regressao/trabalho01_regressao_naval_dataset.ipynb | diascarolina/fundamentos-analise-dados |
09. Generate a plot with the real training values on the x-axis and the estimated training values on the y-axis. Add to the plot a dotted line at +45° from the x-axis. Training | plot_data(x=y_train, y=y_train_preds) | _____no_output_____ | MIT | trabalho01_regressao/trabalho01_regressao_naval_dataset.ipynb | diascarolina/fundamentos-analise-dados |
Test | plot_data(x=y_test, y=y_test_preds) | _____no_output_____ | MIT | trabalho01_regressao/trabalho01_regressao_naval_dataset.ipynb | diascarolina/fundamentos-analise-dados |
10. Compute the **root mean squared error** (RMSE) for the training and test data. Training | mean_squared_error(y_train, y_train_preds, squared=False) | _____no_output_____ | MIT | trabalho01_regressao/trabalho01_regressao_naval_dataset.ipynb | diascarolina/fundamentos-analise-dados |
Test | mean_squared_error(y_test, y_test_preds, squared=False) | _____no_output_____ | MIT | trabalho01_regressao/trabalho01_regressao_naval_dataset.ipynb | diascarolina/fundamentos-analise-dados |
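The RMSE reported by `mean_squared_error(..., squared=False)` is just the square root of the mean squared residual; a NumPy-only sketch with toy values:

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])

# RMSE = sqrt(mean((y_true - y_pred)^2))
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
print(round(rmse, 6))  # 0.158114
```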
Repeat the whole process above for the other target, `GT Turbine decay state coefficient` | data = raw_data.copy()
data.drop(['GT Compressor inlet air temperature',
'GT Compressor inlet air pressure',
'Port Propeller Torque'],
axis=1,
inplace=True)
data.drop(["GT Compressor decay state coefficient"],
axis=1,
inplace=True)
X = data.drop(["GT Turbine decay state coefficient"],
axis=1)
y = data[["GT Turbine decay state coefficient"]]
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size=0.3,
random_state=12)
X_train = X_train.to_numpy()
X_test = X_test.to_numpy()
y_train = y_train.to_numpy()
y_test = y_test.to_numpy()
Q, R = np.linalg.qr(add_ones_column(X_train))
coefs_lineares = np.linalg.solve(R, np.dot(Q.T, y_train))
y_train_preds = []
for i in range(len(X_train)):
y_train_preds.append(np.dot(np.squeeze(coefs_lineares), add_ones_column(X_train)[i]))
y_test_preds = []
for i in range(len(X_test)):
y_test_preds.append(np.dot(np.squeeze(coefs_lineares), add_ones_column(X_test)[i])) | _____no_output_____ | MIT | trabalho01_regressao/trabalho01_regressao_naval_dataset.ipynb | diascarolina/fundamentos-analise-dados |
Training | mean_squared_error(y_train, y_train_preds, squared=False)
plot_data(x=y_train, y=y_train_preds) | _____no_output_____ | MIT | trabalho01_regressao/trabalho01_regressao_naval_dataset.ipynb | diascarolina/fundamentos-analise-dados |
Test | mean_squared_error(y_test, y_test_preds, squared=False)
plot_data(x=y_test, y=y_test_preds) | _____no_output_____ | MIT | trabalho01_regressao/trabalho01_regressao_naval_dataset.ipynb | diascarolina/fundamentos-analise-dados |
Tutorial HowToAdaptiveOptics This report provides a tutorial on using the code developed to compute the PSIM for the ELT SCAO systems. The code is object-oriented and its architecture is largely inspired by the [OOMAO simulator](https://github.com/cmcorreia/LAM-Public/tree/master/_libOomao). Modules required The code is written in Python 3 and requires the following modules: * **numba** => required by aotools * **joblib** => parallel computing * **scikit-image** => 2D interpolations * **numexpr** => memory-optimized simple operations * **astropy** => handling of FITS files To use the code you need to install the listed modules with the following commands in a terminal: *pip install aotools* *pip install numba* *pip install joblib* *pip install scikit-image* *pip install numexpr* *pip install astropy* Import Modules | # -*- coding: utf-8 -*-
"""
Created on Wed Oct 21 10:51:32 2020
@author: cheritie
"""
# commom modules
import matplotlib.pyplot as plt
import numpy as np
import time
# adding AO_Module to the path
import __load__psim
__load__psim.load_psim()
# loading AO modules
from AO_modules.Atmosphere import Atmosphere
from AO_modules.Pyramid import Pyramid
from AO_modules.DeformableMirror import DeformableMirror
from AO_modules.MisRegistration import MisRegistration
from AO_modules.Telescope import Telescope
from AO_modules.Source import Source
# calibration modules
from AO_modules.calibration.compute_KL_modal_basis import compute_M2C
from AO_modules.calibration.ao_calibration import ao_calibration
# display modules
from AO_modules.tools.displayTools import displayMap
| Looking for AO_Modules...
['../AO_modules']
AO_Modules found! Loading the main modules:
| MIT | tutorials/tutorial_howToAdaptiveOptics.ipynb | joao-aveiro/OOPAO |
Read Parameter File | #import parameter file (dictionary)
from parameterFile_VLT_I_Band import initializeParameterFile
param = initializeParameterFile()
# the list of the keys contained in the dictionary can be printed using the following lines
# for key, value in param.items() :
# print (key, value) | Reading/Writting calibration data from /Disk3/cheritier/psim/data_calibration/
Writting output data in /diskb/cheritier/psim/data_cl
Creation of the directory /diskb/cheritier/psim/data_cl failed:
Directory already exists!
| MIT | tutorials/tutorial_howToAdaptiveOptics.ipynb | joao-aveiro/OOPAO |
Telescope Object | # create the Telescope object
tel = Telescope(resolution = param['resolution'],\
diameter = param['diameter'],\
samplingTime = param['samplingTime'],\
centralObstruction = param['centralObstruction']) | NGS flux updated!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% SOURCE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Wavelength 0.55 [microns]
Optical Band V
Magnitude -0.0
Flux 8967391304.0 [photons/m2/s]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% SOURCE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Wavelength 0.55 [microns]
Optical Band V
Magnitude -0.0
Flux 8967391304.0 [photons/m2/s]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% TELESCOPE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Diameter 8 [m]
Resolution 80 [pix]
Pixel Size 0.1 [m]
Surface 50.0 [m2]
Central Obstruction 0 [% of diameter]
Number of pixel in the pupil 5024 [pix]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
| MIT | tutorials/tutorial_howToAdaptiveOptics.ipynb | joao-aveiro/OOPAO |
The main information contained in the telescope object is the following: * tel.pupil : the pupil of the telescope as a 2D mask * tel.src : the source object attached to the telescope, which contains the information related to the wavelength, flux and phase. The default wavelength is the V band with a magnitude of 0. * tel.OPD : the telescope OPD corresponding to tel.src.phase All the properties of an object can be displayed using the .show() method: | tel.show() | telescope:
D: 8
OPD: (80, 80)
centralObstruction: 0
fov: 0
index_pixel_petals: None
isPaired: False
isPetalFree: False
pixelArea: 5024
pixelSize: 0.1
pupil: (80, 80)
pupilLogical: (1, 5024)
pupilReflectivity: (80, 80)
resolution: 80
samplingTime: 0.001
src: source object
tag: telescope
| MIT | tutorials/tutorial_howToAdaptiveOptics.ipynb | joao-aveiro/OOPAO |
We can show the 2D map corresponding to the pupil or to the OPD: | plt.figure()
plt.subplot(1,2,1)
plt.imshow(tel.pupil.T)
plt.title('Telescope Pupil: %.0f px in the pupil' %tel.pixelArea)
plt.subplot(1,2,2)
plt.imshow(tel.OPD.T)
plt.title('Telescope OPD [m]')
plt.colorbar() | _____no_output_____ | MIT | tutorials/tutorial_howToAdaptiveOptics.ipynb | joao-aveiro/OOPAO |
And we can display the properties of the child class tel.src, which corresponds to the default source object attached to the telescope: | tel.src.show() | source:
bandwidth: 9e-08
magnitude: -0.0
nPhoton: 8967391304.347826
optBand: V
phase: (80, 80)
tag: source
wavelength: 5.5e-07
zeroPoint: 8967391304.347826
| MIT | tutorials/tutorial_howToAdaptiveOptics.ipynb | joao-aveiro/OOPAO |
Source Object The Source object gives access to the properties related to the flux and wavelength of the object. We consider only on-axis objects to start with. | ngs=Source(optBand = param['opticalBand'],\
magnitude = param['magnitude'])
print('NGS Object built!') | NGS flux updated!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% SOURCE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Wavelength 0.79 [microns]
Optical Band I
Magnitude 8.0
Flux 4629307.0 [photons/m2/s]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% SOURCE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Wavelength 0.79 [microns]
Optical Band I
Magnitude 8.0
Flux 4629307.0 [photons/m2/s]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
NGS Object built!
| MIT | tutorials/tutorial_howToAdaptiveOptics.ipynb | joao-aveiro/OOPAO |
The NGS object has to be attached to a telescope object using the `*` operator. This operation sets the telescope property tel.src to the ngs object considered. | ngs*tel | _____no_output_____ | MIT | tutorials/tutorial_howToAdaptiveOptics.ipynb | joao-aveiro/OOPAO |
The ngs object is now attached to the telescope. This means that the tel.src object now has a **phase** and a **fluxMap** property. If we display the properties of ngs and tel.src, they are the same: | # properties of ngs
ngs.show()
# properties of tel.src
tel.src.show() | source:
bandwidth: 1.5e-07
fluxMap: (80, 80)
magnitude: 8.0
nPhoton: 4629306.603523155
optBand: I
phase: (80, 80)
tag: source
var: 8.673617379884035e-19
wavelength: 7.9e-07
zeroPoint: 7336956521.73913
source:
bandwidth: 1.5e-07
fluxMap: (80, 80)
magnitude: 8.0
nPhoton: 4629306.603523155
optBand: I
phase: (80, 80)
tag: source
var: 8.673617379884035e-19
wavelength: 7.9e-07
zeroPoint: 7336956521.73913
| MIT | tutorials/tutorial_howToAdaptiveOptics.ipynb | joao-aveiro/OOPAO |
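The flux values printed above are consistent with the standard zero-point scaling. A minimal sketch, assuming `nPhoton = zeroPoint * 10**(-0.4 * magnitude)`; the numbers below are taken from the printout, while the formula itself is an assumption inferred from them, not a quote of the OOPAO source:

```python
# Zero point and magnitude taken from the printed source properties above
zero_point = 7336956521.73913   # [photons/m2/s] at magnitude 0 (I band)
magnitude = 8.0

# Assumed scaling: every 2.5 magnitudes divides the flux by 10
n_photon = zero_point * 10 ** (-0.4 * magnitude)
print(n_photon)  # ~4629306.6, matching the printed nPhoton
```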
We can compute and display the PSF corresponding to the telescope OPD and Source object attached to the telescope. |
zeroPaddingFactor = 8
tel.computePSF(zeroPaddingFactor = zeroPaddingFactor)
PSF_normalized = tel.PSF/tel.PSF.max()
nPix = zeroPaddingFactor*tel.resolution//3
plt.figure()
plt.imshow(np.log(np.abs(PSF_normalized[nPix:-nPix,nPix:-nPix])))
plt.clim([-13,0])
plt.colorbar()
| _____no_output_____ | MIT | tutorials/tutorial_howToAdaptiveOptics.ipynb | joao-aveiro/OOPAO |
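Under the hood, a PSF like the one above comes from a zero-padded Fourier transform of the complex pupil. A minimal NumPy sketch of that idea, independent of the OOPAO API (a circular pupil and a flat phase are assumed for simplicity):

```python
import numpy as np

res, pad = 80, 4                       # pupil resolution and zero-padding factor
y, x = np.indices((res, res)) - res / 2 + 0.5
pupil = (np.hypot(x, y) <= res / 2).astype(float)   # circular aperture
phase = np.zeros((res, res))                        # flat wavefront (diffraction limit)

# Embed the complex amplitude in a larger array and Fourier-transform it;
# the zero padding sets the pixel sampling of the resulting PSF
field = np.zeros((res * pad, res * pad), dtype=complex)
field[:res, :res] = pupil * np.exp(1j * phase)
psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2

# For a flat phase the peak sits at the (shifted) array centre
peak = np.unravel_index(psf.argmax(), psf.shape)
print(peak)  # (160, 160) for a 320x320 array
```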
Atmosphere Object The atmosphere object is created mainly from the telescope properties (diameter, pupil, samplingTime) and the *r0* and *L0* parameters. It is possible to generate multiple layers; each one is a child class of the atmosphere object with its own set of parameters (windSpeed, Cn^2, windDirection, altitude). | atm=Atmosphere(telescope = tel,\
r0 = param['r0'],\
L0 = param['L0'],\
windSpeed = param['windSpeed'],\
fractionalR0 = param['fractionnalR0'],\
windDirection = param['windDirection'],\
altitude = param['altitude'])
print('Atmosphere Object built!') | Atmosphere Object built!
| MIT | tutorials/tutorial_howToAdaptiveOptics.ipynb | joao-aveiro/OOPAO |
The atmosphere object has to be initialized using: | # initialize atmosphere
atm.initializeAtmosphere(tel)
print('Done!') | Creation of layer1/5 ...
-> Computing the initial phase screen...
initial phase screen : 0.023934602737426758 s
ZZt.. : 0.7004520893096924 s
ZXt.. : 0.3715839385986328 s
XXt.. : 0.2279503345489502 s
Done!
Creation of layer2/5 ...
-> Computing the initial phase screen...
initial phase screen : 0.036902666091918945 s
ZZt.. : 1.2596936225891113 s
ZXt.. : 0.6154317855834961 s
XXt.. : 0.26628828048706055 s
Done!
Creation of layer3/5 ...
-> Computing the initial phase screen...
initial phase screen : 0.031950950622558594 s
ZZt.. : 0.9521989822387695 s
ZXt.. : 0.4208860397338867 s
XXt.. : 0.20944452285766602 s
Done!
Creation of layer4/5 ...
-> Computing the initial phase screen...
initial phase screen : 0.026932239532470703 s
ZZt.. : 0.998450517654419 s
ZXt.. : 0.5060114860534668 s
XXt.. : 0.22093653678894043 s
Done!
Creation of layer5/5 ...
-> Computing the initial phase screen...
initial phase screen : 0.028923749923706055 s
ZZt.. : 0.9069092273712158 s
ZXt.. : 0.4238872528076172 s
XXt.. : 0.19648003578186035 s
Done!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% ATMOSPHERE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
r0 0.13 [m]
L0 30 [m]
Seeing(V) 0.79 ["]
------------------------------------------------------------------------
Layer Direction Speed Altitude
1 0 [deg] 10 [m/s] 100 [m]
------------------------------------------------------------------------
2 72 [deg] 10 [m/s] 100 [m]
------------------------------------------------------------------------
3 144 [deg] 10 [m/s] 100 [m]
------------------------------------------------------------------------
4 216 [deg] 10 [m/s] 100 [m]
------------------------------------------------------------------------
5 288 [deg] 10 [m/s] 100 [m]
------------------------------------------------------------------------
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Done!
| MIT | tutorials/tutorial_howToAdaptiveOptics.ipynb | joao-aveiro/OOPAO |
Similarly to the Source object, the atmosphere object can be paired with the telescope using the **+** operator. In that case, when the atmosphere OPD is updated, the telescope OPD is automatically updated. | tel+atm
print(tel.isPaired) | Telescope and Atmosphere combined!
True
| MIT | tutorials/tutorial_howToAdaptiveOptics.ipynb | joao-aveiro/OOPAO |
We can display the properties of the telescope object: | plt.figure()
plt.imshow(tel.OPD.T)
plt.title('Telescope OPD [m]')
plt.colorbar()
plt.figure()
plt.imshow(tel.src.phase.T)
plt.colorbar()
plt.title('NGS Phase [rad]') | _____no_output_____ | MIT | tutorials/tutorial_howToAdaptiveOptics.ipynb | joao-aveiro/OOPAO |
The atmosphere and the telescope can be separated using the **-** operator. This brings the system back to a diffraction-limited case with a flat OPD. | tel-atm
print(tel.isPaired) | Telescope and Atmosphere separated!
False
| MIT | tutorials/tutorial_howToAdaptiveOptics.ipynb | joao-aveiro/OOPAO |
Deformable Mirror Object The deformable mirror is mainly characterized by its influence functions. They can be user-defined and loaded into the model, but the default case is a Cartesian DM with Gaussian influence functions normalized to 1. The DM is always defined in the pupil plane. | dm=DeformableMirror(telescope = tel,\
nSubap = param['nSubaperture'],\
mechCoupling = param['mechanicalCoupling'])
print('Done!') | No coordinates loaded.. taking the cartesian geometry as a default
Generating a Deformable Mirror:
Computing the 2D zonal modes...
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% DEFORMABLE MIRROR %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Controlled Actuators 357
M4 influence functions No
Pixel Size 0.1 [m]
Pitch 0.4 [m]
Mechanical Coupling 0.45 [m]
Rotation: 0 deg -- shift X: 0 m -- shift Y: 0 m -- Anamorphosis Angle: 0 deg -- Radial Scaling: 0 -- Tangential Scaling: 0
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Done!
| MIT | tutorials/tutorial_howToAdaptiveOptics.ipynb | joao-aveiro/OOPAO |
We can display the cube of the influence functions to show the positions of the actuators. | cube_IF = np.reshape(np.sum(dm.modes**3, axis =1),[tel.resolution,tel.resolution])
plt.figure()
plt.imshow(cube_IF.T) | _____no_output_____ | MIT | tutorials/tutorial_howToAdaptiveOptics.ipynb | joao-aveiro/OOPAO |
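The Gaussian influence functions mentioned above are characterized by the mechanical coupling: the response of one actuator measured at its neighbour's position. A standalone sketch using the pitch (0.4 m) and coupling (0.45) printed for this DM; the Gaussian parametrization below is an illustrative assumption, not the exact OOPAO implementation:

```python
import numpy as np

pitch = 0.4          # actuator spacing [m], as printed for this DM
coupling = 0.45      # mechanical coupling, as printed for this DM

# Choose the Gaussian width so that IF(pitch) = coupling, with IF(0) = 1
sigma2 = -pitch ** 2 / (2.0 * np.log(coupling))
r = np.linspace(0, 2 * pitch, 201)
influence = np.exp(-r ** 2 / (2.0 * sigma2))

print(influence[0])    # 1.0 at the actuator location
print(influence[100])  # ~0.45 one pitch away (r[100] == pitch)
```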
Light propagation The light can be propagated through the DM using the `*` operator. To update the DM surface, the property **dm.coefs** must be updated to set the new values of the DM coefficients. Typically, using a random command vector, we can propagate the light through the DM (the light is reflected, hence the sign change and the factor of 2 in the OPD): | tel-atm
dm.coefs = (np.random.rand(dm.nValidAct)-0.5)*100e-9
tel*dm
plt.figure()
plt.subplot(121)
plt.imshow(dm.OPD)
plt.title('DM OPD [m]')
plt.colorbar()
plt.subplot(122)
plt.imshow(tel.OPD)
plt.colorbar()
plt.title('Telescope OPD [m]')
plt.figure()
plt.imshow(atm.OPD_no_pupil)
plt.colorbar()
plt.title('Atmosphere OPD [m]')
| Telescope and Atmosphere separated!
| MIT | tutorials/tutorial_howToAdaptiveOptics.ipynb | joao-aveiro/OOPAO |
Mis-registrations The DM/WFS mis-registrations are applied directly in the DM space, by applying the transformations to the DM influence functions. First we create a **MisRegistration object** initialized to 0. We can then update the mis-registration values and pass the object to the DM model: | misReg = MisRegistration()
misReg.rotationAngle = 3
misReg.shiftX = 0.3*param['diameter']/param['nSubaperture']
misReg.shiftY = 0.25*param['diameter']/param['nSubaperture']
dm_misReg = DeformableMirror(telescope = tel,\
nSubap = param['nSubaperture'],\
mechCoupling = param['mechanicalCoupling'],\
misReg = misReg)
print('Done!')
plt.figure()
plt.plot(dm.coordinates[:,0],dm.coordinates[:,1],'.')
plt.plot(dm_misReg.coordinates[:,0],dm_misReg.coordinates[:,1],'.')
plt.axis('square')
plt.legend(['initial DM','mis-registered DM'])
| No coordinates loaded.. taking the cartesian geometry as a default
Generating a Deformable Mirror:
Computing the 2D zonal modes...
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% DEFORMABLE MIRROR %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Controlled Actuators 357
M4 influence functions No
Pixel Size 0.1 [m]
Pitch 0.4 [m]
Mechanical Coupling 0.45 [m]
Rotation: 3 deg -- shift X: 0.12 m -- shift Y: 0.1 m -- Anamorphosis Angle: 0 deg -- Radial Scaling: 0 -- Tangential Scaling: 0
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Done!
| MIT | tutorials/tutorial_howToAdaptiveOptics.ipynb | joao-aveiro/OOPAO |
Pyramid Object The pyramid object consists mainly of the PWFS mask that filters the electro-magnetic field. Several parameters allow the user to tune the pyramid model: * Centering of the mask and of the FFT on 1 or 4 pixels * Modulation radius in λ/D. By default the number of modulation points ensures one point every λ/D on the circular trajectory, but this sampling can be modified by the user. The number of modulation points is a multiple of 4 to ensure that each quadrant has the same number of modulation points. * The modulation value for the calibration and for the selection of the valid pixels * PWFS pupil separation, either for a perfect pyramid with a single value or for an imperfect pyramid with 8 values (shifts X and Y for each PWFS pupil). * The type of post-processing of the PWFS signals (slopes maps, full frame, etc.). To be independent of this choice, the pyramid signals are named “wfs.pyramidSignal_2D” for either the slopes maps or the camera frame, and “wfs.pyramidSignal” for the signal reduced to the valid pixels considered. * The intensity threshold to select the valid pixels Some optional features can be user-defined: * Zero-padding value * Number of pixels on the edge of the pyramid pupils * The units of the WFS signals can be calibrated using a ramp of Tip/Tilt In addition, the Pyramid object has a Detector object as a child class that provides the pyramid signals. It can be accessed through **wfs.cam** | # make sure tel and atm are separated to initialize the PWFS
tel-atm
# create the Pyramid Object
wfs = Pyramid(nSubap = param['nSubaperture'],\
telescope = tel,\
modulation = param['modulation'],\
lightRatio = param['lightThreshold'],\
pupilSeparationRatio = param['pupilSeparationRatio'],\
calibModulation = param['calibrationModulation'],\
psfCentering = param['psfCentering'],\
edgePixel = param['edgePixel'],\
unitCalibration = param['unitCalibration'],\
extraModulationFactor = param['extraModulationFactor'],\
postProcessing = param['postProcessing'])
| Telescope and Atmosphere separated!
Pyramid Mask initialization...
Done!
Selection of the valid pixels...
The valid pixel are selected on flux considerations
Done!
Acquisition of the reference slopes and units calibration...
WFS calibrated!
Done!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% PYRAMID WFS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Pupils Diameter 20 [pixels]
Pupils Separation 3.999999999999999 [pixels]
Pixel Size 0.4 [m]
TT Modulation 3 [lamda/D]
PSF Core Sampling 1 [pixel(s)]
Signal Post-Processing slopesMaps
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
| MIT | tutorials/tutorial_howToAdaptiveOptics.ipynb | joao-aveiro/OOPAO |
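The modulation sampling described in the Pyramid section above (roughly one point per λ/D along the circular trajectory, with the count rounded to a multiple of 4 so each quadrant gets the same number of points) can be sketched as follows. This is an illustration of the stated rule, not the exact OOPAO code:

```python
import numpy as np

def n_modulation_points(radius_ld):
    """Number of modulation points for a tip/tilt modulation radius in lambda/D:
    about one point per lambda/D on the circle, rounded up to a multiple of 4."""
    circumference = 2.0 * np.pi * radius_ld
    return int(4 * np.ceil(circumference / 4.0))

for r_mod in (3, 5):
    n = n_modulation_points(r_mod)
    print(r_mod, n, n % 4)  # the count is always a multiple of 4
```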
The light can be propagated to the WFS through the different objects using the `*` operator: | tel*wfs
plt.figure()
plt.imshow(wfs.cam.frame)
plt.colorbar()
| _____no_output_____ | MIT | tutorials/tutorial_howToAdaptiveOptics.ipynb | joao-aveiro/OOPAO |
We can display the PWFS signals that correspond to a random actuation of the DM: | dm.coefs = (np.random.rand(dm.nValidAct)-0.5)*100e-9
tel*dm*wfs
plt.figure()
plt.imshow(wfs.pyramidSignal_2D)
plt.colorbar() | _____no_output_____ | MIT | tutorials/tutorial_howToAdaptiveOptics.ipynb | joao-aveiro/OOPAO |
Modal Basis

In this tutorial, we compute the modes-to-commands matrix (M2C) using the code provided by C. Verinaud. It corresponds to a KL modal basis orthogonalized in the DM space. | # compute the modal basis
foldername_M2C = None # name of the folder to save the M2C matrix, if None a default name is used
filename_M2C = None # name of the filename, if None a default name is used
M2C_KL = compute_M2C(telescope = tel,\
atmosphere = atm,\
deformableMirror = dm,\
param = param,\
nameFolder = None,\
nameFile = None,\
remove_piston = True,\
HHtName = None,\
baseName = None ,\
mem_available = 8.1e9,\
minimF = False,\
nmo = 300,\
ortho_spm = True,\
SZ = int(2*tel.OPD.shape[0]),\
nZer = 3,\
NDIVL = 1)
| Creation of the directory /Disk3/cheritier/psim/data_calibration/ failed:
Directory already exists!
COMPUTING TEL*DM...
PREPARING IF_2D...
Computing Specific Modes ...
COMPUTING VON KARMAN 2D PSD...
COMPUTING COV MAT HHt...
TIME ELAPSED: 3 sec. COMPLETED: 100 %
SERIALIZING IFs...
SERIALIZING Specific Modes...
COMPUTING IFs CROSS PRODUCT...
NMAX = 300
RMS opd error = [[1.16127888e-08 1.75899251e-08 1.75899251e-08]]
RMS Positions = [[7.26577110e-08 3.29310827e-07 3.29310827e-07]]
MAX Positions = [[4.52339280e-07 8.84512596e-07 8.84512596e-07]]
CHECKING ORTHONORMALITY OF SPECIFIC MODES...
Orthonormality error for SpM = 3.3306690738754696e-16
BUILDING SEED BASIS ...
Orthonormality error for 304 modes of the Seed Basis = 2.2426505097428162e-14
KL WITH DOUBLE DIAGONALISATION: COVARIANCE ERROR = 5.993844598436698e-14
Orthonormality error for 300 modes of the KL Basis = 2.020605904817785e-14
Piston removed from the modal basis!
| MIT | tutorials/tutorial_howToAdaptiveOptics.ipynb | joao-aveiro/OOPAO |
Interaction Matrix

The interaction matrix can be computed using the M2C matrix and the function interactionMatrix. The output is stored as a class that contains all the information about the inversion (SVD), such as eigenValues, reconstructor, etc. It is possible to add a **phaseOffset** to the interactionMatrix measurement. | #%% to manually measure the interaction matrix
#
## amplitude of the modes in m
#stroke=1e-9
## Modal Interaction Matrix
#M2C = M2C[:,:param['nModes']]
#from AO_modules.calibration.InteractionMatrix import interactionMatrix
#
#calib = interactionMatrix(ngs = ngs,\
# atm = atm,\
# tel = tel,\
# dm = dm,\
# wfs = wfs,\
# M2C = M2C,\
# stroke = stroke,\
# phaseOffset = 0,\
# nMeasurements = 100,\
# noise = False)
#
#plt.figure()
#plt.plot(np.std(calib.D,axis=0))
#plt.xlabel('Mode Number')
#plt.ylabel('WFS slopes STD')
#plt.ylabel('Optical Gain')
param['nModes'] = 300
ao_calib = ao_calibration(param = param,\
ngs = ngs,\
tel = tel,\
atm = atm,\
dm = dm,\
wfs = wfs,\
nameFolderIntMat = None,\
nameIntMat = None,\
nameFolderBasis = None,\
nameBasis = None,\
nMeasurements = 100)
| Creation of the directory /Disk3/cheritier/psim/data_calibration/ failed:
Directory already exists!
Loading the KL Modal Basis from: /Disk3/cheritier/psim/data_calibration/M2C_80_res
Computing the pseudo-inverse of the modal basis...
Diagonality criteria: 1.7785772854495008e-13 -- using the fast computation
Creation of the directory /Disk3/cheritier/psim/data_calibration/VLT_I_band_20x20/ failed:
Directory already exists!
Loading Interaction matrix zonal_interaction_matrix_80_res_3_mod_slopesMaps_psfCentering_False...
Done!
No Modal Gains found. All gains set to 1
| MIT | tutorials/tutorial_howToAdaptiveOptics.ipynb | joao-aveiro/OOPAO |
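Under the hood, this kind of calibration amounts to a push-pull measurement per mode followed by an SVD-based pseudo-inverse; a generic sketch (the `measure_signal` callable is hypothetical and stands in for setting `dm.coefs` from a column of the M2C and propagating `ngs*tel*dm*wfs`, as in the commented cell above):

```python
import numpy as np

def build_interaction_matrix(measure_signal, n_modes, stroke=1e-9):
    """measure_signal(mode_index, amplitude) -> 1-D WFS signal (hypothetical).
    Returns the interaction matrix D and its SVD-based pseudo-inverse."""
    columns = []
    for i in range(n_modes):
        push = measure_signal(i, +stroke)
        pull = measure_signal(i, -stroke)
        columns.append((push - pull) / (2 * stroke))  # linearized response
    D = np.stack(columns, axis=1)
    return D, np.linalg.pinv(D)
```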
Display Modal Basis |
# project the mode on the DM
dm.coefs = ao_calib.M2C[:,:100]
tel*dm
#
# show the modes projected on the dm, cropped by the pupil and normalized by their maximum value
displayMap(tel.OPD,norma=True)
plt.title('Basis projected on the DM')
KL_dm = np.reshape(tel.OPD,[tel.resolution**2,tel.OPD.shape[2]])
covMat = (KL_dm.T @ KL_dm) / tel.resolution**2
plt.figure()
plt.imshow(covMat)
plt.title('Orthogonality')
plt.show()
plt.figure()
plt.plot(np.round(np.std(np.squeeze(KL_dm[tel.pupilLogical,:]),axis = 0),5))
plt.title('KL mode normalization projected on the DM')
plt.show()
| _____no_output_____ | MIT | tutorials/tutorial_howToAdaptiveOptics.ipynb | joao-aveiro/OOPAO |
Closed Loop

Here is code to run a closed-loop simulation using the PSIM code: |
# These are the calibration data used to close the loop
calib_CL = ao_calib.calib
M2C_CL = ao_calib.M2C
param['nLoop'] = 100
plt.close('all')
# combine telescope with atmosphere
tel+atm
# initialize DM commands
dm.coefs=0
ngs*tel*dm*wfs
plt.ion()
# setup the display
fig = plt.figure(79)
ax1 = plt.subplot(2,3,1)
im_atm = ax1.imshow(tel.src.phase)
plt.colorbar(im_atm)
plt.title('Turbulence phase [rad]')
ax2 = plt.subplot(2,3,2)
im_dm = ax2.imshow(dm.OPD*tel.pupil)
plt.colorbar(im_dm)
plt.title('DM phase [rad]')
tel.computePSF(zeroPaddingFactor=6)
ax4 = plt.subplot(2,3,3)
im_PSF_OL = ax4.imshow(tel.PSF_trunc)
plt.colorbar(im_PSF_OL)
plt.title('OL PSF')
ax3 = plt.subplot(2,3,5)
im_residual = ax3.imshow(tel.src.phase)
plt.colorbar(im_residual)
plt.title('Residual phase [rad]')
ax5 = plt.subplot(2,3,4)
im_wfs_CL = ax5.imshow(wfs.cam.frame)
plt.colorbar(im_wfs_CL)
plt.title('Pyramid Frame CL')
ax6 = plt.subplot(2,3,6)
im_PSF = ax6.imshow(tel.PSF_trunc)
plt.colorbar(im_PSF)
plt.title('CL PSF')
plt.show()
# allocate memory to save data
SR = np.zeros(param['nLoop'])
total = np.zeros(param['nLoop'])
residual = np.zeros(param['nLoop'])
wfsSignal = np.zeros(wfs.nSignal)
# loop parameters
gainCL = 0.6
wfs.cam.photonNoise = True
display = False
for i in range(param['nLoop']):
a=time.time()
# update phase screens => overwrite tel.OPD and consequently tel.src.phase
atm.update()
# save phase variance
total[i]=np.std(tel.OPD[np.where(tel.pupil>0)])*1e9
# save turbulent phase
turbPhase = tel.src.phase
if display == True:
# compute the OL PSF and update the display
tel.computePSF(zeroPaddingFactor=6)
im_PSF_OL.set_data(np.log(tel.PSF_trunc/tel.PSF_trunc.max()))
im_PSF_OL.set_clim(vmin=-3,vmax=0)
# propagate to the WFS with the CL commands applied
tel*dm*wfs
# save the DM OPD shape
dmOPD=tel.pupil*dm.OPD*2*np.pi/ngs.wavelength
dm.coefs=dm.coefs-gainCL*M2C_CL@calib_CL.M@wfsSignal
# store the slopes after computing the commands => 2 frames delay
wfsSignal=wfs.pyramidSignal
b= time.time()
print('Elapsed time: ' + str(b-a) +' s')
# update displays if required
if display==True:
# Turbulence
im_atm.set_data(turbPhase)
im_atm.set_clim(vmin=turbPhase.min(),vmax=turbPhase.max())
# WFS frame
C=wfs.cam.frame
im_wfs_CL.set_data(C)
im_wfs_CL.set_clim(vmin=C.min(),vmax=C.max())
# DM OPD
im_dm.set_data(dmOPD)
im_dm.set_clim(vmin=dmOPD.min(),vmax=dmOPD.max())
# residual phase
D=tel.src.phase
D=D-np.mean(D[tel.pupil])
im_residual.set_data(D)
im_residual.set_clim(vmin=D.min(),vmax=D.max())
tel.computePSF(zeroPaddingFactor=6)
im_PSF.set_data(np.log(tel.PSF_trunc/tel.PSF_trunc.max()))
im_PSF.set_clim(vmin=-4,vmax=0)
plt.draw()
plt.show()
plt.pause(0.001)
SR[i]=np.exp(-np.var(tel.src.phase[np.where(tel.pupil==1)]))
residual[i]=np.std(tel.OPD[np.where(tel.pupil>0)])*1e9
OPD=tel.OPD[np.where(tel.pupil>0)]
print('Loop'+str(i)+'/'+str(param['nLoop'])+' Turbulence: '+str(total[i])+' -- Residual:' +str(residual[i])+ '\n')
#%%
plt.figure()
plt.plot(total)
plt.plot(residual)
plt.xlabel('Time')
plt.ylabel('WFE [nm]')
plt.pause(10)
| Telescope and Atmosphere combined!
| MIT | tutorials/tutorial_howToAdaptiveOptics.ipynb | joao-aveiro/OOPAO |
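The control law inside the loop above is a plain integrator applied to delayed slopes (the loop stores `wfsSignal` after computing the commands, giving the two-frame delay mentioned in the comments). Isolated as a function, with `M2C_CL @ calib_CL.M` folded into a single `reconstructor` matrix (a sketch):

```python
import numpy as np

def integrator_step(dm_coefs, reconstructor, wfs_signal_prev, gain=0.6):
    """dm.coefs = dm.coefs - gainCL * M2C_CL @ calib_CL.M @ wfsSignal."""
    return dm_coefs - gain * reconstructor @ wfs_signal_prev
```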
Contents

* [1. Bernoulli Bandit](Part-1.-Bernoulli-Bandit)
  * [Bonus 1.1. Gittins index (5 points)](Bonus-1.1.-Gittins-index-%285-points%29.)
  * [HW 1.1. Nonstationary Bernoulli bandit](HW-1.1.-Nonstationary-Bernoulli-bandit)
* [2. Contextual bandit](Part-2.-Contextual-bandit)
  * [2.1 Building a BNN agent](2.1-Bulding-a-BNN-agent)
  * [2.2 Training the agent](2.2-Training-the-agent)
  * [HW 2.1 Better exploration](HW-2.1-Better-exploration)
* [3. Exploration in MDP](Part-3.-Exploration-in-MDP)
  * [Bonus 3.1 Posterior sampling RL (3 points)](Bonus-3.1-Posterior-sampling-RL-%283-points%29)
  * [Bonus 3.2 Bootstrapped DQN (10 points)](Bonus-3.2-Bootstrapped-DQN-%2810-points%29)

Part 1. Bernoulli Bandit

We are going to implement several exploration strategies for the simplest problem - a Bernoulli bandit.

The bandit has $K$ actions. Each action produces a reward of 1.0 with probability $0 \le \theta_k \le 1$, which is unknown to the agent but fixed over time. The agent's objective is to minimize regret over a fixed number $T$ of action selections:

$$\rho = T\theta^* - \sum_{t=1}^T r_t$$

where $\theta^* = \max_k\{\theta_k\}$.

**Real-world analogy:**

Clinical trials - we have $K$ pills and $T$ ill patients. After taking a pill, a patient is cured with probability $\theta_k$. The task is to find the most effective pill.

Research on clinical trials - https://arxiv.org/pdf/1507.08025.pdf | class BernoulliBandit:
def __init__(self, n_actions=5):
self._probs = np.random.random(n_actions)
@property
def action_count(self):
return len(self._probs)
def pull(self, action):
if np.random.random() > self._probs[action]:
return 0.0
return 1.0
def optimal_reward(self):
""" Used for regret calculation
"""
return np.max(self._probs)
def step(self):
""" Used in nonstationary version
"""
pass
def reset(self):
""" Used in nonstationary version
"""
class AbstractAgent(metaclass=ABCMeta):
def init_actions(self, n_actions):
self._successes = np.zeros(n_actions)
self._failures = np.zeros(n_actions)
self._total_pulls = 0
@abstractmethod
def get_action(self):
"""
Get current best action
:rtype: int
"""
pass
def update(self, action, reward):
"""
Observe reward from action and update agent's internal parameters
:type action: int
:type reward: int
"""
self._total_pulls += 1
if reward == 1:
self._successes[action] += 1
else:
self._failures[action] += 1
@property
def name(self):
return self.__class__.__name__
class RandomAgent(AbstractAgent):
def get_action(self):
return np.random.randint(0, len(self._successes)) | _____no_output_____ | Unlicense | week05_explore/week5.ipynb | kianya/Practical_RL |
Epsilon-greedy agent

**for** $t = 1,2,...$ **do**
   **for** $k = 1,...,K$ **do**
      $\hat\theta_k \leftarrow \alpha_k / (\alpha_k + \beta_k)$
   **end for**
   $x_t \leftarrow argmax_{k}\hat\theta$ with probability $1 - \epsilon$ or random action with probability $\epsilon$
   Apply $x_t$ and observe $r_t$
   $(\alpha_{x_t}, \beta_{x_t}) \leftarrow (\alpha_{x_t}, \beta_{x_t}) + (r_t, 1-r_t)$
**end for**

Implement the algorithm above in the cell below: | class EpsilonGreedyAgent(AbstractAgent):
def __init__(self, epsilon=0.01):
self._epsilon = epsilon
def get_action(self):
# YOUR CODE HERE
@property
def name(self):
return self.__class__.__name__ + "(epsilon={})".format(self._epsilon) | _____no_output_____ | Unlicense | week05_explore/week5.ipynb | kianya/Practical_RL |
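One possible body for the `get_action` stub above, written as a standalone function (a sketch; the small constant guarding against division by zero before any pulls is our addition):

```python
import numpy as np

def epsilon_greedy_action(successes, failures, epsilon=0.01):
    """With prob. epsilon pick a random arm, otherwise the arm with the
    highest empirical mean theta_k = alpha_k / (alpha_k + beta_k)."""
    if np.random.random() < epsilon:
        return np.random.randint(len(successes))
    theta = successes / (successes + failures + 1e-10)
    return int(np.argmax(theta))
```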
UCB Agent

The epsilon-greedy strategy has no preference among actions. It would be better to select among actions that are uncertain or have the potential to be optimal. One can come up with the idea of an index for each action that represents optimality and uncertainty at the same time. One efficient way to do this is to use the UCB1 algorithm:

**for** $t = 1,2,...$ **do**
   **for** $k = 1,...,K$ **do**
      $w_k \leftarrow \alpha_k / (\alpha_k + \beta_k) + \sqrt{2log\ t \ / \ (\alpha_k + \beta_k)}$
   **end for**
   $x_t \leftarrow argmax_{k}w$
   Apply $x_t$ and observe $r_t$
   $(\alpha_{x_t}, \beta_{x_t}) \leftarrow (\alpha_{x_t}, \beta_{x_t}) + (r_t, 1-r_t)$
**end for**

__Note:__ in practice, one can multiply $\sqrt{2log\ t \ / \ (\alpha_k + \beta_k)}$ by some tunable parameter to regulate the agent's optimism and willingness to abandon non-promising actions.

More versions and optimality analysis - https://homes.di.unimi.it/~cesabian/Pubblicazioni/ml-02.pdf | class UCBAgent(AbstractAgent):
def get_action(self):
# YOUR CODE HERE | _____no_output_____ | Unlicense | week05_explore/week5.ipynb | kianya/Practical_RL |
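A matching sketch of the UCB1 index for the stub above (the `1e-10` guard and the `+1` inside the log are our additions so the expression stays defined before any pulls):

```python
import numpy as np

def ucb_action(successes, failures, total_pulls):
    """w_k = alpha_k/(alpha_k+beta_k) + sqrt(2 log t / (alpha_k+beta_k))."""
    n_k = successes + failures + 1e-10
    w = successes / n_k + np.sqrt(2.0 * np.log(total_pulls + 1) / n_k)
    return int(np.argmax(w))
```

An arm that has never been pulled gets an enormous exploration bonus and is tried first.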
Thompson sampling

The UCB1 algorithm does not take into account the actual distribution of rewards. If we know the distribution, we can do much better by using Thompson sampling:

**for** $t = 1,2,...$ **do**
   **for** $k = 1,...,K$ **do**
      Sample $\hat\theta_k \sim beta(\alpha_k, \beta_k)$
   **end for**
   $x_t \leftarrow argmax_{k}\hat\theta$
   Apply $x_t$ and observe $r_t$
   $(\alpha_{x_t}, \beta_{x_t}) \leftarrow (\alpha_{x_t}, \beta_{x_t}) + (r_t, 1-r_t)$
**end for**

More on Thompson sampling: https://web.stanford.edu/~bvr/pubs/TS_Tutorial.pdf | class ThompsonSamplingAgent(AbstractAgent):
def get_action(self):
# YOUR CODE HERE
def plot_regret(env, agents, n_steps=5000, n_trials=50):
scores = {
agent.name: [0.0 for step in range(n_steps)] for agent in agents
}
for trial in range(n_trials):
env.reset()
for a in agents:
a.init_actions(env.action_count)
for i in range(n_steps):
optimal_reward = env.optimal_reward()
for agent in agents:
action = agent.get_action()
reward = env.pull(action)
agent.update(action, reward)
scores[agent.name][i] += optimal_reward - reward
env.step() # change bandit's state if it is unstationary
plt.figure(figsize=(17, 8))
for agent in agents:
plt.plot(np.cumsum(scores[agent.name]) / n_trials)
plt.legend([agent.name for agent in agents])
plt.ylabel("regret")
plt.xlabel("steps")
plt.show()
# Uncomment agents
agents = [
# EpsilonGreedyAgent(),
# UCBAgent(),
# ThompsonSamplingAgent()
]
plot_regret(BernoulliBandit(), agents, n_steps=10000, n_trials=10) | _____no_output_____ | Unlicense | week05_explore/week5.ipynb | kianya/Practical_RL |
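For reference, the `ThompsonSamplingAgent.get_action` stub above reduces to a single beta draw per arm (a sketch; the `+1.0` encodes a uniform Beta(1,1) prior over the unseen counters):

```python
import numpy as np

def thompson_action(successes, failures):
    """Sample theta_k ~ Beta(alpha_k + 1, beta_k + 1), act greedily on it."""
    samples = np.random.beta(successes + 1.0, failures + 1.0)
    return int(np.argmax(samples))
```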
Bonus 1.1. Gittins index (5 points).

The Bernoulli bandit problem has an optimal solution - the Gittins index algorithm. Implement a finite-horizon version of the algorithm and demonstrate its performance with experiments.

Some articles:
- Wikipedia article - https://en.wikipedia.org/wiki/Gittins_index
- Different algorithms for index computation - http://www.ece.mcgill.ca/~amahaj1/projects/bandits/book/2013-bandit-computations.pdf (see "Bernoulli" section)

HW 1.1. Nonstationary Bernoulli bandit

What if the success probabilities change over time? Here is an example of such a bandit: | class DriftingBandit(BernoulliBandit):
def __init__(self, n_actions=5, gamma=0.01):
"""
Idea from https://github.com/iosband/ts_tutorial
"""
super().__init__(n_actions)
self._gamma = gamma
self._successes = None
self._failures = None
self._steps = 0
self.reset()
def reset(self):
self._successes = np.zeros(self.action_count) + 1.0
self._failures = np.zeros(self.action_count) + 1.0
self._steps = 0
def step(self):
action = np.random.randint(self.action_count)
reward = self.pull(action)
self._step(action, reward)
def _step(self, action, reward):
self._successes = self._successes * (1 - self._gamma) + self._gamma
self._failures = self._failures * (1 - self._gamma) + self._gamma
self._steps += 1
self._successes[action] += reward
self._failures[action] += 1.0 - reward
self._probs = np.random.beta(self._successes, self._failures) | _____no_output_____ | Unlicense | week05_explore/week5.ipynb | kianya/Practical_RL |
And a picture of how its reward probabilities change over time: | drifting_env = DriftingBandit(n_actions=5)
drifting_probs = []
for i in range(20000):
drifting_env.step()
drifting_probs.append(drifting_env._probs)
plt.figure(figsize=(17, 8))
plt.plot(pandas.DataFrame(drifting_probs).rolling(window=20).mean())
plt.xlabel("steps")
plt.ylabel("Success probability")
plt.title("Reward probabilities over time")
plt.legend(["Action {}".format(i) for i in range(drifting_env.action_count)])
plt.show() | _____no_output_____ | Unlicense | week05_explore/week5.ipynb | kianya/Practical_RL |
Your task is to invent an agent that achieves lower regret than the stationary agents from above. | # YOUR AGENT HERE SECTION
drifting_agents = [
ThompsonSamplingAgent(),
EpsilonGreedyAgent(),
UCBAgent(),
YourAgent()
]
plot_regret(DriftingBandit(), drifting_agents, n_steps=20000, n_trials=10) | _____no_output_____ | Unlicense | week05_explore/week5.ipynb | kianya/Practical_RL |
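One possible direction for `YourAgent` (a sketch, not the only valid solution): Thompson sampling with exponential forgetting, so old observations are gradually discounted and the posterior can track the drifting probabilities. The `gamma` decay rate is a hypothetical tuning knob, not part of the assignment's API:

```python
import numpy as np

class DiscountedThompsonAgent:
    """Thompson sampling whose success/failure counts decay each step."""
    def __init__(self, gamma=0.99):
        self._gamma = gamma

    def init_actions(self, n_actions):
        self._successes = np.zeros(n_actions)
        self._failures = np.zeros(n_actions)

    def get_action(self):
        samples = np.random.beta(self._successes + 1.0, self._failures + 1.0)
        return int(np.argmax(samples))

    def update(self, action, reward):
        self._successes *= self._gamma  # forget old evidence
        self._failures *= self._gamma
        self._successes[action] += reward
        self._failures[action] += 1.0 - reward

    @property
    def name(self):
        return self.__class__.__name__
```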
Part 2. Contextual bandit

Now we will solve a much more complex problem - the reward will depend on the bandit's state.

**Real-world analogy:**

> Contextual advertising. We have a lot of banners and a lot of different users. Users can have different features: age, gender, search requests. We want to show the banner with the highest click probability.

If we want to use the strategies from above, we need to somehow store reward distributions conditioned both on actions and on the bandit's state. One way to do this is to use Bayesian neural networks. Instead of giving pointwise estimates of the target, they maintain probability distributions.

Picture from https://arxiv.org/pdf/1505.05424.pdf

More material:
* A post on the matter - [url](http://twiecki.github.io/blog/2016/07/05/bayesian-deep-learning/)
* Theano+PyMC3 for more serious stuff - [url](http://pymc-devs.github.io/pymc3/notebooks/bayesian_neural_network_advi.html)
* Same stuff in tensorflow - [url](http://edwardlib.org/tutorials/bayesian-neural-network)

Let's load our dataset: | all_states = np.load("all_states.npy")
action_rewards = np.load("action_rewards.npy")
state_size = all_states.shape[1]
n_actions = action_rewards.shape[1]
print("State size: %i, actions: %i" % (state_size, n_actions))
import theano
import theano.tensor as T
import lasagne
from lasagne import init
from lasagne.layers import *
import bayes
as_bayesian = bayes.bbpwrap(bayes.NormalApproximation(std=0.1))
BayesDenseLayer = as_bayesian(DenseLayer) | _____no_output_____ | Unlicense | week05_explore/week5.ipynb | kianya/Practical_RL |
2.1 Building a BNN agent

Let's implement an epsilon-greedy BNN agent: | class BNNAgent:
"""a bandit with bayesian neural net"""
def __init__(self, state_size, n_actions):
input_states = T.matrix("states")
target_actions = T.ivector("actions taken")
target_rewards = T.vector("rewards")
self.total_samples_seen = theano.shared(
np.int32(0), "number of training samples seen so far")
batch_size = target_actions.shape[0]  # why?
# Network
inp = InputLayer((None, state_size), name='input')
# YOUR NETWORK HERE
out = <Your network >
# Prediction
prediction_all_actions = get_output(out, inputs=input_states)
self.predict_sample_rewards = theano.function(
[input_states], prediction_all_actions)
# Training
# select prediction for target action
prediction_target_actions = prediction_all_actions[T.arange(
batch_size), target_actions]
# loss = negative log-likelihood (mse) + KL
negative_llh = T.sum((prediction_target_actions - target_rewards)**2)
kl = bayes.get_var_cost(out) / (self.total_samples_seen+batch_size)
loss = (negative_llh + kl)/batch_size
self.weights = get_all_params(out, trainable=True)
self.out = out
# gradient descent
updates = lasagne.updates.adam(loss, self.weights)
# update counts
updates[self.total_samples_seen] = self.total_samples_seen + \
batch_size.astype('int32')
self.train_step = theano.function([input_states, target_actions, target_rewards],
[negative_llh, kl],
updates=updates,
allow_input_downcast=True)
def sample_prediction(self, states, n_samples=1):
"""Samples n_samples predictions for rewards,
:returns: tensor [n_samples, state_i, action_i]
"""
assert states.ndim == 2, "states must be 2-dimensional"
return np.stack([self.predict_sample_rewards(states) for _ in range(n_samples)])
epsilon = 0.25
def get_action(self, states):
"""
Picks action by
- with p=1-epsilon, taking argmax of average rewards
- with p=epsilon, taking random action
This is exactly e-greedy policy.
"""
reward_samples = self.sample_prediction(states, n_samples=100)
# ^-- samples for rewards, shape = [n_samples,n_states,n_actions]
best_actions = reward_samples.mean(axis=0).argmax(axis=-1)
# ^-- we take mean over samples to compute expectation, then pick best action with argmax
# YOUR CODE HERE
chosen_actions = <-- implement epsilon-greedy strategy - ->
return chosen_actions
def train(self, states, actions, rewards, n_iters=10):
"""
trains to predict rewards for chosen actions in given states
"""
loss_sum = kl_sum = 0
for _ in range(n_iters):
loss, kl = self.train_step(states, actions, rewards)
loss_sum += loss
kl_sum += kl
return loss_sum / n_iters, kl_sum / n_iters
@property
def name(self):
return self.__class__.__name__ | _____no_output_____ | Unlicense | week05_explore/week5.ipynb | kianya/Practical_RL |
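The `<-- implement epsilon-greedy strategy -->` placeholder above only needs a vectorized coin flip per state; in plain numpy, independent of the theano graph (a sketch):

```python
import numpy as np

def epsilon_greedy_batch(best_actions, n_actions, epsilon=0.25):
    """For each state keep the greedy action with prob. 1-epsilon,
    otherwise substitute a uniformly random action."""
    best_actions = np.asarray(best_actions)
    explore = np.random.random(best_actions.shape) < epsilon
    random_actions = np.random.randint(n_actions, size=best_actions.shape)
    return np.where(explore, random_actions, best_actions)
```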
2.2 Training the agent | N_ITERS = 100
def get_new_samples(states, action_rewards, batch_size=10):
"""samples random minibatch, emulating new users"""
batch_ix = np.random.randint(0, len(states), batch_size)
return states[batch_ix], action_rewards[batch_ix]
from IPython.display import clear_output
from pandas import DataFrame
moving_average = lambda x, **kw: DataFrame(
{'x': np.asarray(x)}).x.ewm(**kw).mean().values
def train_contextual_agent(agent, batch_size=10, n_iters=100):
rewards_history = []
for i in range(n_iters):
b_states, b_action_rewards = get_new_samples(
all_states, action_rewards, batch_size)
b_actions = agent.get_action(b_states)
b_rewards = b_action_rewards[
np.arange(batch_size), b_actions
]
mse, kl = agent.train(b_states, b_actions, b_rewards, n_iters=100)
rewards_history.append(b_rewards.mean())
if i % 10 == 0:
clear_output(True)
print("iteration #%i\tmean reward=%.3f\tmse=%.3f\tkl=%.3f" %
(i, np.mean(rewards_history[-10:]), mse, kl))
plt.plot(rewards_history)
plt.plot(moving_average(np.array(rewards_history), alpha=0.1))
plt.title("Reward per episode")
plt.xlabel("Episode")
plt.ylabel("Reward")
plt.show()
samples = agent.sample_prediction(
b_states[:1], n_samples=100).T[:, 0, :]
for i in range(len(samples)):
plt.hist(samples[i], alpha=0.25, label=str(i))
plt.legend(loc='best')
print('Q(s,a) std:', ';'.join(
list(map('{:.3f}'.format, np.std(samples, axis=1)))))
print('correct', b_action_rewards[0].argmax())
plt.title("p(Q(s, a))")
plt.show()
return moving_average(np.array(rewards_history), alpha=0.1)
bnn_agent = BNNAgent(state_size=state_size, n_actions=n_actions)
greedy_agent_rewards = train_contextual_agent(
bnn_agent, batch_size=10, n_iters=N_ITERS) | iteration #90 mean reward=0.560 mse=0.457 kl=0.044
| Unlicense | week05_explore/week5.ipynb | kianya/Practical_RL |
HW 2.1 Better exploration

Use the strategies from the first part to gain more reward in the contextual setting. | class ThompsonBNNAgent(BNNAgent):
def get_action(self, states):
"""
picks an action by taking _one_ sample from the BNN and choosing the action with the highest sampled reward (yes, that simple).
This is exactly thompson sampling.
"""
# YOUR CODE HERE
thompson_agent_rewards = train_contextual_agent(ThompsonBNNAgent(state_size=state_size, n_actions=n_actions),
batch_size=10, n_iters=N_ITERS)
class BayesUCBBNNAgent(BNNAgent):
q = 90
def get_action(self, states):
"""
Compute q-th percentile of rewards P(r|s,a) for all actions
Take actions that have highest percentiles.
This implements bayesian UCB strategy
"""
# YOUR CODE HERE
ucb_agent_rewards = train_contextual_agent(BayesUCBBNNAgent(state_size=state_size, n_actions=n_actions),
batch_size=10, n_iters=N_ITERS)
plt.figure(figsize=(17, 8))
plt.plot(greedy_agent_rewards)
plt.plot(thompson_agent_rewards)
plt.plot(ucb_agent_rewards)
plt.legend([
"Greedy BNN",
"Thompson sampling BNN",
"UCB BNN"
])
plt.show() | _____no_output_____ | Unlicense | week05_explore/week5.ipynb | kianya/Practical_RL |
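Both HW agents above differ from the greedy one only in how they reduce the sampled reward tensor `[n_samples, n_states, n_actions]` to one action per state; the two reductions in plain numpy (a sketch):

```python
import numpy as np

def thompson_actions(one_sample):
    """one_sample: [n_states, n_actions] from a single pass through the BNN."""
    return one_sample.argmax(axis=-1)

def bayes_ucb_actions(reward_samples, q=90):
    """reward_samples: [n_samples, n_states, n_actions];
    rank actions by the q-th percentile of their sampled rewards."""
    return np.percentile(reward_samples, q, axis=0).argmax(axis=-1)
```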
Part 3. Exploration in MDP

The following problem, called "river swim", illustrates the importance of exploration in the context of MDPs.

Picture from https://arxiv.org/abs/1306.0940

Rewards and transition probabilities are unknown to the agent. The optimal policy is to swim against the current, while the easiest way to gain reward is to go left. | class RiverSwimEnv:
LEFT_REWARD = 5.0 / 1000
RIGHT_REWARD = 1.0
def __init__(self, intermediate_states_count=4, max_steps=16):
self._max_steps = max_steps
self._current_state = None
self._steps = None
self._interm_states = intermediate_states_count
self.reset()
def reset(self):
self._steps = 0
self._current_state = 1
return self._current_state, 0.0, False
@property
def n_actions(self):
return 2
@property
def n_states(self):
return 2 + self._interm_states
def _get_transition_probs(self, action):
if action == 0:
if self._current_state == 0:
return [0, 1.0, 0]
else:
return [1.0, 0, 0]
elif action == 1:
if self._current_state == 0:
return [0, .4, .6]
if self._current_state == self.n_states - 1:
return [.4, .6, 0]
else:
return [.05, .6, .35]
else:
raise RuntimeError(
"Unknown action {}. Max action is {}".format(action, self.n_actions))
def step(self, action):
"""
:param action:
:type action: int
:return: observation, reward, is_done
:rtype: (int, float, bool)
"""
reward = 0.0
if self._steps >= self._max_steps:
return self._current_state, reward, True
transition = np.random.choice(
range(3), p=self._get_transition_probs(action))
if transition == 0:
self._current_state -= 1
elif transition == 1:
pass
else:
self._current_state += 1
if self._current_state == 0:
reward = self.LEFT_REWARD
elif self._current_state == self.n_states - 1:
reward = self.RIGHT_REWARD
self._steps += 1
return self._current_state, reward, False | _____no_output_____ | Unlicense | week05_explore/week5.ipynb | kianya/Practical_RL |
Let's implement a Q-learning agent with an epsilon-greedy exploration strategy and see how it performs. | class QLearningAgent:
def __init__(self, n_states, n_actions, lr=0.2, gamma=0.95, epsilon=0.1):
self._gamma = gamma
self._epsilon = epsilon
self._q_matrix = np.zeros((n_states, n_actions))
self._lr = lr
def get_action(self, state):
if np.random.random() < self._epsilon:
return np.random.randint(0, self._q_matrix.shape[1])
else:
return np.argmax(self._q_matrix[state])
def get_q_matrix(self):
""" Used for policy visualization
"""
return self._q_matrix
def start_episode(self):
""" Used in PSRL agent
"""
pass
def update(self, state, action, reward, next_state):
# YOUR CODE HERE
# Finish implementation of q-learnig agent
def train_mdp_agent(agent, env, n_episodes):
episode_rewards = []
for ep in range(n_episodes):
state, ep_reward, is_done = env.reset()
agent.start_episode()
while not is_done:
action = agent.get_action(state)
next_state, reward, is_done = env.step(action)
agent.update(state, action, reward, next_state)
state = next_state
ep_reward += reward
episode_rewards.append(ep_reward)
return episode_rewards
env = RiverSwimEnv()
agent = QLearningAgent(env.n_states, env.n_actions)
rews = train_mdp_agent(agent, env, 1000)
plt.figure(figsize=(15, 8))
plt.plot(moving_average(np.array(rews), alpha=.1))
plt.xlabel("Episode count")
plt.ylabel("Reward")
plt.show() | /usr/local/lib/python3.5/dist-packages/ipykernel_launcher.py:6: FutureWarning: pd.ewm_mean is deprecated for ndarrays and will be removed in a future version
| Unlicense | week05_explore/week5.ipynb | kianya/Practical_RL |
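The missing `update` above is the standard tabular Q-learning rule; as a standalone function with the same `lr`/`gamma` defaults (one possible solution):

```python
import numpy as np

def q_learning_update(q_matrix, state, action, reward, next_state,
                      lr=0.2, gamma=0.95):
    """Q(s,a) += lr * (r + gamma * max_a' Q(s',a') - Q(s,a)), in place."""
    td_target = reward + gamma * np.max(q_matrix[next_state])
    q_matrix[state, action] += lr * (td_target - q_matrix[state, action])
```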
Let's visualize our policy: | def plot_policy(agent):
fig = plt.figure(figsize=(15, 8))
ax = fig.add_subplot(111)
ax.matshow(agent.get_q_matrix().T)
ax.set_yticklabels(['', 'left', 'right'])
plt.xlabel("State")
plt.ylabel("Action")
plt.title("Values of state-action pairs")
plt.show()
plot_policy(agent) | _____no_output_____ | Unlicense | week05_explore/week5.ipynb | kianya/Practical_RL |
As you can see, the agent uses the suboptimal policy of going left and does not explore the right state.

Bonus 3.1 Posterior sampling RL (3 points)

Now we will implement Thompson sampling for an MDP!

General algorithm:

>**for** episode $k = 1,2,...$ **do**
>> sample $M_k \sim f(\bullet\ |\ H_k)$
>> compute policy $\mu_k$ for $M_k$
>> **for** time $t = 1, 2,...$ **do**
>>> take action $a_t$ from $\mu_k$
>>> observe $r_t$ and $s_{t+1}$
>>> update $H_k$
>> **end for**
>**end for**

In our case we will model $M_k$ with two matrices: transition and reward. The transition matrix is sampled from a Dirichlet distribution. The reward matrix is sampled from a normal-gamma distribution.

Distributions are updated with Bayes' rule - see the continuous distributions section at https://en.wikipedia.org/wiki/Conjugate_prior

Article on PSRL - https://arxiv.org/abs/1306.0940 | def sample_normal_gamma(mu, lmbd, alpha, beta):
""" https://en.wikipedia.org/wiki/Normal-gamma_distribution
"""
tau = np.random.gamma(alpha, beta)
mu = np.random.normal(mu, 1.0 / np.sqrt(lmbd * tau))
return mu, tau
class PsrlAgent:
def __init__(self, n_states, n_actions, horizon=10):
self._n_states = n_states
self._n_actions = n_actions
self._horizon = horizon
# params for transition sampling - Dirichlet distribution
self._transition_counts = np.zeros(
(n_states, n_states, n_actions)) + 1.0
# params for reward sampling - Normal-gamma distribution
self._mu_matrix = np.zeros((n_states, n_actions)) + 1.0
self._state_action_counts = np.zeros(
(n_states, n_actions)) + 1.0 # lambda
self._alpha_matrix = np.zeros((n_states, n_actions)) + 1.0
self._beta_matrix = np.zeros((n_states, n_actions)) + 1.0
def _value_iteration(self, transitions, rewards):
# YOU CODE HERE
state_values = < Find action values with value iteration >
return state_values
def start_episode(self):
# sample new mdp
self._sampled_transitions = np.apply_along_axis(
np.random.dirichlet, 1, self._transition_counts)
sampled_reward_mus, sampled_reward_stds = sample_normal_gamma(
self._mu_matrix,
self._state_action_counts,
self._alpha_matrix,
self._beta_matrix
)
self._sampled_rewards = sampled_reward_mus
self._current_value_function = self._value_iteration(
self._sampled_transitions, self._sampled_rewards)
def get_action(self, state):
return np.argmax(self._sampled_rewards[state] +
self._current_value_function.dot(self._sampled_transitions[state]))
def update(self, state, action, reward, next_state):
# YOUR CODE HERE
# update rules - https://en.wikipedia.org/wiki/Conjugate_prior
def get_q_matrix(self):
return self._sampled_rewards + self._current_value_function.dot(self._sampled_transitions)
from pandas import DataFrame
moving_average = lambda x, **kw: DataFrame(
{'x': np.asarray(x)}).x.ewm(**kw).mean().values
horizon = 20
env = RiverSwimEnv(max_steps=horizon)
agent = PsrlAgent(env.n_states, env.n_actions, horizon=horizon)
rews = train_mdp_agent(agent, env, 1000)
plt.figure(figsize=(15, 8))
plt.plot(moving_average(np.array(rews), alpha=0.1))
plt.xlabel("Episode count")
plt.ylabel("Reward")
plt.show()
plot_policy(agent) | _____no_output_____ | Unlicense | week05_explore/week5.ipynb | kianya/Practical_RL |
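The `_value_iteration` stub above can be filled with finite-horizon backward induction on the sampled MDP; a sketch matching the shapes used by `PsrlAgent` (`transitions[s, s_next, a]` as produced by the Dirichlet sampling, `rewards[s, a]`):

```python
import numpy as np

def finite_horizon_values(transitions, rewards, horizon):
    """V_h(s) = max_a [ R(s,a) + sum_s' T(s,s',a) V_{h-1}(s') ]."""
    values = np.zeros(rewards.shape[0])
    for _ in range(horizon):
        q = rewards + np.einsum('spa,p->sa', transitions, values)
        values = q.max(axis=1)
    return values
```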
Agenda

1. [Review](0)
2. [Numpy Intro and Installation](2)
3. [Exercise](10)
4. [Exercise](12)

Review Exercise

The following list represents the diameters of circles:

circle_diameters = [1,2,3,5,8,13,21]

1 - Calculate the area of each circle
2 - Calculate the circumference of each circle

Use Numpy for the solution

Solution | import numpy as np
circle_diameters = [1,2,3,5,8,13,21]
area = [round(np.pi*(x/2)**2,2) for x in circle_diameters]
circ = [round(np.pi*x) for x in circle_diameters]
print(circ)
print(area) | _____no_output_____ | BSD-Source-Code | CI_Data_Science_Lesson_06_ANSWERS.ipynb | MaxDGU/datasciencenotebooks |
Exercise

For each tuple in the list, print the area of each rectangle. Each element in the list represents the length and the width.

dimensions = [(20,2),(2,3),(4,4),(6,6)]

Solve without using numpy.

Solution | dimensions = [(20,2),(2,3),(4,4),(6,6)]
areas = [i*x for i,x in dimensions]
print(areas) | [40, 6, 16, 36]
| BSD-Source-Code | CI_Data_Science_Lesson_06_ANSWERS.ipynb | MaxDGU/datasciencenotebooks |
Exercise

The following dictionary represents the dimensions of an apartment in New York City.

apartment = {'Bedroom 1':(12,12), 'Bedroom 2': (12,10), 'Bathroom 1': (6,8), 'Bathroom 2': (6,8), 'Kitchen': (10,8), 'Foyer': (14,4), 'Dining Room': (12,10), 'Living Room': (12,15)}

1 - Calculate the square footage of each room
2 - Calculate the total square footage of the apartment
3 - If the apartment is selling for $90 a square foot, how much is it selling for?

NOTE: Selling price is the square footage * price per square foot

Do not use numpy | apartment = {'Bedroom 1':(12,12), 'Bedroom 2': (12,10), 'Bathroom 1': (6,8), 'Bathroom 2': (6,8), 'Kitchen': (10,8), 'Foyer': (14,4), 'Dining Room': (12,10), 'Living Room': (12,15)}
areas = [(v[0]*v[1]) for i,v in apartment.items()]
print(areas)
print(sum(areas))
print('$',sum(areas)*90) | [144, 120, 48, 48, 80, 56, 120, 180]
796
$ 71640
| BSD-Source-Code | CI_Data_Science_Lesson_06_ANSWERS.ipynb | MaxDGU/datasciencenotebooks |
Exercise5 friends go out to dinner and their individual amounts were tallied on 1 bill:dinner = [36, 42, 27, 32, 39]Using numpy calculate the following:1 - The average meal price2 - The median meal price3 - The maximum cost for the meal4 - The least expensive meal5 - The difference each person varied from the mean6 - Add a 20% tip to each person's meal, then add 8.875% tax to the pre-tax meal amount7 - What percentage of the total bill is each person contributing?8 - It was the birthday of the person who had the most expensive meal. Distribute the cost of their meal amongst the other patrons. Solution | import numpy as np
dinner = [36, 42, 27, 32, 39]
avg = np.mean(dinner)
median = np.median(dinner)
m = np.max(dinner)
mi = np.min(dinner)
diff_mean = [round(x-avg,2) for x in dinner]
print('avg:', avg, 'median:', median, 'max:', m, 'min:', mi)
print(diff_mean)
adjusted_dinner = [round((x*1.2)*1.0875,2) for x in dinner]
new_sum = sum(adjusted_dinner)
print(adjusted_dinner)
contributions = [round((x/new_sum)*100,2) for x in adjusted_dinner]
print(contributions, sum(contributions))
add = np.max(adjusted_dinner)/4
adjusted_dinner.remove(max(adjusted_dinner))
print(np.add(adjusted_dinner,add))
| avg: 35.2 median: 36.0 max: 42 min: 27
[0.8, 6.8, -8.2, -3.2, 3.8]
[46.98, 54.81, 35.23, 41.76, 50.89]
[20.46, 23.86, 15.34, 18.18, 22.16] 100.0
[60.6825 48.9325 55.4625 64.5925]
| BSD-Source-Code | CI_Data_Science_Lesson_06_ANSWERS.ipynb | MaxDGU/datasciencenotebooks |
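As a cross-check, most of the per-person arithmetic above vectorizes directly on a NumPy array (a sketch using the same 20% tip and 8.875% tax):

```python
import numpy as np

dinner = np.array([36, 42, 27, 32, 39])
adjusted = np.round(dinner * 1.2 * 1.0875, 2)             # tip, then tax
contributions = np.round(adjusted / adjusted.sum() * 100, 2)
print(adjusted)
print(contributions)
```

The broadcasted expressions replace the list comprehensions one-for-one; only step 8 (removing the birthday person's meal) still needs the element-wise handling shown above.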
Numpy continued Strings can also be stored in a numpy array.import numpy as nppasta_shapes = ['Macaroni','Rigatoni','Angel Hair','Spaghetti','Linguini']shapes = np.array(pasta_shapes)shapes = np.sort(shapes)print(shapes)print(shapes.dtype) | import numpy as np
pasta_shapes = ['Macaroni','Rigatoni','Angel Hair','Spaghetti','Linguini']
val= 5
val = np.array(val)
name= 'str'
name = np.array(name)
boo = True
boo = np.array(boo)
shapes = np.array(pasta_shapes)
print(np.sort(shapes))
print(shapes.dtype)
print(type(shapes)) | ['Angel Hair' 'Linguini' 'Macaroni' 'Rigatoni' 'Spaghetti']
<U10
<class 'numpy.ndarray'>
| BSD-Source-Code | CI_Data_Science_Lesson_06_ANSWERS.ipynb | MaxDGU/datasciencenotebooks |
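The `<U10` in the output is NumPy's fixed-width Unicode dtype: the array stores strings of at most 10 characters, sized to the longest entry ('Angel Hair'). Assigning a longer string silently truncates it — a small illustration:

```python
import numpy as np

shapes = np.array(['Macaroni', 'Rigatoni', 'Angel Hair', 'Spaghetti', 'Linguini'])
print(shapes.dtype)        # <U10
shapes[0] = 'Tagliatelle'  # 11 characters: one too many
print(shapes[0])           # silently truncated to 10 characters
```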
We can use shortcut abbreviations to assign the data types to numpy arrays. For i, u, f, S and U we can define a size as well, combining each letter with a size like 4 or 8 (e.g. 'i8').import numpy as nparr = np.array(list(range(1,11)), dtype='i8')print(arr, arr.dtype) | import numpy as np
np.array(list(range(1,11)), dtype='i8') | _____no_output_____ | BSD-Source-Code | CI_Data_Science_Lesson_06_ANSWERS.ipynb | MaxDGU/datasciencenotebooks |
CastingWe can convert python data types into numpy data types using 2 methodsMethod 1Use the dtype parameterarr_string = np.array(list(range(1,11)), dtype='S')print(arr_string)Method 2Use astype()import numpy as nparr = np.array(list(range(1,11)), dtype='i8')arr_2 = arr.astype('S')print(arr_2) | arr_string = np.array(list(range(1,11)), dtype='S')
print(arr_string)
import numpy as np
arr = np.array(list(range(1,11)), dtype='i8')
arr_2 = arr.astype('S')
print(arr_2)
 | [b'1' b'2' b'3' b'4' b'5' b'6' b'7' b'8' b'9' b'10']
[b'1' b'2' b'3' b'4' b'5' b'6' b'7' b'8' b'9' b'10']
| BSD-Source-Code | CI_Data_Science_Lesson_06_ANSWERS.ipynb | MaxDGU/datasciencenotebooks |
ExerciseTake the following floating point number and cast them as integersUse both methods.rainfall = [2.3,3.7,2.4,1.9] Solution |
import numpy as np
rainfall = np.array([2.3,3.7,2.4,1.9])
r2 = rainfall.astype(int)
print(r2)
import numpy as np
| _____no_output_____ | BSD-Source-Code | CI_Data_Science_Lesson_06_ANSWERS.ipynb | MaxDGU/datasciencenotebooks |
Repairing Code AutomaticallySo far, we have discussed how to track failures and how to locate defects in code. Let us now discuss how to _repair_ defects – that is, to correct the code such that the failure no longer occurs. We will discuss how to _repair code automatically_ – by systematically searching through possible fixes and evolving the most promising candidates. | from bookutils import YouTubeVideo
YouTubeVideo("UJTf7cW0idI") | _____no_output_____ | MIT | notebooks/Repairer.ipynb | HGUISEL/debuggingbook |
**Prerequisites*** Re-read the [introduction to debugging](Intro_Debugging.ipynb), notably on how to properly fix code.* We make use of automatic fault localization, as discussed in the [chapter on statistical debugging](StatisticalDebugger.ipynb).* We make extensive use of code transformations, as discussed in the [chapter on tracing executions](Tracer.ipynb).* We make use of [delta debugging](DeltaDebugger.ipynb). | import bookutils | _____no_output_____ | MIT | notebooks/Repairer.ipynb | HGUISEL/debuggingbook |
SynopsisTo [use the code provided in this chapter](Importing.ipynb), write```python>>> from debuggingbook.Repairer import ```and then make use of the following features.This chapter provides tools and techniques for automated repair of program code. The `Repairer()` class takes a `RankingDebugger` debugger as input (such as `OchiaiDebugger` from [the chapter on statistical debugging](StatisticalDebugger.ipynb)). A typical setup looks like this:```pythonfrom debuggingbook.StatisticalDebugger import OchiaiDebuggerdebugger = OchiaiDebugger()for inputs in TESTCASES: with debugger: test_foo(inputs)...repairer = Repairer(debugger)```Here, `test_foo()` is a function that raises an exception if the tested function `foo()` fails. If `foo()` passes, `test_foo()` should not raise an exception.The `repair()` method of a `Repairer` searches for a repair of the code covered in the debugger (except for methods starting or ending in `test`, such that `foo()`, not `test_foo()` is repaired). `repair()` returns the best fix candidate as a pair `(tree, fitness)` where `tree` is a [Python abstract syntax tree](http://docs.python.org/3/library/ast) (AST) of the fix candidate, and `fitness` is the fitness of the candidate (a value between 0 and 1). A `fitness` of 1.0 means that the candidate passed all tests. A typical usage looks like this:```pythonimport astortree, fitness = repairer.repair()print(astor.to_source(tree), fitness)```Here is a complete example for the `middle()` program. This is the original source code of `middle()`:```pythondef middle(x, y, z): type: ignore if y < z: if x < y: return y elif x < z: return y else: if x > y: return y elif x > z: return x return z```We set up a function `middle_test()` that tests it. 
The `middle_debugger` collects testcases and outcomes:```python>>> middle_debugger = OchiaiDebugger()>>> for x, y, z in MIDDLE_PASSING_TESTCASES + MIDDLE_FAILING_TESTCASES:>>> with middle_debugger:>>> middle_test(x, y, z)```The repairer attempts to repair the invoked function (`middle()`). The returned AST `tree` can be output via `astor.to_source()`:```python>>> middle_repairer = Repairer(middle_debugger)>>> tree, fitness = middle_repairer.repair()>>> print(astor.to_source(tree), fitness)def middle(x, y, z): if y < z: if x < z: if x < y: return y else: return x elif x > y: return y elif x > z: return x return z 1.0```Here are the classes defined in this chapter. A `Repairer` repairs a program, using a `StatementMutator` and a `CrossoverOperator` to evolve a population of candidates. Automatic Code RepairsSo far, we have discussed how to locate defects in code, how to track failures back to the defects that caused them, and how to systematically determine failure conditions. Let us now address the last step in debugging – namely, how to _automatically fix code_.Already in the [introduction to debugging](Intro_Debugging.ipynb), we have discussed how to fix code manually. Notably, we have established that a _diagnosis_ (which induces a fix) should show _causality_ (i.e., how the defect causes the failure) and _incorrectness_ (how the defect is wrong). Is it possible to obtain such a diagnosis automatically? In this chapter, we introduce a technique of _automatic code repair_ – that is, for a given failure, automatically determine a fix that makes the failure go away. To do so, we randomly (but systematically) _mutate_ the program code – that is, insert, change, and delete fragments – until we find a change that actually causes the failing test to pass. If this sounds like an audacious idea, that is because it is. 
But not only is _automated program repair_ one of the hottest topics of software research in the last decade, it is also being increasingly deployed in industry. At Facebook, for instance, every failing test report comes with an automatically generated _repair suggestion_ – a suggestion that already has been validated to work. Programmers can apply the suggestion as is or use it as basis for their own fixes. The middle() Function Let us introduce our ongoing example. In the [chapter on statistical debugging](StatisticalDebugger.ipynb), we have introduced the `middle()` function – a function that returns the "middle" of three numbers `x`, `y`, and `z`: | from StatisticalDebugger import middle
# ignore
from bookutils import print_content
# ignore
import inspect
# ignore
_, first_lineno = inspect.getsourcelines(middle)
middle_source = inspect.getsource(middle)
print_content(middle_source, '.py', start_line_number=first_lineno) | _____no_output_____ | MIT | notebooks/Repairer.ipynb | HGUISEL/debuggingbook |
In most cases, `middle()` just runs fine: | middle(4, 5, 6) | _____no_output_____ | MIT | notebooks/Repairer.ipynb | HGUISEL/debuggingbook |
In some other cases, though, it does not work correctly: | middle(2, 1, 3) | _____no_output_____ | MIT | notebooks/Repairer.ipynb | HGUISEL/debuggingbook |
Validated Repairs Now, if we only want a repair that fixes this one given failure, this would be very easy. All we have to do is to replace the entire body by a single statement: | def middle_sort_of_fixed(x, y, z): # type: ignore
return x | _____no_output_____ | MIT | notebooks/Repairer.ipynb | HGUISEL/debuggingbook |
You will concur that the failure no longer occurs: | middle_sort_of_fixed(2, 1, 3) | _____no_output_____ | MIT | notebooks/Repairer.ipynb | HGUISEL/debuggingbook |
But this, of course, is not the aim of automatic fixes, nor of fixes in general: We want our fixes not only to make the given failure go away, but we also want the resulting code to be _correct_ (which, of course, is a lot harder). Automatic repair techniques therefore assume the existence of a _test suite_ that can check whether an implementation satisfies its requirements. Better yet, one can use the test suite to gradually check _how close_ one is to perfection: A piece of code that satisfies 99% of all tests is better than one that satisfies ~33% of all tests, as `middle_sort_of_fixed()` would do (assuming the test suite evenly checks the input space). Genetic Optimization The common approach for automatic repair follows the principle of _genetic optimization_. Roughly spoken, genetic optimization is a _metaheuristic_ inspired by the process of _natural selection_. The idea is to _evolve_ a selection of _candidate solutions_ towards a maximum _fitness_:1. Have a selection of _candidates_.2. Determine the _fitness_ of each candidate.3. Retain those candidates with the _highest fitness_.4. Create new candidates from the retained candidates, by applying genetic operations: * _Mutation_ mutates some aspect of a candidate. * _CrossoverOperator_ creates new candidates combining features of two candidates.5. Repeat until an optimal solution is found. Applied for automated program repair, this means the following steps:1. Have a _test suite_ with both failing and passing tests that helps asserting correctness of possible solutions.2. With the test suite, use [fault localization](StatisticalDebugger.ipynb) to determine potential code locations to be fixed.3. Systematically _mutate_ the code (by adding, changing, or deleting code) and _cross_ code to create possible fix candidates.4. Identify the _fittest_ fix candidates – that is, those that satisfy the most tests.5. _Evolve_ the fittest candidates until a perfect fix is found, or until time resources are depleted. 
Let us illustrate these steps in the following sections. A Test Suite In automated repair, the larger and the more thorough the test suite, the higher the quality of the resulting fix (if any). Hence, if we want to repair `middle()` automatically, we need a good test suite – with good inputs, but also with good checks. Note that running the test suite commonly takes the most time of automated repair, so a large test suite also comes with extra cost. Let us first focus on achieving high-quality repairs. Hence, we will use the extensive test suites introduced in the [chapter on statistical debugging](StatisticalDebugger.ipynb): | from StatisticalDebugger import MIDDLE_PASSING_TESTCASES, MIDDLE_FAILING_TESTCASES | _____no_output_____ | MIT | notebooks/Repairer.ipynb | HGUISEL/debuggingbook |
The `middle_test()` function fails whenever `middle()` returns an incorrect result: | def middle_test(x: int, y: int, z: int) -> None:
m = middle(x, y, z)
assert m == sorted([x, y, z])[1]
from ExpectError import ExpectError
with ExpectError():
middle_test(2, 1, 3) | _____no_output_____ | MIT | notebooks/Repairer.ipynb | HGUISEL/debuggingbook |
Locating the Defect Our next step is to find potential defect locations – that is, those locations in the code our mutations should focus upon. Since we already do have two test suites, we can make use of [statistical debugging](StatisticalDebugger.ipynb) to identify likely faulty locations. Our `OchiaiDebugger` ranks individual code lines by how frequently they are executed in failing runs (and not in passing runs). | from StatisticalDebugger import OchiaiDebugger, RankingDebugger
middle_debugger = OchiaiDebugger()
for x, y, z in MIDDLE_PASSING_TESTCASES + MIDDLE_FAILING_TESTCASES:
with middle_debugger:
middle_test(x, y, z) | _____no_output_____ | MIT | notebooks/Repairer.ipynb | HGUISEL/debuggingbook |
We see that the upper half of the `middle()` code is definitely more suspicious: | middle_debugger | _____no_output_____ | MIT | notebooks/Repairer.ipynb | HGUISEL/debuggingbook |
The most suspicious line is: | # ignore
location = middle_debugger.rank()[0]
(func_name, lineno) = location
lines, first_lineno = inspect.getsourcelines(middle)
print(lineno, end="")
print_content(lines[lineno - first_lineno], '.py') | _____no_output_____ | MIT | notebooks/Repairer.ipynb | HGUISEL/debuggingbook |
with a suspiciousness of: | # ignore
middle_debugger.suspiciousness(location) | _____no_output_____ | MIT | notebooks/Repairer.ipynb | HGUISEL/debuggingbook |
Random Code Mutations Our third step in automatic code repair is to _randomly mutate the code_. Specifically, we want to randomly _delete_, _insert_, and _replace_ statements in the program to be repaired. However, simply synthesizing code _from scratch_ is unlikely to yield anything meaningful – the number of combinations is simply far too high. Already for a three-character identifier name, we have more than 200,000 combinations: | import string
string.ascii_letters
len(string.ascii_letters + '_') * \
len(string.ascii_letters + '_' + string.digits) * \
len(string.ascii_letters + '_' + string.digits) | _____no_output_____ | MIT | notebooks/Repairer.ipynb | HGUISEL/debuggingbook |
Hence, we do _not_ synthesize code from scratch, but instead _reuse_ elements from the program to be fixed, hypothesizing that "a program that contains an error in one area likely implements the correct behavior elsewhere" \cite{LeGoues2012}. This insight has been dubbed the *plastic surgery hypothesis*: content of new code can often be assembled out of fragments of code that already exist in the code base \citeBarr2014}. For our "plastic surgery", we do not operate on a _textual_ representation of the program, but rather on a _structural_ representation, which by construction allows us to avoid lexical and syntactical errors in the first place.This structural representation is the _abstract syntax tree_ (AST), which we already have seen in various chapters, such as the [chapter on delta debugging](DeltaDebugger.ipynb), the [chapter on tracing](Tracer.ipynb), and excessively in the [chapter on slicing](Slicer.ipynb). The [official Python `ast` reference](http://docs.python.org/3/library/ast) is complete, but a bit brief; the documentation ["Green Tree Snakes - the missing Python AST docs"](https://greentreesnakes.readthedocs.io/en/latest/) provides an excellent introduction.Recapitulating, an AST is a tree representation of the program, showing a hierarchical structure of the program's elements. Here is the AST for our `middle()` function. | import ast
import astor
import inspect
from bookutils import print_content, show_ast
def middle_tree() -> ast.AST:
return ast.parse(inspect.getsource(middle))
show_ast(middle_tree()) | _____no_output_____ | MIT | notebooks/Repairer.ipynb | HGUISEL/debuggingbook |
You see that it consists of one function definition (`FunctionDef`) with three `arguments` and two statements – one `If` and one `Return`. Each `If` subtree has three branches – one for the condition (`test`), one for the body to be executed if the condition is true (`body`), and one for the `else` case (`orelse`). The `body` and `orelse` branches again are lists of statements. An AST can also be shown as text, which is more compact, yet reveals more information. `ast.dump()` gives not only the class names of elements, but also how they are constructed – actually, the whole expression can be used to construct an AST. | print(ast.dump(middle_tree())) | _____no_output_____ | MIT | notebooks/Repairer.ipynb | HGUISEL/debuggingbook |
This is the path to the first `return` statement: | ast.dump(middle_tree().body[0].body[0].body[0].body[0]) # type: ignore | _____no_output_____ | MIT | notebooks/Repairer.ipynb | HGUISEL/debuggingbook |
Picking Statements For our mutation operators, we want to use statements from the program itself. Hence, we need a means to find those very statements. The `StatementVisitor` class iterates through an AST, adding all statements it finds in function definitions to its `statements` list. To do so, it subclasses the Python `ast` `NodeVisitor` class, described in the [official Python `ast` reference](http://docs.python.org/3/library/ast). | from ast import NodeVisitor
# ignore
from typing import Any, Callable, Optional, Type, Tuple
from typing import Dict, Union, Set, List, cast
class StatementVisitor(NodeVisitor):
"""Visit all statements within function defs in an AST"""
def __init__(self) -> None:
self.statements: List[Tuple[ast.AST, str]] = []
self.func_name = ""
self.statements_seen: Set[Tuple[ast.AST, str]] = set()
super().__init__()
def add_statements(self, node: ast.AST, attr: str) -> None:
elems: List[ast.AST] = getattr(node, attr, [])
if not isinstance(elems, list):
elems = [elems] # type: ignore
for elem in elems:
stmt = (elem, self.func_name)
if stmt in self.statements_seen:
continue
self.statements.append(stmt)
self.statements_seen.add(stmt)
def visit_node(self, node: ast.AST) -> None:
# Any node other than the ones listed below
self.add_statements(node, 'body')
self.add_statements(node, 'orelse')
def visit_Module(self, node: ast.Module) -> None:
# Module children are defs, classes and globals - don't add
super().generic_visit(node)
def visit_ClassDef(self, node: ast.ClassDef) -> None:
# Class children are defs and globals - don't add
super().generic_visit(node)
def generic_visit(self, node: ast.AST) -> None:
self.visit_node(node)
super().generic_visit(node)
def visit_FunctionDef(self,
node: Union[ast.FunctionDef, ast.AsyncFunctionDef]) -> None:
if not self.func_name:
self.func_name = node.name
self.visit_node(node)
super().generic_visit(node)
self.func_name = ""
def visit_AsyncFunctionDef(self, node: ast.AsyncFunctionDef) -> None:
return self.visit_FunctionDef(node) | _____no_output_____ | MIT | notebooks/Repairer.ipynb | HGUISEL/debuggingbook |
The function `all_statements()` returns all statements in the given AST `tree`. If an `ast` class `tp` is given, it only returns instances of that class. | def all_statements_and_functions(tree: ast.AST,
tp: Optional[Type] = None) -> \
List[Tuple[ast.AST, str]]:
"""
Return a list of pairs (`statement`, `function`) for all statements in `tree`.
If `tp` is given, return only statements of that class.
"""
visitor = StatementVisitor()
visitor.visit(tree)
statements = visitor.statements
if tp is not None:
statements = [s for s in statements if isinstance(s[0], tp)]
return statements
def all_statements(tree: ast.AST, tp: Optional[Type] = None) -> List[ast.AST]:
"""
Return a list of all statements in `tree`.
If `tp` is given, return only statements of that class.
"""
return [stmt for stmt, func_name in all_statements_and_functions(tree, tp)] | _____no_output_____ | MIT | notebooks/Repairer.ipynb | HGUISEL/debuggingbook |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.