We can draw the object by using the method drawCircle():
# Call the method drawCircle
RedCircle.drawCircle()
_____no_output_____
MIT
Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb
amitkrishna/IBM-DataScience
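For context, these cells assume a `Circle` class defined earlier in the notebook. A minimal sketch consistent with the attributes and methods used here (`radius`, `color`, `add_radius()`, `drawCircle()`), mirroring the `Rectangle` class shown later, might look like this:

```python
import matplotlib.pyplot as plt

# Sketch of the Circle class these cells assume
class Circle(object):

    # Constructor
    def __init__(self, radius=3, color='blue'):
        self.radius = radius
        self.color = color

    # Method: increase the radius by r
    def add_radius(self, r):
        self.radius = self.radius + r
        return self.radius

    # Method: draw the circle with matplotlib
    def drawCircle(self):
        plt.gca().add_patch(plt.Circle((0, 0), radius=self.radius, fc=self.color))
        plt.axis('scaled')
        plt.show()
```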
We can increase the radius of the circle by applying the method add_radius(). Let's increase the radius by 2 and then by 5:
# Use the method to change the object attribute radius
print('Radius of object:', RedCircle.radius)
RedCircle.add_radius(2)
print('Radius of object after applying the method add_radius(2):', RedCircle.radius)
RedCircle.add_radius(5)
print('Radius of object after applying the method add_radius(5):', RedCircle.radius)
_____no_output_____
MIT
Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb
amitkrishna/IBM-DataScience
Let’s create a blue circle. As the default colour is blue, all we have to do is specify what the radius is:
# Create a blue circle with a given radius
BlueCircle = Circle(radius=100)
_____no_output_____
MIT
Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb
amitkrishna/IBM-DataScience
As before, we can access the attributes of the instance of the class by using dot notation:
# Print the object attribute radius
BlueCircle.radius

# Print the object attribute color
BlueCircle.color
_____no_output_____
MIT
Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb
amitkrishna/IBM-DataScience
We can draw the object by using the method drawCircle():
# Call the method drawCircle
BlueCircle.drawCircle()
_____no_output_____
MIT
Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb
amitkrishna/IBM-DataScience
Compare the x and y axes of the figure to those of the figure for RedCircle; they are different.

The Rectangle Class

Let's create a class Rectangle with the attributes height, width and color. We will only add a method to draw the rectangle object:
# Create a new Rectangle class for creating a rectangle object
class Rectangle(object):

    # Constructor
    def __init__(self, width=2, height=3, color='r'):
        self.height = height
        self.width = width
        self.color = color

    # Method
    def drawRectangle(self):
        plt.gca().add_patch(plt.Rectangle((0, 0), self.width, self.height, fc=self.color))
        plt.axis('scaled')
        plt.show()
_____no_output_____
MIT
Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb
amitkrishna/IBM-DataScience
Let’s create the object SkinnyBlueRectangle of type Rectangle. Its width will be 2, its height will be 10, and the color will be blue:
# Create a new object rectangle
SkinnyBlueRectangle = Rectangle(2, 10, 'blue')
_____no_output_____
MIT
Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb
amitkrishna/IBM-DataScience
As before, we can access the attributes of the instance of the class by using dot notation:
# Print the object attribute height
SkinnyBlueRectangle.height

# Print the object attribute width
SkinnyBlueRectangle.width

# Print the object attribute color
SkinnyBlueRectangle.color
_____no_output_____
MIT
Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb
amitkrishna/IBM-DataScience
We can draw the object:
# Use the drawRectangle method to draw the shape
SkinnyBlueRectangle.drawRectangle()
_____no_output_____
MIT
Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb
amitkrishna/IBM-DataScience
Let’s create the object FatYellowRectangle of type Rectangle. Its width will be 20, its height will be 5, and the color will be yellow:
# Create a new object rectangle
FatYellowRectangle = Rectangle(20, 5, 'yellow')
_____no_output_____
MIT
Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb
amitkrishna/IBM-DataScience
We can access the attributes of the instance of the class by using the dot notation:
# Print the object attribute height
FatYellowRectangle.height

# Print the object attribute width
FatYellowRectangle.width

# Print the object attribute color
FatYellowRectangle.color
_____no_output_____
MIT
Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb
amitkrishna/IBM-DataScience
We can draw the object:
# Use the drawRectangle method to draw the shape
FatYellowRectangle.drawRectangle()
_____no_output_____
MIT
Python for AI and DataScience/PY0101EN-3-4-Classes.ipynb
amitkrishna/IBM-DataScience
Expenses - Payment Authorizations of the Government of the State of Paraíba, from January/2021 to June/2021
# Install packages
!pip install pandas
!pip install PyMySQL
!pip install SQLAlchemy

import pandas as pd

# Load the CSVs into pandas data frames
df1 = pd.read_csv('../data/pagamento_exercicio_2021_mes_1.csv', encoding='ISO-8859-1', sep=';')
df2 = pd.read_csv('../data/pagamento_exercicio_2021_mes_2.csv', encoding='ISO-8859-1', sep=';')
df3 = pd.read_csv('../data/pagamento_exercicio_2021_mes_3.csv', encoding='ISO-8859-1', sep=';')
df4 = pd.read_csv('../data/pagamento_exercicio_2021_mes_4.csv', encoding='ISO-8859-1', sep=';')
df5 = pd.read_csv('../data/pagamento_exercicio_2021_mes_5.csv', encoding='ISO-8859-1', sep=';')
df6 = pd.read_csv('../data/pagamento_exercicio_2021_mes_6.csv', encoding='ISO-8859-1', sep=';')

# Concatenate all the dataframes
df = pd.concat([df1, df2, df3, df4, df5, df6])
_____no_output_____
MIT
notebooks/01-exploracao-dados.ipynb
andersonnrc/projeto-bootcamp-carrefour-analise-dados
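As a design note, the six repetitive `read_csv` calls above could also be written as a loop over the month number; a small sketch assuming the same file layout and read options:

```python
import pandas as pd

# Load the six monthly CSVs (January to June 2021) and concatenate them
frames = [
    pd.read_csv(f'../data/pagamento_exercicio_2021_mes_{month}.csv',
                encoding='ISO-8859-1', sep=';')
    for month in range(1, 7)
]
df = pd.concat(frames, ignore_index=True)
```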
Performing analyses and transformations
# Show the columns
df.columns

# Show the number of rows and columns
df.shape

# Show the column types
df.dtypes

# Convert the column DATA_PAGAMENTO to datetime
# Convert the columns EXERCICIO, CODIGO_UNIDADE_GESTORA, NUMERO_EMPENHO, NUMERO_AUTORIZACAO_PAGAMENTO to object
df["DATA_PAGAMENTO"] = pd.to_datetime(df["DATA_PAGAMENTO"])
df["EXERCICIO"] = df["EXERCICIO"].astype("object")
df["CODIGO_UNIDADE_GESTORA"] = df["CODIGO_UNIDADE_GESTORA"].astype("object")
df["NUMERO_EMPENHO"] = df["NUMERO_EMPENHO"].astype("object")
df["NUMERO_AUTORIZACAO_PAGAMENTO"] = df["NUMERO_AUTORIZACAO_PAGAMENTO"].astype("object")

# Show the column types
df.dtypes

# Check for rows with missing values
df.isnull().sum()

# Show a sample
df.sample(10)

# Create a new column that holds the payment month
df["MES_PAGAMENTO"] = df["DATA_PAGAMENTO"].dt.month

# Show a sample
df.sample(10)

# Format the output of the float column VALOR_PAGAMENTO as currency
pd.options.display.float_format = 'R${:,.2f}'.format

# Return the total paid, grouped by month and by expense type
# df.groupby([df["MES_PAGAMENTO"], "TIPO_DESPESA"])["VALOR_PAGAMENTO"].sum().reset_index()
# Another way
df.groupby(['MES_PAGAMENTO', "TIPO_DESPESA"]).agg({"VALOR_PAGAMENTO": "sum"}).reset_index()

# Return the largest amount paid to a creditor, grouped by month
# df.groupby(df["MES_PAGAMENTO"])["VALOR_PAGAMENTO"].max()
df.groupby(["MES_PAGAMENTO"]).agg({"VALOR_PAGAMENTO": "max"}).reset_index()

# Save the dataframe to a CSV file
df.to_csv('../data/pagamento_exercicio_2021_jan_a_jun_governo_pb.csv', index=False)

# Save the dataframe to the database
from sqlalchemy import create_engine
con = create_engine("mysql+pymysql://root:mysql@localhost:3307/db_governo_pb", encoding="utf-8")
df.to_sql('tb_pagamento_exercicio_2021', con, index=False, if_exists='replace', method='multi', chunksize=10000)
_____no_output_____
MIT
notebooks/01-exploracao-dados.ipynb
andersonnrc/projeto-bootcamp-carrefour-analise-dados
Charts for exploratory analysis and/or decision-making
import matplotlib.pyplot as plt
plt.style.use("seaborn")

# Bar chart with the total paid to creditors per month (January to June)
df.groupby(df['MES_PAGAMENTO'])['VALOR_PAGAMENTO'].sum().plot.bar(title='Total paid per month', color='blue')
plt.xlabel('MONTH')
plt.ylabel('AMOUNT');

# Bar chart with the maximum amount paid to a creditor per month (January to June)
df.groupby(["MES_PAGAMENTO"]).agg({"VALOR_PAGAMENTO": "max"}).plot.bar(title='Largest amount paid to a creditor per month', color='green')
plt.xlabel('MONTH')
plt.ylabel('AMOUNT');

# Line chart showing the sum of payments to creditors over the months
df.groupby(["MES_PAGAMENTO"]).agg({"VALOR_PAGAMENTO": "sum"}).plot(title='Total payments to creditors per month')
plt.xlabel('MONTH')
plt.ylabel('TOTAL PAID')
plt.legend();

# Bar chart with the amounts paid to creditors grouped by expense type
df.groupby(["TIPO_DESPESA"]).agg({"VALOR_PAGAMENTO": "sum"}).plot.bar(title='Sum of amounts paid by expense type', color='gray')
plt.xlabel('EXPENSE TYPE')
plt.ylabel('AMOUNT');
_____no_output_____
MIT
notebooks/01-exploracao-dados.ipynb
andersonnrc/projeto-bootcamp-carrefour-analise-dados
**Load the libraries:**
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras.optimizers import SGD, Adadelta, Adam, RMSprop, Adagrad, Nadam, Adamax

SEED = 2017
Using TensorFlow backend.
MIT
Chapter 2/8_Experimenting with different optimizers.ipynb
Anacoder1/Python_DeepLearning_Cookbook
**Import the dataset and extract the target variable:**
data = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv', sep=';')
y = data['quality']
X = data.drop(['quality'], axis=1)
_____no_output_____
MIT
Chapter 2/8_Experimenting with different optimizers.ipynb
Anacoder1/Python_DeepLearning_Cookbook
**Split the dataset for training, validation and testing:**
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=SEED)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.2, random_state=SEED)
_____no_output_____
MIT
Chapter 2/8_Experimenting with different optimizers.ipynb
Anacoder1/Python_DeepLearning_Cookbook
**Define a function that creates the model:**
def create_model(opt):
    model = Sequential()
    model.add(Dense(100, input_dim=X_train.shape[1], activation='relu'))
    model.add(Dense(50, activation='relu'))
    model.add(Dense(25, activation='relu'))
    model.add(Dense(10, activation='relu'))
    model.add(Dense(1, activation='linear'))
    return model
_____no_output_____
MIT
Chapter 2/8_Experimenting with different optimizers.ipynb
Anacoder1/Python_DeepLearning_Cookbook
**Create a function that defines the callbacks we will be using during training:**
def create_callbacks(opt):
    callbacks = [
        EarlyStopping(monitor='val_acc', patience=200, verbose=2),
        ModelCheckpoint('optimizers_best_' + opt + '.h5', monitor='val_acc', save_best_only=True, verbose=0)
    ]
    return callbacks
_____no_output_____
MIT
Chapter 2/8_Experimenting with different optimizers.ipynb
Anacoder1/Python_DeepLearning_Cookbook
**Create a dict of the optimizers we want to try:**
opts = dict({
    'sgd': SGD(),
    'sgd-0001': SGD(lr=0.0001, decay=0.00001),
    'adam': Adam(),
    'adadelta': Adadelta(),
    'rmsprop': RMSprop(),
    'rmsprop-0001': RMSprop(lr=0.0001),
    'nadam': Nadam(),
    'adamax': Adamax()
})
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Instructions for updating: Colocations handled automatically by placer.
MIT
Chapter 2/8_Experimenting with different optimizers.ipynb
Anacoder1/Python_DeepLearning_Cookbook
**Train our networks and store results:**
batch_size = 128
n_epochs = 1000

results = []
# Loop through the optimizers
for opt in opts:
    model = create_model(opt)
    callbacks = create_callbacks(opt)
    model.compile(loss='mse', optimizer=opts[opt], metrics=['accuracy'])
    hist = model.fit(X_train.values, y_train, batch_size=batch_size, epochs=n_epochs,
                     validation_data=(X_val.values, y_val), verbose=0, callbacks=callbacks)
    best_epoch = np.argmax(hist.history['val_acc'])
    best_acc = hist.history['val_acc'][best_epoch]
    best_model = create_model(opt)
    # Load the model weights with the highest validation accuracy
    best_model.load_weights('optimizers_best_' + opt + '.h5')
    best_model.compile(loss='mse', optimizer=opts[opt], metrics=['accuracy'])
    score = best_model.evaluate(X_test.values, y_test, verbose=0)
    results.append([opt, best_epoch, best_acc, score[1]])
Epoch 00201: early stopping Epoch 00414: early stopping Epoch 00625: early stopping Epoch 00373: early stopping Epoch 00413: early stopping Epoch 00230: early stopping Epoch 00269: early stopping Epoch 00424: early stopping
MIT
Chapter 2/8_Experimenting with different optimizers.ipynb
Anacoder1/Python_DeepLearning_Cookbook
**Compare the results:**
res = pd.DataFrame(results)
res.columns = ['optimizer', 'epochs', 'val_accuracy', 'test_accuracy']
res
_____no_output_____
MIT
Chapter 2/8_Experimenting with different optimizers.ipynb
Anacoder1/Python_DeepLearning_Cookbook
Jupyter (IPython) Advanced Features--- Outline- Keyboard shortcuts- Magic- Accessing the underlying operating system- Using different languages inside single notebook- File magic- Using Jupyter more efficiently- Profiling- Output- Automation- Extensions- 'Big Data' Analysis Sources: [IPython Tutorial](https://github.com/ipython/ipython-in-depth/blob/pycon-2019/1%20-%20Beyond%20Plain%20Python.ipynb), [Dataquest](https://www.dataquest.io/blog/advanced-jupyter-notebooks-tutorial/), and [Dataquest](https://www.dataquest.io/blog/jupyter-notebook-tips-tricks-shortcuts/), [Alex Rogozhnikov Blog](http://arogozhnikov.github.io/2016/09/10/jupyter-features.html) [Toward Data Science](https://towardsdatascience.com/how-to-effortlessly-optimize-jupyter-notebooks-e864162a06ee) --- Keyboard ShortcutsKeyboard ShortcutsAs in the classic Notebook, you can navigate the user interface through keyboard shortcuts. You can find and customize the current list of keyboard shortcuts by selecting the Advanced Settings Editor item in the Settings menu, then selecting Keyboard Shortcuts in the Settings tab. Shortcut Keys for Jupyter labWhile working with any tools, it helps if you know shortcut key to perform most frequent tasks. It increases your productivity and can be very comfortable while working. I have listed down some of the shortcuts which I use frequently while working on Jupyter Lab. Hopefully, it will be useful for others too. Also, you can check full list of shortcut by accessing the __commands tab__ in the Jupyter lab. You will find it below the Files on the left hand side.1. **ESC** takes users into command mode view while **ENTER** takes users into cell mode view.2. **A** inserts a cell above the currently selected cell. Before using this, make sure that you are in command mode (by pressing ESC).3. **B** inserts a cell below the currently selected cell. Before using this make sure that you are in command mode (by pressing ESC).4. **D + D** = Pressing D two times in a quick succession in command mode deletes the currently selected cell. 5. Jupyter Lab gives you an option to change your cell into Code cell, Markdown cell or Raw Cell. You can use **M** to change current cell to a markdown cell, **Y** to change it to a code cell and  **R** to change it to a raw cell.6. ****CTRL + B**** = Jupyter lab has two columns design. One column is for launcher or code blocks and another column is for file view etc. To increase workspace while writing code, we can close it.  **CTRL + B** is the shortcut for toggling the file view column in the Jupyter lab.7. **SHIFT + M** = It merges multiple selected cells into one cell. 8. **CTRL + SHIFT + –** = It splits the current cell into two cells from where your cursor is.9. **SHIFT+J** or **SHIFT + DOWN** = It selects the next cell in a downward direction.  It will help in making multiple selections of cells.10. **SHIFT + K** or **SHIFT + UP** = It selects the next cell in an upwards direction. It will help in making multiple selections of cells.11. **CTRL +** / = It helps you in either commenting or uncommenting any line in the Jupyter lab. For this to work, you don’t even need to select the whole line. It will comment or uncomment line where your cursor is. If you want to do it for more that one line then you will need to first select all the line and then use this shortcut.A PDF!!!- https://blog.ja-ke.tech/2019/01/20/jupyterlab-shortcuts.html- https://github.com/Jakeler/jupyter-shortcuts Magics--- Magics are turning simple python into *magical python*. 
Magics are the key to the power of IPython. Magic functions are prefixed by % or %%, and typically take their arguments without parentheses, quotes or even commas for convenience. Line magics take a single % and cell magics are prefixed with two %%. What is magic? The following cell prints information about IPython's 'magic' functions.
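To make the line/cell distinction concrete, here is a small illustrative pair using `%timeit`/`%%timeit`, which ship with IPython (any statement could be substituted):

```python
# Line magic: a single % prefix, applies to the one statement on its line
%timeit sum(range(1000))
```

```python
%%timeit
# Cell magic: a %% prefix on the first line, applies to the entire cell
total = 0
for i in range(1000):
    total += i
```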
%magic
_____no_output_____
MIT
notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb
burkesquires/jupyter_training_2020
List available python magics
%lsmagic
_____no_output_____
MIT
notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb
burkesquires/jupyter_training_2020
%env

You can manage environment variables of your notebook without restarting the Jupyter server process. Some libraries (like theano) use environment variables to control behavior, and %env is the most convenient way to set them.
# %env - without arguments lists environmental variables
%env OMP_NUM_THREADS=4
_____no_output_____
MIT
notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb
burkesquires/jupyter_training_2020
Accessing the underlying operating system
---
Executing shell commands

You can call any shell command. This is particularly useful for managing your virtual environment.
!pip install numpy
!pip list | grep Theano
_____no_output_____
MIT
notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb
burkesquires/jupyter_training_2020
Adding packages can also be done using %conda install numpy or %pip install numpy, which will attempt to install packages in the current environment.
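For reference, these installation magics are used directly in a code cell; a minimal illustration (the package name is just an example):

```python
# Installs into the environment backing the current kernel
%pip install numpy
# or, for conda-managed environments:
# %conda install numpy
```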
!pwd
%pwd
pwd

files = !ls .
print("files in notebooks directory:")
print(files)

!echo $files
!echo {files[0].upper()}
2-1-JUPYTER-ECOSYSTEM.IPYNB
MIT
notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb
burkesquires/jupyter_training_2020
Note that all this is available even in multiline blocks:
import os

for i, f in enumerate(files):
    if f.endswith('ipynb'):
        !echo {"%02d" % i} - "{os.path.splitext(f)[0]}"
    else:
        print('--')
00 - 2-1-Jupyter-ecosystem -- 02 - 2-10-jupyter-code-script-of-scripts 03 - 2-11-Advanced-jupyter 04 - 2-2-jupyter-get-in-and-out 05 - 2-3-jupyter-notebook-basics 06 - 2-4-jupyter-markdown 07 - 2-5-jupyter-code-python 08 - 2-6-jupyter-code-r 09 - 2-7-jupyter-command-line 10 - 2-8-jupyter-magics -- 12 - 2-Jupyter-help 13 - Advanced_jupyter 14 - big-data-analysis-jupyter -- -- 17 - jupyter-advanced 18 - matplotlib-anatomy --
MIT
notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb
burkesquires/jupyter_training_2020
I could get the same list with a bash command, because magics and bash calls return python variables:
names = !ls ../images/ml_demonstrations/*.png
names[:5]
_____no_output_____
MIT
notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb
burkesquires/jupyter_training_2020
Suppress output of the last line

Sometimes the output isn't needed, so we can either put a `pass` instruction on a new line or a semicolon at the end of the line (the `pass` variant is sketched below; install matplotlib first if needed with %conda install matplotlib).
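The next cell demonstrates the semicolon form; the `pass`-on-a-new-line alternative mentioned above would look like this (assuming matplotlib and numpy are installed):

```python
from matplotlib import pyplot as plt
import numpy

# Because the last line is a bare `pass`, no return value is echoed below the cell
plt.hist(numpy.linspace(0, 1, 1000)**1.5)
pass
```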
%matplotlib inline
from matplotlib import pyplot as plt
import numpy

# if you don't put semicolon at the end, you'll have output of function printed
plt.hist(numpy.linspace(0, 1, 1000)**1.5);
_____no_output_____
MIT
notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb
burkesquires/jupyter_training_2020
Using different languages inside a single notebook
---
If you miss other languages that much, using other computational kernels:
- %%python2
- %%python3
- %%ruby
- %%perl
- %%bash
- %%R

is possible, but obviously you'll need to set up the corresponding kernel first.
# %%ruby
# puts 'Hi, this is ruby.'

%%bash
echo 'Hi, this is bash.'
Hi, this is bash.
MIT
notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb
burkesquires/jupyter_training_2020
Running R code in a Jupyter notebook

Installing the R kernel

Easy option: installing the R kernel using Anaconda. If you used Anaconda to set up your environment, getting R working is extremely easy. Just run the below in your terminal:
# %conda install -c r r-essentials
_____no_output_____
MIT
notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb
burkesquires/jupyter_training_2020
Running R and Python in the same notebook. The best solution for this is to install rpy2 (it requires a working version of R as well), which can be easily done with pip:
%pip install rpy2
Collecting rpy2 Downloading rpy2-3.3.6.tar.gz (179 kB)  |████████████████████████████████| 179 kB 465 kB/s eta 0:00:01  ERROR: Command errored out with exit status 1: command: /Users/squiresrb/opt/anaconda3/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/d7/8vn6rd1d6f37gtgy_13h3b95mx3fmv/T/pip-install-79ynf4cx/rpy2/setup.py'"'"'; __file__='"'"'/private/var/folders/d7/8vn6rd1d6f37gtgy_13h3b95mx3fmv/T/pip-install-79ynf4cx/rpy2/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /private/var/folders/d7/8vn6rd1d6f37gtgy_13h3b95mx3fmv/T/pip-pip-egg-info-8nhbgt8z cwd: /private/var/folders/d7/8vn6rd1d6f37gtgy_13h3b95mx3fmv/T/pip-install-79ynf4cx/rpy2/ Complete output (2 lines): cffi mode: CFFI_MODE.ANY Error: rpy2 in API mode cannot be built without R in the PATH or R_HOME defined. Correct this or force ABI mode-only by defining the environment variable RPY2_CFFI_MODE=ABI ---------------------------------------- ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. [?25hNote: you may need to restart the kernel to use updated packages.
MIT
notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb
burkesquires/jupyter_training_2020
You can then use the two languages together, and even pass variables in between:
%load_ext rpy2.ipython
%R require(ggplot2)

import pandas as pd
df = pd.DataFrame({
    'Letter': ['a', 'a', 'a', 'b', 'b', 'b', 'c', 'c', 'c'],
    'X': [4, 3, 5, 2, 1, 7, 7, 5, 9],
    'Y': [0, 4, 3, 6, 7, 10, 11, 9, 13],
    'Z': [1, 2, 3, 1, 2, 3, 1, 2, 3]
})

%%R -i df
ggplot(data = df) + geom_point(aes(x = X, y = Y, color = Letter, size = Z))
_____no_output_____
MIT
notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb
burkesquires/jupyter_training_2020
Writing functions in cython (or fortran)

Sometimes the speed of numpy is not enough and I need to write some fast code. In principle, you can compile the function into a dynamic library and write python wrappers... But it is much better when this boring part is done for you, right?

You can write functions in cython or fortran and use them directly from python code.

First you'll need to install:
```
%pip install cython
```
%pip install cython
%load_ext Cython

%%cython
def multiply_by_2(float x):
    return 2.0 * x

multiply_by_2(23.)
_____no_output_____
MIT
notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb
burkesquires/jupyter_training_2020
I should also mention that there are different JIT (just-in-time) compilation systems which can speed up your python code. More examples in [my notebook](http://arogozhnikov.github.io/2015/09/08/SpeedBenchmarks.html). For more information see the IPython help at: [Cython](https://github.com/ipython/ipython-in-depth/blob/pycon-2019/6%20-%20Cross-Language-Integration.ipynb)

File magic

%%writefile exports the contents of a cell to a file.
%%writefile?
_____no_output_____
MIT
notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb
burkesquires/jupyter_training_2020
`%pycat` will output in a pop-up window:
```
Show a syntax-highlighted file through a pager.

This magic is similar to the cat utility, but it will assume the file
to be Python source and will show it with syntax highlighting.

This magic command can either take a local filename, an url,
an history range (see %history) or a macro as argument:

%pycat myscript.py
%pycat 7-27
%pycat myMacro
%pycat http://www.example.com/myscript.py
```

%load loads code directly into a cell. You can pick a local file or a file on the web. After uncommenting the code below and executing, it will replace the content of the cell with the contents of the file.
# %load https://matplotlib.org/_downloads/f7171577b84787f4b4d987b663486a94/anatomy.py
_____no_output_____
MIT
notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb
burkesquires/jupyter_training_2020
%run to execute python code

%run can execute python code from .py files — this is a well-documented behavior. But it can also execute other jupyter notebooks! Sometimes this is quite useful.

NB: %run is not the same as importing a python module.
# this will execute all the code cells from different notebooks
%run ./matplotlib-anatomy.ipynb
_____no_output_____
MIT
notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb
burkesquires/jupyter_training_2020
Using Jupyter more efficiently
---
Store magic - %store: lazily passing data between notebooks

%store lets you store a variable (or macro) and use it across all of your Jupyter Notebooks.
data = 'this is the string I want to pass to different notebook'
%store data
del data  # deleted variable

# in second notebook I will use:
%store -r data
print(data)
_____no_output_____
MIT
notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb
burkesquires/jupyter_training_2020
%who: analyze variables of global scope
%whos

# print names of string variables
%who str
_____no_output_____
MIT
notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb
burkesquires/jupyter_training_2020
Multiple cursors

Recently Jupyter gained support for multiple cursors (in a single cell), just like Sublime or IntelliJ! Use __Alt + mouse selection__ for multiline selection and __Ctrl + mouse clicks__ for multiple cursors. Gif taken from http://swanintelligence.com/multi-cursor-in-jupyter.html

Timing

When you need to measure time spent or find the bottleneck in the code, ipython comes to the rescue. For example, `%%time` at the top of a cell reports how long the whole cell took (such as a cell that sleeps for two seconds); see the example below.
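The `%%time` example referred to above is simply:

```python
%%time
import time
time.sleep(2)  # sleep for two seconds; %%time reports the wall time of the whole cell
```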
# measure small code snippets with timeit !
import numpy
%timeit numpy.random.normal(size=100)

%%writefile pythoncode.py

import numpy
def append_if_not_exists(arr, x):
    if x not in arr:
        arr.append(x)

def some_useless_slow_function():
    arr = list()
    for i in range(10000):
        x = numpy.random.randint(0, 10000)
        append_if_not_exists(arr, x)

# shows highlighted source of the newly-created file
%pycat pythoncode.py

from pythoncode import some_useless_slow_function, append_if_not_exists
_____no_output_____
MIT
notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb
burkesquires/jupyter_training_2020
Hiding code or output

- Click on the blue vertical bar or line to the left to collapse code or output.

Commenting and uncommenting a block of code

You might want to add new lines of code and comment out the old lines while you’re working. This is great if you’re improving the performance of your code or trying to debug it.
- First, select all the lines you want to comment out.
- Next hit cmd + / to comment out the highlighted code!

Pretty print all cell outputs

Normally only the last output in the cell will be printed. For everything else, you have to manually add print(), which is fine but not super convenient. You can change that by adding this at the top of the notebook:
from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all"
_____no_output_____
MIT
notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb
burkesquires/jupyter_training_2020
Profiling: %prun, %lprun, %mprun
---
See a much longer explanation of profiling and timing in Jake Vanderplas' Python Data Science Handbook: https://jakevdp.github.io/PythonDataScienceHandbook/01.07-timing-and-profiling.html
# shows how much time the program spent in each function
%prun some_useless_slow_function()
_____no_output_____
MIT
notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb
burkesquires/jupyter_training_2020
Example of output:```26338 function calls in 0.713 seconds Ordered by: internal time ncalls tottime percall cumtime percall filename:lineno(function) 10000 0.684 0.000 0.685 0.000 pythoncode.py:3(append_if_not_exists) 10000 0.014 0.000 0.014 0.000 {method 'randint' of 'mtrand.RandomState' objects} 1 0.011 0.011 0.713 0.713 pythoncode.py:7(some_useless_slow_function) 1 0.003 0.003 0.003 0.003 {range} 6334 0.001 0.000 0.001 0.000 {method 'append' of 'list' objects} 1 0.000 0.000 0.713 0.713 :1() 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}```
# To profile memory, you can install and run %mprun
# %pip install memory_profiler
# %pip install line_profiler

# %load_ext memory_profiler

# tracking memory consumption (shown in the pop-up)
# %mprun -f append_if_not_exists some_useless_slow_function()
_____no_output_____
MIT
notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb
burkesquires/jupyter_training_2020
Example of output:
```
Line    Mem usage    Increment   Line Contents
================================================
   3     20.6 MiB      0.0 MiB   def append_if_not_exists(arr, x):
   4     20.6 MiB      0.0 MiB       if x not in arr:
   5     20.6 MiB      0.0 MiB           arr.append(x)
```

**%lprun** is line profiling, but it seems to be broken for the latest IPython release, so we'll manage without magic this time:

```python
import line_profiler
lp = line_profiler.LineProfiler()
lp.add_function(some_useless_slow_function)
lp.runctx('some_useless_slow_function()', locals=locals(), globals=globals())
lp.print_stats()
```

Debugging with %debug

Jupyter has its own interface for [ipdb](https://docs.python.org/2/library/pdb.html). It makes it possible to go inside the function and investigate what happens there. This is not PyCharm and takes some time to adapt to, but when debugging on a server this can be the only option (or use pdb from the terminal).
#%%debug filename:line_number_for_breakpoint # Here some code that fails. This will activate interactive context for debugging
_____no_output_____
MIT
notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb
burkesquires/jupyter_training_2020
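A minimal sketch of the typical `%debug` workflow (the failing function is hypothetical, purely for illustration):

```python
import numpy

def fails():
    # hypothetical bug: index past the end of the array
    a = numpy.arange(5)
    return a[10]

fails()  # raises IndexError

# In the next cell, drop into the interactive debugger at the point of failure:
# %debug
```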
A slightly easier option is `%pdb`, which activates the debugger when an exception is raised:
# %pdb # def pick_and_take(): # picked = numpy.random.randint(0, 1000) # raise NotImplementedError() # pick_and_take()
_____no_output_____
MIT
notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb
burkesquires/jupyter_training_2020
Output
---
[RISE](https://github.com/damianavila/RISE): presentations with notebooks

This extension by Damian Avila makes it possible to show notebooks as presentations. Example of such a presentation: http://bollwyvl.github.io/live_reveal//7 It is very useful when you teach others, e.g. how to use some library.

Jupyter output system

Notebooks are displayed as HTML and the cell output can be HTML, so you can return virtually anything: video/audio/images. In this example I scan the folder with images in my repository and show the first five of them:
import os
from IPython.display import display, Image

names = [f for f in os.listdir('../images/') if f.endswith('.png')]
for name in names[:5]:
    display(Image('../images/' + name, width=300))
_____no_output_____
MIT
notebooks/2-Jupyter/jupyter-advanced-Copy1.ipynb
burkesquires/jupyter_training_2020
Boolean Operators
a = 10
b = 9
c = 8

print(10 > 9)
print(10 == 9)
print(10 < 9)

print(a)
print(a > b)

c = print(a > b)
c

## true
print(bool("Hello"))
print(bool(15))
print(bool(True))
print(bool(1))

## false
print(bool(False))
print(bool(0))
print(bool(None))
print(bool([]))

def myFunction():
    return True
print(myFunction())

def myFunction():
    return True
if myFunction():
    print("Yes/True")
else:
    print("No/False")

print(10 > 9)

a = 6  #0000 0110
b = 7  #0000 0111
print(a == b)
print(a != b)
True False True
Apache-2.0
Operations_and_Expressions_in_Python.ipynb
michaelll22/CPEN-21A-ECE-2-2
Python Operators
print(10 + 5)
print(10 - 5)
print(10 * 5)
print(10 / 5)
print(10 % 5)
print(10 // 3)
print(10 ** 2)
15 5 50 2.0 0 3 100
Apache-2.0
Operations_and_Expressions_in_Python.ipynb
michaelll22/CPEN-21A-ECE-2-2
Bitwise Operators
a = 60  #0011 1100
b = 13
print(a ^ b)
print(~a)
print(a << 2)
print(a >> 2)  #0000 1111
49 -61 240 15
Apache-2.0
Operations_and_Expressions_in_Python.ipynb
michaelll22/CPEN-21A-ECE-2-2
Assignment Operator
x = 2
x += 3  # Same as x = x + 3
print(x)
x
5
Apache-2.0
Operations_and_Expressions_in_Python.ipynb
michaelll22/CPEN-21A-ECE-2-2
Logical Operators
a = 5
b = 6
print(a > b and a == a)
print(a < b or b == a)
False True
Apache-2.0
Operations_and_Expressions_in_Python.ipynb
michaelll22/CPEN-21A-ECE-2-2
Identity Operator
print(a is b)
print(a is not b)
False True
Apache-2.0
Operations_and_Expressions_in_Python.ipynb
michaelll22/CPEN-21A-ECE-2-2
I estimate 10 s per game. With 100 starting positions and 100 secondary starting positions, that gives 10,000 openings. Using 4 threads, and symmetries that produce 4x the data, if I want 12 data points per opening the total time would be:
estimated_seconds = 10000 * 12 * 10 / (4 * 4)
estimated_hours = estimated_seconds / 3600
print(estimated_hours)
20.833333333333332
MIT
Projects/3_Adversarial Search/scratchpad/n03_book_creation.ipynb
mtasende/artificial-intelligence
The plan is as follows:
- Create a book (or load a previously saved one).
- For each starting action for player 1 (100) and each starting action for player 2 (100), run 3 experiments (DETERMINISTIC BOOK FILLING).
- Run an epsilon-greedy algorithm to do a STOCHASTIC BOOK FILLING (using the opening book up to its depth 1-epsilon of the time). Reduce epsilon exponentially to zero; a small sketch of that schedule follows.
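A small illustrative sketch of the epsilon-greedy schedule described above, using the `book -> {(state, action): counts}` structure noted in the code below; the function and constant names here are hypothetical, not from the project code:

```python
import random

def choose_opening_action(state, book, legal_actions, epsilon):
    """Epsilon-greedy move selection: explore with probability epsilon,
    otherwise pick the action with the best win count in the opening book."""
    if random.random() < epsilon:
        return random.choice(legal_actions)
    return max(legal_actions, key=lambda a: book.get((state, a), 0))

# Exponentially decaying exploration rate
epsilon, decay = 1.0, 0.99
for episode in range(1000):
    # ... play one game, using choose_opening_action() for the opening moves ...
    epsilon *= decay
```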
book = b.load_latest_book(depth=4) type(book) sum(abs(value) for value in book.values()) #book # book -> {(state, action): counts} agent_names = ('CustomPlayer1', 'CustomPlayer2') agent1 = isolation.Agent(custom.CustomPlayer, agent_names[0]) agent2 = isolation.Agent(custom.CustomPlayer, agent_names[1]) agents = (agent1, agent2) state = isolation.isolation.Isolation() time_limit = 150 match_id = 0 tic = time.time() winner, game_history, match_id = isolation.play((agents, state, time_limit, match_id)) toc = time.time() print('Elapsed time: {}'.format((toc-tic))) root = isolation.isolation.Isolation() opening_states = list(b.get_full_states(root, depth=2)) print(type(opening_states)) print(len(opening_states)) len([s for s in opening_states if s.ply_count==1]) [s for s in opening_states if s.ply_count==0] 99*99 opening_states[0]
_____no_output_____
MIT
Projects/3_Adversarial Search/scratchpad/n03_book_creation.ipynb
mtasende/artificial-intelligence
Let's generate the corresponding matches
# Constant parameteres time_limit = 150 depth = 4 full_search_depth = 2 matches_per_opening = 3 # Create the agents that will play agent_names = ('CustomPlayer1', 'CustomPlayer2') agent1 = isolation.Agent(custom.CustomPlayer, agent_names[0]) agent2 = isolation.Agent(custom.CustomPlayer, agent_names[1]) agents = (agent1, agent2) # Get the initial states root = isolation.isolation.Isolation() opening_states = list(b.get_full_states(root, depth=full_search_depth)) # Generate the matches matches = [(agents, state, time_limit, match_id) for match_id, state in enumerate(opening_states)] matches = matches * 3 print('Generated {} matches.'.format(len(matches))) # Create or load the book book = b.load_latest_book(depth=depth) matches[0] def active_player(state): return state.ply_count % 2 active_player(matches[0][1]) batch_size = 10 x = list(range(10,45)) batches = [x[i*batch_size:(i+1)*batch_size] for i in range(len(x) // batch_size + (len(x) % batch_size != 0))] batches l = [1,2,3,445] isinstance(l[3], int) l.insert(0,45) l from multiprocessing.pool import ThreadPool as Pool num_processes = 1 batch_size = 10 # Small test for debugging matches = matches[:10] results = [] pool = Pool(num_processes) tic = time.time() for result in pool.imap_unordered(isolation.play, matches): results.append(result) winner, game_history, match_id = result print('Results for match {}: {} wins.'.format(match_id, winner.name)) _, state, _, _ = matches[match_id] if state.locs[1] is not None: game_history.insert(0,state.locs[1]) if state.locs[0] is not None: game_history.insert(0,state.locs[0]) root = isolation.isolation.Isolation() print(game_history) b.process_game_history(root, game_history, book, agent_names.index(winner.name), active_player=state.ply_count % 2, depth=depth) toc = time.time() print('Elapsed time {} seconds.'.format((toc-tic))) sum(abs(value) for value in book.values()) seconds = 29403 * 37 / 10 print('{} seconds'.format(seconds)) print('{} hours'.format(seconds/3600)) game_history
_____no_output_____
MIT
Projects/3_Adversarial Search/scratchpad/n03_book_creation.ipynb
mtasende/artificial-intelligence
Let's add the symmetry conditions to the game processing
s_a = list(book.keys())[0] s_a W, H = 11, 9 def h_symmetry(loc): if loc is None: return None row = loc // (W + 2) center = W + (row - 1) * (W + 2) + (W + 2) // 2 + 1 if row != 0 else W // 2 return 2 * center - loc h_symmetry(28) h_symmetry(1) center = (H // 2) * (W + 2) + W // 2 center def c_symmetry(loc): if loc is None: return None center = (H // 2) * (W + 2) + W // 2 return 2 * center - loc c_symmetry(81) c_symmetry(67) def v_symmetry(loc): if loc is None: return None col = loc % (W + 2) center = (H // 2) * (W + 2) + col return 2 * center - loc v_symmetry(2) v_symmetry(28) v_symmetry(48) v_symmetry(86) symmetric = b.sym_sa(s_a, loc_sym=h_symmetry, cardinal_sym=b.cardinal_sym_h) symmetric print(isolation.DebugState.from_state(s_a[0])) print(isolation.DebugState.from_state(symmetric[0])) def process_game_history(state, game_history, book, winner_id, active_player=0, depth=4): """ Given an initial state, and a list of actions, this function iterates through the resulting states of the actions and updates count of wins in the state/action book""" OPENING_MOVES = 2 game_value = 2 * (active_player == winner_id) - 1 curr_state = state # It is a named tuple, so I think it is immutable. No need to copy. for num_action, action in enumerate(game_history): if (curr_state, action) in book.keys(): book[(curr_state, action)] += game_value if curr_state.ply_count <= OPENING_MOVES: book[b.sym_sa((curr_state, action), loc_sym=h_symmetry, cardinal_sym=b.cardinal_sym_h)] += game_value book[b.sym_sa((curr_state, action), loc_sym=v_symmetry, cardinal_sym=b.cardinal_sym_v)] += game_value book[b.sym_sa((curr_state, action), loc_sym=c_symmetry, cardinal_sym=b.cardinal_sym_c)] += game_value curr_state = curr_state.result(action) active_player = 1 - active_player game_value = 2 * (active_player == winner_id) - 1 # Break on depth equal to book if num_action >= depth - 1: break
_____no_output_____
MIT
Projects/3_Adversarial Search/scratchpad/n03_book_creation.ipynb
mtasende/artificial-intelligence
Statistical Downscaling and Bias-Adjustment

`xclim` provides tools and utilities to ease the bias-adjustment process through its `xclim.sdba` module. Almost all adjustment algorithms conform to the `train` - `adjust` scheme, formalized within `TrainAdjust` classes. Given a reference time series (ref), historical simulations (hist) and simulations to be adjusted (sim), any bias-adjustment method would be applied by first estimating the adjustment factors between the historical simulation and the observation series, and then applying these factors to `sim`, which could be a future simulation.

This notebook presents examples, while a bit more info and the API are given on [this page](../sdba.rst).

A very simple "Quantile Mapping" approach is available through the "Empirical Quantile Mapping" object. The object is created through the `.train` method of the class, and the simulation is adjusted with `.adjust`.
from __future__ import annotations import cftime import matplotlib.pyplot as plt import numpy as np import xarray as xr %matplotlib inline plt.style.use("seaborn") plt.rcParams["figure.figsize"] = (11, 5) # Create toy data to explore bias adjustment, here fake temperature timeseries t = xr.cftime_range("2000-01-01", "2030-12-31", freq="D", calendar="noleap") ref = xr.DataArray( ( -20 * np.cos(2 * np.pi * t.dayofyear / 365) + 2 * np.random.random_sample((t.size,)) + 273.15 + 0.1 * (t - t[0]).days / 365 ), # "warming" of 1K per decade, dims=("time",), coords={"time": t}, attrs={"units": "K"}, ) sim = xr.DataArray( ( -18 * np.cos(2 * np.pi * t.dayofyear / 365) + 2 * np.random.random_sample((t.size,)) + 273.15 + 0.11 * (t - t[0]).days / 365 ), # "warming" of 1.1K per decade dims=("time",), coords={"time": t}, attrs={"units": "K"}, ) ref = ref.sel(time=slice(None, "2015-01-01")) hist = sim.sel(time=slice(None, "2015-01-01")) ref.plot(label="Reference") sim.plot(label="Model") plt.legend() from xclim import sdba QM = sdba.EmpiricalQuantileMapping.train( ref, hist, nquantiles=15, group="time", kind="+" ) scen = QM.adjust(sim, extrapolation="constant", interp="nearest") ref.groupby("time.dayofyear").mean().plot(label="Reference") hist.groupby("time.dayofyear").mean().plot(label="Model - biased") scen.sel(time=slice("2000", "2015")).groupby("time.dayofyear").mean().plot( label="Model - adjusted - 2000-15", linestyle="--" ) scen.sel(time=slice("2015", "2030")).groupby("time.dayofyear").mean().plot( label="Model - adjusted - 2015-30", linestyle="--" ) plt.legend()
_____no_output_____
Apache-2.0
docs/notebooks/sdba.ipynb
Ouranosinc/dcvar
In the previous example, a simple Quantile Mapping algorithm was used with 15 quantiles and one group of values. The model performs well, but our toy data is also quite smooth and well-behaved, so this is not surprising. A more complex example could have a bias distribution varying strongly across months. To perform the adjustment with different factors for each month, one can pass `group='time.month'`. Moreover, to reduce the risk of a sharp change in the adjustment at the interface of the months, `interp='linear'` can be passed to `adjust` and the adjustment factors will be interpolated linearly. For example, the factors for the 1st of May will be the average of those for April and those for May.
QM_mo = sdba.EmpiricalQuantileMapping.train( ref, hist, nquantiles=15, group="time.month", kind="+" ) scen = QM_mo.adjust(sim, extrapolation="constant", interp="linear") ref.groupby("time.dayofyear").mean().plot(label="Reference") hist.groupby("time.dayofyear").mean().plot(label="Model - biased") scen.sel(time=slice("2000", "2015")).groupby("time.dayofyear").mean().plot( label="Model - adjusted - 2000-15", linestyle="--" ) scen.sel(time=slice("2015", "2030")).groupby("time.dayofyear").mean().plot( label="Model - adjusted - 2015-30", linestyle="--" ) plt.legend()
_____no_output_____
Apache-2.0
docs/notebooks/sdba.ipynb
Ouranosinc/dcvar
The training data (here the adjustment factors) is available for inspection in the `ds` attribute of the adjustment object.
QM_mo.ds

QM_mo.ds.af.plot()
_____no_output_____
Apache-2.0
docs/notebooks/sdba.ipynb
Ouranosinc/dcvar
GroupingFor basic time period grouping (months, day of year, season), passing a string to the methods needing it is sufficient. Most methods acting on grouped data also accept a `window` int argument to pad the groups with data from adjacent ones. Units of `window` are the sampling frequency of the main grouping dimension (usually `time`). For more complex grouping, or simply for clarity, one can pass a `xclim.sdba.base.Grouper` directly.Example here with another, simpler, adjustment method. Here we want `sim` to be scaled so that its mean fits the one of `ref`. Scaling factors are to be computed separately for each day of the year, but including 15 days on either side of the day. This means that the factor for the 1st of May is computed including all values from the 16th of April to the 15th of May (of all years).
group = sdba.Grouper("time.dayofyear", window=31) QM_doy = sdba.Scaling.train(ref, hist, group=group, kind="+") scen = QM_doy.adjust(sim) ref.groupby("time.dayofyear").mean().plot(label="Reference") hist.groupby("time.dayofyear").mean().plot(label="Model - biased") scen.sel(time=slice("2000", "2015")).groupby("time.dayofyear").mean().plot( label="Model - adjusted - 2000-15", linestyle="--" ) scen.sel(time=slice("2015", "2030")).groupby("time.dayofyear").mean().plot( label="Model - adjusted - 2015-30", linestyle="--" ) plt.legend() sim QM_doy.ds.af.plot()
_____no_output_____
Apache-2.0
docs/notebooks/sdba.ipynb
Ouranosinc/dcvar
Modular approach

The `sdba` module adopts a modular approach instead of implementing published and named methods directly. A generic bias adjustment process is laid out as follows:

- preprocessing on `ref`, `hist` and `sim` (using methods in `xclim.sdba.processing` or `xclim.sdba.detrending`)
- creating and training the adjustment object `Adj = Adjustment.train(obs, hist, **kwargs)` (from `xclim.sdba.adjustment`)
- adjustment `scen = Adj.adjust(sim, **kwargs)`
- post-processing on `scen` (for example: re-trending)

The train-adjust approach allows inspection of the trained adjustment object. The training information is stored in the underlying `Adj.ds` dataset and often has an `af` variable with the adjustment factors. Its layout and the other available variables vary between the different algorithms; refer to their part of the API docs.

For heavy processing, this separation allows the computation and writing to disk of the training dataset before performing the adjustment(s). See the [advanced notebook](sdba-advanced.ipynb).

Parameters needed by both the training and the adjustment are saved to the `Adj.ds` dataset as an `adj_params` attribute. Other parameters, those only needed by the adjustment, are passed in the `adjust` call and written to the history attribute in the output scenario dataarray.

First example: pr and frequency adaptation

The next example generates fake precipitation data and adjusts the `sim` timeseries, but also adds a step where the dry-day frequency of `hist` is adapted so that it fits the one of `ref`. This ensures well-behaved adjustment factors for the smaller quantiles. Note also that we are passing `kind='*'` to use the multiplicative mode. Adjustment factors will be multiplied/divided instead of being added/subtracted.
vals = np.random.randint(0, 1000, size=(t.size,)) / 100 vals_ref = (4 ** np.where(vals < 9, vals / 100, vals)) / 3e6 vals_sim = ( (1 + 0.1 * np.random.random_sample((t.size,))) * (4 ** np.where(vals < 9.5, vals / 100, vals)) / 3e6 ) pr_ref = xr.DataArray( vals_ref, coords={"time": t}, dims=("time",), attrs={"units": "mm/day"} ) pr_ref = pr_ref.sel(time=slice("2000", "2015")) pr_sim = xr.DataArray( vals_sim, coords={"time": t}, dims=("time",), attrs={"units": "mm/day"} ) pr_hist = pr_sim.sel(time=slice("2000", "2015")) pr_ref.plot(alpha=0.9, label="Reference") pr_sim.plot(alpha=0.7, label="Model") plt.legend() # 1st try without adapt_freq QM = sdba.EmpiricalQuantileMapping.train( pr_ref, pr_hist, nquantiles=15, kind="*", group="time" ) scen = QM.adjust(pr_sim) pr_ref.sel(time="2010").plot(alpha=0.9, label="Reference") pr_hist.sel(time="2010").plot(alpha=0.7, label="Model - biased") scen.sel(time="2010").plot(alpha=0.6, label="Model - adjusted") plt.legend()
_____no_output_____
Apache-2.0
docs/notebooks/sdba.ipynb
Ouranosinc/dcvar
In the figure above, `scen` has small peaks where `sim` is 0. This problem originates from the fact that there are more "dry days" (days with almost no precipitation) in `hist` than in `ref`. The next example works around the problem using frequency-adaptation, as described in [Themeßl et al. (2010)](https://doi.org/10.1007/s10584-011-0224-4).
# 2nd try with adapt_freq sim_ad, pth, dP0 = sdba.processing.adapt_freq( pr_ref, pr_sim, thresh="0.05 mm d-1", group="time" ) QM_ad = sdba.EmpiricalQuantileMapping.train( pr_ref, sim_ad, nquantiles=15, kind="*", group="time" ) scen_ad = QM_ad.adjust(pr_sim) pr_ref.sel(time="2010").plot(alpha=0.9, label="Reference") pr_sim.sel(time="2010").plot(alpha=0.7, label="Model - biased") scen_ad.sel(time="2010").plot(alpha=0.6, label="Model - adjusted") plt.legend()
_____no_output_____
Apache-2.0
docs/notebooks/sdba.ipynb
Ouranosinc/dcvar
Second example: tas and detrendingThe next example reuses the fake temperature timeseries generated at the beginning and applies the same QM adjustment method. However, for a better adjustment, we will scale sim to ref and then detrend the series, assuming the trend is linear. When `sim` (or `sim_scl`) is detrended, its values are now anomalies, so we need to normalize `ref` and `hist` so we can compare similar values.This process is detailed here to show how the sdba module should be used in custom adjustment processes, but this specific method also exists as `sdba.DetrendedQuantileMapping` and is based on [Cannon et al. 2015](https://doi.org/10.1175/JCLI-D-14-00754.1). However, `DetrendedQuantileMapping` normalizes over a `time.dayofyear` group, regardless of what is passed in the `group` argument. As done here, it is anyway recommended to use `dayofyear` groups when normalizing, especially for variables with strong seasonal variations.
doy_win31 = sdba.Grouper("time.dayofyear", window=15) Sca = sdba.Scaling.train(ref, hist, group=doy_win31, kind="+") sim_scl = Sca.adjust(sim) detrender = sdba.detrending.PolyDetrend(degree=1, group="time.dayofyear", kind="+") sim_fit = detrender.fit(sim_scl) sim_detrended = sim_fit.detrend(sim_scl) ref_n, _ = sdba.processing.normalize(ref, group=doy_win31, kind="+") hist_n, _ = sdba.processing.normalize(hist, group=doy_win31, kind="+") QM = sdba.EmpiricalQuantileMapping.train( ref_n, hist_n, nquantiles=15, group="time.month", kind="+" ) scen_detrended = QM.adjust(sim_detrended, extrapolation="constant", interp="nearest") scen = sim_fit.retrend(scen_detrended) ref.groupby("time.dayofyear").mean().plot(label="Reference") sim.groupby("time.dayofyear").mean().plot(label="Model - biased") scen.sel(time=slice("2000", "2015")).groupby("time.dayofyear").mean().plot( label="Model - adjusted - 2000-15", linestyle="--" ) scen.sel(time=slice("2015", "2030")).groupby("time.dayofyear").mean().plot( label="Model - adjusted - 2015-30", linestyle="--" ) plt.legend()
_____no_output_____
Apache-2.0
docs/notebooks/sdba.ipynb
Ouranosinc/dcvar
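For comparison, the shortcut class mentioned above can be used directly. A minimal sketch assuming the same `ref`, `hist` and `sim` series and parameters mirroring the examples above (check the xclim documentation for the exact signature and recommended settings):

```python
# Train on the reference and historical series, then adjust the simulation
DQM = sdba.DetrendedQuantileMapping.train(
    ref, hist, nquantiles=15, kind="+", group="time.month"
)
scen_dqm = DQM.adjust(sim, extrapolation="constant", interp="nearest")
```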
Third example: Multi-method protocol - Hnilica et al. 2017

In [their paper of 2017](https://doi.org/10.1002/joc.4890), Hnilica, Hanel and Puš present a bias-adjustment method based on the principles of Principal Components Analysis. The idea is simple: use principal components to define coordinates on the reference and on the simulation, and then transform the simulation data from the latter to the former. Spatial correlation can thus be conserved by taking different points as the dimensions of the transform space. The method was demonstrated in the article by bias-adjusting precipitation over different drainage basins.

The same method could be used for multivariate adjustment. The principle would be the same, concatenating the different variables into a single dataset along a new dimension. An example is given in the [advanced notebook](sdba-advanced.ipynb).

Here we show how the modularity of `xclim.sdba` can be used to construct a quite complex adjustment protocol involving two adjustment methods: quantile mapping and principal components. Evidently, as this example uses only 2 years of data, it is not complete. It is meant to show how the adjustment functions and how the API can be used.
# We are using xarray's "air_temperature" dataset ds = xr.tutorial.open_dataset("air_temperature") # To get an exagerated example we select different points # here "lon" will be our dimension of two "spatially correlated" points reft = ds.air.isel(lat=21, lon=[40, 52]).drop_vars(["lon", "lat"]) simt = ds.air.isel(lat=18, lon=[17, 35]).drop_vars(["lon", "lat"]) # Principal Components Adj, no grouping and use "lon" as the space dimensions PCA = sdba.PrincipalComponents.train(reft, simt, group="time", crd_dim="lon") scen1 = PCA.adjust(simt) # QM, no grouping, 20 quantiles and additive adjustment EQM = sdba.EmpiricalQuantileMapping.train( reft, scen1, group="time", nquantiles=50, kind="+" ) scen2 = EQM.adjust(scen1) # some Analysis figures fig = plt.figure(figsize=(12, 16)) gs = plt.matplotlib.gridspec.GridSpec(3, 2, fig) axPCA = plt.subplot(gs[0, :]) axPCA.scatter(reft.isel(lon=0), reft.isel(lon=1), s=20, label="Reference") axPCA.scatter(simt.isel(lon=0), simt.isel(lon=1), s=10, label="Simulation") axPCA.scatter(scen2.isel(lon=0), scen2.isel(lon=1), s=3, label="Adjusted - PCA+EQM") axPCA.set_xlabel("Point 1") axPCA.set_ylabel("Point 2") axPCA.set_title("PC-space") axPCA.legend() refQ = reft.quantile(EQM.ds.quantiles, dim="time") simQ = simt.quantile(EQM.ds.quantiles, dim="time") scen1Q = scen1.quantile(EQM.ds.quantiles, dim="time") scen2Q = scen2.quantile(EQM.ds.quantiles, dim="time") for i in range(2): if i == 0: axQM = plt.subplot(gs[1, 0]) else: axQM = plt.subplot(gs[1, 1], sharey=axQM) axQM.plot(refQ.isel(lon=i), simQ.isel(lon=i), label="No adj") axQM.plot(refQ.isel(lon=i), scen1Q.isel(lon=i), label="PCA") axQM.plot(refQ.isel(lon=i), scen2Q.isel(lon=i), label="PCA+EQM") axQM.plot( refQ.isel(lon=i), refQ.isel(lon=i), color="k", linestyle=":", label="Ideal" ) axQM.set_title(f"QQ plot - Point {i + 1}") axQM.set_xlabel("Reference") axQM.set_xlabel("Model") axQM.legend() axT = plt.subplot(gs[2, :]) reft.isel(lon=0).plot(ax=axT, label="Reference") simt.isel(lon=0).plot(ax=axT, label="Unadjusted sim") # scen1.isel(lon=0).plot(ax=axT, label='PCA only') scen2.isel(lon=0).plot(ax=axT, label="PCA+EQM") axT.legend() axT.set_title("Timeseries - Point 1")
_____no_output_____
Apache-2.0
docs/notebooks/sdba.ipynb
Ouranosinc/dcvar
Fourth example: Multivariate bias-adjustment with multiple steps - Cannon 2018

This section replicates the "MBCn" algorithm described by [Cannon (2018)](https://doi.org/10.1007/s00382-017-3580-6). The method relies on a univariate algorithm, an adaptation of the N-pdf transform of [Pitié et al. (2005)](https://ieeexplore.ieee.org/document/1544887/) and a final reordering step.

In the following, we use the AHCCD and CanESM2 data as reference and simulation, and we correct both `pr` and `tasmax` together.
from xclim.core.units import convert_units_to from xclim.testing import open_dataset dref = open_dataset( "sdba/ahccd_1950-2013.nc", chunks={"location": 1}, drop_variables=["lat", "lon"] ).sel(time=slice("1981", "2010")) dref = dref.assign( tasmax=convert_units_to(dref.tasmax, "K"), pr=convert_units_to(dref.pr, "kg m-2 s-1"), ) dsim = open_dataset( "sdba/CanESM2_1950-2100.nc", chunks={"location": 1}, drop_variables=["lat", "lon"] ) dhist = dsim.sel(time=slice("1981", "2010")) dsim = dsim.sel(time=slice("2041", "2070")) dref
_____no_output_____
Apache-2.0
docs/notebooks/sdba.ipynb
Ouranosinc/dcvar
Perform an initial univariate adjustment.
# additive for tasmax QDMtx = sdba.QuantileDeltaMapping.train( dref.tasmax, dhist.tasmax, nquantiles=20, kind="+", group="time" ) # Adjust both hist and sim, we'll feed both to the Npdf transform. scenh_tx = QDMtx.adjust(dhist.tasmax) scens_tx = QDMtx.adjust(dsim.tasmax) # remove == 0 values in pr: dref["pr"] = sdba.processing.jitter_under_thresh(dref.pr, "0.01 mm d-1") dhist["pr"] = sdba.processing.jitter_under_thresh(dhist.pr, "0.01 mm d-1") dsim["pr"] = sdba.processing.jitter_under_thresh(dsim.pr, "0.01 mm d-1") # multiplicative for pr QDMpr = sdba.QuantileDeltaMapping.train( dref.pr, dhist.pr, nquantiles=20, kind="*", group="time" ) # Adjust both hist and sim, we'll feed both to the Npdf transform. scenh_pr = QDMpr.adjust(dhist.pr) scens_pr = QDMpr.adjust(dsim.pr) scenh = xr.Dataset(dict(tasmax=scenh_tx, pr=scenh_pr)) scens = xr.Dataset(dict(tasmax=scens_tx, pr=scens_pr))
_____no_output_____
Apache-2.0
docs/notebooks/sdba.ipynb
Ouranosinc/dcvar
Stack the variables to multivariate arrays and standardize them

The standardization process ensures the mean and standard deviation of each column (variable) is 0 and 1 respectively.

`hist` and `sim` are standardized together so the two series are coherent. We keep the mean and standard deviation to be reused when we build the result.
# Stack the variables (tasmax and pr)
ref = sdba.processing.stack_variables(dref)
scenh = sdba.processing.stack_variables(scenh)
scens = sdba.processing.stack_variables(scens)

# Standardize
ref, _, _ = sdba.processing.standardize(ref)

allsim, savg, sstd = sdba.processing.standardize(xr.concat((scenh, scens), "time"))

hist = allsim.sel(time=scenh.time)
sim = allsim.sel(time=scens.time)
_____no_output_____
Apache-2.0
docs/notebooks/sdba.ipynb
Ouranosinc/dcvar
Perform the N-dimensional probability density function transformThe NpdfTransform will iteratively randomly rotate our arrays in the "variables" space and apply the univariate adjustment before rotating it back. In Cannon (2018) and Pitié et al. (2005), it can be seen that the source array's joint distribution converges toward the target's joint distribution when a large number of iterations is done.
from xclim import set_options # See the advanced notebook for details on how this option work with set_options(sdba_extra_output=True): out = sdba.adjustment.NpdfTransform.adjust( ref, hist, sim, base=sdba.QuantileDeltaMapping, # Use QDM as the univariate adjustment. base_kws={"nquantiles": 20, "group": "time"}, n_iter=20, # perform 20 iteration n_escore=1000, # only send 1000 points to the escore metric (it is realy slow) ) scenh = out.scenh.rename(time_hist="time") # Bias-adjusted historical period scens = out.scen # Bias-adjusted future period extra = out.drop_vars(["scenh", "scen"]) # Un-standardize (add the mean and the std back) scenh = sdba.processing.unstandardize(scenh, savg, sstd) scens = sdba.processing.unstandardize(scens, savg, sstd)
_____no_output_____
Apache-2.0
docs/notebooks/sdba.ipynb
Ouranosinc/dcvar
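Conceptually, each iteration of the N-pdf transform rotates the standardized variables with a random orthogonal matrix, applies the univariate adjustment along each rotated axis, and rotates back. The NumPy sketch below illustrates only the rotation part with made-up data; the real `NpdfTransform` also performs the quantile adjustment, grouping and dask handling.

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(size=(2, 500))  # 2 variables x 500 time steps (made up)

# Random orthogonal rotation obtained from a QR decomposition
rotation, _ = np.linalg.qr(rng.normal(size=(2, 2)))

rotated = rotation @ data         # rotate into a new "variables" space
# ... a univariate adjustment would be applied along each row of `rotated` here ...
unrotated = rotation.T @ rotated  # rotate back (orthogonal => inverse is the transpose)

assert np.allclose(unrotated, data)  # with no adjustment, the rotation round-trips exactly
```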
Restoring the trend

The NpdfT has given us new "hist" and "sim" arrays with a correct rank structure. However, the trend is lost in this process. We reorder the result of the initial adjustment according to the rank structure of the NpdfT outputs to get our final bias-adjusted series.

In `sdba.processing.reordering`, 'ref' is the argument that provides the order and 'sim' is the argument to reorder.
scenh = sdba.processing.reordering(hist, scenh, group="time")
scens = sdba.processing.reordering(sim, scens, group="time")

scenh = sdba.processing.unstack_variables(scenh)
scens = sdba.processing.unstack_variables(scens)
_____no_output_____
Apache-2.0
docs/notebooks/sdba.ipynb
Ouranosinc/dcvar
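The reordering idea itself boils down to: sort the values of the series being reordered, then place them according to the ranks of the series that provides the order. A minimal NumPy sketch of that idea follows (made-up numbers; the real `sdba.processing.reordering` works group-wise on xarray objects and handles chunks).

```python
import numpy as np

order_giver = np.array([0.3, 2.1, 1.4, 0.9])     # plays the role of the NpdfT output
to_reorder = np.array([10.0, 40.0, 20.0, 30.0])  # plays the role of the univariate scen

ranks = order_giver.argsort().argsort()  # rank of each element of the order-giver
reordered = np.sort(to_reorder)[ranks]
print(reordered)  # -> [10. 40. 30. 20.]  same values, rank structure of order_giver
```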
There we are!

Let's trigger all the computations. Here we write the data to disk and use `compute=False` in order to trigger the whole computation tree only once. There seems to be no way in xarray to do the same with a `load` call.
from dask import compute
from dask.diagnostics import ProgressBar

tasks = [
    scenh.isel(location=2).to_netcdf("mbcn_scen_hist_loc2.nc", compute=False),
    scens.isel(location=2).to_netcdf("mbcn_scen_sim_loc2.nc", compute=False),
    extra.escores.isel(location=2)
    .to_dataset()
    .to_netcdf("mbcn_escores_loc2.nc", compute=False),
]

with ProgressBar():
    compute(tasks)
_____no_output_____
Apache-2.0
docs/notebooks/sdba.ipynb
Ouranosinc/dcvar
Let's compare the series and look at the distance scores to see how well the Npdf transform has converged.
scenh = xr.open_dataset("mbcn_scen_hist_loc2.nc")

fig, ax = plt.subplots()
dref.isel(location=2).tasmax.plot(ax=ax, label="Reference")
scenh.tasmax.plot(ax=ax, label="Adjusted", alpha=0.65)
dhist.isel(location=2).tasmax.plot(ax=ax, label="Simulated")
ax.legend()

escores = xr.open_dataarray("mbcn_escores_loc2.nc")
diff_escore = escores.differentiate("iterations")
diff_escore.plot()
plt.title("Difference of the subsequent e-scores.")
plt.ylabel("E-scores difference")

diff_escore
_____no_output_____
Apache-2.0
docs/notebooks/sdba.ipynb
Ouranosinc/dcvar
Working with PDBsum in Jupyter & Demonstration of PDBsum protein interface data to dataframe script

Usually you'll want to get some data from PDBsum and analyze it. For the current example in this series of notebooks, I'll cover how to bring in a file of protein-protein interactions and then progress through using that in combination with Python to analyze the results, and ultimately compare the results to a different structure.

-----

If you haven't used one of these notebooks before, they're basically web pages in which you can write, edit, and run live code. They're meant to encourage experimentation, so don't feel nervous. Just try running a few cells and see what happens! Some tips:

- Code cells have boxes around them. When you hover over them an icon appears.
- To run a code cell either click the icon, or click on the cell and then hit Shift+Enter. The Shift+Enter combo will also move you to the next cell, so it's a quick way to work through the notebook.
- While a cell is running a * appears in the square brackets next to the cell. Once the cell has finished running the asterisk will be replaced with a number.
- In most cases you'll want to start from the top of the notebook and work your way down running each cell in turn. Later cells might depend on the results of earlier ones.
- To edit a code cell, just click on it and type stuff. Remember to run the cell once you've finished editing.

----

Retrieving Protein-Protein interface reports / the list of interactions

Getting the list of interactions between two proteins, found under individual entries under PDBsum's 'Prot-prot' tab, via the command line.

Say, for example, the page [here](http://www.ebi.ac.uk/thornton-srv/databases/cgi-bin/pdbsum/GetPage.pl?pdbcode=6ah3&template=interfaces.html&o=RESIDUE&l=3) links to the following as 'List of interactions' in the bottom right of the page:

```text
http://www.ebi.ac.uk/thornton-srv/databases/cgi-bin/pdbsum/GetIface.pl?pdb=6ah3&chain1=B&chain2=G
```

Then, based on the suggestion at the top of [here](https://stackoverflow.com/a/52363117/8508004), that would be used in a curl command where the items after the `?` in the original URL get placed into quotes and provided following the `--data` flag argument option in the call to `curl`, like so:

```text
curl -L -o data.txt --data "pdb=6ah3&chain1=B&chain2=G" http://www.ebi.ac.uk/thornton-srv/databases/cgi-bin/pdbsum/GetIface.pl
```

**Specifically**, the `--data "pdb=6ah3&chain1=B&chain2=G"` part comes from the end of the original URL.

Putting that into action in Jupyter to fetch the interactions list for the example as a text file:
!curl -L -o data.txt --data "pdb=6ah3&chain1=B&chain2=G" http://www.ebi.ac.uk/thornton-srv/databases/cgi-bin/pdbsum/GetIface.pl
% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 7063 0 7037 100 26 9033 33 --:--:-- --:--:-- --:--:-- 9055
MIT
notebooks/Working with PDBsum in Jupyter Basics.ipynb
fomightez/pdbsum-binder
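If you'd rather stay entirely in Python instead of shelling out to curl, the same POST request can be made with the `requests` library. This is a hedged alternative sketch, not what the notebook above does, and it assumes `requests` is available in your session.

```python
import requests

url = "http://www.ebi.ac.uk/thornton-srv/databases/cgi-bin/pdbsum/GetIface.pl"
payload = {"pdb": "6ah3", "chain1": "B", "chain2": "G"}

response = requests.post(url, data=payload)  # equivalent to curl's --data payload
response.raise_for_status()
with open("data.txt", "w") as handle:
    handle.write(response.text)
```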
To prove that the data file has been retrieved, we'll show the first 16 lines of it by running the next cell:
!head -16 data.txt
<PRE> List of atom-atom interactions across protein-protein interface --------------------------------------------------------------- <P> PDB code: 6ah3 Chains B }{ G ------------------------------ <P> Hydrogen bonds -------------- <----- A T O M 1 -----> <----- A T O M 2 -----> Atom Atom Res Res Atom Atom Res Res no. name name no. Chain no. name name no. Chain Distance 1. 9937 NZ LYS 326 B <--> 20598 O LYS 122 G 2.47
MIT
notebooks/Working with PDBsum in Jupyter Basics.ipynb
fomightez/pdbsum-binder
Later in this series of notebooks, I'll demonstrate how to make this step even easier with just the PDB entry id and the chains you are interested in, and then later how to loop on this process to get multiple data files for interactions from different structures.

Making a Pandas dataframe from the interactions file

To convert the data to a dataframe, we'll use a script. If you haven't encountered Pandas dataframes before, I suggest you see the first two notebooks that come up when you launch a session from my [blast-binder](https://github.com/fomightez/blast-binder) site. Those first two notebooks cover using the dataframe containing BLAST results some. To get that script, you can run the next cell. (It is not included in the repository where this launches from, to ensure you always get the most current version, which is assumed to be the best available at the time.)
!curl -OL https://raw.githubusercontent.com/fomightez/structurework/master/pdbsum-utilities/pdbsum_prot_interactions_list_to_df.py
% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 23915 100 23915 0 0 35272 0 --:--:-- --:--:-- --:--:-- 35220
MIT
notebooks/Working with PDBsum in Jupyter Basics.ipynb
fomightez/pdbsum-binder
We have the script now. And we already have a data file for it to process. To process the data file, run the next command where we use Python to run the script and direct it at the results file, `data.txt`, we made just a few cells ago.
%run pdbsum_prot_interactions_list_to_df.py data.txt
Provided interactions data read and converted to a dataframe... A dataframe of the data has been saved as a file in a manner where other Python programs can access it (pickled form). RESULTING DATAFRAME is stored as ==> 'prot_int_pickled_df.pkl'
MIT
notebooks/Working with PDBsum in Jupyter Basics.ipynb
fomightez/pdbsum-binder
As of writing this, the script we are using outputs a file that is a binary, compact form of the dataframe. (That means it is tiny and not human readable. It is called 'pickled'. Saving in that form may seem odd, but as illustrated [here](#Output-to-more-universal,-table-like-formats) below, this is a very malleable form. And even more pertinent for dealing with data in Jupyter notebooks, there is actually an easier way to interact with this script when in a Jupyter notebook that skips saving this intermediate file. So hang on through the longer, more traditional way of doing this before the easier way is introduced. I saved it in the compact form and not the more typical tab-delimited form because we mostly won't go this route and might as well make tiny files while working along to a better route. It is easy to convert back and forth using the pickled form, assuming you can match the Pandas/Python versions.)

We can take that file where the dataframe is pickled and bring it into active memory in this notebook with another command from the Pandas library. First, we have to import the Pandas library. Run the next command to bring the dataframe into active memory. Note the name comes from the name noted when we ran the script in the cell above.
import pandas as pd

df = pd.read_pickle("prot_int_pickled_df.pkl")
_____no_output_____
MIT
notebooks/Working with PDBsum in Jupyter Basics.ipynb
fomightez/pdbsum-binder
When that last cell ran, you won't notice any output, but something happened. We can look at that dataframe by calling it in a cell.
df
_____no_output_____
MIT
notebooks/Working with PDBsum in Jupyter Basics.ipynb
fomightez/pdbsum-binder
You'll notice that if the list of data is large, the Jupyter environment represents just the head and tail to keep the display reasonable. There are ways you can have Jupyter display it all, which we won't go into here (though a short aside a couple of cells below sketches one option). Instead we'll start to show some methods of dataframes that make them convenient. For example, you can use the `head` method to see the start, like we used on the command line above.
df.head()
_____no_output_____
MIT
notebooks/Working with PDBsum in Jupyter Basics.ipynb
fomightez/pdbsum-binder
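As an aside on the earlier point about getting Jupyter to show every row: one common approach, sketched here as an optional extra rather than something the original notebook covers, is to temporarily lift Pandas' display limit with an option context.

```python
# Temporarily lift the row-display limit just for this block
with pd.option_context("display.max_rows", None):
    display(df)
```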
Now what types of interactions are observed for this pair of interacting protein chains? To help answer that, we can group the results by the type column.
grouped = df.groupby('type')

for type, grouped_df in grouped:
    print(type)
    display(grouped_df)
Hydrogen bonds
MIT
notebooks/Working with PDBsum in Jupyter Basics.ipynb
fomightez/pdbsum-binder
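As a small added aside (not in the original notebook), a quick tally of how many interactions fall into each category complements the grouped display above.

```python
# Count how many interactions of each type were detected
df["type"].value_counts()
```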
The grouped view shows the same data as earlier, but we can clearly see we have Hydrogen bonds, Non-bonded contacts (a.k.a. van der Waals contacts), and Salt bridges, and we immediately get a sense of what types of interactions are more abundant.

You may want to get a sense of what else you can do by examining the first two notebooks that come up when you launch a session from my [blast-binder](https://github.com/fomightez/blast-binder) site. Those first two notebooks cover using the dataframe containing BLAST results some.

Shortly, we'll cover how to bring the dataframe we just made into the notebook without dealing with a file intermediate; however, next I'll demonstrate how to save it as text for use elsewhere, such as in Excel.

Output to more universal, table-like formats

I've tried to sell you on the power of the Python/Pandas dataframe, but it isn't for all uses or everyone. However, most everyone is accustomed to dealing with text-based tables or even Excel. In fact, a text-based table, perhaps tab- or comma-delimited, would be the better way to archive the data we are generating here. Python/Pandas makes it easy to go from the dataframe form to these tabular forms. You can even go back later from the table to the dataframe, which may be important if you are going to different versions of Python/Pandas, as I briefly mentioned parenthetically above.

**First, generating a text-based table.**
# Save / write a TSV-formatted (tab-separated values / tab-delimited) file
df.to_csv('pdbsum_data.tsv', sep='\t', index=False)  # add `, header=False` to leave off the header, too
_____no_output_____
MIT
notebooks/Working with PDBsum in Jupyter Basics.ipynb
fomightez/pdbsum-binder
Because `df.to_csv()` defaults to dealing with CSV, you can simply use `df.to_csv('example.csv', index=False)` for comma-delimited (comma-separated) files.

You can see that worked by looking at the first few lines with the next command. (Feel free to make the number higher or delete the number altogether. I restricted it to just the first few lines to make the output smaller.)
!head -5 pdbsum_data.tsv
Atom1 no. Atom1 name Atom1 Res name Atom1 Res no. Atom1 Chain Atom2 no. Atom2 name Atom2 Res name Atom2 Res no. Atom2 Chain Distance type 9937 NZ LYS 326 B 20598 O LYS 122 G 2.47 Hydrogen bonds 9591 O CYS 280 B 19928 CG1 ILE 29 G 3.77 Non-bonded contacts 9591 O CYS 280 B 19930 CD1 ILE 29 G 3.42 Non-bonded contacts 9593 SG CYS 280 B 19872 NZ LYS 22 G 3.81 Non-bonded contacts
MIT
notebooks/Working with PDBsum in Jupyter Basics.ipynb
fomightez/pdbsum-binder
If you need to go back from a tab-separated table to a dataframe, you can run something like the following cell.
reverted_df = pd.read_csv('pdbsum_data.tsv', sep='\t')
reverted_df.to_pickle('reverted_df.pkl')  # OPTIONAL: pickle that data, too
_____no_output_____
MIT
notebooks/Working with PDBsum in Jupyter Basics.ipynb
fomightez/pdbsum-binder
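If you want to confirm programmatically that nothing changed in the round trip through the text file, a quick check is sketched below. Treat it as a hedged extra: dtypes can occasionally differ after a read, so a `False` here is a prompt to inspect rather than proof of a problem.

```python
# Compare the re-read table to the original dataframe
reverted_df.equals(df)
```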
For a comma-delimited (CSV) file you'd use `df = pd.read_csv('example.csv')`, because the `pd.read_csv()` method defaults to a comma as the separator (`sep` parameter).

You can verify the read from the text-based table worked by viewing it with the next line.
reverted_df.head()
_____no_output_____
MIT
notebooks/Working with PDBsum in Jupyter Basics.ipynb
fomightez/pdbsum-binder
**Generating an Excel spreadsheet from a dataframe.**

Because this is a specialized need, there is a special module needed that I didn't bother installing by default, and so it needs to be installed before generating the Excel file. Running the next cell will do both.
%pip install openpyxl
# save to excel (KEEPS multiindex, and makes it sparse to look good in Excel straight out of Python)
df.to_excel('pdbsum_data.xlsx')  # after openpyxl is installed
Requirement already satisfied: openpyxl in /srv/conda/envs/notebook/lib/python3.7/site-packages (3.0.6) Requirement already satisfied: et-xmlfile in /srv/conda/envs/notebook/lib/python3.7/site-packages (from openpyxl) (1.0.1) Requirement already satisfied: jdcal in /srv/conda/envs/notebook/lib/python3.7/site-packages (from openpyxl) (1.4.1) Note: you may need to restart the kernel to use updated packages.
MIT
notebooks/Working with PDBsum in Jupyter Basics.ipynb
fomightez/pdbsum-binder
You'll need to download the file to your computer first and then view it locally, as there is no viewer in the Jupyter environment.

Additionally, it is possible to add styles to dataframes, and the styles, such as shading of cells and coloring of text, will be translated to the Excel document made as well (a small sketch of this appears a couple of cells below).

Excel files can be read into Pandas dataframes directly without needing to go to a text-based intermediate first.
# Read the Excel file back in
df_from_excel = pd.read_excel('pdbsum_data.xlsx', engine='openpyxl')
# see https://stackoverflow.com/a/65266270/8508004 which notes xlrd no longer supports xlsx
Collecting xlrd Downloading xlrd-2.0.1-py2.py3-none-any.whl (96 kB)  |████████████████████████████████| 96 kB 2.8 MB/s eta 0:00:011 [?25hInstalling collected packages: xlrd Successfully installed xlrd-2.0.1 Note: you may need to restart the kernel to use updated packages.
MIT
notebooks/Working with PDBsum in Jupyter Basics.ipynb
fomightez/pdbsum-binder
That can be viewed to convince yourself it worked by running the next command.
df_from_excel.head()
_____no_output_____
MIT
notebooks/Working with PDBsum in Jupyter Basics.ipynb
fomightez/pdbsum-binder
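Relatedly, the earlier note about styles carrying over to Excel could look something like the following. This is a hedged sketch only; highlighting the maximum of the Distance column and the output filename are illustrative choices, not something the original notebook does.

```python
# Shade the largest value in the Distance column and keep that styling in the Excel file
styled = df.style.highlight_max(subset=["Distance"], color="lightyellow")
styled.to_excel("pdbsum_data_styled.xlsx", engine="openpyxl")
```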
Next, we'll cover how to bring the dataframe we just made into the notebook without dealing with a file intermediate.

----

Making a Pandas dataframe from the interactions file directly in Jupyter

First we'll check for the script we'll use and get it if we don't already have it. (The thinking is that once you know what you are doing, you may have skipped all the steps above and not have the script you'll need yet. It cannot hurt to check, and if it isn't present, bring it here.)
# Get a file if not yet retrieved / check if file exists
import os

file_needed = "pdbsum_prot_interactions_list_to_df.py"
if not os.path.isfile(file_needed):
    !curl -OL https://raw.githubusercontent.com/fomightez/structurework/master/pdbsum-utilities/{file_needed}
_____no_output_____
MIT
notebooks/Working with PDBsum in Jupyter Basics.ipynb
fomightez/pdbsum-binder
This is going to rely on approaches very similar to those illustrated [here](https://github.com/fomightez/patmatch-binder/blob/6f7630b2ee061079a72cd117127328fd1abfa6c7/notebooks/PatMatch%20with%20more%20Python.ipynb#Passing-results-data-into-active-memory-without-a-file-intermediate) and [here](https://github.com/fomightez/patmatch-binder/blob/6f7630b2ee061079a72cd117127328fd1abfa6c7/notebooks/Sending%20PatMatch%20output%20directly%20to%20Python.ipynb#Running-Patmatch-and-passing-the-results-to-Python-without-creating-an-output-file-intermediate).

We obtained the `pdbsum_prot_interactions_list_to_df.py` script in the preparation steps above. However, instead of using it as an external script as we did earlier in this notebook, we want to use the core function of that script within this notebook for the options that involve no pickled-object file intermediate. Similar to the way we imported a lot of other useful modules in the first notebook and a cell above, you can run the next cell to bring into the memory of this notebook's computational environment the main function associated with the `pdbsum_prot_interactions_list_to_df.py` script, aptly named `pdbsum_prot_interactions_list_to_df`. (As written below, the command to do that looks a bit redundant; however, the first `from` part of the command below actually references the `pdbsum_prot_interactions_list_to_df.py` script. It doesn't need the `.py` extension because the import only deals with such files.)
from pdbsum_prot_interactions_list_to_df import pdbsum_prot_interactions_list_to_df
_____no_output_____
MIT
notebooks/Working with PDBsum in Jupyter Basics.ipynb
fomightez/pdbsum-binder
We can demonstrate that worked by calling the function.
pdbsum_prot_interactions_list_to_df()
_____no_output_____
MIT
notebooks/Working with PDBsum in Jupyter Basics.ipynb
fomightez/pdbsum-binder
If the module was not imported, you'd see `ModuleNotFoundError: No module named 'pdbsum_prot_interactions_list_to_df'`, but instead you should see it saying it is missing `data_file` to act on because you passed it nothing.

After importing the main function of that script into this running notebook, you are ready to demonstrate the approach that doesn't require a file intermediate. The imported `pdbsum_prot_interactions_list_to_df` function is used within the computational environment of the notebook, and the dataframe produced is assigned to a variable in the running notebook. In the end, the results are in an active dataframe in the notebook without needing to read the pickled dataframe. **Although bear in mind the pickled dataframe still gets made, and it is good to download and keep that pickled dataframe since you'll find it convenient for getting back into an analysis without needing to rerun earlier steps.**
direct_df = pdbsum_prot_interactions_list_to_df("data.txt")
direct_df.head()
Provided interactions data read and converted to a dataframe... A dataframe of the data has been saved as a file in a manner where other Python programs can access it (pickled form). RESULTING DATAFRAME is stored as ==> 'prot_int_pickled_df.pkl' Returning a dataframe with the information as well.
MIT
notebooks/Working with PDBsum in Jupyter Basics.ipynb
fomightez/pdbsum-binder
Columns used from the Titanic data: survived (target), plus pclass, sibsp, parch, and fare (features).
# `df` is assumed to hold the Titanic data loaded in an earlier cell (not shown here)
X = df[['pclass', 'sibsp', 'parch', 'fare']]
Y = df[['survived']]
X.shape, Y.shape

from sklearn.model_selection import train_test_split

x_train, x_test, y_train, y_test = train_test_split(X, Y)
x_train.shape, x_test.shape, y_train.shape, y_test.shape

from sklearn.linear_model import LogisticRegression

logR = LogisticRegression()
type(logR)

logR.fit(x_train, y_train.values.ravel())  # ravel() avoids the column-vector warning
logR.classes_
logR.coef_  # coefficients for 'pclass', 'sibsp', 'parch', 'fare'
logR.score(x_train, y_train)

logR.predict(x_train)
logR.predict_proba(x_train)
logR.predict_proba(x_train[10:13])
0.41873577 + 0.58126423  # the two class probabilities for a row sum to 1
logR.predict(x_train[10:13])

from sklearn import metrics

# confusion_matrix expects (true labels, predicted labels), not the raw features
metrics.confusion_matrix(y_train, logR.predict(x_train))
_____no_output_____
Apache-2.0
titanic_classfication.ipynb
jhee-yun/test_machinelearning1
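Since the confusion matrix above is computed on the training split, it is worth also checking the held-out data. This short follow-up is an added sketch using the objects defined in the cell above, not part of the original notebook.

```python
# Evaluate on the held-out test split
y_pred = logR.predict(x_test)
print("Test accuracy:", logR.score(x_test, y_test))
print(metrics.confusion_matrix(y_test, y_pred))
print(metrics.classification_report(y_test, y_pred))
```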
Working with SeqFish data
import stlearn as st
_____no_output_____
BSD-3-Clause
docs/tutorials/Read_seqfish.ipynb
duypham2108/dev_st
The data is downloaded from https://www.spatialomics.org/SpatialDB/download.php

| Technique | PMID | Title | Expression | SV genes |
| ----------- | ----------- | ----------- | ----------- | ----------- |
| seqFISH | 30911168 | Transcriptome-scale super-resolved imaging in tissues by RNA seqFISH+ | seqfish_30911168.tar.gz | seqfish_30911168_SVG.tar.gz |

Read the SeqFish data, selecting field 5.
data = st.ReadSeqFish(
    count_matrix_file="../Downloads/seqfish_30911168/cortex_svz_counts.matrix",
    spatial_file="../Downloads/seqfish_30911168/cortex_svz_cellcentroids.csv",
    field=5,
)
D:\Anaconda3\envs\test2\lib\site-packages\anndata-0.7.3-py3.8.egg\anndata\_core\anndata.py:119: ImplicitModificationWarning: Transforming to str index. warnings.warn("Transforming to str index.", ImplicitModificationWarning)
BSD-3-Clause
docs/tutorials/Read_seqfish.ipynb
duypham2108/dev_st
Quality checking for the data
st.pl.QC_plot(data)
_____no_output_____
BSD-3-Clause
docs/tutorials/Read_seqfish.ipynb
duypham2108/dev_st
Plot gene Nr4a1
st.pl.gene_plot(data,genes="Nr4a1")
_____no_output_____
BSD-3-Clause
docs/tutorials/Read_seqfish.ipynb
duypham2108/dev_st
Running preprocessing for the SeqFish data
st.pp.filter_genes(data, min_cells=3)
st.pp.normalize_total(data)
st.pp.log1p(data)
st.pp.scale(data)
Normalization step is finished in adata.X Log transformation step is finished in adata.X Scale step is finished in adata.X
BSD-3-Clause
docs/tutorials/Read_seqfish.ipynb
duypham2108/dev_st
Running PCA to reduce the dimensions to 50
st.em.run_pca(data,n_comps=50,random_state=0)
PCA is done! Generated in adata.obsm['X_pca'], adata.uns['pca'] and adata.varm['PCs']
BSD-3-Clause
docs/tutorials/Read_seqfish.ipynb
duypham2108/dev_st
Perform Louvain clustering
st.pp.neighbors(data, n_neighbors=25)
st.tl.clustering.louvain(data)
st.pl.cluster_plot(data, use_label="louvain", spot_size=10)
_____no_output_____
BSD-3-Clause
docs/tutorials/Read_seqfish.ipynb
duypham2108/dev_st