path | concatenated_notebook
---|---
guides/configs.ipynb | ###Markdown
Configs[](https://github.com/lab-ml/labml)[](https://colab.research.google.com/github/lab-ml/labml/blob/master/guides/configs.ipynb)[](https://docs.labml.ai/api/configs.html) The configurations provide an API to easily manage hyper-parameters and other configurable parameters of the experiments. The configurations of each experiment run are stored. These can be viewed on [the web app](https://github.com/labmlai/labml/tree/master/app).
###Code
!pip install labml --quiet
import torch
from torch import nn
from labml import tracker, monit, experiment, logger
from labml.configs import BaseConfigs, option, calculate, hyperparams, aggregate
###Output
_____no_output_____
###Markdown
Define a configuration class
###Code
class TransformerConfigs(BaseConfigs):
d_model: int = 512
d_ff: int = 2048
attention: nn.Module = 'MultiHead'
ffn: nn.Module = 'MLP'
ffn_activation: nn.Module = 'ReLU'
###Output
_____no_output_____
###Markdown
Use of type hinting is optional. Calculated configurations: You can specify multiple config calculator functions. You pick which one to use by its name.
###Code
@option(TransformerConfigs.ffn_activation)
def ReLU(c: TransformerConfigs):
return nn.ReLU()
@option(TransformerConfigs.ffn_activation)
def GELU(c: TransformerConfigs):
return nn.GELU()
###Output
_____no_output_____
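###Markdown
To pick a calculator by name (a minimal sketch, assuming the classes above), assign the option's function name to the config:
###Code
conf = TransformerConfigs()
# 'GELU' selects the calculator registered above under that name
conf.ffn_activation = 'GELU'
###Output
_____no_output_____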
###Markdown
Inheriting and re-using configuration classes: Configs classes can be inherited. This lets you separate configs into modules instead of passing a [monolithic config object](https://www.reddit.com/r/MachineLearning/comments/g1vku4/d_antipatterns_in_open_sourced_ml_research_code/). You can even inherit entire experiment setups and make a few modifications.
###Code
class MyTransformerConfigs(TransformerConfigs):
positional_embeddings: nn.Module = 'Rotary'
ffn_activation: nn.Module = 'GELU'
###Output
_____no_output_____
###Markdown
Submodules: Configurations can be nested.
###Code
class Configs(BaseConfigs):
transformer: TransformerConfigs = 'rotary_transformer'
total_steps: int
epochs: int
steps_per_epoch: int
tokenizer: any
dataset: any
task: any
@option(Configs.transformer, 'rotary_transformer')
def rotary_transformer_configs(c: Configs):
conf = MyTransformerConfigs()
conf.d_model = 256
return conf
###Output
_____no_output_____
###Markdown
*It will initialize to the default (based on the type hint) if no options are provided.* Advanced usage: Calculating with predefined functions or lambdas. You can also compute configs with `lambda` functions or predefined functions.
###Code
_ = calculate(Configs.total_steps,
[Configs.epochs, Configs.steps_per_epoch], # args
lambda e, s: e * s)
###Output
_____no_output_____
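###Markdown
Equivalently (a hedged sketch, not from the original guide), a predefined function can be registered instead of the lambda; only one of the two registrations would normally be kept:
###Code
def total_steps_calc(e: int, s: int):
    # Hypothetical helper with the same computation as the lambda above
    return e * s

# Same registration as above, with a named function in place of the lambda:
# _ = calculate(Configs.total_steps,
#               [Configs.epochs, Configs.steps_per_epoch],
#               total_steps_calc)
###Output
_____no_output_____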
###Markdown
Aggregates: You can use aggregates to set up configs that depend on each other. For example, we change `dataset` and `epochs` based on the `task`.
###Code
aggregate(Configs.task, 'wiki', (Configs.dataset, 'wikipedia'), (Configs.epochs, 10))
aggregate(Configs.task, 'arxiv', (Configs.dataset, 'arxiv'), (Configs.epochs, 100))
###Output
_____no_output_____
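###Markdown
For instance (a hedged sketch, assuming the aggregates above), selecting a task implies the aggregated values:
###Code
conf = Configs()
conf.task = 'wiki'
# When the configs are processed (e.g. by experiment.configs), the 'wiki'
# aggregate implies dataset='wikipedia' and epochs=10, as registered above.
###Output
_____no_output_____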
###Markdown
Hyper-parameters: labml will identify any parameter you modify outside the declaration of the class as a hyper-parameter. You can also specify hyper-parameters manually. The hyper-parameters will be highlighted among other configs in logs and in [the web app](https://github.com/labmlai/labml/tree/master/app). These will also be logged to Tensorboard.
###Code
hyperparams(Configs.epochs)
hyperparams(Configs.total_steps, is_hyperparam=False)
###Output
_____no_output_____
###Markdown
Running the experiment: Here's how you run an experiment with the configurations.
###Code
conf = Configs()
conf.task = 'arxiv'
experiment.create(name='test_configs')
experiment.configs(conf)
logger.inspect(epochs=conf.epochs)
experiment.start()
###Output
_____no_output_____
###Markdown
Configs[](https://github.com/lab-ml/labml)[](https://colab.research.google.com/github/lab-ml/labml/blob/master/guides/configs.ipynb)[](https://docs.labml.ai/api/configs.html) The configurations provide an API to easily manage hyper-parameters and other configurable parameters of the experiments. The configurations of each experiment run are stored. These can be viewed on [Dashboard](https://github.com/vpj/labmlml_dashboard).
###Code
%%capture
!pip install labml
import torch
from torch import nn
from labml import tracker, monit, experiment, logger
from labml.logger import Text  # used below for warning-styled log messages
from labml.configs import BaseConfigs, option, calculate, hyperparams, aggregate
import torch.nn.functional as F  # used by the example models below
###Output
_____no_output_____
###Markdown
Define a configuration class
###Code
class DeviceConfigs(BaseConfigs):
use_cuda: bool = True
cuda_device: int = 0
device: any
###Output
_____no_output_____
###Markdown
Calculated configurations
###Code
@option(DeviceConfigs.device)
def cuda(c: DeviceConfigs):
is_cuda = c.use_cuda and torch.cuda.is_available()
if not is_cuda:
return torch.device("cpu")
else:
if c.cuda_device < torch.cuda.device_count():
return torch.device(f"cuda:{c.cuda_device}")
else:
logger.log(f"Cuda device index {c.cuda_device} higher than "
f"device count {torch.cuda.device_count()}", Text.warning)
return torch.device(f"cuda:{torch.cuda.device_count() - 1}")
###Output
_____no_output_____
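###Markdown
Multiple calculators can be registered for the same config and selected by name; a minimal sketch (an addition, not in the original guide) of a second option:
###Code
@option(DeviceConfigs.device)
def cpu(c: DeviceConfigs):
    # A hypothetical alternative calculator; select it with conf.device = 'cpu'
    return torch.device("cpu")
###Output
_____no_output_____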
###Markdown
Inheriting and re-using configuration classes: Configs classes can be inherited. This lets you separate configs into modules instead of passing a [monolithic config object](https://www.reddit.com/r/MachineLearning/comments/g1vku4/d_antipatterns_in_open_sourced_ml_research_code/). You can even inherit entire experiment setups and make a few modifications.
###Code
class Configs(DeviceConfigs):
model_size: int = 1024
input_size: int = 10
output_size: int = 10
model: any = 'two_hidden_layer'
epochs = 10
steps_per_epoch = 1024
total_steps: int
variant: str
###Output
_____no_output_____
###Markdown
Defining configuration options: You can specify multiple config calculator functions. You pick which one to use by its name.
###Code
class OneHiddenLayerModule(nn.Module):
def __init__(self, input_size: int, model_size: int, output_size: int):
super().__init__()
self.input_fc = nn.Linear(input_size, model_size)
self.output_fc = nn.Linear(model_size, output_size)
def forward(self, x: torch.Tensor):
x = F.relu(self.input_fc(x))
return self.output_fc(x)
# This is just for illustration purposes, ideally you should have a configuration
# for number of hidden layers.
# A real world example would be different architectures, like a dense network vs a CNN
class TwoHiddenLayerModule(nn.Module):
def __init__(self, input_size: int, model_size: int, output_size: int):
super().__init__()
self.input_fc = nn.Linear(input_size, model_size)
self.middle_fc = nn.Linear(model_size, model_size)
self.output_fc = nn.Linear(model_size, output_size)
def forward(self, x: torch.Tensor):
x = F.relu(self.input_fc(x))
x = F.relu(self.middle_fc(x))
return self.output_fc(x)
@option(Configs.model)
def one_hidden_layer(c: Configs):
return OneHiddenLayerModule(c.input_size, c.model_size, c.output_size)
@option(Configs.model)
def two_hidden_layer(c: Configs):
return TwoHiddenLayerModule(c.input_size, c.model_size, c.output_size)
###Output
_____no_output_____
###Markdown
Note that the configuration calculators pass only the needed parameters to the models, and not the whole config object; the library forces you to do that. However, you can directly set the model as an option, with `__init__` accepting `Configs` as a parameter; it is not a usage pattern we encourage.
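###Code
# A hypothetical sketch (not part of the guide) of that discouraged pattern:
# the model's __init__ accepts the whole Configs object.
class MonolithicModel(nn.Module):
    def __init__(self, c: Configs):
        super().__init__()
        self.fc = nn.Linear(c.input_size, c.output_size)

@option(Configs.model)
def monolithic(c: Configs):
    return MonolithicModel(c)
###Output
_____no_output_____
###Markdown
Calculating with predefined functions or lambdas: You can also compute configs with `lambda` functions or predefined functions.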
###Code
_ = calculate(Configs.total_steps, [Configs.epochs, Configs.steps_per_epoch], lambda e, s: e * s)
###Output
_____no_output_____
###Markdown
Aggregates: You can use aggregates to set up configs that depend on each other. For example, this is useful when you have an LSTM-based model and a CNN-based model, and different data loaders for each architecture. Here we use `variant` to set both `model` and the number of epochs.
###Code
aggregate(Configs.variant, 'small', (Configs.model, 'one_hidden_layer'), (Configs.epochs, 10))
aggregate(Configs.variant, 'large', (Configs.model, 'two_hidden_layer'), (Configs.epochs, 100))
###Output
_____no_output_____
###Markdown
Hyper-parameters: LabML will identify any parameter you explicitly specify outside the declaration of the class as a hyper-parameter (in this case `variant`, because we set it with `conf.variant = 'large'`). You can also specify hyper-parameters manually. The hyper-parameters will be highlighted among other configs in logs and in the dashboard. These will also be logged to Tensorboard.
###Code
hyperparams(Configs.epochs)
hyperparams(Configs.total_steps, is_hyperparam=False)
###Output
_____no_output_____
###Markdown
Running the experiment: Here's how you run an experiment with the configurations.
###Code
conf = Configs()
conf.variant = 'large'
experiment.create(name='test_configs')
experiment.configs(conf)
logger.inspect(model=conf.model)
experiment.start()
###Output
_____no_output_____ |
juliainjulia-PattiMalfavon.ipynb | ###Markdown
Classwork 13: Julia. Taylor Patti and Andrew Malfavon
###Code
"""
juliamap(c, z; maxiter=100):
implement the iteration algorithm for a Julia Set.
**Returns:** integer number of iterations, or zero if the iteration never diverges.
- c : complex constant defining the set
- z : complex number being iterated
- maxiter : maximum iteration number, defaults to 100
"""
function juliamap(c, z; maxiter=100)
for n = 1:maxiter
z = z^2 + c
if abs(z) > 2
return n
end
end
return 0
end
@doc juliamap
# Specialize juliamap to c=0
j0(z) = juliamap(0,z)
# Vectorize j0 over arrays of Complex numbers
@vectorize_1arg Complex j0
# List the available methods for j0 for different types
methods(j0)
# Create a complex plane
function complex_plane(xmin=-2, xmax=2, ymin=-2, ymax=2; xpoints=2000, ypoints=2000)
# y is a column vector
y = linspace(ymin, ymax, ypoints)
# x uses a transpose, yielding a row vector
x = linspace(xmin, xmax, xpoints)'
# z uses broadcasted addition and multiplication to create a plane
z = x .+ y.*im;
# The final line of a block is treated as the return value, in the absence
# of an explicit return statement
end
# The vectorized function can be applied directly to the plane
@time cp = complex_plane()
@time j0p = j0(cp)
###Output
_____no_output_____
###Markdown
The code works by generating two arrays and then applying the hvcat command to the transpose of the first array and the imaginary-multiplied second array to form an n-by-n complex plane. The difference between the comma and the semicolon is that the comma gives you the transpose, while the semicolon indicates the combination of horizontal and vertical concatenation which should take place.
###Code
immutable ComplexPlane
x :: LinSpace{Float64}
y :: LinSpace{Float64}
z :: Array{Complex{Float64},2}
function ComplexPlane(xmin=-2, xmax=2, ymin=-2, ymax=2;
xpoints=2000, ypoints=2000)
x = linspace(xmin, xmax, xpoints)
y = linspace(ymin, ymax, ypoints)
z = x' .+ y.*im
new(x,y,z)
end
end
cp = ComplexPlane(xpoints=200,ypoints=200);
typeof(cp)
print(typeof(cp.x))
j0(cp.z)
###Output
_____no_output_____ |
notebooks/MR/Old_notebooks/acquisition_data.ipynb | ###Markdown
This demo differs from the main code due to the range of readouts printed out.
###Code
#'''
#Upper-level interface demo that illustrates how MR data can be interfaced
#from python.
#
#Usage:
# acquisition_data.py [--help | options]
#
#Options:
# -f <file>, --file=<file> raw data file
# [default: simulated_MR_2D_cartesian.h5]
# -p <path>, --path=<path> path to data files, defaults to data/examples/MR
# subfolder of SIRF root folder
# -r <rnge>, --range=<rnge> range of readouts to examine as string '(a,b)'
# [default: '(0,1)'] CHANGED FOR JUPYTER
# -s <slcs>, --slices=<slcs> max number of slices to display [default: 8]
# -e <engn>, --engine=<engn> reconstruction engine [default: Gadgetron]
#'''
#
## CCP PETMR Synergistic Image Reconstruction Framework (SIRF).
## Copyright 2015 - 2017 Rutherford Appleton Laboratory STFC.
## Copyright 2015 - 2017 University College London.
## Copyright 2015 - 2017 Physikalisch-Technische Bundesanstalt.
##
## This is software developed for the Collaborative Computational
## Project in Positron Emission Tomography and Magnetic Resonance imaging
## (http://www.ccppetmr.ac.uk/).
##
## Licensed under the Apache License, Version 2.0 (the "License");
## you may not use this file except in compliance with the License.
## You may obtain a copy of the License at
## http://www.apache.org/licenses/LICENSE-2.0
## Unless required by applicable law or agreed to in writing, software
## distributed under the License is distributed on an "AS IS" BASIS,
## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
## See the License for the specific language governing permissions and
## limitations under the License.
__version__ = '0.1.0'
from docopt import docopt
from ast import literal_eval
# import engine module
#exec('from p' + args['--engine'] + ' import *')
from sirf.Gadgetron import *
# process command-line options
data_file = 'simulated_MR_2D_cartesian.h5'
data_path = examples_data_path('MR')
ro_range = literal_eval('(0,1)')
slcs = 8
scheme = AcquisitionData.get_storage_scheme()
print('storage scheme: %s' % repr(scheme))
# locate the input data file
input_file = existing_filepath(data_path, data_file)
# acquisition data will be read from an HDF file input_file
acq_data = AcquisitionData(input_file)
# the raw k-space data is a list of different readouts
# of different data type (e.g. noise correlation data, navigator data,
# image data,...);
# the number of all readouts is
na = acq_data.number_of_readouts('all')
# the number of image data readouts is
ni = acq_data.number_of_readouts()
print('readouts: total %d, image data %d' % (na, ni))
# sort acquisition data
# currently performed with respect to (in this order):
# - repetition
# - slice
# - kspace encode step 1
acq_data.sort()
# retrieve the range of readouts to examine
if ro_range[0] >= ro_range[1] or ro_range[1] >= na:
raise error('Wrong readouts range')
where = range(ro_range[0], ro_range[1])
# retrieve readouts flags
flags = acq_data.get_ISMRMRD_info('flags')
# inspect the first readout flag
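# (flags is a bit field per readout; IMAGE_DATA_MASK has the image-data bits
#  set, so a nonzero bitwise AND marks the readout as image data)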
if flags[0] & IMAGE_DATA_MASK:
print('first readout is image data')
else:
# should see this if input data file is test_2D_2x.h5
print('first readout is not image data')
# display flags
print('Flags', end=' ')
print(flags[where])
# inspect some kspace_encode_step_1 counters
encode_step_1 = acq_data.get_ISMRMRD_info('kspace_encode_step_1')
print('Ky/PE - encoding', end=' ')
print(encode_step_1[where])
# inspect some slice counters
slice = acq_data.get_ISMRMRD_info('slice')
print('Slices', end=' ')
print(slice[where])
# inspect some repetition counters
repetition = acq_data.get_ISMRMRD_info('repetition')
print('Repetitions', end=' ')
print(repetition[where])
# inspect some physiology time stamps
pts = acq_data.get_ISMRMRD_info('physiology_time_stamp')
print('Physiology time stamps', end=' ')
print(pts[where])
# copy raw data into python array and determine its size
# in the case of the provided dataset 'simulated_MR_2D_cartesian.h5' the
# size is 2x256 phase encoding steps, 8 receiver coils and 512 readout
# points (frequency encoding dimension)
acq_array = acq_data.as_array()
acq_shape = acq_array.shape
print('input data dimensions: %dx%dx%d' % acq_shape)
# cap the number of readouts to display
ns = (slice[ni - 1] + 1)*(repetition[ni - 1] + 1)
print('total number of slices: %d' % ns)
nr = ni//ns
print('readouts per slice: %d' % nr)
if ns > slcs:
print('too many slices, showing %d only' % slcs)
ny = slcs*nr # display this many only
else:
ny = ni # display all
acq_array = numpy.transpose(acq_array,(1,0,2))
acq_array = acq_array[:,:ny,:]
title = 'Acquisition data (magnitude)'
show_3D_array(acq_array, power = 0.2, suptitle = title, label = 'coil', \
xlabel = 'samples', ylabel = 'readouts', \
show = False)
# cmap = 'gray', show = False)
cloned_acq_data = acq_data.clone()
cloned_acq_array = cloned_acq_data.as_array()
cloned_acq_shape = cloned_acq_array.shape
print('cloned data dimensions: %dx%dx%d' % cloned_acq_shape)
cloned_acq_array = numpy.transpose(cloned_acq_array,(1,0,2))
cloned_acq_array = cloned_acq_array[:,:ny,:]
title = 'Cloned acquisition data (magnitude)'
show_3D_array(cloned_acq_array, power = 0.2, \
suptitle = title, label = 'coil', \
xlabel = 'samples', ylabel = 'readouts', \
show = False)
# cmap = 'gray', show = False)
# pre-process acquired k-space data
# Prior to image reconstruction, several pre-processing steps such as
# asymmetric echo compensation, noise decorrelation for multi-coil data or
# removal of oversampling along the frequency encoding (i.e. readout or kx)
# direction are applied. So far only the removal of readout oversampling,
# noise decorrelation and asymmetric echo adjusting are implemented.
print('---\n pre-processing acquisition data...')
processed_acq_data = preprocess_acquisition_data(acq_data)
# copy processed acquisition data into an array and determine its size
# by removing the oversampling factor of 2 along the readout direction, the
# number of readout samples was halved
processed_acq_array = processed_acq_data.as_array()
processed_acq_shape = processed_acq_array.shape
print('processed data dimensions: %dx%dx%d' % processed_acq_shape)
processed_acq_array = numpy.transpose(processed_acq_array,(1,0,2))
processed_acq_array = processed_acq_array[:,:ny,:]
title = 'Processed acquisition data (magnitude)'
show_3D_array(processed_acq_array, power = 0.2, \
suptitle = title, label = 'coil', \
xlabel = 'samples', ylabel = 'readouts')
# xlabel = 'samples', ylabel = 'readouts', cmap = 'gray')
###Output
_____no_output_____
###Markdown
This demo differs from the main code due to the range of readouts printed out.
###Code
#'''
#Upper-level interface demo that illustrates how MR data can be interfaced
#from python.
#
#Usage:
# acquisition_data.py [--help | options]
#
#Options:
# -f <file>, --file=<file> raw data file
# [default: simulated_MR_2D_cartesian.h5]
# -p <path>, --path=<path> path to data files, defaults to data/examples/MR
# subfolder of SIRF root folder
# -r <rnge>, --range=<rnge> range of readouts to examine as string '(a,b)'
# [default: '(0,1)'] CHANGED FOR JUPYTER
# -s <slcs>, --slices=<slcs> max number of slices to display [default: 8]
# -e <engn>, --engine=<engn> reconstruction engine [default: Gadgetron]
#'''
#
## CCP PETMR Synergistic Image Reconstruction Framework (SIRF).
## Copyright 2015 - 2017 Rutherford Appleton Laboratory STFC.
## Copyright 2015 - 2017 University College London.
## Copyright 2015 - 2017 Physikalisch-Technische Bundesanstalt.
##
## This is software developed for the Collaborative Computational
## Project in Positron Emission Tomography and Magnetic Resonance imaging
## (http://www.ccppetmr.ac.uk/).
##
## Licensed under the Apache License, Version 2.0 (the "License");
## you may not use this file except in compliance with the License.
## You may obtain a copy of the License at
## http://www.apache.org/licenses/LICENSE-2.0
## Unless required by applicable law or agreed to in writing, software
## distributed under the License is distributed on an "AS IS" BASIS,
## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
## See the License for the specific language governing permissions and
## limitations under the License.
__version__ = '0.1.0'
from docopt import docopt
from ast import literal_eval
# import engine module
#exec('from p' + args['--engine'] + ' import *')
from sirf.Gadgetron import *
# process command-line options
data_file = 'simulated_MR_2D_cartesian.h5'
data_path = examples_data_path('MR')
ro_range = literal_eval('(0,1)')
slcs = 8
scheme = AcquisitionData.get_storage_scheme()
print('storage scheme: %s' % repr(scheme))
# locate the input data file
input_file = existing_filepath(data_path, data_file)
# acquisition data will be read from an HDF file input_file
acq_data = AcquisitionData(input_file)
# the raw k-space data is a list of different readouts
# of different data type (e.g. noise correlation data, navigator data,
# image data,...);
# the number of all readouts is
na = acq_data.number_of_readouts('all')
# the number of image data readouts is
ni = acq_data.number_of_readouts()
print('readouts: total %d, image data %d' % (na, ni))
# sort acquisition data;
# currently performed with respect to (in this order):
# - repetition
# - slice
# - kspace encode step 1
acq_data.sort()
# retrieve the range of readouts to examine
if ro_range[0] >= ro_range[1] or ro_range[1] >= na:
raise error('Wrong readouts range')
where = range(ro_range[0], ro_range[1])
# retrieve readouts flags
flags = acq_data.get_info('flags')
# inspect the first readout flag
if flags[0] & IMAGE_DATA_MASK:
print('first readout is image data')
else:
# should see this if input data file is test_2D_2x.h5
print('first readout is not image data')
# display flags
print('Flags', end=' ')
print(flags[where])
# inspect some kspace_encode_step_1 counters
encode_step_1 = acq_data.get_info('kspace_encode_step_1')
print('Ky/PE - encoding', end=' ')
print(encode_step_1[where])
# inspect some slice counters
slice = acq_data.get_info('slice')
print('Slices', end=' ')
print(slice[where])
# inspect some repetition counters
repetition = acq_data.get_info('repetition')
print('Repetitions', end=' ')
print(repetition[where])
# inspect some physiology time stamps
pts = acq_data.get_info('physiology_time_stamp')
print('Physiology time stamps', end=' ')
print(pts[where])
# copy raw data into python array and determine its size
# in the case of the provided dataset 'simulated_MR_2D_cartesian.h5' the
# size is 2x256 phase encoding steps, 8 receiver coils and 512 readout
# points (frequency encoding dimension)
acq_array = acq_data.as_array()
acq_shape = acq_array.shape
print('input data dimensions: %dx%dx%d' % acq_shape)
# cap the number of readouts to display
ns = (slice[ni - 1] + 1)*(repetition[ni - 1] + 1)
print('total number of slices: %d' % ns)
nr = ni//ns
print('readouts per slice: %d' % nr)
if ns > slcs:
print('too many slices, showing %d only' % slcs)
ny = slcs*nr # display this many only
else:
ny = ni # display all
acq_array = numpy.transpose(acq_array,(1,0,2))
acq_array = acq_array[:,:ny,:]
title = 'Acquisition data (magnitude)'
show_3D_array(acq_array, power = 0.2, suptitle = title, label = 'coil', \
xlabel = 'samples', ylabel = 'readouts', \
show = False)
# cmap = 'gray', show = False)
cloned_acq_data = acq_data.clone()
cloned_acq_array = cloned_acq_data.as_array()
cloned_acq_shape = cloned_acq_array.shape
print('cloned data dimensions: %dx%dx%d' % cloned_acq_shape)
cloned_acq_array = numpy.transpose(cloned_acq_array,(1,0,2))
cloned_acq_array = cloned_acq_array[:,:ny,:]
title = 'Cloned acquisition data (magnitude)'
show_3D_array(cloned_acq_array, power = 0.2, \
suptitle = title, label = 'coil', \
xlabel = 'samples', ylabel = 'readouts', \
show = False)
# cmap = 'gray', show = False)
# pre-process acquired k-space data
# Prior to image reconstruction, several pre-processing steps such as
# asymmetric echo compensation, noise decorrelation for multi-coil data or
# removal of oversampling along the frequency encoding (i.e. readout or kx)
# direction are applied. So far only the removal of readout oversampling,
# noise decorrelation and asymmetric echo adjusting are implemented.
print('---\n pre-processing acquisition data...')
processed_acq_data = preprocess_acquisition_data(acq_data)
# copy processed acquisition data into an array and determine its size
# by removing the oversampling factor of 2 along the readout direction, the
# number of readout samples was halved
processed_acq_array = processed_acq_data.as_array()
processed_acq_shape = processed_acq_array.shape
print('processed data dimensions: %dx%dx%d' % processed_acq_shape)
processed_acq_array = numpy.transpose(processed_acq_array,(1,0,2))
processed_acq_array = processed_acq_array[:,:ny,:]
title = 'Processed acquisition data (magnitude)'
show_3D_array(processed_acq_array, power = 0.2, \
suptitle = title, label = 'coil', \
xlabel = 'samples', ylabel = 'readouts')
# xlabel = 'samples', ylabel = 'readouts', cmap = 'gray')
###Output
_____no_output_____ |
bhsa/display.ipynb | ###Markdown
You might want to consider the [start](search.ipynb) of this tutorial. Short introductions to other TF datasets: [Dead Sea Scrolls](https://nbviewer.jupyter.org/github/annotation/tutorials/blob/master/lorentz2020/dss.ipynb), [Old Babylonian Letters](https://nbviewer.jupyter.org/github/annotation/tutorials/blob/master/lorentz2020/oldbabylonian.ipynb), or the [Q'uran](https://nbviewer.jupyter.org/github/annotation/tutorials/blob/master/lorentz2020/quran.ipynb). Rich display: Text-Fabric offers pretty and plain displays of textual objects. A **plain** display of an object is a simple reference to that object if it is big, or the text of that object if it is small. A **pretty** display of an object is a representation of the structure of that object: it contains text and features of sub-objects, provided the object is not too big.
###Code
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Incantation: The ins and outs of installing Text-Fabric, getting the corpus, and initializing a notebook are explained in the [start tutorial](start.ipynb).
###Code
from tf.app import use
# A = use('bhsa', hoist=globals())
A = use("bhsa:clone", checkout="clone", hoist=globals())
###Output
_____no_output_____
###Markdown
Arbitrary nodes: We pretty-print some (arbitrary) nodes. The first verse:
###Code
v1 = A.nodeFromSectionStr("Genesis 1:1")
v1
A.pretty(v1)
###Output
_____no_output_____
###Markdown
With standard features displayed:
###Code
A.pretty(v1, standardFeatures=True)
###Output
_____no_output_____
###Markdown
Now a phrase. We display it with little and with much information.
###Code
phrase = 651605
A.pretty(phrase, withNodes=False, prettyTypes=False)
A.pretty(phrase, withNodes=True, standardFeatures=True, hideTypes=False)
###Output
_____no_output_____
###Markdown
If we want to see the subphrases but not the phrase atoms:
###Code
A.pretty(phrase, withNodes=True, standardFeatures=True, hiddenTypes="phrase_atom")
###Output
_____no_output_____
###Markdown
Use the following to find out which display options are available and what their current values are.
###Code
A.displayShow()
###Output
_____no_output_____
###Markdown
Where is this phrase on SHEBANQ? You can click on the passage reference. You can generate a link that points to where a node is on SHEBANQ as follows:
###Code
A.webLink(phrase)
###Output
_____no_output_____
###Markdown
If you want just the url:
###Code
A.webLink(phrase, urlOnly=True)
###Output
_____no_output_____
###Markdown
A link to another passage:
###Code
z = A.nodeFromSectionStr("Ezra 3:4")
A.webLink(z)
###Output
_____no_output_____
###Markdown
Plain: We can render a node in plain representation and highlight specific portions.
###Code
firstVerse = F.otype.s("verse")[0]
allPhrases = F.otype.s("phrase")
phrases = {allPhrases[1], allPhrases[3]}
words = (2, 4, 6, 9)
firstSentence = F.otype.s("sentence")[0]
A.plain(firstSentence)
###Output
_____no_output_____
###Markdown
First we highlight some words:
###Code
highlights = set(words)
A.plain(firstVerse, highlights=highlights)
###Output
_____no_output_____
###Markdown
Now some phrases:
###Code
highlights = set(phrases)
print(highlights)
A.plain(firstVerse, highlights=highlights)
###Output
{651576, 651574}
###Markdown
As you see, when we highlight bigger things than words, we put a highlighted border around the words in those things. We can do both:
###Code
highlights = set(phrases) | set(words)
A.plain(firstVerse, highlights=highlights)
###Output
_____no_output_____
###Markdown
We can also highlight the verse itself.
###Code
highlights = {firstVerse}
A.plain(firstVerse, highlights=highlights)
###Output
_____no_output_____
###Markdown
We can use different colors for highlighting:
* some words are red
* some other words are green
* phrases are blue
###Code
highlights = {i: "lightsalmon" for i in [1, 5, 9]}
highlights.update({i: "mediumaquamarine" for i in [3, 7]})
highlights.update({i: "blue" for i in phrases})
highlights.update({firstVerse: "#eeeeee"})
A.plain(firstVerse, highlights=highlights)
###Output
_____no_output_____
###Markdown
Pretty: We define two verse nodes:
###Code
verse1 = A.nodeFromSectionStr("Genesis 1:7")
verse2 = A.nodeFromSectionStr("Genesis 1:17")
###Output
_____no_output_____
###Markdown
and display the first one:
###Code
A.pretty(verse1)
###Output
_____no_output_____
###Markdown
In the next verse we choose a bit more to display: we include standard features:
###Code
A.pretty(verse2, standardFeatures=True)
###Output
_____no_output_____
###Markdown
The labels of the nodes come from features in the data: hover over a label to see which feature is responsible. The same holds for the unnamed features below the words, in particular the gloss. Note that the sentence in this verse continues after the verse ends; that is why it has no left border. In the BHSA, sentences, clauses and phrases may be discontinuous. The designers of the BHSA data (Eep Talstra and Constantijn Sikkel et al.) have added node types *sentence_atom*, *clause_atom* and *phrase_atom*. They are the continuous chunks within the objects of their corresponding non-atom types. The atom types form a nice nest of building blocks. Usually we hide the atom types from view. But we can make them visible:
###Code
A.pretty(verse2, hideTypes=False)
###Output
_____no_output_____
###Markdown
Back to the view without the atoms. We can even leave out the node types:
###Code
A.pretty(verse2, prettyTypes=False)
###Output
_____no_output_____
###Markdown
We put in the features (again) and also add node numbers:
###Code
A.pretty(verse2, withNodes=True, standardFeatures=True)
###Output
_____no_output_____
###Markdown
Now we selectively remove a few features from the display:
###Code
A.pretty(verse2, standardFeatures=True, suppress={"gloss", "typ"})
###Output
_____no_output_____
###Markdown
Now we add features to the display: `lex` and `g_word`:
###Code
A.displaySetup(extraFeatures=["lex", "g_word"], standardFeatures=True)
###Output
_____no_output_____
###Markdown
We also made `standardFeatures=True` the temporary default.
###Code
A.pretty(verse2)
###Output
_____no_output_____
###Markdown
and we reset the pretty features to the default values:
###Code
A.displayReset("extraFeatures")
A.pretty(verse2)
###Output
_____no_output_____
###Markdown
We can also opt for less detail: suppose we do not want to dig deeper than the phrases:
###Code
A.pretty(verse2, baseTypes={"phrase"})
###Output
_____no_output_____
###Markdown
or if clauses are enough:
###Code
A.pretty(verse2, baseTypes={"clause"})
###Output
_____no_output_____
###Markdown
even sentences are possible:
###Code
A.pretty(verse2, baseTypes={"sentence"})
###Output
_____no_output_____
###Markdown
Before we go on, we reset the display completely.
###Code
A.displayReset()
###Output
_____no_output_____
###Markdown
Query results: We run a TF query and show some of its results with a lot of pomp and circumstance. The query is written by Stephen Ku, and he is the one who prompted me to write rich display functions for query results. It asks for a sentence in which there are three clauses, each entirely before the next one. The first clause has a predicate phrase containing a verb. The second clause has a predicate phrase; a verb is neither required nor forbidden. The third clause has an object phrase containing a (proper) noun or a personal/demonstrative/interrogative pronoun.
###Code
ellipQuery = """
sentence
c1:clause
phrase function=Pred
word pdp=verb
c2:clause
phrase function=Pred
c3:clause typ=Ellp
phrase function=Objc
word pdp=subs|nmpr|prps|prde|prin
c1 << c2
c2 << c3
"""
###Output
_____no_output_____
###Markdown
Above is the query *template*. Now we *run* the query.
###Code
results = A.search(ellipQuery)
###Output
2.01s 1473 results
###Markdown
There are several ways to present the results. Here are results 10-12 in a table:
###Code
A.table(results, start=10, end=12)
###Output
_____no_output_____
###Markdown
You can also show the results in pretty displays. The `A.show()` function asks you for some limits (it will not show more than 100 at a time), and then it displays them. It lists the results as follows:
* a heading showing which result in the sequence of all results this is
* a display of all verses that have result material, with the places highlighted that correspond to a node in the result tuple

We show result 10 only.
###Code
A.show(results, start=10, end=10, withNodes=True)
###Output
_____no_output_____
###Markdown
Note that although the *standard* features are not all shown, the features mentioned in the query are shown. We can suppress that as well:
###Code
A.show(results, start=10, end=10, withNodes=True, queryFeatures=False)
###Output
_____no_output_____
###Markdown
We can also package the result tuples in other things than verses, e.g. sentences, and at the same time cut off the displays at phrases:
###Code
A.displaySetup(queryFeatures=False)
A.show(
results,
start=10,
end=12,
withNodes=True,
condenseType="sentence",
baseTypes={"phrase"},
)
###Output
_____no_output_____
###Markdown
Note that now the phrases are heavily highlighted, whereas the highlighted words just have a box around them. Let's leave out some information:
###Code
A.show(
results,
start=10,
end=12,
withNodes=False,
prettyTypes=False,
condenseType="sentence",
baseTypes={"clause"},
withPassage=False,
)
###Output
_____no_output_____ |
data/Alzheimer_paper.ipynb | ###Markdown
Sample Data: Alzheimer's disease **Downloaded datasets from https://www.embopress.org/doi/full/10.15252/msb.20199356.** Citation:```Bader, J., Geyer, P., Müller, J., Strauss, M., Koch, M., & Leypoldt, F. et al. (2020). Proteome profiling in cerebrospinal fluid reveals novel biomarkers of Alzheimer's disease. Molecular Systems Biology, 16(6). doi: 10.15252/msb.20199356```
###Code
import pandas as pd
import numpy as np
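# Load the two supplementary tables: EV1 holds the protein intensity matrix,
# EV2 holds per-sample metadata.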
ev1_raw_df = pd.read_excel("Dataset_EV1.xlsx", skiprows=1)
ev2_raw_df = pd.read_excel("Dataset_EV2.xlsx")
ev1_raw_df.rename(columns = {'Unnamed: 0': 'Genes', 'Unnamed: 1': 'Proteins'}, inplace=True)
ev1_raw_df.rename(columns = {k:k.split("]")[1].strip() for k in ev1_raw_df.columns[2:]}, inplace=True)
ev1_raw_df.drop("Genes", axis=1, inplace=True)
ev1_raw_df.set_index("Proteins", inplace=True)
ev1_df = ev1_raw_df.T.reset_index()
ev1_df.rename(columns = {'index': 'Samples'}, inplace=True)
ev2_raw_df.columns = ['_'+_ for _ in ev2_raw_df.columns]
ev2_raw_df.rename(columns={'_sample name': 'Samples'}, inplace=True)
df = pd.merge(ev1_df, ev2_raw_df, on="Samples", how='left')
print(df.columns)
print(df.shape)
df.describe()
df.iloc[0,:][ev2_raw_df.columns]
# Prepare for exporting the file
df.set_index("Samples", inplace=True)
df.replace('Filtered', np.NaN, inplace=True)
# Export
# df.to_csv("Alzheimer_data.csv", sep=";", index=False)
writer = pd.ExcelWriter('Alzheimer.xlsx', engine='xlsxwriter')
df.to_excel(writer, sheet_name='Data', index=False)
writer.save()
###Output
_____no_output_____ |
Tutorials/TensorFlow_V1/notebooks/6_MultiGPU/multigpu_basics.ipynb | ###Markdown
Multi-GPU Basics
Basic multi-GPU computation example using the TensorFlow library. This tutorial requires your machine to have 2 GPUs.
* "/cpu:0": the CPU of your machine
* "/gpu:0": the first GPU of your machine
* "/gpu:1": the second GPU of your machine

For this example, we are using 2 GTX-980s.
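A minimal sketch (an addition to this tutorial, assuming TF 1.x as used below) to check which devices are visible before running the benchmark:
###Code
from tensorflow.python.client import device_lib

# Lists device names such as /cpu:0, /gpu:0 and /gpu:1
print([d.name for d in device_lib.list_local_devices()])
###Output
_____no_output_____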
###Code
import numpy as np
import tensorflow as tf
import datetime
#Processing Units logs
log_device_placement = True
#num of multiplications to perform
n = 10
# Example: compute A^n + B^n on 2 GPUs
# Create random large matrix
A = np.random.rand(10000, 10000).astype('float32')
B = np.random.rand(10000, 10000).astype('float32')
# Creates a graph to store results
c1 = []
c2 = []
# Define matrix power
def matpow(M, n):
if n < 1: #Abstract cases where n < 1
return M
else:
return tf.matmul(M, matpow(M, n-1))
# Single GPU computing
with tf.device('/gpu:0'):
a = tf.constant(A)
b = tf.constant(B)
#compute A^n and B^n and store results in c1
c1.append(matpow(a, n))
c1.append(matpow(b, n))
with tf.device('/cpu:0'):
sum = tf.add_n(c1) #Addition of all elements in c1, i.e. A^n + B^n
t1_1 = datetime.datetime.now()
with tf.Session(config=tf.ConfigProto(log_device_placement=log_device_placement)) as sess:
# Runs the op.
sess.run(sum)
t2_1 = datetime.datetime.now()
# Multi GPU computing
# GPU:0 computes A^n
with tf.device('/gpu:0'):
#compute A^n and store result in c2
a = tf.constant(A)
c2.append(matpow(a, n))
#GPU:1 computes B^n
with tf.device('/gpu:1'):
#compute B^n and store result in c2
b = tf.constant(B)
c2.append(matpow(b, n))
with tf.device('/cpu:0'):
sum = tf.add_n(c2) #Addition of all elements in c2, i.e. A^n + B^n
t1_2 = datetime.datetime.now()
with tf.Session(config=tf.ConfigProto(log_device_placement=log_device_placement)) as sess:
# Runs the op.
sess.run(sum)
t2_2 = datetime.datetime.now()
print "Single GPU computation time: " + str(t2_1-t1_1)
print "Multi GPU computation time: " + str(t2_2-t1_2)
###Output
Single GPU computation time: 0:00:11.833497
Multi GPU computation time: 0:00:07.085913
|
K_Nearest_Neighbors_Supervised_Machine_Learning.ipynb | ###Markdown
2 Major types of Supervised Machine Learning 1. **Classification**: Predict a class label, which is a choice from a predefined list of possibilities.* Sometimes separated into binary classification, the special case of exactly 2 classes, e.g. predicting whether an email is spam or not spam * Multiclass classification - classification between more than 2 classes, e.g. predicting an iris species out of 3 classes. 2. **Regression**: predict a continuous number, or a floating-point number in programming terms, e.g. predicting a person's income from the person's age, home address, and education level Generalization, Overfitting, and UnderfittingGeneralization = the model's ability to make accurate predictions on the test set after learning from the training set. Overfitting = an overly complex model that works for the training set but not the test set. Underfitting = a model too simple for both the training set and the test set.
###Code
#Install the mglearn package for this book
! pip install mglearn
import mglearn
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
print("pandas version:", pd.__version__)
import matplotlib
print("matplotlib version:", matplotlib.__version__)
import matplotlib.pyplot as plt
import numpy as np
print("NumPy version:", np.__version__)
import scipy as sp
print("SciPy version:", sp.__version__)
import IPython
print("IPython version:", IPython.__version__)
from IPython.display import display
import sklearn
print("scikit-learn version:", sklearn.__version__)
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
#generate dataset
x, y = mglearn.datasets.make_forge()
#plot dataset
mglearn.discrete_scatter(x[:,0], x[:, 1], y)
plt.legend(["Class 0", "Class 1"], loc=4)
plt.xlabel("First feature")
plt.ylabel("Second feature")
print("X.shape: ", x.shape)
###Output
X.shape: (26, 2)
###Markdown
Illustrate regression algorithmsUse a synthetic wave dataset. The wave dataset has a single input feature and a continuous target variable (or response)
###Code
X, y = mglearn.datasets.make_wave(n_samples = 40)
plt.plot(X, y, 'o')
plt.ylim(-3, 3)
plt.xlabel("Feature")
plt.ylabel("Target")
from sklearn.datasets import load_breast_cancer
#wisconsin breast cancer dataset
cancer = load_breast_cancer()
print("cancer.keys():\n", cancer.keys())
###Output
cancer.keys():
dict_keys(['data', 'target', 'target_names', 'DESCR', 'feature_names', 'filename'])
###Markdown
Bunch classes are like dictionaries and can be accessed using a dot. bunch.key instead of bunch["keys"]
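A quick check (using the `cancer` Bunch loaded above) that both access styles return the very same object:
###Code
# dictionary-style and attribute-style access hit the same underlying array
print(cancer["data"] is cancer.data)
###Output
_____no_output_____
###Markdown
The same dot access works for the other keys: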
###Code
print("Shape of cancer data:", cancer.data.shape)
print("Sample counts per class:\n",
{n: v for n, v in zip(cancer.target_names, np.bincount(cancer.target))})
print("Feaute names: \n", cancer.feature_names)
print("Cancer Description: \n", cancer.DESCR)
###Output
Cancer Description:
.. _breast_cancer_dataset:
Breast cancer wisconsin (diagnostic) dataset
--------------------------------------------
**Data Set Characteristics:**
:Number of Instances: 569
:Number of Attributes: 30 numeric, predictive attributes and the class
:Attribute Information:
- radius (mean of distances from center to points on the perimeter)
- texture (standard deviation of gray-scale values)
- perimeter
- area
- smoothness (local variation in radius lengths)
- compactness (perimeter^2 / area - 1.0)
- concavity (severity of concave portions of the contour)
- concave points (number of concave portions of the contour)
- symmetry
- fractal dimension ("coastline approximation" - 1)
The mean, standard error, and "worst" or largest (mean of the three
largest values) of these features were computed for each image,
resulting in 30 features. For instance, field 3 is Mean Radius, field
13 is Radius SE, field 23 is Worst Radius.
- class:
- WDBC-Malignant
- WDBC-Benign
:Summary Statistics:
===================================== ====== ======
Min Max
===================================== ====== ======
radius (mean): 6.981 28.11
texture (mean): 9.71 39.28
perimeter (mean): 43.79 188.5
area (mean): 143.5 2501.0
smoothness (mean): 0.053 0.163
compactness (mean): 0.019 0.345
concavity (mean): 0.0 0.427
concave points (mean): 0.0 0.201
symmetry (mean): 0.106 0.304
fractal dimension (mean): 0.05 0.097
radius (standard error): 0.112 2.873
texture (standard error): 0.36 4.885
perimeter (standard error): 0.757 21.98
area (standard error): 6.802 542.2
smoothness (standard error): 0.002 0.031
compactness (standard error): 0.002 0.135
concavity (standard error): 0.0 0.396
concave points (standard error): 0.0 0.053
symmetry (standard error): 0.008 0.079
fractal dimension (standard error): 0.001 0.03
radius (worst): 7.93 36.04
texture (worst): 12.02 49.54
perimeter (worst): 50.41 251.2
area (worst): 185.2 4254.0
smoothness (worst): 0.071 0.223
compactness (worst): 0.027 1.058
concavity (worst): 0.0 1.252
concave points (worst): 0.0 0.291
symmetry (worst): 0.156 0.664
fractal dimension (worst): 0.055 0.208
===================================== ====== ======
:Missing Attribute Values: None
:Class Distribution: 212 - Malignant, 357 - Benign
:Creator: Dr. William H. Wolberg, W. Nick Street, Olvi L. Mangasarian
:Donor: Nick Street
:Date: November, 1995
This is a copy of UCI ML Breast Cancer Wisconsin (Diagnostic) datasets.
https://goo.gl/U2Uwz2
Features are computed from a digitized image of a fine needle
aspirate (FNA) of a breast mass. They describe
characteristics of the cell nuclei present in the image.
Separating plane described above was obtained using
Multisurface Method-Tree (MSM-T) [K. P. Bennett, "Decision Tree
Construction Via Linear Programming." Proceedings of the 4th
Midwest Artificial Intelligence and Cognitive Science Society,
pp. 97-101, 1992], a classification method which uses linear
programming to construct a decision tree. Relevant features
were selected using an exhaustive search in the space of 1-4
features and 1-3 separating planes.
The actual linear program used to obtain the separating plane
in the 3-dimensional space is that described in:
[K. P. Bennett and O. L. Mangasarian: "Robust Linear
Programming Discrimination of Two Linearly Inseparable Sets",
Optimization Methods and Software 1, 1992, 23-34].
This database is also available through the UW CS ftp server:
ftp ftp.cs.wisc.edu
cd math-prog/cpo-dataset/machine-learn/WDBC/
.. topic:: References
- W.N. Street, W.H. Wolberg and O.L. Mangasarian. Nuclear feature extraction
for breast tumor diagnosis. IS&T/SPIE 1993 International Symposium on
Electronic Imaging: Science and Technology, volume 1905, pages 861-870,
San Jose, CA, 1993.
- O.L. Mangasarian, W.N. Street and W.H. Wolberg. Breast cancer diagnosis and
prognosis via linear programming. Operations Research, 43(4), pages 570-577,
July-August 1995.
- W.H. Wolberg, W.N. Street, and O.L. Mangasarian. Machine learning techniques
to diagnose breast cancer from fine-needle aspirates. Cancer Letters 77 (1994)
163-171.
###Markdown
Boston Housing Market Using regression to predict housing prices in Boston (the median value of homes in several Boston neighborhoods in the 1970s) from information such as crime rate, proximity to the Charles River, highway accessibility, etc.
###Code
from sklearn.datasets import load_boston
boston = load_boston()
print("Boston Data Shape: ", boston.data.shape)
print("Boston Description: \n", boston.DESCR)
X, y = mglearn.datasets.load_extended_boston()
print("X.shape: ", X.shape)
###Output
X.shape: (506, 104)
###Markdown
k-Nearest NeighborsThe k-NN algorithm is the simplest machine learning algorithm. Building the model consists only of storing the training dataset. To make a prediction for a new data point, the algorithm finds the closest data points in the training dataset - its "nearest" neighbors
###Code
mglearn.plots.plot_knn_classification(n_neighbors = 1)
###Output
/usr/local/lib/python3.7/dist-packages/sklearn/utils/deprecation.py:87: FutureWarning: Function make_blobs is deprecated; Please import make_blobs directly from scikit-learn
warnings.warn(msg, category=FutureWarning)
###Markdown
Using voting to find an arbitrary number k of neighbors
###Code
mglearn.plots.plot_knn_classification(n_neighbors = 3)
mglearn.plots.plot_knn_classification(n_neighbors = 4)
mglearn.plots.plot_knn_classification(n_neighbors = 6)
mglearn.plots.plot_knn_classification(n_neighbors = 10)
from sklearn.model_selection import train_test_split
X, y = mglearn.datasets.make_forge()
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 0)
from sklearn.neighbors import KNeighborsClassifier
clf = KNeighborsClassifier(n_neighbors = 3)
###Output
/usr/local/lib/python3.7/dist-packages/sklearn/utils/deprecation.py:87: FutureWarning: Function make_blobs is deprecated; Please import make_blobs directly from scikit-learn
warnings.warn(msg, category=FutureWarning)
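###Markdown
The "voting" is simple majority rule over the labels of the k nearest training points. A minimal illustration of the idea (not scikit-learn's actual implementation):
###Code
from collections import Counter
# labels of the k nearest training points -> predict the most common one
neighbor_labels = [0, 1, 1]
print(Counter(neighbor_labels).most_common(1)[0][0]) # -> 1
###Output
_____no_output_____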
###Markdown
Fitting the classifier using the training set. For KNeighborsClassifier, this simply stores the dataset; the neighbors are computed during prediction
###Code
clf.fit(X_train, y_train)
print("Test set predictions:", clf.predict(X_test))
###Output
Test set predictions: [1 0 1 0 1 0 0]
###Markdown
To evaluate the model's generalization, call the `score` method with the test data and test labels
###Code
print("Test set accuracy: {:.2f}".format(clf.score(X_test, y_test)))
###Output
Test set accuracy: 0.86
###Markdown
Analyzing KNeighborsClassifier For 2-dimensional datasets, illustrate the prediction for all possible test points in the xy-plane. Color the plane according to the class that would be assigned to a point in this region. *Decision boundary* - the divide between the region where the algorithm assigns class 0 and the region where it assigns class 1
###Code
fig, axes = plt.subplots(1, 3, figsize = (10 , 3)) #this will create the boxes
fig, axes = plt.subplots(1, 3, figsize = (10 , 3))
for n_neighbors, ax in zip([1, 3, 9], axes):
# the fit method returns the object self, so we can instantiate and fit in one line
clf = KNeighborsClassifier(n_neighbors = n_neighbors).fit(X,y)
mglearn.plots.plot_2d_separator(clf, X, fill = True, eps = 0.5, ax = ax, alpha = .4)
#pretty pictures
fig, axes = plt.subplots(1, 3, figsize = (10 , 3))
for n_neighbors, ax in zip([1, 3, 9], axes):
# the fit method returns the object self, so we can instantiate and fit in one line
clf = KNeighborsClassifier(n_neighbors = n_neighbors).fit(X,y)
mglearn.plots.plot_2d_separator(clf, X, fill = True, eps = 0.56, ax = ax, alpha = .8)
#pretty pictures
#change the color
fig, axes = plt.subplots(1, 3, figsize = (10 , 3)) #this will show the 3 graphs
for n_neighbors, ax in zip([1, 3, 9], axes):
# the fit method returns the object self, so we can instantiate and fit in one line
clf = KNeighborsClassifier(n_neighbors = n_neighbors).fit(X,y)
mglearn.plots.plot_2d_separator(clf, X, fill = True, eps = 0.56, ax = ax, alpha = .8)
mglearn.discrete_scatter(X[:,0], X[:, 1], y, ax = ax)
axes[0].legend(loc = 3)
fig, axes = plt.subplots(1, 3, figsize = (10 , 3))
for n_neighbors, ax in zip([1, 3, 9], axes):
# the fit method returns the object self, so we can instantiate and fit in one line
clf = KNeighborsClassifier(n_neighbors = n_neighbors).fit(X,y)
mglearn.plots.plot_2d_separator(clf, X, fill = True, eps = 0.56, ax = ax, alpha = .8)
mglearn.discrete_scatter(X[:,0], X[:, 1], y, ax = ax)
ax.set_title("{} neighbor(s)".format(n_neighbors))
ax.set_xlabel("feature 0")
ax.set_ylabel("feature 1")
axes[0].legend(loc = 3)
###Output
_____no_output_____
###Markdown
Decision boundaries created by the nearest neighbors model for different values of n_neighbors. Using more and more neighbors leads to a smoother decision boundary. A smoother boundary corresponds to a simpler model. Investigating the breast cancer model Split the dataset into a training and a test set. Evaluate the training and test set performance with different numbers of neighbors
###Code
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target, stratify = cancer.target, random_state = 66)
training_accuracy = []
test_accuracy = []
#try n_neighbors from 1 to 10
neighbors_settings = range(1, 11)
for n_neighbors in neighbors_settings:
#build the model
clf = KNeighborsClassifier(n_neighbors = n_neighbors)
clf.fit(X_train, y_train)
#record training set accuracy
training_accuracy.append(clf.score(X_train, y_train))
#record generalization accuracy
test_accuracy.append(clf.score(X_test, y_test))
plt.plot(neighbors_settings, training_accuracy, label = "training accuracy")
plt.plot(neighbors_settings, test_accuracy, label = "test accuracy")
plt.ylabel("Accuracy")
plt.xlabel("n_neighbors")
plt.legend()
###Output
_____no_output_____
###Markdown
K-neighbors regressionThe regression variant of the k-nearest neighbors algorithm. Use the wave dataset. With a single neighbor, the prediction is simply the target value of the nearest neighbor
###Code
mglearn.plots.plot_knn_regression(n_neighbors = 1)
###Output
_____no_output_____
###Markdown
*Predictions made by one-nearest-neighbor regression on the wave dataset*We can also use more than the single closest neighbor for regression. When using multiple nearest neighbors, the prediction is the average of the relevant neighbors
###Code
mglearn.plots.plot_knn_regression(n_neighbors= 3)
mglearn.plots.plot_knn_regression(n_neighbors= 5)
from sklearn.neighbors import KNeighborsRegressor
X, y = mglearn.datasets.make_wave(n_samples = 40)
#split the wave dataset into a training and a test set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 0)
#instantiate the model and set the number of neighbors to consider to 3
reg = KNeighborsRegressor(n_neighbors = 3)
#fit the model using the training data and training targets
reg.fit(X_train, y_train)
print("Test set predictions: \n", reg.predict(X_test))
###Output
Test set predictions:
[-0.05396539 0.35686046 1.13671923 -1.89415682 -1.13881398 -1.63113382
0.35686046 0.91241374 -0.44680446 -1.13881398]
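###Markdown
The averaging that `KNeighborsRegressor` performs can be written in a few lines of NumPy. A minimal sketch of the idea (an illustration, not scikit-learn's actual implementation), for a single 1-dimensional query point:
###Code
def knn_regress(X_tr, y_tr, x_query, k=3):
    dists = np.abs(X_tr.ravel() - x_query) # distances to all training points
    nearest = np.argsort(dists)[:k] # indices of the k closest points
    return y_tr[nearest].mean() # prediction = average of their targets
print(knn_regress(X_train, y_train, 0.5, k=3))
###Output
_____no_output_____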
###Markdown
Evaluate the model using the score method, which for regressors returns the R^2 score. R^2, or the coefficient of determination, is a measure of the goodness of a prediction for a regression model. It yields a score that is usually between 0 and 1, with 1 being a perfect prediction and 0 corresponding to a constant model that predicts the mean of the training set responses, y_train. R^2 can even be negative, which indicates a model that performs worse than that constant mean prediction.
###Code
print("Test set R^2: {:2f}".format(reg.score(X_test, y_test)))
###Output
Test set R^2: 0.834417
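###Markdown
For reference, the score being computed here is the standard coefficient of determination:$$R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}$$where $\hat{y}_i$ are the model's predictions and $\bar{y}$ is the mean of the true targets.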
###Markdown
A score of 0.83 is a relatively good model fit. Analyzing KNeighborsRegressorFor a 1-dimensional dataset, we can see what the predictions look like for all possible feature values.
###Code
fig, axes = plt.subplots(1, 3, figsize = (15, 4))
# create 1,000 data points, evenly spaced between -3 and 3
line = np.linspace(-3, 3, 1000).reshape(-1,1)
from sklearn.neighbors import KNeighborsRegressor
X, y = mglearn.datasets.make_wave(n_samples = 40)
#split the wave dataset into a training and a test set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 0)
#instantiate the model and set the number of neighbors to consider to 3
reg = KNeighborsRegressor(n_neighbors = 3)
#fit the model using the training data and training targets
reg.fit(X_train, y_train)
print("Test set predictions: \n", reg.predict(X_test))
fig, axes = plt.subplots(1, 3, figsize=(15, 4))
# create 1,000 data points, evenly spaced between -3 and 3
line = np.linspace(-3, 3, 1000).reshape(-1,1)
for n_neighbors, ax in zip([1, 3, 9], axes):
# make predictions using 1, 3, or 9 neighbors
reg = KNeighborsRegressor(n_neighbors=n_neighbors)
reg.fit(X_train, y_train)
ax.plot(line, reg.predict(line))
ax.plot(X_train, y_train, '^', c=mglearn.cm2(0), markersize = 8)
ax.plot(X_test, y_test, 'v', c= mglearn.cm2(1), markersize = 8)
ax.set_title("{} neighbors(s)\n train score: {:.2} test score: {:.2f}".format(n_neighbors, reg.score(X_train, y_train), reg.score(X_test, y_test)))
ax.set_xlabel("Features")
ax.set_ylabel("Target")
axes[0].legend(["Model predictions", "Training data/target", "Test data/target"], loc = "best")
fig, axes = plt.subplots(1, 3, figsize=(15, 4))
# create 1,000 data points, evenly spaced between -3 and 3
line = np.linspace(-3, 3, 1000).reshape(-1, 1)
for n_neighbors, ax in zip([1, 3, 9], axes):
# make predictions using 1, 3, or 9 neighbors
reg = KNeighborsRegressor(n_neighbors=n_neighbors)
reg.fit(X_train, y_train)
ax.plot(line, reg.predict(line))
ax.plot(X_train, y_train, '^', c=mglearn.cm2(0), markersize=8)
ax.plot(X_test, y_test, 'v', c=mglearn.cm2(1), markersize=8)
ax.set_title(
"{} neighbor(s)\n train score: {:.2f} test score: {:.2f}".format(
n_neighbors, reg.score(X_train, y_train),
reg.score(X_test, y_test)))
ax.set_xlabel("Feature")
ax.set_ylabel("Target")
axes[0].legend(["Model predictions", "Training data/target",
"Test data/target"], loc="best")
###Output
_____no_output_____ |
c_01_intro_to_CV_and_Python/basic_python_tutorial_part_2.ipynb | ###Markdown
Python Workshop: Basics II[](https://colab.research.google.com/github/YoniChechik/AI_is_Math/blob/master/c_01_intro_to_CV_and_Python/basic_python_tutorial_part_2.ipynb)Based on:this [git](https://github.com/zhiyzuo/python-tutorial) of Zhiya Zuo&tutorials from [tutorialspoint](https://www.tutorialspoint.com/python) Control LogicsIn the following examples, we show examples of comparison, `if-else` loop, `for` loop, and `while` loop. ComparisonPython syntax for comparison is the same as our hand-written convention:1. Larger (or equal): `>` (`>=`)2. Smaller (or equal): `<` (`<=`)3. Equal to: `==`4. Not equal to: `!=`
###Code
3 == 5
72 >= 2
test_str = "test"
test_str == "test" # can also compare strings
###Output
_____no_output_____
###Markdown
If-Else
###Code
sum_ = 0
if sum_ == 0:
print("sum_ is 0")
elif sum_ < 0:
print("sum_ is less than 0")
else:
print("sum_ is above 0 and its value is " + str(sum_)) # Cast sum_ into string type.
###Output
sum_ is 0
###Markdown
Comparing strings is similar
###Code
store_name = "Walmart"
if "Wal" in store_name:
print("The store is not Walmart. It's " + store_name + ".")
else:
print("The store is Walmart.")
###Output
The store is not Walmart. It's Walmart.
###Markdown
For loop
###Code
for letter in store_name:
print(letter)
###Output
W
a
l
m
a
r
t
###Markdown
`range()` is a function to create integer sequences:
###Code
a_range = range(5)
print(a_range)
print("range(5) gives" + str(list(range(5)))) # By default starts from 0
print("range(1,9) gives: " + str(list(range(1, 9)))) # From 1 to 8 (Again the end index is exclusive.)
for index in range(len(store_name)): # length of a sequence
print("The %ith letter in store_name is: %s" % (index, store_name[index]))
###Output
The 0th letter in store_name is: W
The 1th letter in store_name is: a
The 2th letter in store_name is: l
The 3th letter in store_name is: m
The 4th letter in store_name is: a
The 5th letter in store_name is: r
The 6th letter in store_name is: t
###Markdown
List comprehensionsList comprehensions provides an easy way to create lists:
###Code
x = [i for i in range(10)]
print(x)
###Output
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
###Markdown
a lot of cool things can be done in one line!
###Code
x = [i + 2 for i in range(10)]
print(x)
x = [i ** 2 for i in range(10) if i % 2 == 0]
print(x)
###Output
[2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
[0, 4, 16, 36, 64]
###Markdown
While loop
###Code
x = 2
while x < 10:
print(x)
x = x + 1
###Output
2
3
4
5
6
7
8
9
###Markdown
Notes on `break` and `continue``break` means get out of the loop immediately. Any code after the `break` will NOT be executed.
###Code
store_name = "Walmart"
index = 0
while True:
print(store_name[index])
index += 1 # a += b means a = a + b
if store_name[index] == "a":
print("End at a")
break # instead of setting flag to False, we can directly break out of the loop
print("Hello!") # This will NOT be run
###Output
W
End at a
###Markdown
`continue` means get to the next iteration of loop. It will __break__ the current iteration and __continue__ to the next.
###Code
for letter in store_name:
if letter == "a":
continue # Not printing a
else:
print(letter)
###Output
W
l
m
r
t
###Markdown
FunctionsStructure of a function```pythondef func_name(arg1, arg2, arg3, ...): # Do something here return output``` `return output` is **NOT** requiredOne input, one output
###Code
def F(n): # bonus- what does this function do?
if n < 0:
print("Incorrect input")
elif n == 0:
return 0
elif n == 1:
return 1
else:
return F(n - 1) + F(n - 2)
print(F(2))
print(F(3))
print(F(5))
print(F(7))
###Output
1
2
5
13
###Markdown
Multiple outputsreference- geometric sequence equations:$a_n = a_1 \cdot q^{n-1}$$S_n = \frac{a_1\cdot(q^n-1)}{q-1}$
###Code
def geo_seq(a_1, q, n):
a_n = a_1 * (q ** (n - 1))
S_n = (a_1 * (q ** n - 1)) / (q - 1)
return a_n, S_n
print(geo_seq(2, 2, 1)) # multiple outputs returns as a tuple
print(geo_seq(2, 2, 2))
print(geo_seq(2, 2, 2))
print(geo_seq(2, 2, 3)[1]) # get only second element
###Output
(2, 2.0)
(4, 6.0)
(4, 6.0)
14.0
###Markdown
optional args
###Code
def geo_seq_optional_args(a_1, q=2, n=1):
a_n = a_1 * (q ** (n - 1))
S_n = (a_1 * (q ** n - 1)) / (q - 1)
return a_n, S_n
print(geo_seq_optional_args(2))
print(geo_seq_optional_args(2, n=2))
###Output
(2, 2.0)
(4, 6.0)
###Markdown
ClassesAs said before - Python is an object-oriented programming (OOP) language, so every variable is actually an instance of some class.Here are some class basics:
###Code
class Employee:
# the function that is being called each time a new instance is created
def __init__(self, name="Jhon", salary=10000):
# per instance variables
self.name = name
self.salary = salary
def display_employee(self):
print("Name : " + self.name + ", Salary: " + str(self.salary))
def change_salary(self, new_salary):
self.salary = new_salary
emp1 = Employee() # create new instance
emp1.display_employee()
emp2 = Employee("Bob", salary=20000) # create new instance
emp2.display_employee()
emp2.change_salary(30000)
emp2.display_employee()
# instance variables are also accessible - no such thing private/public vars
emp2.name = "Larry"
emp2.display_employee()
###Output
Name : Jhon, Salary: 10000
Name : Bob, Salary: 20000
Name : Bob, Salary: 30000
Name : Larry, Salary: 30000
###Markdown
File I/OThis section is about some basics of reading and writing data, in native Python style Write data to a file
###Code
f = open("tmp1.csv", "w") # f is a file handler, while "w" is the mode (w for write)
for item in range(6):
f.write(str(item) + "\n")
f.close()  # close the file handler for security reasons.
###Output
_____no_output_____
###Markdown
*Note that without the typecasting from `int` to `str`, an error will be raised.*A more commonly used way:
###Code
with open("tmp2.csv", "w") as f:
for item in range(4):
f.write(str(item))
f.write("\n")
# no need to close file, when out of 'with' scope the file closes automatically
###Output
_____no_output_____
###Markdown
Occasionally, we need to _append new elements_ instead of _overwriting_ existing files. In this case, we should use `a` mode in our `open` function:
###Code
with open("tmp2.csv", "a") as f: # 'a' == append to end of file
for item in range(15, 19):
f.write(str(item) + "\n")
###Output
_____no_output_____
###Markdown
Read data from a fileTo read a text file into Python, we use `r` mode (for _read_)
###Code
f = open("tmp1.csv", "r") # this time, use read mode
contents = [
item.strip("\n") for item in f
] # list comprehension. This is the same as for-loop but more concise + stripping newline
print(contents)
f.close()
###Output
['0', '1', '2', '3', '4', '5']
###Markdown
Also using `with`:
###Code
with open("tmp2.csv", "r") as f:
print(f.readlines())
# delete the files...
import os
os.remove("tmp1.csv")
os.remove("tmp2.csv")
###Output
_____no_output_____
###Markdown
PackagesOftentimes, we need either internal or external help for complicated computation tasks. On these occasions, we need to _import packages_. Built-in packagesPython provides many built-in packages to save extra work on some common and useful functionsWe will use __math__ as an example.
###Code
import math # use import to load a library
###Output
_____no_output_____
###Markdown
To use functions from the library, do: `library_name.function_name`. For example, when we want to calculate the logarithm using a function from `math` library, we can do `math.log`
###Code
x = 3
print("e^x = e^3 = %f" % math.exp(x))
print("log(x) = log(3) = %f" % math.log(x))
###Output
e^x = e^3 = 20.085537
log(x) = log(3) = 1.098612
###Markdown
You can also import one specific function:
###Code
from math import exp # You can import a specific function
print(exp(x)) # This way, you don't need to use math.exp but just exp
###Output
20.085536923187668
###Markdown
Or all:
###Code
from math import *  # Import all functions - not recommended due to overriding of functions
print(exp(x))
print(log(x)) # Before importing math, calling `exp` or `log` will raise errors
###Output
20.085536923187668
1.0986122886681098
###Markdown
You can import a package with a shortened name:
###Code
import math as m
m.exp(3)
###Output
_____no_output_____ |
youtuberankwithpage.ipynb | ###Markdown
pagination
###Code
uri = 'https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page='
songs = [] # or list()
for page in range(1,11):
target = uri+str(page)
print(target)
browser.get(target)
html = browser.page_source
soup = BeautifulSoup(html, 'html.parser')
contents = soup.select('tr.aos-init')
for content in contents:
category = content.select('p.category')[0]
title = content.select('h1 > a[href*="board"]')[0]
songs.append([category.text.strip(), title.text.strip()])
len(songs)
pd_data = pd.DataFrame(songs)
pd_data.to_excel('./saves/youtube_rank.xlsx')
###Output
_____no_output_____
###Markdown
href="https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page=4" href="https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page=7" pagination (my own version)
###Code
url = "https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page="
songs = [] # or list()
for pagenum in range(1,11):
target = f'{url}{pagenum}'
print(target)
browser.get(target)
html = browser.page_source
soup = BeautifulSoup(html, 'html.parser')
contents = soup.select('tr.aos-init')
for content in contents:
category = content.select('p.category')[0]
title = content.select('h1 > a[href*="board"]')[0]
# print(category.text.strip(), title.text.strip())
songs.append([category.text.strip(), title.text.strip()])
len(songs)
pd_data = pd.DataFrame(songs)
pd_data.to_excel('./saves/youtuberank.xls', index=False)
###Output
_____no_output_____
###Markdown
pagination
###Code
uri = 'https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page='
song = []
for page in range(1, 11):
target = uri + str(page)
print(target)
browser.get(target)
html = browser.page_source
soup = BeautifulSoup(html, 'html.parser')
contents = soup.select('tr.aos-init')
for content in contents:
category = content.select('p.category')[0]
title = content.select('h1 > a[href*="board"]')[0]
songs.append([category.text.strip(), title.text.strip()])
len(songs)
###Output
https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page=1
https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page=2
https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page=3
https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page=4
https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page=5
https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page=6
https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page=7
https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page=8
https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page=9
https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page=10
###Markdown
pagination
###Code
uri= 'https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page='
songs = [] # or list()
for page in range(1,11):
target=uri+str(page)
print(target)
browser.get(target)
html=browser.page_source
soup=BeautifulSoup(html,'html.parser')
contents = soup.select('tr.aos-init')
for content in contents:
category = content.select('p.category')[0]
title = content.select('h1 > a[href*="board"]')[0]
songs.append([category.text.strip(), title.text.strip()])
len(songs)
pd_data = pd.DataFrame(songs)
pd_data.to_excel('./saves/youtuberankwithpage.xls')
###Output
_____no_output_____
###Markdown
pagination
###Code
range(1, 10)
list = range(1, 10)
list
url = 'https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page='
songs = [] # or list()
for page in range(1,11):
target = url+str(page)
print(target)
browser.get(target)
html = browser.page_source
soup = BeautifulSoup(html, 'html.parser')
contents = soup.select('tr.aos-init')
for content in contents:
category = content.select('p.category')[0]
title = content.select('h1 > a[href*="board"]')[0]
songs.append([category.text.strip(), title.text.strip()])
pd_data = pd.DataFrame(songs)
pd_data.to_excel('./saves/youtuberankwithpage.xls')
###Output
_____no_output_____
###Markdown
Pagination
###Code
uri = 'https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page='
songs = [] # or list()
for page in range(1,11):
target = uri+str(page)
print(target)
browser.get(target)
html = browser.page_source
soup = BeautifulSoup(html,'html.parser')
contents = soup.select('tr.aos-init')
for content in contents:
category = content.select('p.category')[0]
title = content.select('h1 > a[href*="board"]')[0]
songs.append([category.text.strip(), title.text.strip()])
len(songs)
pd_data = pd.DataFrame(songs)
pd_data.to_excel('./saves/youtuberankwithpage.xls')
###Output
_____no_output_____
###Markdown
pagination
###Code
uri = 'https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page='
songs = [] # or list()
for page in range(1,11):
target = uri+str(page)
print(target)
browser.get(target)
html = browser.page_source
soup = BeautifulSoup(html,'html.parser')
contents = soup.select('tr.aos-init')
for content in contents:
category = content.select('p.category')[0]
title = content.select('h1 > a[href*="board"]')[0]
songs.append([category.text.strip(), title.text.strip()])
len(songs)
pd_data = pd.DataFrame(songs)
pd_data.to_excel('./saves/youtuberankwithpage.xls')
###Output
_____no_output_____
###Markdown
Pagination
###Code
uri = 'https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page='
# uri = f"https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page={page}"
songs = [] # or list()
for page in range(1,11):
target = uri+str(page)
print(target)
browser.get(target)
# browser.get(uri)
html = browser.page_source
soup = BeautifulSoup(html, 'html.parser')
contents = soup.select('tr.aos-init')
for content in contents:
category = content.select('p.category')[0]
title = content.select('h1 > a[href*="board"]')[0]
songs.append([category.text.strip(), title.text.strip()])
len(songs)
pd_data = pd.DataFrame(songs)
pd_data.to_excel('./saves/youtuberankwithpage.xls')
###Output
_____no_output_____
###Markdown
pagination
###Code
url = 'https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page='
songs = [] # or list()
for page in range(1,11):
target = url+str(page)
print(target)
browser.get(target)
html = browser.page_source
soup = BeautifulSoup(html, 'html.parser')
contents = soup.select('tr.aos-init')
for content in contents:
category = content.select('p.category')[0]
title = content.select('h1 > a[href*="board"]')[0]
songs.append([category.text.strip(), title.text.strip()])
len(songs)
pd_data = pd.DataFrame(songs)
pd_data.to_excel('./saves/youtuberankwithpage.xls')
###Output
_____no_output_____
###Markdown
pagination
###Code
# Loading a URL in the browser is Selenium's job (o), not BeautifulSoup's (x)
url = 'https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page='
songs = [] # or list()
for page in range(1, 11):
    target = url + str(page)
    print(target)
    browser.get(target)
    html = browser.page_source
    soup = BeautifulSoup(html, 'html.parser')
    contents = soup.select('tr.aos-init') # this line was missing: re-select rows from the freshly parsed page
    for content in contents:
category = content.select('p.category')[0]
title = content.select('h1 > a[href*="board"]')[0]
songs.append([category.text.strip(), title.text.strip()])
len(songs)
pd_data = pd.DataFrame(songs)
pd_data.to_excel('./saves/youtuberankwithpage.xls')
###Output
_____no_output_____
###Markdown
pagination
###Code
uri = 'https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page='
# uri = f"https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page={page}"
songs = [] # or list()
for page in range(1,11):
target = uri+str(page)
print(target)
browser.get(target)
# browser.get(uri)
html = browser.page_source
soup = BeautifulSoup(html, 'html.parser')
contents = soup.select('tr.aos-init')
for content in contents:
category = content.select('p.category')[0]
title = content.select('h1 > a[href*="board"]')[0]
songs.append([category.text.strip(), title.text.strip()])
len(songs)
pd_data = pd.DataFrame(songs)
pd_data.to_excel('./saves/youtuberankwithpage.xls')
###Output
_____no_output_____
###Markdown
pagination
###Code
url = 'https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page='
songs = []
for page in range(1,11):
target = url+str(page)
browser.get(target)
html = browser.page_source
soup = BeautifulSoup(html, 'html.parser')
contents = soup.select('tr.aos-init')
for content in contents:
category = content.select('p.category')[0]
title = content.select('h1 > a[href*="board"]')[0]
songs.append([category.text.strip(), title.text.strip()])
songs
len(songs)
pd_data = pd.DataFrame(songs)
pd_data.to_excel('./saves/youtuberankwithpage.xls')
###Output
_____no_output_____
###Markdown
pagination
###Code
url = 'https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page='
# url = f'https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page={page}'
songs = [] # or list()
for page in range(1,11):
target = url+str(page)
print(target)
browser.get(target)
html = browser.page_source
soup = BeautifulSoup(html, 'html.parser')
contents = soup.select('tr.aos-init')
for content in contents:
category = content.select('p.category')[0]
title = content.select('h1 > a[href*="board"]')[0]
songs.append([category.text.strip(), title.text.strip()])
len(songs)
pd_data = pd.DataFrame(songs)
pd_data.to_excel('./saves/youtuberank.xls')
###Output
_____no_output_____
###Markdown
pagination: take the base URL, build each page address by appending the page number in a for loop, and load it in the browser
###Code
url = 'https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page='
songs = [] # or list()
for page in range(1,11):
target = url+str(page)
print(target)
browser.get(target)
html = browser.page_source
soup = BeautifulSoup(html, 'html.parser')
contents = soup.select('tr.aos-init')
for content in contents:
category = content.select('p.category')[0]
title = content.select('h1 > a[href*="board"]')[0]
songs.append([category.text.strip(), title.text.strip()])
len(songs)
pd_data = pd.DataFrame(songs)
pd_data.to_excel('./saves/youtuberankwithpage.xls')
###Output
_____no_output_____
###Markdown
pagination
###Code
uri = 'https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page='
songs = []
for page in range(1,11):
target = uri + str(page)
print(target)
browser.get(target)
html = browser.page_source
soup = BeautifulSoup(html, 'html.parser')
contents = soup.select('tr.aos-init')
for content in contents:
category = content.select('p.category')[0]
title = content.select('h1 > a[href*="board"]')[0]
songs.append([category.text.strip(), title.text.strip()])
len(songs)
pd_data = pd.DataFrame(songs)
pd_data.to_excel('./saves/youtuberankwithpage.xls')
###Output
_____no_output_____
###Markdown
pagination
###Code
url = 'https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page='
songs = [] # or list(); this initialization was moved here
for page in range(1,11):
target = url+str(page)
print(target)
browser.get(target)
html = browser.page_source
soup = BeautifulSoup(html, 'html.parser')
contents = soup.select('tr.aos-init')
for content in contents:
category = content.select('p.category')[0]
title = content.select('h1 > a[href*="board"]')[0]
songs.append([category.text.strip(), title.text.strip()])
len(songs)
###Output
https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page=1
https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page=2
https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page=3
https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page=4
https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page=5
https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page=6
https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page=7
https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page=8
https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page=9
https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page=10
###Markdown
pagination
###Code
url = 'https://youtube-rank.com/board/bbs/board.php?bo_table=youtube&page='
songs = [] # or list()
for page in range(1,11):
target = url+str(page)
print(target)
browser.get(target)
html = browser.page_source
soup = BeautifulSoup(html,'html.parser')
contents = soup.select('tr.aos-init')
for content in contents:
category = content.select('p.category')[0]
title = content.select('h1 > a[href*="board"]')[0]
songs.append([category.text.strip(), title.text.strip()])
songs
pd_data = pd.DataFrame(songs)
pd_data.to_excel('./saves/youtuberankpage.xls')
###Output
_____no_output_____ |
kaggle_fraud/fraud_baseline_lightgbm_fe.ipynb | ###Markdown
To improve prediction quality, we can not only change models and tune their hyperparameters, but also modify the dataset itself. This can be done with various techniques - feature transformations, feature selection, extraction of new features, etc. It can help because a model is sometimes unable to see patterns hidden inside the features, unlike a human, who can extract them by hand. Such operations can even give rise to so-called "magic features": adding them to the dataset can yield a strong gain in quality. Example: [magic features from a gold-medal winner of one of the competitions](https://www.kaggle.com/jturkewitz/magic-features-0-03-gain).Let's look at the distribution of the numerical feature `'TransactionAmt'` in the training set:
###Code
plt.figure(figsize=(11, 8))
sns.distplot(df_train['TransactionAmt'])
plt.show()
###Output
_____no_output_____
###Markdown
As we can see, the distribution is heavily skewed. Let's take the logarithm of the feature and add the result to the data as a new feature:
###Code
df_train['TransactionAmt_log'] = np.log1p(df_train['TransactionAmt'])
df_test['TransactionAmt_log'] = np.log1p(df_test['TransactionAmt'])
plt.figure(figsize=(11, 8))
sns.distplot(df_train['TransactionAmt_log'])
plt.show()
###Output
_____no_output_____
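###Markdown
(Here `np.log1p(x)` computes $\log(1 + x)$, which is well-defined at $x = 0$ and numerically stable for small amounts.)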
###Markdown
So we have evened out the distribution.Note that by no means all transaction amounts (in dollars) are integers:
###Code
df_train['TransactionAmt'].value_counts()[:15]
###Output
_____no_output_____
###Markdown
This means we can consider a "number of cents" feature (what if fraud happens more often for "non-integer" transactions?). Let's add it to the data and plot two of its distributions: for fraudulent and for regular transactions.
###Code
df_train['TransactionAmt_Cents'] = np.modf(df_train['TransactionAmt'])[0] * 100
df_test['TransactionAmt_Cents'] = np.modf(df_test['TransactionAmt'])[0] * 100
plt.figure(figsize=(11, 8))
sns.distplot(df_train[df_train['isFraud'] == 0]['TransactionAmt_Cents'], label='isFraud 0')
sns.distplot(df_train[df_train['isFraud'] == 1]['TransactionAmt_Cents'], label='isFraud 1')
plt.legend(loc='best')
plt.show()
###Output
_____no_output_____
###Markdown
New features can be extracted from the existing ones. Many features are anonymized, but some are fairly interpretable - say, the email or the OS of the device from which the transaction was made. Let's extract a new feature - the email suffix - which may carry some information, for example about the country (if the domain name ends with `.fr`).
###Code
for col in ['P_emaildomain', 'R_emaildomain']:
df_train[col + '_suffix'] = df_train[col].map(lambda x: str(x).split('.')[-1])
df_test[col + '_suffix'] = df_test[col].map(lambda x: str(x).split('.')[-1])
df_train[['P_emaildomain', 'P_emaildomain_suffix', 'R_emaildomain', 'R_emaildomain_suffix']].tail(10)
###Output
_____no_output_____
###Markdown
New features can also be derived from combinations of some existing ones. For example, let's add a feature indicating whether the purchaser's and the recipient's email domains match - maybe it will give us something.
###Code
df_train['same_emaildomain'] = (df_train['P_emaildomain'] == df_train['R_emaildomain']).astype('uint8')
df_test['same_emaildomain'] = (df_test['P_emaildomain'] == df_test['R_emaildomain']).astype('uint8')
df_train[['P_emaildomain', 'R_emaildomain', 'same_emaildomain']].tail()
###Output
_____no_output_____
###Markdown
Features can also be combined - by performing arithmetic operations between numerical features, or by using combinations of categorical ones. Such combinations can sometimes produce powerful features. For example, if some of the features `'card1'`-`'card6'` and `'addr1'`-`'addr2'` contain important information about the client, then combining some of them may help identify the client - and the fact of fraud in a transaction - more precisely. The main thing is not to overdo it: if the features contain very many categories, combining them will most likely produce a feature with a colossal number of categories, which in turn can hurt quality. Some information could then still be extracted from such a feature via grouping and aggregation, but keeping it as-is is probably dangerous. To see what this means, try combining the features `'card1'` and `'card2'` - the two most important features for the baseline model.In this case we will combine the features `'card3'` and `'card5'` - they are among the top 50 most important features for the baseline model.
###Code
df_train['card3_card5'] = df_train['card3'].astype(str) + '_' + df_train['card5'].astype(str)
df_test['card3_card5'] = df_test['card3'].astype(str) + '_' + df_test['card5'].astype(str)
df_train[['card3', 'card5', 'card3_card5']].head(10)
for col in ['card3', 'card5', 'card3_card5']:
print('Number of categories in train for {}: {}'.format(col, df_train[col].nunique()))
###Output
Number of categories in train for card3: 106
Number of categories in train for card5: 111
Number of categories in train for card3_card5: 553
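###Markdown
As suggested above, we can quickly check how badly the cardinality explodes if we instead combine the two strongest features (a sketch of that experiment; we only inspect the result without adding it to the data):
###Code
# combining the two highest-cardinality card features yields an enormous number of categories
card1_card2 = df_train['card1'].astype(str) + '_' + df_train['card2'].astype(str)
print('Number of categories in train for card1_card2:', card1_card2.nunique())
###Output
_____no_output_____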
###Markdown
We can also encode categorical features based on how frequently they occur in the dataset.
###Code
for col in ['card1', 'card2']:
card_freq = df_train[col].value_counts().to_dict()
df_train['{}_cnt'.format(col)] = df_train[col].map(card_freq)
df_test['{}_cnt'.format(col)] = df_test[col].map(card_freq)
df_train[['card1', 'card1_cnt', 'card2', 'card2_cnt']].head(10)
###Output
_____no_output_____
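###Markdown
(Note that with this mapping, categories that occur only in the test set get `NaN` counts; LightGBM handles missing values natively, so we can leave them as is.)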
###Markdown
Finally, we can use grouping and aggregation. For example, along with `'card1'` and `'card2'`, one of the most important features for the baseline model is `'TransactionAmt'`. Let's add features for the mean, median, maximum and minimum purchase amounts for each category of `'card1'` and `'card2'`.
###Code
new_cols = []
for col in ['card1', 'card2']:
for agg_type in ['mean', 'median', 'min', 'max']:
agg_col_name = 'TransactionAmt_{}_{}'.format(col, agg_type)
card_agg = df_train.groupby(col)['TransactionAmt'].agg([agg_type]).rename({agg_type: agg_col_name}, axis=1)
df_train = df_train.merge(card_agg, how='left', on=col)
df_test = df_test.merge(card_agg, how='left', on=col)
new_cols.append(agg_col_name)
df_train[['TransactionAmt', 'card1'] + new_cols[:4] + ['card2'] + new_cols[4:]].head(10)
###Output
_____no_output_____
###Markdown
Lastly, note that we could also perform feature selection - the point being that if the data contains many uninformative features, they may only add noise during training. There are many selection approaches, from manual inspection to methods developed specifically for this. Here we leave everything as is.After all the feature operations, we drop the `'TransactionAmt'` column, since we already have it in log form - so that the model is not distracted by it.
###Code
df_train.drop('TransactionAmt', axis=1, inplace=True)
df_test.drop('TransactionAmt', axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
Let's prepare the resulting dataset for model training the same way we did in the plain baseline.
###Code
for col in tqdm(df_train.columns.drop('isFraud')):
if df_train[col].dtype == 'O':
df_train[col] = df_train[col].fillna('unseen_category')
df_test[col] = df_test[col].fillna('unseen_category')
le = LabelEncoder()
le.fit(list(df_train[col]) + list(df_test[col]))
df_train[col] = le.transform(df_train[col])
df_test[col] = le.transform(df_test[col])
df_train[col] = df_train[col].astype('category')
df_test[col] = df_test[col].astype('category')
else:
df_train[col] = df_train[col].fillna(-1)
df_test[col] = df_test[col].fillna(-1)
# выделяем фолды
month_length = 3600 * 24 * 30
fold0_idx = df_train[df_train['TransactionDT'] < df_train['TransactionDT'].min() + month_length].index
fold1_idx = df_train[(df_train['TransactionDT'].min() + month_length <= df_train['TransactionDT']) & (df_train['TransactionDT'] < df_train['TransactionDT'].min() + 2 * month_length)].index
fold2_idx = df_train[(df_train['TransactionDT'].min() + 2 * month_length <= df_train['TransactionDT']) & (df_train['TransactionDT'] < df_train['TransactionDT'].min() + 3 * month_length)].index
fold3_idx = df_train[df_train['TransactionDT'].min() + 3 * month_length <= df_train['TransactionDT']].index
folds_idx = [fold0_idx, fold1_idx, fold2_idx, fold3_idx]
# выделяем идентификационный и временной признаки
df_train.drop(['TransactionID', 'TransactionDT'], axis=1, inplace=True)
df_test.drop(['TransactionID', 'TransactionDT'], axis=1, inplace=True)
###Output
100%|██████████| 448/448 [00:18<00:00, 23.91it/s]
###Markdown
Let's train a model with the same hyperparameters as in the plain baseline.
###Code
%%time
params = {
'objective': 'binary',
'boosting_type': 'gbdt',
'metric': 'auc',
'n_jobs': -1,
'n_estimators': 2000,
'seed': 13,
'early_stopping_rounds': 200,
}
scores = []
feature_importances = pd.DataFrame()
feature_importances['feature'] = df_train.columns.drop('isFraud')
test_preds = []
for i in range(len(folds_idx)):
X_train = df_train.drop(folds_idx[i], axis=0)
y_train = X_train['isFraud'].values
X_val = df_train.iloc[folds_idx[i]]
y_val = X_val['isFraud'].values
X_train = X_train.drop('isFraud', axis=1)
X_val = X_val.drop('isFraud', axis=1)
lgb_train = lgb.Dataset(X_train, y_train)
lgb_eval = lgb.Dataset(X_val, y_val, reference=lgb_train)
lgb_model = lgb.train(params, lgb_train, valid_sets=lgb_eval, verbose_eval=100)
feature_importances['fold_{}'.format(i)] = lgb_model.feature_importance()
y_pred = lgb_model.predict(X_val)
score_fold = roc_auc_score(y_val, y_pred)
scores.append(score_fold)
y_test_pred = lgb_model.predict(df_test)
test_preds.append(y_test_pred)
for i in range(len(scores)):
print('Fold {}, AUC-ROC: {:.5f}'.format(i, scores[i]))
print('CV AUC-ROC: {:.5f}'.format(np.mean(scores)))
feature_importances.head()
fold_cols = [col for col in feature_importances.columns if col.startswith('fold_')]
feature_importances['average'] = feature_importances[fold_cols].mean(axis=1)
feature_importances.head()
plt.figure(figsize=(15, 15))
sns.barplot(data=feature_importances.sort_values(by='average', ascending=False).head(50), x='average', y='feature', palette="BuGn_r")
plt.title('Top feature importances')
plt.show()
###Output
_____no_output_____
###Markdown
We can see that model quality improved, and the engineered features - aggregations and count encodings - now appear at the top. And the cents feature too :)
###Code
final_pred = np.average(test_preds, axis=0)
sub = pd.DataFrame({'TransactionID': sample_submission['TransactionID'], 'isFraud': final_pred})
sub.to_csv('submission_baseline_fe.csv', index=False)
plt.figure(figsize=(11, 8))
plt.hist(sub['isFraud'], bins=30)
plt.title('Distribution of isFraud prediction on test')
plt.show()
###Output
_____no_output_____ |
CODATA/kafka-sparkstreaming-cassandra-master/kafkaSendDataPy.ipynb | ###Markdown
kafkaSendDataPy
This notebook sends data to Kafka on the topic 'test'. A message giving the current time is sent every second.
Add dependencies
###Code
import os
os.environ['PYSPARK_SUBMIT_ARGS'] = '--conf spark.ui.port=4041 --packages org.apache.kafka:kafka_2.11:0.10.0.0,org.apache.kafka:kafka-clients:0.10.0.0 pyspark-shell'
###Output
_____no_output_____
###Markdown
Load modules and start SparkContext
Note that SparkContext must be started to effectively load the package dependencies. One core is used.
###Code
from pyspark import SparkContext
sc = SparkContext("local[1]", "KafkaSendStream")
from kafka import KafkaProducer
import time
###Output
_____no_output_____
###Markdown
Start Kafka producer
One message giving the current time is sent every second to the topic 'test'.
###Code
producer = KafkaProducer(bootstrap_servers='localhost:9092')
while True:
message=time.strftime("%Y-%m-%d %H:%M:%S")
    producer.send('test', message.encode('utf-8'))  # kafka-python expects bytes unless a value_serializer is configured
time.sleep(1)
###Output
_____no_output_____ |
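###Markdown
To verify delivery end-to-end, a matching consumer can read back the same topic. This is a hedged sketch added for illustration; it is not part of the original notebook:
###Code
from kafka import KafkaConsumer
# read the 'test' topic from the beginning; loops until interrupted
consumer = KafkaConsumer('test', bootstrap_servers='localhost:9092',
                         auto_offset_reset='earliest')
for msg in consumer:
    print(msg.value.decode('utf-8'))
###Output
_____no_output_____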
sm/multinode-ddp-adascale.ipynb | ###Markdown
Run multi-GPU training locally
###Code
! cd /home/ec2-user/SageMaker/autoscaler-external/autoscaler/pytorch && python setup.py install
# Install PyTorch 1.9 via conda, since some fp16 hooks are not supported in the preinstalled build
! conda install pytorch==1.9.0 torchvision==0.10.0 torchaudio==0.9.0 cudatoolkit=10.2 -c pytorch
! pip install tensorboard
! python -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 \
/home/ec2-user/SageMaker/pytorch_resnet_cifar10_mirror/sm/trainer_adascale.py --num_epochs 200 \
--batch_size 32 \
--use_adascale \
--autoscaler_cfg autoscaler.yaml
from tensorboard import notebook
notebook.list() # View open TensorBoard instances
notebook.display(port=6006, height=1000)
! pwd
! ls /home/ec2-user/SageMaker/pytorch_resnet_cifar10_mirror/sm
! cd /home/ec2-user/SageMaker/pytorch_resnet_cifar10_mirror/sm && python ddp-launcher.py --gpus 8 \
--data_dir /home/ec2-user/SageMaker/data/ \
--model_dir /home/ec2-user/SageMaker/ \
--num_epochs 10
###Output
### Data directory: ['cifar-10-batches-py', 'cifar-10-python.tar.gz']
The module torch.distributed.launch is deprecated and going to be removed in future.Migrate to torch.distributed.run
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
WARNING:torch.distributed.run:--use_env is deprecated and will be removed in future releases.
Please read local_rank from `os.environ('LOCAL_RANK')` instead.
INFO:torch.distributed.launcher.api:Starting elastic_operator with launch configs:
entrypoint : cifar.py
min_nodes : 1
max_nodes : 1
nproc_per_node : 8
run_id : none
rdzv_backend : static
rdzv_endpoint : 127.0.0.1:7777
rdzv_configs : {'rank': 0, 'timeout': 900}
max_restarts : 3
monitor_interval : 5
log_dir : None
metrics_cfg : {}
INFO:torch.distributed.elastic.agent.server.local_elastic_agent:log directory set to: /tmp/torchelastic_vwvb17pn/none_qtjccaff
INFO:torch.distributed.elastic.agent.server.api:[default] starting workers for entrypoint: python
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous'ing worker group
/home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/distributed/elastic/utils/store.py:53: FutureWarning: This is an experimental API and will be changed in future.
"This is an experimental API and will be changed in future.", FutureWarning
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous complete for workers. Result:
restart_count=0
master_addr=127.0.0.1
master_port=7777
group_rank=0
group_world_size=1
local_ranks=[0, 1, 2, 3, 4, 5, 6, 7]
role_ranks=[0, 1, 2, 3, 4, 5, 6, 7]
global_ranks=[0, 1, 2, 3, 4, 5, 6, 7]
role_world_sizes=[8, 8, 8, 8, 8, 8, 8, 8]
global_world_sizes=[8, 8, 8, 8, 8, 8, 8, 8]
INFO:torch.distributed.elastic.agent.server.api:[default] Starting worker group
INFO:torch.distributed.elastic.multiprocessing:Setting worker0 reply file to: /tmp/torchelastic_vwvb17pn/none_qtjccaff/attempt_0/0/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker1 reply file to: /tmp/torchelastic_vwvb17pn/none_qtjccaff/attempt_0/1/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker2 reply file to: /tmp/torchelastic_vwvb17pn/none_qtjccaff/attempt_0/2/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker3 reply file to: /tmp/torchelastic_vwvb17pn/none_qtjccaff/attempt_0/3/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker4 reply file to: /tmp/torchelastic_vwvb17pn/none_qtjccaff/attempt_0/4/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker5 reply file to: /tmp/torchelastic_vwvb17pn/none_qtjccaff/attempt_0/5/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker6 reply file to: /tmp/torchelastic_vwvb17pn/none_qtjccaff/attempt_0/6/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker7 reply file to: /tmp/torchelastic_vwvb17pn/none_qtjccaff/attempt_0/7/error.json
^C
Traceback (most recent call last):
File "ddp-launcher.py", line 119, in <module>
'--data_dir', args.data_dir
File "/home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/subprocess.py", line 289, in call
return p.wait(timeout=timeout)
File "/home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/subprocess.py", line 1477, in wait
(pid, sts) = self._try_wait(0)
File "/home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/subprocess.py", line 1424, in _try_wait
(pid, sts) = os.waitpid(self.pid, wait_flags)
KeyboardInterrupt
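###Markdown
Before moving to multi-node runs, here is a minimal sketch of the AdaScale idea behind the `--use_adascale` flag used above, via fairscale's optimizer wrapper. It is illustrative only - `trainer_adascale.py` itself is not shown in this notebook, so the model and data below are stand-ins:
###Code
import torch
from fairscale.optim import AdaScale
model = torch.nn.Linear(10, 1)  # stand-in for the real network
optim = AdaScale(torch.optim.SGD(model.parameters(), lr=0.1))
x, y = torch.randn(32, 10), torch.randn(32, 1)
loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()
optim.step()       # AdaScale rescales the effective LR by its gradient-based gain
model.zero_grad()
###Output
_____no_output_____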
###Markdown
Multi-node multi-GPU training
###Code
# imports assumed by this cell (they are not shown elsewhere in this notebook)
import uuid
from sagemaker import get_execution_role
from sagemaker.pytorch import PyTorch
config = {
    'batch_size': 256,
    'num_epochs': 50}
bucket = 'mansmane-us-west-2'
token = str(uuid.uuid4())[:10]  # a unique token to avoid checkpoint collisions in S3
job = PyTorch(
entry_point='ddp-launcher.py',
source_dir='/home/ec2-user/SageMaker/pytorch_resnet_cifar10_mirror/sm',
role=get_execution_role(),
framework_version='1.8.1',
instance_count=1,
instance_type='ml.p3.16xlarge',
base_job_name='resnet-multi-GPU-g5',
py_version='py36',
hyperparameters=config,
checkpoint_s3_uri='s3://{}/{}/checkpoints'.format(bucket, token), # S3 destination of /opt/ml/checkpoints files
output_path='s3://{}/{}'.format(bucket, token),
code_location='s3://{}/{}/code'.format(bucket, token), # source_dir code will be staged in S3 there
environment={"SMDEBUG_LOG_LEVEL":"off"}, # reduce verbosity of Debugger
debugger_hook_config=False, # deactivate debugger to avoid warnings in model artifact
disable_profiler=True, # keep running resources to a minimum to avoid permission errors
metric_definitions=[
{"Name": "Train_loss", "Regex": "Training_loss: ([0-9.]+).*$"},
{"Name": "Learning_rate", "Regex": "learning rate: ([0-9.]+).*$"},
{"Name": "Val_loss", "Regex": "Val_loss: ([0-9.]+).*$"},
{"Name": "Throughput", "Regex": "Throughput: ([0-9.]+).*$"},
{"Name": "Val_pixel_acc", "Regex": "Val_pixel_acc: ([0-9.]+).*$"}
],
tags=[{'Key': 'Project', 'Value': 'A2D2_segmentation'}]) # tag the job for experiment tracking
train_path = 's3://mansmane-us-west-2/cifar10/'
job.fit({'dataset': train_path}, wait=False)
token = str(uuid.uuid4())[:10] # we create a unique token to avoid checkpoint collisions in S3
instance_count = 2
job = PyTorch(
entry_point='ddp-launcher.py',
source_dir='/home/ec2-user/SageMaker/pytorch_resnet_cifar10_mirror/sm',
role=get_execution_role(),
framework_version='1.8.1',
instance_count=instance_count,
instance_type='ml.p3.16xlarge',
base_job_name='resnet-multi-GPU-g5-instance-' + str(instance_count),
py_version='py36',
hyperparameters=config,
checkpoint_s3_uri='s3://{}/{}/checkpoints'.format(bucket, token), # S3 destination of /opt/ml/checkpoints files
output_path='s3://{}/{}'.format(bucket, token),
code_location='s3://{}/{}/code'.format(bucket, token), # source_dir code will be staged in S3 there
environment={"SMDEBUG_LOG_LEVEL":"off"}, # reduce verbosity of Debugger
debugger_hook_config=False, # deactivate debugger to avoid warnings in model artifact
disable_profiler=True, # keep running resources to a minimum to avoid permission errors
metric_definitions=[
{"Name": "Train_loss", "Regex": "Training_loss: ([0-9.]+).*$"},
{"Name": "Learning_rate", "Regex": "learning rate: ([0-9.]+).*$"},
{"Name": "Val_loss", "Regex": "Val_loss: ([0-9.]+).*$"},
{"Name": "Throughput", "Regex": "Throughput: ([0-9.]+).*$"},
{"Name": "Val_pixel_acc", "Regex": "Val_pixel_acc: ([0-9.]+).*$"}
],
tags=[{'Key': 'Project', 'Value': 'A2D2_segmentation'}]) # tag the job for experiment tracking
train_path = 's3://mansmane-us-west-2/cifar10/'
job.fit({'dataset': train_path}, wait=False)
###Output
_____no_output_____ |
Final_Project_Report.ipynb | ###Markdown
Analysis of Undergraduate Retention Rates at 4-Year Universities
A6. Final Project Report
**Samir D. Patel**
**DATA 512 - Human Centered Data Science**
**University of Washington**
Introduction
College selection can often be an overwhelming and stressful process for high school graduates in the United States. Every year, tens of millions of prospective students must weigh several selection factors such as tuition cost, proximity, program reputation, and future success indicators such as average salary upon graduation. Selecting the right institution could result in greater chances of personal and financial success in the future. While popular annual college rankings, such as those published by the U.S. News and World Report, are thought of as a useful guide for some students and parents, to others they provide only a one-size-fits-all solution. More specifically, their goal is to match elite students to elite universities. But for the average prospective college student, are these published rankings helpful? Given the number of options for collegiate study in the country, students may want to hone in on the colleges that fit them best and maximize their chances of success, with decisions based on their socio-economic background, geographic location, interests, and aptitude. Achieving this requires taking into account factors such as each applicant's background, qualities, needs, and aspirations that can better predict the chances of success.
Goals and Motivations in Respect to Human-Centered Data Science
The goal of this research is to perform an analysis focused on one particular success indicator for students: retention rate. Many indicators in university ranking publications focus on commonly known metrics such as ranking by program, tuition costs, and admissions criteria (SAT score, GPA). But for many students, knowing the likelihood of being able to attend and sustain enrollment in school may be the superior indicator. By working through a research question and analysis based on retention rate, the motivation of this work is to evaluate alternative success metrics in an effort to understand optimal conditions for the average student's success at a university. This can hopefully be built upon and become useful for those helping students and parents in the decision process (e.g. academic counselors, policymakers and university admissions officials).
Background (or Related Work)
Existing Evidence/Literature Study:
Retention rate, also commonly referred to as persistence, is the percentage of first-time, first-year undergraduate enrollees that will continue at the institution the following academic year [1]. With growing focus on persistence and graduation rates, the effects of financial aid (loans, grants) are being revisited by policymakers interested in assessing the effectiveness of these tools [2]. The New York Times' data-journalism site, "The Upshot", reported in 2015 that the University of California public school system led the United States in enrolling high-performing students of all economic backgrounds [3]. The three factors used in the "College Access Index" to determine performance of these institutions were the percentage of students receiving Pell Grants, graduation rates of those students, and the net costs of attending college.
Given the general importance that tuition costs play in decisions for prospective students, the impact of financial aid (both in the form of loans and grants) on student retention makes for an interesting research question: are outcomes consistently positive across a larger sample of institutions? Locale of an academic institution is another factor that deserves consideration in respect to a student's retention. In February of 2017, an article in the Times Higher Education University Rankings discussed a study which looked at physical campus characteristics and the resulting impacts on student satisfaction and performance [4]. The specific physical qualities of campuses included campus size, urban score and living score. The results showed that the 5 universities with the best physical scores had higher student retention and graduation rates. This assessment of urban vs. rural characteristics makes for another potentially interesting research question given the breadth of the College Scorecard data set.
Data Set
The U.S. Department of Education's Office of Planning, Evaluation and Policy Development provides an open dataset, ["College Scorecard"](https://collegescorecard.ed.gov/data/documentation/), that appears rich enough to assess student success in terms of retention rate. The abstract introducing this data set details the U.S. Department of Education's goal of making it "easier for students to search for a college that is a good fit for them" [5]. The raw data set contains data on each accredited university offering an undergraduate program in the U.S. For each university, the data acquired includes a number of features, many of which are the norm with college data sets (e.g. university location, admissions rate, tuition cost, SAT/ACT score percentiles). As mentioned, the primary parameter of interest in this study is the retention rate for an institution, which is separately recorded in the College Scorecard data by full-time/part-time status along with degree type (2-year vs 4-year). Other interesting features in the data set, which can be leveraged for additional insights, include percent of students receiving federal loans, average/median loan debt amounts, and estimated earnings post-graduation. The data set for the 2015-16 academic year has 1,777 columns and 7,593 observations (each representing an institution). In addition, data is also available for years dating all the way back to 1996-1997.
**Limitations**: It is important to note that there is considerable sparsity in the data set, as assessed in the data importation and cleaning steps seen later in the notebook. This renders many columns unusable, but since we are not interested in modeling at this stage, only exploratory and research analysis, there should hopefully be enough data for interesting insights. Given the huge number of data features, it is beyond the scope of this report to summarize each one's meaning and data type. For the reader's own reference, the data dictionary provided by the "College Scorecard" contains all of this information and can be found below:
https://collegescorecard.ed.gov/assets/CollegeScorecardDataDictionary.xlsx
Research Questions and Hypotheses:
Using the College Scorecard data set, we will leverage its data on institution statistics for the 2015-16 year to evaluate various hypotheses formed around the question of retention rate.
Motivating Research Question
What conditions affect student retention at universities offering 4-year undergraduate degrees?
Hypotheses:
- **H1: Public institutions in the U.S. have the highest student retention.**
- **H2: Institutions with remote/rural locales result in higher student retention.**
- **H3: Institutions providing higher percentages of financial aid are more likely to see higher retention.**
Methods
The methods in this research start by taking the College Scorecard data and, with respect to the motivating research question and hypotheses, performing any necessary data cleaning in order to work with the data. The data set will be curated and processed using the programming language Python, which can handle the large sizes of the datasets. The data cleaning process will be aided by commonly used packages such as pandas and numpy. After these steps, the data will first be investigated using exploratory data analysis within Python to help identify any patterns and validate any assumptions, while also helping understand the data set fundamentally. In turn, this will be helpful for the research analysis portion, where we can assess the validity of the formed hypotheses. If limitations arise or the nature of the data requires specific curation techniques or considerations, this is the step that will allow for any iterations and/or adjustments to the research design. Both the exploratory and research analyses will involve continued use of the pandas and numpy packages, as well as matplotlib and seaborn for data visualization.
Importing relevant packages
- pandas: A package used for data structuring and analysis.
- numpy: A package used for scientific computing and advanced mathematical functionalities.
- matplotlib: A plotting library built for Python.
- seaborn: A visualization library built on top of matplotlib for more detailed graphics.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
Importing the dataAs mentioned, the College Scorecard data being used in this analysis is from the 2015-16 academic year. This particular year's data set file (CSV) is accessible through my [Github Repository](https://github.com/samirpdx/data-512-final-project/blob/master/sample%20data/MERGED2015_16_PP.csv) or one can visit the [College Scorecard Website](https://collegescorecard.ed.gov/data/) for the full zip file containing all years of data. If more specific subset of data is desired, the latter link also details the API where smaller subsets can be accessed through calls to the API endpoint. However, for the purpose of being able to work with the entire data set, it is much easier to download the CSV for now.Once the CSV data has been locally downloaded, we will read in the data into a Pandas DataFrame object.
###Code
data = pd.read_csv("./sample data/MERGED2015_16_PP.csv", low_memory = False)
###Output
_____no_output_____
###Markdown
Several values in the data set are labeled as "PrivacySuppressed". To keep NA labeling consistent, we will convert these values to NaN.
###Code
data = data.replace('PrivacySuppressed',np.NaN)
###Output
_____no_output_____
###Markdown
To understand the number of observations and dimensions, let's look at the shape.
###Code
data.shape
###Output
_____no_output_____
###Markdown
The data set has 7,593 observations (a row representing data for a single institution) and 1,777 features/predictors. Below, we get our first glance at how the data looks by printing the first 5 rows:
###Code
data.head(5)
###Output
_____no_output_____
###Markdown
To assess the sparsity, we will count the NA values:
###Code
nulls = data.isnull().sum().sum()
total = data.shape[0]*data.shape[1]
nulls/total
###Output
_____no_output_____
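###Markdown
A hedged sketch of one way to act on this sparsity - counting the columns that are mostly missing (the 50% threshold is an assumption, and the original analysis does not actually drop them):
###Code
sparse_cols = data.columns[data.isnull().mean() > 0.5]
# data.drop(columns=sparse_cols) would remove them; here we only count them
len(sparse_cols)
###Output
_____no_output_____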
###Markdown
We see that while the data set has potential breadth, 75% of potential values are missing. This will be an important consideration in terms of assessing potential features available. Data Cleaning Since we will be looking at institutions offering at a minimum, 4-year degree programs, we will filter the dataset using the "HIGHDEG" feature which corresponds to highest degrees awarded by an institution.
###Code
temp = data[data["HIGHDEG"] >= 3]
###Output
_____no_output_____
###Markdown
Retention rate is the primary indicator we will be looking at and requires us to massage the data set, which contains 4 separate columns/features corresponding to retention. To create a single column and feature out of these 4 columns, we will reshape the data using the "melt" function in Pandas.
###Code
# store the retention-related column names in a list
subcols = ["RET_FT4", "RET_FTL4","RET_PT4", "RET_PTL4"]
###Output
_____no_output_____
###Markdown
We will make a list of all the columns in the data set and then remove those that are retention related. This is because the "melt" function requires an argument of ID variables (which will not be part of the reshaping process).
###Code
keepcols = temp.columns
keepcols = keepcols.drop(subcols)
keepcols = pd.Series(keepcols)
###Output
_____no_output_____
###Markdown
Now we can execute the "melt" function for reshaping and rename values in the new column for clarity.
###Code
data1 = pd.melt(temp, id_vars = keepcols.values, value_vars=["RET_FT4","RET_PT4"], var_name="Retention Category",
value_name="Retention Rate")
# This step will rename the values in the new Retention Category column
data1['Retention Category'] = data1['Retention Category'].replace({'RET_FT4': 'Full Time 4-Year',
'RET_PT4': 'Part Time 4-Year'})
###Output
_____no_output_____
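###Markdown
As a quick sanity check (added for illustration, not part of the original analysis), the reshape should yield one row per institution per retention category:
###Code
data1.shape
data1['Retention Category'].value_counts()
###Output
_____no_output_____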
###Markdown
Similarly, the following categories will require data type conversion and renaming of values to help create the visualizations.
- **DEP_STAT_PCT_IND**: Percentage of students who are financially dependent and have family incomes between $0-30,000
- **INSTNM**: Institution name
- **HIGHDEG**: Highest degree awarded
- **CONTROL**: Control of institution
- **LOCALE**: Degree of urbanization of institution
- **REGION**: Region
###Code
# Data Type Conversion
data1['DEP_STAT_PCT_IND'] = data1['DEP_STAT_PCT_IND'].astype('float64',
copy = False)
data1['INSTNM'] = data1['INSTNM'].astype(str)
data1["HIGHDEG"] = data1["HIGHDEG"].astype(str)
data1['DEP_INC_PCT_LO'] = data1['DEP_INC_PCT_LO'].astype('float64',
copy = False)
# Data value decoding (using DataDictionary from College Score)
data1['HIGHDEG'] = data1['HIGHDEG'].replace({'3': "Bachelor's", '4': 'Graduate'})
data1['CONTROL'] = data1['CONTROL'].replace({3: "Private For-Profit",
2: "Private Non-Profit",
1: "Public"})
data1['LOCALE'] = data1['LOCALE'].replace({11: "Large City",
12: "Mid-Size City",
13: "Small City",
21: "Large Suburb",
22: "Mid-size Suburb",
23: "Small Suburb",
31: "Fringe Town",
32: "Distant Town",
33: "Remote Town",
41: "Fringe Rural",
42: "Distant Rural",
43: "Remote Rural"
})
# LOCALE also needs to be ordinal
data1['LOCALE'] = pd.Categorical(data1['LOCALE'],
categories=["Large City",
"Mid-Size City",
"Small City",
"Large Suburb",
"Mid-size Suburb",
"Small Suburb",
"Fringe Town",
"Distant Town",
"Remote Town",
"Fringe Rural",
"Distant Rural",
"Remote Rural"],
ordered=True)
data1['REGION'] = data1['REGION'].replace({0: "U.S. Service Schools",
1: "New England",
2: "Mid East",
3: "Great Lakes",
4: "Plains",
5: "Southeast",
6: "Southwest",
7: "Rocky Mtns",
8: "Far West",
9: "Outlying Areas",
})
# REGION also needs to be ordinal
data1['REGION'] = pd.Categorical(data1['REGION'],
categories=["U.S. Service Schools",
"New England",
"Mid East",
"Great Lakes",
"Plains",
"Southeast",
"Southwest",
"Rocky Mtns",
"Far West",
"Outlying Areas"],
ordered=True)
###Output
_____no_output_____
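###Markdown
A quick check (added for illustration) that the ordered categorical encoding above took effect - the ordering matters later when plots walk locales from urban to rural:
###Code
data1['LOCALE'].cat.ordered, list(data1['LOCALE'].cat.categories[:3])
###Output
_____no_output_____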
###Markdown
Exploratory Data Analysis
To understand the data set a bit more, particularly the features of interest, we will explore it. A good place to start is a correlation matrix; however, given the very large number of features, we will instead look at the highest-ranking Pearson correlation values relating to the retention variable.
Pearson Correlation
###Code
c = data.corr().abs()
s = c.unstack()
so = s.sort_values(kind="quicksort")
corrmat = pd.DataFrame(so, columns = ['corr'], index=None)
corr_rr = corrmat.loc['RET_FT4'].sort_values(by = 'corr', ascending = False)
corr_rr.head(20)
###Output
_____no_output_____
###Markdown
Ranking the top fields with respect to the 4-year retention rate, we see the highest Pearson correlations pertain to SAT/ACT score features. To understand the Pearson values of the features relating to our hypotheses, we will look some up individually:
###Code
c = data.corr()
s = c.unstack()
so = s.sort_values(kind="quicksort")
corrmat = pd.DataFrame(so, columns = ['corr'], index=None)
# Pearson correlation with respect to loan offerings of the institution
corr_rr.loc['PCTFLOAN']
# Pearson correlation with respect to Pell Grant offerings of the institution
corr_rr.loc['PCTPELL']
###Output
_____no_output_____
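###Markdown
Since CONTROL, LOCALE and REGION are categorical, a Pearson coefficient does not apply to them directly; a simple group-mean comparison (added for illustration, not part of the original analysis) gives a first read instead:
###Code
data1.groupby('CONTROL')['Retention Rate'].mean()
###Output
_____no_output_____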
###Markdown
The coefficients, when compared directly to the 4-year retention rate, show moderate negative correlations for loan and Pell Grant offerings. With respect to the pending research analysis on loan and grant offerings versus retention rate, we can keep these results in mind going forward. While the REGION, LOCALE and CONTROL variables are of interest to the research, they are categorical in nature (and not set up as ordinal in the raw data), so we are not computing Pearson correlations for them; the group-mean comparison above gives a rough substitute. On a separate note, we will be careful in this research to remember not to assume correlation implies causation.
Distributions
Next, we will evaluate the distributions of the data to get an idea of enrollment size and retention rates by variables of interest.
###Code
# Split data distributions as separate variables by institution type
target_0 = data1.loc[data1['CONTROL'] == 'Private For-Profit']
target_1 = data1.loc[data1['CONTROL'] == "Private Non-Profit"]
target_2 = data1.loc[data1['CONTROL'] == "Public"]
###Output
_____no_output_____
###Markdown
**_Distribution of Undergraduates Enrolled by Institution Type_**
###Code
sns.set(font_scale=1.5)  # enlarge plot fonts for readability
ax = sns.kdeplot(target_2['UGDS'], color="red", shade=True, label="public")
ax = sns.kdeplot(target_1['UGDS'], color="blue", shade=True, label="private non-profit")
ax = sns.kdeplot(target_0['UGDS'], color="green", shade=True, label="private for-profit")
plt.xlim(0, 25000)  # sns.plt was removed in modern seaborn; use plt directly
ax.set_xlabel('Undergraduate Enrollment')
plt.show()
###Output
C:\Users\Samir\Anaconda3\lib\site-packages\statsmodels\nonparametric\kde.py:454: RuntimeWarning: invalid value encountered in greater
X = X[np.logical_and(X>clip[0], X<clip[1])] # won't work for two columns.
C:\Users\Samir\Anaconda3\lib\site-packages\statsmodels\nonparametric\kde.py:454: RuntimeWarning: invalid value encountered in less
X = X[np.logical_and(X>clip[0], X<clip[1])] # won't work for two columns.
C:\Users\Samir\Anaconda3\lib\site-packages\statsmodels\nonparametric\kdetools.py:20: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
y = X[:m/2+1] + np.r_[0,X[m/2+1:],0]*1j
###Markdown
These results are interesting and show that public universities have a somewhat wider enrollment distribution, while private for-profit universities generally show lower enrollment numbers.
**_Distribution of Retention Rate by Institution Type_**
###Code
sns.set(font_scale=1.5)  # enlarge plot fonts for readability
ax = sns.kdeplot(target_2['Retention Rate'], color="red", shade=True, label="public")
ax = sns.kdeplot(target_1['Retention Rate'], color="blue", shade=True, label="private non-profit")
ax = sns.kdeplot(target_0['Retention Rate'], color="green", shade=True, label="private for-profit")
plt.xlim(-0.5, 1.5)
ax.set_xlabel('Retention Rate')
plt.show()
###Output
C:\Users\Samir\Anaconda3\lib\site-packages\statsmodels\nonparametric\kde.py:454: RuntimeWarning: invalid value encountered in greater
X = X[np.logical_and(X>clip[0], X<clip[1])] # won't work for two columns.
C:\Users\Samir\Anaconda3\lib\site-packages\statsmodels\nonparametric\kde.py:454: RuntimeWarning: invalid value encountered in less
X = X[np.logical_and(X>clip[0], X<clip[1])] # won't work for two columns.
C:\Users\Samir\Anaconda3\lib\site-packages\statsmodels\nonparametric\kdetools.py:20: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
y = X[:m/2+1] + np.r_[0,X[m/2+1:],0]*1j
###Markdown
When looking at retention rate distributions by institution type, we see some similarities between private non-profit and public universities (particularly the similar modes), despite some slight enrollment differences. Private for-profit universities lag behind, with multiple modes. Next, we will evaluate the distributions of enrollment size and retention rates by LOCALE.
###Code
# Sort the dataframe by target
target_0 = data1.loc[data1['LOCALE'] == 'Large City']
target_1 = data1.loc[data1['LOCALE'] == "Mid-Size City"]
target_2 = data1.loc[data1['LOCALE'] == "Small City"]
target_3 = data1.loc[data1['LOCALE'] == "Large Suburb"]
target_4 = data1.loc[data1['LOCALE'] == "Mid-size Suburb"]
target_5 = data1.loc[data1['LOCALE'] == "Small Suburb"]
target_6 = data1.loc[data1['LOCALE'] == "Fringe Town"]
target_7 = data1.loc[data1['LOCALE'] == "Distant Town"]
target_8 = data1.loc[data1['LOCALE'] == "Remote Town"]
target_9 = data1.loc[data1['LOCALE'] == "Fringe Rural"]
target_10 = data1.loc[data1['LOCALE'] == "Distant Rural"]
target_11 = data1.loc[data1['LOCALE'] == "Remote Rural"]
###Output
_____no_output_____
###Markdown
**_Distribution of Number of Undergraduates Enrolled by Locale (City)_**
###Code
sns.set(font_scale=1.5)  # enlarge plot fonts for readability
ax = sns.kdeplot(target_0['UGDS'], color="red", shade=True, label="Large City")
ax = sns.kdeplot(target_1['UGDS'], color="blue", shade=True, label="Mid-Size City")
ax = sns.kdeplot(target_2['UGDS'], color="green", shade=True, label="Small City")
plt.xlim(0, 25000)
ax.set_xlabel('Undergraduate Enrollment')
plt.show()
###Output
C:\Users\Samir\Anaconda3\lib\site-packages\statsmodels\nonparametric\kde.py:454: RuntimeWarning: invalid value encountered in greater
X = X[np.logical_and(X>clip[0], X<clip[1])] # won't work for two columns.
C:\Users\Samir\Anaconda3\lib\site-packages\statsmodels\nonparametric\kde.py:454: RuntimeWarning: invalid value encountered in less
X = X[np.logical_and(X>clip[0], X<clip[1])] # won't work for two columns.
C:\Users\Samir\Anaconda3\lib\site-packages\statsmodels\nonparametric\kdetools.py:20: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
y = X[:m/2+1] + np.r_[0,X[m/2+1:],0]*1j
###Markdown
Comparing city locales, we see far more large-city institutions at the lower enrollments, while mid-size and small cities show a wider spread of universities with larger enrollments.
**_Distribution of Number of Undergraduates Enrolled by Locale (Suburb)_**
###Code
sns.set(font_scale=1.5)  # enlarge plot fonts for readability
ax = sns.kdeplot(target_3['UGDS'], color="red", shade=True, label="Large Suburb")
ax = sns.kdeplot(target_4['UGDS'], color="blue", shade=True, label="Mid-Size Suburb")
ax = sns.kdeplot(target_5['UGDS'], color="green", shade=True, label="Small Suburb")
plt.xlim(0, 25000)
ax.set_xlabel('Undergraduate Enrollment')
plt.show()
###Output
C:\Users\Samir\Anaconda3\lib\site-packages\statsmodels\nonparametric\kde.py:454: RuntimeWarning: invalid value encountered in greater
X = X[np.logical_and(X>clip[0], X<clip[1])] # won't work for two columns.
C:\Users\Samir\Anaconda3\lib\site-packages\statsmodels\nonparametric\kde.py:454: RuntimeWarning: invalid value encountered in less
X = X[np.logical_and(X>clip[0], X<clip[1])] # won't work for two columns.
C:\Users\Samir\Anaconda3\lib\site-packages\statsmodels\nonparametric\kdetools.py:20: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
y = X[:m/2+1] + np.r_[0,X[m/2+1:],0]*1j
###Markdown
For suburbs, similarly, more institutions exist in large suburbs at the lower end of enrollment, while mid-size and small suburbs see wider distributions and more institutions at higher enrollments.
**_Distribution of Number of Undergraduates Enrolled by Locale (Town/Rural)_**
###Code
sns.set(font_scale=1.5)  # enlarge plot fonts for readability
ax = sns.kdeplot(target_6['UGDS'], color="red", shade=True, label="Fringe Town")
ax = sns.kdeplot(target_7['UGDS'], color="blue", shade=True, label="Distant Town")
ax = sns.kdeplot(target_8['UGDS'], color="green", shade=True, label="Remote Town")
ax = sns.kdeplot(target_9['UGDS'], color="yellow", shade=True, label="Fringe Rural")
ax = sns.kdeplot(target_10['UGDS'], color="orange", shade=True, label="Distant Rural")
ax = sns.kdeplot(target_11['UGDS'], color="purple", shade=True, label="Remote Rural")
plt.xlim(0, 25000)
ax.set_xlabel('Undergraduate Enrollment')
plt.show()
###Output
C:\Users\Samir\Anaconda3\lib\site-packages\statsmodels\nonparametric\kde.py:454: RuntimeWarning: invalid value encountered in greater
X = X[np.logical_and(X>clip[0], X<clip[1])] # won't work for two columns.
C:\Users\Samir\Anaconda3\lib\site-packages\statsmodels\nonparametric\kde.py:454: RuntimeWarning: invalid value encountered in less
X = X[np.logical_and(X>clip[0], X<clip[1])] # won't work for two columns.
C:\Users\Samir\Anaconda3\lib\site-packages\statsmodels\nonparametric\kdetools.py:20: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
y = X[:m/2+1] + np.r_[0,X[m/2+1:],0]*1j
###Markdown
The distributions here appear to show that, as expected, many rural schools exist in the lower enrollment tiers, with a shift upward seen for schools in towns.
**_Distribution of Retention Rate by Locale (City/Suburb)_**
###Code
sns.set(font_scale=1.5)  # enlarge plot fonts for readability
ax = sns.kdeplot(target_0['Retention Rate'], color="red", shade=True, label="Large City")
ax = sns.kdeplot(target_1['Retention Rate'], color="blue", shade=True, label="Mid Size City")
ax = sns.kdeplot(target_2['Retention Rate'], color="green", shade=True, label="Small City")
ax = sns.kdeplot(target_3['Retention Rate'], color="yellow", shade=True, label="Large Suburb")
ax = sns.kdeplot(target_4['Retention Rate'], color="orange", shade=True, label="Mid Size Suburb")
ax = sns.kdeplot(target_5['Retention Rate'], color="purple", shade=True, label="Small Suburb")
plt.xlim(-0.5, 1.5)
ax.set_xlabel('Retention Rate')
plt.show()
###Output
C:\Users\Samir\Anaconda3\lib\site-packages\statsmodels\nonparametric\kde.py:454: RuntimeWarning: invalid value encountered in greater
X = X[np.logical_and(X>clip[0], X<clip[1])] # won't work for two columns.
C:\Users\Samir\Anaconda3\lib\site-packages\statsmodels\nonparametric\kde.py:454: RuntimeWarning: invalid value encountered in less
X = X[np.logical_and(X>clip[0], X<clip[1])] # won't work for two columns.
C:\Users\Samir\Anaconda3\lib\site-packages\statsmodels\nonparametric\kdetools.py:20: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
y = X[:m/2+1] + np.r_[0,X[m/2+1:],0]*1j
###Markdown
Viewing the distributions helps us assess the frequency of institutions at particular retention rates, but we may need a different plot to extract more detail.
**_Distribution of Retention Rate by Locale (Town/Rural)_**
###Code
sns.set(font_scale=1.5)  # enlarge plot fonts for readability
ax = sns.kdeplot(target_6['Retention Rate'], color="red", shade=True, label="Fringe Town")
ax = sns.kdeplot(target_7['Retention Rate'], color="blue", shade=True, label="Distant Town")
ax = sns.kdeplot(target_8['Retention Rate'], color="green", shade=True, label="Remote Town")
ax = sns.kdeplot(target_9['Retention Rate'], color="yellow", shade=True, label="Fringe Rural")
ax = sns.kdeplot(target_10['Retention Rate'], color="orange", shade=True, label="Distant Rural")
ax = sns.kdeplot(target_11['Retention Rate'], color="purple", shade=True, label="Remote Rural")
plt.xlim(-0.5, 1.5)
ax.set_xlabel('Retention Rate')
plt.show()
###Output
C:\Users\Samir\Anaconda3\lib\site-packages\statsmodels\nonparametric\kde.py:454: RuntimeWarning: invalid value encountered in greater
X = X[np.logical_and(X>clip[0], X<clip[1])] # won't work for two columns.
C:\Users\Samir\Anaconda3\lib\site-packages\statsmodels\nonparametric\kde.py:454: RuntimeWarning: invalid value encountered in less
X = X[np.logical_and(X>clip[0], X<clip[1])] # won't work for two columns.
C:\Users\Samir\Anaconda3\lib\site-packages\statsmodels\nonparametric\kdetools.py:20: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
y = X[:m/2+1] + np.r_[0,X[m/2+1:],0]*1j
###Markdown
Again, viewing the distributions helps us assess the frequency of institutions at particular retention rates, but we may need a different plot to extract more detail.
Research Analysis
To build on some of the high-level insights, we will assess the various parameters of interest with respect to retention rate in a multivariate fashion, to help identify where effects are coming from.
Hypothesis H1: _Public institutions in the U.S. have the highest student retention._
First we will look at retention rate with respect to the Retention Category (which tells us if a student is part-time or full-time) and split this assessment by the highest degree offered at the institution (a Bachelor's degree or Graduate degree).
###Code
sns.set(font_scale=1.5)  # enlarge plot fonts for readability
g = sns.factorplot(x="Retention Category", y="Retention Rate",
hue="CONTROL",
col="HIGHDEG",
data=data1, kind="box",
size=8, aspect=0.5)
plt.subplots_adjust(top=0.8)
g.fig.suptitle('University Retention Rate by Student Status', size = 30)
g._legend.set_title("Control of Institution")
plt.savefig('./images/figure1.png')
plt.show()
###Output
_____no_output_____
###Markdown
Findings
We see in the results of the two sets of boxplots that retention rate is consistently lowest for part-time 4-year programs, regardless of institution type or highest degree offering. And across the board, private for-profit institutions have the lowest retention rate in each grouping. Hypothesis (H1) supposed that public institutions have the highest retention rate. This does appear to be the case, with the caveat that private non-profit institutions (despite lower enrollments, as seen earlier) share similar retention rates when broken out by highest degree offering and part-time/full-time comparisons.
Hypothesis H2: _Institutions with remote/rural locales result in higher student retention._
Next we will look at retention rate with respect to the locale (where the institution is located) and the Retention Category (which tells us if a student is part-time or full-time).
###Code
sns.set(font_scale=1.5)  # enlarge plot fonts for readability
g = sns.factorplot(x="Retention Category", y="Retention Rate",
hue="LOCALE",
# col="HIGHDEG",
data=data1, kind="box",
size=10, aspect=1)
plt.subplots_adjust(top=0.8)
g.fig.suptitle('University Retention Rate by Student Status', size = 23)
g._legend.set_title("Locale")
plt.savefig('./images/figure4.png')
plt.show()
###Output
_____no_output_____
###Markdown
Findings
In this plot, we see high variability in the part-time retention rate results at 4-year institutions, with locales showing similar medians. However, the full-time retention rate data shows a steady decrease (showing a bit more detail than we could infer from the earlier distribution plots in the EDA): as the locale becomes less urbanized and more rural/remote, the retention rate appears to drop. This is a result that opposes the hypothesis, which suggested institutions with remote/rural locales would see higher retention rates.
Hypothesis (H3): _Institutions providing higher percentages of financial aid are more likely to see higher retention._
Lastly, we will look at the final hypothesis by assessing the relationship between retention rates and student loans and grants (each, respectively). Again, we will break out the data and compare full-time and part-time results.
###Code
sns.lmplot('PCTFLOAN', # Horizontal axis
'Retention Rate', # Vertical axis
data=data1, # Data source
           fit_reg=True, # Fit a regression line per hue group
hue="Retention Category", # Set color
order = 1,
scatter_kws={"marker": "D", # Set marker style
"s": 2}) # S marker size
# Set title
plt.title('Retention Rate by % of Students Receiving Loans')
# Set x-axis label
plt.xlabel('% of Undergrads Receiving Student Loans')
# Set y-axis label
plt.ylabel('Retention Rate')
plt.savefig('./images/figure6.png')
plt.show()
###Output
_____no_output_____
###Markdown
###Code
sns.lmplot('PCTPELL', # Horizontal axis
'Retention Rate', # Vertical axis
data=data1, # Data source
           fit_reg=True, # Fit a regression line per hue group
hue="Retention Category", # Set color
order = 1,
scatter_kws={"marker": "D", # Set marker style
"s": 2}) # S marker size
# Set title
plt.title('Retention Rate by % of Students Receiving Pell Grants')
# Set x-axis label
plt.xlabel('% of Undergrads Receiving Pell Grants')
# Set y-axis label
plt.ylabel('Retention Rate')
plt.savefig('./images/figure7.png')
plt.show()
###Output
_____no_output_____
###Markdown
As a supplement to the plots and to properly assess the Pearson correlations by Retention Category (Full-Time 4-Year and Part-Time 4-Year), we will re-calculate the values:
###Code
c = data1.corr()
s = c.unstack()
so = s.sort_values(kind="quicksort")
corrmat = pd.DataFrame(so, columns = ['corr'], index=None)
corr_rr = corrmat.loc['Retention Rate'].sort_values(by = 'corr', ascending = False)
#Pearson correlation in respect to Loan offerings of the instiution
corr_rr.loc['PCTFLOAN']
#Pearson correlation in respect to Pell Grant offerings of the instiution
corr_rr.loc['PCTPELL']
###Output
_____no_output_____
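###Markdown
The next section plots from an `earning_analysis` dataframe that is not constructed in the cells shown here. A hedged sketch of how such a frame could be assembled (the frame names refer to the cleaned scorecard and earnings data built later in this document; the merge key and the numeric coercion are assumptions, not the author's code):
###Code
# illustrative only: join scorecard features with post-school earnings by school name
earning_analysis = scorecard_clean.merge(earning_clean, on='Institution_Name')
# earnings columns may load as strings because of 'PrivacySuppressed'; coerce for plotting
earning_analysis['MD_EARN_WNE_P10'] = pd.to_numeric(
    earning_analysis['MD_EARN_WNE_P10'], errors='coerce')
###Output
_____no_output_____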
###Markdown
Visualizing the trend
The graph below visualizes the correlation between cost of attendance and median earnings 10 years after entry; the regression line conveys the same message as the regression summary: the higher the average cost of attendance, the higher the median earnings 10 years after entry.
###Code
# Plot the correlation between earnings and average cost of attendance
sns.regplot(x='COSTT4_A',
y='MD_EARN_WNE_P10',
data=earning_analysis).set_title('Cost of Attendance vs. Post-school Earnings');
###Output
_____no_output_____
###Markdown
The regression scatterplot below visualizes the negative correlation between admission rate and post-school earnings, and it suggests more of the potential reasons for that correlation than the regression summary does. We can see clusters of high-admission-rate schools around the 0.6 to 0.8 mark that show no clear relationship with post-school earnings. At the same time, schools with less strict admission criteria may not be able to filter out unqualified candidates, resulting in lower post-school earnings.
###Code
# Plot the correlation between earnings and admission rate
sns.regplot(x='ADM_RATE',
y='MD_EARN_WNE_P10',
data=earning_analysis).set_title('Admission Rate vs. Post-school Earnings');
###Output
_____no_output_____
###Markdown
From the visualization below, we can see several outlier schools on the lower left-hand side of the plot; further investigation could start from these potential outliers.
###Code
# Plot the correlation between earnings and pcnt federal loan
sns.regplot(x='PCTFLOAN',
y='MD_EARN_WNE_P10',
data=earning_analysis).set_title('Pcnt Federal Loan vs. Post-school Earnings');
###Output
_____no_output_____
###Markdown
The graph below paints a slightly different picture. While still fitting average cost of attendance against post-school earnings, it also compares private and public schools separately. We can see the difference between the regression lines for private and public schools, although the scatterplot shows that while average cost of attendance stays below 50,000 per year, there is no significant difference in post-school earnings between the two types of school control.
###Code
sns.lmplot(x="COSTT4_A", y="MD_EARN_WNE_P10",
data=earning_analysis,
fit_reg=True,
hue='CONTROL_MERGED',
legend=True).set_titles('Cost of Attendance vs. Post-school Earnings By types of school');
###Output
_____no_output_____
###Markdown
College Scorecard Analysis
**Yumeng Ding**
**DATA512 Human Centered Data Science**
**University of Washington**
Introduction
Choosing which colleges to apply to is an important decision for any graduating senior, and the question is even broader and more influential for an international student trying to pursue education in the United States. My analysis will adopt an international student as my user persona and explore the [College Scorecard](https://collegescorecard.ed.gov/data/) dataset made public by [the Obama Administration designed to increase transparency](https://obamawhitehouse.archives.gov/the-press-office/2015/09/12/fact-sheet-empowering-students-choose-college-right-them) in the education system. What do international students consider when applying to colleges? First, they will mainly focus on four-year degree granting institutions, since these provide better faculty resources and research opportunities. Secondly, they will want to find universities that are inclusive of different races and promote racial diversity. Lastly, they want an education that will help their future careers; in this analysis, we will use post-school earnings as a potential indicator of career trajectory. Motivated by the above scenario, this project report will use the most recent College Scorecard dataset and post-school earnings data to explore the differences in education availability and accessibility across states and regions, and to help identify potential drivers of higher post-school earnings.
Background
Since the release of the College Scorecard data, the U.S. Department of Education has built a [tool](https://collegescorecard.ed.gov) for students and parents to find and compare schools. The major selection drop-downs are programs/degrees, location, size and name. There is also an advanced search option where users can choose type of school, specialized mission and religious affiliation. However, there isn't a central location where information over all the states is visualized for a better general picture. Moreover, the website does not provide linkage between schools and post-school earnings information, even though future earnings is one of the decision drivers for some students. Therefore, I want to leverage these public data to provide an analysis of higher education availability across different states while also providing insights into the factors behind higher post-school earnings.
Methods
###Code
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
plt.rcParams['figure.figsize'] = (12,7)
###Output
_____no_output_____
###Markdown
Data Overview
There are two main datasets used in this report:
* **'Most_recent_data.csv'** contains information on 7175 US higher education institutions in the 2016-2017 year. The complete dataset has 7175 rows and 1899 columns, with variables including geographical locations, degree offerings, majors, student body information, faculty information, etc. We will be limiting to five areas in this analysis, focusing on availability of schools and racial composition.
* **'Post-school_earnings.csv'** contains the post-school earnings report for the same 7175 US higher education institutions, reported in 2017. The data includes comprehensive measures of employment status and earnings data subgrouped into 10, 8, 7 and 6 years after entry.
All datasets used for this project are publicly available and accessible under [CC0 1.0 Universal (CC0 1.0) Public Domain Dedication](https://creativecommons.org/publicdomain/zero/1.0/) within the United States.
Data Preparation
Load in both the most recent scorecard dataset and the post-school earnings dataset
Both datasets can be downloaded locally for processing and analysis. The 'Most_recent_data.csv' is compressed to fit on GitHub, so the user will need to unzip the file before running the code. Users can also use API calls to access the data [here](http://api.data.gov/ed/collegescorecard/). The full scorecard dataset contains 7175 rows and 1899 columns as described below. The full post-school earnings dataset contains 7175 rows and 92 columns as described below. We will clean both datasets in the next section in preparation for the analysis.
###Code
scorecard = pd.read_csv('Most_recent_data.csv', sep=',', header='infer', low_memory=False)
scorecard.shape
scorecard.head()
post_school_earning = pd.read_csv('Post-school_earnings.csv', sep=',', header='infer', low_memory=False)
post_school_earning.shape
post_school_earning.head()
###Output
_____no_output_____
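###Markdown
One caveat worth checking (added for illustration): the earnings columns may load as object dtype because of 'PrivacySuppressed' entries, in which case they would need numeric coercion before any arithmetic:
###Code
post_school_earning['MD_EARN_WNE_P10'].dtype
###Output
_____no_output_____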
###Markdown
Clean scorecard and post_school_earning to only keep columns relevant for this project
For the purpose of this analysis, we will keep 24 variables from 5 main categories of information from the College Scorecard dataset:
School: Name, Location, Degree Type, Public/Private Nonprofit/Private For-Profit ['INSTNM','STABBR','ZIP','SCH_DEG','PREDDEG','CONTROL','REGION']
Admissions: Acceptance Rate, SAT scores ['ADM_RATE','SATVRMID','SATMTMID','SATWRMID']
Student: Number of Undergraduate Students, Undergraduate Student Body by Race/Ethnicity ['UG','UG_NRA','UG_UNKN','UG_WHITENH','UG_BLACKNH','UG_API','UG_AIANOLD','UG_HISPOLD']
Cost: Average Cost of Attendance, Tuition and Fees, Average Faculty Salary ['COSTT4_A','TUITIONFEE_IN','TUITIONFEE_OUT','AVGFACSAL']
Aid: Percent of Undergraduates Receiving Federal Loans ['PCTFLOAN']
We also renamed the Institution Name, State, and Racial Composition columns to be more interpretable.
We cleaned the post-school earnings data to keep only information on unemployment rate and post-school earnings 10 years and 6 years after entry, to provide simpler interpretations:
School: Name ['INSTNM']
Unemployment: Unemployment Rate ['UNEMP_RATE']
Post-school Earnings: Statistics for earnings 10 years after entry, Statistics for earnings 6 years after entry ['COUNT_NWNE_P10','COUNT_WNE_P10','MN_EARN_WNE_P10','MD_EARN_WNE_P10', 'PCT10_EARN_WNE_P10','PCT25_EARN_WNE_P10','PCT75_EARN_WNE_P10', 'PCT90_EARN_WNE_P10','SD_EARN_WNE_P10', 'COUNT_NWNE_P6','COUNT_WNE_P6','MN_EARN_WNE_P6','MD_EARN_WNE_P6', 'PCT10_EARN_WNE_P6','PCT25_EARN_WNE_P6','PCT75_EARN_WNE_P6', 'PCT90_EARN_WNE_P6','SD_EARN_WNE_P6']
We also renamed Institution Name in the earnings dataset for consistency. The cleaned earnings data will now contain 20 variables.
###Code
scorecard_clean = scorecard[['INSTNM','STABBR','ZIP','SCH_DEG','PREDDEG','CONTROL','REGION',
'ADM_RATE','SATVRMID','SATMTMID','SATWRMID',
'UG','UG_NRA','UG_UNKN','UG_WHITENH','UG_BLACKNH','UG_API','UG_AIANOLD','UG_HISPOLD',
'COSTT4_A','TUITIONFEE_IN','TUITIONFEE_OUT','AVGFACSAL','PCTFLOAN']]
scorecard_clean = scorecard_clean.rename(columns={'INSTNM':'Institution_Name','STABBR':'STATE',
'UG_NRA':'Non_Resident_Alien','UG_UNKN':'Unknown_Race',
'UG_WHITENH':'White','UG_BLACKNH':'Black',
'UG_API':'Asian_Pacific_Islander','UG_AIANOLD':'Native_American',
'UG_HISPOLD':'Hispanic'})
scorecard_clean.shape
scorecard_clean.head()
earning_clean = post_school_earning[['INSTNM','UNEMP_RATE',
'COUNT_NWNE_P10','COUNT_WNE_P10','MN_EARN_WNE_P10','MD_EARN_WNE_P10',
'PCT10_EARN_WNE_P10','PCT25_EARN_WNE_P10','PCT75_EARN_WNE_P10',
'PCT90_EARN_WNE_P10','SD_EARN_WNE_P10',
'COUNT_NWNE_P6','COUNT_WNE_P6','MN_EARN_WNE_P6','MD_EARN_WNE_P6',
'PCT10_EARN_WNE_P6','PCT25_EARN_WNE_P6','PCT75_EARN_WNE_P6',
'PCT90_EARN_WNE_P6','SD_EARN_WNE_P6']]
earning_clean = earning_clean.rename(columns={'INSTNM':'Institution_Name'})
earning_clean.shape
earning_clean.head()
###Output
_____no_output_____
###Markdown
Filter colleges to only include 4-year-degree granting schools before conducting analysis
In the College Scorecard Data Dictionary, the column PREDDEG (predominant undergraduate degree awarded) can take 5 values, as described below:
0 Not classified
1 Predominantly certificate-degree granting
2 Predominantly associate's-degree granting
3 Predominantly bachelor's-degree granting
4 Entirely graduate-degree granting
Since the data for predominantly certificate-degree and associate's-degree granting schools contains more inconsistencies, and entirely graduate-degree granting schools are out of scope for this analysis, we will limit to predominantly bachelor's-degree granting colleges below. After limiting to bachelor's-degree granting colleges, we have 2113 colleges left in our scorecard dataset.
###Code
# check unique values of PREDDEG before limiting
scorecard_clean.PREDDEG.unique()
# keep rows of colleges that are predominantly bachelor's degree granting (PREDDEG = 3)
scorecard_clean = scorecard_clean.loc[scorecard_clean['PREDDEG'] == 3]
scorecard_clean.shape
###Output
_____no_output_____
###Markdown
Filter colleges to include only continental US schools using region codes
Since the College Scorecard dataset covers all colleges in both the continental US and other US territories, and the territories are not within the scope of this analysis, we will limit to only colleges in the continental US below. The Scorecard data is bucketed into the regions listed here; we will group by both states and regions and count the number of schools available in each:
0 U.S. Service Schools
1 New England (CT, ME, MA, NH, RI, VT)
2 Mid East (DE, DC, MD, NJ, NY, PA)
3 Great Lakes (IL, IN, MI, OH, WI)
4 Plains (IA, KS, MN, MO, NE, ND, SD)
5 Southeast (AL, AR, FL, GA, KY, LA, MS, NC, SC, TN, VA, WV)
6 Southwest (AZ, NM, OK, TX)
7 Rocky Mountains (CO, ID, MT, UT, WY)
8 Far West (AK, CA, HI, NV, OR, WA)
9 Outlying Areas (AS, FM, GU, MH, MP, PR, PW, VI)
For the purpose of this analysis, we will focus on the continental US. After limiting to the continental US, we have 2062 colleges with 25 variables.
###Code
# build a dictionary for region and limit to continental US
# and then rename into interpretable Region Names instead of numbers
region_lookup = {
'0':'U.S. Service Schools',
'1':'New England',
'2':'Mid East',
'3':'Great Lakes',
'4':'Plains',
'5':'Southeast',
'6':'Southwest',
'7':'Rocky Mountains',
'8':'Far West',
'9':'Outlying Areas'
}
# check unique values of REGION before limiting
scorecard_clean.REGION.unique()
# keep rows of colleges that are within continental US
scorecard_clean = scorecard_clean.loc[~scorecard_clean['REGION'].isin([0,9])]
scorecard_clean['REGION_RENAME'] = scorecard_clean['REGION'].apply(lambda x: region_lookup[str(x)])
scorecard_clean.shape
###Output
_____no_output_____
###Markdown
We will build our analysis on the scorecard_clean dataframe, which contains 2062 rows and 25 columns.
Research Questions & Findings
In this section, we will use the cleaned versions of the scorecard and earnings data from the above section to answer three research questions that are intended to help our user persona choose colleges to apply to.
Research Questions:
1. Which states have a higher cost of enrollment? Which states have the most four-year degree granting institutions available?
2. How does racial diversity differ by region and type of college (public vs. private)?
3. What are the potential factors behind higher post-school earnings?
RQ1: Segment cost of enrollment by state; are there regional distinctions in average cost of enrollment, what are the potential implications behind the regional cost differences, and how would that affect education accessibility?
* **Availability (in terms of count of colleges):** At the regional level, Southeast, Mid East and Great Lakes have the most 4-year degree granting schools, with Florida, North Carolina and Georgia contributing 198 colleges in the Southeast region. In general, both the east coast and the west coast have more educational resources, in terms of count of colleges, compared to the middle regions. On the state level, New York and California have 175 and 155 colleges available respectively, which is more than entire regions like Southwest and Rocky Mountains. These numbers show a significant skew toward the coastal states, which is a potential indicator of educational resource inequality. For example, a student living in Wyoming will have only one choice for bachelor's-degree granting colleges, whereas a student living in New York can choose from 175 different colleges. This will also impact the chances of high school students getting admitted into a 4-year degree granting college, since each college has an upper limit on capacity and faculty resources.
###Code
# check unique values of REGION
scorecard_clean.REGION_RENAME.unique()
# check unique values of STATES, this will include Washington District of Columbia as 'DC'
scorecard_clean.STATE.unique()
# count total number of colleges in each region using group by
region_college_cnt = scorecard_clean.groupby('REGION_RENAME').count()['Institution_Name']
# count total number of colleges in each state using group by
state_college_cnt = scorecard_clean.groupby('STATE').count()['Institution_Name']
# output count of colleges by regions
df_region_college_cnt = region_college_cnt.to_frame(name='college_cnt')
df_region_college_cnt
# output the top 10 and bottom 10 states by total number of colleges
df_state_college_cnt = state_college_cnt.to_frame(name='college_cnt')
df_state_college_cnt.sort_values(by=['college_cnt'],ascending=False).head(10)
df_state_college_cnt.sort_values(by=['college_cnt'],ascending=False).tail(10)
###Output
_____no_output_____
###Markdown
* **Accessibility (in terms of average cost of attendance):** At the region level, New England has the highest average cost of attendance, at over $42,000 per year. The top three states by average cost of attendance are all in the New England region, which is the main contributor to its average price of $42,911 per year. Even though there are 166 colleges to choose from in this region, a family with below-average income will not be able to afford such a high cost of attendance. Another point worth calling out, in conjunction with the number of colleges available, is that Iowa has only 35 colleges, yet its average cost of attendance is among the top 10 across states. The limited resources and high cost of attendance will hinder students' ability to obtain higher education in the state of Iowa.
###Code
# avg cost of attendance in each region using group by
region_avg_cost = scorecard_clean.groupby('REGION_RENAME').mean()['COSTT4_A']
# avg cost of attendance in each state using group by
state_avg_cost = scorecard_clean.groupby('STATE').mean()['COSTT4_A']
# output avg cost of attendance by region
df_region_avg_cost = region_avg_cost.to_frame(name='avg_attendance_cost')
df_region_avg_cost
# output the top 10 and bottom 10 states by average cost of attendance
df_state_avg_cost = state_avg_cost.to_frame(name='avg_attendance_cost')
df_state_avg_cost.sort_values(by=['avg_attendance_cost'],ascending=False).head(10)
df_state_avg_cost.sort_values(by=['avg_attendance_cost'],ascending=False).tail(10)
###Output
_____no_output_____
###Markdown
Visualize the Availability & Accessibility on a Map Since it is easier for end users to interpret results by comparing maps of the US, this section visualizes the tables above on US maps for comparison. Three dictionaries are defined at the beginning for lookup purposes:
- short_state_names: lookup from abbreviated state codes to actual state names
- state_count_dict: lookup from abbreviated state codes to count of colleges
- state_avg_cost_dict: lookup from abbreviated state codes to average cost of attendance
These dictionaries are used with Matplotlib's Basemap toolkit to draw a state-level map of the US, with hue denoting the count of colleges and the cost of attendance aggregated at the state level.
###Code
# create short_state_names dictionary for lookup between abbreviated state codes and state names
short_state_names = {
'AK': 'Alaska',
'AL': 'Alabama',
'AR': 'Arkansas',
'AS': 'American Samoa',
'AZ': 'Arizona',
'CA': 'California',
'CO': 'Colorado',
'CT': 'Connecticut',
'DC': 'District of Columbia',
'DE': 'Delaware',
'FL': 'Florida',
'GA': 'Georgia',
'GU': 'Guam',
'HI': 'Hawaii',
'IA': 'Iowa',
'ID': 'Idaho',
'IL': 'Illinois',
'IN': 'Indiana',
'KS': 'Kansas',
'KY': 'Kentucky',
'LA': 'Louisiana',
'MA': 'Massachusetts',
'MD': 'Maryland',
'ME': 'Maine',
'MI': 'Michigan',
'MN': 'Minnesota',
'MO': 'Missouri',
'MP': 'Northern Mariana Islands',
'MS': 'Mississippi',
'MT': 'Montana',
'NA': 'National',
'NC': 'North Carolina',
'ND': 'North Dakota',
'NE': 'Nebraska',
'NH': 'New Hampshire',
'NJ': 'New Jersey',
'NM': 'New Mexico',
'NV': 'Nevada',
'NY': 'New York',
'OH': 'Ohio',
'OK': 'Oklahoma',
'OR': 'Oregon',
'PA': 'Pennsylvania',
'PR': 'Puerto Rico',
'RI': 'Rhode Island',
'SC': 'South Carolina',
'SD': 'South Dakota',
'TN': 'Tennessee',
'TX': 'Texas',
'UT': 'Utah',
'VA': 'Virginia',
'VI': 'Virgin Islands',
'VT': 'Vermont',
'WA': 'Washington',
'WI': 'Wisconsin',
'WV': 'West Virginia',
    'WY': 'Wyoming'
}
# define dictionary for lookup from abbreviated state codes to count of colleges
state_count_dict = state_college_cnt.to_dict()
# define dictionary for lookup from abbreviated state codes to average cost of attendance
state_avg_cost_dict = state_avg_cost.to_dict()
# import libraries for graphing US map by state contours
from mpl_toolkits.basemap import Basemap
from geopy.geocoders import Nominatim
import math
from matplotlib.colors import rgb2hex, Normalize
from matplotlib.patches import Polygon
from matplotlib.colorbar import ColorbarBase
###Output
_____no_output_____
###Markdown
*The Basemap package requires the download of st99_d00 shapefiles for contouring the states. The shapefiles are covered under the [license](https://github.com/matplotlib/basemap) of the Basemap package, which states: "Permission to use, copy, modify, and distribute this software and its documentation for any purpose and without fee is hereby granted, provided that the above copyright notices appear in all copies and that both the copyright notices and this permission notice appear in supporting documentation." Through the maps below, we can easily and clearly see the differences in count and average cost across the US. Red-Yellow-Green color scales are used for the count of colleges and (reversed) for the average cost of attendance, so just by looking at the maps, users can see that New York and California offer more 4-year degree-granting colleges, whereas states like Massachusetts and Vermont cost more to attend on average. Another interesting and reassuring read from the maps is that even though Wyoming has only one 4-year degree-granting college, the cost to attend it is also the lowest across all states, giving its residents a better chance to attain higher education.
###Code
# the below chunk of code is used to plot the contour map of continental US states
# as well as using hue to denote the difference in count of colleges in each state
# define fig and ax for plotting
fig, ax = plt.subplots()
# define map variable to load shape file and draw bounds
map = Basemap(llcrnrlon=-119,llcrnrlat=20,urcrnrlon=-64,urcrnrlat=49,
projection='lcc',lat_1=33,lat_2=45,lon_0=-95)
# load the shapefile, use the name 'states' to call states
map.readshapefile('st99_d00', name='states', drawbounds=True)
# use colors as Red, Yellow, Green hue to denote higher counts to lower counts
colors={}
statenames=[]
cmap = plt.cm.RdYlGn
vmin = 0; vmax = 175 # set range as the highest count to the lowest count
norm = Normalize(vmin=vmin, vmax=vmax) # normalize the min and max for math calculation
# Loop through each state to draw the contours and color the states
# Since we are only concerned with continental US, we are skipping Puerto Rico in the loop
for shapedict in map.states_info:
    statename = shapedict['NAME']
    # Skip Puerto Rico.
    if statename not in ['Puerto Rico']:
        # reverse-lookup the two-letter state code from the full state name
        state = [key for key, value in short_state_names.items() if value == statename][0]
        count = state_count_dict[state]
        # calling the colormap with a value between 0 and 1 returns an rgba value
        colors[statename] = cmap(np.sqrt((count - vmin) / (vmax - vmin)))[:3]
    statenames.append(statename)
for nshape,seg in enumerate(map.states):
# Skip Puerto Rico.
if statenames[nshape] not in ['Puerto Rico']:
color = rgb2hex(colors[statenames[nshape]])
poly = Polygon(seg,facecolor=color,edgecolor=color)
ax.add_patch(poly)
# construct a colorbar for scale
cax = fig.add_axes([0.27, 0.1, 0.5, 0.05]) # position of the color bar
cb = ColorbarBase(cax,cmap=cmap,norm=norm, orientation='horizontal')
cb.ax.set_xlabel('Count of Colleges by States',fontsize=20)
plt.show();
# the below chunk of code is used to plot the contour map of continental US states
# as well as using hue to denote the difference in average cost of attendance in each state
# define fig and ax for plotting
fig, ax = plt.subplots()
# define map variable to load shape file and draw bounds
map = Basemap(llcrnrlon=-119,llcrnrlat=20,urcrnrlon=-64,urcrnrlat=49,
projection='lcc',lat_1=33,lat_2=45,lon_0=-95)
# load the shapefile, use the name 'states' to call states
map.readshapefile('st99_d00', name='states', drawbounds=True)
# use colors as Red, Yellow, Green hue to denote higher avg cost to lower avg cost
colors={}
statenames=[]
cmap = plt.cm.RdYlGn_r
vmin = 20000; vmax = 45000 # set range as the highest avg cost to the lowest avg cost
norm = Normalize(vmin=vmin, vmax=vmax) # normalize the min and max for math calculation
# Loop through each state to draw the contours and color the states
# Since we are only concerned with continental US, we are skipping Puerto Rico in the loop
for shapedict in map.states_info:
    statename = shapedict['NAME']
    # Skip Puerto Rico.
    if statename not in ['Puerto Rico']:
        # reverse-lookup the two-letter state code from the full state name
        state = [key for key, value in short_state_names.items() if value == statename][0]
        cost = state_avg_cost_dict[state]
        # calling the colormap with a value between 0 and 1 returns an rgba value
        colors[statename] = cmap(np.sqrt((cost - vmin) / (vmax - vmin)))[:3]
    statenames.append(statename)
for nshape,seg in enumerate(map.states):
# Skip Puerto Rico.
if statenames[nshape] not in ['Puerto Rico']:
color = rgb2hex(colors[statenames[nshape]])
poly = Polygon(seg,facecolor=color,edgecolor=color)
ax.add_patch(poly)
# construct a colorbar for scale
cax = fig.add_axes([0.27, 0.1, 0.5, 0.05]) # position of the color bar
cb = ColorbarBase(cax,cmap=cmap,norm=norm, orientation='horizontal')
cb.ax.set_xlabel('Average Cost of Enrollment by States',fontsize=20)
plt.show();
###Output
_____no_output_____
###Markdown
RQ2: How does racial diversity change across different types of colleges (public vs. private) and different regions? Racial diversity is an important consideration for international students when applying to colleges, so this section segments the racial composition by type of college and by region to help users visualize the distribution of diversity. For easier interpretation, there are two additional data preparation steps:
1. Create a new dataframe that includes only REGION, CONTROL and racial information: ['Institution_Name','REGION_RENAME','CONTROL','UG', 'Non_Resident_Alien','Unknown_Race','White','Black', 'Asian_Pacific_Islander','Native_American','Hispanic']
2. Merge the two kinds of private control into 'Private' for easier side-by-side comparison. ['CONTROL']: 1: Public, 2: Private nonprofit, 3: Private for-profit, merged as ['CONTROL_MERGED']: 'Private' or 'Public'.
###Code
# create a dataframe that includes only school name, region, control and undergraduate racial composition
# .copy() avoids a SettingWithCopyWarning when we assign a new column below
racial_composition = scorecard_clean[['Institution_Name', 'REGION_RENAME', 'CONTROL', 'UG',
                                      'Non_Resident_Alien', 'Unknown_Race', 'White', 'Black',
                                      'Asian_Pacific_Islander', 'Native_American', 'Hispanic']].copy()
racial_composition.shape
racial_composition.head()
# check unique values of CONTROL before merging
racial_composition.CONTROL.unique()
# merge control 2 and 3 into Private and the rest as Public
racial_composition['CONTROL_MERGED'] = np.where(racial_composition['CONTROL'].isin([2,3]), 'Private','Public')
racial_composition.head()
###Output
_____no_output_____
###Markdown
* **Racial Diversity by Control Type**: We can see from the bar chart below that public schools have a higher percentage of all minority racial groups (Black, Asian/Pacific Islander, Native American and Hispanic), whereas private schools have a higher percentage of White students. One thing to point out, which was not hypothesized in the project plan, is that private schools have a significantly higher percentage of non-resident alien students. One reason is that public schools are, for funding reasons, intended to provide higher education to citizens and residents within their own state, while private schools do not prioritize in-state residents. The student selection process, and therefore the composition, differs between these two types of school control. As an international student, it might be better to apply to more private schools than public schools, both for a higher chance of admission and a more diverse student body.
###Code
# pivot the dataframe to calculate mean racial composition by Private and Public Schools
pivot_racial = racial_composition.groupby('CONTROL_MERGED').agg({'Non_Resident_Alien':'mean',
'Unknown_Race':'mean',
'White':'mean','Black':'mean',
'Asian_Pacific_Islander':'mean',
'Native_American':'mean','Hispanic':'mean'})
# Transpose the pivot for side-by-side comparison
pivot_racial = pivot_racial.T
pivot_racial
# Use bar chart to compare each race between Private and Public Schools
pivot_racial.plot(kind='barh', grid=True)
plt.xlabel('Percent of Race',fontsize=15)
plt.ylabel('Race',fontsize=15)
plt.title('Percent of Race By Private vs. Public Institutions',fontsize=20)
plt.show();
###Output
_____no_output_____
###Markdown
* **Racial Diversity by Region**: It is very clear from the stacked bar chart below that the Far West and Southwest have the most racially diverse student bodies. The composition differs between regions: the Far West has more Asian/Pacific Islander and Hispanic students, whereas the Southwest has more Black students. For international students, note that the Far West also has the largest percentage of non-resident alien students.
###Code
# pivot the dataframe to calculate mean racial composition by Regions
pivot_region = racial_composition.groupby('REGION_RENAME').agg({'White':'mean','Black':'mean',
'Asian_Pacific_Islander':'mean',
'Native_American':'mean',
'Hispanic':'mean',
'Non_Resident_Alien':'mean','Unknown_Race':'mean'})
pivot_region
# Use stacked bar chart to compare racial composition across difference regions
pivot_region.plot.bar(stacked=True)
plt.ylabel('Percent of Race',fontsize=15)
plt.xlabel('Region',fontsize=15)
plt.title('Percent of Race By Region',fontsize=20)
plt.show();
###Output
_____no_output_____
###Markdown
RQ3: What are the factors behind higher post-school earnings? To make the regression models meaningful, we need one last data preparation step before building a regression model to determine which factors are potential indicators of higher post-school earnings. We join scorecard_clean and the earnings data after removing all Null and 'PrivacySuppressed' values from the earnings dataframe. We keep only the variables relevant to this question, and we use median measures of post-school earnings, since the median is less sensitive to skew:
- scorecard_limited: ['Institution_Name','STATE','CONTROL','ADM_RATE','COSTT4_A','AVGFACSAL','PCTFLOAN']
- earning_limited: ['Institution_Name','UNEMP_RATE','MD_EARN_WNE_P10','MD_EARN_WNE_P6']
We again combine the control types into only Private vs. Public to facilitate side-by-side comparison.
###Code
# limiting both scorecard data and earnings data to include only relevant variables
# remove Null and PrivacySuppressed values in earnings data for later regression model
scorecard_clean.shape
earning_clean.shape
scorecard_limited = scorecard_clean[['Institution_Name','STATE','CONTROL','ADM_RATE','COSTT4_A','AVGFACSAL','PCTFLOAN']].copy()  # .copy() avoids SettingWithCopyWarning below
earning_limited = earning_clean[['Institution_Name','UNEMP_RATE','MD_EARN_WNE_P10','MD_EARN_WNE_P6']].dropna()
earning_limited.shape
# merging two types of private schools into just Private vs. Public
# we will then drop the original int type of CONTROL after merging
scorecard_limited['CONTROL_MERGED'] = np.where(scorecard_limited['CONTROL'].isin([2,3]), 'Private','Public')
scorecard_limited = scorecard_limited.drop(['CONTROL'],axis=1)
scorecard_limited.shape
###Output
_____no_output_____
###Markdown
In order to analyze school metrics together with earnings metrics, we merge the scorecard data with the earnings data by school name. Since data points were removed due to Null values, the merged dataframe contains 1488 rows (schools) and 10 columns (metric variables) for regression analysis.
###Code
# merging scorecard_limited and earning_limited on Institution_Name to create dataframe for regression
earning_analysis = scorecard_limited.merge(earning_limited, how='inner',
left_on='Institution_Name',
right_on='Institution_Name')
# some of the value types of earning data are not in numeric form for modeling
# changing data types to numeric for modeling
# we also want to model private and public schools separately when possible, so changing the type to categorical
earning_analysis["UNEMP_RATE"] = pd.to_numeric(earning_analysis.UNEMP_RATE, errors='coerce')
earning_analysis["MD_EARN_WNE_P10"] = pd.to_numeric(earning_analysis.MD_EARN_WNE_P10, errors='coerce')
earning_analysis["MD_EARN_WNE_P6"] = pd.to_numeric(earning_analysis.MD_EARN_WNE_P6, errors='coerce')
earning_analysis['CONTROL_MERGED'] = earning_analysis['CONTROL_MERGED'].astype('category')
earning_analysis = earning_analysis.dropna()
earning_analysis.shape
earning_analysis.head()
###Output
_____no_output_____
###Markdown
In order to get meaningful coefficients for all explanatory variables, we normalize all numeric columns before model fitting. After normalization, all earnings values are on a 0 to 1 scale.
###Code
# get an overview of all column data types
earning_analysis.dtypes
# import MinMaxScaler for normalization and statsmodels for regression modeling
from sklearn.preprocessing import MinMaxScaler
import statsmodels.formula.api as sm
# apply scalers on all numeric values, so that all the numeric variables will be evaluated on a scale of 0 to 1
earning_analysis_norm = earning_analysis.copy(deep=True)
Scaler = MinMaxScaler()
num_cols = ['ADM_RATE', 'COSTT4_A', 'AVGFACSAL', 'PCTFLOAN', 'UNEMP_RATE', 'MD_EARN_WNE_P10', 'MD_EARN_WNE_P6']
earning_analysis_norm[num_cols] = Scaler.fit_transform(earning_analysis_norm[num_cols])
earning_analysis_norm.head()
###Output
_____no_output_____
###Markdown
Model fit **Three Regression Models:** From the normalized data, I explored the relationships of earnings 10 years after entry, earnings 6 years after entry, and unemployment rate against three potential indicators: cost of attendance, admission rate and percentage of federal loans. The three regression models below tell a consistent yet interesting story.
* Cost of attendance is a statistically significant positive indicator of post-school earnings; that is, the higher the average cost of attendance, the higher the median post-school earnings. The effect of cost is slightly larger 10 years after entry than 6 years after, and we also see a larger spread in the earnings distribution at 10 years than at 6. Average cost of attendance is also a statistically significant negative indicator of unemployment rate, meaning the higher the average cost of attendance, the lower the school's unemployment rate. Both of these results align with the original hypothesis.
* Admission rate, however, is not consistently a statistically significant indicator of post-school earnings. It has a p-value of 0.002 for earnings 10 years after entry, with a slightly negative effect, but a p-value of 0.169 for earnings 6 years after entry, which has no statistically significant predictive power. Therefore, we cannot conclude anything about the effect of admission rate on earnings with this model setup. Admission rate does have statistically significant negative predictive power for unemployment rate.
* Percent of federal loans does not align with the original hypothesis. Both earnings models show that percent of federal loans is a negative predictor of earnings. Further investigation is needed to determine why providing federal loans to students is associated with lower future earnings. We should also check whether schools with a higher percent of federal loans share other common characteristics.
###Code
# fit an ordinary least squares regression of 10-year earnings on cost of attendance,
# admission rate and percent of federal loans
result_10year = sm.ols(formula="MD_EARN_WNE_P10 ~ COSTT4_A + ADM_RATE + PCTFLOAN",
data=earning_analysis_norm).fit()
result_10year.params
result_10year.summary()
# fit an ordinary least squares regression of 6-year earnings on cost of attendance,
# admission rate and percent of federal loans
result_6year = sm.ols(formula="MD_EARN_WNE_P6 ~ COSTT4_A + ADM_RATE + PCTFLOAN",
data=earning_analysis_norm).fit()
result_6year.params
result_6year.summary()
# fit an ordinary least squares regression of unemployment rate on cost of attendance,
# admission rate and percent of federal loans
result_unemployment = sm.ols(formula="UNEMP_RATE ~ COSTT4_A + ADM_RATE + PCTFLOAN",
data=earning_analysis_norm).fit()
result_unemployment.params
result_unemployment.summary()
###Output
_____no_output_____ |
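###Markdown
As a quick first look at the follow-up suggested above, the sketch below compares schools in the top quartile of federal-loan percentage against the rest. The quartile cutoff and the compared columns are arbitrary choices, not part of the original analysis.
###Code
# sketch: compare characteristics of schools with high vs. low percent of federal loans
loan_cutoff = earning_analysis['PCTFLOAN'].quantile(0.75)  # top-quartile cutoff, an arbitrary choice
high_loan = earning_analysis[earning_analysis['PCTFLOAN'] >= loan_cutoff]
low_loan = earning_analysis[earning_analysis['PCTFLOAN'] < loan_cutoff]
cols = ['ADM_RATE', 'COSTT4_A', 'AVGFACSAL', 'MD_EARN_WNE_P10']
comparison = pd.DataFrame({
    'high_loan_mean': high_loan[cols].mean(),
    'low_loan_mean': low_loan[cols].mean(),
})
comparison
###Output
_____no_output_____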
2022/02-Data Analysis/Data Analysis.ipynb | ###Markdown
Introduction to Computational Neuroscience Practice II: Data Analysis Marharyta Domnich, Farid Hasanov, Karl Kristjan Kaup, Raul Vicente
Important: Make sure that you save your ipynb file correctly to avoid loss of information. Please submit this **ipynb** file only (unless you have extra files, in which case zip this file together with them). Everything should be included in this file: questions, answers, code, plots, and comments about your solutions. My **Pseudonym** is: [YOUR ANSWER] and it took me approximately: [YOUR ANSWER] hours to complete the homework.
Knowing how long the homework took you will help us balance future homeworks. *** 1. Introduction
In this practice session we go through how to work with recordings from brain areas (EEG) and from individual cells (intracranial recordings, spiking data). In the lecture we also covered some other ways of recording brain activity, such as fMRI and MEG. We do not have the time to look at data from these imaging techniques, but if you find them fascinating, you can work with them in your course project. 2. Part I: EEG data
In EEG (electroencephalography), electrodes are placed on different parts of your head and the electrical signal is recorded from them. Certain electrodes (on the earlobes, face) do not record brain activity but are instead used as a reference to filter out noise caused by muscle activity and electrical signals in the room. When all the electrodes are attached, the person looks like this:
After some preprocessing (filtering out noise with the help of the reference electrodes), this is how a typical EEG recording of one channel (the signal from one electrode is called a channel) looks:
Figure 1: EEG recording of one channel. On the x axis there is time, on the y axis the strength of the recorded signal in $μV$. The dashed line at the time point $x = 1000$ is the moment when the stimulus (picture) was shown to the test subject. *** Exercise 1: Event Related Potential (ERP) (1pt)
Event-related potentials are brain activity changes in response to certain events. For example, we expect that some brain regions change their activity after a person hears a sound. Consider Figure 1: the test subject was shown a stimulus at $T=1000$. From the plot we cannot conclude that the stimulus had an effect on the subject's brain activity (as measured by this specific electrode). There seems to be a voltage increase at $T=1200$ that ends at $T=1500$, but it could be a random event unrelated to the stimulus. The plotted EEG signal is too noisy to tell.
To remove the noise and reach conclusions about the effect of the stimulus, scientists conduct the same experiment several times and then average the results. In each trial the noise is different and will cancel out if we average over trials. Therefore, if there is a brain response, it will appear much more clearly.
In the data folder you have the file erptrials.csv. Each row is one trial, recorded for 2 seconds with a sampling rate of 1000 Hz. The stimulus was shown at time point 1000 (1 second). There are 79 trials in this file. Your task is to plot the average of all 79 trials to see whether there is a clear ERP response.
1. Plot the average of all 79 trials to see if there is a clear ERP response or not. Add a red vertical line at T=1000 as in Figure 1 (don't forget to label the axes of your plot).
###Code
# You can use any library you want but remember to draw the plot.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%matplotlib inline
plt.rcParams['figure.figsize'] = (12, 6)
erp_data = None
erp_average = None
################################
##### YOUR CODE STARTS HERE ####
# Hints: you can use pd.read_csv to read the data and np.mean to average it.
################################
erp_data = ????
erp_average=????
##### YOUR CODE ENDS HERE #####
################################
plt.title("Average EEG record")
plt.show()
###Output
_____no_output_____
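###Markdown
One possible solution sketch is given below. It is not the only valid approach; the file path `data/erptrials.csv` is an assumption about where the data lives, and the file is assumed to have no header row.
###Code
# sketch of one possible solution -- the file path is an assumption
erp_data = pd.read_csv('data/erptrials.csv', header=None)   # 79 trials x 2000 time points
erp_average = np.mean(erp_data.values, axis=0)              # average over the 79 trials
plt.plot(erp_average, label='average of 79 trials')
plt.axvline(x=1000, color='red', linestyle='--', label='stimulus onset')
plt.xlabel('Time (ms)')
plt.ylabel(r'Voltage ($\mu V$)')
plt.title('Average EEG record')
plt.legend()
plt.show()
###Output
_____no_output_____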
###Markdown
2. How many milliseconds after the stimulus presentation does the ERP happen? [YOUR ANSWER] 3. How long does it take for the EEG signal to return to the normal range? [YOUR ANSWER] *** Exercise 2: Frequency Analysis (2pt)
The most popular operation you can perform on continuous brain data such as EEG is converting it from the time domain to the frequency domain. It is known that any function can be represented as a sum of sinusoids (Fourier transform). Therefore we can decompose our signal into such sinusoids and observe the different frequency components. For brain data such a transformation makes particular sense because of brain rhythms: different frequencies of neuronal firing are related to different kinds of mental activity [1](http://en.wikipedia.org/wiki/ElectroencephalographyWave_patterns).
In this exercise we will plot the power spectrum (like the two plots below) and see how the alpha wave (brain oscillation at 8-13 Hz) emerges when the test subject's eyes are closed. In the data folder you can find two files: eyes open.csv and eyes closed.csv. The data is recorded from the channel Pz (you can google where it is located). One row contains 4 seconds of the signal; the sampling rate is 512 Hz. Each file has 15 recordings.
Your task is to perform Fourier analysis on both datasets, plot the power spectra and compare the results. Do it as follows:
1. Plot one or two of the signals just to see how they look.
###Code
################################
##### YOUR CODE STARTS HERE ####
eyes_closed_data = ????
eyes_open_data = ????
##### YOUR CODE ENDS HERE #####
################################
plt.show()
###Output
_____no_output_____
###Markdown
[Explain What You See here] 2. For each recorded signal (2048 data points):
1. Use `np.fft.fft(signal)` to compute the Fourier transform of the signal. You will get a vector of complex numbers.
2. Use `np.abs(result of fft)` to obtain the magnitude; `np.abs(result of fft)^2` will give you the power.
3. Sum together the 15 power spectra that you got from the previous step.
4. Divide the resulting vector by 15 to obtain the average.
5. Plot it (you need two plots: one for eyes open and another for eyes closed).
###Code
################################
##### YOUR CODE STARTS HERE ####
# Hint: you can use a for loop or take advantage of numpy matrix operations
eyes_closed_power =????
eyes_open_power = ????
##### YOUR CODE ENDS HERE #####
################################
plt.show()
###Output
_____no_output_____
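###Markdown
A minimal sketch of steps 2-5 follows, assuming `eyes_closed_data` and `eyes_open_data` were loaded above (e.g., with `pd.read_csv(..., header=None)`) as 15 x 2048 DataFrames; that loading step is an assumption.
###Code
# sketch: average power spectrum over the 15 recordings (shape assumed 15 x 2048)
def avg_power(recordings):
    # |FFT|^2 of each 2048-point recording, then mean over the 15 recordings
    power = np.abs(np.fft.fft(recordings, axis=1)) ** 2
    return power.mean(axis=0)

eyes_closed_power = avg_power(eyes_closed_data.values)
eyes_open_power = avg_power(eyes_open_data.values)

plt.plot(eyes_closed_power, label='eyes closed')
plt.plot(eyes_open_power, label='eyes open')
plt.legend()
plt.show()
###Output
_____no_output_____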
###Markdown
6. You will see that the right part of the graph is a mirror image of the left part. Discard the right part and plot it again.
###Code
################################
##### YOUR CODE STARTS HERE ####
##### YOUR CODE ENDS HERE #####
################################
plt.show()
###Output
_____no_output_____
###Markdown
7. Your X axis goes from 1 to 1024, which does not correspond to the actual frequencies. Compute the correct X axis as follows and plot again:
###Code
dt = 1/512    # time step length in seconds (sampling rate is 512 Hz)
df = 1/4      # frequency step in Hz; 4 s is the length of one recording
fNQ = 1/dt/2  # Nyquist frequency, the maximal resolvable frequency (256 Hz here)
xaxis = np.arange(0, fNQ, df)  # points for your X axis, should be of the same length
                               # as your (half) vector of power values
################################
##### YOUR CODE STARTS HERE ####
##### YOUR CODE ENDS HERE #####
################################
plt.show()
###Output
_____no_output_____
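###Markdown
A minimal plotting sketch, assuming the averaged spectra from the earlier sketch with the mirrored right half discarded (the first 1024 points):
###Code
# sketch: plot the half spectra against the frequency axis, zoomed to 0-30 Hz
half = len(xaxis)  # 1024 points = 2048 / 2
plt.plot(xaxis, eyes_closed_power[:half], label='eyes closed')
plt.plot(xaxis, eyes_open_power[:half], label='eyes open')
plt.xlim(0, 30)
plt.xlabel('Frequency (Hz)')
plt.ylabel('Power')
plt.legend()
plt.show()
###Output
_____no_output_____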
###Markdown
[Explain What You See here] Now your axis goes from 0 to 256 (or 512 if you did not discard the right side of the graph) with a step of 0.25, which corresponds to the frequency range the FFT produced. After making the plot more presentable and focusing only on the range from 0 to 30 Hz, you should obtain results that look something like this (figure: eyes-open vs. eyes-closed power spectra). *** 3. Part II: Spiking Data
In the first part we looked at continuous brain data: the EEG signal changing over time. Another very popular form of data is spiking data. Spiking data is obtained when we insert electrodes inside the brain and record the activity of individual neurons. The concept is very simple: we attach a sensor to the neuron, and whenever the neuron emits an action potential (aka "fires") we write a "1" into our data file, otherwise we write a "0". The resulting data shows us, for each time point (in our case each millisecond), whether the neuron fired or not. 3.1 Dataset
We will be working with a dataset of spiking data recorded from 72 neurons. The data was recorded from one mouse while it received a stimulus: moving bars on the screen, which can have different orientations. In Figure 2 you can see an example: the white and black bars are tilted (orientation). The bars also move in the direction perpendicular to the tilt during the experiment. Figure 2: Example of the stimulus. During the actual experiment the bars are moving.
We want to find out whether different neurons react differently to moving bars with different orientations. Have a look at **Figure 3**: the spiking pattern could indicate that this neuron is more active when the bar is tilted $45^{o}$ clockwise from the horizontal position. The neuron's activity also seems to fade for orientations away from $45^{o}$. We would like to rediscover from the data that different neurons indeed respond differently to different orientations. Figure 3: Neuronal response to different orientations of the bar. NB: Note that the dataset has 13 stimuli (13 orientations), but the first one should be ignored (it has orientation $-1^o$), so you should only use the other 12. *** Exercise 3: Raster plots (2pt)
Let us start by plotting some spiking data. Under the **data/plain** folder you have recordings from 72 neurons of a mouse, in plain text format. The files are named **neuron_NN_stimulus_SS.csv**, where **NN** is the number of the neuron (from 1 to 72) and **SS** is the number of the stimulus (from 1 to 13). Inside each file, one line represents one trial; for each millisecond it has the value 0 (no spike) or 1 (spike). Files named **stimulus_SS.csv** describe the stimulus: they hold four values:
1. Time before the stimulus (in seconds).
2. Duration of the stimulus (in seconds).
3. Time after the stimulus (in seconds).
4. Orientation of the stimulus (in degrees).
**Your task is**:
1. Take any of the neurons and plot all trials as a raster plot (see Figure 4). You will notice that the neuronal response to the stimulus varies a lot (a lot of noise!), which is why you usually need several trials. Figure 4: Raster plot of 10 trials of the same neuron. On the $X$ axis we have time and on the $Y$ axis trials; a vertical bar indicates a spike in that trial. To draw a vertical line in Python you can use `plt.vlines(spike_time, y_min, y_max)`, where **spike_time** is the time and **y_min, y_max** give the extent of the line, which should correspond to the trial.
If necessary, modify the length of the bars to make the raster plot more readable.
###Code
#Import all the data
data = {}; #Create a dictionary to store the data.
data_path = "./data/plain/" # data path
for n in range(1, 73): #loop over 72 neurons
neuron = []
for s in range(2, 14): #loop over 12 stimulus
        print('Loading neuron: {}, stimulus: {}'.format(n, s), end="\r")
        # load the trials x time spike matrix for this neuron and stimulus
        neuron.append(np.genfromtxt(data_path + "neuron_%02d_stimulus_%02d.csv" % (n, s), delimiter=','))
data[n] = np.array(neuron)
plt.rcParams['figure.figsize'] = (16, 7)
################################
##### YOUR CODE STARTS HERE ####
##### YOUR CODE ENDS HERE #####
################################
plt.show()
###Output
_____no_output_____
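###Markdown
One possible sketch for the first raster plot is below, assuming neuron 1 and the first stimulus in `data` (both indices are arbitrary choices). The same loop, with neurons or stimuli on the y axis instead of trials, can be adapted for tasks 2 and 3.
###Code
# sketch: raster plot of all trials for one neuron / one stimulus
trials = data[1][0]  # trials x time matrix for neuron 1, first stimulus
for trial_idx, trial in enumerate(trials):
    spike_times = np.where(trial == 1)[0]              # time points where a spike occurred
    plt.vlines(spike_times, trial_idx + 0.6, trial_idx + 1.4)
plt.xlabel('Time (ms)')
plt.ylabel('Trial')
plt.title('Raster plot: neuron 1, first stimulus')
plt.show()
###Output
_____no_output_____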
###Markdown
[Explain What You See here] The next step is to **create two more raster plots**. The first will illustrate the behaviour of all the neurons under the same stimulus, while the other will show the responses of the same neuron to the different stimuli.
2. Raster plot, where on the $X$ axis we have time and on the $Y$ axis all 72 neurons. Vertical bars are the responses to a stimulus (choose any) by the corresponding neuron in any of the trials (!). Please note that in our dataset different recordings have different lengths, but this should not be a problem. **(neuron-to-neuron variability)**
**Hint:** For the task 2 plot, we need to average the spiking data over the trials so that each neuron is represented by only one vector, not 10. The simplest way to do that is to add the trials together. Let us say that you have 10 trials, and each of them is a vector of 4500 time points. You just sum those vectors together to obtain one vector of length 4500. After that, replace all values greater than 1 with 1 (meaning there was a spike at that time point in at least one of the trials).
###Code
################################
##### YOUR CODE STARTS HERE ####
##### YOUR CODE ENDS HERE #####
################################
plt.show()
###Output
_____no_output_____
###Markdown
[Explain What You See here] 3. Raster plot, where on $Y$ axis we have all 12 stimuli and bars are the responses from the same neuron (choose any). **(variability across stimuli)**
###Code
################################
##### YOUR CODE STARTS HERE ####
##### YOUR CODE ENDS HERE #####
################################
plt.show()
###Output
_____no_output_____
###Markdown
[Explain What You See here] *** Exercise 4: Tuning Curve as Rose Diagram (2pt)
From the lecture you must be familiar with the term **receptive field**. A **tuning curve** is a plot that helps to describe the receptive field of a neuron with respect to some variable: how strongly the neuron responds (how often it fires) as we give the variable different values. Figure 3 describes how a neuron's response varies for different orientations of the bar; the plot on the right is the tuning curve of this neuron (with respect to orientation).
For orientations of bars there is a really neat way to visualize the tuning curve, called a **rose chart** or **angle histogram**. You can see an example in **Figure 5**. The idea is that the values are placed on the circumference of a circle and the length of each sector is determined by the number of times the value appears in the list. It is like a histogram drawn in a circle.
In our case it represents our data in a much more natural way, because the different orientations form a circle; the lengths of the sectors correspond to the sum of spikes at each orientation. Rose charts for some of the neurons are shown in **Figure 5**. We can clearly see that neuron 8 reacts most to the orientation of $0^{o}$, neuron 6 is most active in the range of $270^{o}$ to $330^{o}$, and so on. Figure 5: Rose diagram: the number of spikes for each of the 12 orientations.
**Your task is** to produce a similar plot for any 9 neurons (not the same ones as in the figure above). To produce a plot for one neuron do the following:
1. Create a dictionary **rose_neurons** where you store an array **A** for every neuron of the 9 you picked.
2. The array **A** contains the number of spikes for every orientation (notice that this array is neuron-dependent).
###Code
rose_neurons={}
################################
##### YOUR CODE STARTS HERE ####
#Note: Every array A should include 12 values (one value per orientation)
##### YOUR CODE ENDS HERE #####
################################
###Output
_____no_output_____
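###Markdown
One possible sketch for steps 1-2 is below; picking the first 9 neurons is an arbitrary choice.
###Code
# sketch: total spike count per orientation for 9 picked neurons
rose_neurons = {}
for n in range(1, 10):  # neurons 1-9, an arbitrary choice
    # one value per stimulus: the sum of all spikes over trials and time
    rose_neurons[n] = np.array([np.sum(stim) for stim in data[n]])
###Output
_____no_output_____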
###Markdown
3. Draw the plot.
###Code
plt.rcParams['figure.figsize'] = (16, 16)
# array with the angles of the bins (in degrees)
angles = [0, 30, 60, 90, 120, 150, 180, 210, 240, 270, 300, 330]
rad_angles = np.deg2rad(angles)
counter = 0
# loop over the picked neurons
for j in rose_neurons.keys():
    # make the subplot polar
    ax = plt.subplot(3, 3, 1 + counter, projection='polar')
    # draw the rose chart using matplotlib
    ax.bar(rad_angles, rose_neurons[j], width=2*np.pi/12, edgecolor='blue', color='None')
    ax.set_title('neuron: {}'.format(j), pad=15)
    counter += 1
plt.tight_layout()
###Output
_____no_output_____
###Markdown
4. Also, look through the diagrams you obtain and point out which neuron has the most specific tuning curve and which direction it prefers. [Your Answer Here] ***End of obligatory exercises*** Exercise 5: What else are our neurons tuned to? (Bonus: up to 1)
In this session we have seen an example of how neurons are **tuned** to respond to very specific stimuli. Your task is to find other interesting examples of stimuli our neurons are tuned to react to. Are there special neurons which fire when you look at a human face? Neurons which react to temperature? Hunger? Numbers?
Find the most interesting examples (from humans, animals, insects). Write at least 150 words (images, charts, plots are recommended). [Your Answer Here] *** Exercise 6: Post-Stimulus Time Histogram (PSTH) (Bonus 1)
Another useful analysis tool is a histogram where on the $X$ axis we have time points (or time ranges) and on the $Y$ axis the number of spikes that occurred during each time range. It is called a **Post-Stimulus Time Histogram (PSTH)**.
1. Choose any neuron, any stimulus.
2. Take an average over all trials as we did before.
3. Choose a time window, for example 20 ms or 50 ms.
4. Plot a histogram, where on the $X$ axis we have time windows and on the $Y$ axis the number of spikes that occurred during each window.
###Code
################################
##### YOUR CODE STARTS HERE ####
##### YOUR CODE ENDS HERE #####
################################
###Output
_____no_output_____ |
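###Markdown
One possible PSTH sketch follows, assuming neuron 1, the first stimulus and a 50 ms window (all three choices are assumptions).
###Code
# sketch: post-stimulus time histogram with 50 ms bins
window = 50
summed = np.sum(data[1][0], axis=0)           # sum spikes over trials
summed = np.where(summed > 1, 1, summed)      # clip to 0/1 as in Exercise 3
n_bins = len(summed) // window
# count spikes inside each consecutive 50 ms window
counts = summed[:n_bins * window].reshape(n_bins, window).sum(axis=1)
plt.bar(np.arange(n_bins) * window, counts, width=window, align='edge', edgecolor='black')
plt.xlabel('Time (ms)')
plt.ylabel('Spike count per 50 ms window')
plt.title('PSTH: neuron 1, first stimulus')
plt.show()
###Output
_____no_output_____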
openmdao/docs/openmdao_book/features/core_features/working_with_components/discrete_variables.ipynb | ###Markdown
Discrete Variables
There may be times when it's necessary to pass variables that are not floats or float arrays between components. These variables can be declared as discrete variables. A discrete variable can be any picklable python object. In explicit and implicit components, the user must call `add_discrete_input` and `add_discrete_output` to declare discrete variables in the `setup` method. Methods for Adding Discrete Variables
Here are the methods used to add discrete variables to components.
```{eval-rst}
.. automethod:: openmdao.core.component.Component.add_discrete_input
   :noindex:
```
```{eval-rst}
.. automethod:: openmdao.core.component.Component.add_discrete_output
   :noindex:
```
Discrete Variable Considerations
Discrete variables, like continuous ones, can be connected to each other using the `connect` function or by promoting an input and an output to the same name. The type of the output must be a valid subclass of the type of the input or the connection will raise an exception.
```{warning}
If a model computes derivatives and any of those derivatives depend on the value of a discrete output variable, an exception will be raised.
```
If a component or group contains discrete variables, then the discrete inputs and/or outputs will be passed to the relevant API functions. In general, if nonlinear inputs are passed to a function, then a discrete inputs argument will be added. If nonlinear outputs are passed, then a discrete outputs argument will be added. The signatures of the affected functions are shown below:
```{eval-rst}
.. automethod:: openmdao.core.explicitcomponent.ExplicitComponent.compute
   :noindex:
```
```{eval-rst}
.. automethod:: openmdao.core.explicitcomponent.ExplicitComponent.compute_jacvec_product
   :noindex:
```
```{eval-rst}
.. automethod:: openmdao.core.explicitcomponent.ExplicitComponent.compute_partials
   :noindex:
```
```{eval-rst}
.. automethod:: openmdao.core.implicitcomponent.ImplicitComponent.apply_nonlinear
   :noindex:
```
```{eval-rst}
.. automethod:: openmdao.core.implicitcomponent.ImplicitComponent.guess_nonlinear
   :noindex:
```
```{eval-rst}
.. automethod:: openmdao.core.implicitcomponent.ImplicitComponent.linearize
   :noindex:
```
```{eval-rst}
.. automethod:: openmdao.core.group.Group.guess_nonlinear
   :noindex:
```
Discrete Variable Examples
An example is given below that shows an explicit component that has a discrete input along with continuous inputs and outputs.
###Code
import numpy as np
import openmdao.api as om
class BladeSolidity(om.ExplicitComponent):
def setup(self):
# Continuous Inputs
self.add_input('r_m', 1.0, units="ft", desc="Mean radius")
self.add_input('chord', 1.0, units="ft", desc="Chord length")
# Discrete Inputs
self.add_discrete_input('num_blades', 2, desc="Number of blades")
# Continuous Outputs
self.add_output('blade_solidity', 0.0, desc="Blade solidity")
def compute(self, inputs, outputs, discrete_inputs, discrete_outputs):
num_blades = discrete_inputs['num_blades']
chord = inputs['chord']
r_m = inputs['r_m']
outputs['blade_solidity'] = chord / (2.0 * np.pi * r_m / num_blades)
# build the model
prob = om.Problem()
prob.model.add_subsystem('SolidityComp', BladeSolidity(),
promotes_inputs=['r_m', 'chord', 'num_blades'])
prob.setup()
prob.set_val('num_blades', 2)
prob.set_val('r_m', 3.2)
prob.set_val('chord', .3)
prob.run_model()
# minimum value
print(prob['SolidityComp.blade_solidity'])
from openmdao.utils.assert_utils import assert_near_equal
assert_near_equal(prob['SolidityComp.blade_solidity'], 0.02984155, 1e-4)
###Output
_____no_output_____
###Markdown
Similarly, discrete variables can be added to implicit components.
###Code
import openmdao.api as om
class ImpWithInitial(om.ImplicitComponent):
"""
An implicit component to solve the quadratic equation: x^2 - 4x + 3
(solutions at x=1 and x=3)
"""
def setup(self):
self.add_input('a', val=1.)
self.add_input('b', val=-4.)
self.add_discrete_input('c', val=3)
self.add_output('x', val=5.)
self.declare_partials(of='*', wrt='*')
def apply_nonlinear(self, inputs, outputs, residuals, discrete_inputs, discrete_outputs):
a = inputs['a']
b = inputs['b']
c = discrete_inputs['c']
x = outputs['x']
residuals['x'] = a * x ** 2 + b * x + c
def linearize(self, inputs, outputs, partials, discrete_inputs, discrete_outputs):
a = inputs['a']
b = inputs['b']
x = outputs['x']
partials['x', 'a'] = x ** 2
partials['x', 'b'] = x
partials['x', 'x'] = 2 * a * x + b
def guess_nonlinear(self, inputs, outputs, resids, discrete_inputs, discrete_outputs):
# Default initial state of zero for x takes us to x=1 solution.
# Here we set it to a value that will take us to the x=3 solution.
outputs['x'] = 5
prob = om.Problem()
model = prob.model
model.add_subsystem('comp', ImpWithInitial())
model.comp.nonlinear_solver = om.NewtonSolver(solve_subsystems=False)
model.comp.linear_solver = om.ScipyKrylov()
prob.setup()
prob.run_model()
print(prob.get_val('comp.x'))
assert_near_equal(prob.get_val('comp.x'), 3., 1e-4)
###Output
_____no_output_____
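###Markdown
The considerations section above notes that discrete variables can be connected just like continuous ones. A minimal, hypothetical sketch of that follows: the `BladeCounter` component and its variable name are illustrative inventions, and it reuses the `BladeSolidity` component defined in the first example.
###Code
import openmdao.api as om

class BladeCounter(om.ExplicitComponent):
    # hypothetical component whose only job is to produce a discrete output
    def setup(self):
        self.add_discrete_output('num_blades', val=3)

    def compute(self, inputs, outputs, discrete_inputs, discrete_outputs):
        discrete_outputs['num_blades'] = 3

prob = om.Problem()
prob.model.add_subsystem('counter', BladeCounter())
prob.model.add_subsystem('solidity', BladeSolidity())

# a discrete output is connected to a discrete input just like a continuous one
prob.model.connect('counter.num_blades', 'solidity.num_blades')

prob.setup()
prob.run_model()
print(prob.get_val('solidity.blade_solidity'))
###Output
_____no_output_____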
examples/ConflictResolver.ipynb | ###Markdown
Solve a conflict using non-conflicting nodes as goal nodes This example shows how to correct an existing road element when the road conflicts with building footprints and no reference data is available.
###Code
import os
import sys
import matplotlib.pyplot as plt
import geopandas as gpd
import pandas as pd
%matplotlib inline
plt.rcParams['figure.figsize'] = [10, 10]
plt.rcParams['figure.dpi'] = 100
module_path = os.path.abspath(os.path.join('../'))
if module_path not in sys.path:
sys.path.append(module_path)
from kaizen_mapping.utils.gis import read_data_frame
from kaizen_mapping.map.trace import traces_from_data_frame
from kaizen_mapping.map.grid import PixelGrid
from kaizen_mapping.utils.gis import convert_and_get_extent
from kaizen_mapping.map.navigator import AStar
from shapely.geometry import LineString
###Output
_____no_output_____
###Markdown
Read trace Data Frame
###Code
trace_data_frame = read_data_frame(r"D:\Cypherics\Library\kaizen\data\demo1.shp")
###Output
_____no_output_____
###Markdown
Read Obstacle Data Frame
###Code
obstacle_data_frame = read_data_frame(r"D:\Cypherics\Library\kaizen\data\demo1_bfp.shp")  # raw string so backslashes are not treated as escapes
###Output
_____no_output_____
###Markdown
Visualize the data frames
###Code
f, ax = plt.subplots(1)
obstacle_data_frame.plot(ax=ax, label='Building Footprint')
trace_data_frame.plot(ax=ax,cmap=None, color="red", label='Trace Line')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Initialize the grid for the area in which the conflict is to be resolved. The bounding box of the area can be extracted from [this](https://boundingbox.klokantech.com/) website and exported in JSON format. The bounding box can cover a city or a smaller region; whatever the extent, it must contain all the traces and obstacles.
###Code
grid = PixelGrid.pixel_grid(
resolution=1,
grid_bounds=convert_and_get_extent(
[
[
[2.13242075, 42.84341088],
[2.23118924, 42.84341088],
[2.23118924, 42.90975515],
[2.13242075, 42.90975515],
[2.13242075, 42.84341088],
]
],
crs_to="epsg:26910",
),
)
###Output
_____no_output_____
###Markdown
Add obstacle to the grid so the navigator is aware of which part of the grid is to avoid
###Code
grid.add_obstacle(obstacle_data_frame, extend_boundary_pixel=2)
###Output
_____no_output_____
###Markdown
Generate Traces from dataframe
###Code
traces = traces_from_data_frame(trace_data_frame)
###Output
_____no_output_____
###Markdown
Initialize the navigator to find path
###Code
navigator = AStar()
solved_path_list = list()
# a generator that keeps yielding a solved path for every trace present in traces
for solved_path in navigator.path_finder_from_traces(grid, traces, search_space_threshold=30, epsilon=1):
solved_path_list.append(LineString(solved_path))
solved_path_data_frame = gpd.GeoDataFrame(solved_path_list,
columns=['LineString'],
geometry='LineString')
f, ax = plt.subplots(1)
obstacle_data_frame.plot(ax=ax, label='Building Footprint')
trace_data_frame.plot(ax=ax, cmap=None, color="red", label='Trace Line')
solved_path_data_frame.plot(ax=ax, cmap=None, color="green", label='Solved Conflict')
plt.legend()
plt.show()
###Output
Navigator Progress: Found GOAL (5391, 2296), GOAL COUNT - 4/4 |
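###Markdown
The resolved geometries can also be persisted for use in other GIS tools. A minimal sketch (the output filename is hypothetical; `to_file` is the standard geopandas export method):
###Code
# Write the solved paths to an ESRI shapefile (path is illustrative).
solved_path_data_frame.to_file("solved_conflicts.shp")
###Output
_____no_output_____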
Reddit Webscraping using PRAW/Reddit API.ipynb | ###Markdown
Scraping Reddit Data. Using the PRAW library, a wrapper for the Reddit API, anyone can easily scrape data from Reddit or even create a Reddit bot.
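PRAW is distributed on PyPI, so it can be installed with pip first if it is not already available (the --quiet flag only suppresses the install log):
###Code
# Install PRAW (skip this cell if it is already installed).
!pip install praw --quiet
###Output
_____no_output_____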
###Code
import praw
###Output
_____no_output_____
###Markdown
Before we can use PRAW to scrape data, we need to authenticate ourselves. For this we create a Reddit instance and provide it with a client_id, client_secret and a user_agent. To create a Reddit application and get your id and secret, navigate to [this page](https://www.reddit.com/prefs/apps).
###Code
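# The credentials below are placeholders. With only an id, secret and
# user agent, PRAW runs in read-only mode, which is enough for scraping.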
reddit = praw.Reddit(client_id='my_client_id',
client_secret='my_client_secret',
user_agent='my_user_agent')
###Output
_____no_output_____
###Markdown
We can get information or posts from a specific subreddit by using the reddit.subreddit method and passing it a subreddit name.
###Code
# get 10 hot posts from the MachineLearning subreddit
hot_posts = reddit.subreddit('MachineLearning').hot(limit=10)
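# Note: .hot() returns a lazy generator; nothing is fetched until we
# iterate over hot_posts, and the generator can only be consumed once.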
###Output
_____no_output_____
###Markdown
Now that we have scraped 10 posts, we can loop through them and print some information.
###Code
for post in hot_posts:
print(post.title)
# get hot posts from all subreddits
hot_posts = reddit.subreddit('all').hot(limit=10)
for post in hot_posts:
print(post.title)
# get MachineLearning subreddit data
ml_subreddit = reddit.subreddit('MachineLearning')
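# Subreddit objects are lazy as well; the request happens when an
# attribute such as .description is first accessed.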
print(ml_subreddit.description)
###Output
**[Rules For Posts](https://www.reddit.com/r/MachineLearning/about/rules/)**
--------
+[Research](https://www.reddit.com/r/MachineLearning/search?sort=new&restrict_sr=on&q=flair%3AResearch)
--------
+[Discussion](https://www.reddit.com/r/MachineLearning/search?sort=new&restrict_sr=on&q=flair%3ADiscussion)
--------
+[Project](https://www.reddit.com/r/MachineLearning/search?sort=new&restrict_sr=on&q=flair%3AProject)
--------
+[News](https://www.reddit.com/r/MachineLearning/search?sort=new&restrict_sr=on&q=flair%3ANews)
--------
***[@slashML on Twitter](https://twitter.com/slashML)***
--------
**Beginners:**
--------
Please have a look at [our FAQ and Link-Collection](http://www.reddit.com/r/MachineLearning/wiki/index)
[Metacademy](http://www.metacademy.org) is a great resource which compiles lesson plans on popular machine learning topics.
For Beginner questions please try /r/LearnMachineLearning , /r/MLQuestions or http://stackoverflow.com/
For career related questions, visit /r/cscareerquestions/
--------
[Advanced Courses](https://www.reddit.com/r/MachineLearning/comments/51qhc8/phdlevel_courses?st=isz2lqdk&sh=56c58cd6)
--------
**AMAs:**
[Libratus Poker AI Team (12/18/2017)]
(https://www.reddit.com/r/MachineLearning/comments/7jn12v/ama_we_are_noam_brown_and_professor_tuomas/)
[DeepMind AlphaGo Team (10/19/2017)](https://www.reddit.com/r/MachineLearning/comments/76xjb5/ama_we_are_david_silver_and_julian_schrittwieser/)
[Google Brain Team (9/17/2017)](https://www.reddit.com/r/MachineLearning/comments/6z51xb/we_are_the_google_brain_team_wed_love_to_answer/)
[Google Brain Team (8/11/2016)]
(https://www.reddit.com/r/MachineLearning/comments/4w6tsv/ama_we_are_the_google_brain_team_wed_love_to/)
[The MalariaSpot Team (2/6/2016)](https://www.reddit.com/r/MachineLearning/comments/4m7ci1/ama_the_malariaspot_team/)
[OpenAI Research Team (1/9/2016)](http://www.reddit.com/r/MachineLearning/comments/404r9m/ama_the_openai_research_team/)
[Nando de Freitas (12/26/2015)](http://www.reddit.com/r/MachineLearning/comments/3y4zai/ama_nando_de_freitas/)
[Andrew Ng and Adam Coates (4/15/2015)](http://www.reddit.com/r/MachineLearning/comments/32ihpe/ama_andrew_ng_and_adam_coates/)
[Jürgen Schmidhuber (3/4/2015)](http://www.reddit.com/r/MachineLearning/comments/2xcyrl/i_am_j%C3%BCrgen_schmidhuber_ama/)
[Geoffrey Hinton (11/10/2014)]
(http://www.reddit.com/r/MachineLearning/comments/2lmo0l/ama_geoffrey_hinton/)
[Michael Jordan (9/10/2014)](http://www.reddit.com/r/MachineLearning/comments/2fxi6v/ama_michael_i_jordan/)
[Yann LeCun (5/15/2014)](http://www.reddit.com/r/MachineLearning/comments/25lnbt/ama_yann_lecun/)
[Yoshua Bengio (2/27/2014)](http://www.reddit.com/r/MachineLearning/comments/1ysry1/ama_yoshua_bengio/)
--------
Related Subreddit :
* [LearnMachineLearning](http://www.reddit.com/r/LearnMachineLearning)
* [Statistics](http://www.reddit.com/r/statistics)
* [Computer Vision](http://www.reddit.com/r/computervision)
* [Compressive Sensing](http://www.reddit.com/r/CompressiveSensing/)
* [NLP] (http://www.reddit.com/r/LanguageTechnology)
* [ML Questions] (http://www.reddit.com/r/MLQuestions)
* /r/MLjobs and /r/BigDataJobs
* /r/datacleaning
* /r/DataScience
* /r/scientificresearch
* /r/artificial
###Markdown
Because we only have a limited number of requests per day, it is a good idea to save the scraped data to some kind of variable or file.
###Code
import pandas as pd
posts = []
ml_subreddit = reddit.subreddit('MachineLearning')
for post in ml_subreddit.hot(limit=10):
posts.append([post.title, post.score, post.id, post.subreddit, post.url, post.num_comments, post.selftext, post.created])
posts = pd.DataFrame(posts,columns=['title', 'score', 'id', 'subreddit', 'url', 'num_comments', 'body', 'created'])
posts
posts.to_csv('top_ml_subreddit_posts.csv')
###Output
_____no_output_____
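###Markdown
The `created` column holds unix timestamps. As an optional, assumed follow-up (``created_utc`` would be the timezone-safe variant), they can be converted to readable dates with pandas:
###Code
# Convert epoch seconds to pandas datetimes for readability.
posts['created'] = pd.to_datetime(posts['created'], unit='s')
posts.head()
###Output
_____no_output_____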
###Markdown
PRAW also allows us to get information about a specific post/submission.
###Code
submission = reddit.submission(url="https://www.reddit.com/r/MapPorn/comments/a3p0uq/an_image_of_gps_tracking_of_multiple_wolves_in/")
# or
submission = reddit.submission(id="a3p0uq") #id comes after comments/
for top_level_comment in submission.comments:
print(top_level_comment.body)
###Output
Source: [https://www.facebook.com/VoyageursWolfProject/](https://www.facebook.com/VoyageursWolfProject/)
I thought this was a shit post made in paint before I read the title
Wow, that’s very cool. To think how keen their senses must be to recognize and avoid each other and their territories. Plus, I like to think that there’s one from the white colored clan who just goes way into the other territories because, well, he’s a badass.
That’s really cool. The edges are surprisingly defined.
White wolf is a dick constantly trespassing other's territories.
[Link to Story](https://www.duluthnewstribune.com/news/science-and-nature/4538836-voyageurs-national-park-wolves-eating-beaver-and-blueberries-not)
Cool to imagine that there are similar zones surrounding all these, we just didn't tag those wolves.
You know the white wolf fucked some red's bitch for sure.
It’s wild how they are all roughly the same size.
This what i am gonna show people when they ask for a photo of a sixpack
That's actually awesome.
[deleted]
White Wolf pack is looking for fight
/r/dataisbeautiful
But actually beautiful, not "here's a graph of my heart rate when I went on a date." This is actually gorgeous, informative, and awesome.
I want to know WTF lives here that the wolf keeps avoiding?https://i.imgur.com/T7NrS7F.jpg
I want more data!!! Is the white pack made up of many aggressive wolves so they spread to other territories periodically? Or is it just the one wolf who doesn’t give as much of a fuck? Does a tighter cluster mean a smaller pack or just more territorial? What is the age, gender, and type of wolves that are being tracked?! So many questions, so little information.
/r/misleadingthumbnails minimap of the grand final of the 3v3 Age of Empires 2 tournament
The white pack is drawing a wolf face.
White one tried to be naughty a little by sneaking into red zone just a little bit.
Its amazing how they all seem to be similar in area size.
That one white wolf is Big and Bad.
[removed]
Am I a wolf? If my senses and economic status allowed me to stay so perfectly sequestered from other people, I would without question.
It's content like this that makes reddit great, well done OP.
This is a window into the mind of a wolf. Not only do they have clearly defined ranges, they have clearly defined packs and each wolf must know each other's scent markings. I am blown away.
Also the blue pack is way too cautious.
I found the location on Google Maps. It looks like the green pack's territory covers about 25 sq. miles (larger than San Marino) and also includes the NOvA Far Detector: [https://www.burnsmcd.com/projects/nova-far-detector](https://www.burnsmcd.com/projects/nova-far-detector)
I'd love to see something similar but with Chimps. Who actually wage war, have soldiers, etc.
[Something representing this](https://www.youtube.com/watch?v=a7XuXi3mqYM) (potentially NSFW Chimpanzee cannibalism)
There's a white wolf plotting some shit.
White Clan Wolf: " Let’s do this… LEEROOOOOOOOOOOOOOOOOOOOY JEEEEEENKIIIIIIIIIIINS!"
This research project is called the Voyageurs Wolf Project, and it has a Facebook page associated with it where this map was originally posted. If you're interested in following the project and/or learning more about Wolves, take a look at it!
[https://www.facebook.com/VoyageursWolfProject/](https://www.facebook.com/VoyageursWolfProject/)
This is brilliant!
Wolves fascinate and terrify me in equal measure. Incredible animals with amazing social structures.
The wolves know who each other are.
That one White wolf gets way the fuck into Red's range, like shit I knew I should have asked for directions
Reminds me of the Warrior series about wild cats.
Those white wolves are pretty ballsy
Get these on a live transmitter and put it on a website where people can watch. Instant sports. People will watch these wolves and what they do and root for each color.
White wolf is on an adventure to find itself... Deep man.
Does yellow have the prime territory? They would have to defend incursions from competitors on all sides. I wonder if they are the most badass.
these wolves know how to walk in straight lines very well.
Very interesting. I just wish the white and yellow were seperated by a different color so we can distinguish them better.
Some are adventurous, some are invasive, and some just stay where they are.
This map is in for a great story line of the six packs
The white wolf must listen to manowar
oh wow, that's fascinating
Leaked image of Mount and Blade: Bannerlord factions and their territories
All of the wolves are named Toby
I worked and hung out with that guy. Beautiful country out there, miss it.
scale please?
[STAY AWAY FROM THIS AREA!](https://i.imgur.com/bqc12Be.jpg)
White wolves are the bravest apparently... Those white lines are everywhere
Borders are an imaginary concept made up by humans
see? humans didnt invent the borders
Just before seeing this post I was looking at the map of racial distribution in New York City and I can't help but notice the similarities.
Looks like white privilege to me.:)
White wolf's a hoe
Green and Yellow have it the hardest. I wonder if the spots they're in have the best food and water supply.
DNA analysis of markings and droppings to go with this? Would be nice to compare in 10 years to see if there is some intermingling.
Even their travelling patterns vary. For example the red pack has covered most of their defined territory that they don't stray from where as the white pack tends to push the boundaries of their territory.
I would love to see similar with other pack animals such as painted wolves or single mothers.
and then the fire nation attacked
Makes sense. I very rarely go into other people’s homes as well.
Curious why the white and pink groups have so much "open area"? My guess is that their territory has less "usable" land. Maybe a gorge or something.
This thought brought me to the question? Does each "pack" have the same territory as far as usable land?
The two largest territories are the white and pink, yet they have the most "unused" land?
If each territory is equal in usable land, what would dictate this? Are the packs the same in number? Or is it because of dominance and fighting among the different packs?
Please tell me there isn't a wolf counsel the decides....gerrymandering.
I find it interesting how there doesn't seem to a centralized or preferred spot, but rather the entire territory is relatively evenly covered. You'd think they'd have preferred hunting grounds or game trails or something like that, but I guess not.
edit* actually it looks like Yellow at least has a central hub, but Red is almost completely even. I wonder if the streaks of density are game trails or part of a defined route for grazing animals they prey on. There's an obvious strait line streakiness here and straight lines tend to be uncommon in nature.
Looks to me like the white pack doesn't give one single fuck about pack boundaries
Now if only people would do this by minding their own business
The Ptarmigan's Dilemma. Really good book about evolution and natural phenomena/behaviour like this. Great sections on bear's activity very similar to this, would highly recommend!
r/dataisbeautiful
You can see the white signal scouting on the perifery of the territories. Super cool.
The white wolf pack don't give a fuck bro.
White pack is quite adventurous.
The white line wolf went hella far
I really wonder if the fact that the yellow & green wolve packs are used to encounter more neignbouring packs (being squeezed in the middle) makes them have a different perspective on their enviornment than the others? I mean could they feel more threatened, having "more" neignbours? could they feel more pressure to up their game for resources because of "more" potential rivals?
interesting to think about that.
Next week on gangland
White group just doesn't give a fuck. Going through any group they see fit.
See the seams between the colors? Avoid those places if you don't like stepping in wolf pee.
The hell? These white wolves going on a cross country trip or something?
It seems like white takes some risks.
Awesome! Good one for r/dataisbeautiful !
Wolves trying to take over tamriel
This is actually my husbands family avoiding each other through out the year besides holidays.
White wolf goes where tf he wants
And all I ask it the dude riding my bus doesn’t shove his junk onto me at every stop.
Infinity Dogs
We should enforce open borders for wolves. They seem like nazis. Let's make them pay reparations.
r/colorblindgore
Whenever I see stories/studies like this I always find myself comparing humans to animals. These wolves clearly keep to their own areas for the most part. It’s almost like certain groups of people shouldn’t intertwine with each other, but in today’s world everything is about accepting all. It seems we force cultures to coincide with each other and it doesn’t always workout the greatest.
Each other's territories. Wolves have territories.
Damn, wolves are so racist you'd think they were humans!
The chad white wolf pack vs the virgin red wolfpack
Anyone know if there’s open access data similar to this?
Bet you can find a lot of marked trees at the "borders"
There's something oddly familiar about things from California spreading their tendrils out to the PNW.
when the documentary makers say his area, I will take the word area serios
Yellow wolf pack is basically Israel
White wolf pack has no chill lul!
I'm a bot, *bleep*, *bloop*. Someone has linked to this thread from another place on reddit:
- [/r/circlebroke2] [humans are literally the same as wolves. Jordan Peterson told me so. excuse me while I go piss on my house to mark my territory](https://www.reddit.com/r/circlebroke2/comments/a3trjr/humans_are_literally_the_same_as_wolves_jordan/)
- [/r/dataisbeautiful] [I find this extremely interesting](https://www.reddit.com/r/dataisbeautiful/comments/a3qdiv/i_find_this_extremely_interesting/)
- [/r/dataisbeautiful] [“An image of GPS tracking of multiple wolves in six different packs around Voyageurs National Park shows how much the wolf packs avoid each other's range. Image courtesy of Thomas Gable”](https://www.reddit.com/r/dataisbeautiful/comments/a3v1dd/an_image_of_gps_tracking_of_multiple_wolves_in/)
- [/r/the_pack] [“AN IMAGE OF GPS TRACKING OF MULTIPLE WOLVES IN SIX DIFFERENT PACKS AROUND VOYAGEURS NATIONAL PARK SHOWS HOW MUCH THE WOLF PACKS AVOID EACH OTHER'S RANGE. IMAGE COURTESY OF THOMAS GABLE”](https://www.reddit.com/r/THE_PACK/comments/a3r1sr/an_image_of_gps_tracking_of_multiple_wolves_in/)
- [/r/unpanderers] [“An image of GPS tracking of multiple wolves in six different packs around Voyageurs National Park shows how much the wolf packs avoid each other's range. Image courtesy of Thomas Gable”](https://www.reddit.com/r/UnPanderers/comments/a3vg63/an_image_of_gps_tracking_of_multiple_wolves_in/)
*^(If you follow any of the above links, please respect the rules of reddit and don't vote in the other threads.) ^\([Info](/r/TotesMessenger) ^/ ^[Contact](/message/compose?to=/r/TotesMessenger))*
They dont need walls to know where their territory ends.
r/wolves
Interesting!
\#Respect
Seeing this, I'm reminded of the film The Warriors. Got to get back to Coney boys, we're on our own.
Are the longer, straight lines just glitches in the gps?
I’ve never heard of an LGBT Black Metal before
I like the one adventurous grey wolf who snuck deep into red territory and then beelines back home. I imagine a Romeo and Juliet-esque scenario between him and a red wolf Capulet.
Gang territory
Do these packs exchange members to promote cross breeding?
Got a bold white wolf. One just straight checking red's whole edge there. Unless that's a border of some kind in a similar color.
How do you get into wolf pack? Do you have to be born in it or can you maybe change clan down the line?
What I want to know is why some of those lines are so perfectly straight!
It sort of looks like a wolf head, too, in profile at least if you squint hard enough. Purple is the ear, white and blue the mouth, green has an eye carved out in the middle of it, and red's the neck.
Wolf countries
blue pack rules!
Very cool. Thanks for sharing OP.
The Six Kingdoms.
Green packed is either going to start a war or die off slowly, they're cornered with no room for expansion.
Source?
That white wolf is going places
White pack lowkey scouting into others' areas though
Yellow is in a bad spot of war breaks out
I honestly thought this was a shitty map of the Old World.
White is clearly insanity wolf.
Thank you Thomas, very cool!
It almost looks like a giant multicolored wolf head.
I thought this was some guy drawing the borders of Skyrim.
Red wolves have cardio for days
It looks like a member of the white pack has no problem mingling in blue territory. Like some sort of unaccompanied wolf
The white pack love challenges.
The unseen maps of animals
I believe this is called the competitive exclusion principle. Species that compete, include animals of the same species tend to show these characteristics when living in the same proximity.
Wolves: Mind your business, we'll mind ours.
Humans: Let's fuck some people/places/things up.
0/6 gang hideouts discovered
Just out of curiosity what's the size of each territory?
Meanwhile, gold wolf pack is solo on its own island paradise at the top
Even wolves are scared of wolves.
Roughly looks like a map of the world, esp the right side
There's 1 white wolf who don't give a shit. See the white line on the right
Chad white wolves don't care about your "boundaries"
That's a lot of pee.
I'm curious about how a wolf decides to venture within their territory.
Pink wolf, Blue wolf, and White wolf have the widest spans of territory, but Red wolf, Yellow wolf, and Green wolf are more comprehensive about where they go in their own territory.
The red wolf appears to be on meth.
I'd like to see this crossreferenced with the distance in which a wolf could smell or hear or otherwise detect fellow wolves!
Me when I see people I graduated with at Walmart.
TIL wolves are bad at MSpaint
The white pack doesn’t give a fuck
It's like scandinavian people waiting for the bus.
But I would think there needs to be some interaction so that they don't interbreed in order to keep the gene pools healthy.
Basically like gangs, and the gangsters do tend to display animal-like behaviors. Build the wall
#openborders
The white line is the trader
White just be like "they see me Rollin"
The white bastards would take some liberties like that. Typical.
The white wolf clan also appears to be knowledgeable of his GPS tracker and has drawn a modern art version of a white wolfs face.
###Markdown
This will work for some submissions, but for others that have more comments this code will throw an AttributeError saying: ``AttributeError: 'MoreComments' object has no attribute 'body'``. These MoreComments objects represent the “load more comments” and “continue this thread” links encountered on the website, as described in more detail in the comment documentation. To get rid of the MoreComments objects, we can check the datatype of each comment before printing the body.
###Code
from praw.models import MoreComments
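# Skip the "load more comments" placeholders; only real Comment
# objects have a .body attribute.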
for top_level_comment in submission.comments:
if isinstance(top_level_comment, MoreComments):
continue
print(top_level_comment.body)
###Output
###Markdown
The cell below shows another way of getting rid of the MoreComments objects.
###Code
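# replace_more(limit=0) removes every MoreComments placeholder without
# issuing the extra requests that would be needed to fetch them.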
submission.comments.replace_more(limit=0)
for top_level_comment in submission.comments:
print(top_level_comment.body)
###Output
###Markdown
The above code blocks only got the top-level comments. If we want the complete ``CommentForest``, we need to use the ``.list`` method.
###Code
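# limit=None keeps replacing MoreComments objects until none remain
# (each replacement costs an extra API request on large threads), and
# .list() flattens the CommentForest so replies are included as well.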
submission.comments.replace_more(limit=None)
for comment in submission.comments.list():
print(comment.body)
###Output
Anyone know if there’s open access data similar to this?
Bet you can find a lot of marked trees at the "borders"
There's something oddly familiar about things from California spreading their tendrils out to the PNW.
when the documentary makers say his area, I will take the word area serios
Yellow wolf pack is basically Israel
White wolf pack has no chill lul!
I'm a bot, *bleep*, *bloop*. Someone has linked to this thread from another place on reddit:
- [/r/circlebroke2] [humans are literally the same as wolves. Jordan Peterson told me so. excuse me while I go piss on my house to mark my territory](https://www.reddit.com/r/circlebroke2/comments/a3trjr/humans_are_literally_the_same_as_wolves_jordan/)
- [/r/dataisbeautiful] [I find this extremely interesting](https://www.reddit.com/r/dataisbeautiful/comments/a3qdiv/i_find_this_extremely_interesting/)
- [/r/dataisbeautiful] [“An image of GPS tracking of multiple wolves in six different packs around Voyageurs National Park shows how much the wolf packs avoid each other's range. Image courtesy of Thomas Gable”](https://www.reddit.com/r/dataisbeautiful/comments/a3v1dd/an_image_of_gps_tracking_of_multiple_wolves_in/)
- [/r/the_pack] [“AN IMAGE OF GPS TRACKING OF MULTIPLE WOLVES IN SIX DIFFERENT PACKS AROUND VOYAGEURS NATIONAL PARK SHOWS HOW MUCH THE WOLF PACKS AVOID EACH OTHER'S RANGE. IMAGE COURTESY OF THOMAS GABLE”](https://www.reddit.com/r/THE_PACK/comments/a3r1sr/an_image_of_gps_tracking_of_multiple_wolves_in/)
- [/r/unpanderers] [“An image of GPS tracking of multiple wolves in six different packs around Voyageurs National Park shows how much the wolf packs avoid each other's range. Image courtesy of Thomas Gable”](https://www.reddit.com/r/UnPanderers/comments/a3vg63/an_image_of_gps_tracking_of_multiple_wolves_in/)
*^(If you follow any of the above links, please respect the rules of reddit and don't vote in the other threads.) ^\([Info](/r/TotesMessenger) ^/ ^[Contact](/message/compose?to=/r/TotesMessenger))*
They dont need walls to know where their territory ends.
r/wolves
Interesting!
\#Respect
Seeing this, I'm reminded of the film The Warriors. Got to get back to Coney boys, we're on our own.
Are the longer, straight lines just glitches in the gps?
I’ve never heard of an LGBT Black Metal before
I like the one adventurous grey wolf who snuck deep into red territory and then beelines back home. I imagine a Romeo and Juliet-esque scenario between him and a red wolf Capulet.
Gang territory
Do these packs exchange members to promote cross breeding?
Got a bold white wolf. One just straight checking red's whole edge there. Unless that's a border of some kind in a similar color.
How do you get into wolf pack? Do you have to be born in it or can you maybe change clan down the line?
What I want to know is why some of those lines are so perfectly straight!
It sort of looks like a wolf head, too, in profile at least if you squint hard enough. Purple is the ear, white and blue the mouth, green has an eye carved out in the middle of it, and red's the neck.
Wolf countries
blue pack rules!
Very cool. Thanks for sharing OP.
The Six Kingdoms.
Green packed is either going to start a war or die off slowly, they're cornered with no room for expansion.
Source?
That white wolf is going places
White pack lowkey scouting into others' areas though
Yellow is in a bad spot of war breaks out
I honestly thought this was a shitty map of the Old World.
White is clearly insanity wolf.
Thank you Thomas, very cool!
It almost looks like a giant multicolored wolf head.
I thought this was some guy drawing the borders of Skyrim.
Red wolves have cardio for days
It looks like a member of the white pack has no problem mingling in blue territory. Like some sort of unaccompanied wolf
The white pack love challenges.
The unseen maps of animals
I believe this is called the competitive exclusion principle. Species that compete, include animals of the same species tend to show these characteristics when living in the same proximity.
Wolves: Mind your business, we'll mind ours.
Humans: Let's fuck some people/places/things up.
0/6 gang hideouts discovered
Just out of curiosity what's the size of each territory?
Meanwhile, gold wolf pack is solo on its own island paradise at the top
Even wolves are scared of wolves.
Roughly looks like a map of the world, esp the right side
There's 1 white wolf who don't give a shit. See the white line on the right
Chad white wolves don't care about your "boundaries"
That's a lot of pee.
I'm curious about how a wolf decides to venture within their territory.
Pink wolf, Blue wolf, and White wolf have the widest spans of territory, but Red wolf, Yellow wolf, and Green wolf are more comprehensive about where they go in their own territory.
The red wolf appears to be on meth.
I'd like to see this crossreferenced with the distance in which a wolf could smell or hear or otherwise detect fellow wolves!
Me when I see people I graduated with at Walmart.
TIL wolves are bad at MSpaint
The white pack doesn’t give a fuck
It's like scandinavian people waiting for the bus.
But I would think there needs to be some interaction so that they don't interbreed in order to keep the gene pools healthy.
Basically like gangs, and the gangsters do tend to display animal-like behaviors. Build the wall
#openborders
The white line is the trader
White just be like "they see me Rollin"
The white bastards would take some liberties like that. Typical.
The white wolf clan also appears to be knowledgeable of his GPS tracker and has drawn a modern art version of a white wolfs face.
It might still be, just with a believable shitpost title included
It IS funny that the scientists used the MS Paint palette.
Considering how wolves mark their territory it might actually be a piss post.
Red is the Australia of Wolf Risk.
Well these wolves do use shit posts to mark their territories, so is a quality post that depends on shitposts.
I thought it was a map of something moving over the U.S. over time...
Wolf 1: "Damn it, Frederick, you can't go into other packs' territory like that! You'll start a wolf war!"
Wolf 2 (alpha af): *lights cigarette, drags a long puff, then flicks it onto the gas can next to a red wolf pissed-on tree. The tree explodes into a fireball.* "I'm counting on it."
It's less about keen senses, and more about copious amounts of piss.
HE IS...THE WHITE WOLF. DAKINGINDANORF
On the top half I think the white line is just a border - it's fatter and in straight lines.
Is this a visual result of dogs peeing on trees?
Because some wolves aren't looking for anything logical, like prey. They won't listen to huffs, barks, growls and howls. Some wolves just want to watch the world burn.
You might even call it...a lone wolf.
I bet a strong smell of wolf piss clearly marks the territory borders...
Those senses are why dogs don't belong in wilderness areas. They like to pee and poop to leave scent marks just like wolves do to mark their territory, and they especially like to do it in a place where they smell something interesting. When they do that, they cover the territorial markers of other animals. It's like going for a walk along the border of North and South Korea and kicking over all of the border markers.
I'm pretty sure that's just Moon Moon getting lost.
Yeah...canines are incredible and smelling and differentiating urine. Other than everyone, who knew?
The WhiteFang clan is aggressive as always, always picking fights with the BloodMoons.
Or could be downs
Well, white things have a long history of claiming territory that doesn't belong to them.
> there’s one from the white colored clan who just goes way into the other territories because, well, he’s a badass.
I'm thinking it's more for reasons of "genetic diversity", if ya know what I mean.
if there's one thing my dog understands it is boundaries
> I like to think that there’s one from the white colored clan who just goes way into the other territories because, well, he’s a badass.
Or has a really bad sense of smell. Maybe a cold.
Strategy: piss on EVERYTHING
They are very good at smelling pee and very good at peeing everywhere.
I mean, they literally piss and shit all over everything on purpose, don't really need keen senses.
They piss on every stone to mark boundaries. They have a great sense of smell. I expect it's easy to avoid the others territory.
It really is.
I saw that guy too. Gives no fucks
Just like how police put up yellow tape around a crime scene and you know to not walk in, wolves leave a yellow tape of pee.
Seriously, I think you’re right. I think there might be an alpha that’s more alpha than the others of the other packs. Basically like the Hulk, don’t fuck with him and he won’t fuck with you. Fuck with him and you’re done.
White is going for the diplomatic victory.
Hes just lookin for the ladies
Looks like the white colored pack has one wolf that just doesn’t give a fuck and goes all the way east
He’s been writing love letters to the orange one and finally had enough of his family and his life and decided to go on an adventure to find his true love in the north.
Because whites invade colors. Ya, we get it.
White and yellow!
I think it might have to do with the reds being pushovers.
It looks like they have less territory and are more cramped and white and yellow venture into it.
Maybe he's just super popular and the other wolves have accepted him into their clique.
Monkeys will go on solo missions into enemy territory to mate.
I wonder if that's what this Lone Wolf is doing.
White wolf is a pokemon go champ
Yeah, it seems to demonstrate that territory boundaries, like human countries, aren't just a construct of our own intelligence, but rather a more innate behaviour of social predators in general.
Right? I expected a lot more overlap on every border.
The "straight" lines inside the borders are what interest me. Are those "Game Trails"?
Wouldn't be surprised if that pack is in an area of the forest with less food so they are forced to hunt in others' territory at times
I'm *pretty* sure the thick white line represents the national border between US and Canada (national park is near the NE edge of Minnesota)...
It's Moon Moon
The white wolves invading the red wolves land... who would have guessed that would happen?
Maybe he just has wares to sell to the other packs.
White wolf: Hi! Want to be my friend?
Typical white wolf privilege.
I was gonna say white wolf gives no fucks
The White Wolf is named Geralt, and he's just an adventurer
He's a migrant worker wolf
It's called white privilege.
*Lone Wolf...
But he respects yellow wolf
Conversely, it seems like the red wolves are very polite
Fucking white privileged wolves
Maybe he is a diplomat or is juvenile looking for a mate.
White wolf has a kick ass name too. It's the moonshadow pack. No wonder they impose all over the place...
Typical Whitey.
The master wolf race, the white wolf.
Red wolf has cardio for days
One man wolf pack.
>wolves eating beaver
( ͡° ͜ʖ ͡°)
Veryinteresting
Oh man, that video has some excellent wildlife footage.
The thought of a wolf snarling at me with blueberry juice all in its mouth sounds horrifying
Thanks for posting!
PSA: open in incognito to bypass the "answer survey to read more" bullshit
That dude went all the way thru purple, look at the upper right
Hey don't call Triss Merigold like that!
>I fucked your bitch, you wolf motherfucker!
Poor green wolves have the smallest territory :(
I wanted to know how large the ranges are, so I compared them with Google Maps satellite images. They're roughly 10-15 km across each, if anyone else was wondering.
Like dicks
Do you get asked to show a picture of a six pack often?
Seven, there is a ~~king~~ wolf ~~in the north~~ across the river
More likely tail.
it would be beautiful to display area as discrete points sized by frequency of occupation. the lines crossing each other over and over again destroys interesting (and meaningful) information.
I wondered the same thing, so I found the location on Google Maps and....nothing. It looks the same as all the territory around it. It's near a highway, but that highway passes straight through their territory and doesnt affect the wolves' movements anywhere else.
humans would be my guess
Or the Phantom of the Opera
Man San Andreas was hard enough WITH cheats on.
Not a fan of diversity?
That’s... insane. I saw this post several places today and was curious about the scale, but I figured we were talking <5 sq miles each, not >25...
>larger than San Marino
What is San Marino and, out of curiosity, why would you choose that as a reference?
Holy shit. That was crazy. Fuck those cannibal chimps. I hope the escapees get their homeboys and retaliate.
Thanks for the link! Good stuff!
White Wolf is defs ShadowClan
Go Team White
[deleted]
This is some straight up "Warriors" shit, but with wolves.
Don't leave us hanging
Which people would those be? I've tried to come up with non-racist translation, but I'm failing.
Borders aren't racist. Packs aren't race based afaik either?
Lmao that’s the Canadian-U.S. border you’re trying to point out.
Now that would be next level trolling.
It seems to check out. [Facebook page with a lot more information here.](https://www.facebook.com/VoyageursWolfProject)
Ummm animals don’t travel in perfectly straight lines over long distances... well maybe birds but not wolves. Is it maybe pinging their location every x amount of time and connecting the dots??
If not, it’s pretty impressive how far some wolves go in a perfectly straight line.
Underrated comment
Low risk, but little prey. +2. Yellow is Europe. Abundance of prey, surrounded by hostile packs. +5
get this man in front of an executive producer...this...instant!
FWIW, I heard that the study that coined the term 'alpha dogs' was found to be a bit wrong when that pack was revisited. If memory serves me well, they found out that the "alphas" turned out to be the parents of the other wolves/dogs. So what we think of as "alpha" behavior is just parenting.
Hi! Jim, from Netflix. You're greenlit for 3 seasons! We look forward to seeing the pilot of "Wolf War" very soon!
It might be a non-alpha female trying to get pregnant (which they are not supposed to do she would be punished for doing this) but the mating drive can be quite strong for them I guess sometimes. Her own alpha male wouldn't mate with her.
Either that, or he's on drugs. My money is on drugs.
Game of Dens
!ThesaurizeThis
> *a red wolf pissed-on tree.*
beautiful. :p
I imagine him being in a Romeo Juliet scenario. Sneaking off to get some forbidden tail.
Wolfare
>Damn it, Frederick,
Frederick: We're werewolves, not swear wolves.
Oh Summer... first wolf war huh
blachsheep 2: blackwolf
!Thesaurizethis !DoTheFandango
https://www.google.com/url?sa=t&source=web&rct=j&url=https://m.youtube.com/watch%3Fv%3DJw0c9z8EllE&ved=2ahUKEwiQ29Tm_4vfAhVJzFQKHQlNCZsQyCkwAHoECAsQBA&usg=AOvVaw2SDijitxjRxb6h5Su393wd
I’d like to see this movie made in the style of A Dog’s Way Home, all happy on the surface, but twisted and dark underneath.
Why does wolf 1 give wolf 2 a name but Frederick is still called wolf 2 after lol
r/prequelmemes talking to each other
Plot twist: Frederick is played by Liam Neeson. No one knows it, but he was once a man, now reincarnated as a wolf. He’s seeking vengeance with a fury that few of the other wolves can even comprehend.
The Grey 2: Wolf War is gonna be AWESOME!
It’s survivors all over again
Played by Willem Defoe
In the styling of Fantastic Mr. Fox
you forgot to narrow his eye lids
Some wolves just want to watch the world burn
Some wolves just want to watch the world burn
They don't use their senses to detect the piss?
You can see it at boundary waters canoe area in Minnesota. Clearly demarcated with pee in the snow and on trees.
The White Wolf has rested long enough.
KINGINDANORF!!
KINGINDANORF
Maybe it’s a few fat white wolves known to most folks as Border Patrol.
Yeah that's the Canadian border
upvoted so more people can DISCOVER THE WOLF TRUTH
What do you expect? They killed his dog...
That white wolf, he just wants to watch the humans...turn.
You bet your ass my dog now owns this whole national forest I'd like to see yours try to take it from him! Oh wait never mind he peed on it it's his now...
And yet when I do this to claim my favorite booth at Applebee's I'm "scaring the children" and deserve a "drunk and disorderly charge."
Not nice!!
And the Kashmir border dispute is due to a lack of urine.
Idk man, don't you think that's a bit of an overstatement? I think its a pretty big leap to connect hunting territories of Wolves to human countries.
huh?
natural mountain ranges, rivers, and coast lines play a huge part in territory boundaries
So explain the similar size of each territory. We could also point out that, unlike humans, they don’t seek to expand territory, but rather only “have” territory to keep an even distribution of available food/water
Oh of course, boggles my mind that some people think otherwise.
Talk about jumping to conclusions...
It’s pee. All pee.
Nice observation! It's an effect of the GPS collar "fix rate.". They vary according to collar and research needs (older collars - lower fix rates generally) . Fix rates between 15 minutes and 15 hours are common in habitat studies, so depending on the behavior of the animal, you can get real long lines connecting the sample locations.
If you looks closely the white wolf paths outside of the normal "area" typically follows the shoreline of a body of water. So, I think you are right.
They are the only pack whose territory is not adjacent to a water source.
Maybe they are actually the Kings of the wolves and they go to others' lands to collect tribute.
You're thinking of the black pack.
100%, I'm no wolf expert but I don't think they're known for walking in straight lines and turning at 90 degree angles.
Maybe it’s a few fat white wolves who are known to most folks as Border Patrol.
That's an unfortunate color choice....
The norther one in the top right corner is the national border (and park boundary, which coincides), the southern white line, which everybody is joking about is just the park boundary. Three of the packs reside entirely outside the national park.
Who invited moon moon?
Lissen chat.
White wolf imperialists 😤
Apparently they eat a lot of beaver
Right?! I don’t know that much about wolves but I thought the distinct boundaries were fascinating! I thought of this sub immediately :)
I think that might be a county line
That's the MN border, not the white wolf line (same for the thick line in the teal and green)
But the green territory is right up along that waterline to the north, it's probably very bountiful hunting grounds.
In fact, that might be why they have less space - each pack takes as much space as it needs, and they need less
##teampink
Poor Orange Wolves to the north are so sparse. I hope the Green Wolves don't invade their land.
Looks like there’s a smaller orangeish one, cuts off though
Fun fact: A housecat's "territory", meaning the area in which they range when outside, is usually about a mile in diameter.
UNLESS they're fixed, when it drops down to less than half that.
...are dicks roughly all the same size?
Well now that he has it, someone better ask
Is that a separate pack or did one crazy yellow wolf cross over?
⠰⡿⠿⠛⠛⠻⠿⣷
⠀⠀⠀⠀⠀⠀⣀⣄⡀⠀⠀⠀⠀⢀⣀⣀⣤⣄⣀⡀
⠀⠀⠀⠀⠀⢸⣿⣿⣷⠀⠀⠀⠀⠛⠛⣿⣿⣿⡛⠿⠷
⠀⠀⠀⠀⠀⠘⠿⠿⠋⠀⠀⠀⠀⠀⠀⣿⣿⣿⠇
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠉⠁
⠀⠀⠀⠀⣿⣷⣄⠀⢶⣶⣷⣶⣶⣤⣀
⠀⠀⠀⠀⣿⣿⣿⠀⠀⠀⠀⠀⠈⠙⠻⠗
⠀⠀⠀⣰⣿⣿⣿⠀⠀⠀⠀⢀⣀⣠⣤⣴⣶⡄
⠀⣠⣾⣿⣿⣿⣥⣶⣶⣿⣿⣿⣿⣿⠿⠿⠛⠃
⢰⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡄
⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡁
⠈⢿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠁
⠀⠀⠛⢿⣿⣿⣿⣿⣿⣿⡿⠟
⠀⠀⠀⠀⠀⠉⠉⠉
I think it eliminates some of the information (as you say), but it gets a different picture across. No visualization can do everything, and this seems to still give a lot of useful information for many purposes.
Definitely a Balrog pit.
I believe diversity is an old, old wooden ship..used in the civil war era.
I went to a list of countries by area and it was the one closest to 25 sq. miles.
[San Marino](https://en.wikipedia.org/wiki/San_Marino) is one of the smallest countries in the world and is completely surrounded by Italy, similar to the Vatican City.
>Borders aren't racist.
bold statement on reddit
It's a joke
Guess we'll never know.
Anyone know how to make Quiche?
Easy karma.. Just open ms paint, go crazy with lines, then make up a title.
r/mspaintisbeautiful
These GPS tracker don’t record continuously to increase the efficiency of their battery. This is basically a set of data points being connected with lines.
EP: "I'd like to produce your wolf movie"
/u/mcjunker: *lights cigarette, drags a long puff, then flicks it onto the gas can he brought and placed next to a desk. The desk explodes into a fireball.* "I'm counting on it."
​
Now is no time to CRY WOLF! This summer, Tom Cruise is A WOLF IN DEATHS CLOTHING.
[Done](https://i.etsystatic.com/14448759/r/il/34afc0/1341759833/il_570xN.1341759833_oi7i.jpg)
Are we blind?! Deploy the executive producers!
Where's Ryan Reynolds when you need him?
So the Chads are actually the Dads?
The research that underlay that old pack theory was done on a bunch of unrelated wolves in captivity. It's as if human sociology were based entirely on a single prison study. Later field studies revealed that, just as you said, wolf packs are families, and the "alpha pair" is just mom and dad.
My understanding was that the issue with the original study was that it was done with wolves in captivity, with no relation to each other, which didn't apply to wolves in the wild. And, as you say, to the extent that there are heirarchies in wild packs, it is on account of family structure.
Sooo... is that why the dad bod fad is happening?
Hey TIL this as well! Loved it.
Yeah more specifically, "alpha" behavior only presents itself among wolves/dogs in captivity or otherwise domesticated. Like the author of Jaws, the author of this study always regretted publishing his findings due to the volume of misinformation that's spread since
"Identity theft is not a joke Jim! Millions of families suffer each year!... Michael!"
yeah, because her own alpha male would be her dad. she or he might just be running around trying to get their teen wolf freak on
He's not a coke head, he just really likes the smell.
For fucks sake, Moon Moon, get your act together
Classicist 1: "Bloody it, Town, you can't go into else lades' geographic region like that! You'll leave a Hugo Wolf war!"
Wolf 2 (important af): *returns coffin nail, fall backs a long-term puff of air, then twinkles it onto the fossil fuel can adjacent to a cherry assaulter pissed-on run. The Sir Herbert Beerbohm Tree irrupts into a meteor.* "I'm count on it."
***
^(This is a bot. I try my best, but my best is 80% mediocrity 20% hilarity. Created by OrionSuperman. Check out my best work at /r/ThesaurizeThis)
Eyelid. Obviously, he's rocking an eyepatch.
Otherwise, good correction.
Oh, they use their senses, I'm just saying that given the amounts of pee involved, their sense of smell probably doesn't have to be so keen.
But that could just be drunk Viking fans. They’ll pee on anything.
Maybe it's a bunch of furries.
**AWOOOOOOO**OOOooo
I'm pretty sure there's more than enough on the Indian side
/r/brandnewsentence
[deleted]
Human borders have had a real impact on animal lives. The Iron Curtain in particular resulted in a long area that wildlife could flourish in.
A study found that deer in the Harz mountains in central Germany still avoid crossing the old East-West border; which as well as landmines had a rather nasty spring-mounted mine mounted to the final fence before the West that would kill deer from 120 metres away.
Also, since the Schengen Area has eliminated border controls from Poland to Germany, wolves have been turning up in the latter.
How? These are literally Wolf-states...
It's not a leap to say that social animals form social groups. And groupings have boundaries, which is what makes them groups in the first place. You can observe this in everything from mice to chimps.
Well, som borders are wholly arbitrary constructs. Pretty much every US state border, for example.
There are people who think that human society and culture just appeared one day.
Lol that's a border
You're like the autistic sherlock holmes.
Noticed this too less water means less prey
Wolf OCD is a really understudied problem
You’re definitely right that it’s the border BUT you can see examples on this map of the white wolf walking a straight line and then turning at a 90 degree angle in the bottom left. So I wouldn’t use that as an argument point.
To assure battery life, the GPS tracker may only broadcast its location every few minutes. When you connect the dots you get straight lines.
Edit: according to the [Article](https://www.duluthnewstribune.com/news/science-and-nature/4538836-voyageurs-national-park-wolves-eating-beaver-and-blueberries-not) they take 72 points per day.
Fucking moon moon
[Woof](https://youtu.be/dhh0yqPT0zY)
Did not know that wolves eat blueberries. Didn’t even know they could since foods like grapes are poisonous
r/Dataisbeautiful might like this too
Interstate wolf, man. He can't be stopped!
Dang you are right, that about had my jaw dropping thinking of the implications. He was all over blues territory though.
That's what she said.
I'll show myself out.
"Grove Street is king! Say it with me niggas, Grove Street is KING! Yeah!"
That entire area has a relatively thin population so the green ones are just getting shafted.
This is exactly what I was thinking. Who needs the space when you’re in such a prime location.
Unsubscribe from cat facts
Having an outdoor housecat that is unaltered is a poor, irresponsible decision in regards to helping reduce feral cat populations and having an alive pet.
Typically. Depends on the gender though.
Hey /u/joppiejoo can you show me a picture of your sixpack?
a different pack
Oh, I didn't know that. thanks!
pardon?
/r/outoftheloop
r/gifrecipes
Red wolf pack leader: *on stump, addressing feral red wolves dressed in vaguely Nazi-ish uniforms* "Brothers! The forest is ours! The inferior wolves shall be destroyed once and for all. And this shall be a world Red in tooth and claw!"
Red wolves: *snarl militantly*
Tom Cruise (dressed as a white wolf with an eyepatch and a bandana): *flies attack helicopter over the parade field, drowning out the red wolves' chants. The missile systems auto lock on the Red wolf pack leader* "I don't think so."
Red wolf pack leader: "Impossible! Wolves can't operate advanced human technology!"
Tom Cruise: "Once you pay $10,000 to reach level 10 in Scientology and fully clear your system of thetans, anything is possible." *launches missile. The red wolf leader screams as he erupts into a fireball the size of a skyscraper.*
*DIRECTED BY MICHAEL BAY*
Directed by Michael Bay, with executive producer John Woo.
r/redditwritesamovie
This is actually a rather obscure horror film from 2003 called *Cry\_Wolf* with Lindy "Canada's most sexily evil redhead" Booth in it.
And the way it feels inside his nose!
Hmmm...
Hmm, I kinda wanna disagree with you there. I think it would require some level of *keen-ness* to be able to tell apart your own pact's piss and other pact's piss. After all, belonging in a different pact wouldn't impact your smell that much, since they are literally in the same geological location and it's also quite likely that they share a small gene pool and thus there wouldn't be much difference (or so we think) in the smell.
Also if there are copious amounts of piss from numerous different wolves in the forest, than it would require even more keen senses to be able to draw useful information from such a mess.
It's true, piss smells pretty strongly. I'm a scientist.
Exactly how much pee are we talking here?
Just like you can tell Buffalo Bills territory by the heaps of broken tables. Or Eagles territory by the discarded D cell batteries.
And now you've started a religious war. You proud of yourself?
I think this is certainly convincing evidence for the presence of territoriality among pack animals, including humans, but I'd argue it's different to equate that directly to national boundaries. National boundaries aren't based in the same intuitive, biologically-palpable markers, and can be quite defined quite arbitrarily. Will humans innately know they've crossed into another country if there were no signs, border posts or other markers to indicate it? It can be difficult even to know when we've crossed a county line, or even across private property lines. Signs and border posts are the human equivalents of scent markings, of course - but then if there are no signs, would we even notice the border?
So I think we can say that we have an innate tendency to be territorial, but the exact scale and nature of those territorial boundaries are extremely flexible for us. We're not reliant on physical scent marking, but on highly abstract social processes. The countries we have today would not at all have been intuitive or sensible to humans living 3000 years ago - the idea that communities could exist on such a scale would seem ludicrous. Hell, people in the 13 Colonies did not at all think of each other as living within the same community just because they lived within the same federal country line. But generations pass, narratives are created, and presto, the 13 Colonies become the USA, not just administratively, but intuitively and socially. So it's something that ultimately can be immensely flexible.
Wolves in a pack know each other very well, you haven't even met 99.99% of the people living in your country...
IMO, comparing the way animals tend to be territorial with the very recent invention of national borders is dubious. It suggests that borders are natural, human nature, ect, when the reality is Humans have lived without boarders far longer than they've lived with them.
Also
>The alternative has been genocide and war many times.
I'm not sure what you mean by that? The alternative to what, boarders? I'd argue that boarders have cause more genocide and war more than the "alternative."
Someone is a bit desperatly trying to connect the two things, it seems.
> Many pack and social animal use boarders
Many solo animals have defined borders as well, while many social animals don't have defined territories.
> People are no different.
People are very different from wolfes, sorry to burst your bubble.
> Respected boarders allow us get along and cooperate. The alternative has been genocide and war many times.
Are they though? In modern times nearly all the wars and their victims are happening inside states, with resoources and money as goals not borders. The IS did not rise in Iraq because the borders where badly desgined, neither did Soviet Russia start it's domination and rule over eastern europe becasue of ill defined border. The Mongols didn't conquer the whole world because of an absence of borders, nor was the Holocaust triggered by borders.
I’m not sure about that, in my region at least each state has a unique culture and it’s readily acknowledged by pretty much anyone you talk to.
Are you insinuating that the borders of these wolves' packs aren't wholly arbitrary?
This reminds me of that "hog walls around states" map
Oh my god he's walking all over my SCREEN!
"Well, clearly, the blue part is land..."
Sherlock Holmes is the autistic Sherlock Holmes
....And now I am reading this https://www.veterinarypracticenews.com/obsessive-compulsive-disorder-in-animals/
WolfCD.
That’s why, this December, the NFL is committed to bringing awareness of Wolf OCD to you at home as a part of their charitable, lupine-centric outreach campaign.
I'm not sure, I suspect the tracker pings every so often then just connects the dots.
I have to assume this tracking information comes along with first hand accounts, right? Otherwise, why assume they're eating blueberries, rather than predating upon other animals coming to eat the blueberries (rabbits, woodchucks, etc.)?
I didn't see that in the article, just seemed like an odd leap to go from spending time in the blueberry patches to mean they're definitely eating a bunch of blueberries. It would be like thinking bears really like drinking water when the salmon run because they're spending so much time in rivers.
International line, customs be damned.
Zooms out map
"...oh good lord"
r/unexpectedsanandreas
It’s a baked, pie-like dish.
Thank you for this sub :)
Executive Producer Dick Wolf
!Thesaurizethis
It's actually level OT8.
Reads post, thinks this movie already went down hill, reads Directed by Michael Bay... clever, very clever.
The fact that you didn't put some variant of the line "there's a storm coming" in your fake trailer dishonors us all.
And everything changed, when the red wolves attacked
John Woolf*
Missed a golden opportunity for a "John Woo is actually a wolf wearing a Chinese person's skin" joke
*john awooo
And his dick
It's "pack"
I'm guessing they're mostly going off of https://en.wikipedia.org/wiki/Major_histocompatibility_complex comparison for territoriality (like a lot of species are already known to do to find genetically-distant mates.)
The convenient thing about the MHC is that it's not *just* genetic, it also differs based on what diseases you've encountered in your life, so two members of the same pack will smell more similar (since they're constantly passing diseases back and forth, just like a human family) but members of different packs will smell more different.
So you just have to avoid any area that smells like not-you, histochemically, and you'll be fine.
I agree, but I'm a mere piss enthusiast.
Especially after asparagus.
Trust me, I ate some last night.
As an Eagles fan, who's Dad is from Buffalo, that has spent significant time in the Minnesota Boundary Waters....
Just yes. Yes to this thread
Religious wars are easy to win.
All you have to do is kill all the heathens.
The comparison is about the formation of boundaries. They are not comparing a human customs booth to a bush with wolf piss on it.
Hence the idea of tiered governments. Ideally, you can have a first degree connection with a lot more people in your local neighborhood. By extension, a second degree connection with a good majority of your community. A third degree connection with most of your state. etc. Overly large, separated, centralized governing structures don't exactly work perfectly for people either.
How many billions of wolves are there in a pack?
And? Do you think that national identity is nonexistent? Or that humans developed nation-states entirely independently of the fact that we are social animals?
Yeah this is the obvious point people are missing here. If people lived in small tribes with some kind of boundary that 'outsiders' weren't exactly welcome in, alright that kinda resembles a wolf pack. You look out for and trust the few dozen/hundred people that you know, eat with, hunt with, and see literally everyday.
A nation-state of 300 million people, across a continent, and saying "this is my wolf pack, totally a natural and biological human inclination" is ridiculous.
Territorial borders of humans are not a recent invention. It’s a universal trait of humans to group into tribes that control a piece of territory, especially since the invention of farming.
With all due respect it’s almost insulting that you would say something so absolutely contrary to reality with such confidence. Who are you trying to fool and why? I genuinely don’t understand.
> Humans have lived without boarders far longer than they've lived with them.
Do you have a source for this? I'm curious to read a little more.
On the surface i feel like some of our boarders that are in the middle of fields may be more recent but i'd believe lakes/rivers clearly separated territories between different groups of humans.
Humans, however, are very similar to chimps and they definitely have defined territory and borders
Not really arbitrary it's constrained by resources, population size and the presence of other packs as well as geography of rivers or man made citys or obstructions blocking their paths.
Untapped infinite h o g s
Link?
Michael Moore once had a tv show on Fox (seriously) called TV Nation, they had an episode where they looked at pet OCD and related disorders and it was pretty interesting. I particularly remember this dog that had an absolute obsession with a chunk of wood and would carry it everywhere, rub up and all over on it, push it along the ground, etc, not-stop and obsessively. prozac helped him iirc.
EDIT - found it:
https://www.youtube.com/watch?v=ujPjkbI42yA
it's one of a couple segments of that show that were kind of awkward though, in that it seemed like they wanted to mock what they were documenting but ended up not getting the 'right' material to make the participants look bad. Another instance like that featured now-famous presenter Louis Theroux exploring commercial crime scene clean-up services.
I hope someone finds this comment at least somewhat interesting
Hitler was right
Somebody make this a sub, RIGHT NOW
r/subsyoufellfor
Surely you can't be serious?
Like pie?
God *damn* it why didn't I think of that?
Sum savage crowd drawing card: *on ambo, addressing savage colorful assailants clad in mistily Nazi-ish furnishes* "Friends! The land is ours! The mediocre mashers shall be blasted past and for all. And this shall be a terrestrial planet Coloured in way and claw!"
Red philanderers: *verbalise militantly*
Tom Search (polished as a river classical scholar with an patch and a hankie): *controls set on heavier-than-air craft ended the procession reply, drowning out the chromatic colour philanderers' mouths. The weapon system body parts machine hold on the Sum assailant take loss leader* "I don't think up so."
Red Wolf load up person: "Impossibility! Canids can't lock late soul technology!"
Tom Travel: "In one case you bear $10,000 to reach out layer 10 in Church of Scientology and in full clean-handed your scheme of thetans, thing is fermentable." *displaces weapon. The colored Hugo Wolf feature jests as he deepens into a globe the separate of a skyscraper.*
*DIRECTED BY ARCHANGEL BAY LAUREL*
***
^(This is a bot. I try my best, but my best is 80% mediocrity 20% hilarity. Created by OrionSuperman. Check out my best work at /r/ThesaurizeThis)
John Awoo?
Ah yes, that’s old chestnut
....*sigh* username checks out I guess
Maybe they’re warlock wolves
> It's "pack"
Until blue joins up with red and you're fighting a two-front war.
For the Emperor!
I'm saying that customs booth's are how we make boundaries. Take those away (or the equivalent cultural item, i.e. ritual tree carvings, flags, etc.) and it's anybody's guess where the boundary's supposed to be. Just as if you removed the pissed-on-bushes, the wolves would have no clue as to where the rival packs operate.
Who do you think are closer culturally, a guy who lived his whole life in Seattle and someone from Vancouver, or that same guy from Seattlw with someone from Alabama?
So you agree that humans banding together is natural? Yet somehow can’t fathom doing it at a large scale?
More importantly who the hell is upvoting the idiot.
You just see the superficial but fail to see the reasons that happens, the best explanation for this is the concept of increasing returns of violence, which is the whole premise of the book "The Sovereign Individual".
Idk why you feel insulted when you are the one who's making giant assumptions. Humans are social animals and pretty much every society has some set of rules, that much is true. Which is to say some people will be outsiders. Those "borders" between the inside and the outside though would wary wildly depending on geography, culture, historical context, scarcity etc. That doesn't mean it's somehow "natural" for human societies to be territorial like the wolves are.
Modern international borders on the other hand, are products of nation states, which didn't exist until two hundred or so years ago. They have absolutely nothing to do with behaviors exhibited by pack animals. This is the same idiotic line of thought that led to people calling themselves alpha males and all that.
Oh, so just like human borders then?
https://i.imgur.com/h0wYIxJ.png
https://i.kym-cdn.com/photos/images/original/001/319/692/0a2.png
> I hope someone finds this comment at least somewhat interesting
Mission accomplished :)
I am serious! And don't call me Shirley.
It’s okay. I got your back.
I don't know
Best bot ever.
Good Bot
John oWo
Could be! Explains the teleportations!
People who think borders are an archaic construct? I almost disagree with both posters in some ways. "Borders" are an animal instinct that have no real place in an ideal global society.
The reason it happens is because clan kinship groups benefit by uniting under a banner with territory and borders.
[There are human borders which make absolutely no logical sense](https://nl.wikipedia.org/wiki/Baarle-Nassau#/media/File:Baarle-Nassau_-_Baarle-Hertog-nl.svg) - I don't see those wolves starting to accept enclaves in their territory anytime soon...
Not really at all like modern human borders but go off fam
Oh nice. I was expecting Zelda.
[removed]
<3 you too
Because every person shares the same culture and morals.
We're talking about reality, not idealism. Disagreeing _with_ the way the world works is not the same as disagreeing _about_ the way the world works.
But every person within the same borders share the same culture and morals?
Thinking that nation-states are a natural human inclination and comparable to a fuckin' tribe is hilariously dumb.
###Markdown
Scraping Reddit Data  Using PRAW, a Python wrapper for the Reddit API, anyone can easily scrape data from Reddit or even create a Reddit bot.
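If PRAW is not installed yet, it can be installed from PyPI first:
###Code
!pip install praw --quiet
###Output
_____no_output_____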
###Code
import praw
###Output
_____no_output_____
###Markdown
Before PRAW can be used to scrape data, we need to authenticate ourselves. For this we create a Reddit instance and provide it with a `client_id`, a `client_secret` and a `user_agent`. To create a Reddit application and get your id and secret, navigate to [this page](https://www.reddit.com/prefs/apps).
###Code
# authenticate with the credentials of your Reddit app
reddit = praw.Reddit(client_id='my_client_id',
                     client_secret='my_client_secret',
                     user_agent='my_user_agent')
###Output
_____no_output_____
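###Markdown
Providing only a `client_id`, `client_secret` and `user_agent` gives us a read-only instance, which is all we need for scraping. A minimal sanity check (assuming the credentials above are valid):
###Code
# a read-only instance can fetch public data but cannot vote, post or comment
print(reddit.read_only)
###Output
_____no_output_____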
###Markdown
We can get posts from, and information about, a specific subreddit by using the `reddit.subreddit` method and passing it the subreddit's name.
###Code
# get 10 hot posts from the MachineLearning subreddit
hot_posts = reddit.subreddit('MachineLearning').hot(limit=10)
###Output
_____no_output_____
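###Markdown
Besides `hot`, PRAW exposes the other Reddit sort orders through methods of the same name. A short sketch (`top_posts` and `new_posts` are just illustrative variable names):
###Code
# top posts of the past week and the newest posts, analogous to the hot listing above
top_posts = reddit.subreddit('MachineLearning').top(time_filter='week', limit=10)
new_posts = reddit.subreddit('MachineLearning').new(limit=10)
###Output
_____no_output_____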
###Markdown
Now that we have scraped 10 posts, we can loop over them and print some of their information.
###Code
for post in hot_posts:
    print(post.title)

# get hot posts from all subreddits
hot_posts = reddit.subreddit('all').hot(limit=10)
for post in hot_posts:
    print(post.title)
# get MachineLearning subreddit data
ml_subreddit = reddit.subreddit('MachineLearning')
print(ml_subreddit.description)
###Output
**[Rules For Posts](https://www.reddit.com/r/MachineLearning/about/rules/)**
--------
+[Research](https://www.reddit.com/r/MachineLearning/search?sort=new&restrict_sr=on&q=flair%3AResearch)
--------
+[Discussion](https://www.reddit.com/r/MachineLearning/search?sort=new&restrict_sr=on&q=flair%3ADiscussion)
--------
+[Project](https://www.reddit.com/r/MachineLearning/search?sort=new&restrict_sr=on&q=flair%3AProject)
--------
+[News](https://www.reddit.com/r/MachineLearning/search?sort=new&restrict_sr=on&q=flair%3ANews)
--------
***[@slashML on Twitter](https://twitter.com/slashML)***
--------
**Beginners:**
--------
Please have a look at [our FAQ and Link-Collection](http://www.reddit.com/r/MachineLearning/wiki/index)
[Metacademy](http://www.metacademy.org) is a great resource which compiles lesson plans on popular machine learning topics.
For Beginner questions please try /r/LearnMachineLearning , /r/MLQuestions or http://stackoverflow.com/
For career related questions, visit /r/cscareerquestions/
--------
[Advanced Courses](https://www.reddit.com/r/MachineLearning/comments/51qhc8/phdlevel_courses?st=isz2lqdk&sh=56c58cd6)
--------
**AMAs:**
[Libratus Poker AI Team (12/18/2017)]
(https://www.reddit.com/r/MachineLearning/comments/7jn12v/ama_we_are_noam_brown_and_professor_tuomas/)
[DeepMind AlphaGo Team (10/19/2017)](https://www.reddit.com/r/MachineLearning/comments/76xjb5/ama_we_are_david_silver_and_julian_schrittwieser/)
[Google Brain Team (9/17/2017)](https://www.reddit.com/r/MachineLearning/comments/6z51xb/we_are_the_google_brain_team_wed_love_to_answer/)
[Google Brain Team (8/11/2016)]
(https://www.reddit.com/r/MachineLearning/comments/4w6tsv/ama_we_are_the_google_brain_team_wed_love_to/)
[The MalariaSpot Team (2/6/2016)](https://www.reddit.com/r/MachineLearning/comments/4m7ci1/ama_the_malariaspot_team/)
[OpenAI Research Team (1/9/2016)](http://www.reddit.com/r/MachineLearning/comments/404r9m/ama_the_openai_research_team/)
[Nando de Freitas (12/26/2015)](http://www.reddit.com/r/MachineLearning/comments/3y4zai/ama_nando_de_freitas/)
[Andrew Ng and Adam Coates (4/15/2015)](http://www.reddit.com/r/MachineLearning/comments/32ihpe/ama_andrew_ng_and_adam_coates/)
[Jürgen Schmidhuber (3/4/2015)](http://www.reddit.com/r/MachineLearning/comments/2xcyrl/i_am_j%C3%BCrgen_schmidhuber_ama/)
[Geoffrey Hinton (11/10/2014)]
(http://www.reddit.com/r/MachineLearning/comments/2lmo0l/ama_geoffrey_hinton/)
[Michael Jordan (9/10/2014)](http://www.reddit.com/r/MachineLearning/comments/2fxi6v/ama_michael_i_jordan/)
[Yann LeCun (5/15/2014)](http://www.reddit.com/r/MachineLearning/comments/25lnbt/ama_yann_lecun/)
[Yoshua Bengio (2/27/2014)](http://www.reddit.com/r/MachineLearning/comments/1ysry1/ama_yoshua_bengio/)
--------
Related Subreddit :
* [LearnMachineLearning](http://www.reddit.com/r/LearnMachineLearning)
* [Statistics](http://www.reddit.com/r/statistics)
* [Computer Vision](http://www.reddit.com/r/computervision)
* [Compressive Sensing](http://www.reddit.com/r/CompressiveSensing/)
* [NLP] (http://www.reddit.com/r/LanguageTechnology)
* [ML Questions] (http://www.reddit.com/r/MLQuestions)
* /r/MLjobs and /r/BigDataJobs
* /r/datacleaning
* /r/DataScience
* /r/scientificresearch
* /r/artificial
###Markdown
Because we only get a limited number of API requests per day, it is a good idea to save the scraped data to a variable or a file.
###Code
import pandas as pd
posts = []
ml_subreddit = reddit.subreddit('MachineLearning')
for post in ml_subreddit.hot(limit=10):
    posts.append([post.title, post.score, post.id, post.subreddit, post.url, post.num_comments, post.selftext, post.created])
posts = pd.DataFrame(posts, columns=['title', 'score', 'id', 'subreddit', 'url', 'num_comments', 'body', 'created'])
posts
posts.to_csv('top_ml_subreddit_posts.csv')
###Output
_____no_output_____
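###Markdown
On a later run we can reload the saved file with pandas instead of spending API requests on data we already have (a sketch; `top_ml_subreddit_posts.csv` is the file written above):
###Code
# reload the cached posts from disk; index_col=0 restores the DataFrame index
posts = pd.read_csv('top_ml_subreddit_posts.csv', index_col=0)
###Output
_____no_output_____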
###Markdown
PRAW also allows us to get information about a specific post/submission.
###Code
submission = reddit.submission(url="https://www.reddit.com/r/MapPorn/comments/a3p0uq/an_image_of_gps_tracking_of_multiple_wolves_in/")
# or, equivalently, by id (the id is the part of the URL after "comments/")
submission = reddit.submission(id="a3p0uq")
for top_level_comment in submission.comments:
    print(top_level_comment.body)
###Output
Source: [https://www.facebook.com/VoyageursWolfProject/](https://www.facebook.com/VoyageursWolfProject/)
I thought this was a shit post made in paint before I read the title
Wow, that’s very cool. To think how keen their senses must be to recognize and avoid each other and their territories. Plus, I like to think that there’s one from the white colored clan who just goes way into the other territories because, well, he’s a badass.
That’s really cool. The edges are surprisingly defined.
White wolf is a dick constantly trespassing other's territories.
[Link to Story](https://www.duluthnewstribune.com/news/science-and-nature/4538836-voyageurs-national-park-wolves-eating-beaver-and-blueberries-not)
Cool to imagine that there are similar zones surrounding all these, we just didn't tag those wolves.
You know the white wolf fucked some red's bitch for sure.
It’s wild how they are all roughly the same size.
This what i am gonna show people when they ask for a photo of a sixpack
That's actually awesome.
[deleted]
White Wolf pack is looking for fight
/r/dataisbeautiful
But actually beautiful, not "here's a graph of my heart rate when I went on a date." This is actually gorgeous, informative, and awesome.
I want to know WTF lives here that the wolf keeps avoiding?https://i.imgur.com/T7NrS7F.jpg
I want more data!!! Is the white pack made up of many aggressive wolves so they spread to other territories periodically? Or is it just the one wolf who doesn’t give as much of a fuck? Does a tighter cluster mean a smaller pack or just more territorial? What is the age, gender, and type of wolves that are being tracked?! So many questions, so little information.
/r/misleadingthumbnails minimap of the grand final of the 3v3 Age of Empires 2 tournament
The white pack is drawing a wolf face.
White one tried to be naughty a little by sneaking into red zone just a little bit.
Its amazing how they all seem to be similar in area size.
- [/r/dataisbeautiful] [I find this extremely interesting](https://www.reddit.com/r/dataisbeautiful/comments/a3qdiv/i_find_this_extremely_interesting/)
- [/r/dataisbeautiful] [“An image of GPS tracking of multiple wolves in six different packs around Voyageurs National Park shows how much the wolf packs avoid each other's range. Image courtesy of Thomas Gable”](https://www.reddit.com/r/dataisbeautiful/comments/a3v1dd/an_image_of_gps_tracking_of_multiple_wolves_in/)
- [/r/the_pack] [“AN IMAGE OF GPS TRACKING OF MULTIPLE WOLVES IN SIX DIFFERENT PACKS AROUND VOYAGEURS NATIONAL PARK SHOWS HOW MUCH THE WOLF PACKS AVOID EACH OTHER'S RANGE. IMAGE COURTESY OF THOMAS GABLE”](https://www.reddit.com/r/THE_PACK/comments/a3r1sr/an_image_of_gps_tracking_of_multiple_wolves_in/)
- [/r/unpanderers] [“An image of GPS tracking of multiple wolves in six different packs around Voyageurs National Park shows how much the wolf packs avoid each other's range. Image courtesy of Thomas Gable”](https://www.reddit.com/r/UnPanderers/comments/a3vg63/an_image_of_gps_tracking_of_multiple_wolves_in/)
*^(If you follow any of the above links, please respect the rules of reddit and don't vote in the other threads.) ^\([Info](/r/TotesMessenger) ^/ ^[Contact](/message/compose?to=/r/TotesMessenger))*
They dont need walls to know where their territory ends.
r/wolves
Interesting!
\#Respect
Seeing this, I'm reminded of the film The Warriors. Got to get back to Coney boys, we're on our own.
Are the longer, straight lines just glitches in the gps?
I’ve never heard of an LGBT Black Metal before
I like the one adventurous grey wolf who snuck deep into red territory and then beelines back home. I imagine a Romeo and Juliet-esque scenario between him and a red wolf Capulet.
Gang territory
Do these packs exchange members to promote cross breeding?
Got a bold white wolf. One just straight checking red's whole edge there. Unless that's a border of some kind in a similar color.
How do you get into wolf pack? Do you have to be born in it or can you maybe change clan down the line?
What I want to know is why some of those lines are so perfectly straight!
It sort of looks like a wolf head, too, in profile at least if you squint hard enough. Purple is the ear, white and blue the mouth, green has an eye carved out in the middle of it, and red's the neck.
Wolf countries
blue pack rules!
Very cool. Thanks for sharing OP.
The Six Kingdoms.
Green packed is either going to start a war or die off slowly, they're cornered with no room for expansion.
Source?
That white wolf is going places
White pack lowkey scouting into others' areas though
Yellow is in a bad spot of war breaks out
I honestly thought this was a shitty map of the Old World.
White is clearly insanity wolf.
Thank you Thomas, very cool!
It almost looks like a giant multicolored wolf head.
I thought this was some guy drawing the borders of Skyrim.
Red wolves have cardio for days
It looks like a member of the white pack has no problem mingling in blue territory. Like some sort of unaccompanied wolf
The white pack love challenges.
The unseen maps of animals
I believe this is called the competitive exclusion principle. Species that compete, include animals of the same species tend to show these characteristics when living in the same proximity.
Wolves: Mind your business, we'll mind ours.
Humans: Let's fuck some people/places/things up.
0/6 gang hideouts discovered
Just out of curiosity what's the size of each territory?
Meanwhile, gold wolf pack is solo on its own island paradise at the top
Even wolves are scared of wolves.
Roughly looks like a map of the world, esp the right side
There's 1 white wolf who don't give a shit. See the white line on the right
Chad white wolves don't care about your "boundaries"
That's a lot of pee.
I'm curious about how a wolf decides to venture within their territory.
Pink wolf, Blue wolf, and White wolf have the widest spans of territory, but Red wolf, Yellow wolf, and Green wolf are more comprehensive about where they go in their own territory.
The red wolf appears to be on meth.
I'd like to see this crossreferenced with the distance in which a wolf could smell or hear or otherwise detect fellow wolves!
Me when I see people I graduated with at Walmart.
TIL wolves are bad at MSpaint
The white pack doesn’t give a fuck
It's like scandinavian people waiting for the bus.
But I would think there needs to be some interaction so that they don't interbreed in order to keep the gene pools healthy.
Basically like gangs, and the gangsters do tend to display animal-like behaviors. Build the wall
#openborders
The white line is the trader
White just be like "they see me Rollin"
The white bastards would take some liberties like that. Typical.
The white wolf clan also appears to be knowledgeable of his GPS tracker and has drawn a modern art version of a white wolfs face.
###Markdown
This will work for some submissions, but for others that have more comments this code will throw an ``AttributeError`` saying: ``AttributeError: 'MoreComments' object has no attribute 'body'``. These ``MoreComments`` objects represent the "load more comments" and "continue this thread" links encountered on the website, as described in more detail in the comment documentation. To get rid of the ``MoreComments`` objects, we can check the type of each comment before printing the body.
###Code
from praw.models import MoreComments

for top_level_comment in submission.comments:
    # Skip the "load more comments" / "continue this thread" placeholders,
    # which are MoreComments instances and have no body attribute.
    if isinstance(top_level_comment, MoreComments):
        continue
    print(top_level_comment.body)
###Output
Source: [https://www.facebook.com/VoyageursWolfProject/](https://www.facebook.com/VoyageursWolfProject/)
I thought this was a shit post made in paint before I read the title
...
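###Markdown
Before deciding how to handle the placeholders, it can be useful to see how many of them a submission actually carries. The cell below is a minimal sketch of that check, reusing the same ``submission`` object as the cells above; the ``placeholders`` name is ours, not part of PRAW.
###Code
from praw.models import MoreComments

# Collect the "load more comments" placeholders among the top-level comments.
placeholders = [c for c in submission.comments
                if isinstance(c, MoreComments)]
print(f"{len(placeholders)} MoreComments placeholder(s) at the top level")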
###Markdown
The cell below shows another way of getting rid of the ``MoreComments`` objects: calling ``replace_more`` with ``limit=0`` removes every placeholder from the comment forest up front, so the loop no longer needs a type check.
###Code
# With limit=0, no placeholders are fetched; every MoreComments
# instance is simply removed from the comment forest.
submission.comments.replace_more(limit=0)
for top_level_comment in submission.comments:
    print(top_level_comment.body)
###Output
Source: [https://www.facebook.com/VoyageursWolfProject/](https://www.facebook.com/VoyageursWolfProject/)
I thought this was a shit post made in paint before I read the title
...
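###Markdown
``replace_more`` also returns the ``MoreComments`` instances it did not replace, which makes it easy to confirm how many placeholders were dropped. A minimal sketch along those lines, again reusing the same ``submission``; the ``skipped`` name is ours.
###Code
# With limit=0 nothing is fetched: every placeholder is removed from the
# forest, and the removed instances are handed back in the returned list.
skipped = submission.comments.replace_more(limit=0)
print(f"Removed {len(skipped)} unresolved MoreComments placeholder(s)")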
###Markdown
The above code blocks only got the top-level comments. If we want the complete ``CommentForest``, including the replies to every comment, we need to use the ``.list()`` method. Passing ``limit=None`` to ``replace_more`` first makes sure every placeholder is resolved before the forest is flattened.
###Code
# With limit=None, PRAW keeps making requests until every MoreComments
# placeholder has been replaced by the comments it stands for.
submission.comments.replace_more(limit=None)
# .list() flattens the forest into a single list containing the
# top-level comments and all of their replies.
for comment in submission.comments.list():
    print(comment.body)
###Output
Source: [https://www.facebook.com/VoyageursWolfProject/](https://www.facebook.com/VoyageursWolfProject/)
I thought this was a shit post made in paint before I read the title
...
It might still be, just with a believable shitpost title included
It IS funny that the scientists used the MS Paint palette.
Considering how wolves mark their territory it might actually be a piss post.
Red is the Australia of Wolf Risk.
Well these wolves do use shit posts to mark their territories, so is a quality post that depends on shitposts.
I thought it was a map of something moving over the U.S. over time...
Wolf 1: "Damn it, Frederick, you can't go into other packs' territory like that! You'll start a wolf war!"
Wolf 2 (alpha af): *lights cigarette, drags a long puff, then flicks it onto the gas can next to a red wolf pissed-on tree. The tree explodes into a fireball.* "I'm counting on it."
It's less about keen senses, and more about copious amounts of piss.
HE IS...THE WHITE WOLF. DAKINGINDANORF
On the top half I think the white line is just a border - it's fatter and in straight lines.
Is this a visual result of dogs peeing on trees?
Because some wolves aren't looking for anything logical, like prey. They won't listen to huffs, barks, growls and howls. Some wolves just want to watch the world burn.
You might even call it...a lone wolf.
I bet a strong smell of wolf piss clearly marks the territory borders...
Those senses are why dogs don't belong in wilderness areas. They like to pee and poop to leave scent marks just like wolves do to mark their territory, and they especially like to do it in a place where they smell something interesting. When they do that, they cover the territorial markers of other animals. It's like going for a walk along the border of North and South Korea and kicking over all of the border markers.
I'm pretty sure that's just Moon Moon getting lost.
Yeah...canines are incredible and smelling and differentiating urine. Other than everyone, who knew?
The WhiteFang clan is aggressive as always, always picking fights with the BloodMoons.
Or could be downs
Well, white things have a long history of claiming territory that doesn't belong to them.
> there’s one from the white colored clan who just goes way into the other territories because, well, he’s a badass.
I'm thinking it's more for reasons of "genetic diversity", if ya know what I mean.
if there's one thing my dog understands it is boundaries
> I like to think that there’s one from the white colored clan who just goes way into the other territories because, well, he’s a badass.
Or has a really bad sense of smell. Maybe a cold.
Strategy: piss on EVERYTHING
They are very good at smelling pee and very good at peeing everywhere.
I mean, they literally piss and shit all over everything on purpose, don't really need keen senses.
They piss on every stone to mark boundaries. They have a great sense of smell. I expect it's easy to avoid the others territory.
It really is.
I saw that guy too. Gives no fucks
Just like how police put up yellow tape around a crime scene and you know to not walk in, wolves leave a yellow tape of pee.
Seriously, I think you’re right. I think there might be an alpha that’s more alpha than the others of the other packs. Basically like the Hulk, don’t fuck with him and he won’t fuck with you. Fuck with him and you’re done.
White is going for the diplomatic victory.
Hes just lookin for the ladies
Looks like the white colored pack has one wolf that just doesn’t give a fuck and goes all the way east
He’s been writing love letters to the orange one and finally had enough of his family and his life and decided to go on an adventure to find his true love in the north.
Because whites invade colors. Ya, we get it.
White and yellow!
I think it might have to do with the reds being pushovers.
It looks like they have less territory and are more cramped and white and yellow venture into it.
Maybe he's just super popular and the other wolves have accepted him into their clique.
Monkeys will go on solo missions into enemy territory to mate.
I wonder if that's what this Lone Wolf is doing.
White wolf is a pokemon go champ
Yeah, it seems to demonstrate that territory boundaries, like human countries, aren't just a construct of our own intelligence, but rather a more innate behaviour of social predators in general.
Right? I expected a lot more overlap on every border.
The "straight" lines inside the borders are what interest me. Are those "Game Trails"?
Wouldn't be surprised if that pack is in an area of the forest with less food so they are forced to hunt in others' territory at times
I'm *pretty* sure the thick white line represents the national border between US and Canada (national park is near the NE edge of Minnesota)...
It's Moon Moon
The white wolves invading the red wolves land... who would have guessed that would happen?
Maybe he just has wares to sell to the other packs.
White wolf: Hi! Want to be my friend?
Typical white wolf privilege.
I was gonna say white wolf gives no fucks
The White Wolf is named Geralt, and he's just an adventurer
He's a migrant worker wolf
It's called white privilege.
*Lone Wolf...
But he respects yellow wolf
Conversely, it seems like the red wolves are very polite
Fucking white privileged wolves
Maybe he is a diplomat or is juvenile looking for a mate.
White wolf has a kick ass name too. It's the moonshadow pack. No wonder they impose all over the place...
Typical Whitey.
The master wolf race, the white wolf.
Red wolf has cardio for days
One man wolf pack.
>wolves eating beaver
( ͡° ͜ʖ ͡°)
Veryinteresting
Oh man, that video has some excellent wildlife footage.
The thought of a wolf snarling at me with blueberry juice all in its mouth sounds horrifying
Thanks for posting!
PSA: open in incognito to bypass the "answer survey to read more" bullshit
That dude went all the way thru purple, look at the upper right
Hey don't call Triss Merigold like that!
>I fucked your bitch, you wolf motherfucker!
Poor green wolves have the smallest territory :(
I wanted to know how large the ranges are, so I compared them with Google Maps satellite images. They're roughly 10-15 km across each, if anyone else was wondering.
Like dicks
Do you get asked to show a picture of a six pack often?
Seven, there is a ~~king~~ wolf ~~in the north~~ across the river
More likely tail.
it would be beautiful to display area as discrete points sized by frequency of occupation. the lines crossing each other over and over again destroys interesting (and meaningful) information.
I wondered the same thing, so I found the location on Google Maps and....nothing. It looks the same as all the territory around it. It's near a highway, but that highway passes straight through their territory and doesnt affect the wolves' movements anywhere else.
humans would be my guess
Or the Phantom of the Opera
Man San Andreas was hard enough WITH cheats on.
Not a fan of diversity?
That’s... insane. I saw this post several places today and was curious about the scale, but I figured we were talking <5 sq miles each, not >25...
>larger than San Marino
What is San Marino and, out of curiosity, why would you choose that as a reference?
Holy shit. That was crazy. Fuck those cannibal chimps. I hope the escapees get their homeboys and retaliate.
Thanks for the link! Good stuff!
White Wolf is defs ShadowClan
Go Team White
[deleted]
This is some straight up "Warriors" shit, but with wolves.
Don't leave us hanging
Which people would those be? I've tried to come up with non-racist translation, but I'm failing.
Borders aren't racist. Packs aren't race based afaik either?
Lmao that’s the Canadian-U.S. border you’re trying to point out.
Now that would be next level trolling.
It seems to check out. [Facebook page with a lot more information here.](https://www.facebook.com/VoyageursWolfProject)
Ummm animals don’t travel in perfectly straight lines over long distances... well maybe birds but not wolves. Is it maybe pinging their location every x amount of time and connecting the dots??
If not, it’s pretty impressive how far some wolves go in a perfectly straight line.
Underrated comment
Low risk, but little prey. +2. Yellow is Europe. Abundance of prey, surrounded by hostile packs. +5
get this man in front of an executive producer...this...instant!
FWIW, I heard that the study that coined the term 'alpha dogs' was found to be a bit wrong when that pack was revisited. If memory serves me well, they found out that the "alphas" turned out to be the parents of the other wolves/dogs. So what we think of as "alpha" behavior is just parenting.
Hi! Jim, from Netflix. You're greenlit for 3 seasons! We look forward to seeing the pilot of "Wolf War" very soon!
It might be a non-alpha female trying to get pregnant (which they are not supposed to do she would be punished for doing this) but the mating drive can be quite strong for them I guess sometimes. Her own alpha male wouldn't mate with her.
Either that, or he's on drugs. My money is on drugs.
Game of Dens
!ThesaurizeThis
> *a red wolf pissed-on tree.*
beautiful. :p
I imagine him being in a Romeo Juliet scenario. Sneaking off to get some forbidden tail.
Wolfare
>Damn it, Frederick,
Frederick: We're werewolves, not swear wolves.
Oh Summer... first wolf war huh
blachsheep 2: blackwolf
!Thesaurizethis !DoTheFandango
https://www.google.com/url?sa=t&source=web&rct=j&url=https://m.youtube.com/watch%3Fv%3DJw0c9z8EllE&ved=2ahUKEwiQ29Tm_4vfAhVJzFQKHQlNCZsQyCkwAHoECAsQBA&usg=AOvVaw2SDijitxjRxb6h5Su393wd
I’d like to see this movie made in the style of A Dog’s Way Home, all happy on the surface, but twisted and dark underneath.
Why does wolf 1 give wolf 2 a name but Frederick is still called wolf 2 after lol
r/prequelmemes talking to each other
Plot twist: Frederick is played by Liam Neeson. No one knows it, but he was once a man, now reincarnated as a wolf. He’s seeking vengeance with a fury that few of the other wolves can even comprehend.
The Grey 2: Wolf War is gonna be AWESOME!
It’s survivors all over again
Played by Willem Defoe
In the styling of Fantastic Mr. Fox
you forgot to narrow his eye lids
Some wolves just want to watch the world burn
Some wolves just want to watch the world burn
They don't use their senses to detect the piss?
You can see it at boundary waters canoe area in Minnesota. Clearly demarcated with pee in the snow and on trees.
The White Wolf has rested long enough.
KINGINDANORF!!
KINGINDANORF
Maybe it’s a few fat white wolves known to most folks as Border Patrol.
Yeah that's the Canadian border
upvoted so more people can DISCOVER THE WOLF TRUTH
What do you expect? They killed his dog...
That white wolf, he just wants to watch the humans...turn.
You bet your ass my dog now owns this whole national forest I'd like to see yours try to take it from him! Oh wait never mind he peed on it it's his now...
And yet when I do this to claim my favorite booth at Applebee's I'm "scaring the children" and deserve a "drunk and disorderly charge."
Not nice!!
And the Kashmir border dispute is due to a lack of urine.
Idk man, don't you think that's a bit of an overstatement? I think its a pretty big leap to connect hunting territories of Wolves to human countries.
huh?
natural mountain ranges, rivers, and coast lines play a huge part in territory boundaries
So explain the similar size of each territory. We could also point out that, unlike humans, they don’t seek to expand territory, but rather only “have” territory to keep an even distribution of available food/water
Oh of course, boggles my mind that some people think otherwise.
Talk about jumping to conclusions...
It’s pee. All pee.
Nice observation! It's an effect of the GPS collar "fix rate.". They vary according to collar and research needs (older collars - lower fix rates generally) . Fix rates between 15 minutes and 15 hours are common in habitat studies, so depending on the behavior of the animal, you can get real long lines connecting the sample locations.
If you looks closely the white wolf paths outside of the normal "area" typically follows the shoreline of a body of water. So, I think you are right.
They are the only pack whose territory is not adjacent to a water source.
Maybe they are actually the Kings of the wolves and they go to others' lands to collect tribute.
You're thinking of the black pack.
100%, I'm no wolf expert but I don't think they're known for walking in straight lines and turning at 90 degree angles.
Maybe it’s a few fat white wolves who are known to most folks as Border Patrol.
That's an unfortunate color choice....
The norther one in the top right corner is the national border (and park boundary, which coincides), the southern white line, which everybody is joking about is just the park boundary. Three of the packs reside entirely outside the national park.
Who invited moon moon?
Lissen chat.
White wolf imperialists 😤
Apparently they eat a lot of beaver
Right?! I don’t know that much about wolves but I thought the distinct boundaries were fascinating! I thought of this sub immediately :)
I think that might be a county line
That's the MN border, not the white wolf line (same for the thick line in the teal and green)
But the green territory is right up along that waterline to the north, it's probably very bountiful hunting grounds.
In fact, that might be why they have less space - each pack takes as much space as it needs, and they need less
##teampink
Poor Orange Wolves to the north are so sparse. I hope the Green Wolves don't invade their land.
Looks like there’s a smaller orangeish one, cuts off though
Fun fact: A housecat's "territory", meaning the area in which they range when outside, is usually about a mile in diameter.
UNLESS they're fixed, when it drops down to less than half that.
...are dicks roughly all the same size?
Well now that he has it, someone better ask
Is that a separate pack or did one crazy yellow wolf cross over?
⠰⡿⠿⠛⠛⠻⠿⣷
⠀⠀⠀⠀⠀⠀⣀⣄⡀⠀⠀⠀⠀⢀⣀⣀⣤⣄⣀⡀
⠀⠀⠀⠀⠀⢸⣿⣿⣷⠀⠀⠀⠀⠛⠛⣿⣿⣿⡛⠿⠷
⠀⠀⠀⠀⠀⠘⠿⠿⠋⠀⠀⠀⠀⠀⠀⣿⣿⣿⠇
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠉⠁
⠀⠀⠀⠀⣿⣷⣄⠀⢶⣶⣷⣶⣶⣤⣀
⠀⠀⠀⠀⣿⣿⣿⠀⠀⠀⠀⠀⠈⠙⠻⠗
⠀⠀⠀⣰⣿⣿⣿⠀⠀⠀⠀⢀⣀⣠⣤⣴⣶⡄
⠀⣠⣾⣿⣿⣿⣥⣶⣶⣿⣿⣿⣿⣿⠿⠿⠛⠃
⢰⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡄
⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡁
⠈⢿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠁
⠀⠀⠛⢿⣿⣿⣿⣿⣿⣿⡿⠟
⠀⠀⠀⠀⠀⠉⠉⠉
I think it eliminates some of the information (as you say), but it gets a different picture across. No visualization can do everything, and this seems to still give a lot of useful information for many purposes.
Definitely a Balrog pit.
I believe diversity is an old, old wooden ship..used in the civil war era.
I went to a list of countries by area and it was the one closest to 25 sq. miles.
[San Marino](https://en.wikipedia.org/wiki/San_Marino) is one of the smallest countries in the world and is completely surrounded by Italy, similar to the Vatican City.
>Borders aren't racist.
bold statement on reddit
It's a joke
Guess we'll never know.
Anyone know how to make Quiche?
Easy karma.. Just open ms paint, go crazy with lines, then make up a title.
r/mspaintisbeautiful
These GPS tracker don’t record continuously to increase the efficiency of their battery. This is basically a set of data points being connected with lines.
EP: "I'd like to produce your wolf movie"
/u/mcjunker: *lights cigarette, drags a long puff, then flicks it onto the gas can he brought and placed next to a desk. The desk explodes into a fireball.* "I'm counting on it."
​
Now is no time to CRY WOLF! This summer, Tom Cruise is A WOLF IN DEATHS CLOTHING.
[Done](https://i.etsystatic.com/14448759/r/il/34afc0/1341759833/il_570xN.1341759833_oi7i.jpg)
Are we blind?! Deploy the executive producers!
Where's Ryan Reynolds when you need him?
So the Chads are actually the Dads?
The research that underlay that old pack theory was done on a bunch of unrelated wolves in captivity. It's as if human sociology were based entirely on a single prison study. Later field studies revealed that, just as you said, wolf packs are families, and the "alpha pair" is just mom and dad.
My understanding was that the issue with the original study was that it was done with wolves in captivity, with no relation to each other, which didn't apply to wolves in the wild. And, as you say, to the extent that there are heirarchies in wild packs, it is on account of family structure.
Sooo... is that why the dad bod fad is happening?
Hey TIL this as well! Loved it.
Yeah more specifically, "alpha" behavior only presents itself among wolves/dogs in captivity or otherwise domesticated. Like the author of Jaws, the author of this study always regretted publishing his findings due to the volume of misinformation that's spread since
"Identity theft is not a joke Jim! Millions of families suffer each year!... Michael!"
yeah, because her own alpha male would be her dad. she or he might just be running around trying to get their teen wolf freak on
He's not a coke head, he just really likes the smell.
For fucks sake, Moon Moon, get your act together
Classicist 1: "Bloody it, Town, you can't go into else lades' geographic region like that! You'll leave a Hugo Wolf war!"
Wolf 2 (important af): *returns coffin nail, fall backs a long-term puff of air, then twinkles it onto the fossil fuel can adjacent to a cherry assaulter pissed-on run. The Sir Herbert Beerbohm Tree irrupts into a meteor.* "I'm count on it."
***
^(This is a bot. I try my best, but my best is 80% mediocrity 20% hilarity. Created by OrionSuperman. Check out my best work at /r/ThesaurizeThis)
Eyelid. Obviously, he's rocking an eyepatch.
Otherwise, good correction.
Oh, they use their senses, I'm just saying that given the amounts of pee involved, their sense of smell probably doesn't have to be so keen.
But that could just be drunk Viking fans. They’ll pee on anything.
Maybe it's a bunch of furries.
**AWOOOOOOO**OOOooo
I'm pretty sure there's more than enough on the Indian side
/r/brandnewsentence
[deleted]
Human borders have had a real impact on animal lives. The Iron Curtain in particular resulted in a long area that wildlife could flourish in.
A study found that deer in the Harz mountains in central Germany still avoid crossing the old East-West border; which as well as landmines had a rather nasty spring-mounted mine mounted to the final fence before the West that would kill deer from 120 metres away.
Also, since the Schengen Area has eliminated border controls from Poland to Germany, wolves have been turning up in the latter.
How? These are literally Wolf-states...
It's not a leap to say that social animals form social groups. And groupings have boundaries, which is what makes them groups in the first place. You can observe this in everything from mice to chimps.
Well, som borders are wholly arbitrary constructs. Pretty much every US state border, for example.
There are people who think that human society and culture just appeared one day.
Lol that's a border
You're like the autistic sherlock holmes.
Noticed this too less water means less prey
Wolf OCD is a really understudied problem
You’re definitely right that it’s the border BUT you can see examples on this map of the white wolf walking a straight line and then turning at a 90 degree angle in the bottom left. So I wouldn’t use that as an argument point.
To assure battery life, the GPS tracker may only broadcast its location every few minutes. When you connect the dots you get straight lines.
Edit: according to the [Article](https://www.duluthnewstribune.com/news/science-and-nature/4538836-voyageurs-national-park-wolves-eating-beaver-and-blueberries-not) they take 72 points per day.
Fucking moon moon
[Woof](https://youtu.be/dhh0yqPT0zY)
Did not know that wolves eat blueberries. Didn’t even know they could since foods like grapes are poisonous
r/Dataisbeautiful might like this too
Interstate wolf, man. He can't be stopped!
Dang you are right, that about had my jaw dropping thinking of the implications. He was all over blues territory though.
That's what she said.
I'll show myself out.
"Grove Street is king! Say it with me niggas, Grove Street is KING! Yeah!"
That entire area has a relatively thin population so the green ones are just getting shafted.
This is exactly what I was thinking. Who needs the space when you’re in such a prime location.
Unsubscribe from cat facts
Having an outdoor housecat that is unaltered is a poor, irresponsible decision in regards to helping reduce feral cat populations and having an alive pet.
Typically. Depends on the gender though.
Hey /u/joppiejoo can you show me a picture of your sixpack?
a different pack
Oh, I didn't know that. thanks!
pardon?
/r/outoftheloop
r/gifrecipes
Red wolf pack leader: *on stump, addressing feral red wolves dressed in vaguely Nazi-ish uniforms* "Brothers! The forest is ours! The inferior wolves shall be destroyed once and for all. And this shall be a world Red in tooth and claw!"
Red wolves: *snarl militantly*
Tom Cruise (dressed as a white wolf with an eyepatch and a bandana): *flies attack helicopter over the parade field, drowning out the red wolves' chants. The missile systems auto lock on the Red wolf pack leader* "I don't think so."
Red wolf pack leader: "Impossible! Wolves can't operate advanced human technology!"
Tom Cruise: "Once you pay $10,000 to reach level 10 in Scientology and fully clear your system of thetans, anything is possible." *launches missile. The red wolf leader screams as he erupts into a fireball the size of a skyscraper.*
*DIRECTED BY MICHAEL BAY*
Directed by Michael Bay, with executive producer John Woo.
r/redditwritesamovie
This is actually a rather obscure horror film from 2003 called *Cry\_Wolf* with Lindy "Canada's most sexily evil redhead" Booth in it.
And the way it feels inside his nose!
Hmmm...
Hmm, I kinda wanna disagree with you there. I think it would require some level of *keen-ness* to be able to tell apart your own pact's piss and other pact's piss. After all, belonging in a different pact wouldn't impact your smell that much, since they are literally in the same geological location and it's also quite likely that they share a small gene pool and thus there wouldn't be much difference (or so we think) in the smell.
Also if there are copious amounts of piss from numerous different wolves in the forest, than it would require even more keen senses to be able to draw useful information from such a mess.
It's true, piss smells pretty strongly. I'm a scientist.
Exactly how much pee are we talking here?
Just like you can tell Buffalo Bills territory by the heaps of broken tables. Or Eagles territory by the discarded D cell batteries.
And now you've started a religious war. You proud of yourself?
I think this is certainly convincing evidence for the presence of territoriality among pack animals, including humans, but I'd argue it's different to equate that directly to national boundaries. National boundaries aren't based in the same intuitive, biologically-palpable markers, and can be quite defined quite arbitrarily. Will humans innately know they've crossed into another country if there were no signs, border posts or other markers to indicate it? It can be difficult even to know when we've crossed a county line, or even across private property lines. Signs and border posts are the human equivalents of scent markings, of course - but then if there are no signs, would we even notice the border?
So I think we can say that we have an innate tendency to be territorial, but the exact scale and nature of those territorial boundaries are extremely flexible for us. We're not reliant on physical scent marking, but on highly abstract social processes. The countries we have today would not at all have been intuitive or sensible to humans living 3000 years ago - the idea that communities could exist on such a scale would seem ludicrous. Hell, people in the 13 Colonies did not at all think of each other as living within the same community just because they lived within the same federal country line. But generations pass, narratives are created, and presto, the 13 Colonies become the USA, not just administratively, but intuitively and socially. So it's something that ultimately can be immensely flexible.
Wolves in a pack know each other very well, you haven't even met 99.99% of the people living in your country...
IMO, comparing the way animals tend to be territorial with the very recent invention of national borders is dubious. It suggests that borders are natural, human nature, ect, when the reality is Humans have lived without boarders far longer than they've lived with them.
Also
>The alternative has been genocide and war many times.
I'm not sure what you mean by that? The alternative to what, boarders? I'd argue that boarders have cause more genocide and war more than the "alternative."
Someone is a bit desperatly trying to connect the two things, it seems.
> Many pack and social animal use boarders
Many solo animals have defined borders as well, while many social animals don't have defined territories.
> People are no different.
People are very different from wolfes, sorry to burst your bubble.
> Respected boarders allow us get along and cooperate. The alternative has been genocide and war many times.
Are they though? In modern times nearly all the wars and their victims are happening inside states, with resoources and money as goals not borders. The IS did not rise in Iraq because the borders where badly desgined, neither did Soviet Russia start it's domination and rule over eastern europe becasue of ill defined border. The Mongols didn't conquer the whole world because of an absence of borders, nor was the Holocaust triggered by borders.
I’m not sure about that, in my region at least each state has a unique culture and it’s readily acknowledged by pretty much anyone you talk to.
Are you insinuating that the borders of these wolves' packs aren't wholly arbitrary?
This reminds me of that "hog walls around states" map
Oh my god he's walking all over my SCREEN!
"Well, clearly, the blue part is land..."
Sherlock Holmes is the autistic Sherlock Holmes
....And now I am reading this https://www.veterinarypracticenews.com/obsessive-compulsive-disorder-in-animals/
WolfCD.
That’s why, this December, the NFL is committed to bringing awareness of Wolf OCD to you at home as a part of their charitable, lupine-centric outreach campaign.
I'm not sure, I suspect the tracker pings every so often then just connects the dots.
I have to assume this tracking information comes along with first hand accounts, right? Otherwise, why assume they're eating blueberries, rather than predating upon other animals coming to eat the blueberries (rabbits, woodchucks, etc.)?
I didn't see that in the article, just seemed like an odd leap to go from spending time in the blueberry patches to mean they're definitely eating a bunch of blueberries. It would be like thinking bears really like drinking water when the salmon run because they're spending so much time in rivers.
International line, customs be damned.
Zooms out map
"...oh good lord"
r/unexpectedsanandreas
It’s a baked, pie-like dish.
Thank you for this sub :)
Executive Producer Dick Wolf
!Thesaurizethis
It's actually level OT8.
Reads post, thinks this movie already went down hill, reads Directed by Michael Bay... clever, very clever.
The fact that you didn't put some variant of the line "there's a storm coming" in your fake trailer dishonors us all.
And everything changed, when the red wolves attacked
John Woolf*
Missed a golden opportunity for a "John Woo is actually a wolf wearing a Chinese person's skin" joke
*john awooo
And his dick
It's "pack"
I'm guessing they're mostly going off of https://en.wikipedia.org/wiki/Major_histocompatibility_complex comparison for territoriality (like a lot of species are already known to do to find genetically-distant mates.)
The convenient thing about the MHC is that it's not *just* genetic, it also differs based on what diseases you've encountered in your life, so two members of the same pack will smell more similar (since they're constantly passing diseases back and forth, just like a human family) but members of different packs will smell more different.
So you just have to avoid any area that smells like not-you, histochemically, and you'll be fine.
I agree, but I'm a mere piss enthusiast.
Especially after asparagus.
Trust me, I ate some last night.
As an Eagles fan, who's Dad is from Buffalo, that has spent significant time in the Minnesota Boundary Waters....
Just yes. Yes to this thread
Religious wars are easy to win.
All you have to do is kill all the heathens.
The comparison is about the formation of boundaries. They are not comparing a human customs booth to a bush with wolf piss on it.
Hence the idea of tiered governments. Ideally, you can have a first degree connection with a lot more people in your local neighborhood. By extension, a second degree connection with a good majority of your community. A third degree connection with most of your state. etc. Overly large, separated, centralized governing structures don't exactly work perfectly for people either.
How many billions of wolves are there in a pack?
And? Do you think that national identity is nonexistent? Or that humans developed nation-states entirely independently of the fact that we are social animals?
Yeah this is the obvious point people are missing here. If people lived in small tribes with some kind of boundary that 'outsiders' weren't exactly welcome in, alright that kinda resembles a wolf pack. You look out for and trust the few dozen/hundred people that you know, eat with, hunt with, and see literally everyday.
A nation-state of 300 million people, across a continent, and saying "this is my wolf pack, totally a natural and biological human inclination" is ridiculous.
Territorial borders of humans are not a recent invention. It’s a universal trait of humans to group into tribes that control a piece of territory, especially since the invention of farming.
With all due respect it’s almost insulting that you would say something so absolutely contrary to reality with such confidence. Who are you trying to fool and why? I genuinely don’t understand.
> Humans have lived without boarders far longer than they've lived with them.
Do you have a source for this? I'm curious to read a little more.
On the surface i feel like some of our boarders that are in the middle of fields may be more recent but i'd believe lakes/rivers clearly separated territories between different groups of humans.
Humans, however, are very similar to chimps and they definitely have defined territory and borders
Not really arbitrary it's constrained by resources, population size and the presence of other packs as well as geography of rivers or man made citys or obstructions blocking their paths.
Untapped infinite h o g s
Link?
Michael Moore once had a tv show on Fox (seriously) called TV Nation, they had an episode where they looked at pet OCD and related disorders and it was pretty interesting. I particularly remember this dog that had an absolute obsession with a chunk of wood and would carry it everywhere, rub up and all over on it, push it along the ground, etc, not-stop and obsessively. prozac helped him iirc.
EDIT - found it:
https://www.youtube.com/watch?v=ujPjkbI42yA
it's one of a couple segments of that show that were kind of awkward though, in that it seemed like they wanted to mock what they were documenting but ended up not getting the 'right' material to make the participants look bad. Another instance like that featured now-famous presenter Louis Theroux exploring commercial crime scene clean-up services.
I hope someone finds this comment at least somewhat interesting
Hitler was right
Somebody make this a sub, RIGHT NOW
r/subsyoufellfor
Surely you can't be serious?
Like pie?
God *damn* it why didn't I think of that?
Sum savage crowd drawing card: *on ambo, addressing savage colorful assailants clad in mistily Nazi-ish furnishes* "Friends! The land is ours! The mediocre mashers shall be blasted past and for all. And this shall be a terrestrial planet Coloured in way and claw!"
Red philanderers: *verbalise militantly*
Tom Search (polished as a river classical scholar with an patch and a hankie): *controls set on heavier-than-air craft ended the procession reply, drowning out the chromatic colour philanderers' mouths. The weapon system body parts machine hold on the Sum assailant take loss leader* "I don't think up so."
Red Wolf load up person: "Impossibility! Canids can't lock late soul technology!"
Tom Travel: "In one case you bear $10,000 to reach out layer 10 in Church of Scientology and in full clean-handed your scheme of thetans, thing is fermentable." *displaces weapon. The colored Hugo Wolf feature jests as he deepens into a globe the separate of a skyscraper.*
*DIRECTED BY ARCHANGEL BAY LAUREL*
***
^(This is a bot. I try my best, but my best is 80% mediocrity 20% hilarity. Created by OrionSuperman. Check out my best work at /r/ThesaurizeThis)
John Awoo?
Ah yes, that’s old chestnut
....*sigh* username checks out I guess
Maybe they’re warlock wolves
> It's "pack"
Until blue joins up with red and you're fighting a two-front war.
For the Emperor!
I'm saying that customs booth's are how we make boundaries. Take those away (or the equivalent cultural item, i.e. ritual tree carvings, flags, etc.) and it's anybody's guess where the boundary's supposed to be. Just as if you removed the pissed-on-bushes, the wolves would have no clue as to where the rival packs operate.
Who do you think are closer culturally, a guy who lived his whole life in Seattle and someone from Vancouver, or that same guy from Seattlw with someone from Alabama?
So you agree that humans banding together is natural? Yet somehow can’t fathom doing it at a large scale?
More importantly who the hell is upvoting the idiot.
You just see the superficial but fail to see the reasons that happens, the best explanation for this is the concept of increasing returns of violence, which is the whole premise of the book "The Sovereign Individual".
Idk why you feel insulted when you are the one who's making giant assumptions. Humans are social animals and pretty much every society has some set of rules, that much is true. Which is to say some people will be outsiders. Those "borders" between the inside and the outside though would wary wildly depending on geography, culture, historical context, scarcity etc. That doesn't mean it's somehow "natural" for human societies to be territorial like the wolves are.
Modern international borders on the other hand, are products of nation states, which didn't exist until two hundred or so years ago. They have absolutely nothing to do with behaviors exhibited by pack animals. This is the same idiotic line of thought that led to people calling themselves alpha males and all that.
Oh, so just like human borders then?
https://i.imgur.com/h0wYIxJ.png
https://i.kym-cdn.com/photos/images/original/001/319/692/0a2.png
> I hope someone finds this comment at least somewhat interesting
Mission accomplished :)
I am serious! And don't call me Shirley.
It’s okay. I got your back.
I don't know
Best bot ever.
Good Bot
John oWo
Could be! Explains the teleportations!
People who think borders are an archaic construct? I almost disagree with both posters in some ways. "Borders" are an animal instinct that have no real place in an ideal global society.
The reason it happens is because clan kinship groups benefit by uniting under a banner with territory and borders.
[There are human borders which make absolutely no logical sense](https://nl.wikipedia.org/wiki/Baarle-Nassau#/media/File:Baarle-Nassau_-_Baarle-Hertog-nl.svg) - I don't see those wolves starting to accept enclaves in their territory anytime soon...
Not really at all like modern human borders but go off fam
Oh nice. I was expecting Zelda.
[removed]
<3 you too
Because every person shares the same culture and morals.
We're talking about reality, not idealism. Disagreeing _with_ the way the world works is not the same as disagreeing _about_ the way the world works.
But every person within the same borders share the same culture and morals?
Thinking that nation-states are a natural human inclination and comparable to a fuckin' tribe is hilariously dumb.
###Markdown
Scraping Reddit Data Using the PRAW library, a Python wrapper for the Reddit API, anyone can easily scrape data from Reddit or even create a Reddit bot.
###Code
!pip install praw
import praw
###Output
_____no_output_____
###Markdown
Before PRAW can be used to scrape data, we need to authenticate ourselves. For this, we create a `Reddit` instance and provide it with a `client_id`, a `client_secret`, and a `user_agent`. To get your id and secret, create a Reddit application at [this page](https://www.reddit.com/prefs/apps).
###Code
reddit = praw.Reddit(client_id='my_client_id',
client_secret='my_client_secret',
user_agent='my_user_agent')
###Output
_____no_output_____
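###Markdown
With only these three values the instance is read-only, which is all scraping needs. To act as a user (vote, post, reply), PRAW also accepts `username` and `password`; a minimal sketch with placeholder values:
###Code
# authorized instance; only needed for actions such as voting or posting
reddit_authorized = praw.Reddit(client_id='my_client_id',
                                client_secret='my_client_secret',
                                user_agent='my_user_agent',
                                username='my_username',
                                password='my_password')
###Output
_____no_output_____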
###Markdown
We can get information or posts from a specific subreddit by calling the `reddit.subreddit` method and passing it the subreddit's name.
###Code
# get 10 hot posts from the MachineLearning subreddit
hot_posts = reddit.subreddit('MachineLearning').hot(limit=10)
###Output
_____no_output_____
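###Markdown
`hot` is not the only listing; a subreddit also exposes methods such as `new` and `top`. A minimal sketch (assuming the same authenticated `reddit` instance):
###Code
# the 10 newest posts
new_posts = reddit.subreddit('MachineLearning').new(limit=10)
# the 10 top-scoring posts of the past week
top_posts = reddit.subreddit('MachineLearning').top(time_filter='week', limit=10)
###Output
_____no_output_____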
###Markdown
Now that we scraped 10 posts we can loop through them and print some information.
###Code
for post in hot_posts:
print(post.title)
# get hot posts from all subreddits
hot_posts = reddit.subreddit('all').hot(limit=10)
for post in hot_posts:
print(post.title)
# get MachineLearning subreddit data
ml_subreddit = reddit.subreddit('MachineLearning')
print(ml_subreddit.description)
###Output
**[Rules For Posts](https://www.reddit.com/r/MachineLearning/about/rules/)**
--------
+[Research](https://www.reddit.com/r/MachineLearning/search?sort=new&restrict_sr=on&q=flair%3AResearch)
--------
+[Discussion](https://www.reddit.com/r/MachineLearning/search?sort=new&restrict_sr=on&q=flair%3ADiscussion)
--------
+[Project](https://www.reddit.com/r/MachineLearning/search?sort=new&restrict_sr=on&q=flair%3AProject)
--------
+[News](https://www.reddit.com/r/MachineLearning/search?sort=new&restrict_sr=on&q=flair%3ANews)
--------
***[@slashML on Twitter](https://twitter.com/slashML)***
--------
**Beginners:**
--------
Please have a look at [our FAQ and Link-Collection](http://www.reddit.com/r/MachineLearning/wiki/index)
[Metacademy](http://www.metacademy.org) is a great resource which compiles lesson plans on popular machine learning topics.
For Beginner questions please try /r/LearnMachineLearning , /r/MLQuestions or http://stackoverflow.com/
For career related questions, visit /r/cscareerquestions/
--------
[Advanced Courses](https://www.reddit.com/r/MachineLearning/comments/51qhc8/phdlevel_courses?st=isz2lqdk&sh=56c58cd6)
--------
**AMAs:**
[Libratus Poker AI Team (12/18/2017)]
(https://www.reddit.com/r/MachineLearning/comments/7jn12v/ama_we_are_noam_brown_and_professor_tuomas/)
[DeepMind AlphaGo Team (10/19/2017)](https://www.reddit.com/r/MachineLearning/comments/76xjb5/ama_we_are_david_silver_and_julian_schrittwieser/)
[Google Brain Team (9/17/2017)](https://www.reddit.com/r/MachineLearning/comments/6z51xb/we_are_the_google_brain_team_wed_love_to_answer/)
[Google Brain Team (8/11/2016)]
(https://www.reddit.com/r/MachineLearning/comments/4w6tsv/ama_we_are_the_google_brain_team_wed_love_to/)
[The MalariaSpot Team (2/6/2016)](https://www.reddit.com/r/MachineLearning/comments/4m7ci1/ama_the_malariaspot_team/)
[OpenAI Research Team (1/9/2016)](http://www.reddit.com/r/MachineLearning/comments/404r9m/ama_the_openai_research_team/)
[Nando de Freitas (12/26/2015)](http://www.reddit.com/r/MachineLearning/comments/3y4zai/ama_nando_de_freitas/)
[Andrew Ng and Adam Coates (4/15/2015)](http://www.reddit.com/r/MachineLearning/comments/32ihpe/ama_andrew_ng_and_adam_coates/)
[Jürgen Schmidhuber (3/4/2015)](http://www.reddit.com/r/MachineLearning/comments/2xcyrl/i_am_j%C3%BCrgen_schmidhuber_ama/)
[Geoffrey Hinton (11/10/2014)]
(http://www.reddit.com/r/MachineLearning/comments/2lmo0l/ama_geoffrey_hinton/)
[Michael Jordan (9/10/2014)](http://www.reddit.com/r/MachineLearning/comments/2fxi6v/ama_michael_i_jordan/)
[Yann LeCun (5/15/2014)](http://www.reddit.com/r/MachineLearning/comments/25lnbt/ama_yann_lecun/)
[Yoshua Bengio (2/27/2014)](http://www.reddit.com/r/MachineLearning/comments/1ysry1/ama_yoshua_bengio/)
--------
Related Subreddit :
* [LearnMachineLearning](http://www.reddit.com/r/LearnMachineLearning)
* [Statistics](http://www.reddit.com/r/statistics)
* [Computer Vision](http://www.reddit.com/r/computervision)
* [Compressive Sensing](http://www.reddit.com/r/CompressiveSensing/)
* [NLP] (http://www.reddit.com/r/LanguageTechnology)
* [ML Questions] (http://www.reddit.com/r/MLQuestions)
* /r/MLjobs and /r/BigDataJobs
* /r/datacleaning
* /r/DataScience
* /r/scientificresearch
* /r/artificial
###Markdown
Because we only have a limited number of requests per day, it is a good idea to save the scraped data to a variable or a file.
###Code
import pandas as pd

# collect the attributes we care about for each post
posts = []
ml_subreddit = reddit.subreddit('MachineLearning')
for post in ml_subreddit.hot(limit=10):
    posts.append([post.title, post.score, post.id, post.subreddit, post.url, post.num_comments, post.selftext, post.created])
posts = pd.DataFrame(posts, columns=['title', 'score', 'id', 'subreddit', 'url', 'num_comments', 'body', 'created'])
posts
# save the data so we don't have to scrape it again
posts.to_csv('top_ml_subreddit_posts.csv')
###Output
_____no_output_____
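###Markdown
Later sessions can then reload the CSV instead of spending API requests again (a usage sketch with the file written above):
###Code
# reload previously scraped posts from disk
saved_posts = pd.read_csv('top_ml_subreddit_posts.csv')
saved_posts.head()
###Output
_____no_output_____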
###Markdown
PRAW also allows us to get information about a specific post/submission.
###Code
# load a submission by its full URL...
submission = reddit.submission(url="https://www.reddit.com/r/MapPorn/comments/a3p0uq/an_image_of_gps_tracking_of_multiple_wolves_in/")
# ...or, equivalently, by its id (the part of the URL after comments/)
submission = reddit.submission(id="a3p0uq")
for top_level_comment in submission.comments:
    print(top_level_comment.body)
###Output
Source: [https://www.facebook.com/VoyageursWolfProject/](https://www.facebook.com/VoyageursWolfProject/)
I thought this was a shit post made in paint before I read the title
Wow, that’s very cool. To think how keen their senses must be to recognize and avoid each other and their territories. Plus, I like to think that there’s one from the white colored clan who just goes way into the other territories because, well, he’s a badass.
That’s really cool. The edges are surprisingly defined.
White wolf is a dick constantly trespassing other's territories.
[Link to Story](https://www.duluthnewstribune.com/news/science-and-nature/4538836-voyageurs-national-park-wolves-eating-beaver-and-blueberries-not)
Cool to imagine that there are similar zones surrounding all these, we just didn't tag those wolves.
You know the white wolf fucked some red's bitch for sure.
It’s wild how they are all roughly the same size.
This what i am gonna show people when they ask for a photo of a sixpack
That's actually awesome.
[deleted]
White Wolf pack is looking for fight
/r/dataisbeautiful
But actually beautiful, not "here's a graph of my heart rate when I went on a date." This is actually gorgeous, informative, and awesome.
I want to know WTF lives here that the wolf keeps avoiding?https://i.imgur.com/T7NrS7F.jpg
I want more data!!! Is the white pack made up of many aggressive wolves so they spread to other territories periodically? Or is it just the one wolf who doesn’t give as much of a fuck? Does a tighter cluster mean a smaller pack or just more territorial? What is the age, gender, and type of wolves that are being tracked?! So many questions, so little information.
/r/misleadingthumbnails minimap of the grand final of the 3v3 Age of Empires 2 tournament
The white pack is drawing a wolf face.
White one tried to be naughty a little by sneaking into red zone just a little bit.
Its amazing how they all seem to be similar in area size.
That one white wolf is Big and Bad.
[removed]
Am I a wolf? If my senses and economic status allowed me to stay so perfectly sequestered from other people, I would without question.
It's content like this that makes reddit great, well done OP.
This is a window into the mind of a wolf. Not only do they have clearly defined ranges, they have clearly defined packs and each wolf must know each other's scent markings. I am blown away.
Also the blue pack is way too cautious.
I found the location on Google Maps. It looks like the green pack's territory covers about 25 sq. miles (larger than San Marino) and also includes the NOvA Far Detector: [https://www.burnsmcd.com/projects/nova-far-detector](https://www.burnsmcd.com/projects/nova-far-detector)
I'd love to see something similar but with Chimps. Who actually wage war, have soldiers, etc.
[Something representing this](https://www.youtube.com/watch?v=a7XuXi3mqYM) (potentially NSFW Chimpanzee cannibalism)
There's a white wolf plotting some shit.
White Clan Wolf: " Let’s do this… LEEROOOOOOOOOOOOOOOOOOOOY JEEEEEENKIIIIIIIIIIINS!"
This research project is called the Voyageurs Wolf Project, and it has a Facebook page associated with it where this map was originally posted. If you're interested in following the project and/or learning more about Wolves, take a look at it!
[https://www.facebook.com/VoyageursWolfProject/](https://www.facebook.com/VoyageursWolfProject/)
This is brilliant!
Wolves fascinate and terrify me in equal measure. Incredible animals with amazing social structures.
The wolves know who each other are.
That one White wolf gets way the fuck into Red's range, like shit I knew I should have asked for directions
Reminds me of the Warrior series about wild cats.
Those white wolves are pretty ballsy
Get these on a live transmitter and put it on a website where people can watch. Instant sports. People will watch these wolves and what they do and root for each color.
White wolf is on an adventure to find itself... Deep man.
Does yellow have the prime territory? They would have to defend incursions from competitors on all sides. I wonder if they are the most badass.
these wolves know how to walk in straight lines very well.
Very interesting. I just wish the white and yellow were seperated by a different color so we can distinguish them better.
Some are adventurous, some are invasive, and some just stay where they are.
This map is in for a great story line of the six packs
The white wolf must listen to manowar
oh wow, that's fascinating
Leaked image of Mount and Blade: Bannerlord factions and their territories
All of the wolves are named Toby
I worked and hung out with that guy. Beautiful country out there, miss it.
scale please?
[STAY AWAY FROM THIS AREA!](https://i.imgur.com/bqc12Be.jpg)
White wolves are the bravest apparently... Those white lines are everywhere
Borders are an imaginary concept made up by humans
see? humans didnt invent the borders
Just before seeing this post I was looking at the map of racial distribution in New York City and I can't help but notice the similarities.
Looks like white privilege to me.:)
White wolf's a hoe
Green and Yellow have it the hardest. I wonder if the spots they're in have the best food and water supply.
DNA analysis of markings and droppings to go with this? Would be nice to compare in 10 years to see if there is some intermingling.
Even their travelling patterns vary. For example the red pack has covered most of their defined territory that they don't stray from where as the white pack tends to push the boundaries of their territory.
I would love to see similar with other pack animals such as painted wolves or single mothers.
and then the fire nation attacked
Makes sense. I very rarely go into other people’s homes as well.
Curious why the white and pink groups have so much "open area"? My guess is that their territory has less "usable" land. Maybe a gorge or something.
This thought brought me to the question? Does each "pack" have the same territory as far as usable land?
The two largest territories are the white and pink, yet they have the most "unused" land?
If each territory is equal in usable land, what would dictate this? Are the packs the same in number? Or is it because of dominance and fighting among the different packs?
Please tell me there isn't a wolf counsel the decides....gerrymandering.
I find it interesting how there doesn't seem to a centralized or preferred spot, but rather the entire territory is relatively evenly covered. You'd think they'd have preferred hunting grounds or game trails or something like that, but I guess not.
edit* actually it looks like Yellow at least has a central hub, but Red is almost completely even. I wonder if the streaks of density are game trails or part of a defined route for grazing animals they prey on. There's an obvious strait line streakiness here and straight lines tend to be uncommon in nature.
Looks to me like the white pack doesn't give one single fuck about pack boundaries
Now if only people would do this by minding their own business
The Ptarmigan's Dilemma. Really good book about evolution and natural phenomena/behaviour like this. Great sections on bear's activity very similar to this, would highly recommend!
r/dataisbeautiful
You can see the white signal scouting on the perifery of the territories. Super cool.
The white wolf pack don't give a fuck bro.
White pack is quite adventurous.
The white line wolf went hella far
I really wonder if the fact that the yellow & green wolve packs are used to encounter more neignbouring packs (being squeezed in the middle) makes them have a different perspective on their enviornment than the others? I mean could they feel more threatened, having "more" neignbours? could they feel more pressure to up their game for resources because of "more" potential rivals?
interesting to think about that.
Next week on gangland
White group just doesn't give a fuck. Going through any group they see fit.
See the seams between the colors? Avoid those places if you don't like stepping in wolf pee.
The hell? These white wolves going on a cross country trip or something?
It seems like white takes some risks.
Awesome! Good one for r/dataisbeautiful !
Wolves trying to take over tamriel
This is actually my husbands family avoiding each other through out the year besides holidays.
White wolf goes where tf he wants
And all I ask it the dude riding my bus doesn’t shove his junk onto me at every stop.
Infinity Dogs
We should enforce open borders for wolves. They seem like nazis. Let's make them pay reparations.
r/colorblindgore
Whenever I see stories/studies like this I always find myself comparing humans to animals. These wolves clearly keep to their own areas for the most part. It’s almost like certain groups of people shouldn’t intertwine with each other, but in today’s world everything is about accepting all. It seems we force cultures to coincide with each other and it doesn’t always workout the greatest.
Each other's territories. Wolves have territories.
Damn, wolves are so racist you'd think they were humans!
The chad white wolf pack vs the virgin red wolfpack
Anyone know if there’s open access data similar to this?
Bet you can find a lot of marked trees at the "borders"
There's something oddly familiar about things from California spreading their tendrils out to the PNW.
when the documentary makers say his area, I will take the word area serios
Yellow wolf pack is basically Israel
White wolf pack has no chill lul!
I'm a bot, *bleep*, *bloop*. Someone has linked to this thread from another place on reddit:
- [/r/circlebroke2] [humans are literally the same as wolves. Jordan Peterson told me so. excuse me while I go piss on my house to mark my territory](https://www.reddit.com/r/circlebroke2/comments/a3trjr/humans_are_literally_the_same_as_wolves_jordan/)
- [/r/dataisbeautiful] [I find this extremely interesting](https://www.reddit.com/r/dataisbeautiful/comments/a3qdiv/i_find_this_extremely_interesting/)
- [/r/dataisbeautiful] [“An image of GPS tracking of multiple wolves in six different packs around Voyageurs National Park shows how much the wolf packs avoid each other's range. Image courtesy of Thomas Gable”](https://www.reddit.com/r/dataisbeautiful/comments/a3v1dd/an_image_of_gps_tracking_of_multiple_wolves_in/)
- [/r/the_pack] [“AN IMAGE OF GPS TRACKING OF MULTIPLE WOLVES IN SIX DIFFERENT PACKS AROUND VOYAGEURS NATIONAL PARK SHOWS HOW MUCH THE WOLF PACKS AVOID EACH OTHER'S RANGE. IMAGE COURTESY OF THOMAS GABLE”](https://www.reddit.com/r/THE_PACK/comments/a3r1sr/an_image_of_gps_tracking_of_multiple_wolves_in/)
- [/r/unpanderers] [“An image of GPS tracking of multiple wolves in six different packs around Voyageurs National Park shows how much the wolf packs avoid each other's range. Image courtesy of Thomas Gable”](https://www.reddit.com/r/UnPanderers/comments/a3vg63/an_image_of_gps_tracking_of_multiple_wolves_in/)
*^(If you follow any of the above links, please respect the rules of reddit and don't vote in the other threads.) ^\([Info](/r/TotesMessenger) ^/ ^[Contact](/message/compose?to=/r/TotesMessenger))*
They dont need walls to know where their territory ends.
r/wolves
Interesting!
\#Respect
Seeing this, I'm reminded of the film The Warriors. Got to get back to Coney boys, we're on our own.
Are the longer, straight lines just glitches in the gps?
I’ve never heard of an LGBT Black Metal before
I like the one adventurous grey wolf who snuck deep into red territory and then beelines back home. I imagine a Romeo and Juliet-esque scenario between him and a red wolf Capulet.
Gang territory
Do these packs exchange members to promote cross breeding?
Got a bold white wolf. One just straight checking red's whole edge there. Unless that's a border of some kind in a similar color.
How do you get into wolf pack? Do you have to be born in it or can you maybe change clan down the line?
What I want to know is why some of those lines are so perfectly straight!
It sort of looks like a wolf head, too, in profile at least if you squint hard enough. Purple is the ear, white and blue the mouth, green has an eye carved out in the middle of it, and red's the neck.
Wolf countries
blue pack rules!
Very cool. Thanks for sharing OP.
The Six Kingdoms.
Green packed is either going to start a war or die off slowly, they're cornered with no room for expansion.
Source?
That white wolf is going places
White pack lowkey scouting into others' areas though
Yellow is in a bad spot of war breaks out
I honestly thought this was a shitty map of the Old World.
White is clearly insanity wolf.
Thank you Thomas, very cool!
It almost looks like a giant multicolored wolf head.
I thought this was some guy drawing the borders of Skyrim.
Red wolves have cardio for days
It looks like a member of the white pack has no problem mingling in blue territory. Like some sort of unaccompanied wolf
The white pack love challenges.
The unseen maps of animals
I believe this is called the competitive exclusion principle. Species that compete, include animals of the same species tend to show these characteristics when living in the same proximity.
Wolves: Mind your business, we'll mind ours.
Humans: Let's fuck some people/places/things up.
0/6 gang hideouts discovered
Just out of curiosity what's the size of each territory?
Meanwhile, gold wolf pack is solo on its own island paradise at the top
Even wolves are scared of wolves.
Roughly looks like a map of the world, esp the right side
There's 1 white wolf who don't give a shit. See the white line on the right
Chad white wolves don't care about your "boundaries"
That's a lot of pee.
I'm curious about how a wolf decides to venture within their territory.
Pink wolf, Blue wolf, and White wolf have the widest spans of territory, but Red wolf, Yellow wolf, and Green wolf are more comprehensive about where they go in their own territory.
The red wolf appears to be on meth.
I'd like to see this crossreferenced with the distance in which a wolf could smell or hear or otherwise detect fellow wolves!
Me when I see people I graduated with at Walmart.
TIL wolves are bad at MSpaint
The white pack doesn’t give a fuck
It's like scandinavian people waiting for the bus.
But I would think there needs to be some interaction so that they don't interbreed in order to keep the gene pools healthy.
Basically like gangs, and the gangsters do tend to display animal-like behaviors. Build the wall
#openborders
The white line is the trader
White just be like "they see me Rollin"
The white bastards would take some liberties like that. Typical.
The white wolf clan also appears to be knowledgeable of his GPS tracker and has drawn a modern art version of a white wolfs face.
###Markdown
This will work for some submissions, but for others with more comments this code will throw an `AttributeError` saying: `'MoreComments' object has no attribute 'body'`. These `MoreComments` objects represent the “load more comments” and “continue this thread” links encountered on the website, as described in more detail in the comment documentation. One option, sketched below, is to resolve these placeholders up front with PRAW's `replace_more`; another is to get rid of the `MoreComments` objects by checking the datatype of each comment before printing the body.
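A minimal sketch of the `replace_more` approach (`limit=0` removes the placeholders outright, while `limit=None` would fetch every comment at the cost of extra API requests):
###Code
# resolve the MoreComments placeholders before iterating
submission.comments.replace_more(limit=0)
for top_level_comment in submission.comments:
    print(top_level_comment.body)
###Output
_____no_output_____
###Markdown
And the datatype check: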
###Code
from praw.models import MoreComments

for top_level_comment in submission.comments:
    # skip the "load more comments" placeholder objects
    if isinstance(top_level_comment, MoreComments):
        continue
    print(top_level_comment.body)
###Output
Source: [https://www.facebook.com/VoyageursWolfProject/](https://www.facebook.com/VoyageursWolfProject/)
I thought this was a shit post made in paint before I read the title
Wow, that’s very cool. To think how keen their senses must be to recognize and avoid each other and their territories. Plus, I like to think that there’s one from the white colored clan who just goes way into the other territories because, well, he’s a badass.
That’s really cool. The edges are surprisingly defined.
White wolf is a dick constantly trespassing other's territories.
[Link to Story](https://www.duluthnewstribune.com/news/science-and-nature/4538836-voyageurs-national-park-wolves-eating-beaver-and-blueberries-not)
Cool to imagine that there are similar zones surrounding all these, we just didn't tag those wolves.
You know the white wolf fucked some red's bitch for sure.
It’s wild how they are all roughly the same size.
This what i am gonna show people when they ask for a photo of a sixpack
That's actually awesome.
[deleted]
White Wolf pack is looking for fight
/r/dataisbeautiful
But actually beautiful, not "here's a graph of my heart rate when I went on a date." This is actually gorgeous, informative, and awesome.
I want to know WTF lives here that the wolf keeps avoiding?https://i.imgur.com/T7NrS7F.jpg
I want more data!!! Is the white pack made up of many aggressive wolves so they spread to other territories periodically? Or is it just the one wolf who doesn’t give as much of a fuck? Does a tighter cluster mean a smaller pack or just more territorial? What is the age, gender, and type of wolves that are being tracked?! So many questions, so little information.
/r/misleadingthumbnails minimap of the grand final of the 3v3 Age of Empires 2 tournament
The white pack is drawing a wolf face.
White one tried to be naughty a little by sneaking into red zone just a little bit.
Its amazing how they all seem to be similar in area size.
That one white wolf is Big and Bad.
[removed]
Am I a wolf? If my senses and economic status allowed me to stay so perfectly sequestered from other people, I would without question.
It's content like this that makes reddit great, well done OP.
This is a window into the mind of a wolf. Not only do they have clearly defined ranges, they have clearly defined packs and each wolf must know each other's scent markings. I am blown away.
Also the blue pack is way too cautious.
I found the location on Google Maps. It looks like the green pack's territory covers about 25 sq. miles (larger than San Marino) and also includes the NOvA Far Detector: [https://www.burnsmcd.com/projects/nova-far-detector](https://www.burnsmcd.com/projects/nova-far-detector)
I'd love to see something similar but with Chimps. Who actually wage war, have soldiers, etc.
[Something representing this](https://www.youtube.com/watch?v=a7XuXi3mqYM) (potentially NSFW Chimpanzee cannibalism)
There's a white wolf plotting some shit.
White Clan Wolf: " Let’s do this… LEEROOOOOOOOOOOOOOOOOOOOY JEEEEEENKIIIIIIIIIIINS!"
This research project is called the Voyageurs Wolf Project, and it has a Facebook page associated with it where this map was originally posted. If you're interested in following the project and/or learning more about Wolves, take a look at it!
[https://www.facebook.com/VoyageursWolfProject/](https://www.facebook.com/VoyageursWolfProject/)
This is brilliant!
Wolves fascinate and terrify me in equal measure. Incredible animals with amazing social structures.
The wolves know who each other are.
That one White wolf gets way the fuck into Red's range, like shit I knew I should have asked for directions
Reminds me of the Warrior series about wild cats.
Those white wolves are pretty ballsy
Get these on a live transmitter and put it on a website where people can watch. Instant sports. People will watch these wolves and what they do and root for each color.
White wolf is on an adventure to find itself... Deep man.
Does yellow have the prime territory? They would have to defend incursions from competitors on all sides. I wonder if they are the most badass.
these wolves know how to walk in straight lines very well.
Very interesting. I just wish the white and yellow were seperated by a different color so we can distinguish them better.
Some are adventurous, some are invasive, and some just stay where they are.
This map is in for a great story line of the six packs
The white wolf must listen to manowar
oh wow, that's fascinating
Leaked image of Mount and Blade: Bannerlord factions and their territories
All of the wolves are named Toby
I worked and hung out with that guy. Beautiful country out there, miss it.
scale please?
[STAY AWAY FROM THIS AREA!](https://i.imgur.com/bqc12Be.jpg)
White wolves are the bravest apparently... Those white lines are everywhere
Borders are an imaginary concept made up by humans
see? humans didnt invent the borders
Just before seeing this post I was looking at the map of racial distribution in New York City and I can't help but notice the similarities.
Looks like white privilege to me.:)
White wolf's a hoe
Green and Yellow have it the hardest. I wonder if the spots they're in have the best food and water supply.
DNA analysis of markings and droppings to go with this? Would be nice to compare in 10 years to see if there is some intermingling.
Even their travelling patterns vary. For example the red pack has covered most of their defined territory that they don't stray from where as the white pack tends to push the boundaries of their territory.
I would love to see similar with other pack animals such as painted wolves or single mothers.
and then the fire nation attacked
Makes sense. I very rarely go into other people’s homes as well.
Curious why the white and pink groups have so much "open area"? My guess is that their territory has less "usable" land. Maybe a gorge or something.
This thought brought me to the question? Does each "pack" have the same territory as far as usable land?
The two largest territories are the white and pink, yet they have the most "unused" land?
If each territory is equal in usable land, what would dictate this? Are the packs the same in number? Or is it because of dominance and fighting among the different packs?
Please tell me there isn't a wolf counsel the decides....gerrymandering.
I find it interesting how there doesn't seem to a centralized or preferred spot, but rather the entire territory is relatively evenly covered. You'd think they'd have preferred hunting grounds or game trails or something like that, but I guess not.
edit* actually it looks like Yellow at least has a central hub, but Red is almost completely even. I wonder if the streaks of density are game trails or part of a defined route for grazing animals they prey on. There's an obvious strait line streakiness here and straight lines tend to be uncommon in nature.
Looks to me like the white pack doesn't give one single fuck about pack boundaries
Now if only people would do this by minding their own business
The Ptarmigan's Dilemma. Really good book about evolution and natural phenomena/behaviour like this. Great sections on bear's activity very similar to this, would highly recommend!
r/dataisbeautiful
You can see the white signal scouting on the perifery of the territories. Super cool.
The white wolf pack don't give a fuck bro.
White pack is quite adventurous.
The white line wolf went hella far
I really wonder if the fact that the yellow & green wolve packs are used to encounter more neignbouring packs (being squeezed in the middle) makes them have a different perspective on their enviornment than the others? I mean could they feel more threatened, having "more" neignbours? could they feel more pressure to up their game for resources because of "more" potential rivals?
interesting to think about that.
Next week on gangland
White group just doesn't give a fuck. Going through any group they see fit.
See the seams between the colors? Avoid those places if you don't like stepping in wolf pee.
The hell? These white wolves going on a cross country trip or something?
It seems like white takes some risks.
Awesome! Good one for r/dataisbeautiful !
Wolves trying to take over tamriel
This is actually my husbands family avoiding each other through out the year besides holidays.
White wolf goes where tf he wants
And all I ask it the dude riding my bus doesn’t shove his junk onto me at every stop.
Infinity Dogs
We should enforce open borders for wolves. They seem like nazis. Let's make them pay reparations.
r/colorblindgore
Whenever I see stories/studies like this I always find myself comparing humans to animals. These wolves clearly keep to their own areas for the most part. It’s almost like certain groups of people shouldn’t intertwine with each other, but in today’s world everything is about accepting all. It seems we force cultures to coincide with each other and it doesn’t always workout the greatest.
Each other's territories. Wolves have territories.
Damn, wolves are so racist you'd think they were humans!
The chad white wolf pack vs the virgin red wolfpack
Anyone know if there’s open access data similar to this?
Bet you can find a lot of marked trees at the "borders"
There's something oddly familiar about things from California spreading their tendrils out to the PNW.
when the documentary makers say his area, I will take the word area serios
Yellow wolf pack is basically Israel
White wolf pack has no chill lul!
I'm a bot, *bleep*, *bloop*. Someone has linked to this thread from another place on reddit:
- [/r/circlebroke2] [humans are literally the same as wolves. Jordan Peterson told me so. excuse me while I go piss on my house to mark my territory](https://www.reddit.com/r/circlebroke2/comments/a3trjr/humans_are_literally_the_same_as_wolves_jordan/)
- [/r/dataisbeautiful] [I find this extremely interesting](https://www.reddit.com/r/dataisbeautiful/comments/a3qdiv/i_find_this_extremely_interesting/)
- [/r/dataisbeautiful] [“An image of GPS tracking of multiple wolves in six different packs around Voyageurs National Park shows how much the wolf packs avoid each other's range. Image courtesy of Thomas Gable”](https://www.reddit.com/r/dataisbeautiful/comments/a3v1dd/an_image_of_gps_tracking_of_multiple_wolves_in/)
- [/r/the_pack] [“AN IMAGE OF GPS TRACKING OF MULTIPLE WOLVES IN SIX DIFFERENT PACKS AROUND VOYAGEURS NATIONAL PARK SHOWS HOW MUCH THE WOLF PACKS AVOID EACH OTHER'S RANGE. IMAGE COURTESY OF THOMAS GABLE”](https://www.reddit.com/r/THE_PACK/comments/a3r1sr/an_image_of_gps_tracking_of_multiple_wolves_in/)
- [/r/unpanderers] [“An image of GPS tracking of multiple wolves in six different packs around Voyageurs National Park shows how much the wolf packs avoid each other's range. Image courtesy of Thomas Gable”](https://www.reddit.com/r/UnPanderers/comments/a3vg63/an_image_of_gps_tracking_of_multiple_wolves_in/)
*^(If you follow any of the above links, please respect the rules of reddit and don't vote in the other threads.) ^\([Info](/r/TotesMessenger) ^/ ^[Contact](/message/compose?to=/r/TotesMessenger))*
They dont need walls to know where their territory ends.
r/wolves
Interesting!
\#Respect
Seeing this, I'm reminded of the film The Warriors. Got to get back to Coney boys, we're on our own.
Are the longer, straight lines just glitches in the gps?
I’ve never heard of an LGBT Black Metal before
I like the one adventurous grey wolf who snuck deep into red territory and then beelines back home. I imagine a Romeo and Juliet-esque scenario between him and a red wolf Capulet.
Gang territory
Do these packs exchange members to promote cross breeding?
Got a bold white wolf. One just straight checking red's whole edge there. Unless that's a border of some kind in a similar color.
How do you get into wolf pack? Do you have to be born in it or can you maybe change clan down the line?
What I want to know is why some of those lines are so perfectly straight!
It sort of looks like a wolf head, too, in profile at least if you squint hard enough. Purple is the ear, white and blue the mouth, green has an eye carved out in the middle of it, and red's the neck.
Wolf countries
blue pack rules!
Very cool. Thanks for sharing OP.
The Six Kingdoms.
Green packed is either going to start a war or die off slowly, they're cornered with no room for expansion.
Source?
That white wolf is going places
White pack lowkey scouting into others' areas though
Yellow is in a bad spot of war breaks out
I honestly thought this was a shitty map of the Old World.
White is clearly insanity wolf.
Thank you Thomas, very cool!
It almost looks like a giant multicolored wolf head.
I thought this was some guy drawing the borders of Skyrim.
Red wolves have cardio for days
It looks like a member of the white pack has no problem mingling in blue territory. Like some sort of unaccompanied wolf
The white pack love challenges.
The unseen maps of animals
I believe this is called the competitive exclusion principle. Species that compete, include animals of the same species tend to show these characteristics when living in the same proximity.
Wolves: Mind your business, we'll mind ours.
Humans: Let's fuck some people/places/things up.
0/6 gang hideouts discovered
Just out of curiosity what's the size of each territory?
Meanwhile, gold wolf pack is solo on its own island paradise at the top
Even wolves are scared of wolves.
Roughly looks like a map of the world, esp the right side
There's 1 white wolf who don't give a shit. See the white line on the right
Chad white wolves don't care about your "boundaries"
That's a lot of pee.
I'm curious about how a wolf decides to venture within their territory.
Pink wolf, Blue wolf, and White wolf have the widest spans of territory, but Red wolf, Yellow wolf, and Green wolf are more comprehensive about where they go in their own territory.
The red wolf appears to be on meth.
I'd like to see this crossreferenced with the distance in which a wolf could smell or hear or otherwise detect fellow wolves!
Me when I see people I graduated with at Walmart.
TIL wolves are bad at MSpaint
The white pack doesn’t give a fuck
It's like scandinavian people waiting for the bus.
But I would think there needs to be some interaction so that they don't interbreed in order to keep the gene pools healthy.
Basically like gangs, and the gangsters do tend to display animal-like behaviors. Build the wall
#openborders
The white line is the trader
White just be like "they see me Rollin"
The white bastards would take some liberties like that. Typical.
The white wolf clan also appears to be knowledgeable of his GPS tracker and has drawn a modern art version of a white wolfs face.
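###Markdown
A list comprehension achieves the same filtering in one step. This is a minimal sketch, assuming the same ``submission`` object as in the cells above; it keeps only the real ``Comment`` objects and drops the placeholders.
###Code
from praw.models import MoreComments

# Keep only the real Comment objects; drop the MoreComments placeholders
top_level_comments = [comment for comment in submission.comments
                      if not isinstance(comment, MoreComments)]
print(len(top_level_comments))
###Output
_____no_output_____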
###Markdown
The cell below shows another way to get rid of the ``MoreComments`` objects: calling ``replace_more`` with ``limit=0`` removes all of them from the comment forest.
###Code
# limit=0 removes every MoreComments object from the forest up front
submission.comments.replace_more(limit=0)
for top_level_comment in submission.comments:
    print(top_level_comment.body)
###Output
Source: [https://www.facebook.com/VoyageursWolfProject/](https://www.facebook.com/VoyageursWolfProject/)
I thought this was a shit post made in paint before I read the title
Wow, that’s very cool. To think how keen their senses must be to recognize and avoid each other and their territories. Plus, I like to think that there’s one from the white colored clan who just goes way into the other territories because, well, he’s a badass.
That’s really cool. The edges are surprisingly defined.
White wolf is a dick constantly trespassing other's territories.
[Link to Story](https://www.duluthnewstribune.com/news/science-and-nature/4538836-voyageurs-national-park-wolves-eating-beaver-and-blueberries-not)
Cool to imagine that there are similar zones surrounding all these, we just didn't tag those wolves.
You know the white wolf fucked some red's bitch for sure.
It’s wild how they are all roughly the same size.
This what i am gonna show people when they ask for a photo of a sixpack
That's actually awesome.
[deleted]
White Wolf pack is looking for fight
/r/dataisbeautiful
But actually beautiful, not "here's a graph of my heart rate when I went on a date." This is actually gorgeous, informative, and awesome.
I want to know WTF lives here that the wolf keeps avoiding?https://i.imgur.com/T7NrS7F.jpg
I want more data!!! Is the white pack made up of many aggressive wolves so they spread to other territories periodically? Or is it just the one wolf who doesn’t give as much of a fuck? Does a tighter cluster mean a smaller pack or just more territorial? What is the age, gender, and type of wolves that are being tracked?! So many questions, so little information.
/r/misleadingthumbnails minimap of the grand final of the 3v3 Age of Empires 2 tournament
The white pack is drawing a wolf face.
White one tried to be naughty a little by sneaking into red zone just a little bit.
Its amazing how they all seem to be similar in area size.
That one white wolf is Big and Bad.
[removed]
Am I a wolf? If my senses and economic status allowed me to stay so perfectly sequestered from other people, I would without question.
It's content like this that makes reddit great, well done OP.
This is a window into the mind of a wolf. Not only do they have clearly defined ranges, they have clearly defined packs and each wolf must know each other's scent markings. I am blown away.
Also the blue pack is way too cautious.
I found the location on Google Maps. It looks like the green pack's territory covers about 25 sq. miles (larger than San Marino) and also includes the NOvA Far Detector: [https://www.burnsmcd.com/projects/nova-far-detector](https://www.burnsmcd.com/projects/nova-far-detector)
I'd love to see something similar but with Chimps. Who actually wage war, have soldiers, etc.
[Something representing this](https://www.youtube.com/watch?v=a7XuXi3mqYM) (potentially NSFW Chimpanzee cannibalism)
There's a white wolf plotting some shit.
White Clan Wolf: " Let’s do this… LEEROOOOOOOOOOOOOOOOOOOOY JEEEEEENKIIIIIIIIIIINS!"
This research project is called the Voyageurs Wolf Project, and it has a Facebook page associated with it where this map was originally posted. If you're interested in following the project and/or learning more about Wolves, take a look at it!
[https://www.facebook.com/VoyageursWolfProject/](https://www.facebook.com/VoyageursWolfProject/)
This is brilliant!
Wolves fascinate and terrify me in equal measure. Incredible animals with amazing social structures.
The wolves know who each other are.
That one White wolf gets way the fuck into Red's range, like shit I knew I should have asked for directions
Reminds me of the Warrior series about wild cats.
Those white wolves are pretty ballsy
Get these on a live transmitter and put it on a website where people can watch. Instant sports. People will watch these wolves and what they do and root for each color.
White wolf is on an adventure to find itself... Deep man.
Does yellow have the prime territory? They would have to defend incursions from competitors on all sides. I wonder if they are the most badass.
these wolves know how to walk in straight lines very well.
Very interesting. I just wish the white and yellow were seperated by a different color so we can distinguish them better.
Some are adventurous, some are invasive, and some just stay where they are.
This map is in for a great story line of the six packs
The white wolf must listen to manowar
oh wow, that's fascinating
Leaked image of Mount and Blade: Bannerlord factions and their territories
All of the wolves are named Toby
I worked and hung out with that guy. Beautiful country out there, miss it.
scale please?
[STAY AWAY FROM THIS AREA!](https://i.imgur.com/bqc12Be.jpg)
White wolves are the bravest apparently... Those white lines are everywhere
Borders are an imaginary concept made up by humans
see? humans didnt invent the borders
Just before seeing this post I was looking at the map of racial distribution in New York City and I can't help but notice the similarities.
Looks like white privilege to me.:)
White wolf's a hoe
Green and Yellow have it the hardest. I wonder if the spots they're in have the best food and water supply.
DNA analysis of markings and droppings to go with this? Would be nice to compare in 10 years to see if there is some intermingling.
Even their travelling patterns vary. For example the red pack has covered most of their defined territory that they don't stray from where as the white pack tends to push the boundaries of their territory.
I would love to see similar with other pack animals such as painted wolves or single mothers.
and then the fire nation attacked
Makes sense. I very rarely go into other people’s homes as well.
Curious why the white and pink groups have so much "open area"? My guess is that their territory has less "usable" land. Maybe a gorge or something.
This thought brought me to the question? Does each "pack" have the same territory as far as usable land?
The two largest territories are the white and pink, yet they have the most "unused" land?
If each territory is equal in usable land, what would dictate this? Are the packs the same in number? Or is it because of dominance and fighting among the different packs?
Please tell me there isn't a wolf counsel the decides....gerrymandering.
I find it interesting how there doesn't seem to a centralized or preferred spot, but rather the entire territory is relatively evenly covered. You'd think they'd have preferred hunting grounds or game trails or something like that, but I guess not.
edit* actually it looks like Yellow at least has a central hub, but Red is almost completely even. I wonder if the streaks of density are game trails or part of a defined route for grazing animals they prey on. There's an obvious strait line streakiness here and straight lines tend to be uncommon in nature.
Looks to me like the white pack doesn't give one single fuck about pack boundaries
Now if only people would do this by minding their own business
The Ptarmigan's Dilemma. Really good book about evolution and natural phenomena/behaviour like this. Great sections on bear's activity very similar to this, would highly recommend!
r/dataisbeautiful
You can see the white signal scouting on the perifery of the territories. Super cool.
The white wolf pack don't give a fuck bro.
White pack is quite adventurous.
The white line wolf went hella far
I really wonder if the fact that the yellow & green wolve packs are used to encounter more neignbouring packs (being squeezed in the middle) makes them have a different perspective on their enviornment than the others? I mean could they feel more threatened, having "more" neignbours? could they feel more pressure to up their game for resources because of "more" potential rivals?
interesting to think about that.
Next week on gangland
White group just doesn't give a fuck. Going through any group they see fit.
See the seams between the colors? Avoid those places if you don't like stepping in wolf pee.
The hell? These white wolves going on a cross country trip or something?
It seems like white takes some risks.
Awesome! Good one for r/dataisbeautiful !
Wolves trying to take over tamriel
This is actually my husbands family avoiding each other through out the year besides holidays.
White wolf goes where tf he wants
And all I ask it the dude riding my bus doesn’t shove his junk onto me at every stop.
Infinity Dogs
We should enforce open borders for wolves. They seem like nazis. Let's make them pay reparations.
r/colorblindgore
Whenever I see stories/studies like this I always find myself comparing humans to animals. These wolves clearly keep to their own areas for the most part. It’s almost like certain groups of people shouldn’t intertwine with each other, but in today’s world everything is about accepting all. It seems we force cultures to coincide with each other and it doesn’t always workout the greatest.
Each other's territories. Wolves have territories.
Damn, wolves are so racist you'd think they were humans!
The chad white wolf pack vs the virgin red wolfpack
Anyone know if there’s open access data similar to this?
Bet you can find a lot of marked trees at the "borders"
There's something oddly familiar about things from California spreading their tendrils out to the PNW.
when the documentary makers say his area, I will take the word area serios
Yellow wolf pack is basically Israel
White wolf pack has no chill lul!
I'm a bot, *bleep*, *bloop*. Someone has linked to this thread from another place on reddit:
- [/r/circlebroke2] [humans are literally the same as wolves. Jordan Peterson told me so. excuse me while I go piss on my house to mark my territory](https://www.reddit.com/r/circlebroke2/comments/a3trjr/humans_are_literally_the_same_as_wolves_jordan/)
- [/r/dataisbeautiful] [I find this extremely interesting](https://www.reddit.com/r/dataisbeautiful/comments/a3qdiv/i_find_this_extremely_interesting/)
- [/r/dataisbeautiful] [“An image of GPS tracking of multiple wolves in six different packs around Voyageurs National Park shows how much the wolf packs avoid each other's range. Image courtesy of Thomas Gable”](https://www.reddit.com/r/dataisbeautiful/comments/a3v1dd/an_image_of_gps_tracking_of_multiple_wolves_in/)
- [/r/the_pack] [“AN IMAGE OF GPS TRACKING OF MULTIPLE WOLVES IN SIX DIFFERENT PACKS AROUND VOYAGEURS NATIONAL PARK SHOWS HOW MUCH THE WOLF PACKS AVOID EACH OTHER'S RANGE. IMAGE COURTESY OF THOMAS GABLE”](https://www.reddit.com/r/THE_PACK/comments/a3r1sr/an_image_of_gps_tracking_of_multiple_wolves_in/)
- [/r/unpanderers] [“An image of GPS tracking of multiple wolves in six different packs around Voyageurs National Park shows how much the wolf packs avoid each other's range. Image courtesy of Thomas Gable”](https://www.reddit.com/r/UnPanderers/comments/a3vg63/an_image_of_gps_tracking_of_multiple_wolves_in/)
*^(If you follow any of the above links, please respect the rules of reddit and don't vote in the other threads.) ^\([Info](/r/TotesMessenger) ^/ ^[Contact](/message/compose?to=/r/TotesMessenger))*
They dont need walls to know where their territory ends.
r/wolves
Interesting!
\#Respect
Seeing this, I'm reminded of the film The Warriors. Got to get back to Coney boys, we're on our own.
Are the longer, straight lines just glitches in the gps?
I’ve never heard of an LGBT Black Metal before
I like the one adventurous grey wolf who snuck deep into red territory and then beelines back home. I imagine a Romeo and Juliet-esque scenario between him and a red wolf Capulet.
Gang territory
Do these packs exchange members to promote cross breeding?
Got a bold white wolf. One just straight checking red's whole edge there. Unless that's a border of some kind in a similar color.
How do you get into wolf pack? Do you have to be born in it or can you maybe change clan down the line?
What I want to know is why some of those lines are so perfectly straight!
It sort of looks like a wolf head, too, in profile at least if you squint hard enough. Purple is the ear, white and blue the mouth, green has an eye carved out in the middle of it, and red's the neck.
Wolf countries
blue pack rules!
Very cool. Thanks for sharing OP.
The Six Kingdoms.
Green packed is either going to start a war or die off slowly, they're cornered with no room for expansion.
Source?
That white wolf is going places
White pack lowkey scouting into others' areas though
Yellow is in a bad spot of war breaks out
I honestly thought this was a shitty map of the Old World.
White is clearly insanity wolf.
Thank you Thomas, very cool!
It almost looks like a giant multicolored wolf head.
I thought this was some guy drawing the borders of Skyrim.
Red wolves have cardio for days
It looks like a member of the white pack has no problem mingling in blue territory. Like some sort of unaccompanied wolf
The white pack love challenges.
The unseen maps of animals
I believe this is called the competitive exclusion principle. Species that compete, include animals of the same species tend to show these characteristics when living in the same proximity.
Wolves: Mind your business, we'll mind ours.
Humans: Let's fuck some people/places/things up.
0/6 gang hideouts discovered
Just out of curiosity what's the size of each territory?
Meanwhile, gold wolf pack is solo on its own island paradise at the top
Even wolves are scared of wolves.
Roughly looks like a map of the world, esp the right side
There's 1 white wolf who don't give a shit. See the white line on the right
Chad white wolves don't care about your "boundaries"
That's a lot of pee.
I'm curious about how a wolf decides to venture within their territory.
Pink wolf, Blue wolf, and White wolf have the widest spans of territory, but Red wolf, Yellow wolf, and Green wolf are more comprehensive about where they go in their own territory.
The red wolf appears to be on meth.
I'd like to see this crossreferenced with the distance in which a wolf could smell or hear or otherwise detect fellow wolves!
Me when I see people I graduated with at Walmart.
TIL wolves are bad at MSpaint
The white pack doesn’t give a fuck
It's like scandinavian people waiting for the bus.
But I would think there needs to be some interaction so that they don't interbreed in order to keep the gene pools healthy.
Basically like gangs, and the gangsters do tend to display animal-like behaviors. Build the wall
#openborders
The white line is the trader
White just be like "they see me Rollin"
The white bastards would take some liberties like that. Typical.
The white wolf clan also appears to be knowledgeable of his GPS tracker and has drawn a modern art version of a white wolfs face.
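###Markdown
``replace_more`` also accepts a positive ``limit``, which replaces at most that many ``MoreComments`` objects (each replacement costs a network request). The cell below is an illustrative sketch, assuming the same ``submission`` as above; per the PRAW documentation, the method returns the placeholders it did not replace.
###Code
# Replace at most 16 MoreComments placeholders; PRAW returns the ones it skipped
remaining = submission.comments.replace_more(limit=16)
print(f'{len(remaining)} MoreComments objects were not replaced')
###Output
_____no_output_____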
###Markdown
The code blocks above only fetched the top-level comments. If we want the complete ``CommentForest``, including replies at every depth, we need to use the ``.list()`` method.
###Code
# limit=None keeps making requests until every MoreComments is replaced
submission.comments.replace_more(limit=None)
# .list() flattens the forest: top-level comments first, then replies
for comment in submission.comments.list():
    print(comment.body)
###Output
Source: [https://www.facebook.com/VoyageursWolfProject/](https://www.facebook.com/VoyageursWolfProject/)
I thought this was a shit post made in paint before I read the title
Wow, that’s very cool. To think how keen their senses must be to recognize and avoid each other and their territories. Plus, I like to think that there’s one from the white colored clan who just goes way into the other territories because, well, he’s a badass.
That’s really cool. The edges are surprisingly defined.
White wolf is a dick constantly trespassing other's territories.
[Link to Story](https://www.duluthnewstribune.com/news/science-and-nature/4538836-voyageurs-national-park-wolves-eating-beaver-and-blueberries-not)
Cool to imagine that there are similar zones surrounding all these, we just didn't tag those wolves.
You know the white wolf fucked some red's bitch for sure.
It’s wild how they are all roughly the same size.
This what i am gonna show people when they ask for a photo of a sixpack
That's actually awesome.
[deleted]
White Wolf pack is looking for fight
/r/dataisbeautiful
But actually beautiful, not "here's a graph of my heart rate when I went on a date." This is actually gorgeous, informative, and awesome.
I want to know WTF lives here that the wolf keeps avoiding?https://i.imgur.com/T7NrS7F.jpg
I want more data!!! Is the white pack made up of many aggressive wolves so they spread to other territories periodically? Or is it just the one wolf who doesn’t give as much of a fuck? Does a tighter cluster mean a smaller pack or just more territorial? What is the age, gender, and type of wolves that are being tracked?! So many questions, so little information.
/r/misleadingthumbnails minimap of the grand final of the 3v3 Age of Empires 2 tournament
The white pack is drawing a wolf face.
White one tried to be naughty a little by sneaking into red zone just a little bit.
Its amazing how they all seem to be similar in area size.
That one white wolf is Big and Bad.
[removed]
Am I a wolf? If my senses and economic status allowed me to stay so perfectly sequestered from other people, I would without question.
It's content like this that makes reddit great, well done OP.
This is a window into the mind of a wolf. Not only do they have clearly defined ranges, they have clearly defined packs and each wolf must know each other's scent markings. I am blown away.
Also the blue pack is way too cautious.
I found the location on Google Maps. It looks like the green pack's territory covers about 25 sq. miles (larger than San Marino) and also includes the NOvA Far Detector: [https://www.burnsmcd.com/projects/nova-far-detector](https://www.burnsmcd.com/projects/nova-far-detector)
I'd love to see something similar but with Chimps. Who actually wage war, have soldiers, etc.
[Something representing this](https://www.youtube.com/watch?v=a7XuXi3mqYM) (potentially NSFW Chimpanzee cannibalism)
There's a white wolf plotting some shit.
White Clan Wolf: " Let’s do this… LEEROOOOOOOOOOOOOOOOOOOOY JEEEEEENKIIIIIIIIIIINS!"
This research project is called the Voyageurs Wolf Project, and it has a Facebook page associated with it where this map was originally posted. If you're interested in following the project and/or learning more about Wolves, take a look at it!
[https://www.facebook.com/VoyageursWolfProject/](https://www.facebook.com/VoyageursWolfProject/)
This is brilliant!
Wolves fascinate and terrify me in equal measure. Incredible animals with amazing social structures.
The wolves know who each other are.
That one White wolf gets way the fuck into Red's range, like shit I knew I should have asked for directions
Reminds me of the Warrior series about wild cats.
Those white wolves are pretty ballsy
Get these on a live transmitter and put it on a website where people can watch. Instant sports. People will watch these wolves and what they do and root for each color.
White wolf is on an adventure to find itself... Deep man.
Does yellow have the prime territory? They would have to defend incursions from competitors on all sides. I wonder if they are the most badass.
these wolves know how to walk in straight lines very well.
Very interesting. I just wish the white and yellow were seperated by a different color so we can distinguish them better.
Some are adventurous, some are invasive, and some just stay where they are.
This map is in for a great story line of the six packs
The white wolf must listen to manowar
oh wow, that's fascinating
Leaked image of Mount and Blade: Bannerlord factions and their territories
All of the wolves are named Toby
I worked and hung out with that guy. Beautiful country out there, miss it.
scale please?
[STAY AWAY FROM THIS AREA!](https://i.imgur.com/bqc12Be.jpg)
White wolves are the bravest apparently... Those white lines are everywhere
Borders are an imaginary concept made up by humans
see? humans didnt invent the borders
Just before seeing this post I was looking at the map of racial distribution in New York City and I can't help but notice the similarities.
Looks like white privilege to me.:)
White wolf's a hoe
Green and Yellow have it the hardest. I wonder if the spots they're in have the best food and water supply.
DNA analysis of markings and droppings to go with this? Would be nice to compare in 10 years to see if there is some intermingling.
Even their travelling patterns vary. For example the red pack has covered most of their defined territory that they don't stray from where as the white pack tends to push the boundaries of their territory.
I would love to see similar with other pack animals such as painted wolves or single mothers.
and then the fire nation attacked
Makes sense. I very rarely go into other people’s homes as well.
Curious why the white and pink groups have so much "open area"? My guess is that their territory has less "usable" land. Maybe a gorge or something.
This thought brought me to the question? Does each "pack" have the same territory as far as usable land?
The two largest territories are the white and pink, yet they have the most "unused" land?
If each territory is equal in usable land, what would dictate this? Are the packs the same in number? Or is it because of dominance and fighting among the different packs?
Please tell me there isn't a wolf counsel the decides....gerrymandering.
I find it interesting how there doesn't seem to a centralized or preferred spot, but rather the entire territory is relatively evenly covered. You'd think they'd have preferred hunting grounds or game trails or something like that, but I guess not.
edit* actually it looks like Yellow at least has a central hub, but Red is almost completely even. I wonder if the streaks of density are game trails or part of a defined route for grazing animals they prey on. There's an obvious strait line streakiness here and straight lines tend to be uncommon in nature.
Looks to me like the white pack doesn't give one single fuck about pack boundaries
Now if only people would do this by minding their own business
The Ptarmigan's Dilemma. Really good book about evolution and natural phenomena/behaviour like this. Great sections on bear's activity very similar to this, would highly recommend!
r/dataisbeautiful
You can see the white signal scouting on the perifery of the territories. Super cool.
The white wolf pack don't give a fuck bro.
White pack is quite adventurous.
The white line wolf went hella far
I really wonder if the fact that the yellow & green wolve packs are used to encounter more neignbouring packs (being squeezed in the middle) makes them have a different perspective on their enviornment than the others? I mean could they feel more threatened, having "more" neignbours? could they feel more pressure to up their game for resources because of "more" potential rivals?
interesting to think about that.
Next week on gangland
White group just doesn't give a fuck. Going through any group they see fit.
See the seams between the colors? Avoid those places if you don't like stepping in wolf pee.
The hell? These white wolves going on a cross country trip or something?
It seems like white takes some risks.
Awesome! Good one for r/dataisbeautiful !
Wolves trying to take over tamriel
This is actually my husbands family avoiding each other through out the year besides holidays.
White wolf goes where tf he wants
And all I ask it the dude riding my bus doesn’t shove his junk onto me at every stop.
Infinity Dogs
We should enforce open borders for wolves. They seem like nazis. Let's make them pay reparations.
r/colorblindgore
Whenever I see stories/studies like this I always find myself comparing humans to animals. These wolves clearly keep to their own areas for the most part. It’s almost like certain groups of people shouldn’t intertwine with each other, but in today’s world everything is about accepting all. It seems we force cultures to coincide with each other and it doesn’t always workout the greatest.
Each other's territories. Wolves have territories.
Damn, wolves are so racist you'd think they were humans!
The chad white wolf pack vs the virgin red wolfpack
Anyone know if there’s open access data similar to this?
Bet you can find a lot of marked trees at the "borders"
There's something oddly familiar about things from California spreading their tendrils out to the PNW.
when the documentary makers say his area, I will take the word area serios
Yellow wolf pack is basically Israel
White wolf pack has no chill lul!
I'm a bot, *bleep*, *bloop*. Someone has linked to this thread from another place on reddit:
- [/r/circlebroke2] [humans are literally the same as wolves. Jordan Peterson told me so. excuse me while I go piss on my house to mark my territory](https://www.reddit.com/r/circlebroke2/comments/a3trjr/humans_are_literally_the_same_as_wolves_jordan/)
- [/r/dataisbeautiful] [I find this extremely interesting](https://www.reddit.com/r/dataisbeautiful/comments/a3qdiv/i_find_this_extremely_interesting/)
- [/r/dataisbeautiful] [“An image of GPS tracking of multiple wolves in six different packs around Voyageurs National Park shows how much the wolf packs avoid each other's range. Image courtesy of Thomas Gable”](https://www.reddit.com/r/dataisbeautiful/comments/a3v1dd/an_image_of_gps_tracking_of_multiple_wolves_in/)
- [/r/the_pack] [“AN IMAGE OF GPS TRACKING OF MULTIPLE WOLVES IN SIX DIFFERENT PACKS AROUND VOYAGEURS NATIONAL PARK SHOWS HOW MUCH THE WOLF PACKS AVOID EACH OTHER'S RANGE. IMAGE COURTESY OF THOMAS GABLE”](https://www.reddit.com/r/THE_PACK/comments/a3r1sr/an_image_of_gps_tracking_of_multiple_wolves_in/)
- [/r/unpanderers] [“An image of GPS tracking of multiple wolves in six different packs around Voyageurs National Park shows how much the wolf packs avoid each other's range. Image courtesy of Thomas Gable”](https://www.reddit.com/r/UnPanderers/comments/a3vg63/an_image_of_gps_tracking_of_multiple_wolves_in/)
*^(If you follow any of the above links, please respect the rules of reddit and don't vote in the other threads.) ^\([Info](/r/TotesMessenger) ^/ ^[Contact](/message/compose?to=/r/TotesMessenger))*
They dont need walls to know where their territory ends.
r/wolves
Interesting!
\#Respect
Seeing this, I'm reminded of the film The Warriors. Got to get back to Coney boys, we're on our own.
Are the longer, straight lines just glitches in the gps?
I’ve never heard of an LGBT Black Metal before
I like the one adventurous grey wolf who snuck deep into red territory and then beelines back home. I imagine a Romeo and Juliet-esque scenario between him and a red wolf Capulet.
Gang territory
Do these packs exchange members to promote cross breeding?
Got a bold white wolf. One just straight checking red's whole edge there. Unless that's a border of some kind in a similar color.
How do you get into wolf pack? Do you have to be born in it or can you maybe change clan down the line?
What I want to know is why some of those lines are so perfectly straight!
It sort of looks like a wolf head, too, in profile at least if you squint hard enough. Purple is the ear, white and blue the mouth, green has an eye carved out in the middle of it, and red's the neck.
Wolf countries
blue pack rules!
Very cool. Thanks for sharing OP.
The Six Kingdoms.
Green packed is either going to start a war or die off slowly, they're cornered with no room for expansion.
Source?
That white wolf is going places
White pack lowkey scouting into others' areas though
Yellow is in a bad spot of war breaks out
I honestly thought this was a shitty map of the Old World.
White is clearly insanity wolf.
Thank you Thomas, very cool!
It almost looks like a giant multicolored wolf head.
I thought this was some guy drawing the borders of Skyrim.
Red wolves have cardio for days
It looks like a member of the white pack has no problem mingling in blue territory. Like some sort of unaccompanied wolf
The white pack love challenges.
The unseen maps of animals
I believe this is called the competitive exclusion principle. Species that compete, include animals of the same species tend to show these characteristics when living in the same proximity.
Wolves: Mind your business, we'll mind ours.
Humans: Let's fuck some people/places/things up.
0/6 gang hideouts discovered
Just out of curiosity what's the size of each territory?
Meanwhile, gold wolf pack is solo on its own island paradise at the top
Even wolves are scared of wolves.
Roughly looks like a map of the world, esp the right side
There's 1 white wolf who don't give a shit. See the white line on the right
Chad white wolves don't care about your "boundaries"
That's a lot of pee.
I'm curious about how a wolf decides to venture within their territory.
Pink wolf, Blue wolf, and White wolf have the widest spans of territory, but Red wolf, Yellow wolf, and Green wolf are more comprehensive about where they go in their own territory.
The red wolf appears to be on meth.
I'd like to see this crossreferenced with the distance in which a wolf could smell or hear or otherwise detect fellow wolves!
Me when I see people I graduated with at Walmart.
TIL wolves are bad at MSpaint
The white pack doesn’t give a fuck
It's like scandinavian people waiting for the bus.
But I would think there needs to be some interaction so that they don't interbreed in order to keep the gene pools healthy.
Basically like gangs, and the gangsters do tend to display animal-like behaviors. Build the wall
#openborders
The white line is the trader
White just be like "they see me Rollin"
The white bastards would take some liberties like that. Typical.
The white wolf clan also appears to be knowledgeable of his GPS tracker and has drawn a modern art version of a white wolfs face.
It might still be, just with a believable shitpost title included
It IS funny that the scientists used the MS Paint palette.
Considering how wolves mark their territory it might actually be a piss post.
Red is the Australia of Wolf Risk.
Well these wolves do use shit posts to mark their territories, so is a quality post that depends on shitposts.
I thought it was a map of something moving over the U.S. over time...
Wolf 1: "Damn it, Frederick, you can't go into other packs' territory like that! You'll start a wolf war!"
Wolf 2 (alpha af): *lights cigarette, drags a long puff, then flicks it onto the gas can next to a red wolf pissed-on tree. The tree explodes into a fireball.* "I'm counting on it."
It's less about keen senses, and more about copious amounts of piss.
HE IS...THE WHITE WOLF. DAKINGINDANORF
On the top half I think the white line is just a border - it's fatter and in straight lines.
Is this a visual result of dogs peeing on trees?
Because some wolves aren't looking for anything logical, like prey. They won't listen to huffs, barks, growls and howls. Some wolves just want to watch the world burn.
You might even call it...a lone wolf.
I bet a strong smell of wolf piss clearly marks the territory borders...
Those senses are why dogs don't belong in wilderness areas. They like to pee and poop to leave scent marks just like wolves do to mark their territory, and they especially like to do it in a place where they smell something interesting. When they do that, they cover the territorial markers of other animals. It's like going for a walk along the border of North and South Korea and kicking over all of the border markers.
I'm pretty sure that's just Moon Moon getting lost.
Yeah...canines are incredible and smelling and differentiating urine. Other than everyone, who knew?
The WhiteFang clan is aggressive as always, always picking fights with the BloodMoons.
Or could be downs
Well, white things have a long history of claiming territory that doesn't belong to them.
> there’s one from the white colored clan who just goes way into the other territories because, well, he’s a badass.
I'm thinking it's more for reasons of "genetic diversity", if ya know what I mean.
if there's one thing my dog understands it is boundaries
> I like to think that there’s one from the white colored clan who just goes way into the other territories because, well, he’s a badass.
Or has a really bad sense of smell. Maybe a cold.
Strategy: piss on EVERYTHING
They are very good at smelling pee and very good at peeing everywhere.
I mean, they literally piss and shit all over everything on purpose, don't really need keen senses.
They piss on every stone to mark boundaries. They have a great sense of smell. I expect it's easy to avoid the others territory.
It really is.
I saw that guy too. Gives no fucks
Just like how police put up yellow tape around a crime scene and you know to not walk in, wolves leave a yellow tape of pee.
Seriously, I think you’re right. I think there might be an alpha that’s more alpha than the others of the other packs. Basically like the Hulk, don’t fuck with him and he won’t fuck with you. Fuck with him and you’re done.
White is going for the diplomatic victory.
Hes just lookin for the ladies
Looks like the white colored pack has one wolf that just doesn’t give a fuck and goes all the way east
He’s been writing love letters to the orange one and finally had enough of his family and his life and decided to go on an adventure to find his true love in the north.
Because whites invade colors. Ya, we get it.
White and yellow!
I think it might have to do with the reds being pushovers.
It looks like they have less territory and are more cramped and white and yellow venture into it.
Maybe he's just super popular and the other wolves have accepted him into their clique.
Monkeys will go on solo missions into enemy territory to mate.
I wonder if that's what this Lone Wolf is doing.
White wolf is a pokemon go champ
Yeah, it seems to demonstrate that territory boundaries, like human countries, aren't just a construct of our own intelligence, but rather a more innate behaviour of social predators in general.
Right? I expected a lot more overlap on every border.
The "straight" lines inside the borders are what interest me. Are those "Game Trails"?
Wouldn't be surprised if that pack is in an area of the forest with less food so they are forced to hunt in others' territory at times
I'm *pretty* sure the thick white line represents the national border between US and Canada (national park is near the NE edge of Minnesota)...
It's Moon Moon
The white wolves invading the red wolves land... who would have guessed that would happen?
Maybe he just has wares to sell to the other packs.
White wolf: Hi! Want to be my friend?
Typical white wolf privilege.
I was gonna say white wolf gives no fucks
The White Wolf is named Geralt, and he's just an adventurer
He's a migrant worker wolf
It's called white privilege.
*Lone Wolf...
But he respects yellow wolf
Conversely, it seems like the red wolves are very polite
Fucking white privileged wolves
Maybe he is a diplomat or is juvenile looking for a mate.
White wolf has a kick ass name too. It's the moonshadow pack. No wonder they impose all over the place...
Typical Whitey.
The master wolf race, the white wolf.
Red wolf has cardio for days
One man wolf pack.
>wolves eating beaver
( ͡° ͜ʖ ͡°)
Veryinteresting
Oh man, that video has some excellent wildlife footage.
The thought of a wolf snarling at me with blueberry juice all in its mouth sounds horrifying
Thanks for posting!
PSA: open in incognito to bypass the "answer survey to read more" bullshit
That dude went all the way thru purple, look at the upper right
Hey don't call Triss Merigold like that!
>I fucked your bitch, you wolf motherfucker!
Poor green wolves have the smallest territory :(
I wanted to know how large the ranges are, so I compared them with Google Maps satellite images. They're roughly 10-15 km across each, if anyone else was wondering.
Like dicks
Do you get asked to show a picture of a six pack often?
Seven, there is a ~~king~~ wolf ~~in the north~~ across the river
More likely tail.
it would be beautiful to display area as discrete points sized by frequency of occupation. the lines crossing each other over and over again destroys interesting (and meaningful) information.
I wondered the same thing, so I found the location on Google Maps and....nothing. It looks the same as all the territory around it. It's near a highway, but that highway passes straight through their territory and doesnt affect the wolves' movements anywhere else.
humans would be my guess
Or the Phantom of the Opera
Man San Andreas was hard enough WITH cheats on.
Not a fan of diversity?
That’s... insane. I saw this post several places today and was curious about the scale, but I figured we were talking <5 sq miles each, not >25...
>larger than San Marino
What is San Marino and, out of curiosity, why would you choose that as a reference?
Holy shit. That was crazy. Fuck those cannibal chimps. I hope the escapees get their homeboys and retaliate.
Thanks for the link! Good stuff!
White Wolf is defs ShadowClan
Go Team White
[deleted]
This is some straight up "Warriors" shit, but with wolves.
Don't leave us hanging
Which people would those be? I've tried to come up with non-racist translation, but I'm failing.
Borders aren't racist. Packs aren't race based afaik either?
Lmao that’s the Canadian-U.S. border you’re trying to point out.
Now that would be next level trolling.
It seems to check out. [Facebook page with a lot more information here.](https://www.facebook.com/VoyageursWolfProject)
Ummm animals don’t travel in perfectly straight lines over long distances... well maybe birds but not wolves. Is it maybe pinging their location every x amount of time and connecting the dots??
If not, it’s pretty impressive how far some wolves go in a perfectly straight line.
Underrated comment
Low risk, but little prey. +2. Yellow is Europe. Abundance of prey, surrounded by hostile packs. +5
get this man in front of an executive producer...this...instant!
FWIW, I heard that the study that coined the term 'alpha dogs' was found to be a bit wrong when that pack was revisited. If memory serves me well, they found out that the "alphas" turned out to be the parents of the other wolves/dogs. So what we think of as "alpha" behavior is just parenting.
Hi! Jim, from Netflix. You're greenlit for 3 seasons! We look forward to seeing the pilot of "Wolf War" very soon!
It might be a non-alpha female trying to get pregnant (which they are not supposed to do she would be punished for doing this) but the mating drive can be quite strong for them I guess sometimes. Her own alpha male wouldn't mate with her.
Either that, or he's on drugs. My money is on drugs.
Game of Dens
!ThesaurizeThis
> *a red wolf pissed-on tree.*
beautiful. :p
I imagine him being in a Romeo Juliet scenario. Sneaking off to get some forbidden tail.
Wolfare
>Damn it, Frederick,
Frederick: We're werewolves, not swear wolves.
Oh Summer... first wolf war huh
blachsheep 2: blackwolf
!Thesaurizethis !DoTheFandango
https://www.google.com/url?sa=t&source=web&rct=j&url=https://m.youtube.com/watch%3Fv%3DJw0c9z8EllE&ved=2ahUKEwiQ29Tm_4vfAhVJzFQKHQlNCZsQyCkwAHoECAsQBA&usg=AOvVaw2SDijitxjRxb6h5Su393wd
I’d like to see this movie made in the style of A Dog’s Way Home, all happy on the surface, but twisted and dark underneath.
Why does wolf 1 give wolf 2 a name but Frederick is still called wolf 2 after lol
r/prequelmemes talking to each other
Plot twist: Frederick is played by Liam Neeson. No one knows it, but he was once a man, now reincarnated as a wolf. He’s seeking vengeance with a fury that few of the other wolves can even comprehend.
The Grey 2: Wolf War is gonna be AWESOME!
It’s survivors all over again
Played by Willem Defoe
In the styling of Fantastic Mr. Fox
you forgot to narrow his eye lids
Some wolves just want to watch the world burn
Some wolves just want to watch the world burn
They don't use their senses to detect the piss?
You can see it at boundary waters canoe area in Minnesota. Clearly demarcated with pee in the snow and on trees.
The White Wolf has rested long enough.
KINGINDANORF!!
KINGINDANORF
Maybe it’s a few fat white wolves known to most folks as Border Patrol.
Yeah that's the Canadian border
upvoted so more people can DISCOVER THE WOLF TRUTH
What do you expect? They killed his dog...
That white wolf, he just wants to watch the humans...turn.
You bet your ass my dog now owns this whole national forest I'd like to see yours try to take it from him! Oh wait never mind he peed on it it's his now...
And yet when I do this to claim my favorite booth at Applebee's I'm "scaring the children" and deserve a "drunk and disorderly charge."
Not nice!!
And the Kashmir border dispute is due to a lack of urine.
Idk man, don't you think that's a bit of an overstatement? I think its a pretty big leap to connect hunting territories of Wolves to human countries.
huh?
natural mountain ranges, rivers, and coast lines play a huge part in territory boundaries
So explain the similar size of each territory. We could also point out that, unlike humans, they don’t seek to expand territory, but rather only “have” territory to keep an even distribution of available food/water
Oh of course, boggles my mind that some people think otherwise.
Talk about jumping to conclusions...
It’s pee. All pee.
Nice observation! It's an effect of the GPS collar "fix rate.". They vary according to collar and research needs (older collars - lower fix rates generally) . Fix rates between 15 minutes and 15 hours are common in habitat studies, so depending on the behavior of the animal, you can get real long lines connecting the sample locations.
If you looks closely the white wolf paths outside of the normal "area" typically follows the shoreline of a body of water. So, I think you are right.
They are the only pack whose territory is not adjacent to a water source.
Maybe they are actually the Kings of the wolves and they go to others' lands to collect tribute.
You're thinking of the black pack.
100%, I'm no wolf expert but I don't think they're known for walking in straight lines and turning at 90 degree angles.
Maybe it’s a few fat white wolves who are known to most folks as Border Patrol.
That's an unfortunate color choice....
The norther one in the top right corner is the national border (and park boundary, which coincides), the southern white line, which everybody is joking about is just the park boundary. Three of the packs reside entirely outside the national park.
Who invited moon moon?
Lissen chat.
White wolf imperialists 😤
Apparently they eat a lot of beaver
Right?! I don’t know that much about wolves but I thought the distinct boundaries were fascinating! I thought of this sub immediately :)
I think that might be a county line
That's the MN border, not the white wolf line (same for the thick line in the teal and green)
But the green territory is right up along that waterline to the north, it's probably very bountiful hunting grounds.
In fact, that might be why they have less space - each pack takes as much space as it needs, and they need less
##teampink
Poor Orange Wolves to the north are so sparse. I hope the Green Wolves don't invade their land.
Looks like there’s a smaller orangeish one, cuts off though
Fun fact: A housecat's "territory", meaning the area in which they range when outside, is usually about a mile in diameter.
UNLESS they're fixed, when it drops down to less than half that.
...are dicks roughly all the same size?
Well now that he has it, someone better ask
Is that a separate pack or did one crazy yellow wolf cross over?
⠰⡿⠿⠛⠛⠻⠿⣷
⠀⠀⠀⠀⠀⠀⣀⣄⡀⠀⠀⠀⠀⢀⣀⣀⣤⣄⣀⡀
⠀⠀⠀⠀⠀⢸⣿⣿⣷⠀⠀⠀⠀⠛⠛⣿⣿⣿⡛⠿⠷
⠀⠀⠀⠀⠀⠘⠿⠿⠋⠀⠀⠀⠀⠀⠀⣿⣿⣿⠇
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠉⠁
⠀⠀⠀⠀⣿⣷⣄⠀⢶⣶⣷⣶⣶⣤⣀
⠀⠀⠀⠀⣿⣿⣿⠀⠀⠀⠀⠀⠈⠙⠻⠗
⠀⠀⠀⣰⣿⣿⣿⠀⠀⠀⠀⢀⣀⣠⣤⣴⣶⡄
⠀⣠⣾⣿⣿⣿⣥⣶⣶⣿⣿⣿⣿⣿⠿⠿⠛⠃
⢰⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡄
⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡁
⠈⢿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠁
⠀⠀⠛⢿⣿⣿⣿⣿⣿⣿⡿⠟
⠀⠀⠀⠀⠀⠉⠉⠉
I think it eliminates some of the information (as you say), but it gets a different picture across. No visualization can do everything, and this seems to still give a lot of useful information for many purposes.
Definitely a Balrog pit.
I believe diversity is an old, old wooden ship..used in the civil war era.
I went to a list of countries by area and it was the one closest to 25 sq. miles.
[San Marino](https://en.wikipedia.org/wiki/San_Marino) is one of the smallest countries in the world and is completely surrounded by Italy, similar to the Vatican City.
>Borders aren't racist.
bold statement on reddit
It's a joke
Guess we'll never know.
Anyone know how to make Quiche?
Easy karma.. Just open ms paint, go crazy with lines, then make up a title.
r/mspaintisbeautiful
These GPS tracker don’t record continuously to increase the efficiency of their battery. This is basically a set of data points being connected with lines.
EP: "I'd like to produce your wolf movie"
/u/mcjunker: *lights cigarette, drags a long puff, then flicks it onto the gas can he brought and placed next to a desk. The desk explodes into a fireball.* "I'm counting on it."
​
Now is no time to CRY WOLF! This summer, Tom Cruise is A WOLF IN DEATHS CLOTHING.
[Done](https://i.etsystatic.com/14448759/r/il/34afc0/1341759833/il_570xN.1341759833_oi7i.jpg)
Are we blind?! Deploy the executive producers!
Where's Ryan Reynolds when you need him?
So the Chads are actually the Dads?
The research that underlay that old pack theory was done on a bunch of unrelated wolves in captivity. It's as if human sociology were based entirely on a single prison study. Later field studies revealed that, just as you said, wolf packs are families, and the "alpha pair" is just mom and dad.
My understanding was that the issue with the original study was that it was done with wolves in captivity, with no relation to each other, which didn't apply to wolves in the wild. And, as you say, to the extent that there are hierarchies in wild packs, it is on account of family structure.
Sooo... is that why the dad bod fad is happening?
Hey TIL this as well! Loved it.
Yeah, more specifically, "alpha" behavior only presents itself among wolves/dogs in captivity or otherwise domesticated. Like the author of Jaws, the author of this study always regretted publishing his findings because of the volume of misinformation that has spread since.
"Identity theft is not a joke Jim! Millions of families suffer each year!... Michael!"
yeah, because her own alpha male would be her dad. she or he might just be running around trying to get their teen wolf freak on
He's not a coke head, he just really likes the smell.
For fucks sake, Moon Moon, get your act together
Classicist 1: "Bloody it, Town, you can't go into else lades' geographic region like that! You'll leave a Hugo Wolf war!"
Wolf 2 (important af): *returns coffin nail, fall backs a long-term puff of air, then twinkles it onto the fossil fuel can adjacent to a cherry assaulter pissed-on run. The Sir Herbert Beerbohm Tree irrupts into a meteor.* "I'm count on it."
***
^(This is a bot. I try my best, but my best is 80% mediocrity 20% hilarity. Created by OrionSuperman. Check out my best work at /r/ThesaurizeThis)
Eyelid. Obviously, he's rocking an eyepatch.
Otherwise, good correction.
Oh, they use their senses, I'm just saying that given the amounts of pee involved, their sense of smell probably doesn't have to be so keen.
But that could just be drunk Viking fans. They’ll pee on anything.
Maybe it's a bunch of furries.
**AWOOOOOOO**OOOooo
I'm pretty sure there's more than enough on the Indian side
/r/brandnewsentence
[deleted]
Human borders have had a real impact on animal lives. The Iron Curtain in particular resulted in a long area that wildlife could flourish in.
A study found that deer in the Harz mountains in central Germany still avoid crossing the old East-West border, which, as well as landmines, had rather nasty spring-loaded mines mounted on the final fence before the West that could kill a deer from 120 metres away.
Also, since the Schengen Area has eliminated border controls from Poland to Germany, wolves have been turning up in the latter.
How? These are literally Wolf-states...
It's not a leap to say that social animals form social groups. And groupings have boundaries, which is what makes them groups in the first place. You can observe this in everything from mice to chimps.
Well, some borders are wholly arbitrary constructs. Pretty much every US state border, for example.
There are people who think that human society and culture just appeared one day.
Lol that's a border
You're like the autistic sherlock holmes.
Noticed this too. Less water means less prey.
Wolf OCD is a really understudied problem
You’re definitely right that it’s the border BUT you can see examples on this map of the white wolf walking a straight line and then turning at a 90 degree angle in the bottom left. So I wouldn’t use that as an argument point.
To preserve battery life, the GPS tracker may only broadcast its location every few minutes. When you connect the dots you get straight lines.
Edit: according to the [Article](https://www.duluthnewstribune.com/news/science-and-nature/4538836-voyageurs-national-park-wolves-eating-beaver-and-blueberries-not) they take 72 points per day.
Fucking moon moon
[Woof](https://youtu.be/dhh0yqPT0zY)
Did not know that wolves eat blueberries. Didn’t even know they could, since foods like grapes are poisonous to canines
r/Dataisbeautiful might like this too
Interstate wolf, man. He can't be stopped!
Dang you are right, that about had my jaw dropping thinking of the implications. He was all over blues territory though.
That's what she said.
I'll show myself out.
"Grove Street is king! Say it with me niggas, Grove Street is KING! Yeah!"
That entire area has a relatively thin population so the green ones are just getting shafted.
This is exactly what I was thinking. Who needs the space when you’re in such a prime location.
Unsubscribe from cat facts
Having an outdoor housecat that is unaltered is a poor, irresponsible decision, both for reducing feral cat populations and for keeping your pet alive.
Typically. Depends on the gender though.
Hey /u/joppiejoo can you show me a picture of your sixpack?
a different pack
Oh, I didn't know that. thanks!
pardon?
/r/outoftheloop
r/gifrecipes
Red wolf pack leader: *on stump, addressing feral red wolves dressed in vaguely Nazi-ish uniforms* "Brothers! The forest is ours! The inferior wolves shall be destroyed once and for all. And this shall be a world Red in tooth and claw!"
Red wolves: *snarl militantly*
Tom Cruise (dressed as a white wolf with an eyepatch and a bandana): *flies attack helicopter over the parade field, drowning out the red wolves' chants. The missile systems auto lock on the Red wolf pack leader* "I don't think so."
Red wolf pack leader: "Impossible! Wolves can't operate advanced human technology!"
Tom Cruise: "Once you pay $10,000 to reach level 10 in Scientology and fully clear your system of thetans, anything is possible." *launches missile. The red wolf leader screams as he erupts into a fireball the size of a skyscraper.*
*DIRECTED BY MICHAEL BAY*
Directed by Michael Bay, with executive producer John Woo.
r/redditwritesamovie
This is actually a rather obscure horror film from 2003 called *Cry\_Wolf* with Lindy "Canada's most sexily evil redhead" Booth in it.
And the way it feels inside his nose!
Hmmm...
Hmm, I kinda wanna disagree with you there. I think it would require some level of *keen-ness* to be able to tell apart your own pact's piss and other pact's piss. After all, belonging in a different pact wouldn't impact your smell that much, since they are literally in the same geological location and it's also quite likely that they share a small gene pool and thus there wouldn't be much difference (or so we think) in the smell.
Also, if there are copious amounts of piss from numerous different wolves in the forest, then it would require even more keen senses to be able to draw useful information from such a mess.
It's true, piss smells pretty strongly. I'm a scientist.
Exactly how much pee are we talking here?
Just like you can tell Buffalo Bills territory by the heaps of broken tables. Or Eagles territory by the discarded D cell batteries.
And now you've started a religious war. You proud of yourself?
I think this is certainly convincing evidence for the presence of territoriality among pack animals, including humans, but I'd argue it's different to equate that directly to national boundaries. National boundaries aren't based in the same intuitive, biologically-palpable markers, and can be defined quite arbitrarily. Will humans innately know they've crossed into another country if there were no signs, border posts or other markers to indicate it? It can be difficult even to know when we've crossed a county line, or even across private property lines. Signs and border posts are the human equivalents of scent markings, of course - but then if there are no signs, would we even notice the border?
So I think we can say that we have an innate tendency to be territorial, but the exact scale and nature of those territorial boundaries are extremely flexible for us. We're not reliant on physical scent marking, but on highly abstract social processes. The countries we have today would not at all have been intuitive or sensible to humans living 3000 years ago - the idea that communities could exist on such a scale would seem ludicrous. Hell, people in the 13 Colonies did not at all think of each other as living within the same community just because they lived within the same federal country line. But generations pass, narratives are created, and presto, the 13 Colonies become the USA, not just administratively, but intuitively and socially. So it's something that ultimately can be immensely flexible.
Wolves in a pack know each other very well, you haven't even met 99.99% of the people living in your country...
IMO, comparing the way animals tend to be territorial with the very recent invention of national borders is dubious. It suggests that borders are natural, human nature, etc., when the reality is humans have lived without borders far longer than they've lived with them.
Also
>The alternative has been genocide and war many times.
I'm not sure what you mean by that? The alternative to what, borders? I'd argue that borders have caused more genocide and war than the "alternative."
Someone is a bit desperately trying to connect the two things, it seems.
> Many pack and social animals use borders
Many solo animals have defined borders as well, while many social animals don't have defined territories.
> People are no different.
People are very different from wolves, sorry to burst your bubble.
> Respected borders allow us to get along and cooperate. The alternative has been genocide and war many times.
Are they though? In modern times nearly all wars and their victims are happening inside states, with resources and money as the goals, not borders. The IS did not rise in Iraq because the borders were badly designed, nor did Soviet Russia start its domination and rule over eastern Europe because of ill-defined borders. The Mongols didn't conquer the whole world because of an absence of borders, nor was the Holocaust triggered by borders.
I’m not sure about that, in my region at least each state has a unique culture and it’s readily acknowledged by pretty much anyone you talk to.
Are you insinuating that the borders of these wolves' packs aren't wholly arbitrary?
This reminds me of that "hog walls around states" map
Oh my god he's walking all over my SCREEN!
"Well, clearly, the blue part is land..."
Sherlock Holmes is the autistic Sherlock Holmes
....And now I am reading this https://www.veterinarypracticenews.com/obsessive-compulsive-disorder-in-animals/
WolfCD.
That’s why, this December, the NFL is committed to bringing awareness of Wolf OCD to you at home as a part of their charitable, lupine-centric outreach campaign.
I'm not sure, I suspect the tracker pings every so often then just connects the dots.
I have to assume this tracking information comes along with first hand accounts, right? Otherwise, why assume they're eating blueberries, rather than predating upon other animals coming to eat the blueberries (rabbits, woodchucks, etc.)?
I didn't see that in the article, just seemed like an odd leap to go from spending time in the blueberry patches to mean they're definitely eating a bunch of blueberries. It would be like thinking bears really like drinking water when the salmon run because they're spending so much time in rivers.
International line, customs be damned.
Zooms out map
"...oh good lord"
r/unexpectedsanandreas
It’s a baked, pie-like dish.
Thank you for this sub :)
Executive Producer Dick Wolf
!Thesaurizethis
It's actually level OT8.
Reads post, thinks this movie already went down hill, reads Directed by Michael Bay... clever, very clever.
The fact that you didn't put some variant of the line "there's a storm coming" in your fake trailer dishonors us all.
And everything changed, when the red wolves attacked
John Woolf*
Missed a golden opportunity for a "John Woo is actually a wolf wearing a Chinese person's skin" joke
*john awooo
And his dick
It's "pack"
I'm guessing they're mostly going off of https://en.wikipedia.org/wiki/Major_histocompatibility_complex comparison for territoriality (like a lot of species are already known to do to find genetically-distant mates.)
The convenient thing about the MHC is that it's not *just* genetic, it also differs based on what diseases you've encountered in your life, so two members of the same pack will smell more similar (since they're constantly passing diseases back and forth, just like a human family) but members of different packs will smell more different.
So you just have to avoid any area that smells like not-you, histochemically, and you'll be fine.
I agree, but I'm a mere piss enthusiast.
Especially after asparagus.
Trust me, I ate some last night.
As an Eagles fan, who's Dad is from Buffalo, that has spent significant time in the Minnesota Boundary Waters....
Just yes. Yes to this thread
Religious wars are easy to win.
All you have to do is kill all the heathens.
The comparison is about the formation of boundaries. They are not comparing a human customs booth to a bush with wolf piss on it.
Hence the idea of tiered governments. Ideally, you can have a first degree connection with a lot more people in your local neighborhood. By extension, a second degree connection with a good majority of your community. A third degree connection with most of your state. etc. Overly large, separated, centralized governing structures don't exactly work perfectly for people either.
How many billions of wolves are there in a pack?
And? Do you think that national identity is nonexistent? Or that humans developed nation-states entirely independently of the fact that we are social animals?
Yeah this is the obvious point people are missing here. If people lived in small tribes with some kind of boundary that 'outsiders' weren't exactly welcome in, alright that kinda resembles a wolf pack. You look out for and trust the few dozen/hundred people that you know, eat with, hunt with, and see literally everyday.
Taking a nation-state of 300 million people across a continent and saying "this is my wolf pack, totally a natural and biological human inclination" is ridiculous.
Territorial borders of humans are not a recent invention. It’s a universal trait of humans to group into tribes that control a piece of territory, especially since the invention of farming.
With all due respect it’s almost insulting that you would say something so absolutely contrary to reality with such confidence. Who are you trying to fool and why? I genuinely don’t understand.
> Humans have lived without borders far longer than they've lived with them.
Do you have a source for this? I'm curious to read a little more.
On the surface I feel like some of our borders that are in the middle of fields may be more recent, but I'd believe lakes/rivers clearly separated territories between different groups of humans.
Humans, however, are very similar to chimps and they definitely have defined territory and borders
Not really arbitrary; it's constrained by resources, population size and the presence of other packs, as well as the geography of rivers, man-made cities, or obstructions blocking their paths.
Untapped infinite h o g s
Link?
Michael Moore once had a TV show on Fox (seriously) called TV Nation; they had an episode where they looked at pet OCD and related disorders and it was pretty interesting. I particularly remember this dog that had an absolute obsession with a chunk of wood and would carry it everywhere, rub up and all over on it, push it along the ground, etc., non-stop and obsessively. Prozac helped him, iirc.
EDIT - found it:
https://www.youtube.com/watch?v=ujPjkbI42yA
it's one of a couple segments of that show that were kind of awkward though, in that it seemed like they wanted to mock what they were documenting but ended up not getting the 'right' material to make the participants look bad. Another instance like that featured now-famous presenter Louis Theroux exploring commercial crime scene clean-up services.
I hope someone finds this comment at least somewhat interesting
Hitler was right
Somebody make this a sub, RIGHT NOW
r/subsyoufellfor
Surely you can't be serious?
Like pie?
God *damn* it why didn't I think of that?
Sum savage crowd drawing card: *on ambo, addressing savage colorful assailants clad in mistily Nazi-ish furnishes* "Friends! The land is ours! The mediocre mashers shall be blasted past and for all. And this shall be a terrestrial planet Coloured in way and claw!"
Red philanderers: *verbalise militantly*
Tom Search (polished as a river classical scholar with an patch and a hankie): *controls set on heavier-than-air craft ended the procession reply, drowning out the chromatic colour philanderers' mouths. The weapon system body parts machine hold on the Sum assailant take loss leader* "I don't think up so."
Red Wolf load up person: "Impossibility! Canids can't lock late soul technology!"
Tom Travel: "In one case you bear $10,000 to reach out layer 10 in Church of Scientology and in full clean-handed your scheme of thetans, thing is fermentable." *displaces weapon. The colored Hugo Wolf feature jests as he deepens into a globe the separate of a skyscraper.*
*DIRECTED BY ARCHANGEL BAY LAUREL*
***
^(This is a bot. I try my best, but my best is 80% mediocrity 20% hilarity. Created by OrionSuperman. Check out my best work at /r/ThesaurizeThis)
John Awoo?
Ah yes, that’s an old chestnut
....*sigh* username checks out I guess
Maybe they’re warlock wolves
> It's "pack"
Until blue joins up with red and you're fighting a two-front war.
For the Emperor!
I'm saying that customs booths are how we make boundaries. Take those away (or the equivalent cultural item, i.e., ritual tree carvings, flags, etc.) and it's anybody's guess where the boundary's supposed to be. Just as if you removed the pissed-on bushes, the wolves would have no clue as to where the rival packs operate.
Who do you think are closer culturally: a guy who lived his whole life in Seattle and someone from Vancouver, or that same guy from Seattle and someone from Alabama?
So you agree that humans banding together is natural? Yet somehow can’t fathom doing it at a large scale?
More importantly who the hell is upvoting the idiot.
You just see the superficial but fail to see the reasons it happens; the best explanation for this is the concept of increasing returns of violence, which is the whole premise of the book "The Sovereign Individual".
Idk why you feel insulted when you are the one making giant assumptions. Humans are social animals and pretty much every society has some set of rules; that much is true. Which is to say, some people will be outsiders. Those "borders" between the inside and the outside, though, would vary wildly depending on geography, culture, historical context, scarcity, etc. That doesn't mean it's somehow "natural" for human societies to be territorial the way wolves are.
Modern international borders, on the other hand, are products of nation states, which didn't exist until two hundred or so years ago. They have absolutely nothing to do with behaviors exhibited by pack animals. This is the same idiotic line of thought that led to people calling themselves alpha males and all that.
Oh, so just like human borders then?
https://i.imgur.com/h0wYIxJ.png
https://i.kym-cdn.com/photos/images/original/001/319/692/0a2.png
> I hope someone finds this comment at least somewhat interesting
Mission accomplished :)
I am serious! And don't call me Shirley.
It’s okay. I got your back.
I don't know
Best bot ever.
Good Bot
John oWo
Could be! Explains the teleportations!
People who think borders are an archaic construct? I almost disagree with both posters in some ways. "Borders" are an animal instinct that has no real place in an ideal global society.
The reason it happens is because clan kinship groups benefit by uniting under a banner with territory and borders.
[There are human borders which make absolutely no logical sense](https://nl.wikipedia.org/wiki/Baarle-Nassau#/media/File:Baarle-Nassau_-_Baarle-Hertog-nl.svg) - I don't see those wolves starting to accept enclaves in their territory anytime soon...
Not really at all like modern human borders but go off fam
Oh nice. I was expecting Zelda.
[removed]
<3 you too
Because every person shares the same culture and morals.
We're talking about reality, not idealism. Disagreeing _with_ the way the world works is not the same as disagreeing _about_ the way the world works.
But every person within the same borders share the same culture and morals?
Thinking that nation-states are a natural human inclination and comparable to a fuckin' tribe is hilariously dumb.
|
dev/Models/Customer_sequence_Analysis.ipynb | ###Markdown
Sequence Analysis
The objective of this notebook is to:
1. Create sequences from trajectories for customers
2. Load sequences into the R package TraMineR
3. Use TraMineR functions to calculate distances between sequences using TRATE or 2-1-1 costs
4. Cluster the distance matrix using PAM/k-medoids clustering
5. Plot medoid trajectories
###Code
import random
import sys
sys.path.append("..")
import pandas as pd
import geopandas as gpd
import numpy as np
import matplotlib.pyplot as plt
import rpy2
import datetime
from dateutil.relativedelta import relativedelta
from matplotlib.colors import Normalize
import matplotlib.cm as cm
import seaborn as sns
from shapely.geometry import Point
%matplotlib inline
from connect_db import db_connection
# filter annoying warnings from pandas
import warnings
warnings.filterwarnings('ignore')
username='kmohan'
cred_location = '/mnt/data/'+username+'/utils/data_creds_redshift.json.nogit'
db = db_connection.DBConnection(cred_location)
# query for the 1week sample
query = """
select customer_nr, com_locs_new as locations,times_new as times, st_time,en_time, mcc
from tuscany.customer_arrays
where times_new is not null
and st_time >= '2017-06-01 00:00:00' and st_time < '2017-09-01 00:00:00'
and mcc = 262
"""
# drop 'customer_id' to save memory
df_trips = db.sql_query_to_data_frame(query, cust_id=True)
df_trips.head()
df_trips.shape
max_time = max(df_trips['en_time'] - df_trips['st_time'])
import math
math.ceil(max_time.total_seconds()/(60*60*12))
max_time = max(df_trips['en_time'] - df_trips['st_time'])
ncols = math.ceil(max_time.total_seconds()/(60*60*12))
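# 60*60*12 seconds is one 12-hour window, so ncols is the number of
# 12-hour slots needed to cover the longest stay in the data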
columns = np.linspace(1,ncols,ncols)
len(columns)
n_cus = df_trips.shape[0]
len(df_trips)
df_trips['times'] = df_trips['times'].str.split(', ').tolist()
df_trips['locations'] = df_trips['locations'].str.split(', ').tolist()
df_trips.head()
def len_array(row):
return len(row['locations'])
def len_times(row):
return len(row['times'])
cus_nr = df_trips['customer_nr']
com_lens = df_trips.apply(len_array,axis=1)
time_lens = df_trips.apply(len_times,axis=1)
cus_nr = df_trips['customer_nr']
df_lens = pd.DataFrame({'customer_nr':cus_nr,'com_len' : com_lens, 'time_len': time_lens})
df_lens.head()
###Output
_____no_output_____
###Markdown
Monday 0 to Sunday 6
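As a quick, self-contained sanity check of that convention (Python's `weekday()`, which pandas timestamps also use):
###Code
import datetime
# Monday maps to 0 and Sunday maps to 6
print(datetime.date(2017, 6, 5).weekday()) # 2017-06-05 was a Monday -> 0
print(datetime.date(2017, 6, 11).weekday()) # 2017-06-11 was a Sunday -> 6
###Output
_____no_output_____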
###Code
df_lens['diff'] = df_lens['com_len'] - df_lens['time_len']
filt_cus = df_lens[df_lens['diff'] != 1]['customer_nr']
len(filt_cus)
weeks = 10*np.linspace(1,6,6)
days= np.linspace(0,6.5,14)
weeks
columns = np.add.outer(weeks,days).flatten()
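# np.add.outer sums every week label (10, 20, ..., 60) with every half-day
# offset (0.0, 0.5, ..., 6.5), giving a 6x14 grid; flattening it yields the
# 84 column labels, one per 12-hour slot across 6 weeks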
len(columns)
df_sequence = pd.DataFrame(columns=columns,index=cus_nr)
df_sequence.head()
def location_with_max_time(array_like):
return np.bincount(array_like).argmax()
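# Sanity check with hypothetical values: np.bincount counts occurrences of
# each non-negative integer and argmax picks the most frequent one, so
# location_with_max_time([3, 3, 5, 3]) returns 3. Since the series below is
# resampled to 1-minute resolution first, "most frequent" means "most time".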
for i in range(0,len(cus_nr)):
# print(i)
df_row = df_trips[i:i+1]
cus = df_row['customer_nr'][i]
st_wk = df_row['st_time'][i].weekday()
st_hr = df_row['st_time'][i].hour
country = df_row['mcc'][i]
# Initialising all values on the sequence to be country of origin
df_sequence.loc[[cus],columns[:]] = country
times = [0]
minutes = [0]
timestamps = [df_row['st_time'][i]]
cum_mins = np.cumsum(np.array(list(map(int,df_row['times'][i]))))
minutes.extend(np.array(list(map(int,df_row['times'][i]))))
times.extend(np.cumsum(np.array(list(map(int,df_row['times'][i])))))
timestamps.extend(np.datetime64(df_row['st_time'][i]) + cum_mins.astype('timedelta64[m]'))
coms = np.array(df_row['locations'][i])  # the column is named 'locations' (aliased in the SQL query)
tmp_df = pd.DataFrame({'Minutes' : minutes, 'Mins_cum' : times, 'Locations': coms, 'Timestamps':timestamps })
tmp_df['date'] = tmp_df['Timestamps'].dt.date
ts = pd.Series(coms,dtype=np.int64,index=tmp_df['Timestamps'])
ts = ts.resample('1T').bfill()
ts2 = ts.resample('12H').apply(location_with_max_time)
col_idx_st = 2*st_wk + int(st_hr/12)
df_sequence.loc[[cus],columns[col_idx_st:(col_idx_st + len(ts2))]] = ts2.values
df_sequence.loc[:,columns[74:84]].sort_values(by=[columns[74]],ascending=0).head() # sort by a column present in the selection; the original by=[62] referenced a column outside it
df_sequence_trimmed = df_sequence.loc[:,columns[0:73]]
!pwd
df_sequence.to_csv("../../sequences_Tuscany_Aug_20000_sample.csv")
st = df_trips['st_time'][0]
st = np.datetime64(st)
# cum_array0 was undefined in the original; define it here as the cumulative minutes for the first customer
cum_array0 = np.cumsum(np.array(list(map(int, df_trips['times'][0]))))
cum_array0 = cum_array0.astype('timedelta64[m]')
st + cum_array0
df_trips.tail()
df_row = df_trips[873:874]
df_row
df_row['st_time'][873]
int(df_row['st_time'][873].hour/6)
cus = df_row['customer_nr'][873]
st_wk = df_row['st_time'][873].weekday()
country = df_row['mcc'][873]
times = [0]
minutes = [0]
timestamps = [df_row['st_time'][873]]
cum_mins = np.cumsum(np.array(list(map(int,df_row['times'][873]))))
minutes.extend(np.array(list(map(int,df_row['times'][873]))))
times.extend(np.cumsum(np.array(list(map(int,df_row['times'][873])))))
timestamps.extend(np.datetime64(df_row['st_time'][873]) + cum_mins.astype('timedelta64[m]'))
coms = np.array(df_row['locations'][873])
tmp_df = pd.DataFrame({'Minutes' : minutes, 'Mins_cum' : times, 'Locations': coms, 'Timestamps':timestamps })
cus
tmp_df['date'] = tmp_df['Timestamps'].dt.date
tmp_df.head()
ts = pd.Series(coms,dtype=np.int64,index=tmp_df['Timestamps'])
ts.head()
ts = ts.resample('1T').bfill()
ts['2017-08-12']
def location_with_max_time(array_like):
return np.bincount(array_like).argmax()
ts2 = ts.resample('12H').apply(location_with_max_time)
ts2
len(columns)
st_hr = df_row['st_time'][0].hour
2*st_wk + (1 if st_hr >= 12 else 0)  # parentheses needed: otherwise the ternary binds the whole expression
len(ts2)
df_sequence.shape
len(columns[13:(13+59)])
df_sequence.loc[[cus],columns[:]] = 100
df_sequence.loc[[cus],columns[:]]
# counts  # undefined leftover from scratch work; commented out so the cell runs
str(12)+'H'
def str_list_to_int_list(array_like):
return list(map(int,array_like))
def str_to_list(df_trips):
"""
Convert a str (output of the SQL query) into a list of strs
Parameters:
df_trips: DataFrame containing the column 'locations' and 'times'
"""
# replace str of geolocations by a list of ints with geolocation codes
df_trips['locations'] = list(map(str_list_to_int_list,df_trips['locations'].str.split(', ').tolist()))
# replace str of times by a list of ints with minutes spent at each location
df_trips['times'] = list(map(str_list_to_int_list,df_trips['times'].str.split(', ').tolist()))
str_to_list(df_trips)
df_trips.head()
def location_with_max_time(array_like):
"""
Returns the value in the array appearing most frequently
Parameters:
array_like: any array
"""
return np.bincount(array_like).argmax()
def create_sequence_for_individual(i,align_by_day_of_week,window_hrs,country_for_missing,ncols):
# extracting the row for each customer
df_row = df_trips[i:i+1]
cus = int(df_row['customer_nr'][i])
st_wk = df_row['st_time'][i].weekday()
st_hr = df_row['st_time'][i].hour
country = int(df_row['mcc'][i])
seq = [np.nan] * ncols
seq[0] = cus
# Initialising all values on the sequence to be country of origin if set to True
if country_for_missing == True:
seq[1:] = [country]*(ncols-1)
# Creating the Pandas Series object from the list of 'times'
# Initialising the time array
timestamps = [np.datetime64(df_row['st_time'][i])]
# Accumulating times spent at each location to create timestamps
cum_mins = np.cumsum(np.array(list(map(int,df_row['times'][i]))))
timestamps.extend(np.datetime64(df_row['st_time'][i]) + cum_mins.astype('timedelta64[m]'))
# Getting list of locations
locs = np.array(df_row['locations'][i])
# Defining the Pandas Series object
ts = pd.Series(locs,dtype=np.int64,index=timestamps)
# Resampling sequence for the required window size
# 1 minute resolution before resampling to window_hrs as we want to find the location spent maximum time at in the window
ts = ts.resample('1T').bfill()
# Resampling to window_hrs
ts2 = ts.resample(str(window_hrs)+'H').apply(location_with_max_time)
# Identifying the columns to insert into
if align_by_day_of_week == True:
col_idx_st = int(24/window_hrs)*st_wk + int(st_hr/window_hrs)
else:
col_idx_st = int(st_hr/window_hrs)
# Inserting location values into the sequence dataframe
# df_sequence.loc[[cus],columns[col_idx_st:(col_idx_st + len(ts2))]] = ts2.values
seq[col_idx_st:(col_idx_st + len(ts2))] = ts2.values
return seq
# TODO: Save function
def create_sequences(df_trips,align_by_day_of_week=True,window_hrs=3,country_for_missing=True,n_threads=5):
"""
Create a dataframe of aligned sequences for sequence clustering analysis
Parameters:
df_trips: DataFrame containing the column 'locations', 'times', 'st_time','customer_nr','mcc'
align_by_day_of_week: If True, the sequences are aligned by the day of week of arrival. Else, the sequences are aligned
by their respective first day of arrival.
window_hrs: window size for sequence creation in hours. A sequence would contain a location for every 'window_hrs'
from start to end times
country_for_missing: If True, the location for entries in the sequence when the individual wasn't in Italy would be
set to the MCC code of the respective country
n_threads: Number of threads to use in parallel
"""
# importing math for ceiling function
import math
from multiprocessing import Pool
from itertools import repeat
# finding the maximum time spent by an individual to set the length of the sequence
max_time = max(df_trips['en_time'] - df_trips['st_time'])
ncols = math.ceil(max_time.total_seconds()/(60*60*window_hrs))
# If aligning by day of week of arrival, we need additional columns
# as someone could arrive on a Sunday and stay for max_time
if align_by_day_of_week == True:
ncols += 6*math.ceil(24/window_hrs)
# weeks = 10*np.linspace(1,6,6)
# days= np.linspace(0,6.5,14)
# columns = np.add.outer(weeks,days).flatten()
# Initialising the sequence dataframe with NAs
columns = np.linspace(1,ncols,ncols)
cus_nr = df_trips['customer_nr']
# df_sequence = pd.DataFrame(np.nan,columns=columns,index=cus_nr)
# Looping through every customer array to create the aligned sequences data frame
p = Pool(n_threads)
l = [i for i in range(0,len(cus_nr))]
sequences_as_lists = p.starmap(create_sequence_for_individual, zip(l, [align_by_day_of_week]*len(cus_nr), [window_hrs]*len(cus_nr), [country_for_missing]*len(cus_nr), [ncols+1]*len(cus_nr)))
# Converting list of lists into a dataframe
col_names = ['customer_nr'] + list(map(str,np.linspace(1,ncols,ncols)))
df_sequence = pd.DataFrame.from_records(sequences_as_lists,columns=col_names)
# Setting customer number to be the index
df_sequence = df_sequence.set_index('customer_nr')
return df_sequence
sd_seq1 = create_sequences(df_trips,align_by_day_of_week=False,window_hrs=12,country_for_missing=True,n_threads=12)
sd_seq1.head()
sd_seq1.to_csv("/mnt/data/kmohan/sequences_Germans_Summer.csv")
###Output
_____no_output_____
###Markdown
Clustering Sequences using the TraMineR package in R
- loading the dataframe saved above into the R environment
- creating the sequence object
- substitution cost using TRATE
- distance between sequences using OMA
- clustering using PAM/k-medoids
###Code
%load_ext rpy2.ipython
%%R
#install.packages('TraMineR')
install.packages('TraMineRextras')
install.packages('fpc')
install.packages('WeightedCluster')
install.packages('foreach')
install.packages('parallel')
install.packages('doParallel')
%%R
library(cluster)
library(lattice)
library(TraMineR)
library(TraMineRextras)
library(fpc)
library(foreach)
library(parallel)
library(doParallel)
library(WeightedCluster)
%%R
df_seq <- read.csv("../../sequences_Germans_Summer.csv")
agg_seq <- wcAggregateCases(df_seq[,-1])
print(agg_seq)
%%R
unique_seq <- df_seq[agg_seq$aggIndex, -1]
seq_obj <- seqdef(unique_seq, weights = agg_seq$aggWeights)
seq_subcost <- seqcost(seq_obj,method="CONSTANT",cval=2)
%%R
seq_dist <- seqdist(seq_obj,method = "OM",sm=seq_subcost$sm,full.matrix=FALSE)
%%R
seq_dist <- seqdist(seq_obj[2,],refseq=seq_obj[1,],method = "OM",sm=seq_subcost$sm)
%%R
seq_dist
%%R
packageVersion("TraMineR")
getRversion()
gc()
%%R
version
%%R
load("../../seqdist_Tuscany_Aug_10000_sample.RSav")
%%R
## Stats to find number of clusters
get_pam_cluster_stats <- function(dist_matrix,k){
cl <- pam(dist_matrix,k=k,diss=TRUE)
cs <- cluster.stats(dist_matrix,clustering = cl$clustering)
return(c(cs$ch,cs$avg.silwidth ))
}
cl.stats = data.frame(matrix(0,nrow=10,ncol=2)) # sized for k = 2..10 (the original nrow=5 relied on R growing the data frame)
for (i in c(2:10)){
cl.stats[i,] = get_pam_cluster_stats(seq_dist_trate,i)
}
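# Column 1 of cl.stats is the Calinski-Harabasz index and column 2 the
# average silhouette width (both from fpc::cluster.stats); higher values
# of either suggest a better choice for the number of clusters k.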
%%R
plot(x=c(2:10),y=cl.stats[2:10,1],type='l')
%%R
clusterpam_trate <- pam(seq_dist_trate, k=4, diss = TRUE)
%%R
clustering_results = data.frame(ids=df_seq[,1], cluster=clusterpam_trate$clustering)
write.csv(clustering_results,"../../clusters_Tuscany_4_Aug_10000_sample.csv")
%%R
medoids_cus_ids = df_seq[clusterpam_trate$medoids,1]
write.csv(medoids_cus_ids,"../../medoids_Tuscany_4_Aug_10000_sample.csv")
%%R
save(seq_dist_trate,file="../../seqdist_Tuscany_Aug_10000_sample.RSav")
%%R
K<-5
for (k in 1:K){
seqdplot(seq_obj[clusterpam_trate$clustering==k,])
}
###Output
_____no_output_____
###Markdown
Experimenting with nested foreach
###Code
%%R
sim <- function(seq_obj,a,b,sm){
k <- sum(b<=a)
if (k < length(b)){
return (c(rep(NA,k),seqdist(seq_obj[b[(k+1):length(b)],],refseq=seq_obj[a,],method = "OM",sm=sm)))
}else{
return (rep(NA,k))
}
}
%%R
no_cores <- 8
registerDoParallel(makeCluster(no_cores))
N <- dim(seq_obj)[1]
G <- 10000
avec <- c(1:N)
seq_dist <- foreach(a=avec, .combine='cbind') %:%
foreach(b=c(1:ceiling(N/G)), .combine='c',.packages = c("TraMineR")) %dopar% {
sim(seq_obj,a,c((G*(b-1)+1):min((G*b),N)),seq_subcost$sm)
}
stopImplicitCluster()
%%R
dim(seq_obj)
%%R
simple_sim <- function(a,b){
return (10*a+b)
}
%%R
no_cores <- detectCores()
registerDoParallel(makeCluster(no_cores))
bvec <- c(1:10000)
avec <- c(1:1000000)
x <- foreach(a=avec, .combine='c') %dopar% simple_sim(a, 10)
stopImplicitCluster()
%%R
no_cores <- detectCores()
registerDoParallel(makeCluster(no_cores))
no_cores
###Output
_____no_output_____
###Markdown
Visualization and descriptives on cluster results
###Code
df_med = pd.read_csv("../../mediods_5_Jul_Aug_10000_sample.csv")
med_cus_nos = np.array(df_med.iloc[:,1])
med_cus_nos
sum(df_trips['customer_nr'] == med_cus_nos[2])
df_med.merge(df_trips,how='inner', left_on='x', right_on='customer_nr')
# Load maps data
# load data from TPT
regions = r"/mnt/data/shared/Boundaries regions and municipalities Italy 2016/Reg2016_WGS84_g/Reg_2016_WGS84_g.shp"
provinces = r"/mnt/data/shared/Boundaries regions and municipalities Italy 2016/CMProv2016_WGS84_g/CMprov2016_WGS84_g.shp"
municipalities = r"/mnt/data/shared/Boundaries regions and municipalities Italy 2016/Com2016_WGS84_g/Com2016_WGS84_g.shp"
new_reg = r"/mnt/data/shared/ITA_shapefiles/Tus_28districts.shp"
# important cities
important_cities_file = r"/mnt/data/shared/important_cities.csv"
gdf_mun = gpd.read_file(municipalities)
gdf_mun['geometry'] = gdf_mun['geometry'].to_crs(epsg=4326)
def plot_trajectory(list_of_comunes=False):
'''
Parameters:
list_of_comunes: list of pro_com (as ints)
'''
# comune centroids
df_centroids = pd.read_csv(r"/mnt/data/shared/comune_centroids.csv")
fig = plt.figure(figsize=(15, 15))
ax = plt.subplot(1,1,1)
gdf_mun.plot(ax=ax, color='white', edgecolor='gray', alpha=0.5)
if list_of_comunes is not False:
trip = pd.DataFrame(list_of_comunes, columns=['PRO_COM'])
trip = trip.merge(df_centroids, how='inner', left_on='PRO_COM', right_on='pro_com')
plt.plot(trip['lat'], trip['lon'], '-o')
plt.axis('off')
# query for the 1week sample
query = """
select customer_nr,com_locs_new from tuscany.customer_arrays
where customer_nr in (6830799, 5955872, 1179935, 6804043, 1418535)
"""
# drop 'customer_id' to save memory
df_clusters = db.sql_query_to_data_frame(query, cust_id=True)
df_clusters['com_locs_new'] = df_clusters['com_locs_new'].str.split(', ').tolist()
for i in range(0,5):
plot_trajectory(list(map(int,df_clusters.iloc[i,1])))
# comune centroids
df_centroids = pd.read_csv(r"/mnt/data/shared/comune_centroids.csv")
fig = plt.figure(figsize=(15, 15))
ax = plt.subplot(1,1,1)
gdf_mun.plot(ax=ax, color='white', edgecolor='gray', alpha=0.5)
trip = pd.DataFrame(list(map(int,df_clusters.iloc[0,1])), columns=['PRO_COM'])
trip = trip.merge(df_centroids, how='inner', left_on='PRO_COM', right_on='pro_com')
plt.plot(trip['lat'], trip['lon'], '-o')
trip = pd.DataFrame(list(map(int,df_clusters.iloc[1,1])), columns=['PRO_COM'])
trip = trip.merge(df_centroids, how='inner', left_on='PRO_COM', right_on='pro_com')
plt.plot(trip['lat'], trip['lon'], '-o')
trip = pd.DataFrame(list(map(int,df_clusters.iloc[2,1])), columns=['PRO_COM'])
trip = trip.merge(df_centroids, how='inner', left_on='PRO_COM', right_on='pro_com')
plt.plot(trip['lat'], trip['lon'], '-o')
trip = pd.DataFrame(list(map(int,df_clusters.iloc[3,1])), columns=['PRO_COM'])
trip = trip.merge(df_centroids, how='inner', left_on='PRO_COM', right_on='pro_com')
plt.plot(trip['lat'], trip['lon'], '-o')
trip = pd.DataFrame(list(map(int,df_clusters.iloc[4,1])), columns=['PRO_COM'])
trip = trip.merge(df_centroids, how='inner', left_on='PRO_COM', right_on='pro_com')
plt.plot(trip['lat'], trip['lon'], '-o')
plt.axis('off')
###Output
_____no_output_____
###Markdown
Experimenting with parallel processing below
###Code
from multiprocessing import Pool
from itertools import repeat
X = 0
def f(x, y, z):
a = x*y*z
b = x+y+z
return list([a,b])
if __name__ == '__main__':
p = Pool(5)
l =[i for i in range(0,10)]
y = p.starmap(f, zip([1,2,3], [4,5,6],[7,8,9]))
products, sums = map(list, zip(*y))  # unpack the per-call [a, b] results; the original 'map(,y)' was a syntax error
from itertools import product
names = ['Brown', 'Wilson', 'Bartlett', 'Rivera', 'Molloy', 'Opie']
print(product(names, repeat=2))
print(pd.DataFrame({'col1': [1], 'col2' : [2]}))
# set these as plain assignments (the original comma-separated form was a syntax error)
align_by_day_of_week = True
window_hrs = 3
country_for_missing = True
ncols = df_sequence.shape[1] + 1
a = [True]*10
a
def create_sequence_for_individual(i,align_by_day_of_week,window_hrs,country_for_missing,ncols):
# extracting the row for each customer
df_row = df_trips[i:i+1]
cus = int(df_row['customer_nr'][i])
st_wk = df_row['st_time'][i].weekday()
st_hr = df_row['st_time'][i].hour
country = int(df_row['mcc'][i])
seq = [np.nan] * ncols
seq[0] = cus
# Initialising all values on the sequence to be country of origin if set to True
if country_for_missing == True:
seq[1:] = [country] * (ncols-1)
# Creating the Pandas Series object from the list of 'times'
# Initialising the time array
timestamps = [np.datetime64(df_row['st_time'][i])]
# Accumulating times spent at each location to create timestamps
cum_mins = np.cumsum(np.array(list(map(int,df_row['times'][i]))))
timestamps.extend(np.datetime64(df_row['st_time'][i]) + cum_mins.astype('timedelta64[m]'))
# Getting list of locations
locs = np.array(df_row['locations'][i])
# Defining the Pandas Series object
ts = pd.Series(locs,dtype=np.int64,index=timestamps)
# Resampling sequence for the required window size
# 1 minute resolution before resampling to window_hrs as we want to find the location spent maximum time at in the window
ts = ts.resample('1T').bfill()
# Resampling to window_hrs
ts2 = ts.resample(str(window_hrs)+'H').apply(location_with_max_time)
# Identifying the columns to insert into
if align_by_day_of_week == True:
col_idx_st = int(24/window_hrs)*st_wk + int(st_hr/window_hrs)
else:
col_idx_st = int(st_hr/window_hrs)
# Inserting location values into the sequence dataframe
# df_sequence.loc[[cus],columns[col_idx_st:(col_idx_st + len(ts2))]] = ts2.values
seq[col_idx_st:(col_idx_st + len(ts2))] = ts2.values
return seq
def create_sequences(df_trips,align_by_day_of_week=True,window_hrs=3,country_for_missing=True,n_threads=5):
"""
Create a dataframe of aligned sequences for sequence clustering analysis
Parameters:
df_trips: DataFrame containing the column 'locations', 'times', 'st_time','customer_nr','mcc'
align_by_day_of_week: If True, the sequences are aligned by the day of week of arrival. Else, the sequences are aligned
by their respective first day of arrival.
window_hrs: window size for sequence creation in hours. A sequence would contain a location for every 'window_hrs'
from start to end times
country_for_missing: If True, the location for entries in the sequence when the individual wasn't in Italy would be
set to the MCC code of the respective country
n_threads: Number of threads to use in parallel
"""
# importing math for ceiling function
import math
from multiprocessing import Pool
from itertools import repeat
# finding the maximum time spent by an individual to set the length of the sequence
max_time = max(df_trips['en_time'] - df_trips['st_time'])
ncols = math.ceil(max_time.total_seconds()/(60*60*window_hrs))
# If aligning by day of week of arrival, we need additional columns
# as someone could arrive on a Sunday and stay for max_time
if align_by_day_of_week == True:
ncols += 6*math.ceil(24/window_hrs)
# weeks = 10*np.linspace(1,6,6)
# days= np.linspace(0,6.5,14)
# columns = np.add.outer(weeks,days).flatten()
# Initialising the sequence dataframe with NAs
columns = np.linspace(1,ncols,ncols)
cus_nr = df_trips['customer_nr']
df_sequence = pd.DataFrame(np.nan,columns=columns,index=cus_nr)
# Looping through every customer array to create the aligned sequences data frame
p = Pool(n_threads)
l = [i for i in range(0,len(cus_nr))]
sequences_as_lists = p.starmap(create_sequence_for_individual, zip(l, [align_by_day_of_week]*len(cus_nr), [window_hrs]*len(cus_nr), [country_for_missing]*len(cus_nr), [ncols+1]*len(cus_nr)))
return sequences_as_lists
sd_seq1 = create_sequences(df_trips,align_by_day_of_week=True,window_hrs=3,country_for_missing=True,n_threads=8)
sd_seq1
max_time = max(df_trips['en_time'] - df_trips['st_time'])
ncols = math.ceil(max_time.total_seconds()/(60*60*window_hrs))
# If aligning by day of week of arrival, we need additional columns
# as someone could arrive on a sunday and stay for max_time
if align_by_day_of_week == True:
ncols += 6*math.ceil(24/window_hrs)
create_sequence_for_individual(1,align_by_day_of_week=True,window_hrs=12,country_for_missing=True,ncols=ncols+1)
seq = [np.nan] * (ncols+1)
seq[0] = 1
seq[1:] = [2]*ncols
seq
sd_seq1
import py_common_subseq
df_seq = pd.read_csv("/mnt/data/kmohan/TPT_tourism/new_codebase/src/models/sequence_analysis/data/sequences/sequences_chinese_pre-summer.csv")
df_seq.head()
###Output
_____no_output_____ |
tutorials/getting_started/1b_nlp_queries.ipynb | ###Markdown
Introduction to FBL: Part 1(b): Using Natural Language Queries to Explore Datasets
This example demonstrates how to use NLP queries. We will start with the FlyCircuit dataset, and then show some capabilities for the Hemibrain dataset. We will be using the `executeNLPquery` method to make our queries. Equivalently, a query can be passed to the query bar located at the bottom of the NeuroNLP window.

Query Rules
Let us now explain the rules for constructing these queries. Here is an easy way to construct your NLP queries. Your queries should start with a verb; the verbs supported right now are:
* **show**: clear the workspace and then show the queried neurons,
* **add**: add to the workspace the neurons queried,
* **remove**: remove from the workspace the queried neurons,
* **keep**: keep in the workspace only the neurons that meet the criterion of the query,
* **hide**: hide the neurons that meet the criterion of the query (this does not remove them from the workspace, but reduces their visibility),
* **pin**: pin the neurons that meet the criterion of the query. Pinned neurons are automatically highlighted, and cannot be removed by the "trash can" button on top of the NeuroNLP window,
* **unpin**: unpin the neurons that meet the criterion of the query,
* **color**: color the neurons that meet the criterion of the query with a user-defined color (can be a hex color code, e.g., FF0000 for red), or [these predefined colors](https://github.com/fruitflybrain/ffbo.neuroarch_nlp/blob/master/neuroarch_nlp/data/defaults.pyL2),
* **clear**: clear up the workspace, removing all neurons and synapses.

Next, we explain the rules for defining the criterion of the query, using the verb *show* as an example. The queries will be based on the Hemibrain dataset.
* **show *cell-type* neurons**: Shows the neurons of the cell type. Example: `show PEG neurons`, or simply `show peg`.
* **show *$string$* neurons**: Shows neurons with a name that contains the string. Example: `show $PEG$ neurons`, or simply `show $PEG$`, will query any neuron whose name contains the string *PEG*.
* **show */rstring/r* neurons**: Shows neurons whose name matches the regular expression `string` (this requires some knowledge of how the neurons are named in each dataset). Example: `show /r(.*)PEG(.*)R1(.*)/r neurons`, or simply `show /r(.*)PEG(.*)R1(.*)/r`, will show the PEG neurons that innervate PB glomerulus R1.
* **show neurons in|that innervate|that arborize in *neuropil/subregion***: Shows neurons that have output or input in a neuropil or a subregion of a neuropil. Examples: `show neurons in ellipsoid body`, or using abbreviations `show neurons in EB`; `show PEG neurons that arborize in PB glomerulus r9`; `show neurons that innervate right al and right mb`. Note that this is different from `show neurons that innervate right al or right mb`.
* **show *local* neurons in *neuropil***: Shows the neurons that have only inputs and outputs within the neuropil (note that due to lack of data in some datasets, some neurons are only traced in one neuropil and are thus classified as local neurons by default). Example: `show local neurons in ellipsoid body`.
* **show neurons with|that have inputs|outputs in *neuropil/subregion***: More specific than the previous query on the inputs or outputs. Examples: `show neurons with inputs in AME`; `show $R3w$ neurons that have outputs in EB`; `show neurons with inputs in right antennal lobe and outputs in right lateral horn`, or equivalently `show neurons projecting from right antennal lobe to right lateral horn`; `show neurons that connect right AL and right MB`. The last one includes both the neurons that have inputs in AL and outputs in MB, and those that have inputs in MB and outputs in AL.
* **show neurons *presynaptic|postsynaptic to***: Shows the neurons that are presynaptic or postsynaptic to the neurons defined after the word to. Examples: `show neurons presynaptic to TuBu05`; `show $aMe$ presynaptic to KCs that innervate alpha'1 compartment`; `show DAN postsynaptic to MBONs with at least 30 synapses`.
* **show *presynaptic|postsynaptic* neurons**: Shows the neurons that are presynaptic or postsynaptic to the neurons already in the workspace. Examples: `show presynaptic neurons`; `show postsynaptic neurons with at least 10 synapses`; `show postsynaptic MBON that innervate gamma lobe with at least 5 synapses`.
* **show *neurotransmitter* neurons**: (currently mainly works in the FlyCircuit, L1EM and Medulla datasets) Shows the neurons that express the neurotransmitter. Examples: `show GABAergic neurons in EB`; `show cholinergic presynaptic neurons`; `show glutamatergic local neurons in AL`.

Other short-hands:
* **show */:referenceId:[5813014882, 912147912, 880875861]***: Shows the neurons whose `referenceId` in the original dataset is in the list. It can be used similarly to $ $ and regular expressions, and combined with other types of criteria. Examples: Hemibrain: `show */:referenceId:[5813014882, 912147912, 880875861]`; FlyCircuit: `show */:referenceId:[VGlut-F-000001, Cha-F-100201]`.

**Coloring**: if no criteria are specified, the color will be applied to the neurons *added* in the most recent query. For example, if you query `show A neurons`, then `add B neurons`, `color red` will color B neurons red. `color A neurons 0000FF` will then color A neurons blue.

Let us execute these example queries one by one:

The Hemibrain Dataset
For the examples in this subsection, select "Hemibrain" after pressing the "Create FBL Workspace" button. Then change the kernel of this notebook (see [this notebook](1_introduction.ipynb) if you are not aware of why and how to do it). Execute the following cells one by one (note that you can type the queries into the query bar at the bottom of the NeuroNLP window to run the same query). We do not explain the returned variable `res` in the following examples. Please see [this notebook](1c_query_results.ipynb) for more information on the returned values of the queries. If you are running the NeuroArch server locally and using one of the Hemibrain databases from the [Dataset Repository](https://github.com/FlyBrainLab/datasets), the coordinates of the neuropil mesh currently require a fix, done by uncommenting the code on the last three lines.
###Code
my_client = fbl.get_client()
# my_client.x_scale = 1./8
# my_client.y_scale = 1./8
# my_client.z_scale = 1./8
###Output
_____no_output_____
###Markdown
We use this function to retrieve the client object we generate when we create a workspace. If the code above fails, it is likely that you have not connected this notebook to the workspace you generated by pressing the "Create FBL Workspace" button; you can go back to the [previous notebook](1_introduction.ipynb) to see how to connect an existing notebook to a newly created workspace.

Name-based Queries
In the Hemibrain dataset, we can typically query by the type of the neurons, such as:
###Code
res = my_client.executeNLPquery('show PEG neurons')
###Output
_____no_output_____
###Markdown
For neuron types that span multiple words, try putting an underscore between the words; for example, for $\alpha/\beta$ lobe KCs:
###Code
res = my_client.executeNLPquery('show alpha_beta_kc')
###Output
_____no_output_____
###Markdown
Another quick way to find neurons whose name contains a string is to use the \\$\\$ syntax. The following query searches for all neurons whose name contains the string 'MBON'.
###Code
res = my_client.executeNLPquery('show $MBON$')
###Output
_____no_output_____
###Markdown
We can also use a regular expression to filter on the names of the neurons. For example, the following query asks for neurons whose names start with 'R3w_b' and have a trailing 'R'. In the Hemibrain naming scheme, this means the R3w_b ring neurons that have their cell bodies on the right hemisphere.
###Code
res = my_client.executeNLPquery('show /r(.*)R3w(.*)_R(.*)/r')
###Output
_____no_output_____
###Markdown
If we want to, we can load multiple neurons using their Hemibrain Body IDs from the original neuPrint database. Below, we will load three neurons using their Hemibrain Body IDs:
###Code
res = my_client.executeNLPquery('show /:referenceId:[5813014882, 912147912, 880875861]')
###Output
_____no_output_____
###Markdown
Arborization-based Queries
We can also make arborization-based queries. For example, let us show all neurons that have arborizations in the right accessory medulla (aMe):
###Code
res = my_client.executeNLPquery('show neurons in the right accessory medulla')
# We also support using abbreviations of the neuropils, such as
# res = my_client.executeNLPquery('show neurons in the right ame')
###Output
_____no_output_____
###Markdown
We can also make more specific queries. For example, we can show neurons that have input sites in the NEUROPIL EB and output sites in the SUBREGION PB glomerulus R3.
###Code
res = my_client.executeNLPquery('show neurons that have inputs in EB and outputs in PB glomerulus R3')
###Output
_____no_output_____
###Markdown
we can also write:
###Code
res = my_client.executeNLPquery('show neurons that have dendrites in EB and axons in PB glomerulus R3')
###Output
_____no_output_____
###Markdown
Combining name-based and arborization-based queries, we query for EPG neurons that have output sites in PB glomerulus L8 or PB glomerulus R2.
###Code
res = my_client.executeNLPquery('show EPG neurons that have outputs in PB glomerulus L8 or PB glomerulus R2')
###Output
_____no_output_____
###Markdown
For a list of neuropils and subregions that can be addressed in arborization-based queries, see [the Brain Regions](https://neuprint.janelia.org/help/brainregions) defined in the Hemibrain dataset.

Synaptic Partners-based Queries
Queries can be based on synaptic partners. For example, we can show some neurons and then add their postsynaptic partners.
###Code
res = my_client.executeNLPquery('show TuBu05')
res = my_client.executeNLPquery('color 00FF00')
res = my_client.executeNLPquery('add postsynaptic neurons')
res = my_client.executeNLPquery('color yellow')
###Output
_____no_output_____
###Markdown
Combining with the other types of criteria, we can show EPG (EB-PB-LAL) neurons that are presynaptic to PEN (PB-EB-NO) neurons that have input sites in PB glomerulus R7, where the connections have at least 20 synapses. Then we add their postsynaptic PEG neurons with at least 10 synapses.
###Code
res = my_client.executeNLPquery('show EPG presynaptic to PEN that have input in PB glomerulus R7 with at least 20 synapses')
res = my_client.executeNLPquery('color lime')
res = my_client.executeNLPquery('add postsynaptic PEG with at least 10 synapses')
res = my_client.executeNLPquery('color yellow')
###Output
_____no_output_____
###Markdown
The FlyCircuit Dataset
In this second part of the tutorial, we will instead focus on the FlyCircuit dataset. Close your Hemibrain workspace and initialize a FlyCircuit workspace. Then, create a reference to the workspace client:
###Code
my_client = fbl.get_client()
###Output
_____no_output_____
###Markdown
The queries below are self-explanatory:
###Code
res = my_client.executeNLPquery('show neurons in the ellipsoid body')
###Output
_____no_output_____
###Markdown
For this query, we can also write:
###Code
res = my_client.executeNLPquery('show neurons in the eb')
res = my_client.executeNLPquery('show neurons in central complex')
res = my_client.executeNLPquery('remove neurons in fan-shaped body')
res = my_client.executeNLPquery('add neurons projecting from antennal lobe to mushroom body')
res = my_client.executeNLPquery('keep neurons with outputs in lateral horn and inputs in mushroom body')
res = my_client.executeNLPquery('show dopaminergic neurons in right medulla')
###Output
_____no_output_____
###Markdown
Supported Neuropils
The names of the neuropils supported for searches and their abbreviations are listed below. The ones that exist on both sides can be addressed separately, for example, using left al and right al.

| Neuropil Name | Abbreviation |
|---------------|--------------|
| Antennal Lobe | AL |
| Antennal Mechanosensory and Motor Center | AMMC |
| Caudalcentral Protocerebrum | CCP |
| Caudalmedial Protocerebrum | CMP |
| Caudal Ventrolateral Protocerebrum | CVLP |
| Dorsolateral Protocerebrum | DLP |
| Dorsomedial Protocerebrum | DMP |
| Ellipsoid Body | EB |
| Fanshaped Body | FB |
| Frontal Superpeduncular Protocerebrum | FSPP |
| Inferior Dorsofrontal Protocerebrum | IDFP |
| Inner Dorsolateral Protocerebrum | IDLP |
| Lateral Horn | LH |
| Lobula | Lob |
| Lobula Plate | LoP |
| Mushroom Body | MB |
| Medulla | Med |
| Noduli | Nod |
| Optic Glomerulus | OG |
| Optic Tubercle | OPTU |
| Superior Dorsofrontal Protocerebrum | SDFP |
| Subesophageal Ganglion | SOG |
| Superpeduncular Protocerebrum | SPP |
| Ventrolateral Protocerebrum | VLP |
| Ventromedial Protocerebrum | VMP |

The Larva L1EM Dataset
In the third part of the tutorial, we will focus on the Larva L1EM dataset. Close your previous workspaces and initialize a larva (l1em) workspace. Then, create a reference to the workspace client:
###Code
my_client = fbl.get_client()
res = my_client.executeNLPquery('show broad LN')
res = my_client.executeNLPquery('add postsynaptic uPN')
res = my_client.executeNLPquery('show dopaminergic neurons in MB')
res = my_client.executeNLPquery('color yellow')
res = my_client.executeNLPquery('show octopaminergic neurons in MB')
res = my_client.executeNLPquery('color red')
res = my_client.executeNLPquery('add postsynaptic KC')
res = my_client.executeNLPquery('color white')
res = my_client.executeNLPquery('add cholinergic postsynaptic MBON')
res = my_client.executeNLPquery('color blue')
###Output
_____no_output_____ |
notebooks/03_numpy/03-Numpy-PyConES2018.ipynb | ###Markdown
Numpy

Numpy provides Python with a new data container, `ndarray`s, along with specialized functionality to manipulate them efficiently. Talking about data manipulation in Python is synonymous with Numpy, and practically the entire scientific Python ecosystem is built on top of Numpy. Let's say that Numpy is the brick that has made it possible to raise buildings as solid as Pandas, Matplotlib, Scipy, scikit-learn,...

**Index**
* [Why a new data container?](%C2%BFPor-qu%C3%A9-un-nuevo-contenedor-de-datos?)
* [Data types](Tipos-de-datos)
* [Creating `numpy` arrays](Creaci%C3%B3n-de-numpy-arrays)
* [Most common operations available](Operaciones-disponibles-m%C3%A1s-t%C3%ADpicas)
* [Metadata and anatomy of an `ndarray`](Metadatos-y-anatom%C3%ADa-de-un-ndarray)
* [Indexing](Indexaci%C3%B3n)
* [Handling special values](Manejo-de-valores-especiales)
* [Subarrays, views and copies](Subarrays,-vistas-y-copias)
* [Operations between numpy arrays](Operaciones-entre-numpy-arrays)
* [Broadcasting](Broadcasting)
* [Mathematical functions, universal functions (*ufuncs*) and vectorization](Funciones-matem%C3%A1ticas,-funciones-universales-ufuncs-y-vectorizaci%C3%B3n)

Playground
* [Statistics](Estad%C3%ADstica)
* [Sorting, searching and counting](Ordenando,-buscando-y-contando)
* [Polynomials](Polinomios)
* [Linear algebra](%C3%81lgebra-lineal)
* [Manipulating `ndarray`s](Manipulaci%C3%B3n-de-ndarrays)
* [Modules of interest within numpy](M%C3%B3dulos-de-inter%C3%A9s-dentro-de-numpy)

Why a new data container?

In Python we already have, out of the box, several data containers: lists, tuples, dictionaries, sets,... so why add one more? For convenience, despite the loss of flexibility. It is a trade-off.
* More efficient memory usage: For example, a list can contain objects of different types, which forces Python to store type information for every element contained in the list. An `ndarray`, on the other hand, holds homogeneous types, that is, all the elements are of the same type, so the type information only needs to be stored once regardless of how many elements the `ndarray` has.***(image by Jake VanderPlas, taken [from GitHub](https://github.com/jakevdp/PythonDataScienceHandbook)).***
* Faster: For example, with a list made up of elements of different types, Python must do extra work to check whether the types are compatible with the operations we are performing. When we work with an `ndarray` we already know this up front and operations can be more efficient (besides, much of the functionality is implemented in C, C++, Cython, Fortran).
* Vectorized operations
* Extra functionality: Many linear algebra operations, fast Fourier transforms, basic statistics, histograms,...
* More convenient element access: More advanced indexing than with the regular Python types
* ...

Memory usage
###Code
# AVISO: SYS.GETSYZEOF NO ES FIABLE
lista = list(range(5_000_000))
arr = np.array(lista, dtype=np.uint32)
print("5 millones de elementos")
print(sys.getsizeof(lista))
print(sys.getsizeof(arr))
print()
lista = list(range(100))
arr = np.array(lista, dtype=np.uint8)
print("100 elementos")
print(sys.getsizeof(lista))
print(sys.getsizeof(arr))
###Output
_____no_output_____
###Markdown
Velocidad de operaciones
###Code
a = list(range(1_000_000))
%timeit sum(a)
print(sum(a))
a = np.array(a)
%timeit np.sum(a)
print(np.sum(a))
###Output
_____no_output_____
###Markdown
Operaciones vectorizadas
###Code
# Suma de dos vectores elemento a elemento
a = [1, 1, 1]
b = [3, 4, 3]
print(a + b)
print('Fail')
# Suma de dos vectores elemento a elemento
a = np.array([1, 1, 1])
b = np.array([3, 4, 3])
print(a + b)
print('\o/')
###Output
_____no_output_____
###Markdown
Funcionalidad más conveniente
###Code
# suma acumulada
a = list(range(100))
print([sum(a[:i+1]) for i in a])
a = np.array(a)
print(a.cumsum())
###Output
_____no_output_____
###Markdown
Acceso a elementos más conveniente
###Code
a = [[11, 12, 13],
[21, 22, 23],
[31, 32, 33]]
print('acceso a la primera fila: ', a[0])
print('acceso a la primera columna: ', a[:][0], ' Fail!!!')
a = np.array(a)
print('acceso a la primera fila: ', a[0])
print('acceso a la primera columna: ', a[:,0], ' \o/')
###Output
_____no_output_____
###Markdown
... Recapitulando un poco.***Los `ndarray`s son contenedores multidimensionales, homogéneos con elementos de tamaño fijo, de dimensión predefinida.*** Tipos de datos Como los arrays deben ser homogéneos tenemos tipos de datos. Algunos de ellos se pueden ver en la siguiente tabla:| Data type | Descripción ||---------------|-------------|| ``bool_`` | Booleano (True o False) almacenado como un Byte || ``int_`` | El tipo entero por defecto (igual que el `long` de C; normalmente será `int64` o `int32`)| | ``intc`` | Idéntico al ``int`` de C (normalmente `int32` o `int64`)| | ``intp`` | Entero usado para indexación (igual que `ssize_t` en C; normalmente `int32` o `int64`)| | ``int8`` | Byte (de -128 a 127)| | ``int16`` | Entero (de -32768 a 32767)|| ``int32`` | Entero (de -2147483648 a 2147483647)|| ``int64`` | Entero (de -9223372036854775808 a 9223372036854775807)| | ``uint8`` | Entero sin signo (de 0 a 255)| | ``uint16`` | Entero sin signo (de 0 a 65535)| | ``uint32`` | Entero sin signo (de 0 a 4294967295)| | ``uint64`` | Entero sin signo (de 0 a 18446744073709551615)| | ``float_`` | Atajo para ``float64``.| | ``float16`` | Half precision float: un bit para el signo, 5 bits para el exponente, 10 bits para la mantissa| | ``float32`` | Single precision float: un bit para el signo, 8 bits para el exponente, 23 bits para la mantissa|| ``float64`` | Double precision float: un bit para el signo, 11 bits para el exponente, 52 bits para la mantissa|| ``complex_`` | Atajo para `complex128`.| | ``complex64`` | Número complejo, represantedo por dos *floats* de 32-bits| | ``complex128``| Número complejo, represantedo por dos *floats* de 64-bits| Es posible tener una especificación de tipos más detallada, pudiendo especificar números con *big endian* o *little endian*. No vamos a ver esto en este momento.El tipo por defecto que usa `numpy` al crear un *ndarray* es `np.float_`, siempre que no específiquemos explícitamente el tipo a usar. Creación de numpy arrays Podemos crear numpy arrays de muchas formas.* Rangos numéricos`np.arange`, `np.linspace`, `np.logspace`* Datos homogéneos`np.zeros`, `np.ones`* Elementos diagonales`np.diag`, `np.eye`* A partir de otras estructuras de datos ya creadas`np.array`* A partir de otros numpy arrays`np.empty_like`* A partir de ficheros`np.loadtxt`, `np.genfromtxt`,...* A partir de un escalar`np.full`, `np.tile`,...* A partir de valores aleatorios`np.random.randint`, `np.random.randint`, `np.random.randn`,......
###Code
a = np.arange(10) # similar a range pero devuelve un ndarray en lugar de un objeto range
print(a)
a = np.linspace(0, 1, 101)
print(a)
a_i = np.zeros((2, 3), dtype=np.int)
a_f = np.zeros((2, 3))
print(a_i)
print(a_f)
a = np.eye(3)
print(a)
a = np.array(
(
(1, 2, 3, 4, 5, 6),
(10, 20, 30, 40, 50, 60)
),
dtype=np.float
)
print(a)
np.full((5, 5), -999)
np.random.randint(0, 50, 15)
###Output
_____no_output_____
###Markdown
Referencias: array creation routines for array creation **Practicando**Recordad que siempre podéis usar `help`, `?`, `np.lookfor`,..., para obtener más información.
###Code
help(np.sum)
np.rad2deg?
np.lookfor("create array")
###Output
_____no_output_____
###Markdown
Ved un poco como funciona `np.repeat`, `np.empty_like`,...
###Code
# Play area
%load ../../solutions/03_01_np_array_creacion.py
###Output
_____no_output_____
###Markdown
Operaciones disponibles más típicas que podemos hacer con un numpy array
###Code
a = np.random.rand(5, 2)
print(a)
a.sum()
a.sum(axis=0)
a.sum(axis=1)
a.ravel()
a.reshape(2, 5)
a.T
a.transpose()
a.mean()
a.mean(axis=1)
a.cumsum(axis=1)
###Output
_____no_output_____
###Markdown
Referencias: Quick start tutorial **Practicando**Mirad más métodos de un `ndarray` y toquetead. Si no entendéis algo, preguntad:
###Code
dir(a)
# Play area
%load ../../solutions/03_02_np_operaciones_tipicas.py
###Output
_____no_output_____
###Markdown
Metadatos y anatomía de un `ndarray` En realidad, un `ndarray` es un bloque de memoria con información extra sobre como interpretar su contenido. La memoria dinámica (RAM) se puede considerar como un 'churro' lineal y es por ello que necesitamos esa información extra para saber como formar ese `ndarray`, sobre todo la información de `shape` y `strides`.Esta parte va a ser un poco más esotérica para los no iniciados pero considero que es necesaria para poder entender mejor nuestra nueva estructura de datos y poder sacarle mejor partido.
###Code
a = np.random.randn(5000, 5000)
###Output
_____no_output_____
###Markdown
El número de dimensiones del `ndarray`
###Code
a.ndim
###Output
_____no_output_____
###Markdown
El número de elementos en cada una de las dimensiones
###Code
a.shape
###Output
_____no_output_____
###Markdown
El número de elementos
###Code
a.size
###Output
_____no_output_____
###Markdown
El tipo de datos de los elementos
###Code
a.dtype
###Output
_____no_output_____
###Markdown
El número de bytes de cada elemento
###Code
a.itemsize
###Output
_____no_output_____
###Markdown
El número de bytes que ocupa el `ndarray` (es lo mismo que `size` por `itemsize`)
###Code
a.nbytes
###Output
_____no_output_____
###Markdown
El *buffer* que contiene los elementos del `ndarray`
###Code
a.data
###Output
_____no_output_____
###Markdown
Pasos a dar en cada dimensión cuando nos movemos entre elementos
###Code
a.strides
###Output
_____no_output_____
###Markdown
***(imagen extraída [de GitHub](https://github.com/btel/2016-erlangen-euroscipy-advanced-numpy)).*** Referencias: Internal memory layout of an ndarray multidimensional array indexing order issues PracticandoCrea un numpy array de dos dimensiones y obtén información del mismo. Puedes obtener información extra usando el método `flags`:
###Code
# Play area
%load ../../solutions/03_03_np_array_metadatos.py
###Output
_____no_output_____
###Markdown
Indexación Si ya has trabajado con indexación en estructuras de Python, como listas, tuplas o strings, la indexación en Numpy te resultará muy familiar. Por ejemplo, por hacer las cosas sencillas, vamos a crear un `ndarray` de 1D:
###Code
a = np.arange(10, dtype=np.uint8)
print(a)
print(a[:]) # para acceder a todos los elementos
print(a[:-1]) # todos los elementos menos el último
print(a[1:]) # todos los elementos menos el primero
print(a[::2]) # el primer, el tercer, el quinto,..., elemento
print(a[3]) # el cuarto elemento
print(a[-1:-5:-1]) # ¿?
###Output
_____no_output_____
###Markdown
Para *ndarrays* de una dimensión es exactamente igual que si usásemos listas o tuplas de Python:* Primer elemento tiene índice 0* Los índices negativos empiezan a contar desde el final* slices/rebanadas con `[start:stop:step]` Con un `ndarray` de más dimensiones las cosas ya cambian con respecto a Python puro:
###Code
a = np.random.randn(10, 2)
print(a)
print(a[1]) # ¿Qué nos dará esto?
print(a[1, 1]) # Si queremos acceder a un elemento específico hay que dar su posición completa en el ndarray
print(a[::3, 1])
###Output
_____no_output_____
###Markdown
Vamos a considerar el siguiente numpy array y vamos a trabajar un poco el *slicing*
###Code
a = np.arange(40).reshape(5, 8)
print(a)
###Output
_____no_output_____
###Markdown
Si tenemos dimensiones mayores a 1 es parecido a las listas pero los índices se separan por comas para las nuevas dimensiones.(imagen extraída de [aquí](https://github.com/gertingold/euroscipy-numpy-tutorial))
###Code
a[2, -3]
###Output
_____no_output_____
###Markdown
Para obtener más de un elemento hacemos *slicing* para cada eje:(imagen extraída de [aquí](https://github.com/gertingold/euroscipy-numpy-tutorial))
###Code
a[:3, :5]
###Output
_____no_output_____
###Markdown
Jugamos de nuevo!!! ¿Cómo podemos conseguir los elementos señalados en esta imagen?(imagen extraída de [aquí](https://github.com/gertingold/euroscipy-numpy-tutorial))
###Code
# ¿?
###Output
_____no_output_____
###Markdown
¿Cómo podemos conseguir los elementos señalados en esta imagen?(imagen extraída de [aquí](https://github.com/gertingold/euroscipy-numpy-tutorial))
###Code
# ¿?
###Output
_____no_output_____
###Markdown
¿Cómo podemos conseguir los elementos señalados en esta imagen?(imagen extraída de [aquí](https://github.com/gertingold/euroscipy-numpy-tutorial))
###Code
# ¿?
###Output
_____no_output_____
###Markdown
¿Cómo podemos conseguir los elementos señalados en esta imagen?(imagen extraída de [aquí](https://github.com/gertingold/euroscipy-numpy-tutorial))
###Code
# ¿?
###Output
_____no_output_____
###Markdown
Soluciones a lo anterior:
###Code
%load ../../solutions/03_04_array_indexing.py
###Output
_____no_output_____
###Markdown
**Fancy indexing** Con *fancy indexing* podemos hacer cosas tan variopintas como: (imágenes extraídas de [aquí](https://github.com/gertingold/euroscipy-numpy-tutorial)) Es decir, podemos indexar usando `ndarray`s de booleanos ó usando listas de índices para extraer elementos concretos de una sola vez.**WARNING: En el momento que usamos *fancy indexing* nos devuelve un nuevo *ndarray* que no tiene porque conservar la estructura original.** Referencias: array indexing indexing arrays Manejo de valores especiales `numpy` provee de varios valores especiales: `np.nan`, `np.Inf`, `np.Infinity`, `np.inf`, `np.infty`,...
###Code
a = 1 / np.arange(10)
print(a)
a[0] == np.inf
a.max() # Esto no es lo que queremos
a.mean() # Esto no es lo que queremos
a[np.isfinite(a)].max()
a[-1] = np.nan
print(a)
a.mean()
np.isnan(a)
np.isfinite(a)
np.isinf(a) # podéis mirar también np.isneginf, np.isposinf
###Output
_____no_output_____
###Markdown
`numpy` usa el estándar IEEE de números flotantes para aritmética (IEEE 754). Esto significa que *Not aNumber* no es equivalente a *infinity*. También, *positive infinity* no es equivalente a *negative infinity*. Pero *infinity* es equivalente a *positive infinity*.
###Code
1 < np.inf
1 < -np.inf
1 > -np.inf
1 == np.inf
1 < np.nan
1 > np.nan
1 == np.nan
###Output
_____no_output_____
###Markdown
Subarrays, vistas y copias **¡IMPORTANTE!**Vistas y copias: `numpy`, por defecto, siempre devuelve vistas para evitar incrementos innecesarios de memoria. Este comportamiento difiere del de Python puro donde una rebanada (*slicing*) de una lista devuelve una copia. Si queremos una copia de un `ndarray` debemos obtenerla de forma explícita:
###Code
a = np.arange(10)
b = a[2:5]
print(a)
print(b)
b[0] = 222
print(a)
print(b)
###Output
_____no_output_____
###Markdown
Este comportamiento por defecto es realmente muy útil, significa que, trabajando con grandes conjuntos de datos, podemos acceder y procesar piezas de estos conjuntos de datos sin necesidad de copiar el buffer de datos original. A veces, es necesario crear una copia. Esto se puede realizar fácilmente usando el método `copy` de los *ndarrays*. El ejemplo anterior usando una copia en lugar de una vista:
###Code
a = np.arange(10)
b = a[2:5].copy()
print(a)
print(b)
b[0] = 222
print(a)
print(b)
###Output
_____no_output_____
###Markdown
Operaciones entre numpy arrays Como hemos visto, se pueden hacer operaciones sobre el propio array pero, como es de esperar, también podemos realizar operaciones entre varios numpy arrays:
###Code
a = np.repeat(3, 10)
b = np.arange(10)
c = np.linspace(0, 10, 10)
print(a)
print(b)
print(c)
print()
print(a + b * c)
###Output
_____no_output_____
###Markdown
Broadcasting Es posible realizar operaciones en *ndarrays* de diferentes tamaños. En algunos casos `numpy` puede transformar estos *ndarrays* automáticamente de forma que todos tienen la misma forma. Esta conversión automática se llama **broadcasting**. Normas del BroadcastingPara determinar la interacción entre dos `ndarray`s en Numpy se sigue un conjunto de reglas estrictas:* Regla 1: Si dos `ndarray`s difieren en su número de dimensiones la forma de aquel con menos dimensiones se rellena con 1's a su derecha.- Regla 2: Si la forma de dos `ndarray`s no es la misma en ninguna de sus dimensiones, el `ndarry` con forma igual a 1 en esa dimensión se 'alarga' para tener simulares dimensiones que los del otros `ndarray`.- Regla 3: Si en cualquier dimensión el tamaño no es igual y ninguno de ellos es igual a 1 entonces obtendremos un error.Resumiendo, cuando se opera en dos *ndarrays*, `numpy` compara sus formas (*shapes*) elemento a elemento. Empieza por las dimensiones más a la izquierda y trabaja hacia las siguientes dimensiones. Dos dimensiones son compatibles cuando ambas son iguales o una de ellas es 1Si estas condiciones no se cumplen se lanzará una excepción `ValueError: frames are not aligned` indicando que los *ndarrays* tienen formas incompatibles. El tamaño del *ndarray* resultante es el tamaño máximo a lo largo de cada dimensión de los *ndarrays* de partida. De forma más gráfica:(imagen extraída de [aquí](https://github.com/btel/2016-erlangen-euroscipy-advanced-numpy))```a: 4 x 3 a: 4 x 3 a: 4 x 1b: 4 x 3 b: 3 b: 3result: 4 x 3 result: 4 x 3 result: 4 x 3```Intentemos reproducir los esquemas de la imagen anterior.
###Code
a = np.repeat((0, 10, 20, 30), 3).reshape(4, 3)
b = np.repeat((0, 1, 2), 4).reshape(3,4).T
print(a)
print(b)
print(a + b)
a = np.repeat((0, 10, 20, 30), 3).reshape(4, 3)
b = np.array((0, 1, 2))
print(a)
print(b)
print(a + b)
a = np.array((0, 10, 20, 30)).reshape(4,1)
b = np.array((0, 1, 2))
print(a)
print(b)
print(a + b)
###Output
_____no_output_____
###Markdown
Referencias: Basic broadcasting Broadcasting more in depth Funciones matemáticas, funciones universales *ufuncs* y vectorización ¿Qué es eso de *ufunc*? De la [documentación oficial de Numpy](http://docs.scipy.org/doc/numpy/reference/ufuncs.html): > A universal function (or ufunc for short) is a function that operates on ndarrays in an element-by-element fashion, supporting array broadcasting, type casting, and several other standard features. That is, a ufunc is a “**vectorized**” wrapper for a function that takes a **fixed number of scalar inputs** and produces a **fixed number of scalar outputs**.Una *ufunc* es una *Universal function* o función universal que actúa sobre todos los elementos de un `ndarray`, es decir aplica la funcionalidad sobre cada uno de los elementos del `ndarray`. Esto se conoce como vectorización.Por ejemplo, veamos la operación de elevar al cuadrado una lista en python puro o en `numpy`:
###Code
# En Python puro
a_list = list(range(10_000))
%timeit [i ** 2 for i in a_list]
# En numpy
an_arr = np.arange(10_000)
%timeit np.power(an_arr, 2)
a = np.arange(10)
np.power(a, 2)
###Output
_____no_output_____
###Markdown
La función anterior eleva al cuadrado cada uno de los elementos del `ndarray` anterior.Dentro de `numpy` hay muchísimas *ufuncs* y `scipy` (no lo vamos a ver) dispone de muchas más *ufuns* mucho más especializadas.En `numpy` tenemos, por ejemplo: * Funciones trigonométricas: `sin`, `cos`, `tan`, `arcsin`, `arccos`, `arctan`, `hypot`, `arctan2`, `degrees`, `radians`, `unwrap`, `deg2rad`, `rad2deg`
###Code
# juguemos un poco con ellas
###Output
_____no_output_____
###Markdown
* Funciones hiperbólicas: `sinh`, `cosh`, `tanh`, `arcsinh`, `arccosh`, `arctanh`
###Code
# juguemos un poco con ellas
###Output
_____no_output_____
###Markdown
* Redondeo: `around`, `round_`, `rint`, `fix`, `floor`, `ceil`, `trunc`
###Code
# juguemos un poco con ellas
###Output
_____no_output_____
###Markdown
* Sumas, productos, diferencias: `prod`, `sum`, `nansum`, `cumprod`, `cumsum`, `diff`, `ediff1d`, `gradient`, `cross`, `trapz`
###Code
# juguemos un poco con ellas
###Output
_____no_output_____
###Markdown
* Exponentes y logaritmos: `exp`, `expm1`, `exp2`, `log`, `log10`, `log2`, `log1p`, `logaddexp`, `logaddexp2`
###Code
# juguemos un poco con ellas
###Output
_____no_output_____
###Markdown
* Otras funciones especiales: `i0`, `sinc`
###Code
# juguemos un poco con ellas
###Output
_____no_output_____
###Markdown
* Trabajo con decimales: `signbit`, `copysign`, `frexp`, `ldexp`
###Code
# juguemos un poco con ellas
###Output
_____no_output_____
###Markdown
* Operaciones aritméticas: `add`, `reciprocal`, `negative`, `multiply`, `divide`, `power`, `subtract`, `true_divide`, `floor_divide`, `fmod`, `mod`, `modf`, `remainder`
###Code
# juguemos un poco con ellas
###Output
_____no_output_____
###Markdown
* Manejo de números complejos: `angle`, `real`, `imag`, `conj`
###Code
# juguemos un poco con ellas
###Output
_____no_output_____
###Markdown
* Miscelanea: `convolve`, `clip`, `sqrt`, `square`, `absolute`, `fabs`, `sign`, `maximum`, `minimum`, `fmax`, `fmin`, `nan_to_num`, `real_if_close`, `interp`...
###Code
# juguemos un poco con ellas
###Output
_____no_output_____
###Markdown
Referencias: Ufuncs Estadística * Orden: `amin`, `amax`, `nanmin`, `nanmax`, `ptp`, `percentile`, `nanpercentile`* Medias y varianzas: `median`, `average`, `mean`, `std`, `var`, `nanmedian`, `nanmean`, `nanstd`, `nanvar`* Correlacionando: `corrcoef`, `correlate`, `cov`* Histogramas: `histogram`, `histogram2d`, `histogramdd`, `bincount`, `digitize`...
###Code
# juguemos un poco con ellas
###Output
_____no_output_____
###Markdown
Ordenando, buscando y contando * Ordenando: `sort`, `lexsort`, `argsort`, `ndarray.sort`, `msort`, `sort_complex`, `partition`, `argpartition`* Buscando: `argmax`, `nanargmax`, `argmin`, `nanargmin`, `argwhere`, `nonzero`, `flatnonzero`, `where`, `searchsorted`, `extract`* Contando: `count_nonzero`...
###Code
# juguemos un poco con ellas
###Output
_____no_output_____
###Markdown
Polinomios * Series de potencias: `numpy.polynomial.polynomial`* Clase Polynomial: `np.polynomial.Polynomial`* Básicos: `polyval`, `polyval2d`, `polyval3d`, `polygrid2d`, `polygrid3d`, `polyroots`, `polyfromroots`* Ajuste: `polyfit`, `polyvander`, `polyvander2d`, `polyvander3d`* Cálculo: `polyder`, `polyint`* Álgebra: `polyadd`, `polysub`, `polymul`, `polymulx`, `polydiv`, `polypow`* Miscelánea: `polycompanion`, `polydomain`, `polyzero`, `polyone`, `polyx`, `polytrim`, `polyline`* Otras funciones polinómicas: `Chebyshev`, `Legendre`, `Laguerre`, `Hermite`...
###Code
# juguemos un poco con ellas
###Output
_____no_output_____
###Markdown
Álgebra lineal Lo siguiente que se encuentra dentro de `numpy.linalg` vendrá precedido por `LA`.* Productos para vectores y matrices: `dot`, `vdot`, `inner`, `outer`, `matmul`, `tensordot`, `einsum`, `LA.matrix_power`, `kron`* Descomposiciones: `LA.cholesky`, `LA.qr`, `LA.svd`* Eigenvalores: `LA.eig`, `LA.eigh`, `LA.eigvals`, `LA.eigvalsh`* Normas y otros números: `LA.norm`, `LA.cond`, `LA.det`, `LA.matrix_rank`, `LA.slogdet`, `trace`* Resolución de ecuaciones e inversión de matrices: `LA.solve`, `LA.tensorsolve`, `LA.lstsq`, `LA.inv`, `LA.pinv`, `LA.tensorinv`Dentro de `scipy` tenemos más cosas relacionadas.
###Code
# juguemos un poco con ellas
###Output
_____no_output_____
###Markdown
Manipulación de `ndarrays` `tile`, `hstack`, `vstack`, `dstack`, `hsplit`, `vsplit`, `dsplit`, `repeat`, `reshape`, `ravel`, `resize`,...
###Code
# juguemos un poco con ellas
###Output
_____no_output_____
###Markdown
Numpy Numpy provides a new data container for Python, the `ndarray`s, along with specialized functionality to manipulate them efficiently.Talking about data manipulation in Python is synonymous with Numpy, and practically the whole scientific Python ecosystem is built on top of Numpy. Let's say Numpy is the brick that made it possible to raise buildings as solid as Pandas, Matplotlib, Scipy, scikit-learn,...**Index*** [Why a new data container?](%C2%BFPor-qu%C3%A9-un-nuevo-contenedor-de-datos?)* [Data types](Tipos-de-datos)* [Creating `numpy` arrays](Creaci%C3%B3n-de-numpy-arrays)* [Most common available operations](Operaciones-disponibles-m%C3%A1s-t%C3%ADpicas)* [Metadata and anatomy of an `ndarray`](Metadatos-y-anatom%C3%ADa-de-un-ndarray)* [Indexing](Indexaci%C3%B3n)* [Handling special values](Manejo-de-valores-especiales)* [Subarrays, views and copies](Subarrays,-vistas-y-copias)* [How do the axes of an `ndarray` work?](%C2%BFC%C3%B3mo-funcionan-los-ejes-en-un-ndarray?)* [Reshaping `ndarray`s](Reformateo-de-ndarrays)* [Broadcasting](Broadcasting)* [Structured `ndarrays` and `recarray`s](ndarrays-estructurados-y-recarrays)* [Concatenating and splitting `ndarray`s](Concatenaci%C3%B3n-y-partici%C3%B3n-de-ndarrays)* [Mathematical functions, universal functions *ufuncs* and vectorization](Funciones-matem%C3%A1ticas,-funciones-universales-ufuncs-y-vectorizaci%C3%B3n)* [Statistics](Estad%C3%ADstica)* [Sorting, searching and counting](Ordenando,-buscando-y-contando)* [Polynomials](Polinomios)* [Linear algebra](%C3%81lgebra-lineal)* [Manipulating `ndarray`s](Manipulaci%C3%B3n-de-ndarrays)* [Modules of interest inside numpy](M%C3%B3dulos-de-inter%C3%A9s-dentro-de-numpy)* [Matrix computations](C%C3%A1lculo-matricial) Why a new data container? In Python we already have several data containers out of the box — lists, tuples, dictionaries, sets,... — so why add one more?For convenience!, despite the loss of flexibility. It is a trade-off.* More efficient memory usage: For example, a list can hold objects of different types, which forces Python to store type information for each element contained in the list. An `ndarray`, on the other hand, holds homogeneous types, i.e. all elements are of the same type, so the type information only needs to be stored once no matter how many elements the `ndarray` has.***(image by Jake VanderPlas, taken [from GitHub](https://github.com/jakevdp/PythonDataScienceHandbook)).**** Faster: For example, in a list made up of elements of different types, Python must do extra work to check whether the types are compatible with the operations we are performing. When we work with an `ndarray` we already know that up front, so operations can be more efficient (besides, much of the functionality is written in C, C++, Cython, Fortran).* Vectorized operations* Extra functionality: many linear algebra operations, fast Fourier transforms, basic statistics, histograms,...* More convenient element access: more advanced indexing than with plain Python types* ... Memory usage
###Code
import sys            # these imports are assumed by the rest of the notebook
import numpy as np

# WARNING: SYS.GETSIZEOF IS NOT RELIABLE
lista = list(range(5_000_000))
arr = np.array(lista, dtype=np.uint32)
print("5 million elements")
print(sys.getsizeof(lista))
print(sys.getsizeof(arr))
print()
lista = list(range(100))
arr = np.array(lista, dtype=np.uint8)
print("100 elements")
print(sys.getsizeof(lista))
print(sys.getsizeof(arr))
###Output
_____no_output_____
###Markdown
Speed of operations
###Code
a = list(range(1000000))
%timeit sum(a)
print(sum(a))
a = np.array(a)
%timeit np.sum(a)
print(np.sum(a))
###Output
_____no_output_____
###Markdown
Vectorized operations
###Code
# Element-wise sum of two vectors
a = [1, 1, 1]
b = [3, 4, 3]
print(a + b)
print('Fail')
# Element-wise sum of two vectors
a = np.array([1, 1, 1])
b = np.array([3, 4, 3])
print(a + b)
print('\o/')
###Output
_____no_output_____
###Markdown
More convenient functionality
###Code
# cumulative sum
a = list(range(100))
print([sum(a[:i+1]) for i in a])
a = np.array(a)
print(a.cumsum())
###Output
_____no_output_____
###Markdown
More convenient element access
###Code
a = [[11, 12, 13],
[21, 22, 23],
[31, 32, 33]]
print('access to the first row: ', a[0])
print('access to the first column: ', a[:][0], ' Fail!!!')
a = np.array(a)
print('access to the first row: ', a[0])
print('access to the first column: ', a[:,0], ' \o/')
###Output
_____no_output_____
###Markdown
... Recapping a bit.***`ndarray`s are multidimensional, homogeneous containers with fixed-size elements and a predefined dimension.*** Data types Since arrays must be homogeneous we have data types. Some of them are shown in the following table:| Data type | Description ||---------------|-------------|| ``bool_`` | Boolean (True or False) stored as a byte || ``int_`` | The default integer type (same as C `long`; normally `int64` or `int32`)| | ``intc`` | Identical to C ``int`` (normally `int32` or `int64`)| | ``intp`` | Integer used for indexing (same as C `ssize_t`; normally `int32` or `int64`)| | ``int8`` | Byte (-128 to 127)| | ``int16`` | Integer (-32768 to 32767)|| ``int32`` | Integer (-2147483648 to 2147483647)|| ``int64`` | Integer (-9223372036854775808 to 9223372036854775807)| | ``uint8`` | Unsigned integer (0 to 255)| | ``uint16`` | Unsigned integer (0 to 65535)| | ``uint32`` | Unsigned integer (0 to 4294967295)| | ``uint64`` | Unsigned integer (0 to 18446744073709551615)| | ``float_`` | Shorthand for ``float64``.| | ``float16`` | Half-precision float: one sign bit, 5 exponent bits, 10 mantissa bits| | ``float32`` | Single-precision float: one sign bit, 8 exponent bits, 23 mantissa bits|| ``float64`` | Double-precision float: one sign bit, 11 exponent bits, 52 mantissa bits|| ``complex_`` | Shorthand for `complex128`.| | ``complex64`` | Complex number, represented by two 32-bit *floats*| | ``complex128``| Complex number, represented by two 64-bit *floats*| It is possible to have a more detailed type specification, e.g. specifying *big endian* or *little endian* numbers. We won't look at that right now.The default type `numpy` uses when creating an *ndarray* is `np.float_`, as long as we don't explicitly specify the type to use. For example, an array of type `np.uint8` can take the following values:
###Code
import itertools
for i, bits in enumerate(itertools.product((0, 1), repeat=8)):
print(i, bits)
###Output
_____no_output_____
###Markdown
That is, it can hold values ranging from 0 to 255 ($2^8$ values). How many bytes will a 10-element `ndarray` whose data type is `np.int8` take?
###Code
a = np.arange(10, dtype=np.int8)
print(a.nbytes)
print(sys.getsizeof(a))
a = np.repeat(1, 100000).astype(np.int8)
print(a.nbytes)
print(sys.getsizeof(a))
###Output
_____no_output_____
###Markdown
Creating numpy arrays We can create numpy arrays in many ways.* Numeric ranges`np.arange`, `np.linspace`, `np.logspace`* Homogeneous data`np.zeros`, `np.ones`* Diagonal elements`np.diag`, `np.eye`* From other already created data structures`np.array`* From other numpy arrays`np.empty_like`* From files`np.loadtxt`, `np.genfromtxt`,...* From a scalar`np.full`, `np.tile`,...* From random values`np.random.randint`, `np.random.randn`,......
###Code
a = np.arange(10) # similar to range but returns an ndarray instead of a range object
print(a)
a = np.linspace(0, 1, 101)
print(a)
a_i = np.zeros((2, 3), dtype=np.int)
a_f = np.zeros((2, 3))
print(a_i)
print(a_f)
a = np.eye(3)
print(a)
a = np.array(
(
(1, 2, 3, 4, 5, 6),
(10, 20, 30, 40, 50, 60)
),
dtype=np.float
)
print(a)
np.full((5, 5), -999)
np.random.randint(0, 50, 15)
###Output
_____no_output_____
###Markdown
References: array creation routines for array creation **Practice**Remember that you can always use `help`, `?`, `np.lookfor`,..., to get more information.
###Code
help(np.sum)
np.rad2deg?
np.lookfor("create array")
###Output
_____no_output_____
###Markdown
Take a look at how `np.repeat`, `np.empty_like`,... work
###Code
# Play area
%load ../../solutions/03_01_np_array_creacion.py
###Output
_____no_output_____
###Markdown
Most common available operations
###Code
a = np.random.rand(5, 2)
print(a)
a.sum()
a.sum(axis=0)
a.sum(axis=1)
a.ravel()
a.reshape(2, 5)
a.T
a.transpose()
a.mean()
a.mean(axis=1)
a.cumsum(axis=1)
###Output
_____no_output_____
###Markdown
References: Quick start tutorial **Practice**Look at more `ndarray` methods and tinker with them. If you don't understand something, ask:
###Code
dir(a)
# Play area
%load ../../solutions/03_02_np_operaciones_tipicas.py
###Output
_____no_output_____
###Markdown
Metadata and anatomy of an `ndarray` In reality, an `ndarray` is a block of memory with extra information about how to interpret its contents. Dynamic memory (RAM) can be thought of as one linear 'blob', which is why we need that extra information to know how to form the `ndarray`, above all the `shape` and `strides` information.This part will be a bit more esoteric for the uninitiated, but I consider it necessary in order to better understand our new data structure and get more out of it.
###Code
a = np.random.randn(5000, 5000)
###Output
_____no_output_____
###Markdown
The number of dimensions of the `ndarray`
###Code
a.ndim
###Output
_____no_output_____
###Markdown
The number of elements along each dimension
###Code
a.shape
###Output
_____no_output_____
###Markdown
The number of elements
###Code
a.size
###Output
_____no_output_____
###Markdown
The data type of the elements
###Code
a.dtype
###Output
_____no_output_____
###Markdown
The number of bytes per element
###Code
a.itemsize
###Output
_____no_output_____
###Markdown
The number of bytes the `ndarray` occupies (the same as `size` times `itemsize`)
###Code
a.nbytes
###Output
_____no_output_____
###Markdown
The *buffer* holding the elements of the `ndarray`
###Code
a.data
###Output
_____no_output_____
###Markdown
The steps to take along each dimension when moving between elements
###Code
a.strides
###Output
_____no_output_____
###Markdown
***(image taken [from GitHub](https://github.com/btel/2016-erlangen-euroscipy-advanced-numpy)).*** More things
###Code
a.flags
###Output
_____no_output_____
###Markdown
Quick exercise: why does summing elements along one dimension take less time than along the other if the array is regular?
###Code
%timeit a.sum(axis=0)
%timeit a.sum(axis=1)
###Output
_____no_output_____
###Markdown
Quick exercise: why is the result different now?
###Code
aT = a.T
%timeit aT.sum(axis=0)
%timeit aT.sum(axis=1)
print(aT.strides)
print(aT.flags)
print(np.repeat((1,2,3), 3))
print()
a = np.repeat((1,2,3), 3).reshape(3, 3)
print(a)
print()
print(a.sum(axis=0))
print()
print(a.sum(axis=1))
###Output
_____no_output_____
###Markdown
References: Internal memory layout of an ndarray multidimensional array indexing order issues Indexing If you've already worked with indexing in Python structures such as lists, tuples or strings, indexing in Numpy will feel very familiar. For example, to keep things simple, let's create a 1D `ndarray`:
###Code
a = np.arange(10, dtype=np.uint8)
print(a)
print(a[:]) # access all the elements
print(a[:-1]) # all the elements except the last one
print(a[1:]) # all the elements except the first one
print(a[::2]) # the first, third, fifth,... element
print(a[3]) # the fourth element
print(a[-1:-5:-1]) # ?
# Practice on your own
###Output
_____no_output_____
###Markdown
For one-dimensional *ndarrays* it is exactly the same as if we were using Python lists or tuples:* The first element has index 0* Negative indices start counting from the end* slices with `[start:stop:step]` With an `ndarray` of more dimensions things change with respect to pure Python:
###Code
a = np.random.randn(10, 2)
print(a)
a[1] # What will this give us?
a[1, 1] # To access a specific element we must give its full position in the ndarray
a[::3, 1]
###Output
_____no_output_____
###Markdown
If we have dimensions greater than 1 it is similar to lists, but the indices for the new dimensions are separated by commas.(image taken from [here](https://github.com/gertingold/euroscipy-numpy-tutorial))
###Code
a = np.arange(40).reshape(5, 8)
print(a)
a[2, -3]
###Output
_____no_output_____
###Markdown
To get more than one element we do *slicing* along each axis:(image taken from [here](https://github.com/gertingold/euroscipy-numpy-tutorial))
###Code
a[:3, :5]
###Output
_____no_output_____
###Markdown
How can we get the elements highlighted in this image?(image taken from [here](https://github.com/gertingold/euroscipy-numpy-tutorial))
###Code
a[x:x ,x:x]
###Output
_____no_output_____
###Markdown
How can we get the elements highlighted in this image?(image taken from [here](https://github.com/gertingold/euroscipy-numpy-tutorial))
###Code
a[x:x ,x:x]
###Output
_____no_output_____
###Markdown
How can we get the elements highlighted in this image?(image taken from [here](https://github.com/gertingold/euroscipy-numpy-tutorial))
###Code
a[x:x ,x:x]
###Output
_____no_output_____
###Markdown
How can we get the elements highlighted in this image?(image taken from [here](https://github.com/gertingold/euroscipy-numpy-tutorial))
###Code
a[x:x ,x:x]
###Output
_____no_output_____
###Markdown
Solutions to the above:(images taken from [here](https://github.com/gertingold/euroscipy-numpy-tutorial)) **Fancy indexing** With *fancy indexing* we can do things as varied as: (images taken from [here](https://github.com/gertingold/euroscipy-numpy-tutorial)) That is, we can index using boolean `ndarray`s or using lists of indices to extract specific elements in one go.**WARNING: the moment we use *fancy indexing* we get back a new *ndarray* that does not necessarily preserve the original structure.** For example, in the following case it does not return a two-dimensional *ndarray*, because the mask need not be regular, so it returns only the values that meet the criterion, as a vector (a one-dimensional *ndarray*).
###Code
a = np.arange(10).reshape(2, 5)
print(a)
bool_indexes = (a % 2 == 0)
print(bool_indexes)
a[bool_indexes]
###Output
_____no_output_____
###Markdown
However, we can indeed use it to modify the original *ndarray* based on a criterion while keeping the same shape.
###Code
a[bool_indexes] = 999
print(a)
###Output
_____no_output_____
###Markdown
References: array indexing indexing arrays Handling special values `numpy` provides several special values: `np.nan`, `np.Inf`, `np.Infinity`, `np.inf`, `np.infty`,...
###Code
a = 1 / np.arange(10)
print(a)
a[0] == np.inf
a.max() # This is not what we want
a.mean() # This is not what we want
a[np.isfinite(a)].max()
a[-1] = np.nan
print(a)
a.mean()
np.isnan(a)
np.isfinite(a)
np.isinf(a) # you can also look at np.isneginf, np.isposinf
###Output
_____no_output_____
###Markdown
`numpy` uses the IEEE standard for floating-point arithmetic (IEEE 754). This means that *Not a Number* is not equivalent to *infinity*. Likewise, *positive infinity* is not equivalent to *negative infinity*. But *infinity* is equivalent to *positive infinity*.
###Code
1 < np.inf
1 < -np.inf
1 > -np.inf
1 == np.inf
1 < np.nan
1 > np.nan
1 == np.nan
###Output
_____no_output_____
###Markdown
Subarrays, views and copies **IMPORTANT!**Views and copies: by default, `numpy` always returns views to avoid unnecessary memory growth. This behavior differs from pure Python, where a slice of a list returns a copy. If we want a copy of an `ndarray` we must ask for it explicitly:
###Code
a = np.arange(10)
b = a[2:5]
print(a)
print(b)
b[0] = 222
print(a)
print(b)
###Output
_____no_output_____
###Markdown
This default behavior is actually very useful: it means that, when working with large datasets, we can access and process pieces of those datasets without having to copy the original data buffer. Sometimes a copy is necessary. That is easily done with the `copy` method of *ndarrays*. The previous example using a copy instead of a view:
###Code
a = np.arange(10)
b = a[2:5].copy()
print(a)
print(b)
b[0] = 222
print(a)
print(b)
###Output
_____no_output_____
###Markdown
How do the axes in an `ndarray` work? For example, what happens when we do `a.sum()`, `a.sum(axis=0)`, `a.sum(axis=1)`?What if we have more than two dimensions?Let's look at some examples:
###Code
a = np.arange(10).reshape(5,2)
a.shape
a.sum()
a.sum(axis=0)
a.sum(axis=1)
###Output
_____no_output_____
###Markdown
(image taken from [here](https://github.com/gertingold/euroscipy-numpy-tutorial))
###Code
a = np.arange(9).reshape(3, 3)
print(a)
print(a.sum(axis=0))
print(a.sum(axis=1))
###Output
_____no_output_____
###Markdown
(image taken from [here](https://github.com/gertingold/euroscipy-numpy-tutorial))
###Code
a = np.arange(24).reshape(2, 3, 4)
print(a)
print(a.sum(axis=0))
print(a.sum(axis=1))
print(a.sum(axis=2))
###Output
_____no_output_____
###Markdown
For example, in the first case, `axis=0`, what happens is that we take all the elements along the first index and apply the operation for each of the elements of the other two axes. Done one at a time it would look like this:
###Code
print(a[:,0,0].sum(), a[:,0,1].sum(), a[:,0,2].sum(), a[:,0,3].sum())
print(a[:,1,0].sum(), a[:,1,1].sum(), a[:,1,2].sum(), a[:,1,3].sum())
print(a[:,2,0].sum(), a[:,2,1].sum(), a[:,2,2].sum(), a[:,2,3].sum())
###Output
_____no_output_____
###Markdown
Not counting the axis we are using, the remaining dimensions are 3 x 4 (second and third dimensions), so the result has 12 elements. For the `axis=1` case:
###Code
print(a[0,:,0].sum(), a[0,:,1].sum(), a[0,:,2].sum(), a[0,:,3].sum())
print(a[1,:,0].sum(), a[1,:,1].sum(), a[1,:,2].sum(), a[1,:,3].sum())
###Output
_____no_output_____
###Markdown
Not counting the axis we are using, the remaining dimensions are 2 x 4 (first and third dimensions), so the result has 8 elements. For the `axis=2` case:
###Code
print(a[0,0,:].sum(), a[0,1,:].sum(), a[0,2,:].sum())
print(a[1,0,:].sum(), a[1,1,:].sum(), a[1,2,:].sum())
###Output
_____no_output_____
###Markdown
Not counting the axis we are using, the remaining dimensions are 2 x 3 (first and second dimensions), so the result has 6 elements. Reshaping `ndarray`s We can change the shape of `ndarray`s using the `reshape` method. For example, if we want to place the numbers 1 to 9 in a $3 \times 3$ grid we can do it as follows:
###Code
a = np.arange(1, 10).reshape(3, 3)
###Output
_____no_output_____
###Markdown
For the reshape not to raise errors we must make sure the sizes of the initial `ndarray` and the final `ndarray` are compatible.
###Code
# For example, will the following raise an error?
a = np.arange(1, 10).reshape(5, 2)
###Output
_____no_output_____
###Markdown
Another common reshaping pattern is converting a 1D `ndarray` into a 2D one by adding a new axis. We can do it using, again, the `reshape` method, or using `numpy.newaxis`.
###Code
# For example, a 2D array with a single row
a = np.arange(3)
a1_2D = a.reshape(1,3)
a2_2D = a[np.newaxis, :]
print(a1_2D)
print(a1_2D.shape)
print(a2_2D)
print(a2_2D.shape)
# For example, a 2D array with a single column
a = np.arange(3)
a1_2D = a.reshape(3,1)
a2_2D = a[:, np.newaxis]
print(a1_2D)
print(a1_2D.shape)
print(a2_2D)
print(a2_2D.shape)
###Output
_____no_output_____
###Markdown
Broadcasting It is possible to perform operations on *ndarrays* of different sizes. In some cases `numpy` can transform these *ndarrays* automatically so that they all end up with the same shape. This automatic conversion is called **broadcasting**. Broadcasting rules To determine the interaction between two `ndarray`s, Numpy follows a strict set of rules:* Rule 1: If two `ndarray`s differ in their number of dimensions, the shape of the one with fewer dimensions is padded with 1's on its left (leading) side.- Rule 2: If the shapes of the two `ndarray`s do not match in some dimension, the `ndarray` with shape equal to 1 in that dimension is stretched to match the size of the other `ndarray`.- Rule 3: If in any dimension the sizes disagree and neither of them is equal to 1, we get an error.Summing up, when operating on two *ndarrays*, `numpy` compares their shapes element by element, starting from the trailing (rightmost) dimensions and working its way forward. Two dimensions are compatible when they are equal or one of them is 1. If these conditions are not met, a `ValueError: frames are not aligned` exception is raised, indicating that the *ndarrays* have incompatible shapes. The size of the resulting *ndarray* is the maximum size along each dimension of the input *ndarrays*. More graphically:(image taken from [here](https://github.com/btel/2016-erlangen-euroscipy-advanced-numpy))```a: 4 x 3 a: 4 x 3 a: 4 x 1b: 4 x 3 b: 3 b: 3result: 4 x 3 result: 4 x 3 result: 4 x 3```Let's try to reproduce the schematics in the image above.
###Code
a = np.repeat((0, 10, 20, 30), 3).reshape(4, 3)
b = np.repeat((0, 1, 2), 4).reshape(3,4).T
print(a)
print(b)
print(a + b)
a = np.repeat((0, 10, 20, 30), 3).reshape(4, 3)
b = np.array((0, 1, 2))
print(a)
print(b)
print(a + b)
a = np.array((0, 10, 20, 30)).reshape(4,1)
b = np.array((0, 1, 2))
print(a)
print(b)
print(a + b)
###Output
_____no_output_____
###Markdown
References: Basic broadcasting Broadcasting more in depth Structured `ndarrays` and `recarray`s Earlier we said that `ndarray`s must be homogeneous, but that was a bit inexact: in reality, we can have `ndarray`s holding different types. These are called structured `ndarray`s and `recarray`s.Let's see some examples:
###Code
nombre = ['paca', 'pancracio', 'nemesia', 'eulogio']
edad = [72, 68, 86, 91]
a = np.array(np.zeros(4), dtype=[('name', '<S10'), ('age', np.int)])
a['name'] = nombre
a['age'] = edad
print(a)
###Output
_____no_output_____
###Markdown
We can access the columns by name
###Code
a['name']
###Output
_____no_output_____
###Markdown
All the elements except the first one
###Code
a['age'][1:]
###Output
_____no_output_____
###Markdown
A `recarray` is similar, but we can access the fields with *dot notation*.
###Code
ra = a.view(np.recarray)
ra.name
###Output
_____no_output_____
###Markdown
This introduces a bit of access *overhead*, since a few extra operations are performed. Concatenating and splitting `ndarrays` We can combine multiple *ndarrays* into one, or split one into several.To concatenate we can use `np.concatenate`, `np.hstack`, `np.vstack`, `np.dstack`. Examples:
###Code
a = np.array([1, 1, 1, 1])
b = np.array([2, 2, 2, 2])
###Output
_____no_output_____
###Markdown
We can concatenate those two arrays using `np.concatenate`:
###Code
np.concatenate([a, b])
###Output
_____no_output_____
###Markdown
We can concatenate not only one-dimensional *ndarrays*:
###Code
np.concatenate([a.reshape(2, 2), b.reshape(2, 2)])
###Output
_____no_output_____
###Markdown
We can choose which axis to concatenate along:
###Code
np.concatenate([a.reshape(2, 2), b.reshape(2, 2)], axis=1)
###Output
_____no_output_____
###Markdown
We can concatenate more than two arrays:
###Code
c = [3, 3, 3, 3]
np.concatenate([a, b, c])
###Output
_____no_output_____
###Markdown
If we want to be more explicit we can use `np.hstack` or `np.vstack`. The `h` and `v` stand for horizontal and vertical, respectively.
###Code
np.hstack([a, b])
np.vstack([a, b])
###Output
_____no_output_____
###Markdown
To concatenate along the third dimension we use `np.dstack`. Just as we can concatenate, we can split *ndarrays* using `np.split`, `np.hsplit`, `np.vsplit`, `np.dsplit`; see the sketch below.
###Code
# Let's try to understand how splitting works by experimenting...
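# A quick illustrative sketch (added example; not from the original notebook):
x = np.arange(12)
print(np.split(x, 3))    # three equal 1D pieces
m = x.reshape(3, 4)
print(np.hsplit(m, 2))   # split along columns
print(np.vsplit(m, 3))   # split along rows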
###Output
_____no_output_____
###Markdown
Mathematical functions, universal functions *ufuncs* and vectorization What is a *ufunc*? From the [official Numpy documentation](http://docs.scipy.org/doc/numpy/reference/ufuncs.html): > A universal function (or ufunc for short) is a function that operates on ndarrays in an element-by-element fashion, supporting array broadcasting, type casting, and several other standard features. That is, a ufunc is a “**vectorized**” wrapper for a function that takes a **fixed number of scalar inputs** and produces a **fixed number of scalar outputs**.A *ufunc*, or universal function, acts on all the elements of an `ndarray`, i.e. it applies its functionality to each of the elements of the `ndarray`. This is known as vectorization.For example, let's look at squaring a list in pure Python versus `numpy`:
###Code
# In pure Python
a_list = list(range(10000))
%timeit [i ** 2 for i in a_list]
# In numpy
an_arr = np.arange(10000)
%timeit np.power(an_arr, 2)
a = np.arange(10)
np.power(a, 2)
###Output
_____no_output_____
###Markdown
The function above squares each of the elements of the previous `ndarray`.Inside `numpy` there are a great many *ufuncs*, and `scipy` (which we won't cover) provides many more, far more specialized *ufuncs*.In `numpy` we have, for example: * Trigonometric functions: `sin`, `cos`, `tan`, `arcsin`, `arccos`, `arctan`, `hypot`, `arctan2`, `degrees`, `radians`, `unwrap`, `deg2rad`, `rad2deg`
###Code
# let's play with them a bit
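# A quick illustrative example (added; not from the original notebook):
angles = np.array([0, 30, 45, 90])
rad = np.deg2rad(angles)                   # degrees -> radians
print(np.sin(rad))
print(np.rad2deg(np.arcsin(np.sin(rad))))  # back to degrees
print(np.hypot(3, 4))                      # sqrt(3**2 + 4**2) -> 5.0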
###Output
_____no_output_____
###Markdown
* Hyperbolic functions: `sinh`, `cosh`, `tanh`, `arcsinh`, `arccosh`, `arctanh`
###Code
# let's play with them a bit
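# A quick illustrative example (added; not from the original notebook):
x = np.linspace(-1, 1, 5)
print(np.cosh(x)**2 - np.sinh(x)**2)  # identity: should be all ones
print(np.arctanh(np.tanh(x)))         # recovers x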
###Output
_____no_output_____
###Markdown
* Rounding: `around`, `round_`, `rint`, `fix`, `floor`, `ceil`, `trunc`
###Code
# let's play with them a bit
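# A quick illustrative example (added; not from the original notebook):
x = np.array([-1.7, -0.2, 0.2, 1.5, 2.5])
print(np.floor(x))   # round down
print(np.ceil(x))    # round up
print(np.trunc(x))   # round towards zero
print(np.around(x))  # note: 2.5 rounds to 2.0 (round half to even)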
###Output
_____no_output_____
###Markdown
* Sums, products, differences: `prod`, `sum`, `nansum`, `cumprod`, `cumsum`, `diff`, `ediff1d`, `gradient`, `cross`, `trapz`
###Code
# let's play with them a bit
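# A quick illustrative example (added; not from the original notebook):
x = np.array([1, 2, 4, 7])
print(np.diff(x))    # consecutive differences: [1 2 3]
print(np.cumsum(x))  # running sum
print(np.prod(x))    # product of all the elements
print(np.trapz(x))   # trapezoidal integration assuming unit spacing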
###Output
_____no_output_____
###Markdown
* Exponents and logarithms: `exp`, `expm1`, `exp2`, `log`, `log10`, `log2`, `log1p`, `logaddexp`, `logaddexp2`
###Code
# let's play with them a bit
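# A quick illustrative example (added; not from the original notebook):
x = np.array([1e-10, 1e-5, 1.0])
print(np.log1p(x))            # accurate log(1 + x) for tiny x
print(np.log(1 + x))          # loses precision for tiny x
print(np.exp2(np.arange(5)))  # powers of two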
###Output
_____no_output_____
###Markdown
* Other special functions: `i0`, `sinc`
###Code
# let's play with them a bit
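# A quick illustrative example (added; not from the original notebook):
x = np.linspace(-2, 2, 5)
print(np.sinc(x))  # normalized sinc: sin(pi*x)/(pi*x), equal to 1 at x=0
print(np.i0(x))    # modified Bessel function of the first kind, order 0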
###Output
_____no_output_____
###Markdown
* Working with float components: `signbit`, `copysign`, `frexp`, `ldexp`
###Code
# let's play with them a bit
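# A quick illustrative example (added; not from the original notebook):
print(np.signbit(np.array([-1.2, 0.0, 3.4])))   # True where the sign bit is set
print(np.copysign(np.array([1., 2., 3.]), -1))  # take the sign of -1
m, e = np.frexp(np.array([8.0, 10.0]))          # x = m * 2**e, with 0.5 <= |m| < 1
print(m, e)
print(np.ldexp(m, e))                           # reconstructs the input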
###Output
_____no_output_____
###Markdown
* Arithmetic operations: `add`, `reciprocal`, `negative`, `multiply`, `divide`, `power`, `subtract`, `true_divide`, `floor_divide`, `fmod`, `mod`, `modf`, `remainder`
###Code
# let's play with them a bit
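# A quick illustrative example (added; not from the original notebook):
x = np.array([7, -7])
print(np.floor_divide(x, 2))  # [ 3 -4]
print(np.mod(x, 2))           # sign follows the divisor: [1 1]
print(np.fmod(x, 2))          # sign follows the dividend: [ 1 -1]
frac, whole = np.modf([2.5, -1.25])
print(frac, whole)            # fractional and integral parts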
###Output
_____no_output_____
###Markdown
* Handling complex numbers: `angle`, `real`, `imag`, `conj`
###Code
# let's play with them a bit
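# A quick illustrative example (added; not from the original notebook):
z = np.array([1 + 1j, -2j])
print(np.real(z), np.imag(z))
print(np.angle(z, deg=True))  # phase in degrees
print(np.conj(z))             # complex conjugate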
###Output
_____no_output_____
###Markdown
* Miscellaneous: `convolve`, `clip`, `sqrt`, `square`, `absolute`, `fabs`, `sign`, `maximum`, `minimum`, `fmax`, `fmin`, `nan_to_num`, `real_if_close`, `interp`...
###Code
# let's play with them a bit
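# A quick illustrative example (added; not from the original notebook):
x = np.array([-5.0, 0.3, 2.0, 40.0])
print(np.clip(x, 0, 10))                        # limit values to [0, 10]
print(np.interp(2.5, [1, 2, 3], [10, 20, 30]))  # linear interpolation
print(np.maximum([1, 5, 2], [3, 1, 4]))         # element-wise maximum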
###Output
_____no_output_____
###Markdown
References: Ufuncs Statistics * Order statistics: `amin`, `amax`, `nanmin`, `nanmax`, `ptp`, `percentile`, `nanpercentile`* Means and variances: `median`, `average`, `mean`, `std`, `var`, `nanmedian`, `nanmean`, `nanstd`, `nanvar`* Correlating: `corrcoef`, `correlate`, `cov`* Histograms: `histogram`, `histogram2d`, `histogramdd`, `bincount`, `digitize`...
###Code
# let's play with them a bit
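# A quick illustrative example (added; not from the original notebook):
x = np.random.randn(1000)
print(x.mean(), x.std(), np.median(x))
print(np.percentile(x, [5, 50, 95]))
counts, edges = np.histogram(x, bins=10)
print(counts)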
###Output
_____no_output_____
###Markdown
Sorting, searching and counting * Sorting: `sort`, `lexsort`, `argsort`, `ndarray.sort`, `msort`, `sort_complex`, `partition`, `argpartition`* Searching: `argmax`, `nanargmax`, `argmin`, `nanargmin`, `argwhere`, `nonzero`, `flatnonzero`, `where`, `searchsorted`, `extract`* Counting: `count_nonzero`...
###Code
# let's play with them a bit
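# A quick illustrative example (added; not from the original notebook):
x = np.array([3, 1, 2, 5, 4])
print(np.sort(x))
print(np.argsort(x))          # indices that would sort x
print(np.argmax(x), np.argmin(x))
print(np.where(x > 2, x, 0))  # keep values > 2, zero out the rest
print(np.count_nonzero(x > 2))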
###Output
_____no_output_____
###Markdown
Polynomials * Power series: `numpy.polynomial.polynomial`* Polynomial class: `np.polynomial.Polynomial`* Basics: `polyval`, `polyval2d`, `polyval3d`, `polygrid2d`, `polygrid3d`, `polyroots`, `polyfromroots`* Fitting: `polyfit`, `polyvander`, `polyvander2d`, `polyvander3d`* Calculus: `polyder`, `polyint`* Algebra: `polyadd`, `polysub`, `polymul`, `polymulx`, `polydiv`, `polypow`* Miscellaneous: `polycompanion`, `polydomain`, `polyzero`, `polyone`, `polyx`, `polytrim`, `polyline`* Other polynomial functions: `Chebyshev`, `Legendre`, `Laguerre`, `Hermite`...
###Code
# let's play with them a bit
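# A quick illustrative example (added; not from the original notebook):
from numpy.polynomial import Polynomial
p = Polynomial([1, -3, 2])  # 1 - 3x + 2x**2
print(p(0.5))               # evaluate at x = 0.5
print(p.roots())            # roots: [0.5, 1.0]
print(p.deriv())            # derivative: -3 + 4x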
###Output
_____no_output_____
###Markdown
Linear algebra In what follows, anything inside `numpy.linalg` is prefixed with `LA`.* Vector and matrix products: `dot`, `vdot`, `inner`, `outer`, `matmul`, `tensordot`, `einsum`, `LA.matrix_power`, `kron`* Decompositions: `LA.cholesky`, `LA.qr`, `LA.svd`* Eigenvalues: `LA.eig`, `LA.eigh`, `LA.eigvals`, `LA.eigvalsh`* Norms and other numbers: `LA.norm`, `LA.cond`, `LA.det`, `LA.matrix_rank`, `LA.slogdet`, `trace`* Solving equations and inverting matrices: `LA.solve`, `LA.tensorsolve`, `LA.lstsq`, `LA.inv`, `LA.pinv`, `LA.tensorinv`Inside `scipy` there are more related things.
###Code
# let's play with them a bit
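# A quick illustrative example (added; not from the original notebook):
import numpy.linalg as LA
A = np.array([[3., 1.], [1., 2.]])
b = np.array([9., 8.])
x = LA.solve(A, b)  # solves A @ x = b
print(x, np.allclose(A @ x, b))
print(LA.det(A))    # determinant
w, v = LA.eig(A)    # eigenvalues and eigenvectors
print(w)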
###Output
_____no_output_____
###Markdown
Manipulating `ndarrays` `tile`, `hstack`, `vstack`, `dstack`, `hsplit`, `vsplit`, `dsplit`, `repeat`, `reshape`, `ravel`, `resize`,...
###Code
# let's play with them a bit
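# A quick illustrative example (added; not from the original notebook):
a = np.arange(6)
print(np.tile(a, 2))         # repeat the whole array
print(np.repeat(a, 2))       # repeat each element
m = a.reshape(2, 3)
print(m.ravel())             # flatten back to 1D
print(np.resize(a, (3, 4)))  # repeats the data to fill the new shape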
###Output
_____no_output_____
###Markdown
Modules of interest inside `numpy` Inside `numpy` we can find modules for:* Random numbers: `np.random`* FFT: `np.fft`* *Masked arrays*: `np.ma`* Polynomials: `np.polynomial`* Linear algebra: `np.linalg`* Matrices: `np.matlib`* ...All this functionality can be extended and improved using `scipy`. Matrix computations
###Code
a1 = np.repeat(2, 9).reshape(3, 3)
a2 = np.tile(2, (3, 3))
a3 = np.ones((3, 3), dtype=np.int) * 2
print(a1)
print(a2)
print(a3)
b = np.arange(1,4)
print(b)
print(a1.dot(b))
print(np.dot(a2, b))
print(a3 @ b) # only python version >= 3.5
###Output
_____no_output_____
###Markdown
We did the above using *ndarrays*, but `numpy` also offers a `matrix` data structure.
###Code
a_mat = np.matrix(a1)
a_mat
b_mat = np.matrix(b)
a_mat @ b_mat
a_mat @ b_mat.T
###Output
_____no_output_____ |
Reinforce_Experiments.ipynb | ###Markdown
$swf$ for 5 simulations, utilitarian
###Code
N_exps = 5
N = 1000
SUs = []
swf = utilitarian_swf
for i in range(N_exps):
SUs.append([run_data_coop_game(i, swf,N=N),
run_data_coop_game_with_regulator(i, swf, N=N),
run_data_coop_game_with_gaussian_regulator(i+55, swf,N=N)])
SUs = np.array(SUs)
colors = onp.asarray(['red', 'magenta', 'blue'])
smooth = 10
for i in range(3):
for j in range(N_exps):
plt.plot(np.convolve(SUs[j, i, :, 0], np.ones(smooth)/smooth, mode='valid'), color=colors[i], alpha=0.1)
plt.plot(np.convolve(SUs.mean(axis=0)[i, :, 0], np.ones(smooth)/smooth, mode='valid'), color=colors[i])
colors = onp.asarray(['red', 'magenta', 'blue'])
smooth = 50
for i in range(3):
for j in range(N_exps):
plt.plot(np.convolve(SUs[j, i, :, 0], np.ones(smooth)/smooth, mode='valid'), color=colors[i], alpha=0.1)
plt.plot(np.convolve(SUs.mean(axis=0)[i, :, 0], np.ones(smooth)/smooth, mode='valid'), color=colors[i])
plt.xlabel('time, $t$')
plt.ylabel('Social utility, $SU_t$')
from matplotlib.lines import Line2D
cmap = plt.cm.coolwarm
custom_lines = [Line2D([0], [0], color='red'),
Line2D([0], [0], color='magenta'),
Line2D([0], [0], color='blue')]
plt.legend(custom_lines,['No intervention', 'Regulator (Disc.)', 'Regulator (Gauss.)'], loc=2);
###Output
_____no_output_____
###Markdown
$swf$ for 5 simulations, rawlsian
###Code
N_exps = 5
SUs = []
swf = rawlsian_swf
for i in range(N_exps):
SUs.append([run_data_coop_game(i, swf, N=1000),
run_data_coop_game_with_regulator(i, swf, N=1000),
run_data_coop_game_with_gaussian_regulator(i, swf, N=1000)])
for i in range(N_exps):
SUs[i][2] = SUs[i][2].squeeze(-1)
SUs = np.array(SUs)
colors = onp.asarray(['red', 'magenta', 'blue'])
smooth = 20
for i in range(3):
for j in range(N_exps):
plt.plot(np.convolve(SUs[j, i, :], np.ones(smooth)/smooth, mode='valid'), color=colors[i], alpha=0.1)
plt.plot(np.convolve(SUs.mean(axis=0)[i, :], np.ones(smooth)/smooth, mode='valid'), color=colors[i])
plt.xlabel('time, $t$')
plt.ylabel('Rawlsian utility, $RU_t$')
from matplotlib.lines import Line2D
cmap = plt.cm.coolwarm
custom_lines = [Line2D([0], [0], color='red'),
Line2D([0], [0], color='magenta'),
Line2D([0], [0], color='blue')]
plt.legend(custom_lines,['No intervention', 'Regulator (Disc.)', 'Regulator (Gauss.)'], loc=2);
###Output
_____no_output_____ |
Simulations/HERA Memo A.The Impact of Feed Positinal Disorientation on the Antenna Angular Response.ipynb | ###Markdown
$\textbf{HERA Memo A. The Impact of Feed Positional Displacement On the Delay Power Spectrum}$ $\textbf{Introduction}$ $\textbf{Section. 2.1 : Antenna Angular Response Model}$The success of observing the faint 21 cm cosmological signal emitted by neutral hydrogen atoms lies in the perfect characterization of its final destination, i.e. each individual antenna element's signal chain. As the 21 cm cosmological signal, along with other radio signals, lands on the $\textbf{antenna dish surface}$, the $\textbf{dish deformation}$ due to gravity and strong winds, and the roughness of the $\textbf{dish surface}$, slightly deviate its path away from the dish focal point, leading to path delays, $\textbf{multi-reflection}$ between opposite sides of the dish, and/or feed-dish-vertex multi-reflection and spillover to adjacent antenna dishes. These delays introduce ripples in delay space which are indistinguishable from the 21 cm cosmological signal (ref, impact of multi-reflection on the 21 cm delay power spectrum). In this work, we explore the $\textbf{impact of feed positional disorientation}$ on the instrument angular response and position errors, and how these systematics affect the 21 cm power spectrum. In Section 2.1, we discuss the impact of feed disorientation on the instrument angular response due to strong wind and extreme temperature conditions. The visibility simulations are discussed in Section 2.2. $\textbf{Section. 2.1.1 : The Effect of Feed Positional Displacement due to Strong Wind}$Suppose a HERA-like antenna element, which has a feed cage hung by spring-and-rope from four poles; Figure 1 shows the cross-section. The magnitude of the force applied by a strong wind on a feed cage with surface area $\textbf{A}$, for wind moving at speed $\mid \textbf{v}\mid =v$ and with density $\rho$, is equal to:\begin{equation} F_w= \frac{1}{2}\rho \textbf{A}v^2\end{equation}ref: https://www.engineeringtoolbox.com/wind-load-d_1775.htmlBut we also know that this force will compress or stretch the springs holding the feed cage; the restoring force is given by Hooke's law,\begin{equation} F_k= -k_{steel}\Delta l\end{equation}where $k_{steel}$ is the spring constant of steel and $\Delta l$ is the displacement caused by the wind in the lateral direction for one of the four springs hooked to the feed cage. Equating the wind force and Hooke's force, the displacement of the feed position for one of the springs hooked to the cage is equal to:\begin{equation} \Delta l_w= -\frac{1}{2k_{steel}}\rho \textbf{A}v^2\end{equation}If $h_{feed}^0$ is the unperturbed height of the feed cage above the dish vertex, the lateral displacement of the feed due to strong wind will result in a new height, which is equal to:\begin{equation} h_{feed}^w (k_{steel},\rho,A,v)= h_{feed}^0 + \langle \Delta l_w \rangle\end{equation}where $\langle \Delta l_w \rangle$ is the average of the lateral displacement over some time $t$. $\textbf{Section. 2.1.2: The Effect of Feed Positional Displacement due to Extreme Temperature}$As discussed above, extreme weather conditions can displace the feed position, and this can indirectly affect the primary antenna angular response. In this section, we look at how high or low temperature results in feed positional displacement.
Suppose the feed, the spring, and the surroundings of the antenna are initially in thermal equilibrium at temperature $T_i$. If there is a significant change in the temperature of the surroundings, so that after some time the new equilibrium temperature is $T_f$, then the spring will undergo thermal expansion/shrinkage according to the linear expansion formula,\begin{equation} \Delta l_T= \alpha_{steel}l_0 (T_f-T_i)\end{equation}where $l_0$ is the original length of the spring prior to expansion/shrinkage and $\alpha_{steel}$ is the steel expansion coefficient. Again, this affects the height of the feed cage in a similar way as in the case of a strong wind push. The new height is\begin{equation} h_{feed}^T (\alpha_{steel},T_i,T_f)= \sqrt{(h_{feed}^0)^2 + \langle \Delta l_T \rangle^2}\end{equation}where $\langle \Delta l_T \rangle$ is the average of the lateral displacement over some time $t$. $\textbf{The Impact of Feed Positional Disorientation on Angular Response}$Suppose a parabolic dish is placed in the $xy$-plane such that the vertex of the dish coincides with the origin, with the focus a focal length $h_{feed}^0$ above the origin, and the shape of the dish given by $y= \frac{x^2}{4h_{feed}^0}$. The total path traveled by a light ray from height $h_{far}$ (the far-field distance $h_{far}=2\frac{D^2}{\lambda}$) straight above the dish ($\theta=0^o$), reflecting from the dish surface at point $P(x,y)$ to the focus at $F(0,h_{feed}(dh_x,dh_y))$, is \begin{equation} L_{Total}= x\sin(\theta) -y\cos(\theta) + h_{far} + \sqrt{ x^2 + (h_{feed}(dh_x,dh_y) -y)^2}\end{equation}\begin{equation}h_{feed}(dh_x,dh_y) =\sqrt{dh_{x}^2 + (h_{feed}^0 +dh_y)^2}\end{equation}where $dh_x$ and $dh_y$ are the lateral and axial feed positional displacements of the dish focal point given by the equations described above. The instrument response model for antenna $i$ with a dish diameter $D$ observing at frequency $\nu$ is\begin{equation} A_i(\theta,\lambda,dh_x,dh_y) = \frac{\mid E(\theta,\lambda,dh_x,dh_y)\mid^2}{Z_0}\end{equation}where $Z_0$ is the characteristic impedance of free space, $Z_0=377$ ohm. The electric field intensity measured at co-altitude angle $\theta$ is\begin{equation} E(\theta,\lambda,dh_x,dh_y)= 2\int_{0}^{D/2}\epsilon_0 e^{i\frac{2\pi}{\lambda}L_{Total}(x,y,\lambda,dh_x,dh_y)}\sqrt{1+ \Big(\frac{x}{2h_{feed}^0}\Big)^2}dx\end{equation} $\textbf{The Electric Field at Focus as a function of Lateral Positional Displacement for a Source at Zenith}$
###Code
from scipy.optimize import fmin_cg
import numpy as np
import matplotlib.pyplot as plt
import time
import get_ants_response
from scipy.integrate import quad as qd
#the feed displacement due to wind
#4.9~29.4 N/mm
#4.9~29.4 x10^3 N/m
#air density NC 1.18 kg/m^3 at 20 C
#HERA feed cage is made of cylinder 176 cm diameter and hieght of 36 cm,
#A = 2pi*r*h
def get_dl_wind(vwind,r_cage,h_cage,rho=1.18,k_steel =29.4e3):
"This function compute the letaral displacement of feed cage due to strong wind"
Feed_surfarea = 2.0*np.pi*r_cage*h_cage
dl_wind = -(1/(2*k_steel))*(Feed_surfarea/2)*rho*vwind**2
return dl_wind
# 1kt =0.514444 m/s
#9kt = x
#x =0.514444*9
vwind = 4.63 #m/s
print(abs(get_dl_wind(4.63, 1.76/2.0, 0.36))) #feed cage radius: 176 cm diameter / 2 = 0.88 m
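#An added sketch (not in the original notebook): the thermal counterpart of
#get_dl_wind, implementing dl_T = alpha_steel*l0*(Tf - Ti) from the markdown above.
#alpha_steel ~ 1.2e-5 per kelvin is an assumed typical value for steel.
def get_dl_temp(T_i, T_f, l0, alpha_steel=1.2e-5):
    "This function computes the change in spring length due to a temperature change"
    return alpha_steel*l0*(T_f - T_i)
#example: a 10 m spring-and-rope run warming from 5 C to 35 C
print(get_dl_temp(5.0, 35.0, 10.0))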
#the impact of feed positional displacement on the total power received at the focus
#the signal coming right above the feed, zenith
dy = np.arange(-0.1,0.1,0.001)
dx = np.arange(-0.1,0.1,0.001)
D= 14.6 #meters HERA dish diameter
lambda_= 3e8/(150*10**6) #meters
h_f = 2.0*D**2/lambda_ #farfield distance
h_0 = 4.5 # HERA dish feed height
y = lambda x : x**2/4.0/h_0 #dish surface
#focus dispacement
df = lambda dy,dx :np.sqrt((h_0 +dy)**2 + dx**2)
#total path travelled by a planewave from a source at far-field to the focal point
tot_path = lambda x,theta,dy,dx : abs(np.sin(theta)*x - np.cos(theta)*y(x) + h) + np.sqrt(x**2 + (df(dy,dx)- y(x))**2)
exp_ = lambda x,theta,dy,dx : np.exp(1j*2.0*(np.pi/lambda_)*tot_path(x,theta,dy,dx))
real_int = lambda x,theta,dy,dx : exp_(x,theta,dy,dx).real*np.sqrt(1.0 + (x/(2.0*h_0))**2)
imag_int = lambda x,theta,dy,dx : exp_(x,theta,dy,dx).imag*np.sqrt(1.0 + (x/(2.*h_0))**2)
h= h_f
power =[]
m = 0.0 #source at zenith: theta = 0
#integrate over half the dish and double: at zenith the integrand is symmetric about x = 0
tot_EF_at_Focus_xdir =[2.0*(qd(real_int,0.0,D/2,args=(m,0.0,dx[i]))[0] + 1j*qd(imag_int,0.0,D/2,args=(m,0.0,dx[i]))[0]) for i in range(dx.size)]
plt.figure(figsize=(10,10))
#plt.title('Total Electric Field at Focus for Lateral Feed Displacement')
plt.plot(dx/lambda_,np.abs(tot_EF_at_Focus_xdir))
plt.xlabel(r'$dx/\lambda$')
plt.ylabel('Total Electric Field at Focus [V/m]')
###Output
_____no_output_____
###Markdown
The total electric field at the feed point is independent of lateral displacement for $\mid dx/\lambda \mid \leq 0.02$ and then starts to decrease quadratically. The asymmetric shape we observe in the plot can result from the number of full wavelengths available as the feed moves toward/away from the dish vertex. For example, observing at 150 MHz, with a corresponding wavelength of 2 meters, for a HERA-like dish with a feed 4.5 meters above the dish vertex, a maximum of about 2.25 wavelengths can form between the dish surface and the feed point. The total electric field received therefore decreases more drastically as the feed moves toward the dish vertex than away from it, because relatively more wavelengths are available as the feed moves away from the dish vertex. $\textbf{The Electric Field at Focus as a function of Axial Positional Displacement for a Source at Zenith}$
###Code
m= 0.0
tot_EF_at_Focus_ydir =[qd(real_int,-D/2.0,D/2.0,args=(m,dy[i],0.0))[0] + 1j*qd(imag_int,-D/2.0,D/2.0,args=(m,dy[i],0.0))[0] for i in range(dy.size)]
plt.figure(figsize=(10,10))
#plt.title('Total Electric Field at Focus for Axial Feed Displacement')
plt.plot(dy/lambda_,np.abs(tot_EF_at_Focus_ydir))
plt.xlabel(r'$dy/\lambda$')
plt.ylabel('Total Electric Field at Focus [V/m]')
###Output
_____no_output_____
###Markdown
The total electric field at the feed point decreases quadratically as the feed displacement increases in either direction. However, there is an asymmetric shape in the plot of the total electric field at the feed point vs. feed positional displacement, as previously observed for the lateral feed positional displacement. This asymmetry is understood from the same reasoning discussed above. $\textbf{The Electric Field at Focus as a function of Lateral Positional Displacement for a Source Off-Zenith}$
###Code
dish_res = 2.0/14.0 #diffraction-limited resolution ~ lambda/D at 150 MHz
print(dish_res, 'radians', np.rad2deg(dish_res), 'degrees')
m = 2.0*dish_res #source offset from zenith: two resolution elements
tot_EF_at_Focus_xdir =[2.0*(qd(real_int,0.0,D/2,args=(m,0.0,dx[i]))[0] + 1j*qd(imag_int,0.0,D/2,args=(m,0.0,dx[i]))[0]) for i in range(dx.size)]
plt.figure(figsize=(10,10))
#plt.title('Total Electric Field at Focus for Lateral Feed Displacement')
plt.plot(dx/lambda_,np.abs(tot_EF_at_Focus_xdir))
plt.xlabel(r'$dx/\lambda$')
plt.ylabel('Total Electric Field at Focus [V/m]')
###Output
_____no_output_____
###Markdown
The general shape of the total electric field at the feed point is preserved even for waves coming in off-zenith. This may be because the lateral movement always sees the same amplitude of the incoming waves. However, the field strength dropped by a factor of roughly 10, and the flatness previously observed within $\mid dx/\lambda \mid \leq 0.02$ is no longer seen. $\textbf{The Electric Field at Focus as a function of Axial Positional Displacement for a Source Off-Zenith}$
###Code
tot_EF_at_Focus_ydir =[qd(real_int,-D/2.0,D/2.0,args=(m,dy[i],0.0))[0] + 1j*qd(imag_int,-D/2.0,D/2.0,args=(m,dy[i],0.0))[0] for i in range(dy.size)]
plt.figure(figsize=(10,10))
#plt.title('Total Electric Field at Focus for Axial Feed Displacement')
plt.plot(dy/lambda_,np.abs(tot_EF_at_Focus_ydir))
plt.xlabel(r'$dy/\lambda$')
plt.ylabel('Total Electric Field at Focus [V/m]')
###Output
_____no_output_____
###Markdown
The total electric field at the feed point exhibits a negative linear trend, and its total strength dropped by a factor of roughly 10. One can picture waves coming in at an angle: if the feed point is moved up, it sees fewer waves, whereas if the feed is moved inwards, it sees more, up to a shadowing angle. This results in an almost linearly decreasing trend. $\textbf{Antenna Angular Response without Feed Positional Displacement}$
###Code
#Antenna angular Response
D_hera= 14.0 # HERA dish diameter in meters
h_feed = 4.5 # HERA feed height in meters
freq_mid = 150.0 # HERA mid frequency in MHz
dx =0.0 #lateral positional displacement in meters
dy =0.0 # axial positional displacement in meters
theta = np.linspace(-np.pi/2,np.pi/2,1000) # zenith angle in radians
ant_resp_stong_winds_wto_feed_errors = get_ants_response.response_pattern(theta,D_hera,h_feed,freq_mid,dy,dx)[1]
#plot of normalize power pattern in dB
plt.figure(figsize=(10,10))
plt.plot(np.rad2deg(theta),np.log10(ant_resp_stong_winds_wto_feed_errors))
plt.xlabel('CO-ALTITUDE [deg]')
plt.ylabel('dB')
#the feed position precision is +/- 0.02m
#fp = 0.020m +/- 0.004
dy = 0.0 #no axial error; a random draw such as np.random.normal(0.0, 0.02) could be used instead
dx = 0.1 #a fixed 10 cm lateral error; a random draw could be used instead
theta = np.linspace(-np.pi/2,np.pi/2,1000)
ant_resp_stong_winds_wt_feed_errors = get_ants_response.response_pattern(theta,D_hera,h_feed,freq_mid,dy,dx)[1]
#test response pattern
plt.figure(figsize=(10,10))
plt.plot(np.rad2deg(theta),np.log10(ant_resp_stong_winds_wt_feed_errors))
plt.xlabel('CO-ALTITUDE [deg]')
plt.ylabel('dB')
#Antenna normalized power residuals
res = np.array(ant_resp_stong_winds_wt_feed_errors) - np.array(ant_resp_stong_winds_wto_feed_errors)
plt.figure(figsize=(10,10))
plt.plot(theta,np.log10(res))
plt.xlabel('CO-ALTITUDE [radians]')
plt.ylabel('dB')
theta_main =np.linspace(0.0,0.13,1000)
dy = np.arange(-0.10,0.10,0.1)
print("first null at 150 MHz is", np.rad2deg(0.136))
plt.figure(figsize=(10,10))
for i in range(dy.size):
main_beam = np.array(get_ants_response.response_pattern(theta_main,D_hera,h_feed,freq_mid,dy[i],0.0)[1])
# print np.real(main_beam)
plt.plot(theta_main,np.log10(np.real(main_beam)))
plt.xlabel('CO-ALTITUDE [radians]')
plt.ylabel('dB')
###Output
('first null at 150 MHz is', 7.792226013779197)
###Markdown
$\textbf{The Impact of Feed positional Displacement on the Beam Solid Angle. }$
###Code
# Beam effeciency at 150 MHz
# Beam efficiency solid angle
def beam_efficiency(theta,theta_main,dx,dy,freq):
"""his function calcuate the main beam solid angle given the theta (zenith angle i radians), feed positional
displacement in dx and dy (meters) and the observing frequency in MHz.
"""
#main beam solid angle as function of dx and dy
npower_mbeam_ydir = [np.abs(get_ants_response.response_pattern(theta_main,D_hera,h_feed,freq_mid,dy[i],0.0)[1]) for i in range(dy.size)]
omega_mbeam_ydir = [np.sum(2.0*np.pi*np.array(npower_mbeam_ydir[i])*np.sin(theta_main)) for i in range(len(npower_mbeam_ydir))]
npower_mbeam_xdir = [np.abs(get_ants_response.response_pattern(theta_main,D_hera,h_feed,freq_mid,0.0,dx[i])[1]) for i in range(dx.size)]
omega_mbeam_xdir = [np.sum(2.0*np.pi*np.array(npower_mbeam_xdir[i])*np.sin(theta_main)) for i in range(len(npower_mbeam_xdir))]
#total beam solid angle as function of dx and dy
npower_beam_ydir = [np.abs(get_ants_response.response_pattern(theta,D_hera,h_feed,freq_mid,dy[i],0.0)[1]) for i in range(dy.size)]
omega_beam_ydir = [np.sum(2.0*np.pi*np.array(npower_beam_ydir[i])*np.sin(theta)) for i in range(len(npower_beam_ydir))]
npower_beam_xdir = [np.abs(get_ants_response.response_pattern(theta,D_hera,h_feed,freq_mid,0.0,dx[i])[1]) for i in range(dx.size)]
omega_beam_xdir = [np.sum(2.0*np.pi*np.array(npower_beam_xdir[i])*np.sin(theta)) for i in range(len(npower_beam_xdir))]
beam_effic_ydir = np.array(omega_mbeam_ydir)/np.array(omega_beam_ydir)
beam_effic_xdir = np.array(omega_mbeam_xdir)/np.array(omega_beam_xdir)
return [omega_mbeam_ydir,omega_beam_ydir,beam_effic_ydir,omega_mbeam_xdir,omega_beam_xdir,beam_effic_xdir]
theta = np.linspace(0.0,np.pi,1000)
dy = np.arange(-0.10,0.10,0.01)
dx = np.arange(-0.10,0.10,0.01)
beam_data = beam_efficiency(theta,theta_main,dx,dy,150)
#The main beam solid angle as function of dx
plt.figure(figsize=(10,10))
plt.plot(dx,beam_data[3])
plt.xlabel('dx [meters]')
plt.ylabel(r'$\Omega_{MB}$ $[sr]$')
# The total solid angle as function of dx
plt.figure(figsize=(10,10))
plt.plot(dx,beam_data[4])
plt.xlabel('dx [meters]')
plt.ylabel(r'$\Omega_{A}$ $[sr]$')
# The main beam solid angle as function of dy
plt.figure(figsize=(10,10))
plt.plot(dy,beam_data[0])
plt.xlabel('dy [meters]')
plt.ylabel(r'$\Omega_{MB}$ $[sr]$')
plt.figure(figsize=(10,10))
plt.plot(dy,beam_data[1])
plt.xlabel('dy [meters]')
plt.ylabel(r'$\Omega_{A}$ $[sr]$')
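#An added plot (not in the original): the beam efficiency itself, eta = Omega_MB/Omega_A,
#already returned by beam_efficiency as beam_data[2] (axial, dy) and beam_data[5] (lateral, dx).
plt.figure(figsize=(10,10))
plt.plot(dx,beam_data[5],label='lateral (dx)')
plt.plot(dy,beam_data[2],label='axial (dy)')
plt.xlabel('feed displacement [meters]')
plt.ylabel(r'$\Omega_{MB}/\Omega_{A}$')
plt.legend()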
###Output
_____no_output_____ |
01_basic_models/linear_regression.ipynb | ###Markdown
Linear Regression
###Code
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import os
# The data and the data preparation steps are taken from :
# https://github.com/zmostafa/handson-ml/blob/master/01_the_machine_learning_landscape.ipynb
datapath = os.path.join("../datasets", "lifesat", "")
def prepare_country_stats(oecd_bli, gdp_per_capita):
oecd_bli = oecd_bli[oecd_bli["INEQUALITY"]=="TOT"]
oecd_bli = oecd_bli.pivot(index="Country", columns="Indicator", values="Value")
gdp_per_capita.rename(columns={"2015": "GDP per capita"}, inplace=True)
gdp_per_capita.set_index("Country", inplace=True)
full_country_stats = pd.merge(left=oecd_bli, right=gdp_per_capita,
left_index=True, right_index=True)
full_country_stats.sort_values(by="GDP per capita", inplace=True)
remove_indices = [0, 1, 6, 8, 33, 34, 35]
keep_indices = list(set(range(36)) - set(remove_indices))
return full_country_stats[["GDP per capita", 'Life satisfaction']].iloc[keep_indices]
# Load the data
oecd_bli = pd.read_csv(datapath + "oecd_bli_2015.csv", thousands=',')
gdp_per_capita = pd.read_csv(datapath + "gdp_per_capita.csv",thousands=',',delimiter='\t',
encoding='latin1', na_values="n/a")
# Prepare the data
country_stats = prepare_country_stats(oecd_bli, gdp_per_capita)
train_X = np.c_[country_stats["GDP per capita"]]
train_Y = np.c_[country_stats["Life satisfaction"]]
n_samples = len(train_X)
# Visualize the data
country_stats.plot(kind='scatter', x="GDP per capita", y='Life satisfaction')
plt.show()
# Training Parameters
learning_rate = 0.01
training_epochs = 1000
display_step = 50
class LinearRegressionModel(object):
def __init__(self):
# Define Variables for Weights and Bias
self.W = tf.Variable(np.random.randn())
self.b = tf.Variable(np.random.randn())
def __call__ (self,x):
# Define Linear Model
return self.W * x + self.b
model = LinearRegressionModel()
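# A quick sanity check (added, not in the original): with TF2 eager execution,
# which the GradientTape training below also assumes, the untrained model can be
# called directly; the output values depend on the random initialization.
# float32 matches the dtype of the tf.Variable weights.
print(model(np.array([1.0, 2.0], dtype=np.float32)).numpy())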
# Define Cost Function : Mean squared error
def loss(pred , label):
return tf.reduce_sum(tf.pow(pred-label, 2))
# Optimizer : Gradient descent (kept for reference; the train() function below
# applies the update step manually, so this optimizer object is never used)
optimizer = tf.compat.v1.train.GradientDescentOptimizer(learning_rate)
def train(model, inputs, outputs, learning_rate):
with tf.GradientTape() as t:
current_loss = loss(model(inputs), outputs)
dW, db = t.gradient(current_loss, [model.W, model.b])
model.W.assign_sub(learning_rate * dW)
model.b.assign_sub(learning_rate * db)
# Initial cost, before optimizing
print("Initial cost= {:.9f}".format(
    float(loss(model(train_X), train_Y))),
    "W=", model.W.numpy(), "b=", model.b.numpy())
# Training
# for step in range(training_epochs):
# # optimizer.apply_gradiants(grad(pred,train_Y))
# with tf.GradientTape() as tape:
# grads = tape.gradient(cost, [W,b])
# optimizer.apply_gradients(zip(grads, [W,b]))
# if (step + 1) % display_step == 0 or step == 0:
# print("Epoch:", '%04d' % (step + 1), "cost=",
# "{:.9f}".format(cost),
# "W=", W.numpy(), "b=", b.numpy())
# history.append(loss.numpy().mean())
# plotter.plot(history)
# Collect the history of W-values and b-values to plot later
Ws, bs = [], []
epochs = range(2)
for epoch in epochs:
Ws.append(model.W.numpy())
bs.append(model.b.numpy())
current_loss = loss(model(train_X), train_Y)
train(model, train_X, train_Y, learning_rate=0.1)
print('Epoch %2d: W=%1.2f b=%1.2f, loss=%2.5f' %
(epoch, Ws[-1], bs[-1], current_loss))
# Let's plot it all
# plt.plot(epochs, Ws, 'r',
# epochs, bs, 'b')
# plt.legend(['W', 'b', 'True W', 'True b'])
# plt.show()
# Graphic display
plt.plot(train_X, train_Y, 'ro', label='Original data')
plt.plot(train_X, np.array(Ws[-1] * train_X + bs[-1]), label='Fitted line')
plt.legend()
plt.show()
###Output
_____no_output_____ |
2ndMeetup-Resources/Demo/TensorFlow/Practice/01_Basics/.ipynb_checkpoints/02_UnderstandingTensorWorld-checkpoint.ipynb | ###Markdown
Hello, Tensor! © Jubeen Shah 2018 Hola! Welcome to `J.S Codes` jupyter notebooks for TensorFlow! Understanding tensors: In TensorFlow, we do not have the ideas of integers, floats, or strings. Instead, these values are encapsulated in an object known as a `tensor`. `import tensorflow as tf`, `hello_constant = tf.constant("Hello World!")`, `with tf.Session() as sess: output = sess.run(hello_constant); print(output)` In the above code, `hello_constant = tf.constant("Hello World!")` is the 0-dimensional string tensor that we created. But tensors can have other dimensions as well.
###Code
import tensorflow as tf
# X is 0-dimensional int32 tensor
X = tf.constant(1)
# Y is a 1-dimensional int32 tensor
Y = tf.constant([1,2,3])
# Z is a 2-dimensional int32 tensor
Z = tf.constant([ [1,2,3], [4,5,6]])
X
Y
Z
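# An added illustration (not in the original): the shapes and ranks of the
# constants can be inspected without running a session.
print(X.shape, Y.shape, Z.shape)  # (), (3,), (2, 3)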
###Output
_____no_output_____
###Markdown
`tf.constant()` is a TensorFlow operation that returns a constant tensor, i.e. an immutable tensor. Session: TensorFlow's API is built around the notion of [computational graphs](https://www.tensorflow.org/programmers_guide/graphs), a way of visualizing a mathematical process. `with tf.Session() as sess: output = sess.run(hello_constant); print(output)` The code has already created the tensor, `hello_constant`, in the previous lines. The next step is to evaluate the tensor in a session. The code creates a session instance, `sess`, using `tf.Session`. The `sess.run()` function then evaluates the tensor and returns the results.
###Code
hello_constant = tf.constant("Hello World!")
with tf.Session() as sess:
output = sess.run(hello_constant)
print(output)
###Output
b'Hello World!'
|
examples/tutorials/translations/marathi/Part 10 - Federated Learning with Secure Aggregation.ipynb | ###Markdown
Part 10: Federated Learning with Encrypted Gradient Aggregation. In the last few sections, we have been learning about encrypted computation by building several simple programs. In this section, we return to the [Federated Learning Demo of Part 4](https://github.com/OpenMined/PySyft/blob/dev/examples/tutorials/Part%204/%20/%20Federated%20Learning%20-29%20Trusted%20Aggregator.ipynb), where we had a "trusted aggregator" responsible for averaging the model updates from multiple workers. We will now use our new tools for encrypted computation to remove this trusted aggregator, because it is less than ideal: it assumes we can find someone trustworthy enough to have access to this sensitive information, which is not always the case. Thus, in this notebook, we will show how SMPC can be used for secure aggregation without needing a "trusted aggregator". Authors:- Theo Ryffel - Twitter: [@theoryffel](https://twitter.com/theoryffel)- Andrew Trask - Twitter: [@iamtrask](https://twitter.com/iamtrask) Translators/Editors:- Krunal Kshirsagar - Twitter: [@krunal_wrote](https://twitter.com/krunal_wrote) - Github: [@Noob-can-Compile](https://github.com/Noob-can-Compile) Section 1: Normal Federated Learning. First, here is some code that performs classic federated learning on the Boston Housing dataset. This section of code is broken into several parts. Setup
###Code
import pickle
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import TensorDataset, DataLoader
class Parser:
"""Parameters for training"""
def __init__(self):
self.epochs = 10
self.lr = 0.001
self.test_batch_size = 8
self.batch_size = 8
self.log_interval = 10
self.seed = 1
args = Parser()
torch.manual_seed(args.seed)
kwargs = {}
###Output
_____no_output_____
###Markdown
Loading the dataset
###Code
with open('../data/BostonHousing/boston_housing.pickle','rb') as f:
((X, y), (X_test, y_test)) = pickle.load(f)
X = torch.from_numpy(X).float()
y = torch.from_numpy(y).float()
X_test = torch.from_numpy(X_test).float()
y_test = torch.from_numpy(y_test).float()
# preprocessing
mean = X.mean(0, keepdim=True)
dev = X.std(0, keepdim=True)
mean[:, 3] = 0. # the feature at column 3 is binary,
dev[:, 3] = 1. # so we don't standardize it
X = (X - mean) / dev
X_test = (X_test - mean) / dev
train = TensorDataset(X, y)
test = TensorDataset(X_test, y_test)
train_loader = DataLoader(train, batch_size=args.batch_size, shuffle=True, **kwargs)
test_loader = DataLoader(test, batch_size=args.test_batch_size, shuffle=True, **kwargs)
###Output
_____no_output_____
###Markdown
Neural network structure
###Code
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(13, 32)
self.fc2 = nn.Linear(32, 24)
self.fc3 = nn.Linear(24, 1)
def forward(self, x):
x = x.view(-1, 13)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
model = Net()
optimizer = optim.SGD(model.parameters(), lr=args.lr)
###Output
_____no_output_____
###Markdown
Hooking PyTorch
###Code
import syft as sy
hook = sy.TorchHook(torch)
bob = sy.VirtualWorker(hook, id="bob")
alice = sy.VirtualWorker(hook, id="alice")
james = sy.VirtualWorker(hook, id="james")
compute_nodes = [bob, alice]
###Output
_____no_output_____
###Markdown
**Send data to the workers** Normally they would already have it; it is only for demo purposes that we send it manually here
###Code
train_distributed_dataset = []
for batch_idx, (data,target) in enumerate(train_loader):
data = data.send(compute_nodes[batch_idx % len(compute_nodes)])
target = target.send(compute_nodes[batch_idx % len(compute_nodes)])
train_distributed_dataset.append((data, target))
###Output
_____no_output_____
###Markdown
Training function
###Code
def train(epoch):
model.train()
for batch_idx, (data,target) in enumerate(train_distributed_dataset):
worker = data.location
model.send(worker)
optimizer.zero_grad()
# update the model
pred = model(data)
loss = F.mse_loss(pred.view(-1), target)
loss.backward()
optimizer.step()
model.get()
if batch_idx % args.log_interval == 0:
loss = loss.get()
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * data.shape[0], len(train_loader),
100. * batch_idx / len(train_loader), loss.item()))
###Output
_____no_output_____
###Markdown
Testing function
###Code
def test():
model.eval()
test_loss = 0
for data, target in test_loader:
output = model(data)
test_loss += F.mse_loss(output.view(-1), target, reduction='sum').item() # sum up batch loss
pred = output.data.max(1, keepdim=True)[1] # get the index of the max log-probability
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}\n'.format(test_loss))
###Output
_____no_output_____
###Markdown
Training the model
###Code
import time
t = time.time()
for epoch in range(1, args.epochs + 1):
train(epoch)
total_time = time.time() - t
print('Total', round(total_time, 2), 's')
###Output
_____no_output_____
###Markdown
Computing the performance
###Code
test()
###Output
_____no_output_____
###Markdown
Section 2: Adding Encrypted Aggregation. Now we are going to modify this example slightly to aggregate gradients using encryption. The main difference is one or two lines of code in the `train()` function, which we will point out. For the moment, let's re-process our data and initialize models for Bob and Alice.
###Code
remote_dataset = (list(),list())
train_distributed_dataset = []
for batch_idx, (data,target) in enumerate(train_loader):
data = data.send(compute_nodes[batch_idx % len(compute_nodes)])
target = target.send(compute_nodes[batch_idx % len(compute_nodes)])
remote_dataset[batch_idx % len(compute_nodes)].append((data, target))
def update(data, target, model, optimizer):
model.send(data.location)
optimizer.zero_grad()
pred = model(data)
loss = F.mse_loss(pred.view(-1), target)
loss.backward()
optimizer.step()
return model
bobs_model = Net()
alices_model = Net()
bobs_optimizer = optim.SGD(bobs_model.parameters(), lr=args.lr)
alices_optimizer = optim.SGD(alices_model.parameters(), lr=args.lr)
models = [bobs_model, alices_model]
params = [list(bobs_model.parameters()), list(alices_model.parameters())]
optimizers = [bobs_optimizer, alices_optimizer]
###Output
_____no_output_____
###Markdown
Building our training logic. The only **real** difference is in this training method. Let's walk through it step by step. Part A: Train:
###Code
# this is selecting which batch to train on
data_index = 0
# update remote models
# we could iterate this multiple times before proceeding, but we're only iterating once per worker here
for remote_index in range(len(compute_nodes)):
data, target = remote_dataset[remote_index][data_index]
models[remote_index] = update(data, target, models[remote_index], optimizers[remote_index])
###Output
_____no_output_____
###Markdown
Part B: Encrypted Aggregation
###Code
# create a list where we'll deposit our encrypted model average
new_params = list()
# iterate through each parameter
for param_i in range(len(params[0])):
# for each worker
spdz_params = list()
for remote_index in range(len(compute_nodes)):
# select the identical parameter from each worker and copy it
copy_of_parameter = params[remote_index][param_i].copy()
# since SMPC can only work with integers (not floats), we need
# to use Integers to store decimal information. In other words,
# we need to use "Fixed Precision" encoding.
fixed_precision_param = copy_of_parameter.fix_precision()
# now we encrypt it on the remote machine. Note that
# fixed_precision_param is ALREADY a pointer. Thus, when
# we call share, it actually encrypts the data that the
# data is pointing TO. This returns a POINTER to the
# MPC secret shared object, which we need to fetch.
encrypted_param = fixed_precision_param.share(bob, alice, crypto_provider=james)
# now we fetch the pointer to the MPC shared value
param = encrypted_param.get()
# save the parameter so we can average it with the same parameter
# from the other workers
spdz_params.append(param)
# average params from multiple workers, fetch them to the local machine
# decrypt and decode (from fixed precision) back into a floating point number
new_param = (spdz_params[0] + spdz_params[1]).get().float_precision()/2
# save the new averaged parameter
new_params.append(new_param)
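# An illustrative check (added, not in the original): fixed-precision secret
# sharing round-trips a tensor through encryption and decryption, using the
# same PySyft calls as the aggregation loop above.
x = torch.tensor([0.1, 0.2, 0.3])
x_enc = x.fix_precision().share(bob, alice, crypto_provider=james)
print(x_enc.get().float_precision())  # ~ tensor([0.1000, 0.2000, 0.3000])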
###Output
_____no_output_____
###Markdown
Part C: Cleanup
###Code
with torch.no_grad():
for model in params:
for param in model:
param *= 0
for model in models:
model.get()
for remote_index in range(len(compute_nodes)):
for param_index in range(len(params[remote_index])):
params[remote_index][param_index].set_(new_params[param_index])
###Output
_____no_output_____
###Markdown
Let's put it all together!! Now that we know each step, we can put it all together in one training loop!
###Code
def train(epoch):
for data_index in range(len(remote_dataset[0])-1):
# update remote models
for remote_index in range(len(compute_nodes)):
data, target = remote_dataset[remote_index][data_index]
models[remote_index] = update(data, target, models[remote_index], optimizers[remote_index])
# encrypted aggregation
new_params = list()
for param_i in range(len(params[0])):
spdz_params = list()
for remote_index in range(len(compute_nodes)):
spdz_params.append(params[remote_index][param_i].copy().fix_precision().share(bob, alice, crypto_provider=james).get())
new_param = (spdz_params[0] + spdz_params[1]).get().float_precision()/2
new_params.append(new_param)
# cleanup
with torch.no_grad():
for model in params:
for param in model:
param *= 0
for model in models:
model.get()
for remote_index in range(len(compute_nodes)):
for param_index in range(len(params[remote_index])):
params[remote_index][param_index].set_(new_params[param_index])
def test():
models[0].eval()
test_loss = 0
for data, target in test_loader:
output = models[0](data)
test_loss += F.mse_loss(output.view(-1), target, reduction='sum').item() # sum up batch loss
pred = output.data.max(1, keepdim=True)[1] # get the index of the max log-probability
test_loss /= len(test_loader.dataset)
print('Test set: Average loss: {:.4f}\n'.format(test_loss))
t = time.time()
for epoch in range(args.epochs):
print(f"Epoch {epoch + 1}")
train(epoch)
test()
total_time = time.time() - t
print('Total', round(total_time, 2), 's')
###Output
_____no_output_____ |
01 Linear Regression (complete).ipynb | ###Markdown
Slope coefficient:
###Code
regression.coef_ # theta_1
#Intercept
regression.intercept_
plt.figure(figsize=(10,6))
plt.scatter(X, y, alpha=0.3)
# Adding the regression line here:
plt.plot(X, regression.predict(X), color='red', linewidth=3)
plt.title('Film Cost vs Global Revenue')
plt.xlabel('Production Budget $')
plt.ylabel('Worldwide Gross $')
plt.ylim(0, 3000000000)
plt.xlim(0, 450000000)
plt.show()
#Getting r square from Regression
regression.score(X, y)
###Output
R-Square is 0.5496
|
aal close price prediction decision tree project.ipynb | ###Markdown
Computing the probability for the following year and viewing the exact difference
###Code
pred_new = lm.predict(df2_x)
pred_new.shape
ORG = df2_y
PRED = pred_new
#print('original value:', ORG)
#print("predicted:\n", PRED.reshape((487,1)))
print('Difference :', abs(ORG - PRED).mean())
print('Linear Model Prediction Accuracy:', (100-100*abs(ORG - PRED)/ORG).mean())
from sklearn import metrics
print('MAE:', metrics.mean_absolute_error(y_test, pred))
print('MSE:', metrics.mean_squared_error(y_test, pred))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, pred)))
from sklearn.tree import DecisionTreeRegressor
dm = DecisionTreeRegressor()
dm.fit(X_train, y_train)
pred_dm = dm.predict(X_test)
pred_dm
pyplot.scatter(y_test, pred_dm)
sns.distplot((y_test - pred_dm), bins = 50) # distribution of the residuals
# The mean squared error
print("Mean squared error: %.2f" % np.mean((pred_dm - y_test) ** 2))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % dm.score(X_test, y_test))
pred_new_dm = dm.predict(df2_x)
pred_new_dm.shape
ORG = df2_y
PRED1 = pred_new_dm
#print('original value:', ORG)
#print("predicted:\n", PRED.reshape((487,1)))
print('Difference :', abs(ORG - PRED1).mean())
print('Decision Tree Prediction Accuracy:', (100-100*abs(ORG - PRED1)/ORG).mean())
print('MAE:', metrics.mean_absolute_error(y_test, pred_dm))
print('MSE:', metrics.mean_squared_error(y_test, pred_dm))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, pred_dm)))
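# An added sketch (not in the original), assuming X_train is a pandas DataFrame
# as produced by the train/test split earlier in the notebook: a fitted
# DecisionTreeRegressor exposes feature_importances_, showing which inputs
# drive the close-price prediction.
import pandas as pd
print(pd.Series(dm.feature_importances_, index=X_train.columns).sort_values(ascending=False))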
print('Decision Tree Prediction Accuracy:', (100-100*abs(ORG - PRED1)/ORG).mean())
print('Linear Model Prediction Accuracy:', (100-100*abs(ORG - PRED)/ORG).mean())
###Output
Decision Tree Prediction Accuracy: 94.66345415828546
Linear Model Prediction Accuracy: 96.9090592872311
|
Sample/Keras/Keras-LSTM-PredictiveMaintenance-datastore-Hyperdrive.ipynb | ###Markdown
Time-series modeling to predict the Remaining Useful Life (RUL) of equipment. In this notebook, we use the Machine Learning Compute environment of Azure Machine Learning service, which offers a rich set of compute resources, to run deep learning (LSTM) at high speed. We build a time-series model that predicts the remaining useful life of equipment. Approaches to failure prediction: there are various approaches to failure prediction; representative ones are listed below. This notebook adopts the approach of building a deep learning model that predicts the remaining useful life (RUL) of equipment. Whatever the approach, the important point is to predict the signs of an impending failure, not the failure itself. Data used. Connecting to the Azure ML Workspace: connect to the Azure Machine Learning service workspace.
###Code
from azureml.core import Workspace, Experiment
subscription_id = '9c0f91b8-eb2f-484c-979c-15848c098a6b'
resource_group = 'mlservice'
workspace_name = 'azureml'
workspace = Workspace(subscription_id, resource_group, workspace_name)
###Output
_____no_output_____
###Markdown
Setting the experiment name
###Code
experiment = Experiment(workspace = workspace, name = "lstm-rul-aml")
###Output
_____no_output_____
###Markdown
Uploading data to the cloud: upload the data used for training from on-premises to the cloud
###Code
ds = workspace.get_default_datastore()
ds.upload(src_dir='./data', target_path='data', overwrite=False, show_progress=True)
###Output
Uploading an estimated of 2 files
Target already exists. Skipping upload for data/test.csv
Target already exists. Skipping upload for data/train.csv
Uploaded 0 files
###Markdown
Preparing the training code
###Code
import os
project_folder = "./keras-lstm"
os.makedirs(project_folder, exist_ok=True)
%%writefile {project_folder}/keras_lstm.py
import tensorflow as tf
from tensorflow.python.keras.layers import Conv2D, MaxPooling2D
from tensorflow.python.keras.layers import Activation, Dropout, Flatten, Input, Dense, Dropout, LSTM
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.utils import plot_model
from tensorflow.keras.callbacks import Callback
import os
import pandas as pd
import numpy as np
from azureml.core import Run
from azureml.core import Workspace, Dataset
from keras.utils import plot_model
import argparse
#from keras import initializers, regularizers, constraints, optimizers, layers, callbacks
np.random.seed(1234)
PYTHONHASHSEED = 0
from azureml.core import Run
run = Run.get_context()
parser = argparse.ArgumentParser(description='keras lstm example:')
parser.add_argument('--epochs', '-e', type=int, default=10, help='Number of sweeps over the dataset to train')
parser.add_argument('--batchsize', '-b', type=int, default=32, help='Number of images in each mini-batch')
parser.add_argument('--dataset', '-d', dest='data_folder',help='The datastore')
args = parser.parse_args()
train_df = pd.read_csv(args.data_folder+"/data/train.csv", sep=",", header=0)
train_df['RUL'] = train_df['RUL'].astype(float)
test_df = pd.read_csv(args.data_folder+"/data/test.csv", sep=",", header=0)
train_df['RUL'] = train_df['RUL'].astype(float)
sequence_length = 50
def gen_sequence(id_df, seq_length, seq_cols):
#get the values of the specified columns
data_array = id_df[seq_cols].values
#num_elements : number of rows for the given id (for id = 1, it is 192)
num_elements = data_array.shape[0]
# for id = 1, zip from both range(0, 142) & range(50, 192)
for start, stop in zip(range(0, num_elements-seq_length), range(seq_length, num_elements)):
#print(start,stop)
yield data_array[start:stop, :]
# extract the columns used as features
sensor_cols = ['s' + str(i) for i in range(1,22)]
sequence_cols = ['setting1', 'setting2', 'setting3', 'cycle_norm']
sequence_cols.extend(sensor_cols)
# create sequences from the training data
seq_gen = (list(gen_sequence(train_df[train_df['id']==id], sequence_length, sequence_cols)) for id in train_df['id'].unique())
seq_array = np.concatenate(list(seq_gen)).astype(np.float32)
# function to generate labels
def gen_labels(id_df, seq_length, label):
data_array = id_df[label].values
num_elements = data_array.shape[0]
return data_array[seq_length:num_elements, :]
# generate labels
label_gen = [gen_labels(train_df[train_df['id']==id], sequence_length, ['label1'])
for id in train_df['id'].unique()]
label_array = np.concatenate(label_gen).astype(np.float32)
epochs=args.epochs
batch_size=args.batchsize
validation_split=0.05
# Hyper-Parameter
run.log("エポック数",epochs)
run.log("バッチサイズ",batch_size)
run.log("検証データ分割",validation_split)
class RunCallback(tf.keras.callbacks.Callback):
def __init__(self, run):
self.run = run
def on_epoch_end(self, batch, logs={}):
print("test")
self.run.log(name="training_loss", value=float(logs.get('loss')))
self.run.log(name="validation_loss", value=float(logs.get('val_loss')))
self.run.log(name="training_acc", value=float(logs.get('acc')))
self.run.log(name="validation_acc", value=float(logs.get('val_acc')))
callbacks = list()
callbacks.append(RunCallback(run))
# define the model network
nb_features = seq_array.shape[2]
nb_out = label_array.shape[1]
print("nb_features:",seq_array.shape[2])
print("nb_out:",label_array.shape[1])
model = Sequential()
model.add(LSTM(
input_shape=(sequence_length, nb_features),
units=100,
return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(
units=50,
return_sequences=False))
model.add(Dropout(0.2))
model.add(Dense(units=nb_out, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(x = seq_array, y = label_array, epochs=epochs, batch_size=batch_size, validation_split=validation_split, verbose=1,
callbacks = callbacks)
# training metrics
scores = model.evaluate(seq_array, label_array, verbose=1, batch_size=200)
run.log("損失",scores[0])
run.log("モデル精度", scores[1])
os.makedirs('./outputs/model', exist_ok=True)
model.save_weights('./outputs/mnist_mlp_weights.h5')
###Output
Overwriting ./keras-lstm/keras_lstm.py
###Markdown
Machine Learning Compute settings: configure Machine Learning Compute. For GPU, specify **gpucluster**; for CPU, specify **cpucluster**.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
compute_target = ComputeTarget(workspace,"gpucluster")
#compute_target = ComputeTarget(ws,"cpucluster")
###Output
_____no_output_____
###Markdown
Model training settings: configure the TensorFlow Estimator. When training the model on GPU, set use_gpu = True. If only CPU is available, remove this parameter or set use_gpu=False.
###Code
from azureml.train.dnn import TensorFlow
from azureml.train.estimator import Estimator
script_params = {
'--dataset': ds.as_mount()
}
estimator = TensorFlow(source_directory=project_folder,
compute_target=compute_target,
entry_script='keras_lstm.py',
script_params=script_params,
framework_version = '1.13',
pip_packages = ['keras'],
)
# estimator = Estimator(source_directory=project_folder,
# compute_target=compute_target,
# entry_script='keras_lstm.py',
# script_params=script_params,
# pip_packages = ['pandas','tensorflow==2.0.0','keras'],
# )
###Output
WARNING - 'gpu_support' is no longer necessary; AzureML now automatically detects and uses nvidia docker extension when it is available. It will be removed in a future release.
WARNING - 'gpu_support' is no longer necessary; AzureML now automatically detects and uses nvidia docker extension when it is available. It will be removed in a future release.
###Markdown
Starting the run: following the TensorFlow Estimator settings defined above, a training environment is built and model training begins.
###Code
run = experiment.submit(estimator)
print(run)
from azureml.widgets import RunDetails
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
Confirm that the model run completed successfully, then move on. Registering the model
###Code
run.get_file_names()
model = run.register_model(model_name = 'RUL-lstm-keras', model_path = 'outputs/mnist_mlp_weights.h5',tags = {'area': "turbine predictive maintenance", 'type': "lstm"})
print(model.name, model.id, model.version, sep = '\t')
# run.get_details()
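# An added sketch (not in the original): the registered model can later be
# looked up by name in the workspace and downloaded locally.
from azureml.core.model import Model
retrieved = Model(workspace, name='RUL-lstm-keras')
retrieved.download(target_dir='.', exist_ok=True)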
###Output
_____no_output_____
###Markdown
Hyperparameter tuning with Hyperdrive: using Machine Learning Compute, run hyperparameter tuning distributed across multiple servers. Here we use Random Search.
###Code
from azureml.train.hyperdrive.runconfig import HyperDriveConfig
from azureml.train.hyperdrive.sampling import RandomParameterSampling
from azureml.train.hyperdrive.policy import BanditPolicy
from azureml.train.hyperdrive.run import PrimaryMetricGoal
from azureml.train.hyperdrive.parameter_expressions import choice
param_sampling = RandomParameterSampling( {
"--batchsize": choice(32, 64, 128, 256),
"--epochs": choice(5, 10, 20, 40, 80)
}
)
hyperdrive_run_config = HyperDriveConfig(estimator=estimator,
hyperparameter_sampling=param_sampling,
primary_metric_name='validation_acc',
primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
max_total_runs=4,
max_concurrent_runs=4)
###Output
_____no_output_____
###Markdown
Starting the run
###Code
hyperdrive_run = experiment.submit(hyperdrive_run_config)
from azureml.widgets import RunDetails
RunDetails(hyperdrive_run).show()
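# An added follow-up (not in the original): once the sweep finishes, fetch the
# best run and inspect its hyperparameters and metrics.
best_run = hyperdrive_run.get_best_run_by_primary_metric()
print(best_run.get_details()['runDefinition']['arguments'])
print(best_run.get_metrics())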
###Output
_____no_output_____ |
Forecasting/8_2_Poisson_regression.ipynb | ###Markdown
Poisson Regression. Kirill Zakharov, 2021
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
###Output
_____no_output_____
###Markdown
1. Reading and preparing the data. Consider data on the number of cyclists. The number of cyclists depends on the weather conditions on a given day: the worse the weather, the fewer riders. As features we take:- the maximum temperature on the given day (F);- the minimum temperature on the given day (F);- the amount of precipitation.
###Code
data = pd.read_csv('data/nyc_bicyclist_counts.csv', index_col=['Date'], parse_dates=True)
data.head()
###Output
_____no_output_____
###Markdown
The target variable – `'BB_COUNT'` – contains only positive integers, which must be taken into account when choosing a predictive model.
###Code
data['BB_COUNT'].plot(figsize=(12,5))
plt.show()
###Output
_____no_output_____
###Markdown
Besides the factors above, the number of cyclists may depend on the day of week: on weekends there are more riders than on weekdays. The month may also matter. Let's add columns containing the day of week and the month of each observation:
###Code
data['DAY_OF_WEEK'] = data.index.dayofweek
data['MONTH'] = data.index.month
data
###Output
_____no_output_____
###Markdown
These variables are categorical. Task 1: 1. Define a function that takes the data $(X,y)$ and model parameters $\theta$ as input and returns the model's mean squared error. 2. Define an analogous function that returns the value of the Poisson regression objective. 3. Fit both models using SciPy's minimize function. Compare the approximation quality of the models, using mean absolute error as the quality metric. 4. Plot the original series together with the linear and Poisson regression approximations.
###Code
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score, KFold
from sklearn.compose import TransformedTargetRegressor
from scipy.optimize import minimize
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
def mse(X,y,theta):
return ((y-np.dot(X,theta))**2).mean()
X=data.drop('BB_COUNT', axis=1)
y = data['BB_COUNT']
X['const'] = 1
theta0 = np.ones(X.shape[1])
lin_reg = minimize(lambda theta: mse(X,y,theta), tuple(theta0))
lin_reg_params = lin_reg.x
y_pred_lin = np.dot(X, lin_reg_params)
mean_absolute_error(y,y_pred_lin)
def pois(X,y,theta):
    # negative Poisson log-likelihood per observation, up to the theta-independent
    # log(y!) term: with rate exp(mu), -log L ~ mean(exp(mu) - y*mu)
    mu = np.dot(X,theta)
    return (np.exp(mu)-y*mu).mean()
theta0 = np.zeros(X.shape[1])
pois_reg = minimize(lambda theta1: pois(X,y,theta1), tuple(theta0))
pois_reg_params = pois_reg.x
y_pred_pois = np.dot(X, pois_reg_params)
data['pois_approx'] = np.exp(y_pred_pois)  # the Poisson model predicts exp(X.theta), not the linear predictor
mean_absolute_error(y,data['pois_approx'])
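#An optional cross-check (added, not in the original task): statsmodels can fit
#the same Poisson GLM; sm.GLM and its Poisson family are standard statsmodels
#APIs, and this comparison is an assumption of this rewrite.
import statsmodels.api as sm
glm = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print('statsmodels Poisson MAE:', mean_absolute_error(y, glm.predict(X)))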
data['lin_approx'] = y_pred_lin
data['pois_approx'] = np.exp(y_pred_pois)
a = data['BB_COUNT'].plot(figsize=(15,7), label = 'initial')
b = data['lin_approx'].plot(label = 'lin')
c = data['pois_approx'].plot(label = 'pois')
a.legend()
b.legend()
c.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Task 2: Linear models are sensitive to the representation of categorical features. Transform the categorical features with One Hot Encoding and repeat steps 3-4 of Task 1. How did the model quality change?
###Code
data = pd.read_csv('data/nyc_bicyclist_counts.csv', index_col=['Date'], parse_dates=True)
data['DAY_OF_WEEK'] = data.index.dayofweek
data['MONTH'] = data.index.month
X=data.drop(['BB_COUNT','DAY_OF_WEEK','MONTH'], axis = 1)
y = data['BB_COUNT']
from sklearn.preprocessing import OneHotEncoder
enc = OneHotEncoder(handle_unknown='ignore')
enc.fit(data[['DAY_OF_WEEK','MONTH']])
X[['SUN','MON','TUES','WEN','THUR','FRI','SAT','APR','JUN','JUL','AUG','SEP','OCT','NOV']] = enc.transform(data[['DAY_OF_WEEK','MONTH']]).toarray()
X['const'] = 1
theta0 = np.ones(X.shape[1])
lin_reg = minimize(lambda theta: mse(X,y,theta), tuple(theta0))
lin_reg_params = lin_reg.x
y_pred_lin = np.dot(X, lin_reg_params)
mean_absolute_error(y,y_pred_lin)
theta0 = np.zeros(X.shape[1])
pois_reg = minimize(lambda theta1: pois(X,y,theta1), tuple(theta0))
pois_reg_params = pois_reg.x
y_pred_pois = np.dot(X, pois_reg_params)
data['pois_approx'] = np.exp(y_pred_pois)  # predictions on the original count scale
mean_absolute_error(y,data['pois_approx'])
data['lin_approx'] = y_pred_lin
data['pois_approx'] = np.exp(y_pred_pois)
a = data['BB_COUNT'].plot(figsize=(15,7), label = 'initial')
b = data['lin_approx'].plot(label = 'lin')
c = data['pois_approx'].plot(label = 'pois')
a.legend()
b.legend()
c.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Task 3: Transform the categorical features using a Fourier expansion and repeat steps 3-4 of Task 1. What model quality can be achieved?
###Code
data = pd.read_csv('data/nyc_bicyclist_counts.csv', index_col=['Date'], parse_dates=True)
data['DAY_OF_WEEK'] = data.index.dayofweek
data['MONTH'] = data.index.month
X=data.drop(['BB_COUNT','DAY_OF_WEEK','MONTH'], axis = 1)
y = data['BB_COUNT']
# Fourier features: month has period 12, day of week has period 7
X[['SIN_M','COS_M','SIN_W','COS_W']] = np.array([np.sin(2*np.pi/12*data['MONTH']),np.cos(2*np.pi/12*data['MONTH']),np.sin(2*np.pi/7*data['DAY_OF_WEEK']),np.cos(2*np.pi/7*data['DAY_OF_WEEK'])]).transpose()
X['const'] = 1
theta0 = np.zeros(X.shape[1])
lin_reg = minimize(lambda theta: mse(X,y,theta), tuple(theta0))
lin_reg_params = lin_reg.x
lin_reg_params
y_pred_lin = np.dot(X, lin_reg_params)
mean_absolute_error(y,y_pred_lin)
pois_reg = minimize(lambda theta1: pois(X,y,theta1), tuple(theta0))
pois_reg_params = pois_reg.x
y_pred_pois = np.dot(X, pois_reg_params)
data['lin_approx'] = y_pred_lin
data['pois_approx'] = np.exp(y_pred_pois)
mean_absolute_error(y,data['pois_approx'])
a = data['BB_COUNT'].plot(figsize=(15,7), label = 'initial')
b = data['lin_approx'].plot(label = 'lin')
c = data['pois_approx'].plot(label = 'pois')
a.legend()
b.legend()
c.legend()
plt.show()
###Output
_____no_output_____ |
.ipynb_checkpoints/configuration-checkpoint.ipynb | ###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License.  Configuration_**Setting up your Azure Machine Learning services workspace and configuring your notebook library**_------ Table of Contents1. [Introduction](Introduction) 1. What is an Azure Machine Learning workspace1. [Setup](Setup) 1. Azure subscription 1. Azure ML SDK and other library installation 1. Azure Container Instance registration1. [Configure your Azure ML Workspace](Configure%20your%20Azure%20ML%20workspace) 1. Workspace parameters 1. Access your workspace 1. Create a new workspace 1. Create compute resources1. [Next steps](Next%20steps)--- IntroductionThis notebook configures your library of notebooks to connect to an Azure Machine Learning (ML) workspace. In this case, a library contains all of the notebooks in the current folder and any nested folders. You can configure this notebook library to use an existing workspace or create a new workspace.Typically you will need to run this notebook only once per notebook library as all other notebooks will use connection information that is written here. If you want to redirect your notebook library to work with a different workspace, then you should re-run this notebook.In this notebook you will* Learn about getting an Azure subscription* Specify your workspace parameters* Access or create your workspace* Add a default compute cluster for your workspace What is an Azure Machine Learning workspaceAn Azure ML Workspace is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an Azure ML Workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, deployment, inference, and the monitoring of deployed models. SetupThis section describes activities required before you can access any Azure ML services functionality. 1. Azure SubscriptionIn order to create an Azure ML Workspace, first you need access to an Azure subscription. An Azure subscription allows you to manage storage, compute, and other assets in the Azure cloud. You can [create a new subscription](https://azure.microsoft.com/en-us/free/) or access existing subscription information from the [Azure portal](https://portal.azure.com). Later in this notebook you will need information such as your subscription ID in order to create and access AML workspaces. 2. Azure ML SDK and other library installationIf you are running in your own environment, follow [SDK installation instructions](https://docs.microsoft.com/azure/machine-learning/service/how-to-configure-environment). If you are running in Azure Notebooks or another Microsoft managed environment, the SDK is already installed.Also install following libraries to your environment. Many of the example notebooks depend on them```(myenv) $ conda install -y matplotlib tqdm scikit-learn```Once installation is complete, the following cell checks the Azure ML SDK version:
###Code
import azureml.core
print("This notebook was created using version 1.0.85 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
###Output
This notebook was created using version 1.0.85 of the Azure ML SDK
You are currently using version 1.0.85 of the Azure ML SDK
###Markdown
If you are using an older version of the SDK than this notebook was created using, you should upgrade your SDK. 3. Azure Container Instance registrationAzure Machine Learning makes use of [Azure Container Instance (ACI)](https://azure.microsoft.com/services/container-instances) to deploy dev/test web services. An Azure subscription needs to be registered to use ACI. If you or the subscription owner have not yet registered ACI on your subscription, you will need to use the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and execute the following commands. Note that if you ran through the AML [quickstart](https://docs.microsoft.com/en-us/azure/machine-learning/service/quickstart-get-started) you have already registered ACI. ```shell check to see if ACI is already registered(myenv) $ az provider show -n Microsoft.ContainerInstance -o table if ACI is not registered, run this command. note you need to be the subscription owner in order to execute this command successfully.(myenv) $ az provider register -n Microsoft.ContainerInstance```--- Configure your Azure ML workspace Workspace parametersTo use an AML Workspace, you will need to import the Azure ML SDK and supply the following information:* Your subscription id* A resource group name* (optional) The region that will host your workspace* A name for your workspaceYou can get your subscription ID from the [Azure portal](https://portal.azure.com).You will also need access to a [_resource group_](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overviewresource-groups), which organizes Azure resources and provides a default region for the resources in a group. You can see the resource groups to which you have access, or create a new one, in the [Azure portal](https://portal.azure.com). If you don't have a resource group, the create workspace command will create one for you using the name you provide.The region to host your workspace will be used if you are creating a new workspace. You do not need to specify this if you are using an existing workspace. You can find the list of supported regions [here](https://azure.microsoft.com/en-us/global-infrastructure/services/?products=machine-learning-service). You should pick a region that is close to your location or that contains your data.The name for your workspace is unique within the subscription and should be descriptive enough to discern among other AML Workspaces. The subscription may be used only by you, or it may be used by your department or your entire enterprise, so choose a name that makes sense for your situation.The following cell allows you to specify your workspace parameters. This cell uses the python method `os.getenv` to read values from environment variables which is useful for automation. If no environment variable exists, the parameters will be set to the specified default values. If you ran the Azure Machine Learning [quickstart](https://docs.microsoft.com/en-us/azure/machine-learning/service/quickstart-get-started) in Azure Notebooks, you already have a configured workspace! You can go to your Azure Machine Learning Getting Started library, view the *config.json* file, and copy-paste the values for subscription ID, resource group and workspace name below.Replace the default values in the cell below with your workspace parameters
###Code
import os
subscription_id = os.getenv("SUBSCRIPTION_ID", default="1a653f13-bee2-4093-8c1b-97dddb149720")
resource_group = os.getenv("RESOURCE_GROUP", default="autoML")
workspace_name = os.getenv("WORKSPACE_NAME", default="ML_20200125")
workspace_region = os.getenv("WORKSPACE_REGION", default="eastus2")
###Output
_____no_output_____
###Markdown
Access your workspaceThe following cell uses the Azure ML SDK to attempt to load the workspace specified by your parameters. If this cell succeeds, your notebook library will be configured to access the workspace from all notebooks using the `Workspace.from_config()` method. The cell can fail if the specified workspace doesn't exist or you don't have permissions to access it.
###Code
from azureml.core import Workspace
try:
ws = Workspace(subscription_id = subscription_id, resource_group = resource_group, workspace_name = workspace_name)
# write the details of the workspace to a configuration file to the notebook library
ws.write_config()
print("Workspace configuration succeeded. Skip the workspace creation steps below")
except:
print("Workspace not accessible. Change your parameters or create a new workspace below")
###Output
WARNING - Warning: Falling back to use azure cli login credentials.
If you run your code in unattended mode, i.e., where you can't give a user input, then we recommend to use ServicePrincipalAuthentication or MsiAuthentication.
Please refer to aka.ms/aml-notebook-auth for different authentication mechanisms in azureml-sdk.
Workspace configuration succeeded. Skip the workspace creation steps below
###Markdown
Create a new workspaceIf you don't have an existing workspace and are the owner of the subscription or resource group, you can create a new workspace. If you don't have a resource group, the create workspace command will create one for you using the name you provide.**Note**: As with other Azure services, there are limits on certain resources (for example AmlCompute quota) associated with the Azure ML service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.This cell will create an Azure ML workspace for you in a subscription provided you have the correct permissions.This will fail if:* You do not have permission to create a workspace in the resource group* You do not have permission to create a resource group if it's non-existing.* You are not a subscription owner or contributor and no Azure ML workspaces have ever been created in this subscriptionIf workspace creation fails, please work with your IT admin to provide you with the appropriate permissions or to provision the required resources.**Note**: A Basic workspace is created by default. If you would like to create an Enterprise workspace, please specify sku = 'enterprise'.Please visit our [pricing page](https://azure.microsoft.com/en-us/pricing/details/machine-learning/) for more details on our Enterprise edition.
###Code
from azureml.core import Workspace
# Create the workspace using the specified parameters
ws = Workspace.create(name = workspace_name,
subscription_id = subscription_id,
resource_group = resource_group,
location = workspace_region,
create_resource_group = True,
sku = 'basic',
exist_ok = True)
ws.get_details()
# write the details of the workspace to a configuration file to the notebook library
ws.write_config()
###Output
_____no_output_____
###Markdown
Create compute resources for your training experimentsMany of the sample notebooks use Azure ML managed compute (AmlCompute) to train models using a dynamically scalable pool of compute. In this section you will create default compute clusters for use by the other notebooks and any other operations you choose.To create a cluster, you need to specify a compute configuration that specifies the type of machine to be used and the scalability behaviors. Then you choose a name for the cluster that is unique within the workspace that can be used to address the cluster later.The cluster parameters are:* vm_size - this describes the virtual machine type and size used in the cluster. All machines in the cluster are the same type. You can get the list of vm sizes available in your region by using the CLI command```shellaz vm list-skus -o tsv```* min_nodes - this sets the minimum size of the cluster. If you set the minimum to 0 the cluster will shut down all nodes while not in use. Setting this number to a value higher than 0 will allow for faster start-up times, but you will also be billed when the cluster is not in use.* max_nodes - this sets the maximum size of the cluster. Setting this to a larger number allows for more concurrency and a greater distributed processing of scale-out jobs.To create a **CPU** cluster now, run the cell below. The autoscale settings mean that the cluster will scale down to 0 nodes when inactive and up to 4 nodes when busy.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
cpu_cluster_name = "cpu-cluster"
# Verify that cluster does not exist already
try:
cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print("Found existing cpu-cluster")
except ComputeTargetException:
print("Creating new cpu-cluster")
# Specify the configuration for the new cluster
compute_config = AmlCompute.provisioning_configuration(vm_size="STANDARD_D2_V2",
min_nodes=0,
max_nodes=4)
# Create the cluster with the specified name and configuration
cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
    # Wait for the cluster provisioning to complete and show the output log
cpu_cluster.wait_for_completion(show_output=True)
###Output
Found existing cpu-cluster
###Markdown
To create a **GPU** cluster, run the cell below. Note that your subscription must have sufficient quota for GPU VMs or the command will fail. To increase quota, see [these instructions](https://docs.microsoft.com/en-us/azure/azure-supportability/resource-manager-core-quotas-request).
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your GPU cluster
gpu_cluster_name = "gpu-cluster"
# Verify that cluster does not exist already
try:
gpu_cluster = ComputeTarget(workspace=ws, name=gpu_cluster_name)
print("Found existing gpu cluster")
except ComputeTargetException:
print("Creating new gpu-cluster")
# Specify the configuration for the new cluster
compute_config = AmlCompute.provisioning_configuration(vm_size="STANDARD_NC6",
min_nodes=0,
max_nodes=4)
# Create the cluster with the specified name and configuration
gpu_cluster = ComputeTarget.create(ws, gpu_cluster_name, compute_config)
    # Wait for the cluster provisioning to complete and show the output log
gpu_cluster.wait_for_completion(show_output=True)
###Output
_____no_output_____ |
notebooks/cellular-automaton/Game of Life.ipynb | ###Markdown
Game of LifeIn this notebook we are going to study the Game of Life cellular automaton.
###Code
import numpy as np
from plotly import tools
from plotly import offline as py
from plotly import graph_objs as go
py.init_notebook_mode(connected=True)
###Output
_____no_output_____
###Markdown
For the Game of Life one defines the state,$$A\in \{0,1\}^{N\times M}.$$Each $A_{ij}$ represents a cell that is either alive, $A_{ij}=1$, or dead, $A_{ij}=0$. Every cell has eight neighbors $A_{i-1,j-1},A_{i-1,j},\dots,A_{i+1,j+1}$. We define the number of living neighbors$$N_{ij}=\sum^{i+1}_{m=i-1}\sum^{j+1}_{n=j-1}A_{mn}-A_{ij}.$$Now the rules to update a given cell $A_{ij}\to A^\prime_{ij}$ are as follows.If the cell is alive, $A_{ij}=1$:* if $N_{ij}<2$ the cell dies due to underpopulation, $A^\prime_{ij}=0$* if $N_{ij}=2,3$ the cell stays alive, $A^\prime_{ij}=1$* if $N_{ij}>3$ the cell dies due to overpopulation, $A^\prime_{ij}=0$If the cell is dead, $A_{ij}=0$:* if $N_{ij}=3$ the cell gets (re)born, $A^\prime_{ij}=1$
###Code
def game_of_life(state, steps):
    # pad the board with a border of dead cells so the neighbor sum needs no edge handling
    states = np.zeros((steps + 1, state.shape[0] + 2, state.shape[1] + 2))
    states[0, 1:-1, 1:-1] = state
    for n in range(steps):
        for i in range(1, state.shape[0] + 1):
            for j in range(1, state.shape[1] + 1):
                # number of living neighbors: 3x3 block sum minus the cell itself
                neighbors = states[n, i-1:i+2, j-1:j+2].sum() - states[n, i, j]
                states[n+1, i, j] = states[n, i, j]
                if states[n, i, j] == 1:
                    if neighbors < 2 or neighbors > 3:
                        states[n+1, i, j] = 0  # under- or overpopulation
                else:
                    if neighbors == 3:
                        states[n+1, i, j] = 1  # reproduction
    return states[:, 1:-1, 1:-1]
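# An alternative worth noting (a sketch, assuming scipy is installed): the
# neighbor count N_ij is exactly a 2-D convolution, which removes the loops.
from scipy.signal import convolve2d

def game_of_life_step(A):
    # 3x3 kernel of ones with a zero center sums the eight neighbors of each cell
    kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
    N = convolve2d(A, kernel, mode='same', boundary='fill', fillvalue=0)
    survive = (A == 1) & ((N == 2) | (N == 3))
    born = (A == 0) & (N == 3)
    return (survive | born).astype(int)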
X1 = np.array([
[0, 0, 0, 0],
[0, 1, 0, 0],
[0, 0, 0, 0],
])
X2 = np.array([
[0, 0, 0, 0],
[0, 1, 1, 0],
[0, 0, 0, 0],
])
X3 = np.array([
[0, 0, 0, 0],
[0, 1, 0, 0],
[0, 0, 1, 0],
])
Y1 = game_of_life(X1, 1)
Y2 = game_of_life(X2, 1)
Y3 = game_of_life(X3, 1)
figure = tools.make_subplots(rows=2, cols=3, print_grid=False)
figure.append_trace(go.Heatmap(z=X1, colorscale='YlGnBu', showscale=False), 1, 1)
figure.append_trace(go.Heatmap(z=Y1[-1], colorscale='YlGnBu', showscale=False, zmin=0, zmax=1), 2, 1)
figure.append_trace(go.Heatmap(z=X2, colorscale='YlGnBu', showscale=False), 1, 2)
figure.append_trace(go.Heatmap(z=Y2[-1], colorscale='YlGnBu', showscale=False, zmin=0, zmax=1), 2, 2)
figure.append_trace(go.Heatmap(z=X3, colorscale='YlGnBu', showscale=False), 1, 3)
figure.append_trace(go.Heatmap(z=Y3[-1], colorscale='YlGnBu', showscale=False, zmin=0, zmax=1), 2, 3)
figure['layout'].update(title='Underpopulation')
py.iplot(figure)
X1 = np.array([
[0, 1, 0, 0],
[0, 1, 0, 0],
[0, 0, 1, 0],
])
X2 = np.array([
[0, 1, 0, 0],
[1, 0, 1, 0],
[0, 0, 0, 0],
])
X3 = np.array([
[0, 0, 1, 0],
[0, 1, 0, 0],
[1, 0, 0, 0],
])
Y1 = game_of_life(X1, 2)
Y2 = game_of_life(X2, 2)
Y3 = game_of_life(X3, 2)
figure = tools.make_subplots(rows=3, cols=3, print_grid=False)
figure.append_trace(go.Heatmap(z=X1, colorscale='YlGnBu', showscale=False), 1, 1)
figure.append_trace(go.Heatmap(z=Y1[-2], colorscale='YlGnBu', showscale=False, zmin=0, zmax=1), 2, 1)
figure.append_trace(go.Heatmap(z=Y1[-1], colorscale='YlGnBu', showscale=False, zmin=0, zmax=1), 3, 1)
figure.append_trace(go.Heatmap(z=X2, colorscale='YlGnBu', showscale=False), 1, 2)
figure.append_trace(go.Heatmap(z=Y2[-2], colorscale='YlGnBu', showscale=False, zmin=0, zmax=1), 2, 2)
figure.append_trace(go.Heatmap(z=Y2[-1], colorscale='YlGnBu', showscale=False, zmin=0, zmax=1), 3, 2)
figure.append_trace(go.Heatmap(z=X3, colorscale='YlGnBu', showscale=False), 1, 3)
figure.append_trace(go.Heatmap(z=Y3[-2], colorscale='YlGnBu', showscale=False, zmin=0, zmax=1), 2, 3)
figure.append_trace(go.Heatmap(z=Y3[-1], colorscale='YlGnBu', showscale=False, zmin=0, zmax=1), 3, 3)
figure['layout'].update(title='Underpopulation')
py.iplot(figure)
X0 = np.array([
[0, 1, 1, 0],
[0, 1, 0, 0],
[1, 1, 0, 0],
])
Y0 = game_of_life(X0, 4)
figure = tools.make_subplots(rows=4, cols=1, print_grid=False)
figure.append_trace(go.Heatmap(z=Y0[0], colorscale='YlGnBu', showscale=False), 1, 1)
figure.append_trace(go.Heatmap(z=Y0[1], colorscale='YlGnBu', showscale=False, zmin=0, zmax=1), 2, 1)
figure.append_trace(go.Heatmap(z=Y0[2], colorscale='YlGnBu', showscale=False, zmin=0, zmax=1), 3, 1)
figure.append_trace(go.Heatmap(z=Y0[3], colorscale='YlGnBu', showscale=False, zmin=0, zmax=1), 4, 1)
figure['layout'].update(title='Underpopulation', width=500, height=1000)
py.iplot(figure)
X0 = np.array([
[0, 0, 0, 0],
[1, 1, 1, 0],
[0, 0, 0, 0],
])
Y0 = game_of_life(X0, 8)
figure = tools.make_subplots(rows=2, cols=4, print_grid=False)
figure.append_trace(go.Heatmap(z=Y0[0], colorscale='YlGnBu', showscale=False), 1, 1)
figure.append_trace(go.Heatmap(z=Y0[1], colorscale='YlGnBu', showscale=False, zmin=0, zmax=1), 1, 2)
figure.append_trace(go.Heatmap(z=Y0[2], colorscale='YlGnBu', showscale=False, zmin=0, zmax=1), 1, 3)
figure.append_trace(go.Heatmap(z=Y0[3], colorscale='YlGnBu', showscale=False, zmin=0, zmax=1), 1, 4)
figure.append_trace(go.Heatmap(z=Y0[4], colorscale='YlGnBu', showscale=False, zmin=0, zmax=1), 2, 1)
figure.append_trace(go.Heatmap(z=Y0[5], colorscale='YlGnBu', showscale=False, zmin=0, zmax=1), 2, 2)
figure.append_trace(go.Heatmap(z=Y0[6], colorscale='YlGnBu', showscale=False, zmin=0, zmax=1), 2, 3)
figure.append_trace(go.Heatmap(z=Y0[7], colorscale='YlGnBu', showscale=False, zmin=0, zmax=1), 2, 4)
figure['layout'].update(title='Blinker')
py.iplot(figure)
###Output
_____no_output_____ |
07_Object_Tracking_Localisation/4_1_Intro_to_Motion_Videos/Optical Flow.ipynb | ###Markdown
Optical FlowOptical flow tracks objects by looking at where the *same* points have moved from one image frame to the next. Let's load in a few example frames of a pacman-like face moving to the right and down and see how optical flow finds **motion vectors** that describe the motion of the face!As usual, let's first import our resources and read in the images.
###Code
import numpy as np
import matplotlib.image as mpimg # for reading in images
import matplotlib.pyplot as plt
import cv2 # computer vision library
%matplotlib inline
# Read in the image frames
frame_1 = cv2.imread('images/pacman_1.png')
frame_2 = cv2.imread('images/pacman_2.png')
frame_3 = cv2.imread('images/pacman_3.png')
# convert to RGB
frame_1 = cv2.cvtColor(frame_1, cv2.COLOR_BGR2RGB)
frame_2 = cv2.cvtColor(frame_2, cv2.COLOR_BGR2RGB)
frame_3 = cv2.cvtColor(frame_3, cv2.COLOR_BGR2RGB)
# Visualize the individual color channels
f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(20,10))
ax1.set_title('frame 1')
ax1.imshow(frame_1)
ax2.set_title('frame 2')
ax2.imshow(frame_2)
ax3.set_title('frame 3')
ax3.imshow(frame_3)
###Output
_____no_output_____
###Markdown
Finding Points to TrackBefore optical flow can work, we have to give it a set of *keypoints* to track between two image frames!In the example below, we use a **Shi-Tomasi corner detector**, which uses the same process as a Harris corner detector to find patterns of intensity that make up a "corner" in an image, only it adds an additional parameter that helps select the most prominent corners. You can read more about this detection algorithm in [the documentation](https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature2d/py_shi_tomasi/py_shi_tomasi.html). Alternatively, you could choose to use Harris or even ORB to find feature points. I just found that this works well.**You should see that the detected points appear at the corners of the face.**
###Code
# parameters for ShiTomasi corner detection
feature_params = dict( maxCorners = 10,
qualityLevel = 0.2,
minDistance = 5,
blockSize = 5 )
# convert all frames to grayscale
gray_1 = cv2.cvtColor(frame_1, cv2.COLOR_RGB2GRAY)
gray_2 = cv2.cvtColor(frame_2, cv2.COLOR_RGB2GRAY)
gray_3 = cv2.cvtColor(frame_3, cv2.COLOR_RGB2GRAY)
# Take first frame and find corner points in it
pts_1 = cv2.goodFeaturesToTrack(gray_1, mask = None, **feature_params)
# display the detected points
plt.imshow(frame_1)
for p in pts_1:
# plot x and y detected points
plt.plot(p[0][0], p[0][1], 'r.', markersize=15)
# print out the x-y locations of the detected points
print(pts_1)
###Output
[[[318. 82.]]
[[308. 304.]]
[[208. 188.]]
[[309. 81.]]
[[299. 304.]]
[[199. 188.]]]
###Markdown
Perform Optical FlowOnce we've detected keypoints on our initial image of interest, we can calculate the optical flow between this image frame (frame 1) and the next frame (frame 2), using OpenCV's `calcOpticalFlowPyrLK` which is [documented, here](https://docs.opencv.org/trunk/dc/d6b/group__video__track.html#ga473e4b886d0bcc6b65831eb88ed93323). It takes in an initial image frame, the next image, and the first set of points, and it returns the detected points in the next frame and a value that indicates how good the matches are between points from one frame to the next.The parameters also include `winSize` and `maxLevel`, which indicate the size of a window and the number of levels that will be used to scale the given images using pyramid scaling; this version performs an iterative search for matching points, and this matching criterion is reflected in the last parameter (you may need to change these values if you are working with a different image, but these should work for the provided example).
###Code
# parameters for lucas kanade optical flow
lk_params = dict( winSize = (5,5),
maxLevel = 2,
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))
# calculate optical flow between first and second frame
pts_2, match, err = cv2.calcOpticalFlowPyrLK(gray_1, gray_2, pts_1, None, **lk_params)
# Select good matching points between the two image frames
good_new = pts_2[match==1]
good_old = pts_1[match==1]
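# A quick sanity check (a sketch): the motion vector of each tracked point is
# just the coordinate difference between its matched positions in the two frames
print(good_new - good_old)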
###Output
_____no_output_____
###Markdown
Next, let's display the resulting motion vectors! You should see the second image frame with motion vectors drawn on it that indicate the direction of motion from the first frame to the next.
###Code
# create a mask image for drawing (u,v) vectors on top of the second frame
mask = np.zeros_like(frame_2)
# draw the lines between the matching points (these lines indicate motion vectors)
for i,(new,old) in enumerate(zip(good_new,good_old)):
a,b = new.ravel()
c,d = old.ravel()
# draw points on the mask image
mask = cv2.circle(mask,(a,b),5,(200),-1)
# draw motion vector as lines on the mask image
mask = cv2.line(mask, (a,b),(c,d), (200), 3)
# add the line image and second frame together
composite_im = np.copy(frame_2)
composite_im[mask!=0] = [0]
plt.imshow(composite_im)
###Output
_____no_output_____
###Markdown
Perform Optical Flow between image frames 2 and 3Repeat this process but for the last two image frames; see what the resulting motion vectors look like. Imagine doing this for a series of image frames and plotting the entire-motion-path of a given object.
###Code
## Perform optical flow between image frames 2 and 3
# Take the second frame and find corner points in it
pts_2 = cv2.goodFeaturesToTrack(gray_2, mask = None, **feature_params)
# display the detected points
plt.imshow(frame_2)
for p in pts_2:
# plot x and y detected points
plt.plot(p[0][0], p[0][1], 'r.', markersize=15)
# print out the x-y locations of the detected points
print(pts_2)
# create a mask image for drawing (u,v) vectors on top of the second frame
mask = np.zeros_like(frame_3)
# calculate optical flow between first and second frame
pts_3, match, err = cv2.calcOpticalFlowPyrLK(gray_2, gray_3, pts_2, None, **lk_params)
# Select good matching points between the two image frames
good_new = pts_3[match==1]
good_old = pts_2[match==1]
# draw the lines between the matching points (these lines indicate motion vectors)
for i,(new,old) in enumerate(zip(good_new,good_old)):
a,b = new.ravel()
c,d = old.ravel()
# draw points on the mask image
mask = cv2.circle(mask,(a,b),5,(200),-1)
# draw motion vector as lines on the mask image
mask = cv2.line(mask, (a,b),(c,d), (200), 3)
# add the line image and second frame together
composite_im = np.copy(frame_3)
composite_im[mask!=0] = [0]
plt.imshow(composite_im)
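## A sketch of extending this to a longer sequence (`frames` is a hypothetical
## list of grayscale frames; each iteration chains the matches forward):
# pts = cv2.goodFeaturesToTrack(frames[0], mask=None, **feature_params)
# tracks = [pts]
# for prev, nxt in zip(frames[:-1], frames[1:]):
#     pts, match, err = cv2.calcOpticalFlowPyrLK(prev, nxt, pts, None, **lk_params)
#     tracks.append(pts)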
###Output
_____no_output_____ |
Week 9/loc vs iloc.ipynb | ###Markdown
loc vs iloc... which one to use?loc and iloc are really useful, and it can be confusing to know which one to use.Let's experiment with a dataset and see if we can't create clear examples of why one works when another doesn't and *vice versa*.Steve Taylor, March 2021
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
The Setup Read in the file as we always do
###Code
students = pd.read_csv("students.csv")
###Output
_____no_output_____
###Markdown
Let's check out what we've loaded...
###Code
students.head()
###Output
_____no_output_____
###Markdown
Aside:>When we check the dtypes of the dataframe, the *studentID* column is an `int64` (a C language data type, not a Python one).>>Pandas will go to great lengths to figure out if things are floats (in C, that's `float64` here). Because so much of the inside of Pandas is written in the C language, and "strings" are really just addresses in memory to objects, the Pandas dtype of what we know as `str` is `object`.>>It's not important to grok C types here, but it is useful to know that Pandas' bias is to try to figure out what the types are on its own. You can override these types when creating dataframes, and we'll look at that later in class.
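As a quick illustration (a sketch; the `dtype` argument to `read_csv` is real, but treat the exact column/type choices here as assumptions about this file):

```python
# force studentID to be read as int64 instead of letting Pandas infer it
students_typed = pd.read_csv("students.csv", dtype={"studentID": "int64"})
students_typed.dtypes
```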
###Code
students.dtypes
# I'm not "sticking" the index... just looking at it...
students.set_index("studentID")
###Output
_____no_output_____
###Markdown
Shaking Things Up
###Code
# because I wanted to shuffle the original
students = students.sample(frac=1).reset_index(drop=True)
###Output
_____no_output_____
###Markdown
When we look at the shuffled dataframe, notice how we've eliminated the relationship between the index and the studentID.
###Code
students.head()
# Now set the index
students = students.set_index("studentID")
###Output
_____no_output_____
###Markdown
Show the student first and last names, with IDs of 1, 50, and 100.When the studentIDs matched the positions from the original file, you could get away with using a row position (minus one, of course) to get the student ID. But now, if you use iloc, you absolutely won't get what you want.
###Code
students.iloc[[0, 49, 99], [0, 1]]
###Output
_____no_output_____
###Markdown
Using loc we get what we're looking for using the student IDs.
###Code
students.loc[[1, 50, 100], ["firstName", "lastName"]]
###Output
_____no_output_____
###Markdown
Aside:>For most of our purposes, you can use a tuple where you'd use a list. Think of tuples as read-only (that's *immutable* in the lingo) lists; if you try to change the value of something in a tuple (e.g., my_tuple[3] = "something new"), you'll get an error. As a general rule if you use a tuple, you are relying on Python to catch you if you try to change something, but as importantly, you are communicating to others in your code that the intention is it be used as read-only, similar to a constant.
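For example, a minimal illustration of tuple immutability:

```python
my_tuple = (1, 50, 100)
my_tuple[0] = 2  # raises TypeError: 'tuple' object does not support item assignment
```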
###Code
students.loc[(1, 50, 100), ("firstName", "lastName")]
###Output
_____no_output_____
###Markdown
>Back to loc and iloc! When I want the second set of 7 **rows** of the students in the dataframe:
###Code
students.iloc[7:14]
###Output
_____no_output_____
###Markdown
Fun aside is you can do the same thing using the dataframe accessor alone, but it doesn't take a second argument for columns like iloc does.
###Code
students[7:14]
###Output
_____no_output_____
###Markdown
If we use loc now, we get something that's pretty wild. It includes student IDs 7 through 14, as asked (look at the first item and the last to verify), but loc slices by label and includes the endpoint, so we get every row between those labels inclusively. This might not be what you were looking for. Customarily, when we're looking to select things by business value we'll use different techniques like `where()` (see the sketch below).
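As a quick illustration (a sketch; `where` keeps the original shape and fills non-matching rows with NaN, which is why it's paired with `dropna` here):

```python
mask = pd.Series((students.index >= 7) & (students.index < 14), index=students.index)
students.where(mask).dropna()
```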
###Code
students.loc[7:14]
###Output
_____no_output_____
###Markdown
We can always select using loc by using expressions. In this case we use the `.index` to get the value for comparison, versus using a named column.It's worth knowing that `((students.index >= 7) & (students.index < 14))` evaluates to something called a boolean array, which is enough like a list -- we might say "list-like" -- to satisfy the accessor. More on that below.
###Code
students.loc[((students.index >= 7) & (students.index < 14))]
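# Peeking at the mask itself (a sketch): it's a plain boolean array with one
# entry per row, which is "list-like" enough to satisfy the loc accessor
mask = (students.index >= 7) & (students.index < 14)
print(mask[:20])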
###Output
_____no_output_____
###Markdown
Notes on Accessors and FunctionsThe use of iloc and loc (as well as many other things like a dataframe itself) can be confusing because we often get sloppy and teach these as functions (e.g., .loc() is **wrong**). And iloc and loc are specifically *not* functions. These are called "accessors", and these use a list. How do we know that? First of course is docs, e.g., [loc](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html) from the official Pandas documentation. But second is how we use it. For instance in our example above of `students.loc[[1, 50, 100], ["firstName", "lastName"]]`, we are using .loc**[]**. The use of square-brackets is called an *indexing operator*, which tells us that we need to use it with a list-like object (I switched it to list from tuples, but both are list-like). So with the example, let's look at the list that loc is using. It's a list of two lists: ```python.loc[ [1, 50, 100], ["firstName", "lastName"]]```the first list is `[1,50,100]`, and the second list is `['firstName','lastName']`. If we check the docs we'll find the second list is optional, which is why we often see something like `students.loc[[1,50,100]]`. That's a single list inside the outside indexing operator... think list-of-lists.This is applicable to .loc, .iloc, or use of the dataframe itself, e.g., `students[0:15]` is a single list in the indexing operator of the dataframe variable, right? Right! A slice is list-like too. If we definitely want the second list to include specific columns, we don't have a way to omit the first list, so we often use a "get them all" slice in the first list position. ```python.loc[ :, ["firstName", "lastName"]]```Where we often go wrong is to replace the slice operator with [:]:```python.loc[ [:], WRONG! ["firstName", "lastName"]]```Why does that break? Now the first list is one with a colon in it. The indexing operator (i.e., []) can take an integer or a slice for lists. In our case a bare colon isn't valid inside a list, so it raises an error.*If you still haven't let go of functions and optional, positional, and named arguments yet, the whole "nested" lists in the indexing operator is a little mind-bendy* Take your time. :-)
###Code
students.loc[:, ["firstName", "lastName"]]
###Output
_____no_output_____ |
nbs/dl1/lesson1-pets.ipynb | ###Markdown
Lesson 1 - What's your pet Welcome to lesson 1! For those of you who are using a Jupyter Notebook for the first time, you can learn about this useful tool in a tutorial we prepared specially for you; click `File`->`Open` now and click `00_notebook_tutorial.ipynb`. In this lesson we will build our first image classifier from scratch, and see if we can achieve world-class results. Let's dive in!Every notebook starts with the following three lines; they ensure that any edits to libraries you make are reloaded here automatically, and also that any charts or images displayed are shown in this notebook.
###Code
%reload_ext autoreload
%autoreload 2
%matplotlib inline
###Output
_____no_output_____
###Markdown
We import all the necessary packages. We are going to work with the [fastai V1 library](http://www.fast.ai/2018/10/02/fastai-ai/) which sits on top of [Pytorch 1.0](https://hackernoon.com/pytorch-1-0-468332ba5163). The fastai library provides many useful functions that enable us to quickly and easily build neural networks and train our models.
###Code
from fastai.vision import *
from fastai.metrics import error_rate
###Output
_____no_output_____
###Markdown
If you're using a computer with an unusually small GPU, you may get an out of memory error when running this notebook. If this happens, click Kernel->Restart, uncomment the 2nd line below to use a smaller *batch size* (you'll learn all about what this means during the course), and try again.
###Code
# bs = 64
bs = 16 # uncomment this line if you run out of memory even after clicking Kernel->Restart
###Output
_____no_output_____
###Markdown
Looking at the data We are going to use the [Oxford-IIIT Pet Dataset](http://www.robots.ox.ac.uk/~vgg/data/pets/) by [O. M. Parkhi et al., 2012](http://www.robots.ox.ac.uk/~vgg/publications/2012/parkhi12a/parkhi12a.pdf) which features 12 cat breeds and 25 dog breeds. Our model will need to learn to differentiate between these 37 distinct categories. According to their paper, the best accuracy they could get in 2012 was 59.21%, using a complex model that was specific to pet detection, with separate "Image", "Head", and "Body" models for the pet photos. Let's see how accurate we can be using deep learning!We are going to use the `untar_data` function to which we must pass a URL as an argument and which will download and extract the data.
###Code
untar_data
help(untar_data)
URLs.PETS
path = untar_data(URLs.PETS); path
path.ls()
path_anno = path/'annotations'
path_img = path/'images'
###Output
_____no_output_____
###Markdown
The first thing we do when we approach a problem is to take a look at the data. We _always_ need to understand very well what the problem is and what the data looks like before we can figure out how to solve it. Taking a look at the data means understanding how the data directories are structured, what the labels are and what some sample images look like.The main difference between the handling of image classification datasets is the way labels are stored. In this particular dataset, labels are stored in the filenames themselves. We will need to extract them to be able to classify the images into the correct categories. Fortunately, the fastai library has a handy function made exactly for this, `ImageDataBunch.from_name_re` gets the labels from the filenames using a [regular expression](https://docs.python.org/3.6/library/re.html).
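As a quick illustration of the pattern used below (the file path in this example is made up, but it has the same shape as the real ones):

```python
import re
# the capture group grabs the breed name that precedes the trailing '_<digits>.jpg'
m = re.search(r'/([^/]+)_\d+.jpg$', '/data/pets/images/great_pyrenees_107.jpg')
print(m.group(1))  # -> 'great_pyrenees'
```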
###Code
fnames = get_image_files(path_img)
fnames[:5]
np.random.seed(2)
pat = r'/([^/]+)_\d+.jpg$'
?ImageDataBunch.from_name_re
# ??ImageDataBunch.from_name_re
data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(), size=224, bs=bs, valid_pct=0.20
).normalize(imagenet_stats)
len(data.valid_ds)
data.show_batch(rows=3, figsize=(7,6))
print(data.classes)
len(data.classes),data.c
###Output
['Abyssinian', 'Bengal', 'Birman', 'Bombay', 'British_Shorthair', 'Egyptian_Mau', 'Maine_Coon', 'Persian', 'Ragdoll', 'Russian_Blue', 'Siamese', 'Sphynx', 'american_bulldog', 'american_pit_bull_terrier', 'basset_hound', 'beagle', 'boxer', 'chihuahua', 'english_cocker_spaniel', 'english_setter', 'german_shorthaired', 'great_pyrenees', 'havanese', 'japanese_chin', 'keeshond', 'leonberger', 'miniature_pinscher', 'newfoundland', 'pomeranian', 'pug', 'saint_bernard', 'samoyed', 'scottish_terrier', 'shiba_inu', 'staffordshire_bull_terrier', 'wheaten_terrier', 'yorkshire_terrier']
###Markdown
Training: resnet34 Now we will start training our model. We will use a [convolutional neural network](http://cs231n.github.io/convolutional-networks/) backbone and a fully connected head with a single hidden layer as a classifier. Don't know what these things mean? Not to worry, we will dive deeper in the coming lessons. For the moment you need to know that we are building a model which will take images as input and will output the predicted probability for each of the categories (in this case, it will have 37 outputs).We will train for 4 epochs (4 cycles through all our data).
###Code
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.model
learn.fit_one_cycle(4)
learn.save('stage-1')
###Output
_____no_output_____
###Markdown
Results Let's see what results we have got. We will first see which were the categories that the model most confused with one another. We will try to see if what the model predicted was reasonable or not. In this case the mistakes look reasonable (none of the mistakes seems obviously naive). This is an indicator that our classifier is working correctly. Furthermore, when we plot the confusion matrix, we can see that the distribution is heavily skewed: the model makes the same mistakes over and over again but it rarely confuses other categories. This suggests that it just finds it difficult to distinguish some specific categories between each other; this is normal behaviour.
###Code
interp = ClassificationInterpretation.from_learner(learn)
losses,idxs = interp.top_losses()
len(data.valid_ds)==len(losses)==len(idxs)
interp.plot_top_losses(9, figsize=(15,11))
doc(interp.plot_top_losses)
interp.plot_confusion_matrix(figsize=(12,12), dpi=60)
interp.most_confused(min_val=2)
###Output
_____no_output_____
###Markdown
Unfreezing, fine-tuning, and learning rates Since our model is working as we expect it to, we will *unfreeze* our model and train some more.
###Code
learn.unfreeze()
learn.fit_one_cycle(1)
learn.load('stage-1');
learn.lr_find()
learn.recorder.plot()
learn.unfreeze()
learn.fit_one_cycle(2, max_lr=slice(1e-6,1e-4))
###Output
_____no_output_____
###Markdown
That's a pretty accurate model! Training: resnet50 Now we will train in the same way as before but with one caveat: instead of using resnet34 as our backbone we will use resnet50 (resnet34 is a 34 layer residual network while resnet50 has 50 layers. It will be explained later in the course and you can learn the details in the [resnet paper](https://arxiv.org/pdf/1512.03385.pdf)).Basically, resnet50 usually performs better because it is a deeper network with more parameters. Let's see if we can achieve a higher performance here. To help it along, let's use larger images too, since that way the network can see more detail. We reduce the batch size a bit since otherwise this larger network will require more GPU memory.
###Code
?ImageDataBunch.from_name_re
len(fnames)
data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(),
size=299, bs=bs//2).normalize(imagenet_stats)
data
learn = cnn_learner(data, models.resnet50, metrics=error_rate)
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(5)
###Output
_____no_output_____
###Markdown
It's astonishing that it's possible to recognize pet breeds so accurately! Let's see if full fine-tuning helps:
###Code
learn.save('stage-1-50', return_path=True)
learn.unfreeze()
learn.fit_one_cycle(3, max_lr=slice(1e-6,1e-4))
###Output
_____no_output_____
###Markdown
If it doesn't, you can always go back to your previous model.
###Code
learn.load('stage-1-50');
interp = ClassificationInterpretation.from_learner(learn)
interp.most_confused(min_val=2)
###Output
_____no_output_____
###Markdown
Export model to use in an app on Render. The deployment guide is [here](https://course.fast.ai/deployment_render.html). We call `learn.export()` below.
###Code
learn.export(file='export_stg150.pkl')
#doc(learn.export)
learn.path
###Output
_____no_output_____
###Markdown
Other data formats
###Code
URLs.MNIST_SAMPLE
path = untar_data(URLs.MNIST_SAMPLE); path
tfms = get_transforms(do_flip=False)
data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=26)
data.show_batch(rows=3, figsize=(5,5))
learn = cnn_learner(data, models.resnet18, metrics=accuracy)
learn.fit(2)
df = pd.read_csv(path/'labels.csv')
df.head()
data = ImageDataBunch.from_csv(path, ds_tfms=tfms, size=28)
data.show_batch(rows=3, figsize=(5,5))
data.classes
data = ImageDataBunch.from_df(path, df, ds_tfms=tfms, size=24)
data.classes
fn_paths = [path/name for name in df['name']]; fn_paths[:2]
pat = r"/(\d)/\d+\.png$"
data = ImageDataBunch.from_name_re(path, fn_paths, pat=pat, ds_tfms=tfms, size=24)
data.classes
data = ImageDataBunch.from_name_func(path, fn_paths, ds_tfms=tfms, size=24,
label_func = lambda x: '3' if '/3/' in str(x) else '7')
data.classes
labels = [('3' if '/3/' in str(x) else '7') for x in fn_paths]
labels[:5]
data = ImageDataBunch.from_lists(path, fn_paths, labels=labels, ds_tfms=tfms, size=24)
data.classes
###Output
_____no_output_____
###Markdown
Lesson 1 - What's your pet Welcome to lesson 1! For those of you who are using a Jupyter Notebook for the first time, you can learn about this useful tool in a tutorial we prepared specially for you; click `File`->`Open` now and click `00_notebook_tutorial.ipynb`. In this lesson we will build our first image classifier from scratch, and see if we can achieve world-class results. Let's dive in!Every notebook starts with the following three lines; they ensure that any edits to libraries you make are reloaded here automatically, and also that any charts or images displayed are shown in this notebook.
###Code
%reload_ext autoreload
%autoreload 2
%matplotlib inline
###Output
_____no_output_____
###Markdown
We import all the necessary packages. We are going to work with the [fastai V1 library](http://www.fast.ai/2018/10/02/fastai-ai/) which sits on top of [Pytorch 1.0](https://hackernoon.com/pytorch-1-0-468332ba5163). The fastai library provides many useful functions that enable us to quickly and easily build neural networks and train our models.
###Code
from fastai.vision import *
from fastai.metrics import error_rate
###Output
_____no_output_____
###Markdown
If you're using a computer with an unusually small GPU, you may get an out of memory error when running this notebook. If this happens, click Kernel->Restart, uncomment the 2nd line below to use a smaller *batch size* (you'll learn all about what this means during the course), and try again.
###Code
bs = 64
# bs = 16 # uncomment this line if you run out of memory even after clicking Kernel->Restart
###Output
_____no_output_____
###Markdown
Looking at the data We are going to use the [Oxford-IIIT Pet Dataset](http://www.robots.ox.ac.uk/~vgg/data/pets/) by [O. M. Parkhi et al., 2012](http://www.robots.ox.ac.uk/~vgg/publications/2012/parkhi12a/parkhi12a.pdf) which features 12 cat breeds and 25 dog breeds. Our model will need to learn to differentiate between these 37 distinct categories. According to their paper, the best accuracy they could get in 2012 was 59.21%, using a complex model that was specific to pet detection, with separate "Image", "Head", and "Body" models for the pet photos. Let's see how accurate we can be using deep learning!We are going to use the `untar_data` function to which we must pass a URL as an argument and which will download and extract the data.
###Code
help(untar_data)
path = untar_data(URLs.PETS); path
path.ls()
path_anno = path/'annotations'
path_img = path/'images'
###Output
_____no_output_____
###Markdown
The first thing we do when we approach a problem is to take a look at the data. We _always_ need to understand very well what the problem is and what the data looks like before we can figure out how to solve it. Taking a look at the data means understanding how the data directories are structured, what the labels are and what some sample images look like.The main difference between the handling of image classification datasets is the way labels are stored. In this particular dataset, labels are stored in the filenames themselves. We will need to extract them to be able to classify the images into the correct categories. Fortunately, the fastai library has a handy function made exactly for this, `ImageDataBunch.from_name_re` gets the labels from the filenames using a [regular expression](https://docs.python.org/3.6/library/re.html).
###Code
fnames = get_image_files(path_img)
fnames[:5]
np.random.seed(2)
pat = r'/([^/]+)_\d+.jpg$'
data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(), size=224, bs=bs
).normalize(imagenet_stats)
doc(ImageDataBunch)
data.show_batch(rows=3, figsize=(7,6))
print(data.classes)
len(data.classes),data.c
###Output
_____no_output_____
###Markdown
Training: resnet34 Now we will start training our model. We will use a [convolutional neural network](http://cs231n.github.io/convolutional-networks/) backbone and a fully connected head with a single hidden layer as a classifier. Don't know what these things mean? Not to worry, we will dive deeper in the coming lessons. For the moment you need to know that we are building a model which will take images as input and will output the predicted probability for each of the categories (in this case, it will have 37 outputs).We will train for 4 epochs (4 cycles through all our data).
###Code
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.model
learn.fit_one_cycle(4)
learn.save('stage-1')
###Output
_____no_output_____
###Markdown
Results Let's see what results we have got. We will first see which were the categories that the model most confused with one another. We will try to see if what the model predicted was reasonable or not. In this case the mistakes look reasonable (none of the mistakes seems obviously naive). This is an indicator that our classifier is working correctly. Furthermore, when we plot the confusion matrix, we can see that the distribution is heavily skewed: the model makes the same mistakes over and over again but it rarely confuses other categories. This suggests that it just finds it difficult to distinguish some specific categories between each other; this is normal behaviour.
###Code
interp = ClassificationInterpretation.from_learner(learn)
losses,idxs = interp.top_losses()
len(data.valid_ds)==len(losses)==len(idxs)
interp.plot_top_losses(9, figsize=(15,11))
doc(interp.plot_top_losses)
interp.plot_confusion_matrix(figsize=(12,12), dpi=60)
interp.most_confused(min_val=2)
###Output
_____no_output_____
###Markdown
Unfreezing, fine-tuning, and learning rates Since our model is working as we expect it to, we will *unfreeze* our model and train some more.
###Code
learn.unfreeze()
learn.fit_one_cycle(1)
learn.load('stage-1');
learn.lr_find()
learn.recorder.plot()
learn.unfreeze()
learn.fit_one_cycle(2, max_lr=slice(1e-6,1e-4))
###Output
_____no_output_____
###Markdown
That's a pretty accurate model! Training: resnet50 Now we will train in the same way as before but with one caveat: instead of using resnet34 as our backbone we will use resnet50 (resnet34 is a 34 layer residual network while resnet50 has 50 layers. It will be explained later in the course and you can learn the details in the [resnet paper](https://arxiv.org/pdf/1512.03385.pdf)).Basically, resnet50 usually performs better because it is a deeper network with more parameters. Let's see if we can achieve a higher performance here. To help it along, let's use larger images too, since that way the network can see more detail. We reduce the batch size a bit since otherwise this larger network will require more GPU memory.
###Code
data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(),
size=299, bs=16).normalize(imagenet_stats)
learn = cnn_learner(data, models.resnet50, metrics=error_rate)
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(8)
learn.save('stage-1-50')
###Output
_____no_output_____
###Markdown
It's astonishing that it's possible to recognize pet breeds so accurately! Let's see if full fine-tuning helps:
###Code
learn.unfreeze()
learn.fit_one_cycle(3, max_lr=slice(1e-6,1e-4))
###Output
_____no_output_____
###Markdown
If it doesn't, you can always go back to your previous model.
###Code
learn.load('stage-1-50');
interp = ClassificationInterpretation.from_learner(learn)
interp.most_confused(min_val=2)
###Output
_____no_output_____
###Markdown
Other data formats
###Code
path = untar_data(URLs.MNIST_SAMPLE); path
tfms = get_transforms(do_flip=False)
data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=26)
data.show_batch(rows=3, figsize=(5,5))
learn = cnn_learner(data, models.resnet18, metrics=accuracy)
learn.fit(2)
df = pd.read_csv(path/'labels.csv')
df.head()
data = ImageDataBunch.from_csv(path, ds_tfms=tfms, size=28)
data.show_batch(rows=3, figsize=(5,5))
data.classes
data = ImageDataBunch.from_df(path, df, ds_tfms=tfms, size=24)
data.classes
fn_paths = [path/name for name in df['name']]; fn_paths[:2]
pat = r"/(\d)/\d+\.png$"
data = ImageDataBunch.from_name_re(path, fn_paths, pat=pat, ds_tfms=tfms, size=24)
data.classes
data = ImageDataBunch.from_name_func(path, fn_paths, ds_tfms=tfms, size=24,
label_func = lambda x: '3' if '/3/' in str(x) else '7')
data.classes
labels = [('3' if '/3/' in str(x) else '7') for x in fn_paths]
labels[:5]
data = ImageDataBunch.from_lists(path, fn_paths, labels=labels, ds_tfms=tfms, size=24)
data.classes
###Output
_____no_output_____
###Markdown
Lesson 1 - What's your pet Welcome to lesson 1! For those of you who are using a Jupyter Notebook for the first time, you can learn about this useful tool in a tutorial we prepared specially for you; click `File`->`Open` now and click `00_notebook_tutorial.ipynb`. In this lesson we will build our first image classifier from scratch, and see if we can achieve world-class results. Let's dive in!Every notebook starts with the following three lines; they ensure that any edits to libraries you make are reloaded here automatically, and also that any charts or images displayed are shown in this notebook.
###Code
%reload_ext autoreload
%autoreload 2
%matplotlib inline
###Output
_____no_output_____
###Markdown
We import all the necessary packages. We are going to work with the [fastai V1 library](http://www.fast.ai/2018/10/02/fastai-ai/) which sits on top of [Pytorch 1.0](https://hackernoon.com/pytorch-1-0-468332ba5163). The fastai library provides many useful functions that enable us to quickly and easily build neural networks and train our models.
###Code
from fastai.vision import *
from fastai.metrics import error_rate
###Output
_____no_output_____
###Markdown
If you're using a computer with an unusually small GPU, you may get an out of memory error when running this notebook. If this happens, click Kernel->Restart, uncomment the 2nd line below to use a smaller *batch size* (you'll learn all about what this means during the course), and try again.
###Code
bs = 64
# bs = 16 # uncomment this line if you run out of memory even after clicking Kernel->Restart
###Output
_____no_output_____
###Markdown
Looking at the data We are going to use the [Oxford-IIIT Pet Dataset](http://www.robots.ox.ac.uk/~vgg/data/pets/) by [O. M. Parkhi et al., 2012](http://www.robots.ox.ac.uk/~vgg/publications/2012/parkhi12a/parkhi12a.pdf) which features 12 cat breeds and 25 dog breeds. Our model will need to learn to differentiate between these 37 distinct categories. According to their paper, the best accuracy they could get in 2012 was 59.21%, using a complex model that was specific to pet detection, with separate "Image", "Head", and "Body" models for the pet photos. Let's see how accurate we can be using deep learning!We are going to use the `untar_data` function to which we must pass a URL as an argument and which will download and extract the data.
###Code
help(untar_data)
path = untar_data(URLs.PETS); path
path.ls()
path_anno = path/'annotations'
path_img = path/'images'
###Output
_____no_output_____
###Markdown
The first thing we do when we approach a problem is to take a look at the data. We _always_ need to understand very well what the problem is and what the data looks like before we can figure out how to solve it. Taking a look at the data means understanding how the data directories are structured, what the labels are and what some sample images look like.The main difference between the handling of image classification datasets is the way labels are stored. In this particular dataset, labels are stored in the filenames themselves. We will need to extract them to be able to classify the images into the correct categories. Fortunately, the fastai library has a handy function made exactly for this, `ImageDataBunch.from_name_re` gets the labels from the filenames using a [regular expression](https://docs.python.org/3.6/library/re.html).
###Code
fnames = get_image_files(path_img)
fnames[:5]
np.random.seed(2)
pat = r'/([^/]+)_\d+.jpg$'
data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(), size=224, bs=bs
).normalize(imagenet_stats)
data.show_batch(rows=3, figsize=(7,6))
print(data.classes)
len(data.classes),data.c
###Output
['Abyssinian', 'Bengal', 'Birman', 'Bombay', 'British_Shorthair', 'Egyptian_Mau', 'Maine_Coon', 'Persian', 'Ragdoll', 'Russian_Blue', 'Siamese', 'Sphynx', 'american_bulldog', 'american_pit_bull_terrier', 'basset_hound', 'beagle', 'boxer', 'chihuahua', 'english_cocker_spaniel', 'english_setter', 'german_shorthaired', 'great_pyrenees', 'havanese', 'japanese_chin', 'keeshond', 'leonberger', 'miniature_pinscher', 'newfoundland', 'pomeranian', 'pug', 'saint_bernard', 'samoyed', 'scottish_terrier', 'shiba_inu', 'staffordshire_bull_terrier', 'wheaten_terrier', 'yorkshire_terrier']
###Markdown
Training: resnet34 Now we will start training our model. We will use a [convolutional neural network](http://cs231n.github.io/convolutional-networks/) backbone and a fully connected head with a single hidden layer as a classifier. Don't know what these things mean? Not to worry, we will dive deeper in the coming lessons. For the moment you need to know that we are building a model which will take images as input and will output the predicted probability for each of the categories (in this case, it will have 37 outputs).We will train for 4 epochs (4 cycles through all our data).
###Code
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
!nvidia-smi
learn.model
learn.fit_one_cycle(4)
learn.save('stage-1')
###Output
_____no_output_____
###Markdown
Results Let's see what results we have got. We will first see which were the categories that the model most confused with one another. We will try to see if what the model predicted was reasonable or not. In this case the mistakes look reasonable (none of the mistakes seems obviously naive). This is an indicator that our classifier is working correctly. Furthermore, when we plot the confusion matrix, we can see that the distribution is heavily skewed: the model makes the same mistakes over and over again but it rarely confuses other categories. This suggests that it just finds it difficult to distinguish some specific categories between each other; this is normal behaviour.
###Code
interp = ClassificationInterpretation.from_learner(learn)
losses,idxs = interp.top_losses()
len(data.valid_ds)==len(losses)==len(idxs)
interp.plot_top_losses(9, figsize=(15,11))
doc(interp.plot_top_losses)
interp.plot_confusion_matrix(figsize=(12,12), dpi=60)
interp.most_confused(min_val=2)
###Output
_____no_output_____
###Markdown
Unfreezing, fine-tuning, and learning rates Since our model is working as we expect it to, we will *unfreeze* our model and train some more.
###Code
learn.unfreeze()
learn.fit_one_cycle(1)
learn.load('stage-1');
learn.lr_find()
learn.recorder.plot()
learn.unfreeze()
learn.fit_one_cycle(2, max_lr=slice(1e-6,1e-4))
###Output
_____no_output_____
###Markdown
That's a pretty accurate model! Training: resnet50 Now we will train in the same way as before but with one caveat: instead of using resnet34 as our backbone we will use resnet50 (resnet34 is a 34 layer residual network while resnet50 has 50 layers. It will be explained later in the course and you can learn the details in the [resnet paper](https://arxiv.org/pdf/1512.03385.pdf)).Basically, resnet50 usually performs better because it is a deeper network with more parameters. Let's see if we can achieve a higher performance here. To help it along, let's use larger images too, since that way the network can see more detail. We reduce the batch size a bit since otherwise this larger network will require more GPU memory.
###Code
data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(),
size=299, bs=bs//2).normalize(imagenet_stats)
learn = cnn_learner(data, models.resnet50, metrics=error_rate)
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(8)
learn.save('stage-1-50')
###Output
_____no_output_____
###Markdown
It's astonishing that it's possible to recognize pet breeds so accurately! Let's see if full fine-tuning helps:
###Code
learn.unfreeze()
learn.fit_one_cycle(3, max_lr=slice(1e-6,1e-4))
###Output
_____no_output_____
###Markdown
If it doesn't, you can always go back to your previous model.
###Code
learn.load('stage-1-50');
interp = ClassificationInterpretation.from_learner(learn)
interp.most_confused(min_val=2)
###Output
_____no_output_____
###Markdown
Other data formats
###Code
path = untar_data(URLs.MNIST_SAMPLE); path
tfms = get_transforms(do_flip=False)
data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=26)
data.show_batch(rows=3, figsize=(5,5))
learn = cnn_learner(data, models.resnet18, metrics=accuracy)
learn.fit(2)
df = pd.read_csv(path/'labels.csv')
df.head()
data = ImageDataBunch.from_csv(path, ds_tfms=tfms, size=28)
data.show_batch(rows=3, figsize=(5,5))
data.classes
data = ImageDataBunch.from_df(path, df, ds_tfms=tfms, size=24)
data.classes
fn_paths = [path/name for name in df['name']]; fn_paths[:2]
pat = r"/(\d)/\d+\.png$"
data = ImageDataBunch.from_name_re(path, fn_paths, pat=pat, ds_tfms=tfms, size=24)
data.classes
data = ImageDataBunch.from_name_func(path, fn_paths, ds_tfms=tfms, size=24,
label_func = lambda x: '3' if '/3/' in str(x) else '7')
data.classes
labels = [('3' if '/3/' in str(x) else '7') for x in fn_paths]
labels[:5]
data = ImageDataBunch.from_lists(path, fn_paths, labels=labels, ds_tfms=tfms, size=24)
data.classes
###Output
_____no_output_____
###Markdown
Lesson 1 - What's your pet Welcome to lesson 1! For those of you who are using a Jupyter Notebook for the first time, you can learn about this useful tool in a tutorial we prepared specially for you; click `File`->`Open` now and click `00_notebook_tutorial.ipynb`. In this lesson we will build our first image classifier from scratch, and see if we can achieve world-class results. Let's dive in!Every notebook starts with the following three lines; they ensure that any edits to libraries you make are reloaded here automatically, and also that any charts or images displayed are shown in this notebook.
###Code
%reload_ext autoreload
%autoreload 2
%matplotlib inline
###Output
_____no_output_____
###Markdown
We import all the necessary packages. We are going to work with the [fastai V1 library](http://www.fast.ai/2018/10/02/fastai-ai/) which sits on top of [Pytorch 1.0](https://hackernoon.com/pytorch-1-0-468332ba5163). The fastai library provides many useful functions that enable us to quickly and easily build neural networks and train our models.
###Code
from fastai.vision import *
from fastai.metrics import error_rate
###Output
_____no_output_____
###Markdown
If you're using a computer with an unusually small GPU, you may get an out of memory error when running this notebook. If this happens, click Kernel->Restart, uncomment the 2nd line below to use a smaller *batch size* (you'll learn all about what this means during the course), and try again.
###Code
doc(cnn_learner)
bs = 64
# bs = 16 # uncomment this line if you run out of memory even after clicking Kernel->Restart
###Output
_____no_output_____
###Markdown
Looking at the data We are going to use the [Oxford-IIIT Pet Dataset](http://www.robots.ox.ac.uk/~vgg/data/pets/) by [O. M. Parkhi et al., 2012](http://www.robots.ox.ac.uk/~vgg/publications/2012/parkhi12a/parkhi12a.pdf) which features 12 cat breeds and 25 dog breeds. Our model will need to learn to differentiate between these 37 distinct categories. According to their paper, the best accuracy they could get in 2012 was 59.21%, using a complex model that was specific to pet detection, with separate "Image", "Head", and "Body" models for the pet photos. Let's see how accurate we can be using deep learning!We are going to use the `untar_data` function to which we must pass a URL as an argument and which will download and extract the data.
###Code
help(untar_data)
path = untar_data(URLs.PETS); path
path.ls()
path_anno = path/'annotations'
path_img = path/'images'
###Output
_____no_output_____
###Markdown
The first thing we do when we approach a problem is to take a look at the data. We _always_ need to understand very well what the problem is and what the data looks like before we can figure out how to solve it. Taking a look at the data means understanding how the data directories are structured, what the labels are and what some sample images look like.The main difference between the handling of image classification datasets is the way labels are stored. In this particular dataset, labels are stored in the filenames themselves. We will need to extract them to be able to classify the images into the correct categories. Fortunately, the fastai library has a handy function made exactly for this, `ImageDataBunch.from_name_re` gets the labels from the filenames using a [regular expression](https://docs.python.org/3.6/library/re.html).
###Code
fnames = get_image_files(path_img)
fnames[:5]
np.random.seed(2)
pat = r'/([^/]+)_\d+.jpg$'
data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(), size=224, bs=bs
).normalize(imagenet_stats)
data.show_batch(rows=3, figsize=(7,6))
print(data.classes)
len(data.classes),data.c
###Output
_____no_output_____
###Markdown
Training: resnet34 Now we will start training our model. We will use a [convolutional neural network](http://cs231n.github.io/convolutional-networks/) backbone and a fully connected head with a single hidden layer as a classifier. Don't know what these things mean? Not to worry, we will dive deeper in the coming lessons. For the moment you need to know that we are building a model which will take images as input and will output the predicted probability for each of the categories (in this case, it will have 37 outputs).We will train for 4 epochs (4 cycles through all our data).
###Code
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.model
learn.fit_one_cycle(4)
learn.save('stage-1')
###Output
_____no_output_____
###Markdown
Results Let's see what results we have got. We will first see which were the categories that the model most confused with one another. We will try to see if what the model predicted was reasonable or not. In this case the mistakes look reasonable (none of the mistakes seems obviously naive). This is an indicator that our classifier is working correctly. Furthermore, when we plot the confusion matrix, we can see that the distribution is heavily skewed: the model makes the same mistakes over and over again but it rarely confuses other categories. This suggests that it just finds it difficult to distinguish some specific categories between each other; this is normal behaviour.
###Code
interp = ClassificationInterpretation.from_learner(learn)
losses,idxs = interp.top_losses()
len(data.valid_ds)==len(losses)==len(idxs)
interp.plot_top_losses(9, figsize=(15,11))
doc(interp.plot_top_losses)
interp.plot_confusion_matrix(figsize=(12,12), dpi=60)
interp.most_confused(min_val=2)
###Output
_____no_output_____
###Markdown
Unfreezing, fine-tuning, and learning rates Since our model is working as we expect it to, we will *unfreeze* our model and train some more.
###Code
learn.unfreeze()
learn.fit_one_cycle(1)
learn.load('stage-1');
learn.lr_find()
learn.recorder.plot()
learn.unfreeze()
learn.fit_one_cycle(2, max_lr=slice(1e-6,1e-4))
###Output
_____no_output_____
###Markdown
That's a pretty accurate model! Training: resnet50 Now we will train in the same way as before but with one caveat: instead of using resnet34 as our backbone we will use resnet50 (resnet34 is a 34 layer residual network while resnet50 has 50 layers. It will be explained later in the course and you can learn the details in the [resnet paper](https://arxiv.org/pdf/1512.03385.pdf)).Basically, resnet50 usually performs better because it is a deeper network with more parameters. Let's see if we can achieve a higher performance here. To help it along, let's use larger images too, since that way the network can see more detail. We reduce the batch size a bit since otherwise this larger network will require more GPU memory.
###Code
data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(),
size=299, bs=bs//2).normalize(imagenet_stats)
learn = cnn_learner(data, models.resnet50, metrics=error_rate)
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(8)
learn.save('stage-1-50')
###Output
_____no_output_____
###Markdown
It's astonishing that it's possible to recognize pet breeds so accurately! Let's see if full fine-tuning helps:
###Code
learn.unfreeze()
learn.fit_one_cycle(3, max_lr=slice(1e-6,1e-4))
###Output
_____no_output_____
###Markdown
If it doesn't, you can always go back to your previous model.
###Code
learn.load('stage-1-50');
interp = ClassificationInterpretation.from_learner(learn)
interp.most_confused(min_val=2)
###Output
_____no_output_____
###Markdown
Other data formats
###Code
path = untar_data(URLs.MNIST_SAMPLE); path
tfms = get_transforms(do_flip=False)
data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=26)
data.show_batch(rows=3, figsize=(5,5))
learn = cnn_learner(data, models.resnet18, metrics=accuracy)
learn.fit(2)
df = pd.read_csv(path/'labels.csv')
df.head()
data = ImageDataBunch.from_csv(path, ds_tfms=tfms, size=28)
data.show_batch(rows=3, figsize=(5,5))
data.classes
data = ImageDataBunch.from_df(path, df, ds_tfms=tfms, size=24)
data.classes
fn_paths = [path/name for name in df['name']]; fn_paths[:2]
pat = r"/(\d)/\d+\.png$"
data = ImageDataBunch.from_name_re(path, fn_paths, pat=pat, ds_tfms=tfms, size=24)
data.classes
data = ImageDataBunch.from_name_func(path, fn_paths, ds_tfms=tfms, size=24,
label_func = lambda x: '3' if '/3/' in str(x) else '7')
data.classes
labels = [('3' if '/3/' in str(x) else '7') for x in fn_paths]
labels[:5]
data = ImageDataBunch.from_lists(path, fn_paths, labels=labels, ds_tfms=tfms, size=24)
data.classes
###Output
_____no_output_____
###Markdown
Run on Mac CPU - requires `num_workers=0` in a few places if you get an error Lesson 1 - What's your pet Welcome to lesson 1! For those of you who are using a Jupyter Notebook for the first time, you can learn about this useful tool in a tutorial we prepared specially for you; click `File`->`Open` now and click `00_notebook_tutorial.ipynb`. In this lesson we will build our first image classifier from scratch, and see if we can achieve world-class results. Let's dive in!Every notebook starts with the following three lines; they ensure that any edits to libraries you make are reloaded here automatically, and also that any charts or images displayed are shown in this notebook.
###Code
%reload_ext autoreload
%autoreload 2
%matplotlib inline
###Output
_____no_output_____
###Markdown
We import all the necessary packages. We are going to work with the [fastai V1 library](http://www.fast.ai/2018/10/02/fastai-ai/) which sits on top of [Pytorch 1.0](https://hackernoon.com/pytorch-1-0-468332ba5163). The fastai library provides many useful functions that enable us to quickly and easily build neural networks and train our models.
###Code
from fastai.vision import *
from fastai.metrics import error_rate
import fastai
fastai.__version__
from fastai.utils.show_install import *
show_install()
###Output
```text
=== Software ===
python : 3.6.5
fastai : 1.0.40
fastprogress : 0.1.18
torch : 1.0.0
torch cuda : None / is **Not available**
=== Hardware ===
No GPUs available
=== Environment ===
platform : Darwin-18.2.0-x86_64-i386-64bit
conda env : Unknown
python : /Users/robincole/anaconda3/bin/python
sys.path :
/Users/robincole/anaconda3/lib/python36.zip
/Users/robincole/anaconda3/lib/python3.6
/Users/robincole/anaconda3/lib/python3.6/lib-dynload
/Users/robincole/.local/lib/python3.6/site-packages
/Users/robincole/anaconda3/lib/python3.6/site-packages
/Users/robincole/anaconda3/lib/python3.6/site-packages/aeosa
/Users/robincole/Documents/Github/jupyter_micropython_kernel
/Users/robincole/anaconda3/lib/python3.6/site-packages/circuitpython_kernel-0.3.1-py3.6.egg
/Users/robincole/Documents/Github/HASS-data-detective
/Users/robincole/Documents/Github/London-tube-status
/Users/robincole/anaconda3/lib/python3.6/site-packages/GlacierVaultRemove-0.1-py3.6.egg
/Users/robincole/anaconda3/lib/python3.6/site-packages/IPython/extensions
/Users/robincole/.ipython
no supported gpus found on this system
```
Please make sure to include opening/closing ``` when you paste into forums/github to make the reports appear formatted as code sections.
###Markdown
If you're using a computer with an unusually small GPU, you may get an out of memory error when running this notebook. If this happens, click Kernel->Restart, uncomment the 2nd line below to use a smaller *batch size* (you'll learn all about what this means during the course), and try again.
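A hedged sketch (not part of the lesson) of how you might pick the batch size automatically from the available GPU memory instead of editing the cell by hand; the use of device 0 and the 8 GB cut-off are assumptions for illustration only:

```python
import torch

# Assumption: device 0 is the GPU this notebook trains on; the 8 GB
# threshold is an arbitrary illustration, not a fastai recommendation.
if torch.cuda.is_available():
    total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    bs = 64 if total_gb >= 8 else 16
else:
    bs = 16  # CPU-only runs are slow, so keep the batches small
```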
###Code
#bs = 64
bs = 16 # using the smaller batch size here, since this machine runs out of memory otherwise
###Output
_____no_output_____
###Markdown
Looking at the data We are going to use the [Oxford-IIIT Pet Dataset](http://www.robots.ox.ac.uk/~vgg/data/pets/) by [O. M. Parkhi et al., 2012](http://www.robots.ox.ac.uk/~vgg/publications/2012/parkhi12a/parkhi12a.pdf) which features 12 cat breeds and 25 dog breeds. Our model will need to learn to differentiate between these 37 distinct categories. According to their paper, the best accuracy they could get in 2012 was 59.21%, using a complex model that was specific to pet detection, with separate "Image", "Head", and "Body" models for the pet photos. Let's see how accurate we can be using deep learning! We are going to use the `untar_data` function to which we must pass a URL as an argument and which will download and extract the data.
###Code
help(untar_data)
URLs.PETS
path = untar_data(URLs.PETS); path
path.ls()
path_anno = path/'annotations'
path_img = path/'images'
###Output
_____no_output_____
###Markdown
The first thing we do when we approach a problem is to take a look at the data. We _always_ need to understand very well what the problem is and what the data looks like before we can figure out how to solve it. Taking a look at the data means understanding how the data directories are structured, what the labels are and what some sample images look like. The main difference between the handling of image classification datasets is the way labels are stored. In this particular dataset, labels are stored in the filenames themselves. We will need to extract them to be able to classify the images into the correct categories. Fortunately, the fastai library has a handy function made exactly for this: `ImageDataBunch.from_name_re` gets the labels from the filenames using a [regular expression](https://docs.python.org/3.6/library/re.html).
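As a quick illustration of how that regular expression works (the filename below is a hypothetical example, not taken from the dataset), the capture group `([^/]+)` grabs everything between the last `/` and the trailing `_<digits>.jpg`:

```python
import re

pat = r'/([^/]+)_\d+.jpg$'
fname = 'images/great_pyrenees_173.jpg'  # hypothetical example filename
print(re.search(pat, fname).group(1))    # -> 'great_pyrenees'
```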
###Code
fnames = get_image_files(path_img)
fnames[:5]
np.random.seed(2)
pat = re.compile(r'/([^/]+)_\d+.jpg$')
data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(), size=224, bs=bs, num_workers=0
                                  ).normalize(imagenet_stats)
# inspect the docstring for show_batch (moved after the line above so `data` exists)
?data.show_batch
data.show_batch(rows=3, figsize=(7,6))
print(data.classes)
len(data.classes),data.c
###Output
['Abyssinian', 'Bengal', 'Birman', 'Bombay', 'British_Shorthair', 'Egyptian_Mau', 'Maine_Coon', 'Persian', 'Ragdoll', 'Russian_Blue', 'Siamese', 'Sphynx', 'american_bulldog', 'american_pit_bull_terrier', 'basset_hound', 'beagle', 'boxer', 'chihuahua', 'english_cocker_spaniel', 'english_setter', 'german_shorthaired', 'great_pyrenees', 'havanese', 'japanese_chin', 'keeshond', 'leonberger', 'miniature_pinscher', 'newfoundland', 'pomeranian', 'pug', 'saint_bernard', 'samoyed', 'scottish_terrier', 'shiba_inu', 'staffordshire_bull_terrier', 'wheaten_terrier', 'yorkshire_terrier']
###Markdown
Training: resnet34 Now we will start training our model. We will use a [convolutional neural network](http://cs231n.github.io/convolutional-networks/) backbone and a fully connected head with a single hidden layer as a classifier. Don't know what these things mean? Not to worry, we will dive deeper in the coming lessons. For the moment you need to know that we are building a model which will take images as input and will output the predicted probability for each of the categories (in this case, it will have 37 outputs). We will train for 4 epochs (4 cycles through all our data).
###Code
learn = create_cnn(data, models.resnet34, metrics=error_rate)
learn.model
###Output
_____no_output_____
###Markdown
The cell below takes 3:45 on a GPU. On a Mac CPU, this was going to take 2 hours.
###Code
learn.fit_one_cycle(4)
learn.save('stage-1')
###Output
_____no_output_____
###Markdown
Results Let's see what results we have got. We will first see which were the categories that the model most confused with one another. We will try to see if what the model predicted was reasonable or not. In this case the mistakes look reasonable (none of the mistakes seems obviously naive). This is an indicator that our classifier is working correctly. Furthermore, when we plot the confusion matrix, we can see that the distribution is heavily skewed: the model makes the same mistakes over and over again but it rarely confuses other categories. This suggests that it just finds it difficult to distinguish between some specific categories; this is normal behaviour.
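To make "most confused" concrete, here is a small sketch (with made-up numbers, not the library's implementation) of how the largest off-diagonal entries of a confusion matrix identify the category pairs the model mixes up, which is what the `most_confused` call below reports:

```python
import numpy as np

# Hypothetical 3-class confusion matrix: rows = actual, columns = predicted.
cm = np.array([[50, 4, 0],
               [6, 45, 1],
               [0, 2, 48]])
pairs = [(i, j, cm[i, j]) for i in range(cm.shape[0])
         for j in range(cm.shape[1]) if i != j and cm[i, j] >= 2]
print(sorted(pairs, key=lambda t: -t[2]))  # -> [(1, 0, 6), (0, 1, 4), (2, 1, 2)]
```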
###Code
interp = ClassificationInterpretation.from_learner(learn)
losses,idxs = interp.top_losses()
len(data.valid_ds)==len(losses)==len(idxs)
interp.plot_top_losses(9, figsize=(15,11))
doc(interp.plot_top_losses)
interp.plot_confusion_matrix(figsize=(12,12), dpi=60)
interp.most_confused(min_val=2)
###Output
_____no_output_____
###Markdown
Unfreezing, fine-tuning, and learning rates Since our model is working as we expect it to, we will *unfreeze* our model and train some more.
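The cell below ends by passing `max_lr=slice(1e-6, 1e-4)` to `fit_one_cycle`. As a hedged illustration (an assumption about fastai v1's behaviour, not something stated in the lesson), this spreads learning rates across the learner's layer groups, with earlier layers getting smaller rates:

```python
import numpy as np

# Assumption for illustration: cnn_learner in fastai v1 typically splits the
# model into 3 layer groups; slice(1e-6, 1e-4) then maps to rates like these,
# spaced geometrically from the first group to the last.
lrs = np.geomspace(1e-6, 1e-4, num=3)
print(lrs)  # -> [1.e-06 1.e-05 1.e-04]
```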
###Code
learn.unfreeze()
learn.fit_one_cycle(1)
learn.load('stage-1');
learn.lr_find()
learn.recorder.plot()
learn.unfreeze()
learn.fit_one_cycle(2, max_lr=slice(1e-6,1e-4))
###Output
_____no_output_____
###Markdown
That's a pretty accurate model! Training: resnet50 Now we will train in the same way as before but with one caveat: instead of using resnet34 as our backbone we will use resnet50 (resnet34 is a 34 layer residual network while resnet50 has 50 layers. It will be explained later in the course and you can learn the details in the [resnet paper](https://arxiv.org/pdf/1512.03385.pdf)). Basically, resnet50 usually performs better because it is a deeper network with more parameters. Let's see if we can achieve a higher performance here. To help it along, let's use larger images too, since that way the network can see more detail. We reduce the batch size a bit since otherwise this larger network will require more GPU memory.
###Code
data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(),
size=299, bs=bs//2).normalize(imagenet_stats)
learn = create_cnn(data, models.resnet50, metrics=error_rate)
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(8)
learn.save('stage-1-50')
###Output
_____no_output_____
###Markdown
It's astonishing that it's possible to recognize pet breeds so accurately! Let's see if full fine-tuning helps:
###Code
learn.unfreeze()
learn.fit_one_cycle(3, max_lr=slice(1e-6,1e-4))
###Output
_____no_output_____
###Markdown
If it doesn't, you can always go back to your previous model.
###Code
learn.load('stage-1-50');
interp = ClassificationInterpretation.from_learner(learn)
interp.most_confused(min_val=2)
###Output
_____no_output_____
###Markdown
Other data formats
###Code
path = untar_data(URLs.MNIST_SAMPLE); path
tfms = get_transforms(do_flip=False)
data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=26)
data.show_batch(rows=3, figsize=(5,5))
learn = create_cnn(data, models.resnet18, metrics=accuracy)
learn.fit(2)
df = pd.read_csv(path/'labels.csv')
df.head()
data = ImageDataBunch.from_csv(path, ds_tfms=tfms, size=28)
data.show_batch(rows=3, figsize=(5,5))
data.classes
data = ImageDataBunch.from_df(path, df, ds_tfms=tfms, size=24)
data.classes
fn_paths = [path/name for name in df['name']]; fn_paths[:2]
pat = r"/(\d)/\d+\.png$"
data = ImageDataBunch.from_name_re(path, fn_paths, pat=pat, ds_tfms=tfms, size=24)
data.classes
data = ImageDataBunch.from_name_func(path, fn_paths, ds_tfms=tfms, size=24,
label_func = lambda x: '3' if '/3/' in str(x) else '7')
data.classes
labels = [('3' if '/3/' in str(x) else '7') for x in fn_paths]
labels[:5]
data = ImageDataBunch.from_lists(path, fn_paths, labels=labels, ds_tfms=tfms, size=24)
data.classes
###Output
_____no_output_____
###Markdown
Lesson 1 - What's your pet Welcome to lesson 1! For those of you who are using a Jupyter Notebook for the first time, you can learn about this useful tool in a tutorial we prepared specially for you; click `File`->`Open` now and click `00_notebook_tutorial.ipynb`. In this lesson we will build our first image classifier from scratch, and see if we can achieve world-class results. Let's dive in! Every notebook starts with the following three lines; they ensure that any edits to libraries you make are reloaded here automatically, and also that any charts or images displayed are shown in this notebook.
###Code
%reload_ext autoreload
%autoreload 2
%matplotlib inline
###Output
_____no_output_____
###Markdown
We import all the necessary packages. We are going to work with the [fastai V1 library](http://www.fast.ai/2018/10/02/fastai-ai/) which sits on top of [Pytorch 1.0](https://hackernoon.com/pytorch-1-0-468332ba5163). The fastai library provides many useful functions that enable us to quickly and easily build neural networks and train our models.
###Code
from fastai.vision import *
from fastai.metrics import error_rate
###Output
_____no_output_____
###Markdown
If you're using a computer with an unusually small GPU, you may get an out of memory error when running this notebook. If this happens, click Kernel->Restart, uncomment the 2nd line below to use a smaller *batch size* (you'll learn all about what this means during the course), and try again.
###Code
bs = 128
# bs = 16 # uncomment this line if you run out of memory even after clicking Kernel->Restart
###Output
_____no_output_____
###Markdown
Looking at the data We are going to use the [Oxford-IIIT Pet Dataset](http://www.robots.ox.ac.uk/~vgg/data/pets/) by [O. M. Parkhi et al., 2012](http://www.robots.ox.ac.uk/~vgg/publications/2012/parkhi12a/parkhi12a.pdf) which features 12 cat breeds and 25 dog breeds. Our model will need to learn to differentiate between these 37 distinct categories. According to their paper, the best accuracy they could get in 2012 was 59.21%, using a complex model that was specific to pet detection, with separate "Image", "Head", and "Body" models for the pet photos. Let's see how accurate we can be using deep learning! We are going to use the `untar_data` function to which we must pass a URL as an argument and which will download and extract the data.
###Code
help(untar_data)
path = untar_data(URLs.PETS); path
path.ls()
path_anno = path/'annotations'
path_img = path/'images'
###Output
_____no_output_____
###Markdown
The first thing we do when we approach a problem is to take a look at the data. We _always_ need to understand very well what the problem is and what the data looks like before we can figure out how to solve it. Taking a look at the data means understanding how the data directories are structured, what the labels are and what some sample images look like. The main difference between the handling of image classification datasets is the way labels are stored. In this particular dataset, labels are stored in the filenames themselves. We will need to extract them to be able to classify the images into the correct categories. Fortunately, the fastai library has a handy function made exactly for this: `ImageDataBunch.from_name_re` gets the labels from the filenames using a [regular expression](https://docs.python.org/3.6/library/re.html).
###Code
fnames = get_image_files(path_img)
fnames[:5]
###Output
_____no_output_____
###Markdown
Set the random seed to two to guarantee that the same validation set is used every time. This will give you consistent results with what you see in the lesson video.
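A tiny illustration (not from the lesson) of why seeding matters: with the same seed, the random draws that pick the validation split come out identical on every run:

```python
import numpy as np

np.random.seed(2)
a = np.random.rand(3)   # draws that would decide the validation split
np.random.seed(2)
b = np.random.rand(3)   # re-seeding reproduces exactly the same draws
print(np.array_equal(a, b))  # -> True
```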
###Code
np.random.seed(2)
pat = r'/([^/]+)_\d+.jpg$'
data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(), size=224, bs=bs
).normalize(imagenet_stats)
data.show_batch(rows=3, figsize=(7,6))
print(data.classes)
len(data.classes),data.c
###Output
_____no_output_____
###Markdown
Training: resnet34 Now we will start training our model. We will use a [convolutional neural network](http://cs231n.github.io/convolutional-networks/) backbone and a fully connected head with a single hidden layer as a classifier. Don't know what these things mean? Not to worry, we will dive deeper in the coming lessons. For the moment you need to know that we are building a model which will take images as input and will output the predicted probability for each of the categories (in this case, it will have 37 outputs). We will train for 4 epochs (4 cycles through all our data).
###Code
torch.cuda.empty_cache()
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.model
learn.fit_one_cycle(4)
learn.save('stage-1')
###Output
_____no_output_____
###Markdown
Results Let's see what results we have got. We will first see which were the categories that the model most confused with one another. We will try to see if what the model predicted was reasonable or not. In this case the mistakes look reasonable (none of the mistakes seems obviously naive). This is an indicator that our classifier is working correctly. Furthermore, when we plot the confusion matrix, we can see that the distribution is heavily skewed: the model makes the same mistakes over and over again but it rarely confuses other categories. This suggests that it just finds it difficult to distinguish between some specific categories; this is normal behaviour.
###Code
interp = ClassificationInterpretation.from_learner(learn)
losses,idxs = interp.top_losses()
len(data.valid_ds)==len(losses)==len(idxs)
interp.plot_top_losses(9, figsize=(15,11))
doc(interp.plot_top_losses)
interp.plot_confusion_matrix(figsize=(12,12), dpi=60)
interp.most_confused(min_val=2)
###Output
_____no_output_____
###Markdown
Unfreezing, fine-tuning, and learning rates Since our model is working as we expect it to, we will *unfreeze* our model and train some more.
###Code
learn.unfreeze()
learn.fit_one_cycle(1)
learn.load('stage-1');
learn.lr_find()
learn.recorder.plot()
learn.unfreeze()
learn.fit_one_cycle(2, max_lr=slice(1e-6,1e-4))
###Output
_____no_output_____
###Markdown
That's a pretty accurate model! Training: resnet50 Now we will train in the same way as before but with one caveat: instead of using resnet34 as our backbone we will use resnet50 (resnet34 is a 34 layer residual network while resnet50 has 50 layers. It will be explained later in the course and you can learn the details in the [resnet paper](https://arxiv.org/pdf/1512.03385.pdf)). Basically, resnet50 usually performs better because it is a deeper network with more parameters. Let's see if we can achieve a higher performance here. To help it along, let's use larger images too, since that way the network can see more detail. We reduce the batch size a bit since otherwise this larger network will require more GPU memory.
###Code
data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(),
size=299, bs=bs//2).normalize(imagenet_stats)
learn = cnn_learner(data, models.resnet50, metrics=error_rate)
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(8)
learn.save('stage-1-50')
###Output
_____no_output_____
###Markdown
It's astonishing that it's possible to recognize pet breeds so accurately! Let's see if full fine-tuning helps:
###Code
learn.unfreeze()
learn.fit_one_cycle(3, max_lr=slice(1e-6,1e-4))
###Output
_____no_output_____
###Markdown
If it doesn't, you can always go back to your previous model.
###Code
learn.load('stage-1-50');
interp = ClassificationInterpretation.from_learner(learn)
interp.most_confused(min_val=2)
###Output
_____no_output_____
###Markdown
Other data formats
###Code
path = untar_data(URLs.MNIST_SAMPLE); path
tfms = get_transforms(do_flip=False)
data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=26)
data.show_batch(rows=3, figsize=(5,5))
learn = cnn_learner(data, models.resnet18, metrics=accuracy)
learn.fit(2)
df = pd.read_csv(path/'labels.csv')
df.head()
data = ImageDataBunch.from_csv(path, ds_tfms=tfms, size=28)
data.show_batch(rows=3, figsize=(5,5))
data.classes
data = ImageDataBunch.from_df(path, df, ds_tfms=tfms, size=24)
data.classes
fn_paths = [path/name for name in df['name']]; fn_paths[:2]
pat = r"/(\d)/\d+\.png$"
data = ImageDataBunch.from_name_re(path, fn_paths, pat=pat, ds_tfms=tfms, size=24)
data.classes
data = ImageDataBunch.from_name_func(path, fn_paths, ds_tfms=tfms, size=24,
label_func = lambda x: '3' if '/3/' in str(x) else '7')
data.classes
labels = [('3' if '/3/' in str(x) else '7') for x in fn_paths]
labels[:5]
data = ImageDataBunch.from_lists(path, fn_paths, labels=labels, ds_tfms=tfms, size=24)
data.classes
###Output
_____no_output_____
###Markdown
Lesson 1 - What's your pet Welcome to lesson 1! For those of you who are using a Jupyter Notebook for the first time, you can learn about this useful tool in a tutorial we prepared specially for you; click `File`->`Open` now and click `00_notebook_tutorial.ipynb`. In this lesson we will build our first image classifier from scratch, and see if we can achieve world-class results. Let's dive in! Every notebook starts with the following three lines; they ensure that any edits to libraries you make are reloaded here automatically, and also that any charts or images displayed are shown in this notebook.
###Code
%reload_ext autoreload
%autoreload 2
%matplotlib inline
###Output
_____no_output_____
###Markdown
We import all the necessary packages. We are going to work with the [fastai V1 library](http://www.fast.ai/2018/10/02/fastai-ai/) which sits on top of [Pytorch 1.0](https://hackernoon.com/pytorch-1-0-468332ba5163). The fastai library provides many useful functions that enable us to quickly and easily build neural networks and train our models.
###Code
from fastai.vision import *
from fastai.metrics import error_rate
from fastai.datasets import *
import numpy as np
###Output
_____no_output_____
###Markdown
If you're using a computer with an unusually small GPU, you may get an out of memory error when running this notebook. If this happens, click Kernel->Restart, uncomment the 2nd line below to use a smaller *batch size* (you'll learn all about what this means during the course), and try again.
###Code
bs = 64
# bs = 16 # uncomment this line if you run out of memory even after clicking Kernel->Restart
###Output
_____no_output_____
###Markdown
Looking at the data We are going to use the [Oxford-IIIT Pet Dataset](http://www.robots.ox.ac.uk/~vgg/data/pets/) by [O. M. Parkhi et al., 2012](http://www.robots.ox.ac.uk/~vgg/publications/2012/parkhi12a/parkhi12a.pdf) which features 12 cat breeds and 25 dog breeds. Our model will need to learn to differentiate between these 37 distinct categories. According to their paper, the best accuracy they could get in 2012 was 59.21%, using a complex model that was specific to pet detection, with separate "Image", "Head", and "Body" models for the pet photos. Let's see how accurate we can be using deep learning! We are going to use the `untar_data` function to which we must pass a URL as an argument and which will download and extract the data.
###Code
help(untar_data)
path = untar_data(URLs.PETS); path
path.ls()
path_anno = path/'annotations'
path_img = path/'images'
print(path_anno)
print(path_img)
###Output
C:\Users\wenz_\.fastai\data\oxford-iiit-pet\annotations
C:\Users\wenz_\.fastai\data\oxford-iiit-pet\images
###Markdown
The first thing we do when we approach a problem is to take a look at the data. We _always_ need to understand very well what the problem is and what the data looks like before we can figure out how to solve it. Taking a look at the data means understanding how the data directories are structured, what the labels are and what some sample images look like. The main difference between the handling of image classification datasets is the way labels are stored. In this particular dataset, labels are stored in the filenames themselves. We will need to extract them to be able to classify the images into the correct categories. Fortunately, the fastai library has a handy function made exactly for this: `ImageDataBunch.from_name_re` gets the labels from the filenames using a [regular expression](https://docs.python.org/3.6/library/re.html).
###Code
fnames = get_image_files(path_img)
fnames[:5]
np.random.seed(2)
pat = r'/([^/]+)_\d+.jpg$'
data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(), size=224, bs=bs
).normalize(imagenet_stats)
data.show_batch(rows=3, figsize=(7,6))
print(data.classes)
len(data.classes),data.c
###Output
_____no_output_____
###Markdown
Training: resnet34 Now we will start training our model. We will use a [convolutional neural network](http://cs231n.github.io/convolutional-networks/) backbone and a fully connected head with a single hidden layer as a classifier. Don't know what these things mean? Not to worry, we will dive deeper in the coming lessons. For the moment you need to know that we are building a model which will take images as input and will output the predicted probability for each of the categories (in this case, it will have 37 outputs). We will train for 4 epochs (4 cycles through all our data).
###Code
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.model
learn.fit_one_cycle(4)
learn.save('stage-1')
###Output
_____no_output_____
###Markdown
Results Let's see what results we have got. We will first see which were the categories that the model most confused with one another. We will try to see if what the model predicted was reasonable or not. In this case the mistakes look reasonable (none of the mistakes seems obviously naive). This is an indicator that our classifier is working correctly. Furthermore, when we plot the confusion matrix, we can see that the distribution is heavily skewed: the model makes the same mistakes over and over again but it rarely confuses other categories. This suggests that it just finds it difficult to distinguish between some specific categories; this is normal behaviour.
###Code
interp = ClassificationInterpretation.from_learner(learn)
losses,idxs = interp.top_losses()
len(data.valid_ds)==len(losses)==len(idxs)
interp.plot_top_losses(9, figsize=(15,11))
doc(interp.plot_top_losses)
interp.plot_confusion_matrix(figsize=(12,12), dpi=60)
interp.most_confused(min_val=2)
###Output
_____no_output_____
###Markdown
Unfreezing, fine-tuning, and learning rates Since our model is working as we expect it to, we will *unfreeze* our model and train some more.
###Code
learn.unfreeze()
learn.fit_one_cycle(1)
learn.load('stage-1');
learn.lr_find()
learn.recorder.plot()
learn.unfreeze()
learn.fit_one_cycle(2, max_lr=slice(1e-6,1e-4))
###Output
_____no_output_____
###Markdown
That's a pretty accurate model! Training: resnet50 Now we will train in the same way as before but with one caveat: instead of using resnet34 as our backbone we will use resnet50 (resnet34 is a 34 layer residual network while resnet50 has 50 layers. It will be explained later in the course and you can learn the details in the [resnet paper](https://arxiv.org/pdf/1512.03385.pdf)). Basically, resnet50 usually performs better because it is a deeper network with more parameters. Let's see if we can achieve a higher performance here. To help it along, let's use larger images too, since that way the network can see more detail. We reduce the batch size a bit since otherwise this larger network will require more GPU memory.
###Code
data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(),
size=299, bs=bs//2).normalize(imagenet_stats)
learn = cnn_learner(data, models.resnet50, metrics=error_rate)
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(8)
learn.save('stage-1-50')
###Output
_____no_output_____
###Markdown
It's astonishing that it's possible to recognize pet breeds so accurately! Let's see if full fine-tuning helps:
###Code
learn.unfreeze()
learn.fit_one_cycle(3, max_lr=slice(1e-6,1e-4))
###Output
_____no_output_____
###Markdown
If it doesn't, you can always go back to your previous model.
###Code
learn.load('stage-1-50');
interp = ClassificationInterpretation.from_learner(learn)
interp.most_confused(min_val=2)
###Output
_____no_output_____
###Markdown
Other data formats
###Code
path = untar_data(URLs.MNIST_SAMPLE); path
tfms = get_transforms(do_flip=False)
data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=26)
data.show_batch(rows=3, figsize=(5,5))
learn = cnn_learner(data, models.resnet18, metrics=accuracy)
learn.fit(2)
df = pd.read_csv(path/'labels.csv')
df.head()
data = ImageDataBunch.from_csv(path, ds_tfms=tfms, size=28)
data.show_batch(rows=3, figsize=(5,5))
data.classes
data = ImageDataBunch.from_df(path, df, ds_tfms=tfms, size=24)
data.classes
fn_paths = [path/name for name in df['name']]; fn_paths[:2]
pat = r"/(\d)/\d+\.png$"
data = ImageDataBunch.from_name_re(path, fn_paths, pat=pat, ds_tfms=tfms, size=24)
data.classes
data = ImageDataBunch.from_name_func(path, fn_paths, ds_tfms=tfms, size=24,
label_func = lambda x: '3' if '/3/' in str(x) else '7')
data.classes
labels = [('3' if '/3/' in str(x) else '7') for x in fn_paths]
labels[:5]
data = ImageDataBunch.from_lists(path, fn_paths, labels=labels, ds_tfms=tfms, size=24)
data.classes
###Output
_____no_output_____
###Markdown
Lesson 1 - What's your pet Welcome to lesson 1! For those of you who are using a Jupyter Notebook for the first time, you can learn about this useful tool in a tutorial we prepared specially for you; click `File`->`Open` now and click `00_notebook_tutorial.ipynb`. In this lesson we will build our first image classifier from scratch, and see if we can achieve world-class results. Let's dive in! Every notebook starts with the following three lines; they ensure that any edits to libraries you make are reloaded here automatically, and also that any charts or images displayed are shown in this notebook.
###Code
%reload_ext autoreload
%autoreload 2
%matplotlib inline
###Output
_____no_output_____
###Markdown
We import all the necessary packages. We are going to work with the [fastai V1 library](http://www.fast.ai/2018/10/02/fastai-ai/) which sits on top of [Pytorch 1.0](https://hackernoon.com/pytorch-1-0-468332ba5163). The fastai library provides many useful functions that enable us to quickly and easily build neural networks and train our models.
###Code
from fastai.vision import *
from fastai.metrics import error_rate
###Output
_____no_output_____
###Markdown
If you're using a computer with an unusually small GPU, you may get an out of memory error when running this notebook. If this happens, click Kernel->Restart, uncomment the 2nd line below to use a smaller *batch size* (you'll learn all about what this means during the course), and try again.
###Code
bs = 64
# bs = 16 # uncomment this line if you run out of memory even after clicking Kernel->Restart
###Output
_____no_output_____
###Markdown
Looking at the data We are going to use the [Oxford-IIIT Pet Dataset](http://www.robots.ox.ac.uk/~vgg/data/pets/) by [O. M. Parkhi et al., 2012](http://www.robots.ox.ac.uk/~vgg/publications/2012/parkhi12a/parkhi12a.pdf) which features 12 cat breeds and 25 dog breeds. Our model will need to learn to differentiate between these 37 distinct categories. According to their paper, the best accuracy they could get in 2012 was 59.21%, using a complex model that was specific to pet detection, with separate "Image", "Head", and "Body" models for the pet photos. Let's see how accurate we can be using deep learning! We are going to use the `untar_data` function to which we must pass a URL as an argument and which will download and extract the data.
###Code
help(untar_data)
path = untar_data(URLs.PETS); path
path.ls()
path_anno = path/'annotations'
path_img = path/'images'
###Output
_____no_output_____
###Markdown
The first thing we do when we approach a problem is to take a look at the data. We _always_ need to understand very well what the problem is and what the data looks like before we can figure out how to solve it. Taking a look at the data means understanding how the data directories are structured, what the labels are and what some sample images look like. The main difference between the handling of image classification datasets is the way labels are stored. In this particular dataset, labels are stored in the filenames themselves. We will need to extract them to be able to classify the images into the correct categories. Fortunately, the fastai library has a handy function made exactly for this: `ImageDataBunch.from_name_re` gets the labels from the filenames using a [regular expression](https://docs.python.org/3.6/library/re.html).
###Code
fnames = get_image_files(path_img)
fnames[:5]
###Output
_____no_output_____
###Markdown
Set the random seed to two to guarantee that the same validation set is used every time. This will give you consistent results with what you see in the lesson video.
###Code
np.random.seed(2)
pat = r'/([^/]+)_\d+.jpg$'
data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(), size=224, bs=bs
).normalize(imagenet_stats)
data.show_batch(rows=3, figsize=(7,6))
print(data.classes)
len(data.classes),data.c
###Output
['Abyssinian', 'Bengal', 'Birman', 'Bombay', 'British_Shorthair', 'Egyptian_Mau', 'Maine_Coon', 'Persian', 'Ragdoll', 'Russian_Blue', 'Siamese', 'Sphynx', 'american_bulldog', 'american_pit_bull_terrier', 'basset_hound', 'beagle', 'boxer', 'chihuahua', 'english_cocker_spaniel', 'english_setter', 'german_shorthaired', 'great_pyrenees', 'havanese', 'japanese_chin', 'keeshond', 'leonberger', 'miniature_pinscher', 'newfoundland', 'pomeranian', 'pug', 'saint_bernard', 'samoyed', 'scottish_terrier', 'shiba_inu', 'staffordshire_bull_terrier', 'wheaten_terrier', 'yorkshire_terrier']
###Markdown
Training: resnet34 Now we will start training our model. We will use a [convolutional neural network](http://cs231n.github.io/convolutional-networks/) backbone and a fully connected head with a single hidden layer as a classifier. Don't know what these things mean? Not to worry, we will dive deeper in the coming lessons. For the moment you need to know that we are building a model which will take images as input and will output the predicted probability for each of the categories (in this case, it will have 37 outputs). We will train for 4 epochs (4 cycles through all our data).
###Code
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.model
learn.fit_one_cycle(4)
learn.save('stage-1')
###Output
_____no_output_____
###Markdown
Results Let's see what results we have got. We will first see which were the categories that the model most confused with one another. We will try to see if what the model predicted was reasonable or not. In this case the mistakes look reasonable (none of the mistakes seems obviously naive). This is an indicator that our classifier is working correctly. Furthermore, when we plot the confusion matrix, we can see that the distribution is heavily skewed: the model makes the same mistakes over and over again but it rarely confuses other categories. This suggests that it just finds it difficult to distinguish between some specific categories; this is normal behaviour.
###Code
interp = ClassificationInterpretation.from_learner(learn)
losses,idxs = interp.top_losses()
len(data.valid_ds)==len(losses)==len(idxs)
interp.plot_top_losses(9, figsize=(15,11))
doc(interp.plot_top_losses)
interp.plot_confusion_matrix(figsize=(12,12), dpi=60)
interp.most_confused(min_val=2)
###Output
_____no_output_____
###Markdown
Unfreezing, fine-tuning, and learning rates Since our model is working as we expect it to, we will *unfreeze* our model and train some more.
###Code
learn.unfreeze()
learn.fit_one_cycle(1)
learn.load('stage-1');
learn.lr_find()
learn.recorder.plot()
learn.unfreeze()
learn.fit_one_cycle(2, max_lr=slice(1e-6,1e-4))
###Output
_____no_output_____
###Markdown
That's a pretty accurate model! Training: resnet50 Now we will train in the same way as before but with one caveat: instead of using resnet34 as our backbone we will use resnet50 (resnet34 is a 34 layer residual network while resnet50 has 50 layers. It will be explained later in the course and you can learn the details in the [resnet paper](https://arxiv.org/pdf/1512.03385.pdf)). Basically, resnet50 usually performs better because it is a deeper network with more parameters. Let's see if we can achieve a higher performance here. To help it along, let's use larger images too, since that way the network can see more detail. We reduce the batch size a bit since otherwise this larger network will require more GPU memory.
###Code
data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(),
size=299, bs=bs//2).normalize(imagenet_stats)
learn = cnn_learner(data, models.resnet50, metrics=error_rate)
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(8)
learn.save('stage-1-50')
###Output
_____no_output_____
###Markdown
It's astonishing that it's possible to recognize pet breeds so accurately! Let's see if full fine-tuning helps:
###Code
learn.unfreeze()
learn.fit_one_cycle(3, max_lr=slice(1e-6,1e-4))
###Output
_____no_output_____
###Markdown
If it doesn't, you can always go back to your previous model.
###Code
learn.load('stage-1-50');
interp = ClassificationInterpretation.from_learner(learn)
interp.most_confused(min_val=2)
###Output
_____no_output_____
###Markdown
Other data formats
###Code
path = untar_data(URLs.MNIST_SAMPLE); path
tfms = get_transforms(do_flip=False)
data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=26)
data.show_batch(rows=3, figsize=(5,5))
learn = cnn_learner(data, models.resnet18, metrics=accuracy)
learn.fit(2)
df = pd.read_csv(path/'labels.csv')
df.head()
data = ImageDataBunch.from_csv(path, ds_tfms=tfms, size=28)
data.show_batch(rows=3, figsize=(5,5))
data.classes
data = ImageDataBunch.from_df(path, df, ds_tfms=tfms, size=24)
data.classes
fn_paths = [path/name for name in df['name']]; fn_paths[:2]
pat = r"/(\d)/\d+\.png$"
data = ImageDataBunch.from_name_re(path, fn_paths, pat=pat, ds_tfms=tfms, size=24)
data.classes
data = ImageDataBunch.from_name_func(path, fn_paths, ds_tfms=tfms, size=24,
label_func = lambda x: '3' if '/3/' in str(x) else '7')
data.classes
labels = [('3' if '/3/' in str(x) else '7') for x in fn_paths]
labels[:5]
data = ImageDataBunch.from_lists(path, fn_paths, labels=labels, ds_tfms=tfms, size=24)
data.classes
###Output
_____no_output_____
###Markdown
Lesson 1 - What's your pet Welcome to lesson 1! For those of you who are using a Jupyter Notebook for the first time, you can learn about this useful tool in a tutorial we prepared specially for you; click `File`->`Open` now and click `00_notebook_tutorial.ipynb`. In this lesson we will build our first image classifier from scratch, and see if we can achieve world-class results. Let's dive in! Every notebook starts with the following three lines; they ensure that any edits to libraries you make are reloaded here automatically, and also that any charts or images displayed are shown in this notebook. Ed's Notes This notebook tries to replicate, with no (or as few as possible) changes, what was presented by Jeremy during the day 1 video. Environment * This is being run on my GPU-enabled workstation. * Ubuntu 18.04 LTS (Server) * Installation of PyTorch and fast.ai libraries as described on the fast.ai web site. Issues * While the notebook runs correctly and w/o run time errors, the results here do not match (are not close!) the results that were shown in the video or are documented in the original git repo. Some of the issues are noted below. If an issue was resolved, there will be a note after the item listed as an issue: * TBD
###Code
%reload_ext autoreload
%autoreload 2
%matplotlib inline
###Output
_____no_output_____
###Markdown
We import all the necessary packages. We are going to work with the [fastai V1 library](http://www.fast.ai/2018/10/02/fastai-ai/) which sits on top of [Pytorch 1.0](https://hackernoon.com/pytorch-1-0-468332ba5163). The fastai library provides many useful functions that enable us to quickly and easily build neural networks and train our models.
###Code
from fastai.vision import *
from fastai.metrics import error_rate
###Output
_____no_output_____
###Markdown
If you're using a computer with an unusually small GPU, you may get an out of memory error when running this notebook. If this happens, click Kernel->Restart, uncomment the 2nd line below to use a smaller *batch size* (you'll learn all about what this means during the course), and try again.
###Code
#bs = 64
# Line below is for a GTX 980 card which has only 4GB of memory.
bs = 16 # smaller batch size for this low-memory GPU (the default above runs out of memory)
###Output
_____no_output_____
###Markdown
Looking at the data We are going to use the [Oxford-IIIT Pet Dataset](http://www.robots.ox.ac.uk/~vgg/data/pets/) by [O. M. Parkhi et al., 2012](http://www.robots.ox.ac.uk/~vgg/publications/2012/parkhi12a/parkhi12a.pdf) which features 12 cat breeds and 25 dog breeds. Our model will need to learn to differentiate between these 37 distinct categories. According to their paper, the best accuracy they could get in 2012 was 59.21%, using a complex model that was specific to pet detection, with separate "Image", "Head", and "Body" models for the pet photos. Let's see how accurate we can be using deep learning! We are going to use the `untar_data` function to which we must pass a URL as an argument and which will download and extract the data.
###Code
help(untar_data)
path = untar_data(URLs.PETS); path
path.ls()
path_anno = path/'annotations'
path_img = path/'images'
###Output
_____no_output_____
###Markdown
The first thing we do when we approach a problem is to take a look at the data. We _always_ need to understand very well what the problem is and what the data looks like before we can figure out how to solve it. Taking a look at the data means understanding how the data directories are structured, what the labels are and what some sample images look like. The main difference between the handling of image classification datasets is the way labels are stored. In this particular dataset, labels are stored in the filenames themselves. We will need to extract them to be able to classify the images into the correct categories. Fortunately, the fastai library has a handy function made exactly for this: `ImageDataBunch.from_name_re` gets the labels from the filenames using a [regular expression](https://docs.python.org/3.6/library/re.html).
###Code
fnames = get_image_files(path_img)
fnames[:5]
np.random.seed(2)
pat = r'/([^/]+)_\d+.jpg$'
data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(), size=224, bs=bs
).normalize(imagenet_stats)
data.show_batch(rows=3, figsize=(7,6))
print(data.classes)
len(data.classes),data.c
###Output
_____no_output_____
###Markdown
Training: resnet34 Now we will start training our model. We will use a [convolutional neural network](http://cs231n.github.io/convolutional-networks/) backbone and a fully connected head with a single hidden layer as a classifier. Don't know what these things mean? Not to worry, we will dive deeper in the coming lessons. For the moment you need to know that we are building a model which will take images as input and will output the predicted probability for each of the categories (in this case, it will have 37 outputs). We will train for 4 epochs (4 cycles through all our data).
###Code
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.model
learn.fit_one_cycle(8) #Originally 4 epochs
learn.save('stage-1')
###Output
_____no_output_____
###Markdown
Results Let's see what results we have got. We will first see which were the categories that the model most confused with one another. We will try to see if what the model predicted was reasonable or not. In this case the mistakes look reasonable (none of the mistakes seems obviously naive). This is an indicator that our classifier is working correctly. Furthermore, when we plot the confusion matrix, we can see that the distribution is heavily skewed: the model makes the same mistakes over and over again but it rarely confuses other categories. This suggests that it just finds it difficult to distinguish between some specific categories; this is normal behaviour.
###Code
interp = ClassificationInterpretation.from_learner(learn)
losses,idxs = interp.top_losses()
len(data.valid_ds)==len(losses)==len(idxs)
###Output
_____no_output_____
###Markdown
In the function below, there is a minor difference between the behavior of the function now vs. what it was like in the course. From the docs, heatmap defaults to True, so I've added this as a parameter to the function call so that I can selectively turn it off and on. Setting heatmap = False replicates the types of pictures in the original notebook. I have added a new line below with heatmap turned on as well.
###Code
interp.plot_top_losses(9, heatmap = False, figsize=(15,11))
interp.plot_top_losses(9, heatmap = True, figsize=(15,11))
doc(interp.plot_top_losses)
###Output
_____no_output_____
###Markdown
The confusion matrix below is now closer to the original results from the video (I don't expect 100% matching). So far the only change is that I changed the batch size to 32 (vs. 64 in Jeremy's original notebook) since my GPU does not have as much memory as Jeremy's.
###Code
interp.plot_confusion_matrix(figsize=(12,12), dpi=60)
###Output
_____no_output_____
###Markdown
**Issue** My results below no longer match Jeremy's results. They are significantly different and so far I don't know the reason for this. The only parameter I had changed was the batch size, and it does not seem to make a difference in the results (though I thought it would).
###Code
interp.most_confused(min_val=2)
###Output
_____no_output_____
###Markdown
Unfreezing, fine-tuning, and learning rates Since our model is working as we expect it to, we will *unfreeze* our model and train some more.
###Code
learn.unfreeze()
learn.fit_one_cycle(4) #parameter was 1
learn.load('stage-1');
learn.lr_find()
###Output
_____no_output_____
###Markdown
**Issue** My learning rate curve is very different from the one in the notebook. It does not seem to find a good spot where the loss settles.
###Code
learn.recorder.plot()
###Output
_____no_output_____
###Markdown
These results are similar and close enough to those in the video.
###Code
learn.unfreeze()
learn.fit_one_cycle(4, max_lr=slice(1e-6,1e-4)) #epochs was 2
###Output
_____no_output_____
###Markdown
That's a pretty accurate model! Training: resnet50 Now we will train in the same way as before but with one caveat: instead of using resnet34 as our backbone we will use resnet50 (resnet34 is a 34 layer residual network while resnet50 has 50 layers. It will be explained later in the course and you can learn the details in the [resnet paper](https://arxiv.org/pdf/1512.03385.pdf)). Basically, resnet50 usually performs better because it is a deeper network with more parameters. Let's see if we can achieve a higher performance here. To help it along, let's use larger images too, since that way the network can see more detail. We reduce the batch size a bit since otherwise this larger network will require more GPU memory.
###Code
data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(),
size=299, bs=bs//2).normalize(imagenet_stats)
learn = cnn_learner(data, models.resnet50, metrics=error_rate)
###Output
_____no_output_____
###Markdown
Somewhat similar to the original lesson
###Code
learn.lr_find()
learn.recorder.plot()
###Output
_____no_output_____
###Markdown
**Issue** These results do not match the original notebook. Training loss after epoch 5 is ~2x the original from the class notebook.
###Code
learn.fit_one_cycle(12) #epochs were 8
learn.save('stage-1-50')
###Output
_____no_output_____
###Markdown
It's astonishing that it's possible to recognize pet breeds so accurately! Let's see if full fine-tuning helps: **GPU Problem** The code below requires more than 4GB of GPU memory, so I will have to set bs = 16; 16 seems to be the largest batch size that fits on my current GPU card.
###Code
learn.unfreeze()
learn.fit_one_cycle(8, max_lr=slice(1e-6,1e-4)) #epochs was 3
###Output
_____no_output_____
###Markdown
If it doesn't, you can always go back to your previous model.
###Code
learn.load('stage-1-50');
interp = ClassificationInterpretation.from_learner(learn)
interp.most_confused(min_val=2)
###Output
_____no_output_____
###Markdown
Other data formats
###Code
path = untar_data(URLs.MNIST_SAMPLE); path
tfms = get_transforms(do_flip=False)
data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=26)
data.show_batch(rows=3, figsize=(5,5))
learn = cnn_learner(data, models.resnet18, metrics=accuracy)
learn.fit(4)
df = pd.read_csv(path/'labels.csv')
df.head()
data = ImageDataBunch.from_csv(path, ds_tfms=tfms, size=28)
data.show_batch(rows=3, figsize=(5,5))
data.classes
data = ImageDataBunch.from_df(path, df, ds_tfms=tfms, size=24)
data.classes
fn_paths = [path/name for name in df['name']]; fn_paths[:2]
pat = r"/(\d)/\d+\.png$"
data = ImageDataBunch.from_name_re(path, fn_paths, pat=pat, ds_tfms=tfms, size=24)
data.classes
data = ImageDataBunch.from_name_func(path, fn_paths, ds_tfms=tfms, size=24,
label_func = lambda x: '3' if '/3/' in str(x) else '7')
data.classes
labels = [('3' if '/3/' in str(x) else '7') for x in fn_paths]
labels[:5]
data = ImageDataBunch.from_lists(path, fn_paths, labels=labels, ds_tfms=tfms, size=24)
data.classes
###Output
_____no_output_____ |
deeplearning1/nbs-custom-mine/lesson1_01.ipynb | ###Markdown
Using Convolutional Neural Networks Welcome to the first week of the first deep learning certificate! We're going to use convolutional neural networks (CNNs) to allow our computer to see - something that is only possible thanks to deep learning. Introduction to this week's task: 'Dogs vs Cats' We're going to try to create a model to enter the [Dogs vs Cats](https://www.kaggle.com/c/dogs-vs-cats) competition at Kaggle. There are 25,000 labelled dog and cat photos available for training, and 12,500 in the test set that we have to try to label for this competition. According to the Kaggle web-site, when this competition was launched (end of 2013): *"**State of the art**: The current literature suggests machine classifiers can score above 80% accuracy on this task"*. So if we can beat 80%, then we will be at the cutting edge as of 2013! Basic setup There isn't too much to do to get started - just a few simple configuration steps. This shows plots in the web page itself - we always want to use this when using a Jupyter notebook:
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
Define path to data: (It's a good idea to put it in a subdirectory of your notebooks folder, and then exclude that directory from git control by adding it to .gitignore.)
###Code
path = "data/dogscats/"
#path = "data/dogscats/sample/"
###Output
_____no_output_____
###Markdown
A few basic libraries that we'll need for the initial exercises:
###Code
from __future__ import division,print_function
import os, json
from glob import glob
import numpy as np
np.set_printoptions(precision=4, linewidth=100)
from matplotlib import pyplot as plt
###Output
_____no_output_____
###Markdown
We have created a file most imaginatively called 'utils.py' to store any little convenience functions we'll want to use. We will discuss these as we use them.
###Code
import utils; reload(utils)
from utils import plots
###Output
WARNING (theano.sandbox.cuda): The cuda backend is deprecated and will be removed in the next release (v0.10). Please switch to the gpuarray backend. You can get more information about how to switch at this URL:
https://github.com/Theano/Theano/wiki/Converting-to-the-new-gpu-back-end%28gpuarray%29
Using gpu device 0: Tesla K80 (CNMeM is disabled, cuDNN 5103)
Using Theano backend.
###Markdown
Use a pretrained VGG model with our **Vgg16** class Our first step is simply to use a model that has been fully created for us, which can recognise a wide variety (1,000 categories) of images. We will use 'VGG', which won the 2014 Imagenet competition, and is a very simple model to create and understand. The VGG Imagenet team created both a larger, slower, slightly more accurate model (*VGG 19*) and a smaller, faster model (*VGG 16*). We will be using VGG 16 since the much slower performance of VGG19 is generally not worth the very minor improvement in accuracy. We have created a python class, *Vgg16*, which makes using the VGG 16 model very straightforward. The punchline: state of the art custom model in 7 lines of code Here's everything you need to do to get >97% accuracy on the Dogs vs Cats dataset - we won't analyze how it works behind the scenes yet, since at this stage we're just going to focus on the minimum necessary to actually do useful work.
###Code
# As large as you can, but no larger than 64 is recommended.
# If you have an older or cheaper GPU, you'll run out of memory, so will have to decrease this.
batch_size=64
# Import our class, and instantiate
import vgg16; reload(vgg16)
from vgg16 import Vgg16
vgg = Vgg16()
# Grab a few images at a time for training and validation.
# NB: They must be in subdirectories named based on their category
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)
vgg.finetune(batches)
vgg.fit(batches, val_batches, nb_epoch=1)
###Output
Found 23000 images belonging to 2 classes.
Found 2000 images belonging to 2 classes.
Epoch 1/1
632s - loss: 0.1179 - acc: 0.9687 - val_loss: 0.0666 - val_acc: 0.9830
###Markdown
The code above will work for any image recognition task, with any number of categories! All you have to do is to put your images into one folder per category, and run the code above. Let's take a look at how this works, step by step... Use Vgg16 for basic image recognition Let's start off by using the *Vgg16* class to recognise the main imagenet category for each image. We won't be able to enter the Cats vs Dogs competition with an Imagenet model alone, since 'cat' and 'dog' are not categories in Imagenet - instead each individual breed is a separate category. However, we can use it to see how well it can recognise the images, which is a good first step. First, create a Vgg16 object:
###Code
vgg = Vgg16()
###Output
_____no_output_____
###Markdown
Vgg16 is built on top of *Keras* (which we will be learning much more about shortly!), a flexible, easy to use deep learning library that sits on top of Theano or Tensorflow. Keras reads groups of images and labels in *batches*, using a fixed directory structure, where images from each category for training must be placed in a separate folder. Let's grab batches of data from our training folder:
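For reference, here is the kind of directory layout Keras expects (a hypothetical sketch of the dogscats folder, not a listing of the actual files):

```text
data/dogscats/
    train/
        cats/  cat.0.jpg, cat.1.jpg, ...
        dogs/  dog.0.jpg, dog.1.jpg, ...
    valid/
        cats/  ...
        dogs/  ...
```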
###Code
batches = vgg.get_batches(path+'train', batch_size=4)
###Output
Found 23000 images belonging to 2 classes.
###Markdown
(BTW, when Keras refers to 'classes', it doesn't mean python classes - but rather it refers to the categories of the labels, such as 'pug', or 'tabby'.) *Batches* is just a regular python iterator. Each iteration returns both the images themselves, as well as the labels.
###Code
imgs,labels = next(batches)
###Output
_____no_output_____
###Markdown
As you can see, the labels for each image are an array, containing a 1 in the first position if it's a cat, and in the second position if it's a dog. This approach to encoding categorical variables, where an array contains just a single 1 in the position corresponding to the category, is very common in deep learning. It is called *one hot encoding*. The arrays contain two elements, because we have two categories (cat, and dog). If we had three categories (e.g. cats, dogs, and kangaroos), then the arrays would each contain two 0's, and one 1.
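A minimal numpy illustration (assumed, not part of the lesson) of one hot encoding, mapping category indices to arrays with a single 1 in the matching position:

```python
import numpy as np

labels = np.array([0, 1, 1, 0])   # hypothetical: 0 = cat, 1 = dog
one_hot = np.eye(2)[labels]       # pick out rows of the 2x2 identity matrix
print(one_hot)
# [[1. 0.]
#  [0. 1.]
#  [0. 1.]
#  [1. 0.]]
```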
###Code
plots(imgs, titles=labels)
###Output
_____no_output_____
###Markdown
We can now pass the images to Vgg16's predict() function to get back probabilities, category indexes, and category names for each image's VGG prediction.
###Code
vgg.predict(imgs, True)
###Output
_____no_output_____
###Markdown
The category indexes are based on the ordering of categories used in the VGG model - e.g. here are the first four:
###Code
vgg.classes[:4]
###Output
_____no_output_____
###Markdown
(Note that, other than creating the Vgg16 object, none of these steps are necessary to build a model; they are just showing how to use the class to view imagenet predictions.) Use our Vgg16 class to finetune a Dogs vs Cats modelTo change our model so that it outputs "cat" vs "dog", instead of one of 1,000 very specific categories, we need to use a process called "finetuning". Finetuning looks from the outside to be identical to normal machine learning training - we provide a training set with data and labels to learn from, and a validation set to test against. The model learns a set of parameters based on the data provided.However, the difference is that we start with a model that is already trained to solve a similar problem. The idea is that many of the parameters should be very similar, or the same, between the existing model, and the model we wish to create. Therefore, we only select a subset of parameters to train, and leave the rest untouched. This happens automatically when we call *fit()* after calling *finetune()*.We create our batches just like before, and making the validation set available as well. A 'batch' (or *mini-batch* as it is commonly known) is simply a subset of the training data - we use a subset at a time when training or predicting, in order to speed up training, and to avoid running out of memory.
###Code
batch_size=64
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size)
###Output
Found 23000 images belonging to 2 classes.
Found 2000 images belonging to 2 classes.
###Markdown
Calling *finetune()* modifies the model such that it will be trained based on the data in the batches provided - in this case, to predict either 'dog' or 'cat'.
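As a rough sketch of what a call like this might do internally (assumed from the description above, not the actual Vgg16 source), using the Keras 1 Sequential API on the underlying model:

```python
from keras.layers.core import Dense

# Assumptions: vgg.model is the underlying Keras Sequential model and
# batches.nb_class is the number of categories found in the folders.
model = vgg.model
model.pop()                              # drop the 1000-way imagenet softmax
for layer in model.layers:
    layer.trainable = False              # keep the pretrained weights fixed
model.add(Dense(batches.nb_class, activation='softmax'))  # new cat/dog head
model.compile(optimizer='rmsprop', loss='categorical_crossentropy',
              metrics=['accuracy'])
```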
###Code
vgg.finetune(batches)
###Output
_____no_output_____
###Markdown
Finally, we *fit()* the parameters of the model using the training data, reporting the accuracy on the validation set after every epoch. (An *epoch* is one full pass through the training data.)
###Code
vgg.fit(batches, val_batches, nb_epoch=1)
###Output
Epoch 1/1
631s - loss: 0.1187 - acc: 0.9700 - val_loss: 0.0693 - val_acc: 0.9835
###Markdown
That shows all of the steps involved in using the Vgg16 class to create an image recognition model using whatever labels you are interested in. For instance, this process could classify paintings by style, or leaves by type of disease, or satellite photos by type of crop, and so forth. Next up, we'll dig one level deeper to see what's going on in the Vgg16 class. Create a VGG model from scratch in Keras For the rest of this tutorial, we will not be using the Vgg16 class at all. Instead, we will recreate from scratch the functionality we just used. This is not necessary if all you want to do is use the existing model - but if you want to create your own models, you'll need to understand these details. It will also help you in the future when you debug any problems with your models, since you'll understand what's going on behind the scenes. Model setup We need to import all the modules we'll be using from numpy, scipy, and keras:
###Code
from numpy.random import random, permutation
from scipy import misc, ndimage
from scipy.ndimage.interpolation import zoom
import json              # used below to parse the imagenet class index file
import numpy as np       # used below for the VGG channel means and np.argmax()
import keras
from keras import backend as K
from keras.utils.data_utils import get_file
from keras.models import Sequential, Model
from keras.layers.core import Flatten, Dense, Dropout, Lambda
from keras.layers import Input
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.optimizers import SGD, RMSprop
from keras.preprocessing import image
###Output
_____no_output_____
###Markdown
Let's import the mappings from VGG ids to imagenet category ids and descriptions, for display purposes later.
###Code
FILES_PATH = 'http://files.fast.ai/models/'; CLASS_FILE='imagenet_class_index.json'
# Keras' get_file() is a handy function that downloads files, and caches them for re-use later
fpath = get_file(CLASS_FILE, FILES_PATH+CLASS_FILE, cache_subdir='models')
with open(fpath) as f: class_dict = json.load(f)
# Convert dictionary with string indexes into an array
classes = [class_dict[str(i)][1] for i in range(len(class_dict))]
###Output
_____no_output_____
###Markdown
Here are a few examples of the categories we just imported:
###Code
classes[:5]
###Output
_____no_output_____
###Markdown
Model creationCreating the model involves creating the model architecture, and then loading the model weights into that architecture. We will start by defining the basic pieces of the VGG architecture.VGG has just one type of convolutional block, and one type of fully connected ('dense') block. Here's the convolutional block definition:
###Code
def ConvBlock(layers, model, filters):
    for i in range(layers):
        model.add(ZeroPadding2D((1,1)))                    # pad so the 3x3 convs preserve size
        model.add(Convolution2D(filters, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2,2), strides=(2,2)))          # halve the spatial resolution
###Output
_____no_output_____
###Markdown
...and here's the fully-connected definition.
###Code
def FCBlock(model):
    model.add(Dense(4096, activation='relu'))
    model.add(Dropout(0.5))    # randomly drop half the activations during training
###Output
_____no_output_____
###Markdown
When the VGG model was trained in 2014, the creators subtracted the average of each of the three (R,G,B) channels first, so that the data for each channel had a mean of zero. Furthermore, their software expected the channels to be in B,G,R order, whereas Python by default uses R,G,B. We need to preprocess our data to make these two changes, so that it is compatible with the VGG model:
###Code
# Mean of each channel as provided by VGG researchers
vgg_mean = np.array([123.68, 116.779, 103.939]).reshape((3,1,1))
def vgg_preprocess(x):
    x = x - vgg_mean     # subtract the per-channel means
    return x[:, ::-1]    # reverse the channel axis: rgb -> bgr
###Output
_____no_output_____
###Markdown
Now we're ready to define the VGG model architecture - look at how simple it is, now that we have the basic blocks defined!
###Code
def VGG_16():
model = Sequential()
model.add(Lambda(vgg_preprocess, input_shape=(3,224,224)))
ConvBlock(2, model, 64)
ConvBlock(2, model, 128)
ConvBlock(3, model, 256)
ConvBlock(3, model, 512)
ConvBlock(3, model, 512)
model.add(Flatten())
FCBlock(model)
FCBlock(model)
model.add(Dense(1000, activation='softmax'))
return model
###Output
_____no_output_____
###Markdown
We'll learn about what these different blocks do later in the course. For now, it's enough to know that:- Convolution layers are for finding patterns in images- Dense (fully connected) layers are for combining patterns across an imageNow that we've defined the architecture, we can create the model like any Python object:
###Code
model = VGG_16()
###Output
/home/ubuntu/anaconda2/lib/python2.7/site-packages/keras/layers/core.py:622: UserWarning: `output_shape` argument not specified for layer lambda_3 and cannot be automatically inferred with the Theano backend. Defaulting to output shape `(None, 3, 224, 224)` (same as input shape). If the expected output shape is different, specify it via the `output_shape` argument.
.format(self.name, input_shape))
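###Markdown
If you'd like to sanity-check the result, Keras can print a layer-by-layer summary of the architecture, showing each layer's output shape and parameter count (output omitted here):
###Code
model.summary()
###Output
_____no_output_____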
###Markdown
As well as the architecture, we need the weights that the VGG creators trained. The weights are the part of the model that is learnt from the data, whereas the architecture is pre-defined based on the nature of the problem. Downloading pre-trained weights is much preferred to training the model ourselves, since otherwise we would have to download the entire Imagenet archive, and train the model for many days! It's very helpful when researchers release their weights, as they did here.
###Code
fpath = get_file('vgg16.h5', FILES_PATH+'vgg16.h5', cache_subdir='models')
model.load_weights(fpath)
###Output
_____no_output_____
###Markdown
Getting imagenet predictionsThe setup of the imagenet model is now complete, so all we have to do is grab a batch of images and call *predict()* on them.
###Code
batch_size = 4
###Output
_____no_output_____
###Markdown
Keras provides functionality to create batches of data from directories containing images; all we have to do is define the size to resize the images to, what type of labels to create, whether to randomly shuffle the images, and how many images to include in each batch. We use this little wrapper to define some helpful defaults appropriate for imagenet data:
###Code
def get_batches(dirname, gen=image.ImageDataGenerator(), shuffle=True,
batch_size=batch_size, class_mode='categorical'):
return gen.flow_from_directory(path+dirname, target_size=(224,224),
class_mode=class_mode, shuffle=shuffle, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
From here we can use exactly the same steps as before to look at predictions from the model.
###Code
batches = get_batches('train', batch_size=batch_size)
val_batches = get_batches('valid', batch_size=batch_size)
imgs,labels = next(batches)
# This shows the 'ground truth'
plots(imgs, titles=labels)
###Output
Found 23000 images belonging to 2 classes.
Found 2000 images belonging to 2 classes.
###Markdown
The VGG model returns 1,000 probabilities for each image, representing the probability that the model assigns to each possible imagenet category for each image. By finding the index with the largest probability (with *np.argmax()*) we can find the predicted label.
###Code
def pred_batch(imgs):
preds = model.predict(imgs)
idxs = np.argmax(preds, axis=1)
print('Shape: {}'.format(preds.shape))
print('First 5 classes: {}'.format(classes[:5]))
print('First 5 probabilities: {}\n'.format(preds[0, :5]))
print('Predictions prob/class: ')
for i in range(len(idxs)):
idx = idxs[i]
print (' {:.4f}/{}'.format(preds[i, idx], classes[idx]))
pred_batch(imgs)
###Output
Shape: (4, 1000)
First 5 classes: [u'tench', u'goldfish', u'great_white_shark', u'tiger_shark', u'hammerhead']
First 5 probabilities: [ 4.6038e-07 6.5442e-06 2.5975e-05 4.5351e-05 2.7894e-05]
Predictions prob/class:
0.1165/Egyptian_cat
0.3711/Egyptian_cat
0.9913/boxer
0.4782/tabby
student-notebooks/13.00-RosettaCarbohydrates-Working-with-Glycans.ipynb | ###Markdown
Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\rightarrow$Run All).Make sure you fill in any place that says `YOUR CODE HERE` or "YOUR ANSWER HERE", as well as your name and collaborators below:
###Code
NAME = ""
COLLABORATORS = ""
###Output
_____no_output_____
###Markdown
--- *This notebook contains material from [PyRosetta](https://RosettaCommons.github.io/PyRosetta.notebooks);content is available [on Github](https://github.com/RosettaCommons/PyRosetta.notebooks.git).* RosettaCarbohydratesKeywords: carbohydrate, glycan, sugar, glucose, mannose, sugar, GlycanTreeSet, saccharide, furanose, pyranose, aldose, ketose OverviewIn this chapter, we will focus on a special subset of non-peptide oligo- and polymers — carbohydrates.Modeling carbohydrates — also known as saccharides, glycans, or simply sugars — comes with some special challenges. For one, most saccharide residues contain a ring as part of their backbone. This ring provides potentially new degrees of freedom when sampling. Additionally, carbohydrate structures are often branched, leading in Rosetta to more complicated `FoldTrees`. This chapter includes a quick overview of carbohydrate nomenclature, structure, and basic interactions within Rosetta. Carbohydrate Chemistry Background Figure 1. A pyranose (left) and a furanose (right).Sugars (saccharides) are defined as hydroxylated aldehydes and ketones. A typical monosaccharide has an equal number of carbon and oxygen atoms. For example, glucose has the molecular formula C6H12O6.Sugars containing more than three carbons will spontaneously cyclize in aqueous environments to form five- or six-membered hemiacetals and hemiketals. Sugars with five-membered rings are called furanoses; those with six-membered rings are called pyranoses (Fig. 1).Figure 2. An aldose (left) and a ketose (right).A sugar is classified as an aldose or ketose, depending on whether it has an aldehyde or ketone in its linear form (Fig. 2).The different sugars have different names, depending on the stereochemistry at each of the carbon atoms in the molecule. For example, glucose has one set of stereochemistries, while mannose has another.In addition to their full names, many individual saccharide residues have three-letter codes, just like amino acid residues do. Glucose is "Glc" and mannose is "Man". Backbone Torsions, Residue Connections, and side-chainsA glycan tree is made up of many sugar residues, each residue a ring. The 'backbone' of a glycan is the connection between one residue and another. The chemical makeup of each sugar residue in this 'linkage' affects the propensity/energy of each backbone dihedral angle. In addition, sugars can be attached via different carbons of the parent glycan. In this way, the chemical makeup and the attachment position affect the dihedral propensities. Typically, there are two backbone dihedral angles, but this could be up to 4+ angles depending on the connection.In IUPAC, the dihedrals of N are defined as the dihedrals between N and N-1 (i.e., the parent linkage). The ASN (or other glycosylated protein residue's) dihedrals become part of the first glycan residue that is connected. For this first glycan residue that is connected to an ASN, it has 4 torsions, while the ASN now has none!If you are creating a movemap for glycan residues, please use the `MoveMapFactory`, as it has the IUPAC nomenclature of glycan residues built in in order to allow proper DOF sampling of the backbone residues, especially for branching glycan trees. In general, all of our samplers should use residue selectors and use the MoveMapFactory to build movemaps internally.A sugar's side-chains are the constituents of the glycan ring, which are typically an OH group or an acetyl group. These are sampled together at 60 degree angles by default during packing. 
A higher granularity of rotamers cannot currently be handled in Rosetta, but 60 degrees seems adequate for our purposes.Within Rosetta, glycan connectivity information is stored in the `GlycanTreeSet`, which is continually updated to reflect any residue changes or additions to the pose. This info is always available through the function `pose.glycan_tree_set()`. Chemical information of each glycan residue can be accessed through the `CarbohydrateInfo` object, which is stored in each `ResidueType` object: `pose.residue_type(i).carbohydrate_info()`. We will cover both of these classes in the next tutorial. Documentationhttps://www.rosettacommons.org/docs/latest/application_documentation/carbohydrates/WorkingWithGlycans References**Residue centric modeling and design of saccharide and glycoconjugate structures**Jason W. Labonte Jared Adolf-Bryfogle William R. Schief Jeffrey J. Gray_Journal of Computational Chemistry_, 11/30/2016 - **Automatically Fixing Errors in Glycoprotein Structures with Rosetta**Brandon Frenz, Sebastian Rämisch, Andrew J. Borst, Alexandra C. WallsJared Adolf-Bryfogle, William R. Schief, David Veesler, Frank DiMaio_Structure_, 1/2/2019 InitializationLet's use PyRosetta to compare some common monosaccharide residues and see how they differ. As usual, we start by importing the `pyrosetta` and `rosetta` namespaces.
###Code
import sys
if 'google.colab' in sys.modules:
!pip install pyrosettacolabsetup
import pyrosettacolabsetup
pyrosettacolabsetup.mount_pyrosetta_install()
print ("Notebook is set for PyRosetta use in Colab. Have fun!")
from pyrosetta import *
from pyrosetta.teaching import *
from pyrosetta.rosetta import *
###Output
_____no_output_____
###Markdown
First, one needs the `-include_sugars` option, which will tell Rosetta to load sugars and add the sugar_bb energy term to a default scorefunction. This score term is like rama for the sugar dihedrals that connect the sugar residues.
###Code
init('-include_sugars')
###Output
_____no_output_____
###Markdown
When loading structures from the PDB that include glycans, we use these options. This includes an option to write out the structures in pdb format instead of the (better) Rosetta format. We will be using these options in the next tutorial: `-maintain_links -auto_detect_glycan_connections -alternate_3_letter_codes pdb_sugar -write_glycan_pdb_codes -load_PDB_components false`
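For reference, a sketch of what a full initialization with these flags would look like (we won't need it until then):
```
init('-include_sugars'
     ' -maintain_links'
     ' -auto_detect_glycan_connections'
     ' -alternate_3_letter_codes pdb_sugar'
     ' -write_glycan_pdb_codes'
     ' -load_PDB_components false')
```
Set up the `PyMOLMover` for viewing structures.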
###Code
pm = PyMOLMover()
###Output
_____no_output_____
###Markdown
Creating Saccharides from SequenceWe will use the function, `pose_from_saccharide_sequence()`, which must be imported from the `core.pose` namespace. Unlike with peptide chains, one-letter codes will not suffice when specifying saccharide chains, because there is too much information to convey; we must use at least four letters. The first three letters are the sugar's three-letter code; the fourth letter designates whether the residue is a furanose (`f`) or pyranose (`p`).
###Code
from pyrosetta.rosetta.core.pose import pose_from_saccharide_sequence
glucose = pose_from_saccharide_sequence('Glcp')
galactose = pose_from_saccharide_sequence('Galp')
mannose = pose_from_saccharide_sequence('Manp')
###Output
_____no_output_____
###Markdown
Use the `PyMOLMover` to compare the three monosaccharides in PyMOL. At which carbons do the three sugars differ? L and D FormsJust like with peptides, saccharides come in two enantiomeric forms, labelled l and d. (Note the small-caps, used in print.) These can be loaded into PyRosetta using the prefixes `L-` and `D-`.
###Code
L_glucose = pose_from_saccharide_sequence('L-Glcp')
D_glucose = pose_from_saccharide_sequence('D-Glcp')
###Output
_____no_output_____
###Markdown
Compare the two structures in PyMOL. Notice that all stereocenters are inverted between the two monosaccharides. Which enantiomer is loaded by PyRosetta by default if l or d are not specified? AnomersThe carbon that is at a higher oxidation state — that is, the carbon of the hemiacetal/-ketal in the cyclic form or the carbon that is the carbonyl carbon of the aldehyde or ketone in the linear form — is called the anomeric carbon. Because the carbonyl of an aldehyde or ketone is planar, a sugar molecule can cyclize into one of two forms, one in which the resulting hydroxyl group is pointing "up" and another in which the same hydroxyl group is pointing "down". These two anomers are labelled α and β.Create a one-residue `Pose` for both α- and β-d-glucopyranose and use PyMOL to compare both.
###Code
alpha_D_glucose = pose_from_saccharide_sequence('a-D-Glcp')
###Output
_____no_output_____
###Markdown
For which anomer is the C1 hydroxyl group axial to the chair conformation of the six-membered pyranose ring?Which anomer of d-glucose would you predict to be the most stable? (Hint: remember what you learned in organic chemistry about axial and equatorial substituents.) Linear Oligosaccharides & IUPAC SequencesOligo- and polysaccharides are composed of simple monosaccharide residues connected by acetal and ketal linkages called __glycosidic bonds__. Any of the monosaccharide's _hydroxyl_ groups can be used to form a linkage to the anomeric carbon of another monosaccharide, leading to both _linear_ and _branched_ molecules. Rosetta can create both _linear_ and _branched_ oligosaccharides from an __IUPAC__ sequence. (IUPAC is the international organization dedicated to chemical nomenclature.)To properly build a linear oligosaccharide, Rosetta must know the following details about each sugar residue being created, in the following order: - Main-chain connectivity — →2) (`->2)`), →4) (`->4)`), →6) (`->6)`), _etc._; default value is `->4)-` - Anomeric form — α (`a` or `alpha`) or β (`b` or `beta`); default value is `alpha` - Enantiomeric form — l (`L`) or d (`D`); default value is `D` - 3-Letter code — required; uses sentence case - Ring form code — f (for a furanose/5-membered ring), p (for a pyranose/6-membered ring); required Residues must be separated by hyphens. Glycosidic linkages can be specified with full IUPAC notation, _e.g._, `-(1->4)-` for “-(1→4)-”. (This means that the residue on the left connects from its C1 (anomeric) position to the hydroxyl oxygen at C4 of the residue on the right.) Rosetta will assume `-(1->` for aldoses and `-(2->` for ketoses.Note that the standard is to write the IUPAC sequence of a saccharide chain in reverse order from how the residues are numbered. Let's create three new oligosaccharides from sequence. (For example, in the cell below, `'->6)-Glcp-' * 2` expands to the IUPAC string `->6)-Glcp-->6)-Glcp-`, which relies on the defaults above for the anomeric and enantiomeric forms.)
###Code
maltotriose = pose_from_saccharide_sequence('a-D-Glcp-' * 3)
lactose = pose_from_saccharide_sequence('b-D-Galp-(1->4)-a-D-Glcp')
isomaltose = pose_from_saccharide_sequence('->6)-Glcp-' * 2)
###Output
_____no_output_____
###Markdown
General Residue InformationWhen you print a `Pose` containing carbohydrate residues, the sugar residues will be listed as `Z` in the sequence.
###Code
print("maltotriose\n", maltotriose)
print("\nisomaltose\n", isomaltose)
print("\nlactose\n", lactose)
###Output
_____no_output_____
###Markdown
However, you can have Rosetta print out the sequences for individual chains, using the `chain_sequence()` method. If you do this, Rosetta is smart enough to give you a distinct sequence format for saccharide chains. (You may have noticed that the default file name for a `.pdb` file created from this `Pose` will be the same sequence.)
###Code
print(maltotriose.chain_sequence(1))
print(isomaltose.chain_sequence(1))
print(lactose.chain_sequence(1))
###Output
_____no_output_____
###Markdown
Again, the standard is to show the sequence of a saccharide chain in reverse order from how the residues are numbered. This is also how phi, psi, and omega are defined: from i+1 to i.
###Code
for res in lactose.residues: print(res.seqpos(), res.name())
###Output
_____no_output_____
###Markdown
Notice that for polysaccharides, the upstream residue is called the reducing end, while the downstream residue is called the non-reducing end.You will also see the terms parent and child being used across Rosetta. Here, for residue 2, residue 1 is the parent; for residue 1, residue 2 is the child. Due to branching, residues can have more than one child/non-reducing end, but only a single parent residue. Rosetta stores carbohydrate-specific information within `ResidueType`. If you print a residue, this additional information will be displayed.
###Code
print(glucose.residue(1))
###Output
_____no_output_____
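###Markdown
As a preview of the `GlycanTreeSet` mentioned above (and covered properly in the next tutorial), the parent/child relationships can also be queried programmatically. This is a sketch; the accessor name `get_parent()` is an assumption here:
###Code
# For lactose, residue 1 is the parent of residue 2, so this should print 1.
tree_set = lactose.glycan_tree_set()
print(tree_set.get_parent(2))
###Output
_____no_output_____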
###Markdown
Scanning the output from printing a glucose `Residue`, what is the general term for an aldose with six carbon atoms? Exploring Carbohydrate Structure Torsion Angles Most biopolymers have predefined, named torsion angles for their main-chain and side-chain bonds, such as φ, ψ, and ω and the various χs for amino acid residues. The same is true for saccharide residues. The torsion angles of sugars are as follows:Figure 3. A disaccharide's main-chain torsion angles.Figure 4. A monosaccharide's internal ring torsion angles.Figure 5. A monosaccharide's side-chain torsion angles.φ — The 1st glycosidic torsion back to the previous (n−1) residue. The angle is defined by the cyclic oxygen, the two atoms across the bond, and the cyclic carbon numbered one less than the glycosidic linkage position. For aldopyranoses, φ(n) is thus defined as O5(n)–C1(n)–OX(n−1)–CX(n−1), where X is the position of the glycosidic linkage. For aldofuranoses, φ(n) is defined as O4(n)–C1(n)–OX(n−1)–CX(n−1). For 2-ketopyranoses, φ(n) is defined as O6(n)–C2(n)–OX(n−1)–CX(n−1). For 2-ketofuranoses, φ(n) is defined as O5(n)–C2(n)–OX(n−1)–CX(n−1). Et cetera….ψ — The 2nd glycosidic torsion back to the previous (n−1) residue. The angle is defined by the anomeric carbon, the two atoms across the bond, and the cyclic carbon numbered two less than the glycosidic linkage position. ψ(n) is thus defined as Canomeric(n)–OX(n−1)–CX(n−1)–CX−1(n−1), where X is the position of the glycosidic linkage.ω — The 3rd (and any subsequent) glycosidic torsion(s) back to the previous residue. ω1(n) is defined as OX(n−1)–CX(n−1)–CX−1(n−1)–CX−2(n−1), where X is the position of the glycosidic linkage. (This only applies to sugars with exocyclic connectivities.) The connection in Figure 3 has an exocyclic carbon, but the other potential connection points do not - so only phi and psi would be available as backbone torsion angles for those connection points. ν1 – νn — The internal ring torsion angles, where n is the number of atoms in the ring. ν1 defines the torsion across bond C1–C2, etc.χ1 – χn — The side-chain torsion angles, where n is the number of carbons in the sugar residue. The angle is defined by the carbon numbered one less than the glycosidic linkage position, the two atoms across the bond, and the polar hydrogen. The cyclic oxygen counts as carbon 0. For an aldopyranose, χ1 is thus defined by O5–C1–O1–HO1, and χ2 is defined by C1–C2–O2–HO2. χ5 is defined by C4–C5–C6–O6, because it rotates the exocyclic carbon rather than twists the ring. χ6 is defined by C5–C6–O6–HO6.Take special note of how φ, ψ, and ω are defined in the reverse order from the angles of the same names for amino acid residues!The `chi()` method of `Pose` works with sugar residues in the same way that it works with amino acid residues, where the first argument is the χ subscript and the second is the residue number of the `Pose`.
###Code
galactose.chi(1, 1)
galactose.chi(2, 1)
galactose.chi(3, 1)
galactose.chi(4, 1)
galactose.chi(5, 1)
galactose.chi(6, 1)
###Output
_____no_output_____
###Markdown
Likewise, we can use `set_chi()` to change these torsion angles and observe the changes in PyMOL, setting the option to keep history to true.
###Code
from pyrosetta.rosetta.protocols.moves import AddPyMOLObserver
observer = AddPyMOLObserver(galactose, True)
pm.apply(galactose)
###Output
_____no_output_____
###Markdown
Perform the following torsion angle changes to galactose using `set_chi()` and observe which torsions move in PyMOL.Set χ1 to 120°.Set χ2 to 60°.Set χ3 to 60°.Set χ4 to 0°.Set χ5 to 60°.Set χ6 to −60°.
###Code
galactose.set_chi(1, 1, 180)
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Creating Saccharides from a PDB file The `phi()`, `set_phi()`, `psi()`, `set_psi()`, `omega()`, and `set_omega()` methods of `Pose` also work with sugars. However, since `pose_from_saccharide_sequence()` may create a `Pose` with angles that cause the residues to wrap around onto each other, let's instead reload some `Pose`s from `.pdb` files.
###Code
maltotriose = pose_from_file('inputs/glycans/maltotriose.pdb')
isomaltose = pose_from_file('inputs/glycans/isomaltose.pdb')
###Output
_____no_output_____
###Markdown
Now, try out the torsion angle getters and setters for the glycosidic bonds.
###Code
pm.apply(maltotriose)
maltotriose.phi(1)
maltotriose.psi(1)
maltotriose.phi(2)
maltotriose.psi(2)
maltotriose.omega(2)
maltotriose.phi(3)
maltotriose.psi(3)
###Output
_____no_output_____
###Markdown
Notice how φ1 and ψ1 are undefined—the first residue is not connected to anything.
###Code
observer = AddPyMOLObserver(maltotriose, True)
for i in (2, 3):
maltotriose.set_phi(i, 180)
maltotriose.set_psi(i, 180)
###Output
_____no_output_____
###Markdown
**Isomaltose** is composed of (1→6) linkages, so in this case omega torsions are defined. Get and set φ2, ψ2, and ω2 for isomaltose.
###Code
observer = AddPyMOLObserver(isomaltose, True)
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Any cyclic residue also stores its ν angles.
###Code
pm.apply(glucose)
Glc1 = glucose.residue(1)
for i in range(1, 6): print(Glc1.nu(i))
###Output
_____no_output_____
###Markdown
However, we generally care more about the conformation of a cyclic residue's rings; in this case, glucose has only one ring, with index 1. (The output values here are the ideal angles, not the actual angles, which we viewed above.)
###Code
print(Glc1.ring_conformer(1))
###Output
_____no_output_____
###Markdown
RingConformersThe output above warrants a brief explanation. First, what does `4C1` mean? Most of us likely remember learning about chair and boat conformations in Organic Chemistry. Do you recall how there are two distinct chair conformations that can interconvert? The names for these specific conformations are 4C1 and 1C4. The nomenclature is as follows: Superscripts to the left of the capital letter are above the plane of the ring if it is oriented such that its carbon atoms proceed in a clockwise direction when viewed from above. Subscripts to the right of the letter are below the plane of the ring. The letter itself is an abbreviation, where, for example, C indicates a chair conformation and B a boat conformation. In all, there are 38 different ideal ring conformations that any six-membered cycle can take.`C-P parameters` refers to the Cremer–Pople parameters for this conformation (Cremer D, Pople JA. J Am Chem Soc. 1975;97:1354–1358.). C–P parameters are an alternative coordinate system used to refer to a ring conformation. Finally, a `RingConformer` in Rosetta includes the values of the ν angles. Each conformer has a unique set of angles. `Pose::set_nu()` does not exist, because it would rip a ring apart. Instead, to change a ring conformation, we need to use the `set_ring_conformation()` method, which takes a `RingConformer` object. Most of the time, you will not need to adjust the ring conformers, but you should be aware of it. We can ask a cyclic `ResidueType` for one of its `RingConformerSet`s to give us the `RingConformer` we want. (Each `RingConformerSet` includes the list of possible idealized ring conformers that such a ring can attain as well as information about the most energetically favorable one.) Then, we can set the conformation for our residue through `Pose`. (The arguments for `set_ring_conformation()` are the `Pose`'s sequence position, ring number, and the new conformer, respectively.) Figure 6. The two chair conformations of α-d-glucopyranose. In the 1C4 conformation (left), all of the substituents are axial; in the 4C1 conformation (right), they are equatorial. 4C1 is the most stable conformation for the majority of the α-d-aldohexopyranoses. In this nomenclature, a superscript means that that numbered carbon is above the ring, if the atoms are arranged in a clockwise manner from C1. A subscripted number indicates a carbon below the plane of the ring.
###Code
ring_set = Glc1.type().ring_conformer_set(1)
conformer = ring_set.get_ideal_conformer_by_name('1C4')
glucose.set_ring_conformation(1, 1, conformer)
pm.apply(glucose)
###Output
_____no_output_____
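###Markdown
The same machinery works in reverse. For instance, to restore the canonical chair (using only the calls demonstrated above):
###Code
# Flip glucose back to the standard 4C1 chair conformation.
conformer = ring_set.get_ideal_conformer_by_name('4C1')
glucose.set_ring_conformation(1, 1, conformer)
pm.apply(glucose)
###Output
_____no_output_____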
###Markdown
Modified Sugars, Branched Oligosaccharides, & `.pdb` File `LINK` Records Modified sugars can also be created in Rosetta, either from sequence or from file. In the former case, simply use the proper abbreviation for the modification after the “ring form code”. For example, the abbreviation for an N-acetyl group is “NAc”. Note the N-acetyl group in the PyMOL window.
###Code
LacNAc = pose_from_saccharide_sequence('b-D-Galp-(1->4)-a-D-GlcpNAc')
pm.apply(LacNAc)
###Output
_____no_output_____
###Markdown
Rosetta can handle branched oligosaccharides as well, but when loading from a sequence, this requires the use of brackets, which is the standard IUPAC notation. For example, here is how one would load Lewisx (Lex), a common branched glyco-epitope, into Rosetta by sequence.
###Code
Lex = pose_from_saccharide_sequence('b-D-Galp-(1->4)-[a-L-Fucp-(1->3)]-D-GlcpNAc')
pm.apply(Lex)
###Output
_____no_output_____
###Markdown
One can also load branched carbohydrates from a `.pdb` file. These `.pdb` files must include `LINK` records, which are a standard part of the PDB format. Open the `inputs/glycans/Lex.pdb` file and look near the top to see an example `LINK` record, which looks like this:```LINK O3 Glc A 1 C1 Fuc B 1 1555 1555 1.5 ```It tells us that there is a covalent linkage between O3 of glucose A1 and C1 of fucose B1 with a bond length of 1.5 Å. (The `1555`s indicate symmetry and are ignored by Rosetta.)Note that if the LINK records are not in order, or the HETNAM records are not in a Rosetta format, loading will fail. In the next tutorial we will use auto-detection to handle this. For now, we know Lex.pdb will load OK.
###Code
Lex = pose_from_file('inputs/glycans/Lex.pdb')
pm.apply(Lex)
###Output
_____no_output_____
###Markdown
You may notice when viewing the structure in PyMOL that the hybridization of the carbonyl of the amido functionality of the N-acetyl group is wrong. This is because of an error in the model deposited in the PDB from which this file was generated. This is, unfortunately, a very common problem with sugar structures found in the PDB. It is always useful to use http://www.glycosciences.de to identify any errors in the deposited PDB structure before working with it in Rosetta. The referenced paper, __Automatically Fixing Errors in Glycoprotein Structures with Rosetta__, can be used as a guide to fixing these.You may also have noticed that the `inputs/glycans/Lex.pdb` file indicated in its `HETNAM` records that Glc1 was actually an N-acetylglucosamine (GlcNAc) with the indication `2-acetylamino-2-deoxy-`. This is optional and is helpful for human readability, but Rosetta only needs to know the base `ResidueType` of each sugar residue; specific `VariantType`s needed — and most sugar modifications are treated as `VariantType`s — are determined automatically from the atom names in the `HETATM` records for the residue. Anything after the comma is ignored.Print out the `Pose` to see how the `FoldTree` is defined.
###Code
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Note the `CHEMICAL` `Edge` (`-2`). This is Rosetta's way of indicating a branch backbone connection. Unlike a standard `POLYMER` `Edge` (`-1`), this one tells you which atoms are involved.Print out the sequence of each chain. Now print out information about each residue in the Pose to see which `VariantType`s and `ResidueProperty`s are assigned to each. What are the three `VariantType`s of residue 1?Output the various torsion angles and make sure that you understand to which angles they correspond. Can you see now why φ and ψ are defined the way they are? If they were defined as in AA residues, they would not have unique definitions, since GlcNAc is a branch point. A monosaccharide can have multiple children, but it can never have more than a single parent.Note that for this oligosaccharide χ3(1) is equivalent to ψ(3) and χ4(1) is equivalent to ψ(2). Make sure that you understand why!
###Code
Lex.chi(3, 1), Lex.psi(3)
Lex.chi(4, 1), Lex.psi(2)
###Output
_____no_output_____
###Markdown
For chemically modified sugars, χ angles are redefined at the positions where substitution has occurred. For new χs that have come into existence from the addition of new atoms and bonds, new definitions are added to new indices. For example, for GlcN2Ac residue 1, χC2–N2–C′–Cα′ is accessed through `chi(7, 1)`.
###Code
Lex.chi(2, 1)
Lex.set_chi(2, 1, 180)
pm.apply(Lex)
Lex.chi(7, 1)
Lex.set_chi(7, 1, 0)
pm.apply(Lex)
###Output
_____no_output_____
###Markdown
Play around with getting and setting the various torsion angles for Lex N- and O-Linked Glycans Branching does not have to occur at sugars; a glycan can be attached to the nitrogen of an ASN or the oxygen of a SER or THR. N-linked glycans themselves tend to be branched structures. We will cover more on linked glycan trees in the next tutorial through the `GlycanTreeSet` object - which is always present in a pose that has carbohydrates.
###Code
N_linked = pose_from_file('inputs/glycans/N-linked_14-mer_glycan.pdb')
pm.apply(N_linked)
print(N_linked)
for i in range(4): print(N_linked.chain_sequence(i + 1))
###Output
_____no_output_____
###Markdown
Which residue number is glycosylated above?
###Code
O_linked = pose_from_file('inputs/glycans/O_glycan.pdb')
pm.apply(O_linked)
###Output
_____no_output_____
###Markdown
Print `O_linked` and the sequence of each of its chains. `set_phi()` and `set_psi()` still work when a glycan is linked to a peptide. (Below, we use `pdb_info()` to help us select the residue that we want. In this case, in the `.pdb` file, the glycan is chain B.)
###Code
N_linked.set_phi(N_linked.pdb_info().pdb2pose("B", 1), 180)
pm.apply(N_linked)
###Output
_____no_output_____
###Markdown
Set ψ(B1) to 0° and ω(B1) to 90° and view the results in PyMOL. Notice that in this case ψ and ω affect the side-chain torsions (χs) of the asparagine residue. This is another case where there are multiple ways of both naming and accessing the same specific torsion angles.One can also create conjugated glycans from sequences if this is performed in steps: first create the peptide portion by loading from a `.pdb` file or from sequence, and then use the `glycosylate_pose()` function (which needs to be imported first). For example, to glycosylate an ASA peptide with a single glucose at position 2 of the peptide, we perform the following: Glycosylation by functionHere, we will glycosylate a simple peptide using the function `glycosylate_pose()`. In the next tutorial, we will use a Mover interface to this function.
###Code
peptide = pose_from_sequence('ASA')
pm.apply(peptide)
from pyrosetta.rosetta.core.pose.carbohydrates import glycosylate_pose, glycosylate_pose_by_file
glycosylate_pose(peptide, 2, 'Glcp')
pm.apply(peptide)
###Output
_____no_output_____
###Markdown
Here, we used the main function to glycosylate a pose. In the next tutorial, we will use a Mover interface to do so. It is also possible to glycosylate a pose with common glycans found in the database. These files end in the `.iupac` extension and are simply IUPAC sequences, just as we have been using throughout this chapter.Here is a list of some common iupacs.```
bisected_fucosylated_N-glycan_core.iupac
bisected_N-glycan_core.iupac
common_names.txt
core_1_O-glycan.iupac
core_2_O-glycan.iupac
core_3_O-glycan.iupac
core_4_O-glycan.iupac
core_5_O-glycan.iupac
core_6_O-glycan.iupac
core_7_O-glycan.iupac
core_8_O-glycan.iupac
fucosylated_N-glycan_core.iupac
high-mannose_N-glycan_core.iupac
hybrid_bisected_fucosylated_N-glycan_core.iupac
hybrid_bisected_N-glycan_core.iupac
hybrid_fucosylated_N-glycan_core.iupac
hybrid_N-glycan_core.iupac
man5.iupac
man9.iupac
N-glycan_core.iupac
```
###Code
peptide = pose_from_sequence('ASA'); pm.apply(peptide)
glycosylate_pose_by_file(peptide, 2, 'core_5_O-glycan')
pm.apply(peptide)
###Output
_____no_output_____
###Markdown
Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\rightarrow$Run All).Make sure you fill in any place that says `YOUR CODE HERE` or "YOUR ANSWER HERE", as well as your name and collaborators below:
###Code
NAME = ""
COLLABORATORS = ""
###Output
_____no_output_____
###Markdown
--- *This notebook contains material from [PyRosetta](https://RosettaCommons.github.io/PyRosetta.notebooks);content is available [on Github](https://github.com/RosettaCommons/PyRosetta.notebooks.git).* RosettaCarbohydratesKeywords: carbohydrate, glycan, sugar, glucose, mannose, sugar, GlycanTreeSet, saccharide, furanose, pyranose, aldose, ketose OverviewIn this chapter, we will focus on a special subset of non-peptide oligo- and polymers — carbohydrates.Modeling carbohydrates — also known as saccharides, glycans, or simply sugars — comes with some special challenges. For one, most saccharide residues contain a ring as part of their backbone. This ring provides potentially new degrees of freedom when sampling. Additionally, carbohydrate structures are often branched, leading in Rosetta to more complicated `FoldTrees`. This chapter includes a quick overview of carbohydrate nomenclature, structure, and basic interactions within Rosetta. Carbohydrate Chemistry Background Figure 1. A pyranose (left) and a furanose (right).Sugars (saccharides) are defined as hyroxylated aldehydes and ketones. A typical monosaccharide has an equal number of carbon and oxygen atoms. For example, glucose has the molecular formula C6H12O6.Sugars containing more than three carbons will spontaneously cyclize in aqueous environments to form five- or six-membered hemiacetals and hemiketals. Sugars with five-membered rings are called furanoses; those with six-membered rings are called pyranoses (Fig. 1).Figure 2. An aldose (left) and a ketose (right).A sugar is classified as an aldose or ketose, depending on whether it has an aldehyde or ketone in its linear form (Fig. 2).The different sugars have different names, depending on the stereochemistry at each of the carbon atoms in the molecule. For example, glucose has one set of stereochemistries, while mannose has another.In addition to their full names, many individual saccharide residues have three-letter codes, just like amino acid residues do. Glucose is "Glc" and mannose is "Man". Backbone Torsions, Residue Connections, and side-chainsA glycan tree is made up of many sugar residues, each residue a ring. The 'backbone' of a glycan is the connection between one residue and another. The chemical makeup of each sugar residue in this 'linkage' effects the propensity/energy of each bacbone dihedral angle. In addition, sugars can be attached via different carbons of the parent glycan. In this way, the chemical makeup and the attachment position effects the dihedral propensities. Typically, there are two backbone dihedral angles, but this could be up to 4+ angles depending on the connection.In IUPAC, the dihedrals of N are defined as the dihedrals between N and N-1 (IE - the parent linkage). The ASN (or other glycosylated protein residue's) dihedrals become part of the first glycan residue that is connected. For this first first glycan residue that is connected to an ASN, it has 4 torsions, while the ASN now has none!If you are creating a movemap for dihedral residues, please use the `MoveMapFactory` as this has the IUPAC nomenclature of glycan residues built in in order to allow proper DOF sampling of the backbone residues, especially for branching glycan trees. In general, all of our samplers should use residue selectors and use the MoveMapFactory to build movemaps internally.A sugar's side-chains are the constitutents of the glycan ring, which are typically an OH group or an acetyl group. These are sampled together at 60 degree angles by default during packing. 
A higher granularity of rotamers cannot currently be handled in Rosetta, but 60 degrees seems adequete for our purposes.Within Rosetta, glycan connectivity information is stored in the `GlycanTreeSet`, which is continually updated to reflect any residue changes or additions to the pose. This info is always available through the function pose.glycan_tree_set()Chemical information of each glycan residue can be accessed through the CarbohydrateInfo object, which is stored in each ResidueType object: pose.residue_type(i).carbohydrate_info() We will cover both of these classes in the next tutorial. Documentationhttps://www.rosettacommons.org/docs/latest/application_documentation/carbohydrates/WorkingWithGlycans References**Residue centric modeling and design of saccharide and glycoconjugate structures**Jason W. Labonte Jared Adolf-Bryfogle William R. Schief Jeffrey J. Gray_Journal of Computational Chemistry_, 11/30/2016 - **Automatically Fixing Errors in Glycoprotein Structures with Rosetta**Brandon Frenz, Sebastian Rämisch, Andrew J. Borst, Alexandra C. WallsJared Adolf-Bryfogle, William R. Schief, David Veesler, Frank DiMaio_Structure_, 1/2/2019 InitializationLet's use Pyrosetta to compare some common monosaccharide residues and see how they differ. As usual, we start by importing the `pyrosetta` and `rosetta` namespaces.
###Code
import sys
if 'google.colab' in sys.modules:
!pip install pyrosettacolabsetup
import pyrosettacolabsetup
pyrosettacolabsetup.setup()
print ("Notebook is set for PyRosetta use in Colab. Have fun!")
from pyrosetta import *
from pyrosetta.teaching import *
from pyrosetta.rosetta import *
###Output
_____no_output_____
###Markdown
First, one needs the `-include_sugars` option, which will tell Rosetta to load sugars and add the sugar_bb energy term to a default scorefunction. This scoreterm is like rama for the sugar dihedrals which connect each sugar residue.
###Code
init('-include_sugars')
###Output
_____no_output_____
###Markdown
When loading structures from the PDB that include glycans, we use these options. This includes an option to write out the structures in pdb format instead of the (better) Rosetta format. We will be using these options in the next tutorial. -maintain_links -auto_detect_glycan_connections -alternate_3_letter_codes pdb_sugar -write_glycan_pdb_codes -load_PDB_components false Set up the `PyMOLMover` for viewing structures.
###Code
pm = PyMOLMover()
###Output
_____no_output_____
###Markdown
Creating Saccharides from SequenceWe will use the function, `pose_from_saccharide_sequence()`, which must be imported from the `core.pose` namespace. Unlike with peptide chains, one-letter-codes will not suffice when specifying saccharide chains, because there is too much information to convey; we must use at least four letters. The first three letters are the sugar's three-letter code; the fourth letter designates whether the residue is a furanose (`f`) or pyranose (`p`).
###Code
from pyrosetta.rosetta.core.pose import pose_from_saccharide_sequence
glucose = pose_from_saccharide_sequence('Glcp')
galactose = pose_from_saccharide_sequence('Galp')
mannose = pose_from_saccharide_sequence('Manp')
###Output
_____no_output_____
###Markdown
Use the `PyMOLMover` to compare the three monosacharides in PyMOL. At which carbons do the three sugars differ? L and D FormsJust like with peptides, saccharides come in two enantiomeric forms, labelled l and d. (Note the small-caps, used in print.) These can be loaded into PyRosetta using the prefixes `L-` and `D-`.
###Code
L_glucose = pose_from_saccharide_sequence('L-Glcp')
D_glucose = pose_from_saccharide_sequence('D-Glcp')
###Output
_____no_output_____
###Markdown
Compare the two structures in PyMOL. Notice that all stereocenters are inverted between the two monosaccharides. Which enantiomer is loaded by PyRosetta by default if l or d are not specified? AnomersThe carbon that is at a higher oxidation state — that is, the carbon of the hemiacetal/-ketal in the cyclic form or the carbon that is the carbonyl carbon of the aldehyde or ketone in the linear form — is called the anomeric carbon. Because the carbonyl of an aldehyde or ketone is planar, a sugar molecule can cyclize into one of two forms, one in which the resulting hydroxyl group is pointing "up" and another in which the same hydroxyl group is pointing "down". These two anomers are labelled α and β.Create a one-residue `Pose` for both α- and β-d-glucopyranose and use PyMOL to compare both.
###Code
alpha_D_glucose = pose_from_saccharide_sequence('a-D-Glcp')
###Output
_____no_output_____
###Markdown
For which anomer is the C1 hydroxyl group axial to the chair conformation of the six-membered pyranose ring?Which anomer of d-glucose would you predict to be the most stable? (Hint: remember what you learned in organic chemistry about axial and equatorial substituents.) Linear Oligosaccharides & IUPAC SequencesOligo- and polysaccharides are composed of simple monosaccharide residues connected by acetal and ketal linkages called __glycosidic bonds__. Any of the monosaccharide's _hydroxyl_ groups can be used to form a linkage to the anomeric carbon of another monosaccharide, leading to both _linear_ and _branched_ molecules. Rosetta can create both _linear_ and _branched_ oligosaccharides from an __IUPAC__ sequence. (IUPAC is the international organization dedicated to chemical nomenclature.)To properly build a linear oligosaccharide, Rosetta must know the following details about each sugar residue being created in the following order: - Main-chain connectivity — →2) (`->2)`), →4) (`->4)`), →6) (`->6)`), _etc._; default value is `->4)-` - Anomeric form — α (`a` or `alpha`) or β (`b` or `beta`); default value is `alpha` - Enantiomeric form — l (`L`) or d (`D`); default value is `D` - 3-Letter code — required; uses sentence case - Ring form code — f (for a furanose/5-membered ring), p (for a pyranose/6-membered ring); required Residues must be separated by hyphens. Glycosidic linkages can be specified with full IUPAC notation, _e.g._, `-(1->4)-` for “-(1→4)-”. (This means that the residue on the left connects from its C1 (anomeric) position to the hydoxyl oxygen at C4 of the residue on the right.) Rosetta will assume `-(1->` for aldoses and `-(2->` for ketoses.Note that the standard is to write the IUPAC sequence of a saccharide chain in reverse order from how they are numbered. Lets create three new oligosacharides from sequence.
###Code
maltotriose = pose_from_saccharide_sequence('a-D-Glcp-' * 3)
lactose = pose_from_saccharide_sequence('b-D-Galp-(1->4)-a-D-Glcp')
isomaltose = pose_from_saccharide_sequence('->6)-Glcp-' * 2)
###Output
_____no_output_____
###Markdown
General Residue InformationWhen you print a `Pose` containing carbohydrate residues, the sugar residues will be listed as `Z` in the sequence.
###Code
print("maltotriose\n", maltotriose)
print("\nisomaltose\n", isomaltose)
print("\nlactose\n", lactose)
###Output
_____no_output_____
###Markdown
However, you can have Rosetta print out the sequences for individual chains, using the `chain_sequence()` method. If you do this, Rosetta is smart enough to give you a distinct sequence format for saccharide chains. (You may have noticed that the default file name for a `.pdb` file created from this `Pose` will be the same sequence.)
###Code
print(maltotriose.chain_sequence(1))
print(isomaltose.chain_sequence(1))
print(lactose.chain_sequence(1))
###Output
_____no_output_____
###Markdown
Again, the standard is to show the sequence of a saccharide chain in reverse order from how they are numbered. This is also how phi, psi, and omega are defined. From i+1 to i.
###Code
for res in lactose.residues: print(res.seqpos(), res.name())
###Output
_____no_output_____
###Markdown
Notice that for polysaccharides, the upstream residue is called the reducing end, while the downstream residue is called the non-reducing end.You will also see the terms parent and child being used across Rosetta. Here, for Residue 2, residue 1 is the parent. For Residue 1, Residue 2 is the child. Due to branching, residues can have more than one child/non-reducing-end, but only a single parent residue. Rosetta stores carbohydrate-specific information within `ResidueType`. If you print a residue, this additional information will be displayed.
###Code
print(glucose.residue(1))
###Output
_____no_output_____
###Markdown
Scanning the output from printing a glucose `Residue`, what is the general term for an aldose with six carbon atoms? Exploring Carbohydrate Structure Torsion Angles Most bioolymers have predefined, named torsion angles for their main-chain and side-chain bonds, such as φ, ψ, and ω and the various χs for amino acid residues. The same is true for saccharide residues. The torsion angles of sugars are as follows:Figure 3. A disaccharide's main-chain torsion angles.Figure 4. A monosaccharide's internal ring torsion angles.Figure 5. A monosaccharide's side-chain torsion angles.φ — The 1st glycosidic torsion back to the previous (n−1) residue. The angle is defined by the cyclic oxygen, the two atoms across the bond, and the cyclic carbon numbered one less than the glycosidic linkage position. For aldopyranoses, φ(n) is thus defined as O5(n)–C1(n)–OX(n−1)–CX(n−1), where X is the position of the glycosidic linkage. For aldofuranoses, φ(n) is defined as O4(n)–C1(n)–OX(n−1)–CX(n−1) For 2-ketopyranoses, φ(n) is defined as O6(n)–C2(n)–OX(n−1)–CX(n−1). For 2-ketofuranoses, φ(n) is defined as O5(n)–C2(n)–OX(n−1)–CX(n−1). Et cetera….ψ — The 2nd glycosidic torsion back to the previous (n−1) residue. The angle is defined by the anomeric carbon, the two atoms across the bond, and the cyclic carbon numbered two less than the glycosidic linkage position. ψ(n) is thus defined as Canomeric(n)–OX(n−1)–CX(n−1)–CX−1(n−1), where X is the position of the glycosidic linkage.ω — The 3rd (and any subsequent) glycosidic torsion(s) back to the previous residue. ω1(n) is defined as OX(n−1)–CX(n−1)–CX−1(n−1)–CX−2(n−1), where X is the position of the glycosidic linkage. (This only applies to sugars with exocyclic connectivities.). The connection in Figure 3 has an exocyclic carbon, but the other potential connection points do not - so only phi and psi would available as bacbone torsion angles for those connection points. ν1 – νn — The internal ring torsion angles, where n is the number of atoms in the ring. ν1 defines the torsion across bond C1–C2, etc.χ1 – χn — The side-chain torsion angles, where n is the number of carbons in the sugar residue. The angle is defined by the carbon numbered one less than the glycosidic linkage position, the two atoms across the bond, and the polar hydrogen. The cyclic ring counts as carbon 0. For an aldopyranose, χ1 is thus defined by O5–C1–O1–HO1, and χ2 is defined by C1–C2–O2–HO2. χ5 is defined by C4–C5–C6–O6, because it rotates the exocyclic carbon rather than twists the ring. χ6 is defined by C5–C6–O6–HO6.Take special note of how φ, ψ, and ω are defined in the reverse order as the angles of the same names for amino acid residues!The `chi()` method of `Pose` works with sugar residues in the same way that it works with amino acid residues, where the first argument is the χ subscript and the second is the residue number of the `Pose`.
###Code
galactose.chi(1, 1)
galactose.chi(2, 1)
galactose.chi(3, 1)
galactose.chi(4, 1)
galactose.chi(5, 1)
galactose.chi(6, 1)
###Output
_____no_output_____
###Markdown
Likewise, we can use `set_chi()` to change these torsion angles and observe the changes in PyMOL, setting the option to keep history to true.
###Code
from pyrosetta.rosetta.protocols.moves import AddPyMOLObserver
observer = AddPyMOLObserver(galactose, True)
pm.apply(galactose)
###Output
_____no_output_____
###Markdown
Perform the following torsion angle changes to galactose using `set_chi()` and observe which torsions move in PyMOL.Set χ1 to 120°.Set χ2 to 60°.Set χ3 to 60°.Set χ4 to 0°.Set χ5 to 60°.Set χ6 to −60°.
###Code
galactose.set_chi(1, 1, 180)
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Creating Saccharides from a PDB file The `phi()`, `set_phi()`, `psi()`, `set_psi()`, `omega()`, and `set_omega()` methods of `Pose` also work with sugars. However, since `pose_from_saccharide_sequence()` may create a `Pose` with angles that cause the residues to wrap around onto each other, instead, let's reload some Pose's from `.pdb` files.
###Code
maltotriose = pose_from_file('inputs/glycans/maltotriose.pdb')
isomaltose = pose_from_file('inputs/glycans/isomaltose.pdb')
###Output
_____no_output_____
###Markdown
Now, try out the torsion angle getters and setters for the glycosydic bonds.
###Code
pm.apply(maltotriose)
maltotriose.phi(1)
maltotriose.psi(1)
maltotriose.phi(2)
maltotriose.psi(2)
maltotriose.omega(2)
maltotriose.phi(3)
maltotriose.psi(3)
###Output
_____no_output_____
###Markdown
Notice how φ1 and ψ1 are undefined—the first residue is not connected to anything
###Code
observer = AddPyMOLObserver(maltotriose, True)
for i in (2, 3):
maltotriose.set_phi(i, 180)
maltotriose.set_psi(i, 180)
###Output
_____no_output_____
###Markdown
**Isomaltose** is composed of (1→6) linkages, so in this case omega torsions are defined. Get and set φ2, ψ2, ω2 for isomaltose
###Code
observer = AddPyMOLObserver(isomaltose, True)
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Any cyclic residue also stores its ν angles.
###Code
pm.apply(glucose)
Glc1 = glucose.residue(1)
for i in range(1, 6): print(Glc1.nu(i))
###Output
_____no_output_____
###Markdown
However, we generally care more about the ring conformation of a cyclic residue’s rings, in this case, its only ring with index of 1. (The output values here are the ideal angles, not the actual angles, which we viewed above.)
###Code
print(Glc1.ring_conformer(1))
###Output
_____no_output_____
###Markdown
RingConformersThe output above warrants a brief explanation. First, what does `4C1` mean? Most of us likely remember learning about chair and boat conformations in Organic Chemistry. Do you recall how there are two distinct chair conformations that can interconvert between each other? The names for these specific conformations are 4C1 and 1C4. The nomenclature is as follows: Superscripts to the left of the capital letter are above the plane of the ring if it is oriented such that its carbon atoms proceed in a clockwise direction when viewed from above. Subscripts to the right of the letter are below the plane of the ring. The letter itself is an abbreviation, where, for example, C indicates a chair conformation and B a boat conformation. In all, there are 38 different ideal ring conformations that any six-membered cycle can take.`C-P parameters` refers to the Cremer–Pople parameters for this conformation (Cremer D, Pople JA. J Am Chem Soc. 1975;97:1354–1358.). C–P parameters are an alternative coordinate system used to refer to a ring conformation. Finally, a `RingConformer` in Rosetta includes the values of the ν angles. Each conformer has a unique set of angles. `Pose::set_nu()` does not exist, because it would rip a ring apart. Instead, to change a ring conformation, we need to use the `set_ring_conformer()` method, which takes a `RingConformer` object. Most of the time, you will not need to adjust the ring conformers, but you should be aware of it. We can ask a cyclic `ResidueType` for one of its `RingConformerSet`s to give us the `RingConformer` we want. (Each `RingConformerSet` includes the list of possible idealized ring conformers that such a ring can attain as well as information about the most energetically favorable one.) Then, we can et the conformation for our residue through `Pose`. (The arguments for `set_ring_conformer()` are the `Pose`’s sequence position, ring number, and the new conformer, respectively.) Figure 5. The two chair conformations of α-d-glucopyranose. In the 1C4 conformation (left), all of the substituents are axial; in the 4C1 conformation (right), they are equatorial. 4C1 is the most stable conformation for the majority of the α-d-aldohexopyranoses. In this nomenclature, a superscript means that that numbered carbon is above the ring, if the atoms are arranged in a clockwise manner from C1. A subscripted number indicates a carbon below the plane of the ring.
###Code
ring_set = Glc1.type().ring_conformer_set(1)
conformer = ring_set.get_ideal_conformer_by_name('1C4')
glucose.set_ring_conformation(1, 1, conformer)
pm.apply(glucose)
###Output
_____no_output_____
###Markdown
Modified Sugars, Branched Oligosaccharides, & `.pdb` File `LINK` Records Modified sugars can also be created in Rosetta, either from sequence or from file. In the former case, simply use the proper abbreviation for the modification after the “ring form code”. For example, the abbreviation for an N-acetyl group is “NAc”. Note the N-acetyl group in the PyMOL window.
###Code
LacNAc = pose_from_saccharide_sequence('b-D-Galp-(1->4)-a-D-GlcpNAc')
pm.apply(LacNAc)
###Output
_____no_output_____
###Markdown
Rosetta can handle branched oligosaccharides as well, but when loading from a sequence, this requires the use of brackets, which is the standard IUPAC notation. For example, here is how one would load Lewisx (Lex), a common branched glyco-epitope, into Rosetta by sequence.
###Code
Lex = pose_from_saccharide_sequence('b-D-Galp-(1->4)-[a-L-Fucp-(1->3)]-D-GlcpNAc')
pm.apply(Lex)
###Output
_____no_output_____
###Markdown
One can also load branched carbohydrates from a `.pdb` file. These `.pdb` files must include `LINK` records, which are a standard part of the PDB format. Open the `test/data/carbohydrates/Lex.pdb` file and look bear the top to see an example `LINK` record, which looks like this:```LINK O3 Glc A 1 C1 Fuc B 1 1555 1555 1.5 ```It tells us that there is a covalent linkage between O3 of glucose A1 and C1 of fucose B1 with a bond length of 1.5 Å. (The `1555`s indicate symmetry and are ignored by Rosetta.)Note that if the LINK records are not in order, or HETNAM records are not in a Rosetta format, we will fail to load. In the next tutorial we will use auto-detection to do this. For now, we know Lex.pdb will load OK.
###Code
Lex = pose_from_file('inputs/glycans/Lex.pdb')
pm.apply(Lex)
###Output
_____no_output_____
###Markdown
You may notice when viewing the structure in PyMOL that the hybridization of the carbonyl of the amido functionality of the N-acetyl group is wrong. This is because of an error in the model deposited in the PDB from which this file was generated. This is, unfortunately, a very common problem with sugar structures found in the PDB. It is always useful to use http://www.glycosciences.de to identify any errors in a deposited PDB structure before working with it in Rosetta. The referenced paper, __Automatically Fixing Errors in Glycoprotein Structures with Rosetta__, can be used as a guide to fixing these errors.You may also have noticed that the `inputs/glycans/Lex.pdb` file indicated in its `HETNAM` records that Glc1 was actually an N-acetylglucosamine (GlcNAc) with the indication `2-acetylamino-2-deoxy-`. This is optional and is helpful for human readability, but Rosetta only needs to know the base `ResidueType` of each sugar residue; specific `VariantType`s needed — and most sugar modifications are treated as `VariantType`s — are determined automatically from the atom names in the `HETATM` records for the residue. Anything after the comma is ignored.Print out the `Pose` to see how the `FoldTree` is defined.
###Code
# YOUR CODE HERE
raise NotImplementedError()
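# One possible solution (kept as a comment so the exercise stands):
# print(Lex)  # printing the Pose displays its FoldTree, chains, and residues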
###Output
_____no_output_____
###Markdown
Note the `CHEMICAL` `Edge` (`-2`). This is Rosetta’s way of indicating a branch backbone connection. Unlike a standard `POLYMER` `Edge` (`-1`), this one tells you which atoms are involved.Print out the sequence of each chain. Now print out information about each residue in the Pose to see which `VariantType`s and `ResidueProperty`s are assigned to each. What are the three `VariantType`s of residue 1? (A possible sketch for these printing exercises is included at the end of the next cell.)Output the various torsion angles and make sure that you understand to which angles they correspond. Can you see now why φ and ψ are defined the way they are? If they were defined as in AA residues, they would not have unique definitions, since GlcNAc is a branch point. A monosaccharide can have multiple children, but it can never have more than a single parent.Note that for this oligosaccharide χ3(1) is equivalent to ψ(3) and χ4(1) is equivalent to ψ(2). Make sure that you understand why!
###Code
Lex.chi(3, 1), Lex.psi(3)
Lex.chi(4, 1), Lex.psi(2)
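# A possible sketch for the printing exercises above (our addition):
for i in range(1, Lex.num_chains() + 1):
    print(Lex.chain_sequence(i))  # sequence of each chain
print(Lex.residue(1))  # residue info, including its VariantTypes and properties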
###Output
_____no_output_____
###Markdown
For chemically modified sugars, χ angles are redefined at the positions where substitution has occurred. For new χs that have come into existence from the addition of new atoms and bonds, new definitions are added to new indices. For example, for GlcN2Ac residue 1, χC2–N2–C′–Cα′ is accessed through `chi(7, 1)`.
###Code
Lex.chi(2, 1)
Lex.set_chi(2, 1, 180)
pm.apply(Lex)
Lex.chi(7, 1)
Lex.set_chi(7, 1, 0)
pm.apply(Lex)
###Output
_____no_output_____
###Markdown
Play around with getting and setting the various torsion angles for Lex. N- and O-Linked Glycans Branching does not have to occur at sugars; a glycan can be attached to the nitrogen of an ASN or the oxygen of a SER or THR. N-linked glycans themselves tend to be branched structures. We will cover more on linked glycan trees in the next tutorial through the `GlycanTreeSet` object, which is always present in a pose that has carbohydrates.
###Code
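# Load a protein carrying an N-linked 14-mer glycan, print the Pose, and print
# the sequence of each of its four chains (the protein plus the glycan branches).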
N_linked = pose_from_file('inputs/glycans/N-linked_14-mer_glycan.pdb')
pm.apply(N_linked)
print(N_linked)
for i in range(4): print(N_linked.chain_sequence(i + 1))
###Output
_____no_output_____
###Markdown
Which residue number is glycosylated above?
###Code
O_linked = pose_from_file('inputs/glycans/O_glycan.pdb')
pm.apply(O_linked)
###Output
_____no_output_____
###Markdown
Print `O_linked` and the sequence of each of its chains. `set_phi()` and `set_psi()` still work when a glycan is linked to a peptide. (Below, we return to the `N_linked` pose and use `pdb_info()` to help us select the residue that we want; in its `.pdb` file, the glycan begins at chain B.)
###Code
N_linked.set_phi(N_linked.pdb_info().pdb2pose("B", 1), 180)
pm.apply(N_linked)
###Output
_____no_output_____
###Markdown
Set ψ(B1) to 0° and ω(B1) to 90° and view the results in PyMOL. Notice that in this case ψ and ω affect the side-chain torsions (χs) of the asparagine residue. This is another case where there are multiple ways of both naming and accessing the same specific torsion angles.One can also create conjugated glycans from sequences if performed in steps, first creating the peptide portion by loading from a `.pdb` file or from sequence and then using the `glycosylate_pose()` function (which needs to be imported first). For example, to glycosylate an ASA peptide with a single glucose at position 2 of the peptide, we perform the following: Glycosylation by functionHere, we will glycosylate a simple peptide using the `glycosylate_pose()` function. In the next tutorial, we will use a Mover interface to this function.
###Code
peptide = pose_from_sequence('ASA')
pm.apply(peptide)
from pyrosetta.rosetta.core.pose.carbohydrates import glycosylate_pose, glycosylate_pose_by_file
glycosylate_pose(peptide, 2, 'Glcp')
pm.apply(peptide)
###Output
_____no_output_____
###Markdown
Here, we used the main function to glycosylate a pose. In the next tutorial, we will use a Mover interface to do so. It is also possible to glycosylate a pose with common glycans found in the database. These files end in the `.iupac` extension and are simply IUPAC sequences just as we have been using throughout this chapter.Here is a list of some common `.iupac` files.```
bisected_fucosylated_N-glycan_core.iupac
bisected_N-glycan_core.iupac
common_names.txt
core_1_O-glycan.iupac
core_2_O-glycan.iupac
core_3_O-glycan.iupac
core_4_O-glycan.iupac
core_5_O-glycan.iupac
core_6_O-glycan.iupac
core_7_O-glycan.iupac
core_8_O-glycan.iupac
fucosylated_N-glycan_core.iupac
high-mannose_N-glycan_core.iupac
hybrid_bisected_fucosylated_N-glycan_core.iupac
hybrid_bisected_N-glycan_core.iupac
hybrid_fucosylated_N-glycan_core.iupac
hybrid_N-glycan_core.iupac
man5.iupac
man9.iupac
N-glycan_core.iupac
```
###Code
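# Glycosylate Ser2 of a fresh ASA peptide using one of the database glycans;
# the name 'core_5_O-glycan' presumably resolves to core_5_O-glycan.iupac.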
peptide = pose_from_sequence('ASA'); pm.apply(peptide)
glycosylate_pose_by_file(peptide, 2, 'core_5_O-glycan')
pm.apply(peptide)
###Output
_____no_output_____ |
04_EDA_Examining Relationships_part4.ipynb | ###Markdown
Case Q→Q: Scatterplots Case Q→Q: Two Quantitative Variables Here again is the role-type classification table for framing our discussion about the relationship between two variables:  We are done with cases C→Q and C→C, and now we will move on to case Q→Q, where we examine the relationship between two quantitative variables. Scatterplot: Introduction In the previous two cases we had a categorical explanatory variable, and therefore exploring the relationship between the two variables was done by comparing the distribution of the response variable for each category of the explanatory variable:* In case C→Q we compared distributions of the quantitative response.* In case C→C we compared distributions of the categorical response.Case Q→Q is different in the sense that both variables (in particular the explanatory variable) are quantitative, and therefore, as you'll discover, this case will require a different kind of treatment and tools. Let's start with an example: Example: Highway Signs A Pennsylvania research firm conducted a study in which 30 drivers (of ages 18 to 82 years old) were sampled, and for each one, the maximum distance (in feet) at which he/she could read a newly designed sign was determined. The goal of this study was to explore the relationship between a driver's age and the maximum distance at which signs were legible, and then use the study's findings to improve safety for older drivers. (Reference: Utts and Heckard, Mind on Statistics (2002). Original source: Data collected by Last Resource, Inc, Bellfonte, PA.)Since the purpose of this study is to explore the effect of age on maximum legibility distance,* the **explanatory** variable is **Age**, and* the **response** variable is **Distance.**Here is what the raw data look like: Note that the data structure is such that for each individual (in this case driver 1....driver 30) we have a pair of values (in this case representing the driver's age and distance). We can therefore think about these data as 30 pairs of values: (18, 510), (32, 410), (55, 420), ... , (82, 360).The first step in exploring the relationship between driver age and sign legibility distance is to create an appropriate and informative graphical display. The appropriate graphical display for examining the relationship between two quantitative variables is the **scatterplot.** Here is how a scatterplot is constructed for our example:To create a scatterplot, each pair of values is plotted, so that the value of the explanatory variable (X) is plotted on the horizontal axis, and the value of the response variable (Y) is plotted on the vertical axis. In other words, each individual (driver, in our example) appears on the scatterplot as a single point whose X-coordinate is the value of the explanatory variable for that individual, and whose Y-coordinate is the value of the response variable. Here is an illustration:And here is the completed scatterplot: CommentIt is important to mention again that when creating a scatterplot, the explanatory variable should always be plotted on the horizontal X-axis, and the response variable should be plotted on the vertical Y-axis. If in a specific example we do not have a clear distinction between explanatory and response variables, each of the variables can be plotted on either axis. Interpreting the Scatterplot How do we explore the relationship between two quantitative variables using the scatterplot? 
What should we look at, or pay attention to?Recall that when we described the distribution of a single quantitative variable with a histogram, we described the overall pattern of the distribution (shape, center, spread) and any deviations from that pattern (outliers). We do the **same thing with the scatterplot.**The following figure summarizes this point:As the figure explains, when describing the **overall pattern** of the relationship we look at its direction, form and strength.* The **direction** of the relationship can be positive, negative, or neither:A **positive (or increasing) relationship** means that an increase in one of the variables is associated with an increase in the other.A **negative (or decreasing) relationship** means that an increase in one of the variables is associated with a decrease in the other.Not all relationships can be classified as either positive or negative.* The **form** of the relationship is its general shape. When identifying the form, we try to find the simplest way to describe the shape of the scatterplot. There are many possible forms. Here are a couple that are quite common:Relationships with a **linear** form are most simply described as points scattered about a line:Relationships with a **curvilinear form** are most simply described as points dispersed around the same curved line:There are many other possible forms for the relationship between two quantitative variables, but linear and curvilinear forms are quite common and easy to identify. Another form-related pattern that we should be aware of is clusters in the data:* The **strength** of the relationship is determined by how closely the data follow the form of the relationship. Let's look, for example, at the following two scatterplots displaying positive, linear relationships:The strength of the relationship is determined by how closely the data points follow the form. We can see that in the top scatterplot the data points follow the linear pattern quite closely. This is an example of a strong relationship. In the bottom scatterplot, the points also follow the linear pattern, but much less closely, and therefore we can say that the relationship is weaker. In general, though, assessing the strength of a relationship just by looking at the scatterplot is quite problematic, and we need a numerical measure to help us with that. We will discuss that later in this section.Data points that **deviate from the pattern** of the relationship are called **outliers.** We will see several examples of outliers during this section. Two outliers are illustrated in the scatterplot below:Let's go back now to our example, and use the scatterplot to examine the relationship between the age of the driver and the maximum sign legibility distance. Here is the scatterplot:The direction of the relationship is **negative**, which makes sense in context, since as you get older your eyesight weakens, and in particular older drivers tend to be able to read signs only at lesser distances. An arrow drawn over the scatterplot illustrates the negative direction of this relationship:The form of the relationship seems to be **linear.** Notice how the points tend to be scattered about the line. Although, as we mentioned earlier, it is problematic to assess the strength without a numerical measure, the relationship appears to be **moderately strong**, as the data is fairly tightly scattered about the line. 
Finally, all the data points seem to "obey" the pattern—there **do not appear to be any outliers.** Scatterplot: Examples Example: Average Gestation Period The average gestation period, or time of pregnancy, of an animal is closely related to its longevity (the length of its lifespan.) Data on the average gestation period and longevity (in captivity) of 40 different species of animals have been examined, with the purpose of examining how the gestation period of an animal is related to (or can be predicted from) its longevity. (Source: Rossman and Chance. (2001). Workshop statistics: Discovery with data and Minitab. Original source: The 1993 world almanac and book of facts).Here is the scatterplot of the data.What can we learn about the relationship from the scatterplot? The direction of the relationship is positive, which means that animals with longer life spans tend to have longer times of pregnancy (this makes intuitive sense). An arrow drawn over the scatterplot below illustrates this:The form of the relationship is again essentially **linear.** There appears to be **one outlier**, indicating an animal with an exceptionally long longevity and gestation period. (This animal happens to be the elephant.) Note that while this outlier definitely deviates from the rest of the data in terms of its magnitude, it **does** follow the direction of the data.**Comment:** Another feature of the scatterplot that is worth observing is how the variation in gestation increases as longevity increases. This fact is illustrated by the two red vertical lines at the bottom left part of the graph. Note that the gestation periods for animals who live 5 years range from about 30 days up to about 120 days. On the other hand, the gestation period of animals who live 12 years varies much more, and ranges from about 60 days up to more than 400 days. Example: Fuel Usage As a third example, consider the relationship between the average amount of fuel used (in liters) to drive a fixed distance in a car (100 kilometers), and the speed at which the car is driven (in kilometers per hour). (Source: Moore and McCabe, (2003). Introduction to the practice of statistics. Original source: T.N. Lam. (1985). "Estimating fuel consumption for engine size," Journal of Transportation Engineering, vol. 111)The data describe a relationship that decreases and then increases—the amount of fuel consumed decreases rapidly to a minimum for a car driving 60 kilometers per hour, and then increases gradually for speeds exceeding 60 kilometers per hour. This suggests that the speed at which a car economizes on fuel the most is about 60 km/h. This forms a curvilinear relationship that seems to be very strong, as the observations seem to perfectly fit the curve. Finally, there do not appear to be any outliers. Learn By Doing A study examined how the percentage of participants who completed a survey is related to the monetary incentive that researchers promised to participants. 
Consider the relationship between these two quantitative variables, displayed in the scatterplot below.**Question-** What is the direction of this relationship?* positive * negative* neither positive nor negative **Your Answer-** **Question-** In the context of this example, when researchers promised higher payments, what happened to the percentage of participants who completed the survey?* increased * remained the same* decreased **Your Answer-** **Question-** What is the form of the relationship?* linear* curvilinear * neither linear nor curvilinear **Your Answer-** **Question-** Based on the form of the relationship as it is illustrated above, is this a weak relationship or a strong relationship?* strong * weak **Your Answer-** CommentThe example in the last activity provides a great opportunity for interpretation of the form of the relationship in context. Recall that the example examined how the percentage of participants who completed a survey is affected by the monetary incentive that researchers promised to participants. Here again is the scatterplot that displays the relationship:The positive relationship definitely makes sense in context, but what is the interpretation of the curvilinear form in the context of the problem? How can we explain (in context) the fact that the relationship seems at first to be increasing very rapidly, but then slows down? The following graph will help us:Note that when the monetary incentive increases from 0 to 10 dollars, the percentage of returned surveys increases sharply—an increase of 27% (from 16% to 43%). However, the same 10-dollar increase, from 30 to 40 dollars, doesn't result in the same dramatic increase in the percentage of returned surveys—it results in an increase of only 3 percentage points (from 54% to 57%). The form displays the phenomenon of "diminishing returns"—a return rate that after a certain point fails to increase proportionately to additional outlays of investment. Ten dollars is worth more to people relative to 0 dollars than relative to 30 dollars. Scatterplot: Labeled A Labeled Scatterplot In certain circumstances, it may be reasonable to indicate different subgroups or categories within the data on the scatterplot, by labeling each subgroup differently. The result is called a labeled scatterplot, and can provide further insight about the relationship we are exploring. Here is an example. Example: Hot DogsRecall the hot dog example from case C→Q, in which 54 major hot dog brands were examined. In this study, both the **calorie content** and the **sodium level** of each brand was recorded, as well as the **type** of hot dog: beef, poultry, and meat (mostly pork and beef, but up to 15% poultry meat). In this example, we will explore the relationship between the sodium level and calorie content of hot dogs, and we will label the three different types of hot dogs to create a labeled scatterplot. (Before the hot dog video, the next cell gives a minimal matplotlib sketch, ours rather than the course's, of how a basic scatterplot like the highway-signs one can be drawn by hand.)
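###Code
# A minimal sketch (our addition, not part of the course) of drawing a
# scatterplot by hand, using the first four (age, distance) pairs quoted in the
# highway-signs example above; the variable names are ours.
import matplotlib.pyplot as plt
ages = [18, 32, 55, 82]           # explanatory variable (X): driver age in years
distances = [510, 410, 420, 360]  # response variable (Y): legibility distance in feet
plt.scatter(ages, distances)
plt.xlabel("Driver Age (years)")
plt.ylabel("Maximum Legibility Distance (feet)")
plt.show()
###Output
_____no_output_____
###Markdown
Pandas' `DataFrame.plot.scatter()`, used later in this notebook, wraps the same matplotlib call. Now, the hot dog example: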
###Code
from IPython.display import Video
Video("../img/hotdog_video.mp4")
###Output
_____no_output_____
###Markdown
Scenario: Height, Weight, and Gender In this activity, we look at height and weight data that were collected from 57 males and 24 females, and use the data to explore how the weight of a person is related to (or affected by) his or her height. This implies that height will be our explanatory variable and weight will be our response variable. We will then look at gender, and see how labeling this third variable contributes to our understanding of the form of the relationship.Here is the scatterplot to examine how weight is related to height, ignoring gender. Learn By Doing **Question-** What is the direction of the relationship between height and weight?* Positive * Negative **Your Answer-** So far we have studied the relationship between height and weight for all of the males and females together. It may be interesting to examine whether the relationship between height and weight is different for males and females. To visualize the effect of the third variable, gender, we will indicate in the scatterplot which observations are males (blue) and which are females (red).  **Question-** True or false? The weight of females increases with an increase in height as quickly as the weight of males increases with a corresponding increase in height.* True* False **Your Answer-** Let's Summarize* The relationship between two quantitative variables is visually displayed using the scatterplot, where each point represents an individual. We always plot the explanatory variable on the horizontal X axis, and the response variable on the vertical Y axis.* When we explore a relationship using the scatterplot, we should describe the overall pattern of the relationship and any deviations from that pattern. To describe the overall pattern consider the direction, form and strength of the relationship. Assessing the strength just by looking at the scatterplot can be problematic; using a numerical measure to determine strength will be discussed later in this course.* Adding labels to the scatterplot that indicate different groups or categories within the data might help us get more insight about the relationship we are exploring. Exercise: Creating a Scatterplot In this exercise, we will:* learn how to create a scatterplot.* use the scatterplot to examine the relationship between two quantitative variables.* learn how to create a labeled scatterplot.* use the labeled scatterplot to better understand the form of a relationship.In this activity, we look at height and weight data that were collected from 57 males and 24 females, and use the data to explore how the weight of a person is related to (or affected by) his or her height. This implies that height will be our explanatory variable and weight will be our response variable. We will then look at gender, and see how labeling this third variable contributes to our understanding of the form of the relationship.
###Code
height = pd.read_excel('../Data/height.xls')
height.head()
###Output
_____no_output_____
###Markdown
Where 0 = male, 1 = female.
###Code
height.plot.scatter(x='height', y='weight', figsize=(8,6))
plt.xlabel("Height (inches)")
plt.ylabel("Weight (lbs)")
plt.show()
###Output
_____no_output_____
###Markdown
**Question-** Describe the relationship between the height and weight of the subjects suggested by the data. Consider the pattern of the data—mainly direction and form—and any deviations from this pattern, such as outliers. **Your Answer-** So far we have studied the relationship between height and weight for all of the males and females together. It may be interesting to examine whether the relationship between height and weight is different for males and females. To visualize the effect of the third variable, gender, we will indicate in the scatterplot which observations are males and which are females.
###Code
fig, ax = plt.subplots(figsize=(8,6))
sc = ax.scatter(x='height', y='weight', c='gender', data=height)
_ = ax.set(xlabel = 'Height(inches)', ylabel = 'weight(lbs)')
_ = ax.legend(*sc.legend_elements())
# easier way to do it using seaborn library
import seaborn as sns
sns.lmplot(x='height', y='weight', hue='gender',height=7,fit_reg=False, data=height);
###Output
_____no_output_____ |
ML_HelloWorld.ipynb | ###Markdown
After training with Random Forest, the accuracy was 100%. Mind you, this is something of a 'perfect' dataset, hence the high accuracy. We also did not do any data preprocessing, as none was needed. Most real-world datasets are messy and need some level of preprocessing and feature engineering.
###Code
from sklearn.linear_model import LogisticRegression  # assumed imported earlier in the notebook; repeated so this cell runs standalone
from sklearn.metrics import accuracy_score
clf = LogisticRegression()
clf.fit(X_train,y_train)
predictions = clf.predict(X_test)
print('LG_Accuracy: {}'.format(accuracy_score(y_test,predictions)))
###Output
LG_Accuracy: 0.9565217391304348
|
functions/google_colab_clickhouse_window_functions.ipynb | ###Markdown
Window functions reference
###Code
# https://clickhouse.tech/docs/en/sql-reference/window-functions/
client.execute('set allow_experimental_window_functions = 1;')
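# The SET above is only needed while window functions are still experimental in
# the ClickHouse build in use. The query below computes a running total of
# `amount` within each (city, manager) partition, ordered by date.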
sql = '''SELECT
date,
city,
manager,
amount,
sum(amount) over w as amount_cum_sum
FROM test
WINDOW w AS
(PARTITION BY city,manager
ORDER BY date,city, manager
ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW);'''
select_clickhouse(sql)
###Output
_____no_output_____ |
notebook-text/paraphrase-distilroberta-base-v1-margin-0.7.ipynb | ###Markdown
Shopee Training paraphrase-distilroberta-base-v1
###Code
import sys
import time
import datetime
start_time = time.time()
print(datetime.datetime.now())
import os
import gc
import math
import random
from tqdm import tqdm
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import GroupKFold
from sklearn.neighbors import NearestNeighbors
import torch
from torch import nn
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel
import warnings
warnings.filterwarnings('ignore')
TRAIN_CSV = '../input/shopee-product-matching/train.csv'
class CFG:
compute_cv = True # set False to train model for submission
### BERT
bert_model_name = '../input/bertmodel/paraphrase-distilroberta-base-v1'
max_length = 128
### ArcFace
scale = 30
margin = 0.7
fc_dim = 768
seed = 412
classes = 11014
# groupkfold
N_SPLITS = 5
TEST_FOLD = 0
VALID_FOLD = 1
### Training
batch_size = 16
accum_iter = 1 # 1 if use_sam = True
epochs = 8
min_save_epoch = epochs // 3
use_sam = True # SAM (Sharpness-Aware Minimization for Efficiently Improving Generalization)
use_amp = True # Automatic Mixed Precision
num_workers = 2 # On Windows, set 0 or export train_fn and TitleDataset as .py files for faster training.
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print(device)
### NearestNeighbors
bert_knn = 50
bert_knn_threshold = 0.4 # Cosine distance threshold
### GradualWarmupSchedulerV2(lr_start -> lr_max -> lr_min)
scheduler_params = {
"lr_start": 7.5e-6,
"lr_max": 1e-4,
"lr_min": 2.74e-5, # 1.5e-5,
}
multiplier = scheduler_params['lr_max'] / scheduler_params['lr_start']
eta_min = scheduler_params['lr_min'] # last minimum learning rate
freeze_epo = 0
warmup_epo = 2
cosine_epo = epochs - freeze_epo - warmup_epo
### save_model_path
save_model_path = f"./{bert_model_name.rsplit('/', 1)[-1]}_epoch{epochs}-bs{batch_size}x{accum_iter}.pt"
def seed_everything(seed):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = True # set True to be faster
seed_everything(CFG.seed)
###Output
_____no_output_____
###Markdown
Classes and Functions
###Code
### Dataset
class TitleDataset(torch.utils.data.Dataset):
def __init__(self, df, text_column, label_column):
texts = df[text_column]
self.labels = df[label_column].values
self.titles = []
for title in texts:
title = title.encode('utf-8').decode("unicode_escape")
title = title.encode('ascii', 'ignore').decode("unicode_escape")
title = title.lower()
self.titles.append(title)
def __len__(self):
return len(self.titles)
def __getitem__(self, idx):
text = self.titles[idx]
label = torch.tensor(self.labels[idx])
return text, label
### SAM Optimizer 2020/1/16
# https://github.com/davda54/sam/blob/main/sam.py
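# In short: SAM takes two passes per batch. first_step() perturbs the weights to
# w + e(w), an approximate local loss maximum; second_step() restores w and applies
# the base optimizer's update using the gradients computed at the perturbed point.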
class SAM(torch.optim.Optimizer):
def __init__(self, params, base_optimizer, rho=0.05, **kwargs):
assert rho >= 0.0, f"Invalid rho, should be non-negative: {rho}"
defaults = dict(rho=rho, **kwargs)
super(SAM, self).__init__(params, defaults)
self.base_optimizer = base_optimizer(self.param_groups, **kwargs)
self.param_groups = self.base_optimizer.param_groups
@torch.no_grad()
def first_step(self, zero_grad=False):
grad_norm = self._grad_norm()
for group in self.param_groups:
scale = group["rho"] / (grad_norm + 1e-12)
for p in group["params"]:
if p.grad is None: continue
e_w = p.grad * scale.to(p)
p.add_(e_w) # climb to the local maximum "w + e(w)"
self.state[p]["e_w"] = e_w
if zero_grad: self.zero_grad()
@torch.no_grad()
def second_step(self, zero_grad=False):
for group in self.param_groups:
for p in group["params"]:
if p.grad is None: continue
p.sub_(self.state[p]["e_w"]) # get back to "w" from "w + e(w)"
self.base_optimizer.step() # do the actual "sharpness-aware" update
if zero_grad: self.zero_grad()
@torch.no_grad()
def step(self, closure=None):
assert closure is not None, "Sharpness Aware Minimization requires closure, but it was not provided"
closure = torch.enable_grad()(closure) # the closure should do a full forward-backward pass
self.first_step(zero_grad=True)
closure()
self.second_step()
def _grad_norm(self):
shared_device = self.param_groups[0]["params"][0].device # put everything on the same device, in case of model parallelism
norm = torch.norm(
torch.stack([
p.grad.norm(p=2).to(shared_device)
for group in self.param_groups for p in group["params"]
if p.grad is not None
]),
p=2
)
return norm
### GradualWarmupScheduler
# https://github.com/ildoonet/pytorch-gradual-warmup-lr
from torch.optim.lr_scheduler import _LRScheduler
from torch.optim.lr_scheduler import ReduceLROnPlateau
class GradualWarmupScheduler(_LRScheduler):
""" Gradually warm-up(increasing) learning rate in optimizer.
Proposed in 'Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour'.
Args:
optimizer (Optimizer): Wrapped optimizer.
multiplier: target learning rate = base lr * multiplier if multiplier > 1.0. if multiplier = 1.0, lr starts from 0 and ends up with the base_lr.
total_epoch: target learning rate is reached at total_epoch, gradually
after_scheduler: after target_epoch, use this scheduler(eg. ReduceLROnPlateau)
"""
def __init__(self, optimizer, multiplier, total_epoch, after_scheduler=None):
self.multiplier = multiplier
if self.multiplier < 1.:
            raise ValueError('multiplier should be greater than or equal to 1.')
self.total_epoch = total_epoch
self.after_scheduler = after_scheduler
self.finished = False
super(GradualWarmupScheduler, self).__init__(optimizer)
def get_lr(self):
if self.last_epoch > self.total_epoch:
if self.after_scheduler:
if not self.finished:
self.after_scheduler.base_lrs = [base_lr * self.multiplier for base_lr in self.base_lrs]
self.finished = True
return self.after_scheduler.get_last_lr()
return [base_lr * self.multiplier for base_lr in self.base_lrs]
if self.multiplier == 1.0:
return [base_lr * (float(self.last_epoch) / self.total_epoch) for base_lr in self.base_lrs]
else:
return [base_lr * ((self.multiplier - 1.) * self.last_epoch / self.total_epoch + 1.) for base_lr in self.base_lrs]
def step_ReduceLROnPlateau(self, metrics, epoch=None):
if epoch is None:
epoch = self.last_epoch + 1
self.last_epoch = epoch if epoch != 0 else 1 # ReduceLROnPlateau is called at the end of epoch, whereas others are called at beginning
if self.last_epoch <= self.total_epoch:
warmup_lr = [base_lr * ((self.multiplier - 1.) * self.last_epoch / self.total_epoch + 1.) for base_lr in self.base_lrs]
for param_group, lr in zip(self.optimizer.param_groups, warmup_lr):
param_group['lr'] = lr
else:
if epoch is None:
self.after_scheduler.step(metrics, None)
else:
self.after_scheduler.step(metrics, epoch - self.total_epoch)
def step(self, epoch=None, metrics=None):
if type(self.after_scheduler) != ReduceLROnPlateau:
if self.finished and self.after_scheduler:
if epoch is None:
self.after_scheduler.step(None)
else:
self.after_scheduler.step(epoch - self.total_epoch)
self._last_lr = self.after_scheduler.get_last_lr()
else:
return super(GradualWarmupScheduler, self).step(epoch)
else:
self.step_ReduceLROnPlateau(metrics, epoch)
### GradualWarmupSchedulerV2
class GradualWarmupSchedulerV2(GradualWarmupScheduler):
def __init__(self, optimizer, multiplier, total_epoch, after_scheduler=None):
super(GradualWarmupSchedulerV2, self).__init__(optimizer, multiplier, total_epoch, after_scheduler)
def get_lr(self):
if self.last_epoch > self.total_epoch:
if self.after_scheduler:
if not self.finished:
self.after_scheduler.base_lrs = [base_lr * self.multiplier for base_lr in self.base_lrs]
self.finished = True
return self.after_scheduler.get_lr()
return [base_lr * self.multiplier for base_lr in self.base_lrs]
if self.multiplier == 1.0:
return [base_lr * (float(self.last_epoch) / self.total_epoch) for base_lr in self.base_lrs]
else:
return [base_lr * ((self.multiplier - 1.) * self.last_epoch / self.total_epoch + 1.) for base_lr in self.base_lrs]
### Train one epoch
def train_fn(model, data_loader, optimizer, scheduler, use_sam, accum_iter, epoch, device, use_amp):
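    # One epoch of training. Handles SAM (two forward/backward passes per batch),
    # AMP autocasting, and gradient accumulation every `accum_iter` mini-batches;
    # the LR scheduler is stepped once at the end of the epoch.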
model.train()
if use_amp:
scaler = torch.cuda.amp.GradScaler()
fin_loss = 0.0
tk = tqdm(data_loader, desc = "Training epoch: " + str(epoch+1), ncols=100)
for t, (texts, labels) in enumerate(tk):
texts = list(texts)
if use_sam:
if use_amp:
with torch.cuda.amp.autocast():
_, loss = model(texts, labels)
loss.mean().backward()
optimizer.first_step(zero_grad=True)
fin_loss += loss.item()
with torch.cuda.amp.autocast():
_, loss_second = model(texts, labels)
loss_second.mean().backward()
optimizer.second_step(zero_grad=True)
optimizer.zero_grad()
else:
_, loss = model(texts, labels)
loss.mean().backward()
optimizer.first_step(zero_grad=True)
fin_loss += loss.item()
_, loss_second = model(texts, labels)
loss_second.mean().backward()
optimizer.second_step(zero_grad=True)
optimizer.zero_grad()
else: # if use_sam == False
if use_amp:
with torch.cuda.amp.autocast():
_, loss = model(texts, labels)
scaler.scale(loss).backward()
fin_loss += loss.item()
# mini-batch accumulation
if (t + 1) % accum_iter == 0:
scaler.step(optimizer)
scaler.update()
optimizer.zero_grad()
else:
_, loss = model(texts, labels)
loss.backward()
fin_loss += loss.item()
# mini-batch accumulation
if (t + 1) % accum_iter == 0:
optimizer.step()
optimizer.zero_grad()
tk.set_postfix({'loss' : '%.6f' %float(fin_loss/(t+1)), 'LR' : optimizer.param_groups[0]['lr']})
scheduler.step()
return model, fin_loss / len(data_loader)
### Validation
def getMetric(col):
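    # Returns a row-wise scorer: the F1 between the predicted posting_ids in
    # column `col` and the ground-truth `target` group of that row.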
def f1score(row):
n = len(np.intersect1d(row.target, row[col]))
return 2 * n / (len(row.target) + len(row[col]))
return f1score
def get_bert_embeddings(df, column, model, chunk=32):
model.eval()
bert_embeddings = torch.zeros((df.shape[0], 768)).to(CFG.device)
for i in tqdm(list(range(0, df.shape[0], chunk)) + [df.shape[0]-chunk], desc="get_bert_embeddings", ncols=80):
titles = []
for title in df[column][i : i + chunk].values:
try:
title = title.encode('utf-8').decode("unicode_escape")
title = title.encode('ascii', 'ignore').decode("unicode_escape")
except:
pass
#title = text_punctuation(title)
title = title.lower()
titles.append(title)
with torch.no_grad():
if CFG.use_amp:
with torch.cuda.amp.autocast():
model_output = model(titles)
else:
model_output = model(titles)
bert_embeddings[i : i + chunk] = model_output
del model, titles, model_output
gc.collect()
torch.cuda.empty_cache()
return bert_embeddings
def get_neighbors(df, embeddings, knn=50, threshold=0.0):
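    # Fit k-NN in cosine-distance space, then, for each item, keep every neighbor
    # closer than `threshold` and return their posting_ids as the predicted matches.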
model = NearestNeighbors(n_neighbors=knn, metric='cosine')
model.fit(embeddings)
distances, indices = model.kneighbors(embeddings)
preds = []
for k in range(embeddings.shape[0]):
idx = np.where(distances[k,] < threshold)[0]
ids = indices[k,idx]
posting_ids = df['posting_id'].iloc[ids].values
preds.append(posting_ids)
del model, distances, indices
gc.collect()
return preds
### ArcFace
class ArcMarginProduct(nn.Module):
def __init__(self, in_features, out_features, scale=30.0, margin=0.50, easy_margin=False, ls_eps=0.0):
super(ArcMarginProduct, self).__init__()
self.in_features = in_features
self.out_features = out_features
self.scale = scale
self.margin = margin
self.ls_eps = ls_eps # label smoothing
self.weight = nn.Parameter(torch.FloatTensor(out_features, in_features))
nn.init.xavier_uniform_(self.weight)
self.easy_margin = easy_margin
self.cos_m = math.cos(margin)
self.sin_m = math.sin(margin)
self.th = math.cos(math.pi - margin)
self.mm = math.sin(math.pi - margin) * margin
self.criterion = nn.CrossEntropyLoss()
def forward(self, input, label):
# --------------------------- cos(theta) & phi(theta) ---------------------------
if CFG.use_amp:
cosine = F.linear(F.normalize(input), F.normalize(self.weight)).float() # if CFG.use_amp
else:
cosine = F.linear(F.normalize(input), F.normalize(self.weight))
sine = torch.sqrt(1.0 - torch.pow(cosine, 2))
phi = cosine * self.cos_m - sine * self.sin_m
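        # phi = cos(theta + m) by the angle-addition formula: penalizing the target
        # class by margin m in angle space tightens intra-class clusters.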
if self.easy_margin:
phi = torch.where(cosine > 0, phi, cosine)
else:
phi = torch.where(cosine > self.th, phi, cosine - self.mm)
# --------------------------- convert label to one-hot ---------------------------
one_hot = torch.zeros(cosine.size(), device=CFG.device)
one_hot.scatter_(1, label.view(-1, 1).long(), 1)
if self.ls_eps > 0:
one_hot = (1 - self.ls_eps) * one_hot + self.ls_eps / self.out_features
output = (one_hot * phi) + ((1.0 - one_hot) * cosine)
output *= self.scale
return output, self.criterion(output,label)
### BERT
# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] # First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
return sum_embeddings / sum_mask
class ShopeeBertModel(nn.Module):
def __init__(
self,
n_classes = CFG.classes,
model_name = CFG.bert_model_name,
fc_dim = CFG.fc_dim,
margin = CFG.margin,
scale = CFG.scale,
use_fc = True
):
super(ShopeeBertModel,self).__init__()
print('Building Model Backbone for {} model'.format(model_name))
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.backbone = AutoModel.from_pretrained(model_name).to(CFG.device)
in_features = 768
self.use_fc = use_fc
if use_fc:
self.dropout = nn.Dropout(p=0.0)
self.classifier = nn.Linear(in_features, fc_dim)
self.bn = nn.BatchNorm1d(fc_dim)
self._init_params()
in_features = fc_dim
self.final = ArcMarginProduct(
in_features,
n_classes,
scale = scale,
margin = margin,
easy_margin = False,
ls_eps = 0.0
)
def _init_params(self):
nn.init.xavier_normal_(self.classifier.weight)
nn.init.constant_(self.classifier.bias, 0)
nn.init.constant_(self.bn.weight, 1)
nn.init.constant_(self.bn.bias, 0)
def forward(self, texts, labels=torch.tensor([0])):
features = self.extract_features(texts)
if self.training:
logits = self.final(features, labels.to(CFG.device))
return logits
else:
return features
def extract_features(self, texts):
encoding = self.tokenizer(texts, padding=True, truncation=True,
max_length=CFG.max_length, return_tensors='pt').to(CFG.device)
input_ids = encoding['input_ids']
attention_mask = encoding['attention_mask']
embedding = self.backbone(input_ids, attention_mask=attention_mask)
x = mean_pooling(embedding, attention_mask)
if self.use_fc and self.training:
x = self.dropout(x)
x = self.classifier(x)
x = self.bn(x)
return x
###Output
_____no_output_____
###Markdown
Setup
###Code
### Create Dataloader
print("Compute CV =", CFG.compute_cv)
df = pd.read_csv(TRAIN_CSV)
df['target'] = df.label_group.map(df.groupby('label_group').posting_id.agg('unique').to_dict())
labelencoder= LabelEncoder()
df['label_group'] = labelencoder.fit_transform(df['label_group'])
gkf = GroupKFold(n_splits=CFG.N_SPLITS)
df['fold'] = -1
for i, (train_idx, valid_idx) in enumerate(gkf.split(X=df, groups=df['label_group'])):
df.loc[valid_idx, 'fold'] = i
train_df = df[df['fold']!=CFG.TEST_FOLD].reset_index(drop=True)
train_df = train_df[train_df['fold']!=CFG.VALID_FOLD].reset_index(drop=True)
valid_df = df[df['fold']==CFG.VALID_FOLD].reset_index(drop=True)
test_df = df[df['fold']==CFG.TEST_FOLD].reset_index(drop=True)
# force label_group to be integers from 0 to (n_class - 1)
train_df['label_group'] = labelencoder.fit_transform(train_df['label_group'])
print("train_df length =", len(train_df))
print("train_df classes =", len(train_df['label_group'].unique()))
print("valid_df length =", len(valid_df))
print("valid_df classes =", len(valid_df['label_group'].unique()))
print("test_df length =", len(test_df))
print("test_df classes =", len(test_df['label_group'].unique()))
train_dataset = TitleDataset(train_df, 'title', 'label_group')
train_dataloader = torch.utils.data.DataLoader(
train_dataset,
batch_size = CFG.batch_size,
num_workers = CFG.num_workers,
pin_memory = True,
shuffle = True,
drop_last = True
)
valid_dataset = TitleDataset(valid_df, 'title', 'label_group')
valid_dataloader = torch.utils.data.DataLoader(
valid_dataset,
batch_size = CFG.batch_size,
num_workers = CFG.num_workers,
pin_memory = True,
shuffle = False,
drop_last = False
)
test_dataset = TitleDataset(test_df, 'title', 'label_group')
test_dataloader = torch.utils.data.DataLoader(
test_dataset,
batch_size = CFG.batch_size,
num_workers = CFG.num_workers,
pin_memory = True,
shuffle = False,
drop_last = False
)
### Create Model
model = ShopeeBertModel()
model.to(CFG.device);
### Create Optimizer
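# Assumed rationale, noted for clarity: the pretrained backbone trains at the
# base LR, while the freshly initialized heads (classifier, bn, final) use 2x it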
optimizer_grouped_parameters = [
{'params': model.backbone.parameters(), 'lr': CFG.scheduler_params['lr_start']},
{'params': model.classifier.parameters(), 'lr': CFG.scheduler_params['lr_start'] * 2},
{'params': model.bn.parameters(), 'lr': CFG.scheduler_params['lr_start'] * 2},
{'params': model.final.parameters(), 'lr': CFG.scheduler_params['lr_start'] * 2},
]
if CFG.use_sam:
from transformers import AdamW
optimizer = AdamW
optimizer = SAM(optimizer_grouped_parameters, optimizer)
else:
from transformers import AdamW
optimizer = AdamW(optimizer_grouped_parameters)
print("lr_start")
print("-" * 30)
for i in range(len(optimizer.param_groups)):
print('Parameter Group ' + str(i) + ' :', optimizer.param_groups[i]["lr"])
### Create Scheduler
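# Warm up the LR for CFG.warmup_epo epochs, then hand off to cosine annealing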
scheduler_cosine = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=CFG.cosine_epo-2, eta_min=CFG.eta_min, last_epoch=-1)
scheduler = GradualWarmupSchedulerV2(optimizer, multiplier=CFG.multiplier, total_epoch=CFG.warmup_epo,
after_scheduler=scheduler_cosine)
###Output
_____no_output_____
###Markdown
Training and Validation
###Code
print("Training epochs =", CFG.epochs)
max_f1_valid = 0.
for epoch in range(CFG.epochs):
model, avg_loss_train = train_fn(model, train_dataloader, optimizer, scheduler,
CFG.use_sam, CFG.accum_iter, epoch, CFG.device, CFG.use_amp)
valid_embeddings = get_bert_embeddings(valid_df, 'title', model)
valid_predictions = get_neighbors(valid_df, valid_embeddings.detach().cpu().numpy(),
knn=CFG.bert_knn if len(df) > 3 else 3, threshold=CFG.bert_knn_threshold)
valid_df['oof'] = valid_predictions
valid_df['f1'] = valid_df.apply(getMetric('oof'), axis=1)
valid_f1 = valid_df.f1.mean()
print('Valid f1 score =', valid_f1)
if (epoch >= CFG.min_save_epoch) and (valid_f1 > max_f1_valid):
print(f"[{datetime.datetime.now()}] Valid f1 score improved. Saving model weights to {CFG.save_model_path}")
max_f1_valid = valid_f1
torch.save(model.state_dict(), CFG.save_model_path)
###Output
Training epoch: 1: 100%|█████████████| 1284/1284 [02:25<00:00, 8.85it/s, loss=28.540265, LR=7.5e-6]
get_bert_embeddings: 100%|████████████████████| 216/216 [00:05<00:00, 39.09it/s]
Training epoch: 2: 0%| | 0/1284 [00:00<?, ?it/s]
###Markdown
Best threshold Search
###Code
print("Searching best threshold...")
search_space = np.arange(10, 50, 1)
model.load_state_dict(torch.load(CFG.save_model_path, map_location=CFG.device))
valid_embeddings = get_bert_embeddings(valid_df, 'title', model)
best_f1_valid = 0.
best_threshold = 0.
for i in search_space:
threshold = i / 100
valid_predictions = get_neighbors(valid_df, valid_embeddings.detach().cpu().numpy(),
knn=CFG.bert_knn if len(df) > 3 else 3, threshold=threshold)
valid_df['oof'] = valid_predictions
valid_df['f1'] = valid_df.apply(getMetric('oof'), axis=1)
valid_f1 = valid_df.f1.mean()
print(f"threshold = {threshold} -> f1 score = {valid_f1}")
if (valid_f1 > best_f1_valid):
best_f1_valid = valid_f1
best_threshold = threshold
print("Best threshold =", best_threshold)
print("Best f1 score =", best_f1_valid)
BEST_THRESHOLD = best_threshold
print("Searching best knn...")
search_space = np.arange(40, 80, 2)
best_f1_valid = 0.
BEST_KNN = 0  # the loop below assigns BEST_KNN directly
for knn in search_space:
valid_predictions = get_neighbors(valid_df, valid_embeddings.detach().cpu().numpy(),
knn=knn, threshold=BEST_THRESHOLD)
valid_df['oof'] = valid_predictions
valid_df['f1'] = valid_df.apply(getMetric('oof'), axis=1)
valid_f1 = valid_df.f1.mean()
print(f"knn = {knn} -> f1 score = {valid_f1}")
if (valid_f1 > best_f1_valid):
best_f1_valid = valid_f1
BEST_KNN = knn
print("Best knn =", BEST_KNN)
print("Best f1 score =", best_f1_valid)
###Output
Searching best knn...
knn = 40 -> f1 score = 0.790898212795869
knn = 42 -> f1 score = 0.791063628416331
knn = 44 -> f1 score = 0.7912190053429567
knn = 46 -> f1 score = 0.7913496904646189
knn = 48 -> f1 score = 0.7914202262299262
knn = 50 -> f1 score = 0.7914130729375783
knn = 52 -> f1 score = 0.7913965421151309
knn = 54 -> f1 score = 0.7914119403482549
knn = 56 -> f1 score = 0.7914119403482549
knn = 58 -> f1 score = 0.7914119403482549
knn = 60 -> f1 score = 0.7914119403482549
knn = 62 -> f1 score = 0.7914119403482549
knn = 64 -> f1 score = 0.7914119403482549
knn = 66 -> f1 score = 0.7914119403482549
knn = 68 -> f1 score = 0.7914119403482549
knn = 70 -> f1 score = 0.7914119403482549
knn = 72 -> f1 score = 0.7914119403482549
knn = 74 -> f1 score = 0.7914119403482549
knn = 76 -> f1 score = 0.7914119403482549
knn = 78 -> f1 score = 0.7914119403482549
Best knn = 48
Best f1 score = 0.7914202262299262
###Markdown
Find Test F1 Score
###Code
test_embeddings = get_bert_embeddings(test_df, 'title', model)
test_predictions = get_neighbors(test_df, test_embeddings.detach().cpu().numpy(),
knn=BEST_KNN, threshold=BEST_THRESHOLD)
test_df['oof'] = test_predictions
test_df['f1'] = test_df.apply(getMetric('oof'), axis=1)
test_f1 = test_df.f1.mean()
print("Test f1 score =", test_f1)
time_elapsed = time.time() - start_time
print('Elapsed time: {:.0f} min {:.0f} sec'.format(time_elapsed // 60, time_elapsed % 60))
print(datetime.datetime.now())
###Output
Elapsed time: 24 min 14 sec
2021-05-19 07:53:04.109102
|
notebooks/01-titatic-data-load.ipynb | ###Markdown
Extracting Titanic disaster dataset from Kaggle
###Code
!pip install python-dotenv
from dotenv import load_dotenv, find_dotenv
# search for .env in all directories
dotenv_path = find_dotenv()
#Load env file entries to the environment variable
load_dotenv(dotenv_path)
# Extracting environment variable using os.environ.get
import os
KAGGLE_USERNAME = os.environ.get("KAGGLE_USERNAME")
print (KAGGLE_USERNAME)
# imports
import requests
from requests import session
import os
from dotenv import load_dotenv, find_dotenv
# payload for post
payload = {
'action' : 'login',
'username' : os.environ.get("KAGGLE_USERNAME"),
'password' : os.environ.get("KAGGLE_PASSWORD")
}
# URL for downloading the dataset
url="https://www.kaggle.com/c/titanic/download/train.csv"
# Session
with session() as c:
# post request to Kaggle
c.post("https://www.kaggle.com/account/login",data=payload)
# get request
response = c.get(url)
print (response.text)
!pip install --upgrade kaggle
from IPython.display import display_html
def restartkernel():
display_html("<script>Jupyter.notebook.kernel.restart()</script>",raw=True)
restartkernel()
###Output
_____no_output_____
###Markdown
Read API Key
###Code
import json
import os
api_file_path = os.path.join(os.path.pardir,'kaggle.json')
with open(api_file_path) as f:
kaggle_token = json.load(f)
# kaggle authentication
os.environ["KAGGLE_USERNAME"] = kaggle_token['username']
os.environ["KAGGLE_KEY"] = kaggle_token['key']
from kaggle.api.kaggle_api_extended import KaggleApi
# create kaggle API object
api = KaggleApi()
# authenticate
api.authenticate()
# raw data paths
raw_data_path = os.path.join(os.path.pardir,'data','raw')
# download files to the ..data/raw folder
api.competition_download_file(competition = 'titanic',file_name = 'train.csv', path = raw_data_path ,force = True)
api.competition_download_file(competition = 'titanic',file_name = 'test.csv', path = raw_data_path, force = True)
train_data_path = os.path.join(raw_data_path, 'train.csv')
!head -5 $train_data_path
!ls -l ../data/raw
###Output
total 88
-rw-r--r-- 1 mahethot Domain Users 28629 Jun 5 00:52 test.csv
-rw-r--r-- 1 mahethot Domain Users 61194 Jun 5 00:52 train.csv
###Markdown
Building a script file
###Code
get_raw_data_script_file = os.path.join(os.path.pardir,'src','data','get_raw_data.py')
%%writefile $get_raw_data_script_file
import json
import os
import logging
# Root directory
project_dir = os.path.join(os.path.dirname(__file__),os.pardir,os.pardir)
# read API token and create Env variable
api_file_path = os.path.join(project_dir, 'kaggle.json')
with open(api_file_path) as f:
kaggle_token = json.load(f)
# Env variables
os.environ["KAGGLE_USERNAME"] = kaggle_token['username']
os.environ["KAGGLE_PASSWORD"] = kaggle_token['key']
from kaggle.api.kaggle_api_extended import KaggleApi
def main(project_dir):
'''
main method
'''
# get logger
logger = logging.getLogger(__name__)
logger.info('getting raw data')
#file name
train_file_name = 'train.csv'
test_file_name = 'test.csv'
# file paths
raw_data_path = os.path.join(project_dir,'data','raw')
# extract data
api = KaggleApi()
api.authenticate()
api.competition_download_file(competition = 'titanic', file_name = train_file_name, path = raw_data_path, force = True)
api.competition_download_file(competition = 'titanic', file_name = test_file_name , path = raw_data_path, force = True)
logger.info('downloaded raw training and test data')
if __name__ == '__main__' :
# setup logger
log_fmt = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
logging.basicConfig(level=logging.INFO, format=log_fmt)
main(project_dir)
!python $get_raw_data_script_file
###Output
Downloading train.csv to ..\src\data\..\..\data\raw |
TensorFlow_LeNet_AvgPooling_All_Sigmoid.ipynb | ###Markdown
###Code
# -U: Upgrade all packages to the newest available version
!pip install -U d2l
from d2l import tensorflow as d2l
import tensorflow as tf
from tensorflow.distribute import MirroredStrategy, OneDeviceStrategy
from matplotlib import pyplot
from keras.datasets import fashion_mnist
###Output
Requirement already up-to-date: d2l in /usr/local/lib/python3.6/dist-packages (0.15.1)
Requirement already satisfied, skipping upgrade: pandas in /usr/local/lib/python3.6/dist-packages (from d2l) (1.1.5)
Requirement already satisfied, skipping upgrade: matplotlib in /usr/local/lib/python3.6/dist-packages (from d2l) (3.2.2)
Requirement already satisfied, skipping upgrade: numpy in /usr/local/lib/python3.6/dist-packages (from d2l) (1.19.4)
Requirement already satisfied, skipping upgrade: jupyter in /usr/local/lib/python3.6/dist-packages (from d2l) (1.0.0)
Requirement already satisfied, skipping upgrade: python-dateutil>=2.7.3 in /usr/local/lib/python3.6/dist-packages (from pandas->d2l) (2.8.1)
Requirement already satisfied, skipping upgrade: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas->d2l) (2018.9)
Requirement already satisfied, skipping upgrade: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->d2l) (2.4.7)
Requirement already satisfied, skipping upgrade: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib->d2l) (0.10.0)
Requirement already satisfied, skipping upgrade: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->d2l) (1.3.1)
Requirement already satisfied, skipping upgrade: ipykernel in /usr/local/lib/python3.6/dist-packages (from jupyter->d2l) (4.10.1)
Requirement already satisfied, skipping upgrade: ipywidgets in /usr/local/lib/python3.6/dist-packages (from jupyter->d2l) (7.5.1)
Requirement already satisfied, skipping upgrade: notebook in /usr/local/lib/python3.6/dist-packages (from jupyter->d2l) (5.3.1)
Requirement already satisfied, skipping upgrade: jupyter-console in /usr/local/lib/python3.6/dist-packages (from jupyter->d2l) (5.2.0)
Requirement already satisfied, skipping upgrade: nbconvert in /usr/local/lib/python3.6/dist-packages (from jupyter->d2l) (5.6.1)
Requirement already satisfied, skipping upgrade: qtconsole in /usr/local/lib/python3.6/dist-packages (from jupyter->d2l) (5.0.1)
Requirement already satisfied, skipping upgrade: six>=1.5 in /usr/local/lib/python3.6/dist-packages (from python-dateutil>=2.7.3->pandas->d2l) (1.15.0)
Requirement already satisfied, skipping upgrade: tornado>=4.0 in /usr/local/lib/python3.6/dist-packages (from ipykernel->jupyter->d2l) (5.1.1)
Requirement already satisfied, skipping upgrade: jupyter-client in /usr/local/lib/python3.6/dist-packages (from ipykernel->jupyter->d2l) (5.3.5)
Requirement already satisfied, skipping upgrade: ipython>=4.0.0 in /usr/local/lib/python3.6/dist-packages (from ipykernel->jupyter->d2l) (5.5.0)
Requirement already satisfied, skipping upgrade: traitlets>=4.1.0 in /usr/local/lib/python3.6/dist-packages (from ipykernel->jupyter->d2l) (4.3.3)
Requirement already satisfied, skipping upgrade: widgetsnbextension~=3.5.0 in /usr/local/lib/python3.6/dist-packages (from ipywidgets->jupyter->d2l) (3.5.1)
Requirement already satisfied, skipping upgrade: nbformat>=4.2.0 in /usr/local/lib/python3.6/dist-packages (from ipywidgets->jupyter->d2l) (5.0.8)
Requirement already satisfied, skipping upgrade: jupyter-core>=4.4.0 in /usr/local/lib/python3.6/dist-packages (from notebook->jupyter->d2l) (4.7.0)
Requirement already satisfied, skipping upgrade: ipython-genutils in /usr/local/lib/python3.6/dist-packages (from notebook->jupyter->d2l) (0.2.0)
Requirement already satisfied, skipping upgrade: Send2Trash in /usr/local/lib/python3.6/dist-packages (from notebook->jupyter->d2l) (1.5.0)
Requirement already satisfied, skipping upgrade: terminado>=0.8.1 in /usr/local/lib/python3.6/dist-packages (from notebook->jupyter->d2l) (0.9.1)
Requirement already satisfied, skipping upgrade: jinja2 in /usr/local/lib/python3.6/dist-packages (from notebook->jupyter->d2l) (2.11.2)
Requirement already satisfied, skipping upgrade: pygments in /usr/local/lib/python3.6/dist-packages (from jupyter-console->jupyter->d2l) (2.6.1)
Requirement already satisfied, skipping upgrade: prompt-toolkit<2.0.0,>=1.0.0 in /usr/local/lib/python3.6/dist-packages (from jupyter-console->jupyter->d2l) (1.0.18)
Requirement already satisfied, skipping upgrade: defusedxml in /usr/local/lib/python3.6/dist-packages (from nbconvert->jupyter->d2l) (0.6.0)
Requirement already satisfied, skipping upgrade: pandocfilters>=1.4.1 in /usr/local/lib/python3.6/dist-packages (from nbconvert->jupyter->d2l) (1.4.3)
Requirement already satisfied, skipping upgrade: testpath in /usr/local/lib/python3.6/dist-packages (from nbconvert->jupyter->d2l) (0.4.4)
Requirement already satisfied, skipping upgrade: entrypoints>=0.2.2 in /usr/local/lib/python3.6/dist-packages (from nbconvert->jupyter->d2l) (0.3)
Requirement already satisfied, skipping upgrade: bleach in /usr/local/lib/python3.6/dist-packages (from nbconvert->jupyter->d2l) (3.2.1)
Requirement already satisfied, skipping upgrade: mistune<2,>=0.8.1 in /usr/local/lib/python3.6/dist-packages (from nbconvert->jupyter->d2l) (0.8.4)
Requirement already satisfied, skipping upgrade: pyzmq>=17.1 in /usr/local/lib/python3.6/dist-packages (from qtconsole->jupyter->d2l) (20.0.0)
Requirement already satisfied, skipping upgrade: qtpy in /usr/local/lib/python3.6/dist-packages (from qtconsole->jupyter->d2l) (1.9.0)
Requirement already satisfied, skipping upgrade: simplegeneric>0.8 in /usr/local/lib/python3.6/dist-packages (from ipython>=4.0.0->ipykernel->jupyter->d2l) (0.8.1)
Requirement already satisfied, skipping upgrade: pickleshare in /usr/local/lib/python3.6/dist-packages (from ipython>=4.0.0->ipykernel->jupyter->d2l) (0.7.5)
Requirement already satisfied, skipping upgrade: decorator in /usr/local/lib/python3.6/dist-packages (from ipython>=4.0.0->ipykernel->jupyter->d2l) (4.4.2)
Requirement already satisfied, skipping upgrade: setuptools>=18.5 in /usr/local/lib/python3.6/dist-packages (from ipython>=4.0.0->ipykernel->jupyter->d2l) (51.0.0)
Requirement already satisfied, skipping upgrade: pexpect; sys_platform != "win32" in /usr/local/lib/python3.6/dist-packages (from ipython>=4.0.0->ipykernel->jupyter->d2l) (4.8.0)
Requirement already satisfied, skipping upgrade: jsonschema!=2.5.0,>=2.4 in /usr/local/lib/python3.6/dist-packages (from nbformat>=4.2.0->ipywidgets->jupyter->d2l) (2.6.0)
Requirement already satisfied, skipping upgrade: ptyprocess; os_name != "nt" in /usr/local/lib/python3.6/dist-packages (from terminado>=0.8.1->notebook->jupyter->d2l) (0.6.0)
Requirement already satisfied, skipping upgrade: MarkupSafe>=0.23 in /usr/local/lib/python3.6/dist-packages (from jinja2->notebook->jupyter->d2l) (1.1.1)
Requirement already satisfied, skipping upgrade: wcwidth in /usr/local/lib/python3.6/dist-packages (from prompt-toolkit<2.0.0,>=1.0.0->jupyter-console->jupyter->d2l) (0.2.5)
Requirement already satisfied, skipping upgrade: packaging in /usr/local/lib/python3.6/dist-packages (from bleach->nbconvert->jupyter->d2l) (20.8)
Requirement already satisfied, skipping upgrade: webencodings in /usr/local/lib/python3.6/dist-packages (from bleach->nbconvert->jupyter->d2l) (0.5.1)
###Markdown
Convolutional Neural Network LeNet. This is an implementation of the classical LeNet convolutional neural network, originally designed for handwritten digit recognition. The basic architecture is used for some experimentation: we may change AveragePooling to MaxPooling and Sigmoid to ReLU activations, and it is interesting to check how this changes the results. I use some code from d2l.ai : http://d2l.ai/ There is also some intermediate code experimenting with TensorFlow objects. In this version we use Average Pooling and ReLU. ReLU as the activation function for the Conv2D layers ruined the model (same result for Max Pooling), but 'sigmoid' for the Conv2D layers with 'relu' for the Dense layers gives good results (that model is in a separate notebook; a sketch of this variant appears after the LN definition in the code cell below).
###Code
def LN():
return tf.keras.models.Sequential([
tf.keras.layers.Conv2D(filters=6, kernel_size=5, activation='relu',
padding='same'),
tf.keras.layers.AvgPool2D(pool_size=2, strides=2),
tf.keras.layers.Conv2D(filters=16, kernel_size=5,
activation='relu'),
tf.keras.layers.AvgPool2D(pool_size=2, strides=2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(120, activation='relu'),
tf.keras.layers.Dense(84, activation='relu'),
tf.keras.layers.Dense(10)])
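# A hedged sketch (not part of the original run): the notes above say that
# 'sigmoid' on the Conv2D layers combined with 'relu' on the Dense layers
# trained well, so that variant would look like this:
def LN_sigmoid_conv():
    return tf.keras.models.Sequential([
        tf.keras.layers.Conv2D(filters=6, kernel_size=5, activation='sigmoid',
                               padding='same'),
        tf.keras.layers.AvgPool2D(pool_size=2, strides=2),
        tf.keras.layers.Conv2D(filters=16, kernel_size=5,
                               activation='sigmoid'),
        tf.keras.layers.AvgPool2D(pool_size=2, strides=2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(120, activation='relu'),
        tf.keras.layers.Dense(84, activation='relu'),
        tf.keras.layers.Dense(10)])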
X = tf.zeros((1,28,28,1))
for layer in LN().layers:
X = layer(X)
print(layer.__class__.__name__,' output shape \t', X.shape)
###Output
Conv2D output shape (1, 28, 28, 6)
AveragePooling2D output shape (1, 14, 14, 6)
Conv2D output shape (1, 10, 10, 16)
AveragePooling2D output shape (1, 5, 5, 16)
Flatten output shape (1, 400)
Dense output shape (1, 120)
Dense output shape (1, 84)
Dense output shape (1, 10)
###Markdown
Data We will use the FASHION-MNIST dataset.
###Code
(train_X, train_y), (test_X, test_y) = fashion_mnist.load_data()
print('Train : {} , {}'.format(train_X.shape, train_y.shape))
print('Test : {} , {}'.format(test_X.shape, test_y.shape))
for i in range(9):
pyplot.subplot(3,3,1 + i)
pyplot.imshow(train_X[i], cmap = pyplot.get_cmap('Greys'))
pyplot.show()
for i in range(9):
pyplot.subplot(3,3,1 + i)
pyplot.imshow(train_X[i], cmap = pyplot.get_cmap('Spectral'))
pyplot.show()
for i in range(9):
pyplot.subplot(3,3,1 + i)
pyplot.imshow(train_X[i], cmap = pyplot.get_cmap('gray'))
pyplot.show()
def reshape_cast(X,y):
# scale to [0,1] interval, add dim=3 -> will be single colour channel
return (tf.expand_dims(X,axis=3)/255, tf.cast(y,dtype='int32'))
def load_data(batch_size):
return (
tf.data.Dataset.from_tensor_slices(reshape_cast(*(train_X,train_y)))
.batch(batch_size).shuffle(len(train_X)),
tf.data.Dataset.from_tensor_slices(reshape_cast(*(test_X,test_y)))
.batch(batch_size)
)
batch_size = 256
train_iter, test_iter = load_data(batch_size=batch_size)
# from d2l.ai
import time
import numpy as np
class Timer:
"""Record multiple running times."""
def __init__(self):
self.times = []
self.start()
def start(self):
"""Start the timer."""
self.tik = time.time()
def stop(self):
"""Stop the timer and record the time in a list."""
self.times.append(time.time() - self.tik)
return self.times[-1]
def avg(self):
"""Return the average time."""
return sum(self.times) / len(self.times)
def sum(self):
"""Return the sum of time."""
return sum(self.times)
def cumsum(self):
"""Return the accumulated time."""
return np.array(self.times).cumsum().tolist()
# from d2l.ai
class Accumulator:
"""For accumulating sums over `n` variables."""
def __init__(self, n):
self.data = [0.0] * n
def add(self, *args):
self.data = [a + float(b) for a, b in zip(self.data, args)]
def reset(self):
self.data = [0.0] * len(self.data)
def __getitem__(self, idx):
return self.data[idx]
# from d2l.ai
def try_gpu(i=0):
"""Return gpu(i) if exists, otherwise return cpu()."""
if len(tf.config.experimental.list_physical_devices('GPU')) >= i + 1:
return tf.device(f'/GPU:{i}')
return tf.device('/CPU:0')
# from d2l.ai
class TrainCallback(tf.keras.callbacks.Callback):
"""A callback to visiualize the training progress."""
def __init__(self, net, train_iter, test_iter, num_epochs, device_name):
self.timer = d2l.Timer()
self.animator = d2l.Animator(
xlabel='epoch', xlim=[1, num_epochs], legend=[
'train loss', 'train acc', 'test acc'])
self.net = net
self.train_iter = train_iter
self.test_iter = test_iter
self.num_epochs = num_epochs
self.device_name = device_name
def on_epoch_begin(self, epoch, logs=None):
self.timer.start()
def on_epoch_end(self, epoch, logs):
self.timer.stop()
test_acc = self.net.evaluate(
self.test_iter, verbose=0, return_dict=True)['accuracy']
metrics = (logs['loss'], logs['accuracy'], test_acc)
self.animator.add(epoch + 1, metrics)
if epoch == self.num_epochs - 1:
batch_size = next(iter(self.train_iter))[0].shape[0]
num_examples = batch_size * tf.data.experimental.cardinality(
self.train_iter).numpy()
print(f'loss {metrics[0]:.3f}, train acc {metrics[1]:.3f}, '
f'test acc {metrics[2]:.3f}')
print(f'{num_examples / self.timer.avg():.1f} examples/sec on '
f'{str(self.device_name)}')
def train_ch6(net_fn, train_iter, test_iter, num_epochs, lr,
device=d2l.try_gpu()):
"""Train a model with a GPU (defined in Chapter 6)."""
device_name = device._device_name
strategy = tf.distribute.OneDeviceStrategy(device_name)
with strategy.scope():
optimizer = tf.keras.optimizers.SGD(learning_rate=lr)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
net = net_fn()
net.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])
callback = TrainCallback(net, train_iter, test_iter, num_epochs,
device_name)
net.fit(train_iter, epochs=num_epochs, verbose=0, callbacks=[callback])
return net
lr, num_epochs = 0.9, 20
train_ch6(LN, train_iter, test_iter, num_epochs, lr)
###Output
loss 2.304, train acc 0.099, test acc 0.100
81650.4 examples/sec on /GPU:0
|
Notebook-Class-Assignment-Answers/Step-5-Evaluate-Model-Task-1-Regression-Class-Assignment.ipynb | ###Markdown
Step 5 – Evaluate Model - Task 1. Evaluate Regression Model - CLASS ASSIGNMENT Load Libraries
###Code
!pip install sklearn --upgrade
import pandas as pd
import numpy as np
from datetime import date
from datetime import timedelta
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
from sklearn import metrics
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
###Output
/Users/asathi/anaconda3/lib/python3.7/site-packages/sklearn/linear_model/least_angle.py:35: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
eps=np.finfo(np.float).eps,
/Users/asathi/anaconda3/lib/python3.7/site-packages/sklearn/linear_model/least_angle.py:597: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
eps=np.finfo(np.float).eps, copy_X=True, fit_path=True,
/Users/asathi/anaconda3/lib/python3.7/site-packages/sklearn/linear_model/least_angle.py:836: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
eps=np.finfo(np.float).eps, copy_X=True, fit_path=True,
/Users/asathi/anaconda3/lib/python3.7/site-packages/sklearn/linear_model/least_angle.py:862: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
eps=np.finfo(np.float).eps, positive=False):
/Users/asathi/anaconda3/lib/python3.7/site-packages/sklearn/linear_model/least_angle.py:1074: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
max_n_alphas=1000, n_jobs=1, eps=np.finfo(np.float).eps,
/Users/asathi/anaconda3/lib/python3.7/site-packages/sklearn/linear_model/least_angle.py:1306: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
max_n_alphas=1000, n_jobs=1, eps=np.finfo(np.float).eps,
/Users/asathi/anaconda3/lib/python3.7/site-packages/sklearn/linear_model/least_angle.py:1442: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
eps=np.finfo(np.float).eps, copy_X=True, positive=False):
/Users/asathi/anaconda3/lib/python3.7/site-packages/sklearn/linear_model/randomized_l1.py:152: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
precompute=False, eps=np.finfo(np.float).eps,
/Users/asathi/anaconda3/lib/python3.7/site-packages/sklearn/linear_model/randomized_l1.py:318: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
eps=np.finfo(np.float).eps, random_state=None,
/Users/asathi/anaconda3/lib/python3.7/site-packages/sklearn/linear_model/randomized_l1.py:575: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
eps=4 * np.finfo(np.float).eps, n_jobs=1,
###Markdown
Set up environment and connect to Google Drive
###Code
using_Google_colab = False
using_Anaconda_on_Mac_or_Linux = True
using_Anaconda_on_windows = False
if using_Google_colab:
from google.colab import drive
drive.mount('/content/drive')
###Output
_____no_output_____
###Markdown
EM5.1 Open Notebook, upload the county Analytics Base Table, and find maximum incremental cases Jan-Jun Upload State level Data
###Code
if using_Google_colab:
abt_by_county = pd.read_csv('/content/drive/MyDrive/COVID_Project/output/abt_by_county.csv',
parse_dates=['Date'])
if using_Anaconda_on_Mac_or_Linux:
abt_by_county = pd.read_csv('../output/abt_by_county.csv',
parse_dates=['Date'])
if using_Anaconda_on_windows:
abt_by_county = pd.read_csv(r'..\output\abt_by_county.csv',
parse_dates=['Date'])
abt_by_county
###Output
_____no_output_____
###Markdown
Filter maximum incremental cases in Jan - Jun: 1. Filter data for the first half 2. Attach state to county name 3. Find max incremental cases
###Code
abt_by_county_Incremental_Cases = abt_by_county[['Date', 'countyFIPS', 'County Name', 'State', 'Incremental Cases', 'population']]
abt_by_county_Incremental_Cases['County Name'] = abt_by_county_Incremental_Cases['County Name'] + " " + abt_by_county_Incremental_Cases['State']
abt_by_county_Incremental_Cases
max_cases = abt_by_county_Incremental_Cases[abt_by_county_Incremental_Cases['Date'] < '2020-07-01']
max_cases = max_cases.groupby(['countyFIPS', 'County Name']).agg({'Incremental Cases': 'max',
'population': 'min'}).reset_index()
max_cases
###Output
_____no_output_____
###Markdown
EM5.2 Develop regression model for top 50 counties and create test data predictions Sort the data by population and select top 50 counties
###Code
max_cases_sorted = max_cases.sort_values(by=['population'], ascending=False)
top_50 = max_cases_sorted[:50]
top_50
x_50 = top_50['population'].values
y_50 = top_50['Incremental Cases'].values
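# sklearn expects a 2-D feature matrix, hence the reshape to (n_samples, 1)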
X_50 = x_50.reshape(-1,1)
X_train_50, X_test_50, y_train_50, y_test_50 = train_test_split(X_50, y_50,
test_size=0.2,
random_state=0)
regressor_50 = LinearRegression()
regressor_50.fit(X_train_50, y_train_50)
rsq = regressor_50.score(X_train_50, y_train_50)
print("Intercept: ",
regressor_50.intercept_,
"Coefficient: ", regressor_50.coef_,
"R-SQ: ", rsq)
###Output
Intercept: 268.06822989939155 Coefficient: [0.00033259] R-SQ: 0.37846875796843815
###Markdown
EM5.3 Evaluate regression model using test data results
###Code
y_predict_50 = regressor_50.predict(X_test_50)
fig, ax = plt.subplots(figsize=(17, 17))
plt.title('Regression on test data')
plt.ylabel('Incremental Cases')
plt.xlabel('Population')
#plt.yscale('log')
#plt.xscale('log')
ax.scatter(X_test_50, y_test_50, color='b')
ax.scatter(X_test_50, y_predict_50, color='r')
plt.grid(True)
plt.show()
mean_squared_error(y_test_50, y_predict_50)
r2_score(y_test_50, y_predict_50)
###Output
_____no_output_____
###Markdown
EM5.4 Extend evaluation to top 2,000 counties Now try the whole population
###Code
top_2000 = max_cases_sorted[:2000]
x_2000 = top_2000['population'].values
y_2000 = top_2000['Incremental Cases'].values
X_2000 = x_2000.reshape(-1,1)
X_train_2000, X_test_2000, y_train_2000, y_test_2000 = train_test_split(X_2000,
y_2000,
test_size=0.2,
random_state=0)
regressor_2000 = LinearRegression()
regressor_2000.fit(X_train_2000, y_train_2000)
rsq_2000 = regressor_2000.score(X_train_2000, y_train_2000)
print("Intercept: ",
regressor_2000.intercept_,
"Coefficient: ", regressor_2000.coef_,
"R-SQ: ", rsq_2000)
y_predict_2000 = regressor_2000.predict(X_test_2000)
fig, ax = plt.subplots(figsize=(17, 17))
plt.title('Regression on test data')
plt.ylabel('Incremental Cases')
plt.xlabel('Population')
plt.yscale('log')
plt.xscale('log')
ax.scatter(X_test_2000, y_test_2000, color='b')
ax.scatter(X_test_2000, y_predict_2000, color='r')
plt.grid(True)
plt.show()
mean_squared_error(y_test_2000, y_predict_2000)
r2_score(y_test_2000, y_predict_2000)
###Output
_____no_output_____
###Markdown
EM5.5 Conduct regression exercise for incremental cases by county for Jul-Dec and evaluate regression model for top 50 counties and extend to 2000 counties - CLASS ASSIGNMENT
###Code
max_cases = abt_by_county_Incremental_Cases[abt_by_county_Incremental_Cases['Date'] > '2020-07-01']
max_cases = max_cases.groupby(['countyFIPS', 'County Name']).agg({'Incremental Cases': 'max',
'population': 'min'}).reset_index()
max_cases
max_cases_sorted = max_cases.sort_values(by=['population'], ascending=False)
top_50 = max_cases_sorted[:50]
top_50
x_50 = top_50['population'].values
y_50 = top_50['Incremental Cases'].values
X_50 = x_50.reshape(-1,1)
X_train_50, X_test_50, y_train_50, y_test_50 = train_test_split(X_50, y_50,
test_size=0.2,
random_state=0)
regressor_50 = LinearRegression()
regressor_50.fit(X_train_50, y_train_50)
rsq = regressor_50.score(X_train_50, y_train_50)
print("Intercept: ",
regressor_50.intercept_,
"Coefficient: ", regressor_50.coef_,
"R-SQ: ", rsq)
y_predict_50 = regressor_50.predict(X_test_50)
fig, ax = plt.subplots(figsize=(17, 17))
plt.title('Regression on test data')
plt.ylabel('Incremental Cases')
plt.xlabel('Population')
#plt.yscale('log')
#plt.xscale('log')
ax.scatter(X_test_50, y_test_50, color='b')
ax.scatter(X_test_50, y_predict_50, color='r')
plt.grid(True)
plt.show()
mean_squared_error(y_test_50, y_predict_50)
r2_score(y_test_50, y_predict_50)
top_2000 = max_cases_sorted[:2000]
x_2000 = top_2000['population'].values
y_2000 = top_2000['Incremental Cases'].values
X_2000 = x_2000.reshape(-1,1)
X_train_2000, X_test_2000, y_train_2000, y_test_2000 = train_test_split(X_2000,
y_2000,
test_size=0.2,
random_state=0)
regressor_2000 = LinearRegression()
regressor_2000.fit(X_train_2000, y_train_2000)
rsq_2000 = regressor_2000.score(X_train_2000, y_train_2000)
print("Intercept: ",
regressor_2000.intercept_,
"Coefficient: ", regressor_2000.coef_,
"R-SQ: ", rsq_2000)
y_predict_2000 = regressor_2000.predict(X_test_2000)
fig, ax = plt.subplots(figsize=(17, 17))
plt.title('Regression on test data')
plt.ylabel('Incremental Cases')
plt.xlabel('Population')
plt.yscale('log')
plt.xscale('log')
ax.scatter(X_test_2000, y_test_2000, color='b')
ax.scatter(X_test_2000, y_predict_2000, color='r')
plt.grid(True)
plt.show()
mean_squared_error(y_test_2000, y_predict_2000)
r2_score(y_test_2000, y_predict_2000)
###Output
_____no_output_____
###Markdown
EM5.6 Create a model with the entire year for 50 and 2,000 counties for incremental cases and deaths. Compare regression results in EM5.3, 5.4 and 5.5. What was your finding? - CLASS ASSIGNMENT
###Code
max_cases = abt_by_county_Incremental_Cases[abt_by_county_Incremental_Cases['Date'] > '2020-01-01']
max_cases = max_cases.groupby(['countyFIPS', 'County Name']).agg({'Incremental Cases': 'max',
'population': 'min'}).reset_index()
max_cases
max_cases_sorted = max_cases.sort_values(by=['population'], ascending=False)
top_50 = max_cases_sorted[:50]
top_50
x_50 = top_50['population'].values
y_50 = top_50['Incremental Cases'].values
X_50 = x_50.reshape(-1,1)
X_train_50, X_test_50, y_train_50, y_test_50 = train_test_split(X_50, y_50,
test_size=0.2,
random_state=0)
regressor_50 = LinearRegression()
regressor_50.fit(X_train_50, y_train_50)
rsq = regressor_50.score(X_train_50, y_train_50)
print("Intercept: ",
regressor_50.intercept_,
"Coefficient: ", regressor_50.coef_,
"R-SQ: ", rsq)
y_predict_50 = regressor_50.predict(X_test_50)
fig, ax = plt.subplots(figsize=(17, 17))
plt.title('Regression on test data')
plt.ylabel('Incremental Cases')
plt.xlabel('Population')
#plt.yscale('log')
#plt.xscale('log')
ax.scatter(X_test_50, y_test_50, color='b')
ax.scatter(X_test_50, y_predict_50, color='r')
plt.grid(True)
plt.show()
mean_squared_error(y_test_50, y_predict_50)
r2_score(y_test_50, y_predict_50)
top_2000 = max_cases_sorted[:2000]
x_2000 = top_2000['population'].values
y_2000 = top_2000['Incremental Cases'].values
X_2000 = x_2000.reshape(-1,1)
X_train_2000, X_test_2000, y_train_2000, y_test_2000 = train_test_split(X_2000,
y_2000,
test_size=0.2,
random_state=0)
regressor_2000 = LinearRegression()
regressor_2000.fit(X_train_2000, y_train_2000)
rsq_2000 = regressor_2000.score(X_train_2000, y_train_2000)
print("Intercept: ",
regressor_2000.intercept_,
"Coefficient: ", regressor_2000.coef_,
"R-SQ: ", rsq_2000)
y_predict_2000 = regressor_2000.predict(X_test_2000)
fig, ax = plt.subplots(figsize=(17, 17))
plt.title('Regression on test data')
plt.ylabel('Incremental Cases')
plt.xlabel('Population')
plt.yscale('log')
plt.xscale('log')
ax.scatter(X_test_2000, y_test_2000, color='b')
ax.scatter(X_test_2000, y_predict_2000, color='r')
plt.grid(True)
plt.show()
mean_squared_error(y_test_2000, y_predict_2000)
r2_score(y_test_2000, y_predict_2000)
###Output
_____no_output_____ |
examples/reference/widgets/CrossSelector.ipynb | ###Markdown
The ``CrossSelector`` widget allows selecting multiple values from a list of options by moving items between two lists. It falls into the broad category of multi-option selection widgets that provide a compatible API and include the [``MultiSelect``](MultiSelect.ipynb), [``CheckBoxGroup``](CheckBoxGroup.ipynb) and [``CheckButtonGroup``](CheckButtonGroup.ipynb) widgets.For more information about listening to widget events and laying out widgets refer to the [widgets user guide](../../user_guide/Widgets.ipynb). Alternatively you can learn how to build GUIs by declaring parameters independently of any specific widgets in the [param user guide](../../user_guide/Param.ipynb). To express interactivity entirely using Javascript without the need for a Python server take a look at the [links user guide](../../user_guide/Param.ipynb). Parameters:For layout and styling related parameters see the [customization user guide](../../user_guide/Customization.ipynb). Core* **``definition_order``** (boolean, default=True): Whether to preserve definition order after filtering. Disable to allow the order of selection to define the order of the selected list.* **``filter_fn``** (function): The filter function applied when querying using the text fields, defaults to re.search. Function is two arguments, the query or pattern and the item label.* **``options``** (list or dict): List or dictionary of available options* **``value``** (boolean): Currently selected options Display* **``disabled``** (boolean): Whether the widget is editable* **``name``** (str): The title of the widget___ The ``CrossSelector`` is made up of a number of components: * Two lists for the unselected (left) and selected (right) option values* Filter boxes that allow using a regex to match options in the list of values below* Buttons to move values from the unselected to the selected list (``>>``) and vice versa (``<<``)
###Code
cross_selector = pn.widgets.CrossSelector(name='Fruits', value=['Apple', 'Pear'],
options=['Apple', 'Banana', 'Pear', 'Strawberry'])
cross_selector
###Output
_____no_output_____
###Markdown
``CrossSelector.value`` returns a list of the currently selected options:
###Code
cross_selector.value
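# A small assumed-usage sketch: like any Panel widget parameter, the selection
# can also be set programmatically by assigning to ``value``.
cross_selector.value = ['Banana', 'Strawberry']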
###Output
_____no_output_____ |
notebooks/09 - Accessing NCBIs Entrez databases.ipynb | ###Markdown
**Source of the materials**: Biopython cookbook (adapted) Status: Draft Accessing NCBI’s Entrez databases [Entrez Guidelines](Entrez Guidelines) [EInfo: Obtaining information about the Entrez databases](EInfo:-Obtaining-information-about-the-Entrez-databases) [ESearch: Searching the Entrez databases](ESearch:-Searching-the-Entrez-databases) [EPost: Uploading a list of identifiers](EPost:-Uploading-a-list-of-identifiers) [EFetch: Downloading full records from Entrez](EFetch:-Downloading-full-records-from-Entrez) [History and WebEnv](Using-the-history-and-WebEnv) [Specialized parsers](Specialized-parsers) [Examples](Examples) Entrez is a data retrieval system that provides users access to NCBI’s databases such as PubMed, GenBank, GEO, and many others. You can access Entrez from a web browser to manually enter queries, or you can use Biopython’s `Bio.Entrez` module for programmatic access to Entrez. The latter allows you for example to search PubMed or download GenBank records from within a Python script. The `Bio.Entrez` module makes use of the Entrez Programming Utilities (also known as EUtils), consisting of eight tools that are described in detail on NCBI’s page. Each of these tools corresponds to one Python function in the `Bio.Entrez` module, as described in the sections below. This module makes sure that the correct URL is used for the queries, and that not more than one request is made every three seconds, as required by NCBI. The output returned by the Entrez Programming Utilities is typically in XML format. To parse such output, you have several options: 1. Use `Bio.Entrez`’s parser to parse the XML output into a Python object; 2. Use the DOM (Document Object Model) parser in Python’s standard library; 3. Use the SAX (Simple API for XML) parser in Python’s standard library; 4. Read the XML output as raw text, and parse it by string searching and manipulation. For the DOM and SAX parsers, see the Python documentation. The parser in `Bio.Entrez` is discussed below. NCBI uses DTD (Document Type Definition) files to describe the structure of the information contained in XML files. Most of the DTD files used by NCBI are included in the Biopython distribution. The `Bio.Entrez` parser makes use of the DTD files when parsing an XML file returned by NCBI Entrez. Occasionally, you may find that the DTD file associated with a specific XML file is missing in the Biopython distribution. In particular, this may happen when NCBI updates its DTD files. If this happens, `Entrez.read` will show a warning message with the name and URL of the missing DTD file. The parser will proceed to access the missing DTD file through the internet, allowing the parsing of the XML file to continue. However, the parser is much faster if the DTD file is available locally. For this purpose, please download the DTD file from the URL in the warning message and place it in the directory `...site-packages/Bio/Entrez/DTDs`, containing the other DTD files. If you don’t have write access to this directory, you can also place the DTD file in `~/.biopython/Bio/Entrez/DTDs`, where `~` represents your home directory. Since this directory is read before the directory `...site-packages/Bio/Entrez/DTDs`, you can also put newer versions of DTD files there if the ones in `...site-packages/Bio/Entrez/DTDs` become outdated. Alternatively, if you installed Biopython from source, you can add the DTD file to the source code’s `Bio/Entrez/DTDs` directory, and reinstall Biopython. 
This will install the new DTD file in the correct location together with the other DTD files. The Entrez Programming Utilities can also generate output in other formats, such as the Fasta or GenBank file formats for sequence databases, or the MedLine format for the literature database, discussed in Section [Specialized parsers](Specialized-parsers). Entrez Guidelines Before using Biopython to access the NCBI’s online resources (via `Bio.Entrez` or some of the other modules), please read the [NCBI's Entrez User Requirements](http://www.ncbi.nlm.nih.gov/books/NBK25497/#chapter2.Usage_Guidelines_and_Requiremen). If the NCBI finds you are abusing their systems, they can and will ban your access! To paraphrase: - For any series of more than 100 requests, do this at weekends or outside USA peak times. This is up to you to obey. - Use the http://eutils.ncbi.nlm.nih.gov address, not the standard NCBI Web address. Biopython uses this web address. - Make no more than three requests every second (relaxed from at most one request every three seconds in early 2009). This is automatically enforced by Biopython. - Use the optional email parameter so the NCBI can contact you if there is a problem. You can either explicitly set this as a parameter with each call to Entrez (e.g. include email=“[email protected]” in the argument list), or you can set a global email address:
###Code
from Bio import Entrez
Entrez.email = "[email protected]"
###Output
_____no_output_____
###Markdown
Bio.Entrez will then use this email address with each call to Entrez. The example.com address is a reserved domain name specifically for documentation (RFC 2606). Please DO NOT use a random email – it’s better not to give an email at all. The email parameter will be mandatory from June 1, 2010. In case of excessive usage, NCBI will attempt to contact a user at the e-mail address provided prior to blocking access to the E-utilities. If you are using Biopython within some larger software suite, use the tool parameter to specify this. You can either explicitly set the tool name as a parameter with each call to Entrez (e.g. include tool=“MyLocalScript” in the argument list), or you can set a global tool name:
###Code
from Bio import Entrez
Entrez.tool = "MyLocalScript"
###Output
_____no_output_____
###Markdown
The tool parameter will default to Biopython. - For large queries, the NCBI also recommend using their session history feature (the WebEnv session cookie string, see Section [History and WebEnv](Using-the-history-and-WebEnv)). This is only slightly more complicated. In conclusion, be sensible with your usage levels. If you plan to download lots of data, consider other options. For example, if you want easy access to all the human genes, consider fetching each chromosome by FTP as a GenBank file, and importing these into your own BioSQL database (see Section \[sec:BioSQL\]). EInfo: Obtaining information about the Entrez databases - [einfo source](http://biopython.org/DIST/docs/api/Bio.Entrez-pysrc.html#einfo) EInfo provides field index term counts, last update, and available links for each of NCBI’s databases. In addition, you can use EInfo to obtain a list of all database names accessible through the Entrez utilities. The variable `result` now contains a list of databases in XML format:
###Code
from Bio import Entrez
Entrez.email = "[email protected]" # Always tell NCBI who you are
handle = Entrez.einfo()
result = handle.read()
print(result)
###Output
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE eInfoResult PUBLIC "-//NLM//DTD einfo 20130322//EN" "http://eutils.ncbi.nlm.nih.gov/eutils/dtd/20130322/einfo.dtd">
<eInfoResult>
<DbList>
<DbName>pubmed</DbName>
<DbName>protein</DbName>
<DbName>nuccore</DbName>
<DbName>nucleotide</DbName>
<DbName>nucgss</DbName>
<DbName>nucest</DbName>
<DbName>structure</DbName>
<DbName>genome</DbName>
<DbName>annotinfo</DbName>
<DbName>assembly</DbName>
<DbName>bioproject</DbName>
<DbName>biosample</DbName>
<DbName>blastdbinfo</DbName>
<DbName>books</DbName>
<DbName>cdd</DbName>
<DbName>clinvar</DbName>
<DbName>clone</DbName>
<DbName>gap</DbName>
<DbName>gapplus</DbName>
<DbName>grasp</DbName>
<DbName>dbvar</DbName>
<DbName>epigenomics</DbName>
<DbName>gene</DbName>
<DbName>gds</DbName>
<DbName>geoprofiles</DbName>
<DbName>homologene</DbName>
<DbName>medgen</DbName>
<DbName>mesh</DbName>
<DbName>ncbisearch</DbName>
<DbName>nlmcatalog</DbName>
<DbName>omim</DbName>
<DbName>orgtrack</DbName>
<DbName>pmc</DbName>
<DbName>popset</DbName>
<DbName>probe</DbName>
<DbName>proteinclusters</DbName>
<DbName>pcassay</DbName>
<DbName>biosystems</DbName>
<DbName>pccompound</DbName>
<DbName>pcsubstance</DbName>
<DbName>pubmedhealth</DbName>
<DbName>seqannot</DbName>
<DbName>snp</DbName>
<DbName>sra</DbName>
<DbName>taxonomy</DbName>
<DbName>unigene</DbName>
<DbName>gencoll</DbName>
<DbName>gtr</DbName>
</DbList>
</eInfoResult>
###Markdown
Since this is a fairly simple XML file, we could extract the information it contains simply by string searching. Using `Bio.Entrez`’s parser instead, we can directly parse this XML file into a Python object:
###Code
from Bio import Entrez
handle = Entrez.einfo()
record = Entrez.read(handle)
###Output
_____no_output_____
###Markdown
Now `record` is a dictionary with exactly one key:
###Code
record.keys()
###Output
_____no_output_____
###Markdown
The value stored in this key is the list of database names shown in the XML above:
###Code
record["DbList"]
###Output
_____no_output_____
###Markdown
For each of these databases, we can use EInfo again to obtain more information:
###Code
from Bio import Entrez
handle = Entrez.einfo(db="pubmed")
record = Entrez.read(handle)
record["DbInfo"]["Description"]
record['DbInfo'].keys()
handle = Entrez.einfo(db="pubmed")
record = Entrez.read(handle)
record["DbInfo"]["Description"]
record["DbInfo"]["Count"]
record["DbInfo"]["LastUpdate"]
###Output
_____no_output_____
###Markdown
Try `record["DbInfo"].keys()` for other information stored in this record. One of the most useful is a list of possible search fields for use with ESearch:
###Code
for field in record["DbInfo"]["FieldList"]:
print("%(Name)s, %(FullName)s, %(Description)s" % field)
###Output
ALL, All Fields, All terms from all searchable fields
UID, UID, Unique number assigned to publication
FILT, Filter, Limits the records
TITL, Title, Words in title of publication
WORD, Text Word, Free text associated with publication
MESH, MeSH Terms, Medical Subject Headings assigned to publication
MAJR, MeSH Major Topic, MeSH terms of major importance to publication
AUTH, Author, Author(s) of publication
JOUR, Journal, Journal abbreviation of publication
AFFL, Affiliation, Author's institutional affiliation and address
ECNO, EC/RN Number, EC number for enzyme or CAS registry number
SUBS, Supplementary Concept, CAS chemical name or MEDLINE Substance Name
PDAT, Date - Publication, Date of publication
EDAT, Date - Entrez, Date publication first accessible through Entrez
VOL, Volume, Volume number of publication
PAGE, Pagination, Page number(s) of publication
PTYP, Publication Type, Type of publication (e.g., review)
LANG, Language, Language of publication
ISS, Issue, Issue number of publication
SUBH, MeSH Subheading, Additional specificity for MeSH term
SI, Secondary Source ID, Cross-reference from publication to other databases
MHDA, Date - MeSH, Date publication was indexed with MeSH terms
TIAB, Title/Abstract, Free text associated with Abstract/Title
OTRM, Other Term, Other terms associated with publication
INVR, Investigator, Investigator
COLN, Author - Corporate, Corporate Author of publication
CNTY, Place of Publication, Country of publication
PAPX, Pharmacological Action, MeSH pharmacological action pre-explosions
GRNT, Grant Number, NIH Grant Numbers
MDAT, Date - Modification, Date of last modification
CDAT, Date - Completion, Date of completion
PID, Publisher ID, Publisher ID
FAUT, Author - First, First Author of publication
FULL, Author - Full, Full Author Name(s) of publication
FINV, Investigator - Full, Full name of investigator
TT, Transliterated Title, Words in transliterated title of publication
LAUT, Author - Last, Last Author of publication
PPDT, Print Publication Date, Date of print publication
EPDT, Electronic Publication Date, Date of Electronic publication
LID, Location ID, ELocation ID
CRDT, Date - Create, Date publication first accessible through Entrez
BOOK, Book, ID of the book that contains the document
ED, Editor, Section's Editor
ISBN, ISBN, ISBN
PUBN, Publisher, Publisher's name
AUCL, Author Cluster ID, Author Cluster ID
EID, Extended PMID, Extended PMID
DSO, DSO, Additional text from the summary
AUID, Author - Identifier, Author Identifier
PS, Subject - Personal Name, Personal Name as Subject
###Markdown
That’s a long list, but indirectly this tells you that for the PubMed database, you can do things like `Jones[AUTH]` to search the author field, or `Sanger[AFFL]` to restrict to authors at the Sanger Centre. This can be very handy - especially if you are not so familiar with a particular database. ESearch: Searching the Entrez databases To search any of these databases, we use `Bio.Entrez.esearch()`. For example, let’s search in PubMed for publications related to Biopython:
###Code
from Bio import Entrez
Entrez.email = "[email protected]" # Always tell NCBI who you are
handle = Entrez.esearch(db="pubmed", term="biopython")
record = Entrez.read(handle)
record["IdList"]
record
###Output
_____no_output_____
###Markdown
In this output, you see seven PubMed IDs (including 19304878 which is the PMID for the Biopython application), which can be retrieved by EFetch (see section [EFetch: Downloading full records from Entrez](EFetch:-Downloading-full-records-from-Entrez)). You can also use ESearch to search GenBank. Here we’ll do a quick search for the *matK* gene in *Cypripedioideae* orchids (see Section \[sec:entrez-einfo\] about EInfo for one way to find out which fields you can search in each Entrez database):
###Code
handle = Entrez.esearch(db="nucleotide", term="Cypripedioideae[Orgn] AND matK[Gene]")
record = Entrez.read(handle)
record["Count"]
record["IdList"]
###Output
_____no_output_____
###Markdown
Each of the IDs (126789333, 37222967, 37222966, …) is a GenBank identifier. See section [EFetch: Downloading full records from Entrez](EFetch:-Downloading-full-records-from-Entrez) for information on how to actually download these GenBank records. Note that instead of a species name like `Cypripedioideae[Orgn]`, you can restrict the search using an NCBI taxon identifier, here this would be `txid158330[Orgn]`. This isn’t currently documented on the ESearch help page - the NCBI explained this in reply to an email query. You can often deduce the search term formatting by playing with the Entrez web interface. For example, including `complete[prop]` in a genome search restricts to just completed genomes. As a final example, let’s get a list of computational journal titles:
###Code
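# Assumed sketch (not in the original cell): as noted in the markdown above,
# the orchid search can be restricted with an NCBI taxon identifier instead
# of the organism name, e.g.:
# handle = Entrez.esearch(db="nucleotide", term="txid158330[Orgn] AND matK[Gene]")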
# nlmcatalog
# handle = Entrez.esearch(db="nlmcatalog", term="computational")
# record = Entrez.read(handle)
# record["Count"]
handle = Entrez.esearch(db="nlmcatalog", term="biopython[Journal]", RetMax='20')
record = Entrez.read(handle)
print("{} computational Journals found".format(record["Count"]))
print("The first 20 are\n{}".format(record['IdList']))
###Output
0 computational Journals found
The first 20 are
[]
###Markdown
Again, we could use EFetch to obtain more information for each of thesejournal IDs.ESearch has many useful options — see the [ESearch helppage](http://www.ncbi.nlm.nih.gov/entrez/query/static/esearch_help.html)for more information.EPost: Uploading a list of identifiers------------------------------------EPost uploads a list of UIs for use in subsequent search strategies; seethe [EPost helppage](http://www.ncbi.nlm.nih.gov/entrez/query/static/epost_help.html)for more information. It is available from Biopython through the`Bio.Entrez.epost()` function.To give an example of when this is useful, suppose you have a long listof IDs you want to download using EFetch (maybe sequences, maybecitations – anything). When you make a request with EFetch your list ofIDs, the database etc, are all turned into a long URL sent to theserver. If your list of IDs is long, this URL gets long, and long URLscan break (e.g. some proxies don’t cope well).Instead, you can break this up into two steps, first uploading the listof IDs using EPost (this uses an “HTML post” internally, rather than an“HTML get”, getting round the long URL problem). With the historysupport, you can then refer to this long list of IDs, and download theassociated data with EFetch.Let’s look at a simple example to see how EPost works – uploading somePubMed identifiers:
###Code
from Bio import Entrez
Entrez.email = "[email protected]" # Always tell NCBI who you are
id_list = ["19304878", "18606172", "16403221", "16377612", "14871861", "14630660"]
print(Entrez.epost("pubmed", id=",".join(id_list)).read())
###Output
<?xml version="1.0"?>
<!DOCTYPE ePostResult PUBLIC "-//NLM//DTD ePostResult, 11 May 2002//EN" "http://www.ncbi.nlm.nih.gov/entrez/query/DTD/ePost_020511.dtd">
<ePostResult>
<QueryKey>1</QueryKey>
<WebEnv>NCID_1_242547791_130.14.22.215_9001_1452651583_567658845_0MetA0_S_MegaStore_F_1</WebEnv>
</ePostResult>
###Markdown
The returned XML includes two important strings, `QueryKey` and `WebEnv`, which together define your history session. You would extract these values for use with another Entrez call such as EFetch:
###Code
from Bio import Entrez
Entrez.email = "[email protected]" # Always tell NCBI who you are
id_list = ["19304878", "18606172", "16403221", "16377612", "14871861", "14630660"]
search_results = Entrez.read(Entrez.epost("pubmed", id=",".join(id_list)))
webenv = search_results["WebEnv"]
query_key = search_results["QueryKey"]
###Output
_____no_output_____
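###Markdown
Anticipating Section [History and WebEnv](Using-the-history-and-WebEnv), here is a minimal sketch of what such a follow-up call looks like - EFetch referring back to the uploaded list via the history session (this assumes the `webenv` and `query_key` variables from the cell above):
###Code
# Fetch the posted PubMed records through the history session
handle = Entrez.efetch(db="pubmed", rettype="medline", retmode="text",
                       webenv=webenv, query_key=query_key)
print(handle.read()[:250])  # just peek at the start of the response
handle.close()
###Output
_____no_output_____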
###Markdown
Section [History and WebEnv](Using-the-history-and-WebEnv) shows how to use the history feature.

ESummary: Retrieving summaries from primary IDs
-----------------------------------------------

ESummary retrieves document summaries from a list of primary IDs (see the [ESummary help page](http://www.ncbi.nlm.nih.gov/entrez/query/static/esummary_help.html) for more information). In Biopython, ESummary is available as `Bio.Entrez.esummary()`. Using the search result above, we can for example find out more about the journal with ID 101660833:
###Code
from Bio import Entrez
Entrez.email = "[email protected]"     # Always tell NCBI who you are
handle = Entrez.esummary(db="nlmcatalog", id="101660833")
record = Entrez.read(handle)
info = record[0]['TitleMainList'][0]
print("Journal info\nid: {}\nTitle: {}".format(record[0]["Id"], info["Title"]))
###Output
Journal info
id: 101660833
Title: IEEE transactions on computational imaging.
###Markdown
EFetch: Downloading full records from Entrez
--------------------------------------------

EFetch is what you use when you want to retrieve a full record from Entrez. This covers several possible databases, as described on the main [EFetch Help page](http://eutils.ncbi.nlm.nih.gov/entrez/query/static/efetch_help.html).

For most of their databases, the NCBI support several different file formats. Requesting a specific file format from Entrez using `Bio.Entrez.efetch()` requires specifying the `rettype` and/or `retmode` optional arguments. The different combinations are described for each database type on the pages linked to on the [NCBI efetch webpage](http://www.ncbi.nlm.nih.gov/entrez/query/static/efetch_help.html) (e.g. [literature](http://eutils.ncbi.nlm.nih.gov/corehtml/query/static/efetchlit_help.html), [sequences](http://eutils.ncbi.nlm.nih.gov/corehtml/query/static/efetchseq_help.html) and [taxonomy](http://eutils.ncbi.nlm.nih.gov/corehtml/query/static/efetchtax_help.html)).

One common usage is downloading sequences in the FASTA or GenBank/GenPept plain text formats (which can then be parsed with `Bio.SeqIO`, see Sections \[sec:SeqIO\_GenBank\_Online\] and [EFetch: Downloading full records from Entrez](EFetch:-Downloading-full-records-from-Entrez)). From the *Cypripedioideae* example above, we can download GenBank record 186972394 using `Bio.Entrez.efetch`:
###Code
from Bio import Entrez
Entrez.email = "[email protected]" # Always tell NCBI who you are
handle = Entrez.efetch(db="nucleotide", id="186972394", rettype="gb", retmode="text")
print(handle.read())
###Output
LOCUS EU490707 1302 bp DNA linear PLN 15-JAN-2009
DEFINITION Selenipedium aequinoctiale maturase K (matK) gene, partial cds;
chloroplast.
ACCESSION EU490707
VERSION EU490707.1 GI:186972394
KEYWORDS .
SOURCE chloroplast Selenipedium aequinoctiale
ORGANISM Selenipedium aequinoctiale
Eukaryota; Viridiplantae; Streptophyta; Embryophyta; Tracheophyta;
Spermatophyta; Magnoliophyta; Liliopsida; Asparagales; Orchidaceae;
Cypripedioideae; Selenipedium.
REFERENCE 1 (bases 1 to 1302)
AUTHORS Neubig,K.M., Whitten,W.M., Carlsward,B.S., Blanco,M.A., Endara,L.,
Williams,N.H. and Moore,M.
TITLE Phylogenetic utility of ycf1 in orchids: a plastid gene more
variable than matK
JOURNAL Plant Syst. Evol. 277 (1-2), 75-84 (2009)
REFERENCE 2 (bases 1 to 1302)
AUTHORS Neubig,K.M., Whitten,W.M., Carlsward,B.S., Blanco,M.A.,
Endara,C.L., Williams,N.H. and Moore,M.J.
TITLE Direct Submission
JOURNAL Submitted (14-FEB-2008) Department of Botany, University of
Florida, 220 Bartram Hall, Gainesville, FL 32611-8526, USA
FEATURES Location/Qualifiers
source 1..1302
/organism="Selenipedium aequinoctiale"
/organelle="plastid:chloroplast"
/mol_type="genomic DNA"
/specimen_voucher="FLAS:Blanco 2475"
/db_xref="taxon:256374"
gene <1..>1302
/gene="matK"
CDS <1..>1302
/gene="matK"
/codon_start=1
/transl_table=11
/product="maturase K"
/protein_id="ACC99456.1"
/db_xref="GI:186972395"
/translation="IFYEPVEIFGYDNKSSLVLVKRLITRMYQQNFLISSVNDSNQKG
FWGHKHFFSSHFSSQMVSEGFGVILEIPFSSQLVSSLEEKKIPKYQNLRSIHSIFPFL
EDKFLHLNYVSDLLIPHPIHLEILVQILQCRIKDVPSLHLLRLLFHEYHNLNSLITSK
KFIYAFSKRKKRFLWLLYNSYVYECEYLFQFLRKQSSYLRSTSSGVFLERTHLYVKIE
HLLVVCCNSFQRILCFLKDPFMHYVRYQGKAILASKGTLILMKKWKFHLVNFWQSYFH
FWSQPYRIHIKQLSNYSFSFLGYFSSVLENHLVVRNQMLENSFIINLLTKKFDTIAPV
ISLIGSLSKAQFCTVLGHPISKPIWTDFSDSDILDRFCRICRNLCRYHSGSSKKQVLY
RIKYILRLSCARTLARKHKSTVRTFMRRLGSGLLEEFFMEEE"
ORIGIN
1 attttttacg aacctgtgga aatttttggt tatgacaata aatctagttt agtacttgtg
61 aaacgtttaa ttactcgaat gtatcaacag aattttttga tttcttcggt taatgattct
121 aaccaaaaag gattttgggg gcacaagcat tttttttctt ctcatttttc ttctcaaatg
181 gtatcagaag gttttggagt cattctggaa attccattct cgtcgcaatt agtatcttct
241 cttgaagaaa aaaaaatacc aaaatatcag aatttacgat ctattcattc aatatttccc
301 tttttagaag acaaattttt acatttgaat tatgtgtcag atctactaat accccatccc
361 atccatctgg aaatcttggt tcaaatcctt caatgccgga tcaaggatgt tccttctttg
421 catttattgc gattgctttt ccacgaatat cataatttga atagtctcat tacttcaaag
481 aaattcattt acgccttttc aaaaagaaag aaaagattcc tttggttact atataattct
541 tatgtatatg aatgcgaata tctattccag tttcttcgta aacagtcttc ttatttacga
601 tcaacatctt ctggagtctt tcttgagcga acacatttat atgtaaaaat agaacatctt
661 ctagtagtgt gttgtaattc ttttcagagg atcctatgct ttctcaagga tcctttcatg
721 cattatgttc gatatcaagg aaaagcaatt ctggcttcaa agggaactct tattctgatg
781 aagaaatgga aatttcatct tgtgaatttt tggcaatctt attttcactt ttggtctcaa
841 ccgtatagga ttcatataaa gcaattatcc aactattcct tctcttttct ggggtatttt
901 tcaagtgtac tagaaaatca tttggtagta agaaatcaaa tgctagagaa ttcatttata
961 ataaatcttc tgactaagaa attcgatacc atagccccag ttatttctct tattggatca
1021 ttgtcgaaag ctcaattttg tactgtattg ggtcatccta ttagtaaacc gatctggacc
1081 gatttctcgg attctgatat tcttgatcga ttttgccgga tatgtagaaa tctttgtcgt
1141 tatcacagcg gatcctcaaa aaaacaggtt ttgtatcgta taaaatatat acttcgactt
1201 tcgtgtgcta gaactttggc acggaaacat aaaagtacag tacgcacttt tatgcgaaga
1261 ttaggttcgg gattattaga agaattcttt atggaagaag aa
//
###Markdown
The arguments `rettype="gb"` and `retmode="text"` let us download this record in the GenBank format.

Note that until Easter 2009, the Entrez EFetch API let you use “genbank” as the return type, however the NCBI now insist on using the official return types of “gb” or “gbwithparts” (or “gp” for proteins) as described online. Also note that until Feb 2012, the Entrez EFetch API would default to returning plain text files, but now defaults to XML.

Alternatively, you could for example use `rettype="fasta"` to get the Fasta-format; see the [EFetch Sequences Help page](http://www.ncbi.nlm.nih.gov/entrez/query/static/efetchseq_help.html) for other options. Remember – the available formats depend on which database you are downloading from - see the main [EFetch Help page](http://eutils.ncbi.nlm.nih.gov/entrez/query/static/efetch_help.html).

If you fetch the record in one of the formats accepted by `Bio.SeqIO` (see Chapter \[chapter:Bio.SeqIO\]), you could directly parse it into a `SeqRecord`:
###Code
from Bio import Entrez, SeqIO
handle = Entrez.efetch(db="nucleotide", id="186972394", rettype="gb", retmode="text")
record = SeqIO.read(handle, "genbank")
handle.close()
print(record)
###Output
ID: EU490707.1
Name: EU490707
Description: Selenipedium aequinoctiale maturase K (matK) gene, partial cds; chloroplast.
Number of features: 3
/gi=186972394
/taxonomy=['Eukaryota', 'Viridiplantae', 'Streptophyta', 'Embryophyta', 'Tracheophyta', 'Spermatophyta', 'Magnoliophyta', 'Liliopsida', 'Asparagales', 'Orchidaceae', 'Cypripedioideae', 'Selenipedium']
/date=15-JAN-2009
/references=[Reference(title='Phylogenetic utility of ycf1 in orchids: a plastid gene more variable than matK', ...), Reference(title='Direct Submission', ...)]
/organism=Selenipedium aequinoctiale
/sequence_version=1
/accessions=['EU490707']
/data_file_division=PLN
/source=chloroplast Selenipedium aequinoctiale
/keywords=['']
Seq('ATTTTTTACGAACCTGTGGAAATTTTTGGTTATGACAATAAATCTAGTTTAGTA...GAA', IUPACAmbiguousDNA())
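###Markdown
As an aside, the `rettype="fasta"` alternative mentioned above works the same way - a minimal sketch fetching the same record as FASTA and parsing it with `Bio.SeqIO`:
###Code
from Bio import Entrez, SeqIO
Entrez.email = "[email protected]"  # Always tell NCBI who you are
# Same record as above, but requested in FASTA format this time
handle = Entrez.efetch(db="nucleotide", id="186972394", rettype="fasta", retmode="text")
fasta_record = SeqIO.read(handle, "fasta")
handle.close()
print(fasta_record.id, len(fasta_record))
###Output
_____no_output_____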
###Markdown
Note that a more typical use would be to save the sequence data to a local file, and *then* parse it with `Bio.SeqIO`. This can save you having to re-download the same file repeatedly while working on your script, and places less load on the NCBI’s servers. For example:
###Code
import os
from Bio import SeqIO
from Bio import Entrez
Entrez.email = "[email protected]"  # Always tell NCBI who you are
filename = "gi_186972394.gbk"
if not os.path.isfile(filename):
    # Downloading...
    with Entrez.efetch(db="nucleotide", id="186972394", rettype="gb", retmode="text") as net_handle:
        with open(filename, "w") as out_handle:
            out_handle.write(net_handle.read())
    print("Saved")
print("Parsing...")
record = SeqIO.read(filename, "genbank")
print(record)
###Output
_____no_output_____
###Markdown
To get the output in XML format, which you can parse using the `Bio.Entrez.read()` function, use `retmode="xml"`:
###Code
from Bio import Entrez
handle = Entrez.efetch(db="nucleotide", id="186972394", retmode="xml")
record = Entrez.read(handle)
handle.close()
record[0]["GBSeq_definition"]
record[0]["GBSeq_source"]
###Output
_____no_output_____
###Markdown
So, that dealt with sequences. For examples of parsing file formats specific to the other databases (e.g. the `MEDLINE` format used in PubMed), see Section [Specialized parsers](Specialized-parsers).

If you want to perform a search with `Bio.Entrez.esearch()`, and then download the records with `Bio.Entrez.efetch()`, you should use the WebEnv history feature – see Section [History and WebEnv](Using-the-history-and-WebEnv).

ELink: Searching for related items in NCBI Entrez
-------------------------------------------------

ELink, available from Biopython as `Bio.Entrez.elink()`, can be used to find related items in the NCBI Entrez databases. For example, you can use this to find nucleotide entries for an entry in the gene database, and other cool stuff.

Let’s use ELink to find articles related to the Biopython application note published in *Bioinformatics* in 2009. The PubMed ID of this article is 19304878:
###Code
from Bio import Entrez
Entrez.email = "[email protected]"
pmid = "19304878"
record = Entrez.read(Entrez.elink(dbfrom="pubmed", id=pmid))
print(record[0].keys())
print('The record is from the {} database.'.format(record[0]["DbFrom"]))
print('The IdList is {}.'.format(record[0]["IdList"]))
###Output
dict_keys(['ERROR', 'LinkSetDbHistory', 'IdList', 'LinkSetDb', 'DbFrom'])
The record is from the pubmed database.
The IdList is ['19304878'].
###Markdown
The `record` variable consists of a Python list, one for each database in which we searched. Since we specified only one PubMed ID to search for, `record` contains only one item. This item is a dictionary containing information about our search term, as well as all the related items that were found. The `"LinkSetDb"` key contains the search results, stored as a list consisting of one item for each target database. In our search results, we only find hits in the PubMed database (although sub-divided into categories):
###Code
print('There are {} search results'.format(len(record[0]["LinkSetDb"])))
for linksetdb in record[0]["LinkSetDb"]:
print(linksetdb["DbTo"], linksetdb["LinkName"], len(linksetdb["Link"]))
###Output
There are 8 search results
pubmed pubmed_pubmed 224
pubmed pubmed_pubmed_alsoviewed 3
pubmed pubmed_pubmed_citedin 276
pubmed pubmed_pubmed_combined 6
pubmed pubmed_pubmed_five 6
pubmed pubmed_pubmed_refs 17
pubmed pubmed_pubmed_reviews 8
pubmed pubmed_pubmed_reviews_five 6
###Markdown
The actual search results are stored under the `"Link"` key. In total, 224 items were found under the standard `pubmed_pubmed` link. Let’s now look at the first search result:
###Code
record[0]["LinkSetDb"][0]["Link"][0]
###Output
_____no_output_____
###Markdown
This is the article we searched for, which doesn’t help us much, so let’s look at the second search result:
###Code
record[0]["LinkSetDb"][0]["Link"][1]
###Output
_____no_output_____
###Markdown
This paper, with PubMed ID 14630660, is about the Biopython PDB parser. We can use a loop to print out all PubMed IDs:
###Code
for link in record[0]["LinkSetDb"][0]["Link"]:
print(link["Id"])
###Output
19304878
14630660
22909249
20739307
23023984
18689808
20733063
19628504
18251993
15572471
20823319
20847218
25273102
12368254
23842806
22399473
22302572
18238804
24064416
22332238
20421198
15723693
15096277
14512356
16377612
15130828
18593718
25236461
20973958
12230038
22368248
23633579
20591906
21352538
21216774
22815363
17237069
17101041
22581176
19181685
16257987
17441614
18227118
17316423
17384428
16539535
22824207
24463182
22877863
14751976
16899494
17237072
23479348
25661541
16569235
20537149
20375454
19106120
20334363
20439314
19460889
15969769
23456039
15059834
24885957
23292976
14871861
15383216
22276101
19698094
17483505
17291351
15260898
20472540
17586821
16741236
17121776
18442177
23493324
21798964
16371163
19958528
22788675
17586553
20022974
22500002
21949271
16188925
21210984
21210978
16922600
20542914
21737439
22084254
25677125
12401134
22942023
11524374
19336443
17483515
25414366
22556367
16796559
16539540
22396485
22253821
19773334
22595207
23071651
24564380
23846743
22039207
21385461
17537750
15980476
23666736
21715385
19578173
23699471
22908215
25481009
23355290
22565567
25706687
22813356
25697819
24903418
22942017
24894501
23742983
23956303
24929426
21685053
24078714
24574118
22494792
24951946
23418185
25189778
23220574
23422340
19648141
24068901
24600386
23957210
23242262
22024252
25414364
25378466
23367449
21500218
24961236
25344496
24930138
23987304
22171336
26079347
23348786
24431986
25600941
20375445
20616382
21210977
23435069
23516352
24045775
25494900
24618462
22238272
25217575
15461798
23628689
21984743
25295002
23574738
25328913
22936991
22844241
23396756
26587054
21362187
23786315
21458441
24258321
25150250
23765606
24795618
22645166
25126069
22954632
25091065
22369160
25505088
22943297
21803787
23633576
24359023
19225577
16766564
19055766
24995036
25591752
19607723
23652425
18328109
20924230
17281649
16729046
25974373
25653001
22479120
24924300
21716279
25024921
24834575
24870127
25942442
25433467
26153621
24086295
25110777
21253560
25132841
25926788
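###Markdown
Since the `LinkSetDb` categories above included `pubmed_pubmed_citedin`, here is a minimal sketch pulling out just the citing-article IDs from the same ELink `record` (the Pubmed Central approach is discussed next):
###Code
# Pick out the "cited in" link set from the ELink result above
citedin = [db for db in record[0]["LinkSetDb"] if db["LinkName"] == "pubmed_pubmed_citedin"]
if citedin:
    citing_ids = [link["Id"] for link in citedin[0]["Link"]]
    print("Cited by {} articles, e.g. {}".format(len(citing_ids), citing_ids[:5]))
###Output
_____no_output_____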
###Markdown
Now that was nice, but personally I am often more interested to find out if a paper has been cited. Well, ELink can do that too – at least for journals in Pubmed Central (see Section \[sec:elink-citations\]).

For help on ELink, see the [ELink help page](http://www.ncbi.nlm.nih.gov/entrez/query/static/elink_help.html). There is an entire sub-page just for the [link names](http://eutils.ncbi.nlm.nih.gov/corehtml/query/static/entrezlinks.html), describing how different databases can be cross referenced.

EGQuery: Global Query - counts for search terms
-----------------------------------------------

EGQuery provides counts for a search term in each of the Entrez databases (i.e. a global query). This is particularly useful to find out how many items your search terms would find in each database without actually performing lots of separate searches with ESearch (see the example in \[subsec:entrez\_example\_genbank\] below).

In this example, we use `Bio.Entrez.egquery()` to obtain the counts for “Biopython”:
###Code
from Bio import Entrez
Entrez.email = "[email protected]" # Always tell NCBI who you are
handle = Entrez.egquery(term="biopython")
record = Entrez.read(handle)
for row in record["eGQueryResult"]:
print(row["DbName"], row["Count"])
###Output
pubmed 21
pmc 560
mesh 0
books 2
pubmedhealth 2
omim 0
ncbisearch 0
nuccore 0
nucgss 0
nucest 0
protein 0
genome 0
structure 0
taxonomy 0
snp 0
dbvar 0
epigenomics 0
gene 0
sra 0
biosystems 0
unigene 0
cdd 0
clone 0
popset 0
geoprofiles 0
gds 16
homologene 0
pccompound 0
pcsubstance 0
pcassay 0
nlmcatalog 0
probe 0
gap 0
proteinclusters 0
bioproject 0
biosample 0
###Markdown
See the [EGQuery help page](http://www.ncbi.nlm.nih.gov/entrez/query/static/egquery_help.html) for more information.

ESpell: Obtaining spelling suggestions
--------------------------------------

ESpell retrieves spelling suggestions. In this example, we use `Bio.Entrez.espell()` to obtain the correct spelling of Biopython:
###Code
from Bio import Entrez
Entrez.email = "[email protected]" # Always tell NCBI who you are
handle = Entrez.espell(term="biopythooon")
record = Entrez.read(handle)
record["Query"]
record["CorrectedQuery"]
###Output
_____no_output_____
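###Markdown
Only the last expression in a cell is displayed, so here is a minimal sketch printing both fields from the same `record`:
###Code
# Show the original query alongside NCBI's suggested spelling
print("Query: {}\nCorrected: {}".format(record["Query"], record["CorrectedQuery"]))
###Output
_____no_output_____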
###Markdown
See the [ESpell help page](http://www.ncbi.nlm.nih.gov/entrez/query/static/espell_help.html) for more information. The main use of this is for GUI tools to provide automatic suggestions for search terms.

Parsing huge Entrez XML files
-----------------------------

The `Entrez.read` function reads the entire XML file returned by Entrez into a single Python object, which is kept in memory. To parse Entrez XML files too large to fit in memory, you can use the function `Entrez.parse`. This is a generator function that reads records in the XML file one by one. This function is only useful if the XML file reflects a Python list object (in other words, if `Entrez.read` on a computer with infinite memory resources would return a Python list).

For example, you can download the entire Entrez Gene database for a given organism as a file from NCBI’s ftp site. These files can be very large. As an example, on September 4, 2009, the file `Homo_sapiens.ags.gz`, containing the Entrez Gene database for human, had a size of 116576 kB. This file, which is in the `ASN` format, can be converted into an XML file using NCBI’s `gene2xml` program (see NCBI’s ftp site for more information):

```
gene2xml -b T -i Homo_sapiens.ags -o Homo_sapiens.xml
```

The resulting XML file has a size of 6.1 GB. Attempting `Entrez.read` on this file will result in a `MemoryError` on many computers.

The XML file `Homo_sapiens.xml` consists of a list of Entrez gene records, each corresponding to one Entrez gene in human. `Entrez.parse` retrieves these gene records one by one. You can then print out or store the relevant information in each record by iterating over the records. For example, this script iterates over the Entrez gene records and prints out the gene numbers and names for all current genes:

TODO: need alternate example, download option or ...

```python
from Bio import Entrez
handle = open("Homo_sapiens.xml")
records = Entrez.parse(handle)
```

```python
for record in records:
    status = record['Entrezgene_track-info']['Gene-track']['Gene-track_status']
    if status.attributes['value'] == 'discontinued':
        continue
    geneid = record['Entrezgene_track-info']['Gene-track']['Gene-track_geneid']
    genename = record['Entrezgene_gene']['Gene-ref']['Gene-ref_locus']
    print(geneid, genename)
```

This will print:

```
1 A1BG
2 A2M
3 A2MP
8 AA
9 NAT1
10 NAT2
11 AACP
12 SERPINA3
13 AADAC
14 AAMP
15 AANAT
16 AARS
17 AAVS1
...
```

Handling errors
---------------

Three things can go wrong when parsing an XML file:

- The file may not be an XML file to begin with;
- The file may end prematurely or otherwise be corrupted;
- The file may be correct XML, but contain items that are not represented in the associated DTD.

The first case occurs if, for example, you try to parse a Fasta file as if it were an XML file:
###Code
from Bio import Entrez
from Bio.Entrez.Parser import NotXMLError
handle = open("data/NC_005816.fna", 'rb') # a Fasta file
try:
record = Entrez.read(handle)
except NotXMLError as e:
print('We are expecting to get NotXMLError')
print(e)
###Output
We are expecting to get NotXMLError
Failed to parse the XML data (syntax error: line 1, column 0). Please make sure that the input data are in XML format.
###Markdown
Here, the parser didn’t find the `<?xml ...` tag with which an XML file is supposed to start, and therefore decides (correctly) that the file is not an XML file.

When your file is in the XML format but is corrupted (for example, by ending prematurely), the parser will raise a CorruptedXMLError. Here is an example of an XML file that ends prematurely:

```xml
<?xml version="1.0"?>
<!DOCTYPE eInfoResult PUBLIC "-//NLM//DTD eInfoResult, 11 May 2002//EN" "http://www.ncbi.nlm.nih.gov/entrez/query/DTD/eInfo_020511.dtd">
<eInfoResult>
<DbList>
        <DbName>pubmed</DbName>
        <DbName>protein</DbName>
        <DbName>nucleotide</DbName>
        <DbName>nuccore</DbName>
        <DbName>nucgss</DbName>
        <DbName>nucest</DbName>
        <DbName>structure</DbName>
        <DbName>genome</DbName>
        <DbName>books</DbName>
        <DbName>cancerchromosomes</DbName>
        <DbName>cdd</DbName>
```

which will generate the following traceback:

```python
---------------------------------------------------------------------------
ExpatError                                Traceback (most recent call last)
/Users/vincentdavis/anaconda/envs/py35/lib/python3.5/site-packages/Bio/Entrez/Parser.py in read(self, handle)
    214         try:
--> 215             self.parser.ParseFile(handle)
    216         except expat.ExpatError as e:

ExpatError: syntax error: line 1, column 0

During handling of the above exception, another exception occurred:

NotXMLError                               Traceback (most recent call last)
<ipython-input-...> in <module>()
----> 1 Entrez.read(handle)

/Users/vincentdavis/anaconda/envs/py35/lib/python3.5/site-packages/Bio/Entrez/__init__.py in read(handle, validate)
    419     from .Parser import DataHandler
    420     handler = DataHandler(validate)
--> 421     record = handler.read(handle)
    422     return record
    423 

/Users/vincentdavis/anaconda/envs/py35/lib/python3.5/site-packages/Bio/Entrez/Parser.py in read(self, handle)
    223             # We have not seen the initial <!xml declaration, so probably
    224             # the input data is not in XML format.
--> 225             raise NotXMLError(e)
    226         try:
    227             return self.object

NotXMLError: Failed to parse the XML data (syntax error: line 1, column 0). Please make sure that the input data are in XML format.
```

Note that the error message tells you at what point in the XML file the error was detected.

The third type of error occurs if the XML file contains tags that do not have a description in the corresponding DTD file. This is an example of such an XML file:

```xml
<?xml version="1.0"?>
<!DOCTYPE eInfoResult PUBLIC "-//NLM//DTD eInfoResult, 11 May 2002//EN" "http://www.ncbi.nlm.nih.gov/entrez/query/DTD/eInfo_020511.dtd">
<eInfoResult>
        <DbInfo>
        <DbName>pubmed</DbName>
        <MenuName>PubMed</MenuName>
        <Description>PubMed bibliographic record</Description>
        <Count>20161961</Count>
        <LastUpdate>2010/09/10 04:52</LastUpdate>
        ...
        <DocsumList>
                <Docsum>
                        <DsName>PubDate</DsName>
                        <DsType>4</DsType>
                        <DsTypeName>string</DsTypeName>
                </Docsum>
                <Docsum>
                        <DsName>EPubDate</DsName>
        ...
```

In this file, for some reason the tag `<DocsumList>` (and several others) are not listed in the DTD file `eInfo_020511.dtd`, which is specified on the second line as the DTD for this XML file.
By default, the parser will stop and raise a ValidationError if it cannot find some tag in the DTD:

```python
from Bio import Entrez
handle = open("data/einfo3.xml", 'rb')
record = Entrez.read(handle)
```

```python
---------------------------------------------------------------------------
ValidationError                           Traceback (most recent call last)
<ipython-input-...> in <module>()
      1 from Bio import Entrez
      2 handle = open("data/einfo3.xml", 'rb')
----> 3 record = Entrez.read(handle)

/Users/vincentdavis/anaconda/envs/py35/lib/python3.5/site-packages/Bio/Entrez/__init__.py in read(handle, validate)
    419     from .Parser import DataHandler
    420     handler = DataHandler(validate)
--> 421     record = handler.read(handle)
    422     return record
    423 

/Users/vincentdavis/anaconda/envs/py35/lib/python3.5/site-packages/Bio/Entrez/Parser.py in read(self, handle)
    213             raise IOError("Can't parse a closed handle")
    214         try:
--> 215             self.parser.ParseFile(handle)
    216         except expat.ExpatError as e:
    217             if self.parser.StartElementHandler:

-------src-dir--------/Python-3.5.1/Modules/pyexpat.c in StartElement()

/Users/vincentdavis/anaconda/envs/py35/lib/python3.5/site-packages/Bio/Entrez/Parser.py in startElementHandler(self, name, attrs)
    348             # Element not found in DTD
    349             if self.validating:
--> 350                 raise ValidationError(name)
    351             else:
    352                 # this will not be stored in the record

ValidationError: Failed to find tag 'DocsumList' in the DTD. To skip all tags that are not represented in the DTD, please call Bio.Entrez.read or Bio.Entrez.parse with validate=False.
```

Optionally, you can instruct the parser to skip such tags instead of raising a ValidationError. This is done by calling `Entrez.read` or `Entrez.parse` with the argument `validate` equal to False:
###Code
from Bio import Entrez
handle = open("data/einfo3.xml", 'rb')
record = Entrez.read(handle, validate=False)
###Output
_____no_output_____
###Markdown
Of course, the information contained in the XML tags that are not in the DTD is not present in the record returned by `Entrez.read`.

Specialized parsers
-------------------

The `Bio.Entrez.read()` function can parse most (if not all) XML output returned by Entrez. Entrez typically allows you to retrieve records in other formats, which may have some advantages compared to the XML format in terms of readability (or download size).

To request a specific file format from Entrez using `Bio.Entrez.efetch()` requires specifying the `rettype` and/or `retmode` optional arguments. The different combinations are described for each database type on the [NCBI efetch webpage](http://www.ncbi.nlm.nih.gov/entrez/query/static/efetch_help.html).

One obvious case is you may prefer to download sequences in the FASTA or GenBank/GenPept plain text formats (which can then be parsed with `Bio.SeqIO`, see Sections \[sec:SeqIO\_GenBank\_Online\] and [EFetch: Downloading full records from Entrez](EFetch:-Downloading-full-records-from-Entrez)). For the literature databases, Biopython contains a parser for the `MEDLINE` format used in PubMed.

Parsing Medline records {subsec:entrez-and-medline}

You can find the Medline parser in `Bio.Medline`. Suppose we want to parse the file `pubmed_result1.txt`, containing one Medline record. You can find this file in Biopython’s `Tests\Medline` directory. The file looks like this:

```
PMID- 12230038
OWN - NLM
STAT- MEDLINE
DA  - 20020916
DCOM- 20030606
LR  - 20041117
PUBM- Print
IS  - 1467-5463 (Print)
VI  - 3
IP  - 3
DP  - 2002 Sep
TI  - The Bio* toolkits--a brief overview.
PG  - 296-302
AB  - Bioinformatics research is often difficult to do with commercial
      software. The Open Source BioPerl, BioPython and Biojava projects
      provide toolkits with...
```

We first open the file and then parse it:
###Code
from Bio import Medline
with open("data/pubmed_result1.txt") as handle:
record = Medline.read(handle)
###Output
_____no_output_____
###Markdown
The `record` now contains the Medline record as a Python dictionary:
###Code
record["PMID"]
###Output
_____no_output_____
###Markdown
###Code
record["AB"]
###Output
_____no_output_____
###Markdown
The key names used in a Medline record can be rather obscure; use
###Code
help(record)
###Output
Help on Record in module Bio.Medline object:
class Record(builtins.dict)
| A dictionary holding information from a Medline record.
|
| All data are stored under the mnemonic appearing in the Medline
| file. These mnemonics have the following interpretations:
|
| ========= ==============================
| Mnemonic Description
| --------- ------------------------------
| AB Abstract
| CI Copyright Information
| AD Affiliation
| IRAD Investigator Affiliation
| AID Article Identifier
| AU Author
| FAU Full Author
| CN Corporate Author
| DCOM Date Completed
| DA Date Created
| LR Date Last Revised
| DEP Date of Electronic Publication
| DP Date of Publication
| EDAT Entrez Date
| GS Gene Symbol
| GN General Note
| GR Grant Number
| IR Investigator Name
| FIR Full Investigator Name
| IS ISSN
| IP Issue
| TA Journal Title Abbreviation
| JT Journal Title
| LA Language
| LID Location Identifier
| MID Manuscript Identifier
| MHDA MeSH Date
| MH MeSH Terms
| JID NLM Unique ID
| RF Number of References
| OAB Other Abstract
| OCI Other Copyright Information
| OID Other ID
| OT Other Term
| OTO Other Term Owner
| OWN Owner
| PG Pagination
| PS Personal Name as Subject
| FPS Full Personal Name as Subject
| PL Place of Publication
| PHST Publication History Status
| PST Publication Status
| PT Publication Type
| PUBM Publishing Model
| PMC PubMed Central Identifier
| PMID PubMed Unique Identifier
| RN Registry Number/EC Number
| NM Substance Name
| SI Secondary Source ID
| SO Source
| SFM Space Flight Mission
| STAT Status
| SB Subset
| TI Title
| TT Transliterated Title
| VI Volume
| CON Comment on
| CIN Comment in
| EIN Erratum in
| EFR Erratum for
| CRI Corrected and Republished in
| CRF Corrected and Republished from
| PRIN Partial retraction in
| PROF Partial retraction of
| RPI Republished in
| RPF Republished from
| RIN Retraction in
| ROF Retraction of
| UIN Update in
| UOF Update of
| SPIN Summary for patients in
| ORI Original report in
| ========= ==============================
|
| Method resolution order:
| Record
| builtins.dict
| builtins.object
|
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| ----------------------------------------------------------------------
| Methods inherited from builtins.dict:
|
| __contains__(self, key, /)
| True if D has a key k, else False.
|
| __delitem__(self, key, /)
| Delete self[key].
|
| __eq__(self, value, /)
| Return self==value.
|
| __ge__(self, value, /)
| Return self>=value.
|
| __getattribute__(self, name, /)
| Return getattr(self, name).
|
| __getitem__(...)
| x.__getitem__(y) <==> x[y]
|
| __gt__(self, value, /)
| Return self>value.
|
| __init__(self, /, *args, **kwargs)
| Initialize self. See help(type(self)) for accurate signature.
|
| __iter__(self, /)
| Implement iter(self).
|
| __le__(self, value, /)
| Return self<=value.
|
| __len__(self, /)
| Return len(self).
|
| __lt__(self, value, /)
| Return self<value.
|
| __ne__(self, value, /)
| Return self!=value.
|
| __new__(*args, **kwargs) from builtins.type
| Create and return a new object. See help(type) for accurate signature.
|
| __repr__(self, /)
| Return repr(self).
|
| __setitem__(self, key, value, /)
| Set self[key] to value.
|
| __sizeof__(...)
| D.__sizeof__() -> size of D in memory, in bytes
|
| clear(...)
| D.clear() -> None. Remove all items from D.
|
| copy(...)
| D.copy() -> a shallow copy of D
|
| fromkeys(iterable, value=None, /) from builtins.type
| Returns a new dict with keys from iterable and values equal to value.
|
| get(...)
| D.get(k[,d]) -> D[k] if k in D, else d. d defaults to None.
|
| items(...)
| D.items() -> a set-like object providing a view on D's items
|
| keys(...)
| D.keys() -> a set-like object providing a view on D's keys
|
| pop(...)
| D.pop(k[,d]) -> v, remove specified key and return the corresponding value.
| If key is not found, d is returned if given, otherwise KeyError is raised
|
| popitem(...)
| D.popitem() -> (k, v), remove and return some (key, value) pair as a
| 2-tuple; but raise KeyError if D is empty.
|
| setdefault(...)
| D.setdefault(k[,d]) -> D.get(k,d), also set D[k]=d if k not in D
|
| update(...)
| D.update([E, ]**F) -> None. Update D from dict/iterable E and F.
| If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]
| If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v
| In either case, this is followed by: for k in F: D[k] = F[k]
|
| values(...)
| D.values() -> an object providing a view on D's values
|
| ----------------------------------------------------------------------
| Data and other attributes inherited from builtins.dict:
|
| __hash__ = None
###Markdown
for a brief summary.

To parse a file containing multiple Medline records, you can use the `parse` function instead:
###Code
from Bio import Medline
with open("data/pubmed_result2.txt") as handle:
for record in Medline.parse(handle):
print(record["TI"])
###Output
A high level interface to SCOP and ASTRAL implemented in python.
GenomeDiagram: a python package for the visualization of large-scale genomic data.
Open source clustering software.
PDB file parser and structure class implemented in Python.
###Markdown
Instead of parsing Medline records stored in files, you can also parse Medline records downloaded by `Bio.Entrez.efetch`. For example, let’s look at all Medline records in PubMed related to Biopython:
###Code
from Bio import Entrez
Entrez.email = "[email protected]" # Always tell NCBI who you are
handle = Entrez.esearch(db="pubmed", term="biopython")
record = Entrez.read(handle)
record["IdList"]
###Output
_____no_output_____
###Markdown
We now use `Bio.Entrez.efetch` to download these Medline records:
###Code
idlist = record["IdList"]
handle = Entrez.efetch(db="pubmed", id=idlist, rettype="medline", retmode="text")
###Output
_____no_output_____
###Markdown
Here, we specify `rettype="medline", retmode="text"` to obtain the Medline records in plain-text Medline format. Now we use `Bio.Medline` to parse these records:
###Code
from Bio import Medline
records = Medline.parse(handle)
for record in records:
print(record["AU"])
###Output
['Waldmann J', 'Gerken J', 'Hankeln W', 'Schweer T', 'Glockner FO']
['Mielke CJ', 'Mandarino LJ', 'Dinu V']
['Gajda MJ']
['Mathelier A', 'Zhao X', 'Zhang AW', 'Parcy F', 'Worsley-Hunt R', 'Arenillas DJ', 'Buchman S', 'Chen CY', 'Chou A', 'Ienasescu H', 'Lim J', 'Shyr C', 'Tan G', 'Zhou M', 'Lenhard B', 'Sandelin A', 'Wasserman WW']
['Morales HF', 'Giovambattista G']
['Baldwin S', 'Revanna R', 'Thomson S', 'Pither-Joyce M', 'Wright K', 'Crowhurst R', 'Fiers M', 'Chen L', 'Macknight R', 'McCallum JA']
['Talevich E', 'Invergo BM', 'Cock PJ', 'Chapman BA']
['Prins P', 'Goto N', 'Yates A', 'Gautier L', 'Willis S', 'Fields C', 'Katayama T']
['Schmitt T', 'Messina DN', 'Schreiber F', 'Sonnhammer EL']
['Antao T']
['Cock PJ', 'Fields CJ', 'Goto N', 'Heuer ML', 'Rice PM']
['Jankun-Kelly TJ', 'Lindeman AD', 'Bridges SM']
['Korhonen J', 'Martinmaki P', 'Pizzi C', 'Rastas P', 'Ukkonen E']
['Cock PJ', 'Antao T', 'Chang JT', 'Chapman BA', 'Cox CJ', 'Dalke A', 'Friedberg I', 'Hamelryck T', 'Kauff F', 'Wilczynski B', 'de Hoon MJ']
['Munteanu CR', 'Gonzalez-Diaz H', 'Magalhaes AL']
['Faircloth BC']
['Casbon JA', 'Crooks GE', 'Saqi MA']
['Pritchard L', 'White JA', 'Birch PR', 'Toth IK']
['de Hoon MJ', 'Imoto S', 'Nolan J', 'Miyano S']
['Hamelryck T', 'Manderick B']
###Markdown
For comparison, here we show an example using the XML format:
###Code
from Bio import Entrez
Entrez.email = "[email protected]" # Always tell NCBI who you are
handle = Entrez.esearch(db="pubmed", term="biopython")
record = Entrez.read(handle)
idlist = record["IdList"]
handle = Entrez.efetch(db="pubmed", id=idlist, rettype="medline", retmode="xml")
records = Entrez.read(handle)
for record in records:
print(record["MedlineCitation"]["Article"]["ArticleTitle"])
###Output
FastaValidator: an open-source Java library to parse and validate FASTA formatted sequences.
AMASS: a database for investigating protein structures.
HPDB-Haskell library for processing atomic biomolecular structures in Protein Data Bank format.
JASPAR 2014: an extensively expanded and updated open-access database of transcription factor binding profiles.
BioSmalltalk: a pure object system and library for bioinformatics.
A toolkit for bulk PCR-based marker design from next-generation sequence data: application for development of a framework linkage map in bulb onion (Allium cepa L.).
Bio.Phylo: a unified toolkit for processing, analyzing and visualizing phylogenetic trees in Biopython.
Sharing programming resources between Bio* projects through remote procedure call and native call stack strategies.
Letter to the editor: SeqXML and OrthoXML: standards for sequence and orthology information.
interPopula: a Python API to access the HapMap Project dataset.
The Sanger FASTQ file format for sequences with quality scores, and the Solexa/Illumina FASTQ variants.
Exploratory visual analysis of conserved domains on multiple sequence alignments.
MOODS: fast search for position weight matrix matches in DNA sequences.
Biopython: freely available Python tools for computational molecular biology and bioinformatics.
Enzymes/non-enzymes classification model complexity based on composition, sequence, 3D and topological indices.
msatcommander: detection of microsatellite repeat arrays and automated, locus-specific primer design.
A high level interface to SCOP and ASTRAL implemented in python.
GenomeDiagram: a python package for the visualization of large-scale genomic data.
Open source clustering software.
PDB file parser and structure class implemented in Python.
###Markdown
Note that in both of these examples, for simplicity we have naively combined ESearch and EFetch. In this situation, the NCBI would expect you to use their history feature, as illustrated in Section [History and WebEnv](Using-the-history-and-WebEnv).

Parsing GEO records

GEO ([Gene Expression Omnibus](http://www.ncbi.nlm.nih.gov/geo/)) is a data repository of high-throughput gene expression and hybridization array data. The `Bio.Geo` module can be used to parse GEO-formatted data.

The following code fragment shows how to parse the example GEO file `GSE16.txt` into a record and print the record:
###Code
from Bio import Geo
handle = open("data/GSE16.txt")
records = Geo.parse(handle)
for record in records:
print(record)
###Output
GEO Type: SAMPLE
GEO Id: GSM804
Sample_author: Antoine,M,Snijders
Sample_author: Norma,,Nowak
Sample_author: Richard,,Segraves
Sample_author: Stephanie,,Blackwood
Sample_author: Nils,,Brown
Sample_author: Jeffery,,Conroy
Sample_author: Greg,,Hamilton
Sample_author: Anna,K,Hindle
Sample_author: Bing,,Huey
Sample_author: Karen,,Kimura
Sample_author: Sindy,,Law
Sample_author: Ken,,Myambo
Sample_author: Joel,,Palmer
Sample_author: Bauke,,Ylstra
Sample_author: Jingzhu,P,Yue
Sample_author: Joe,W,Gray
Sample_author: Ajay,N,Jain
Sample_author: Daniel,,Pinkel
Sample_author: Donna,G,Albertson
Sample_description: Coriell Cell Repositories cell line <a h
ref="http://locus.umdnj.edu/nigms/nigms_cgi/display.cgi?GM05296">GM05296</a>.
Sample_description: Fibroblast cell line derived from a 1 mo
nth old female with multiple congenital malformations, dysmorphic features, intr
auterine growth retardation, heart murmur, cleft palate, equinovarus deformity,
microcephaly, coloboma of right iris, clinodactyly, reduced RBC catalase activit
y, and 1 copy of catalase gene.
Sample_description: Chromosome abnormalities are present.
Sample_description: Karyotype is 46,XX,-11,+der(11)inv ins(1
1;10)(11pter> 11p13::10q21>10q24::11p13>11qter)mat
Sample_organism: Homo sapiens
Sample_platform_id: GPL28
Sample_pubmed_id: 11687795
Sample_series_id: GSE16
Sample_status: Public on Feb 12 2002
Sample_submission_date: Jan 17 2002
Sample_submitter_city: San Francisco,CA,94143,USA
Sample_submitter_department: Comprehensive Cancer Center
Sample_submitter_email: [email protected]
Sample_submitter_institute: University of California San Francisco
Sample_submitter_name: Donna,G,Albertson
Sample_submitter_phone: 415 502-8463
Sample_target_source1: Cell line GM05296
Sample_target_source2: normal male reference genomic DNA
Sample_title: CGH_Albertson_GM05296-001218
Sample_type: dual channel genomic
Column Header Definitions
ID_REF: Unique row identifier, genome position o
rder
LINEAR_RATIO: Mean of replicate Cy3/Cy5 ratios
LOG2STDDEV: Standard deviation of VALUE
NO_REPLICATES: Number of replicate spot measurements
VALUE: aka LOG2RATIO, mean of log base 2 of LIN
EAR_RATIO
0: ID_REF VALUE LINEAR_RATIO LOG2STDDEV NO_REPLICATES
1: 1 1.047765 0.011853 3
2: 2 0
3: 3 0.008824 1.006135 0.00143 3
4: 4 -0.000894 0.99938 0.001454 3
5: 5 0.075875 1.054 0.003077 3
6: 6 0.017303 1.012066 0.005876 2
7: 7 -0.006766 0.995321 0.013881 3
8: 8 0.020755 1.014491 0.005506 3
9: 9 -0.094938 0.936313 0.012662 3
10: 10 -0.054527 0.96291 0.01073 3
11: 11 -0.025057 0.982782 0.003855 3
12: 12 0
13: 13 0.108454 1.078072 0.005196 3
14: 14 0.078633 1.056017 0.009165 3
15: 15 0.098571 1.070712 0.007834 3
16: 16 0.044048 1.031003 0.013651 3
17: 17 0.018039 1.012582 0.005471 3
18: 18 -0.088807 0.9403 0.010571 3
19: 19 0.016349 1.011397 0.007113 3
20: 20 0.030977 1.021704 0.016798 3
###Markdown
You can search the “gds” database (GEO datasets) with ESearch:
###Code
from Bio import Entrez
Entrez.email = "[email protected]" # Always tell NCBI who you are
handle = Entrez.esearch(db="gds", term="GSE16")
record = Entrez.read(handle)
record["Count"]
record["IdList"]
###Output
_____no_output_____
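###Markdown
A minimal sketch of putting names to those UIDs with ESummary, reusing `record["IdList"]` from the search above (field names such as `Accession` are an assumption here and may vary by database):
###Code
# Summarize each GEO dataset/platform hit from the gds search
for uid in record["IdList"]:
    summary = Entrez.read(Entrez.esummary(db="gds", id=uid))
    print(uid, summary[0].get("Accession", "?"))
###Output
_____no_output_____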
###Markdown
From the Entrez website, UID “200000016” is GDS16 while the other hit “100000028” is for the associated platform, GPL28. Unfortunately, at the time of writing the NCBI don’t seem to support downloading GEO files using Entrez (not as XML, nor in the *Simple Omnibus Format in Text* (SOFT) format).

However, it is actually pretty straightforward to download the GEO files by FTP or HTTP from [http://ftp.ncbi.nih.gov/pub/geo/](http://ftp.ncbi.nih.gov/pub/geo/) instead. In this case you might want [http://ftp.ncbi.nih.gov/pub/geo/DATA/SOFT/by_series/GSE16/GSE16_family.soft.gz](http://ftp.ncbi.nih.gov/pub/geo/DATA/SOFT/by_series/GSE16/GSE16_family.soft.gz) (a compressed file, see the Python module gzip).

Parsing UniGene records

UniGene is an NCBI database of the transcriptome, with each UniGene record showing the set of transcripts that are associated with a particular gene in a specific organism. A typical UniGene record looks like this:

```
ID          Hs.2
TITLE       N-acetyltransferase 2 (arylamine N-acetyltransferase)
GENE        NAT2
CYTOBAND    8p22
GENE_ID     10
LOCUSLINK   10
HOMOL       YES
EXPRESS     bone| connective tissue| intestine| liver| liver tumor| normal| soft tissue/muscle tissue tumor| adult
RESTR_EXPR  adult
CHROMOSOME  8
STS         ACC=PMC310725P3 UNISTS=272646
STS         ACC=WIAF-2120 UNISTS=44576
STS         ACC=G59899 UNISTS=137181
...
STS         ACC=GDB:187676 UNISTS=155563
PROTSIM     ORG=10090; PROTGI=6754794; PROTID=NP_035004.1; PCT=76.55; ALN=288
PROTSIM     ORG=9796; PROTGI=149742490; PROTID=XP_001487907.1; PCT=79.66; ALN=288
PROTSIM     ORG=9986; PROTGI=126722851; PROTID=NP_001075655.1; PCT=76.90; ALN=288
...
PROTSIM     ORG=9598; PROTGI=114619004; PROTID=XP_519631.2; PCT=98.28; ALN=288
SCOUNT      38
SEQUENCE    ACC=BC067218.1; NID=g45501306; PID=g45501307; SEQTYPE=mRNA
SEQUENCE    ACC=NM_000015.2; NID=g116295259; PID=g116295260; SEQTYPE=mRNA
SEQUENCE    ACC=D90042.1; NID=g219415; PID=g219416; SEQTYPE=mRNA
SEQUENCE    ACC=D90040.1; NID=g219411; PID=g219412; SEQTYPE=mRNA
SEQUENCE    ACC=BC015878.1; NID=g16198419; PID=g16198420; SEQTYPE=mRNA
SEQUENCE    ACC=CR407631.1; NID=g47115198; PID=g47115199; SEQTYPE=mRNA
SEQUENCE    ACC=BG569293.1; NID=g13576946; CLONE=IMAGE:4722596; END=5'; LID=6989; SEQTYPE=EST; TRACE=44157214
...
SEQUENCE    ACC=AU099534.1; NID=g13550663; CLONE=HSI08034; END=5'; LID=8800; SEQTYPE=EST
//
```

This particular record shows the set of transcripts (shown in the `SEQUENCE` lines) that originate from the human gene NAT2, encoding an N-acetyltransferase. The `PROTSIM` lines show proteins with significant similarity to NAT2, whereas the `STS` lines show the corresponding sequence-tagged sites in the genome.

To parse UniGene files, use the `Bio.UniGene` module:

TODO: Need a working example
###Code
# from Bio import UniGene
# input = open("data/myunigenefile.data")
# record = UniGene.read(input)
###Output
_____no_output_____
###Markdown
The `record` returned by `UniGene.read` is a Python object with attributes corresponding to the fields in the UniGene record. For example,
###Code
# record.ID
# record.title
###Output
_____no_output_____
###Markdown
The `EXPRESS` and `RESTR_EXPR` lines are stored as Python lists of strings:

```
['bone', 'connective tissue', 'intestine', 'liver', 'liver tumor', 'normal', 'soft tissue/muscle tissue tumor', 'adult']
```

Specialized objects are returned for the `STS`, `PROTSIM`, and `SEQUENCE` lines, storing the keys shown in each line as attributes:
###Code
# record.sts[0].acc
# record.sts[0].unists
###Output
_____no_output_____
###Markdown
and similarly for the `PROTSIM` and `SEQUENCE` lines.

To parse a file containing more than one UniGene record, use the `parse` function in `Bio.UniGene`:

TODO: Need a working example
###Code
# from Bio import UniGene
# input = open("unigenerecords.data")
# records = UniGene.parse(input)
# for record in records:
# print(record.ID)
###Output
_____no_output_____
###Markdown
Using a proxy
-------------

Normally you won’t have to worry about using a proxy, but if this is an issue on your network here is how to deal with it. Internally, `Bio.Entrez` uses the standard Python library `urllib` for accessing the NCBI servers. This will check an environment variable called `http_proxy` to configure any simple proxy automatically. Unfortunately this module does not support the use of proxies which require authentication.

You may choose to set the `http_proxy` environment variable once (how you do this will depend on your operating system). Alternatively you can set this within Python at the start of your script, for example:

```
import os
os.environ["http_proxy"] = "http://proxyhost.example.com:8080"
```

See the [urllib documentation](http://www.python.org/doc/lib/module-urllib.html) for more details.

Examples

PubMed and Medline {subsec:pub_med}

If you are in the medical field or interested in human issues (and many times even if you are not!), PubMed is an excellent source of all kinds of goodies. So like other things, we’d like to be able to grab information from it and use it in Python scripts.

In this example, we will query PubMed for all articles having to do with orchids (see section \[sec:orchids\] for our motivation). We first check how many of such articles there are:
###Code
from Bio import Entrez
Entrez.email = "[email protected]" # Always tell NCBI who you are
handle = Entrez.egquery(term="orchid")
record = Entrez.read(handle)
for row in record["eGQueryResult"]:
if row["DbName"]=="pubmed":
print(row["Count"])
###Output
1376
###Markdown
Now we use the `Bio.Entrez.esearch` function to download the PubMed IDs of these articles, capping the list at 463 with `retmax`:
###Code
handle = Entrez.esearch(db="pubmed", term="orchid", retmax=463)
record = Entrez.read(handle)
idlist = record["IdList"]
print("The first 10 of the PubMed IDs of articles related to orchids:\n {}".format(idlist[:10]))
###Output
The first 10 of the PubMed IDs of articles related to orchids:
 ['26752741', '26743923', '26738548', '26732875', '26732614', '26724929', '26715121', '26713612', '26708054', '26694378']
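###Markdown
Since `retmax=463` returns only part of the matches counted above, here is a minimal paging sketch using the standard ESearch `retstart` parameter to fetch the next batch (new variable names so the cells below are unaffected):
###Code
# Fetch the next page of orchid PubMed IDs, starting after the first 463
page_handle = Entrez.esearch(db="pubmed", term="orchid", retstart=463, retmax=463)
page_record = Entrez.read(page_handle)
print(len(page_record["IdList"]))
###Output
_____no_output_____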
###Markdown
Now that we’ve got them, we obviously want to get the corresponding Medline records and extract the information from them. Here, we’ll download the Medline records in the Medline flat-file format, and use the `Bio.Medline` module to parse them:
###Code
from Bio import Medline
handle = Entrez.efetch(db="pubmed", id=idlist, rettype="medline", retmode="text")
records = Medline.parse(handle)
###Output
_____no_output_____
###Markdown
NOTE - We’ve just done a separate search and fetch here, the NCBI much prefer you to take advantage of their history support in this situation. See Section [History and WebEnv](Using-the-history-and-WebEnv).

Keep in mind that `records` is an iterator, so you can iterate through the records only once. If you want to save the records, you can convert them to a list:
###Code
records = list(records)
###Output
_____no_output_____
###Markdown
Let’s now iterate over the records to print out some information about each record:
###Code
for record in records:
print("title:", record.get("TI", "?"))
print("authors:", record.get("AU", "?"))
print("source:", record.get("SO", "?"))
print("")
###Output
title: Promise and Challenge of DNA Barcoding in Venus Slipper (Paphiopedilum).
authors: ['Guo YY', 'Huang LQ', 'Liu ZJ', 'Wang XQ']
source: PLoS One. 2016 Jan 11;11(1):e0146880. doi: 10.1371/journal.pone.0146880. eCollection 2016.
title: In vitro profiling of anti-MRSA activity of thymoquinone against selected type and clinical strains.
authors: ['Hariharan P', 'Paul-Satyaseela M', 'Gnanamani A']
source: Lett Appl Microbiol. 2016 Jan 7. doi: 10.1111/lam.12544.
title: Low glutathione redox state couples with a decreased ascorbate redox ratio to accelerate flowering in Oncidium orchid.
authors: ['Chin DC', 'Hsieh CC', 'Lin HY', 'Yeh KW']
source: Plant Cell Physiol. 2016 Jan 6. pii: pcv206.
title: Proteomic and morphometric study of the in vitro interaction between Oncidium sphacelatum Lindl. (Orchidaceae) and Thanatephorus sp. RG26 (Ceratobasidiaceae).
authors: ['Lopez-Chavez MY', 'Guillen-Navarro K', 'Bertolini V', 'Encarnacion S', 'Hernandez-Ortiz M', 'Sanchez-Moreno I', 'Damon A']
source: Mycorrhiza. 2016 Jan 6.
title: A transcriptome-wide, organ-specific regulatory map of Dendrobium officinale, an important traditional Chinese orchid herb.
authors: ['Meng Y', 'Yu D', 'Xue J', 'Lu J', 'Feng S', 'Shen C', 'Wang H']
source: Sci Rep. 2016 Jan 6;6:18864. doi: 10.1038/srep18864.
title: Methods for genetic transformation in Dendrobium.
authors: ['Teixeira da Silva JA', 'Dobranszki J', 'Cardoso JC', 'Chandler SF', 'Zeng S']
source: Plant Cell Rep. 2016 Jan 2.
title: Sebacina vermifera: a unique root symbiont with vast agronomic potential.
authors: ['Ray P', 'Craven KD']
source: World J Microbiol Biotechnol. 2016 Jan;32(1):16. doi: 10.1007/s11274-015-1970-7. Epub 2015 Dec 29.
title: Cuticular Hydrocarbons of Orchid Bees Males: Interspecific and Chemotaxonomy Variation.
authors: ['Dos Santos AB', 'do Nascimento FS']
source: PLoS One. 2015 Dec 29;10(12):e0145070. doi: 10.1371/journal.pone.0145070. eCollection 2015.
title: Sex and the Catasetinae (Darwin's favourite orchids).
authors: ['Perez-Escobar OA', 'Gottschling M', 'Whitten WM', 'Salazar G', 'Gerlach G']
source: Mol Phylogenet Evol. 2015 Dec 17. pii: S1055-7903(15)00372-3. doi: 10.1016/j.ympev.2015.11.019.
title: Comparative Transcriptome Analysis of Genes Involved in GA-GID1-DELLA Regulatory Module in Symbiotic and Asymbiotic Seed Germination of Anoectochilus roxburghii (Wall.) Lindl. (Orchidaceae).
authors: ['Liu SS', 'Chen J', 'Li SC', 'Zeng X', 'Meng ZX', 'Guo SX']
source: Int J Mol Sci. 2015 Dec 18;16(12):30190-203. doi: 10.3390/ijms161226224.
title: Dual Drug Loaded Nanoliposomal Chemotherapy: A Promising Strategy for Treatment of Head and Neck Squamous Cell Carcinoma.
authors: ['Mohan A', 'Narayanan S', 'Balasubramanian G', 'Sethuraman S', 'Krishnan UM']
source: Eur J Pharm Biopharm. 2015 Dec 9. pii: S0939-6411(15)00489-0. doi: 10.1016/j.ejpb.2015.11.017.
title: Orally available stilbene derivatives as potent HDAC inhibitors with antiproliferative activities and antitumor effects in human tumor xenografts.
authors: ['Kachhadia V', 'Rajagopal S', 'Ponpandian T', 'Vignesh R', 'Anandhan K', 'Prabhu D', 'Rajendran P', 'Nidhyanandan S', 'Roy AM', 'Ahamed FA', 'Surendran N', 'Rajagopal S', 'Narayanan S', 'Gopalan B']
source: Eur J Med Chem. 2015 Nov 19;108:274-286. doi: 10.1016/j.ejmech.2015.11.014.
title: Parapheromones for Thynnine Wasps.
authors: ['Bohman B', 'Karton A', 'Dixon RC', 'Barrow RA', 'Peakall R']
source: J Chem Ecol. 2015 Dec 14.
title: Interaction networks and the use of floral resources by male orchid bees (Hymenoptera: Apidae: Euglossini) in a primary rain forests of the Choco Region (Colombia).
authors: ['Ospina-Torres R', 'Montoya-Pfeiffer PM', 'Parra-H A', 'Solarte V', 'Tupac Otero J']
source: Rev Biol Trop. 2015 Sep;63(3):647-58.
title: A taste of pineapple evolution through genome sequencing.
authors: ['Xu Q', 'Liu ZJ']
source: Nat Genet. 2015 Dec 1;47(12):1374-6. doi: 10.1038/ng.3450.
title: Somatic Embryogenesis in Two Orchid Genera (Cymbidium, Dendrobium).
authors: ['Teixeira da Silva JA', 'Winarto B']
source: Methods Mol Biol. 2016;1359:371-86. doi: 10.1007/978-1-4939-3061-6_18.
title: Tumors of the Testis: Morphologic Features and Molecular Alterations.
authors: ['Howitt BE', 'Berney DM']
source: Surg Pathol Clin. 2015 Dec;8(4):687-716. doi: 10.1016/j.path.2015.07.007.
title: Scent emission profiles from Darwin's orchid - Angraecum sesquipedale: Investigation of the aldoxime metabolism using clustering analysis.
authors: ['Nielsen LJ', 'Moller BL']
source: Phytochemistry. 2015 Dec;120:3-18. doi: 10.1016/j.phytochem.2015.10.004. Epub 2015 Oct 22.
title: Two common species dominate the species-rich Euglossine bee fauna of an Atlantic Rainforest remnant in Pernambuco, Brazil.
authors: ['Oliveira R', 'Pinto CE', 'Schlindwein C']
source: Braz J Biol. 2015 Nov;75(4 Suppl 1):1-8. doi: 10.1590/1519-6984.18513. Epub 2015 Nov 24.
title: Variation in the Abundance of Neotropical Bees in an Unpredictable Seasonal Environment.
authors: ['Knoll FR']
source: Neotrop Entomol. 2015 Nov 23.
title: Digital Gene Expression Analysis Based on De Novo Transcriptome Assembly Reveals New Genes Associated with Floral Organ Differentiation of the Orchid Plant Cymbidium ensifolium.
authors: ['Yang F', 'Zhu G']
source: PLoS One. 2015 Nov 18;10(11):e0142434. doi: 10.1371/journal.pone.0142434. eCollection 2015.
title: LONG-TERM CONSERVATION OF PROTOCORMS OF Brassavola nodosa (L) LIND. (ORCHIDACEAE): EFFECT OF ABA AND A RANGE OF CRYOCONSERVATION TECHNIQUES.
authors: ['Mata-Rosas M', 'Lastre-Puertos E']
source: Cryo Letters. 2015 Sep-Oct;36(5):289-98.
title: Cuticular Hydrocarbons as Potential Close Range Recognition Cues in Orchid Bees.
authors: ['Pokorny T', 'Ramirez SR', 'Weber MG', 'Eltz T']
source: J Chem Ecol. 2015 Dec;41(12):1080-94. doi: 10.1007/s10886-015-0647-x. Epub 2015 Nov 14.
title: Bilobate leaves of Bauhinia (Leguminosae, Caesalpinioideae, Cercideae) from the middle Miocene of Fujian Province, southeastern China and their biogeographic implications.
authors: ['Lin Y', 'Wong WO', 'Shi G', 'Shen S', 'Li Z']
source: BMC Evol Biol. 2015 Nov 16;15(1):252. doi: 10.1186/s12862-015-0540-9.
title: Functional Significance of Labellum Pattern Variation in a Sexually Deceptive Orchid (Ophrys heldreichii): Evidence of Individual Signature Learning Effects.
authors: ['Stejskal K', 'Streinzer M', 'Dyer A', 'Paulus HF', 'Spaethe J']
source: PLoS One. 2015 Nov 16;10(11):e0142971. doi: 10.1371/journal.pone.0142971. eCollection 2015.
title: Centralization of cleft care in the UK. Part 6: a tale of two studies.
authors: ['Ness AR', 'Wills AK', 'Waylen A', 'Al-Ghatam R', 'Jones TE', 'Preston R', 'Ireland AJ', 'Persson M', 'Smallridge J', 'Hall AJ', 'Sell D', 'Sandy JR']
source: Orthod Craniofac Res. 2015 Nov;18 Suppl 2:56-62. doi: 10.1111/ocr.12111.
title: The Cleft Care UK study. Part 4: perceptual speech outcomes.
authors: ['Sell D', 'Mildinhall S', 'Albery L', 'Wills AK', 'Sandy JR', 'Ness AR']
source: Orthod Craniofac Res. 2015 Nov;18 Suppl 2:36-46. doi: 10.1111/ocr.12112.
title: A cross-sectional survey of 5-year-old children with non-syndromic unilateral cleft lip and palate: the Cleft Care UK study. Part 1: background and methodology.
authors: ['Persson M', 'Sandy JR', 'Waylen A', 'Wills AK', 'Al-Ghatam R', 'Ireland AJ', 'Hall AJ', 'Hollingworth W', 'Jones T', 'Peters TJ', 'Preston R', 'Sell D', 'Smallridge J', 'Worthington H', 'Ness AR']
source: Orthod Craniofac Res. 2015 Nov;18 Suppl 2:1-13. doi: 10.1111/ocr.12104.
title: Seven New Complete Plastome Sequences Reveal Rampant Independent Loss of the ndh Gene Family across Orchids and Associated Instability of the Inverted Repeat/Small Single-Copy Region Boundaries.
authors: ['Kim HT', 'Kim JS', 'Moore MJ', 'Neubig KM', 'Williams NH', 'Whitten WM', 'Kim JH']
source: PLoS One. 2015 Nov 11;10(11):e0142215. doi: 10.1371/journal.pone.0142215. eCollection 2015.
title: Orchid Species Richness along Elevational and Environmental Gradients in Yunnan, China.
authors: ['Zhang SB', 'Chen WY', 'Huang JL', 'Bi YF', 'Yang XF']
source: PLoS One. 2015 Nov 10;10(11):e0142621. doi: 10.1371/journal.pone.0142621. eCollection 2015.
title: Severe outbreeding and inbreeding depression maintain mating system differentiation in Epipactis (Orchidaceae).
authors: ['Brys R', 'Jacquemyn H']
source: J Evol Biol. 2015 Nov 9. doi: 10.1111/jeb.12787.
title: Simultaneous detection of Cymbidium mosaic virus and Odontoglossum ringspot virus in orchids using multiplex RT-PCR.
authors: ['Kim SM', 'Choi SH']
source: Virus Genes. 2015 Dec;51(3):417-22. doi: 10.1007/s11262-015-1258-x. Epub 2015 Nov 5.
title: Analysis of the TCP genes expressed in the inflorescence of the orchid Orchis italica.
authors: ['De Paolo S', 'Gaudio L', 'Aceto S']
source: Sci Rep. 2015 Nov 4;5:16265. doi: 10.1038/srep16265.
title: RNA-Seq SSRs of Moth Orchid and Screening for Molecular Markers across Genus Phalaenopsis (Orchidaceae).
authors: ['Tsai CC', 'Shih HC', 'Wang HV', 'Lin YS', 'Chang CH', 'Chiang YC', 'Chou CH']
source: PLoS One. 2015 Nov 2;10(11):e0141761. doi: 10.1371/journal.pone.0141761. eCollection 2015.
title: Mapping Adolescent Cancer Services: How Do Young People, Their Families, and Staff Describe Specialized Cancer Care in England?
authors: ['Vindrola-Padros C', 'Taylor RM', 'Lea S', 'Hooker L', 'Pearce S', 'Whelan J', 'Gibson F']
source: Cancer Nurs. 2015 Oct 28.
title: A putative miR172-targeted CeAPETALA2-like gene is involved in floral patterning regulation of the orchid Cymbidium ensifolium.
authors: ['Yang FX', 'Zhu GF', 'Wang Z', 'Liu HL', 'Huang D']
source: Genet Mol Res. 2015 Oct 5;14(4):12049-61. doi: 10.4238/2015.October.5.18.
title: Functional Characterization of PhapLEAFY, a FLORICAULA/LEAFY Ortholog in Phalaenopsis aphrodite.
authors: ['Jang S']
source: Plant Cell Physiol. 2015 Nov;56(11):2234-47. doi: 10.1093/pcp/pcv130. Epub 2015 Oct 22.
title: Evolutionary history of PEPC genes in green plants: Implications for the evolution of CAM in orchids.
authors: ['Deng H', 'Zhang LS', 'Zhang GQ', 'Zheng BQ', 'Liu ZJ', 'Wang Y']
source: Mol Phylogenet Evol. 2016 Jan;94(Pt B):559-64. doi: 10.1016/j.ympev.2015.10.007. Epub 2015 Oct 19.
title: Three new alkaloids and three new phenolic glycosides from Liparis odorata.
authors: ['Jiang P', 'Liu H', 'Xu X', 'Liu B', 'Zhang D', 'Lai X', 'Zhu G', 'Xu P', 'Li B']
source: Fitoterapia. 2015 Dec;107:63-8. doi: 10.1016/j.fitote.2015.10.003. Epub 2015 Oct 19.
title: Microsatellite-based genetic diversity patterns in disjunct populations of a rare orchid.
authors: ['Pandey M', 'Richards M', 'Sharma J']
source: Genetica. 2015 Oct 20.
title: Effects of fusaric acid treatment on the protocorm-like bodies of Dendrobium sonia-28.
authors: ['Dehgahi R', 'Zakaria L', 'Mohamad A', 'Joniyas A', 'Subramaniam S']
source: Protoplasma. 2015 Oct 15.
title: Genes are information, so information theory is coming to the aid of evolutionary biology.
authors: ['Sherwin WB']
source: Mol Ecol Resour. 2015 Nov;15(6):1259-61. doi: 10.1111/1755-0998.12458.
title: A Comprehensive Review of the Cosmeceutical Benefits of Vanda Species (Orchidaceae).
authors: ['Hadi H', 'Razali SN', 'Awadh AI']
source: Nat Prod Commun. 2015 Aug;10(8):1483-8.
title: The C-Terminal Sequence and PI motif of the Orchid (Oncidium Gower Ramsey) PISTILLATA (PI) Ortholog Determine its Ability to Bind AP3 Orthologs and Enter the Nucleus to Regulate Downstream Genes Controlling Petal and Stamen Formation.
authors: ['Mao WT', 'Hsu HF', 'Hsu WH', 'Li JY', 'Lee YI', 'Yang CH']
source: Plant Cell Physiol. 2015 Nov;56(11):2079-99. doi: 10.1093/pcp/pcv129. Epub 2015 Sep 30.
title: Alternative translation initiation codons for the plastid maturase MatK: unraveling the pseudogene misconception in the Orchidaceae.
authors: ['Barthet MM', 'Moukarzel K', 'Smith KN', 'Patel J', 'Hilu KW']
source: BMC Evol Biol. 2015 Sep 29;15:210. doi: 10.1186/s12862-015-0491-1.
title: A dual functional probe for "turn-on" fluorescence response of Pb(2+) and colorimetric detection of Cu(2+) based on a rhodamine derivative in aqueous media.
authors: ['Li M', 'Jiang XJ', 'Wu HH', 'Lu HL', 'Li HY', 'Xu H', 'Zang SQ', 'Mak TC']
source: Dalton Trans. 2015 Oct 21;44(39):17326-34. doi: 10.1039/c5dt02731d. Epub 2015 Sep 21.
title: Mining from transcriptomes: 315 single-copy orthologous genes concatenated for the phylogenetic analyses of Orchidaceae.
authors: ['Deng H', 'Zhang GQ', 'Lin M', 'Wang Y', 'Liu ZJ']
source: Ecol Evol. 2015 Sep;5(17):3800-7. doi: 10.1002/ece3.1642. Epub 2015 Aug 20.
title: Orchid conservation in the biodiversity hotspot of southwestern China.
authors: ['Liu Q', 'Chen J', 'Corlett RT', 'Fan X', 'Yu D', 'Yang H', 'Gao J']
source: Conserv Biol. 2015 Sep 15. doi: 10.1111/cobi.12584.
title: Biochemical characterization of embryogenic calli of Vanilla planifolia in response to two years of thidiazuron treatment.
authors: ['Kodja H', 'Noirot M', 'Khoyratty SS', 'Limbada H', 'Verpoorte R', 'Palama TL']
source: Plant Physiol Biochem. 2015 Nov;96:337-44. doi: 10.1016/j.plaphy.2015.08.017. Epub 2015 Aug 28.
title: Vanda roxburghii: an experimental evaluation of antinociceptive properties of a traditional epiphytic medicinal orchid in animal models.
authors: ['Uddin MJ', 'Rahman MM', 'Abdullah-Al-Mamun M', 'Sadik G']
source: BMC Complement Altern Med. 2015 Sep 3;15:305. doi: 10.1186/s12906-015-0833-y.
title: Rapid evolution of chemosensory receptor genes in a pair of sibling species of orchid bees (Apidae: Euglossini).
authors: ['Brand P', 'Ramirez SR', 'Leese F', 'Quezada-Euan JJ', 'Tollrian R', 'Eltz T']
source: BMC Evol Biol. 2015 Aug 28;15:176. doi: 10.1186/s12862-015-0451-9.
title: Changes in Orchid Bee Communities Across Forest-Agroecosystem Boundaries in Brazilian Atlantic Forest Landscapes.
authors: ['Aguiar WM', 'Sofia SH', 'Melo GA', 'Gaglianone MC']
source: Environ Entomol. 2015 Dec;44(6):1465-71. doi: 10.1093/ee/nvv130. Epub 2015 Aug 11.
title: Orchid conservation: making the links.
authors: ['Fay MF', 'Pailler T', 'Dixon KW']
source: Ann Bot. 2015 Sep;116(3):377-9. doi: 10.1093/aob/mcv142.
title: Orchid phylogenomics and multiple drivers of their extraordinary diversification.
authors: ['Givnish TJ', 'Spalink D', 'Ames M', 'Lyon SP', 'Hunter SJ', 'Zuluaga A', 'Iles WJ', 'Clements MA', 'Arroyo MT', 'Leebens-Mack J', 'Endara L', 'Kriebel R', 'Neubig KM', 'Whitten WM', 'Williams NH', 'Cameron KM']
source: Proc Biol Sci. 2015 Sep 7;282(1814). doi: 10.1098/rspb.2015.1553.
title: Photoprotection related to xanthophyll cycle pigments in epiphytic orchids acclimated at different light microenvironments in two tropical dry forests of the Yucatan Peninsula, Mexico.
authors: ['de la Rosa-Manzano E', 'Andrade JL', 'Garcia-Mendoza E', 'Zotz G', 'Reyes-Garcia C']
source: Planta. 2015 Dec;242(6):1425-38. doi: 10.1007/s00425-015-2383-4. Epub 2015 Aug 25.
title: Mycorrhizal fungi isolated from native terrestrial orchids of pristine regions in Cordoba (Argentina).
authors: ['Fernandez Di Pardo A', 'Chiocchio VM', 'Barrera V', 'Colombo RP', 'Martinez AE', 'Gasoni L', 'Godeas AM']
source: Rev Biol Trop. 2015 Mar;63(1):275-83.
title: Orchid-pollinator interactions and potential vulnerability to biological invasion.
authors: ['Chupp AD', 'Battaglia LL', 'Schauber EM', 'Sipes SD']
source: AoB Plants. 2015 Aug 17;7. pii: plv099. doi: 10.1093/aobpla/plv099.
title: Germination and seedling establishment in orchids: a complex of requirements.
authors: ['Rasmussen HN', 'Dixon KW', 'Jersakova J', 'Tesitelova T']
source: Ann Bot. 2015 Sep;116(3):391-402. doi: 10.1093/aob/mcv087. Epub 2015 Aug 12.
title: Capsule formation and asymbiotic seed germination in some hybrids of Phalaenopsis, influenced by pollination season and capsule maturity.
authors: ['Balilashaki K', 'Gantait S', 'Naderi R', 'Vahedi M']
source: Physiol Mol Biol Plants. 2015 Jul;21(3):341-7. doi: 10.1007/s12298-015-0309-z. Epub 2015 Jul 7.
title: dsRNA silencing of an R2R3-MYB transcription factor affects flower cell shape in a Dendrobium hybrid.
authors: ['Lau SE', 'Schwarzacher T', 'Othman RY', 'Harikrishna JA']
source: BMC Plant Biol. 2015 Aug 11;15:194. doi: 10.1186/s12870-015-0577-3.
title: Clinical and echocardiographic characteristics for differentiating between transthyretin-related and light-chain cardiac amyloidoses.
authors: ['Mori M', 'An Y', 'Katayama O', 'Kitagawa T', 'Sasaki Y', 'Onaka T', 'Yonezawa A', 'Murata K', 'Yokota T', 'Ando K', 'Imada K']
source: Ann Hematol. 2015 Nov;94(11):1885-90. doi: 10.1007/s00277-015-2466-0. Epub 2015 Aug 8.
title: First record of the orchid bee genus Eufriesea Cockerell (Hymenoptera: Apidae: Euglossini) in the United States.
authors: ['Griswold T', 'Herndon JD', 'Gonzalez VH']
source: Zootaxa. 2015 May 15;3957(3):342-6. doi: 10.11646/zootaxa.3957.3.7.
title: Thuniopsis: A New Orchid Genus and Phylogeny of the Tribe Arethuseae (Orchidaceae).
authors: ['Li L', 'Ye DP', 'Niu M', 'Yan HF', 'Wen TL', 'Li SJ']
source: PLoS One. 2015 Aug 5;10(8):e0132777. doi: 10.1371/journal.pone.0132777. eCollection 2015.
title: Additive effects of pollinators and herbivores result in both conflicting and reinforcing selection on floral traits.
authors: ['Sletvold N', 'Moritz KK', 'Agren J']
source: Ecology. 2015 Jan;96(1):214-21.
title: Terrestrial orchids in a tropical forest: best sites for abundance differ from those for reproduction.
authors: ['Whitman M', 'Ackerman JD']
source: Ecology. 2015 Mar;96(3):693-704.
title: Spiranthes sinensis Suppresses Production of Pro-Inflammatory Mediators by Down-Regulating the NF-kappaB Signaling Pathway and Up-Regulating HO-1/Nrf2 Anti-Oxidant Protein.
authors: ['Shie PH', 'Huang SS', 'Deng JS', 'Huang GJ']
source: Am J Chin Med. 2015;43(5):969-89. doi: 10.1142/S0192415X15500561. Epub 2015 Jul 30.
title: Phosphodiesterase inhibitor, pentoxifylline enhances anticancer activity of histone deacetylase inhibitor, MS-275 in human breast cancer in vitro and in vivo.
authors: ['Nidhyanandan S', 'Boreddy TS', 'Chandrasekhar KB', 'Reddy ND', 'Kulkarni NM', 'Narayanan S']
source: Eur J Pharmacol. 2015 Oct 5;764:508-19. doi: 10.1016/j.ejphar.2015.07.048. Epub 2015 Jul 21.
title: The effects of smoke derivatives on in vitro seed germination and development of the leopard orchid Ansellia africana.
authors: ['Papenfus HB', 'Naidoo D', 'Posta M', 'Finnie JF', 'Van Staden J']
source: Plant Biol (Stuttg). 2015 Jul 23. doi: 10.1111/plb.12374.
title: DhEFL2, 3 and 4, the three EARLY FLOWERING4-like genes in a Doritaenopsis hybrid regulate floral transition.
authors: ['Chen W', 'Qin Q', 'Zhang C', 'Zheng Y', 'Wang C', 'Zhou M', 'Cui Y']
source: Plant Cell Rep. 2015 Dec;34(12):2027-41. doi: 10.1007/s00299-015-1848-z. Epub 2015 Jul 24.
title: Spatial variation in pollinator-mediated selection on phenology, floral display and spur length in the orchid Gymnadenia conopsea.
authors: ['Chapurlat E', 'Agren J', 'Sletvold N']
source: New Phytol. 2015 Dec;208(4):1264-75. doi: 10.1111/nph.13555. Epub 2015 Jul 15.
title: Building the Evidence for Nursing Practice: Learning from a Structured Review of SIOP Abstracts, 2003-2012.
authors: ['Gibson F', 'Vindrola-Padros C', 'Hinds P', 'Nolbris MJ', 'Kelly D', 'Kelly P', 'Ruccione K', 'Soanes L', 'Woodgate RL', 'Baggott C']
source: Pediatr Blood Cancer. 2015 Dec;62(12):2172-6. doi: 10.1002/pbc.25652. Epub 2015 Jul 14.
title: Genetic variability within and among populations of an invasive, exotic orchid.
authors: ['Ueno S', 'Rodrigues JF', 'Alves-Pereira A', 'Pansarin ER', 'Veasey EA']
source: AoB Plants. 2015 Jul 10;7. pii: plv077. doi: 10.1093/aobpla/plv077.
title: Hydrolysis of clavulanate by Mycobacterium tuberculosis beta-lactamase BlaC harboring a canonical SDN motif.
authors: ['Soroka D', 'Li de la Sierra-Gallay I', 'Dubee V', 'Triboulet S', 'van Tilbeurgh H', 'Compain F', 'Ballell L', 'Barros D', 'Mainardi JL', 'Hugonnet JE', 'Arthur M']
source: Antimicrob Agents Chemother. 2015 Sep;59(9):5714-20. doi: 10.1128/AAC.00598-15. Epub 2015 Jul 6.
title: Experimental fertilization increases amino acid content in floral nectar, fruit set and degree of selfing in the orchid Gymnadenia conopsea.
authors: ['Gijbels P', 'Ceulemans T', 'Van den Ende W', 'Honnay O']
source: Oecologia. 2015 Nov;179(3):785-95. doi: 10.1007/s00442-015-3381-8. Epub 2015 Jul 7.
title: Seasonal cycles, phylogenetic assembly, and functional diversity of orchid bee communities.
authors: ['Ramirez SR', 'Hernandez C', 'Link A', 'Lopez-Uribe MM']
source: Ecol Evol. 2015 May;5(9):1896-907. doi: 10.1002/ece3.1466. Epub 2015 Apr 13.
title: Migration of nonylphenol from food-grade plastic is toxic to the coral reef fish species Pseudochromis fridmani.
authors: ['Hamlin HJ', 'Marciano K', 'Downs CA']
source: Chemosphere. 2015 Nov;139:223-8. doi: 10.1016/j.chemosphere.2015.06.032. Epub 2015 Jun 29.
title: Responses to simulated nitrogen deposition by the neotropical epiphytic orchid Laelia speciosa.
authors: ['Diaz-Alvarez EA', 'Lindig-Cisneros R', 'de la Barrera E']
source: PeerJ. 2015 Jun 23;3:e1021. doi: 10.7717/peerj.1021. eCollection 2015.
title: Applicability of ISSR and DAMD markers for phyto-molecular characterization and association with some important biochemical traits of Dendrobium nobile, an endangered medicinal orchid.
authors: ['Bhattacharyya P', 'Kumaria S', 'Tandon P']
source: Phytochemistry. 2015 Sep;117:306-16. doi: 10.1016/j.phytochem.2015.06.022. Epub 2015 Jun 27.
title: The importance of associations with saprotrophic non-Rhizoctonia fungi among fully mycoheterotrophic orchids is currently under-estimated: novel evidence from sub-tropical Asia.
authors: ['Lee YI', 'Yang CK', 'Gebauer G']
source: Ann Bot. 2015 Sep;116(3):423-35. doi: 10.1093/aob/mcv085. Epub 2015 Jun 25.
title: Continent-wide distribution in mycorrhizal fungi: implications for the biogeography of specialized orchids.
authors: ['Davis BJ', 'Phillips RD', 'Wright M', 'Linde CC', 'Dixon KW']
source: Ann Bot. 2015 Sep;116(3):413-21. doi: 10.1093/aob/mcv084. Epub 2015 Jun 22.
title: Dynamic distribution and the role of abscisic acid during seed development of a lady's slipper orchid, Cypripedium formosanum.
authors: ['Lee YI', 'Chung MC', 'Yeung EC', 'Lee N']
source: Ann Bot. 2015 Sep;116(3):403-11. doi: 10.1093/aob/mcv079. Epub 2015 Jun 22.
title: Characterization of microsatellite loci for an Australian epiphytic orchid, Dendrobium calamiforme, using Illumina sequencing.
authors: ['Trapnell DW', 'Beasley RR', 'Lance SL', 'Field AR', 'Jones KL']
source: Appl Plant Sci. 2015 Jun 5;3(6). pii: apps.1500016. doi: 10.3732/apps.1500016. eCollection 2015 Jun.
title: Factors affecting reproductive success in three entomophilous orchid species in Hungary.
authors: ['Vojtko AE', 'Sonkoly J', 'Lukacs BA', 'Molnar V A']
source: Acta Biol Hung. 2015 Jun;66(2):231-41. doi: 10.1556/018.66.2015.2.9.
title: Pollination by sexual deception promotes outcrossing and mate diversity in self-compatible clonal orchids.
authors: ['Whitehead MR', 'Linde CC', 'Peakall R']
source: J Evol Biol. 2015 Aug;28(8):1526-41. doi: 10.1111/jeb.12673. Epub 2015 Jul 3.
title: Adding Biotic Interactions into Paleodistribution Models: A Host-Cleptoparasite Complex of Neotropical Orchid Bees.
authors: ['Silva DP', 'Varela S', 'Nemesio A', 'De Marco P Jr']
source: PLoS One. 2015 Jun 12;10(6):e0129890. doi: 10.1371/journal.pone.0129890. eCollection 2015.
title: Mapping of the Interaction Between Agrobacterium tumefaciens and Vanda Kasem's Delight Orchid Protocorm-Like Bodies.
authors: ['Gnasekaran P', 'Subramaniam S']
source: Indian J Microbiol. 2015 Sep;55(3):285-91. doi: 10.1007/s12088-015-0519-7. Epub 2015 Feb 25.
title: Spatial asymmetries in connectivity influence colonization-extinction dynamics.
authors: ['Acevedo MA', 'Fletcher RJ Jr', 'Tremblay RL', 'Melendez-Ackerman EJ']
source: Oecologia. 2015 Oct;179(2):415-24. doi: 10.1007/s00442-015-3361-z. Epub 2015 Jun 10.
title: Dendrobium micropropagation: a review.
authors: ['da Silva JA', 'Cardoso JC', 'Dobranszki J', 'Zeng S']
source: Plant Cell Rep. 2015 May;34(5):671-704.
title: Crystal structure of 2-(4-fluoro-3-methylphenyl)-5-{[(naphthalen-1-yl)oxy]methyl}-1,3,4-oxadiazole.
authors: ['Govindhan M', 'Subramanian K', 'Viswanathan V', 'Velmurugan D']
source: Acta Crystallogr E Crystallogr Commun. 2015 Mar 11;71(Pt 4):o229-30. doi: 10.1107/S2056989015004144. eCollection 2015 Apr 1.
title: Cymbidium chlorotic mosaic virus, a new sobemovirus isolated from a spring orchid (Cymbidium goeringii) in Japan.
authors: ['Kondo H', 'Takemoto S', 'Maruyama K', 'Chiba S', 'Andika IB', 'Suzuki N']
source: Arch Virol. 2015 Aug;160(8):2099-104. doi: 10.1007/s00705-015-2460-9. Epub 2015 May 31.
title: Effect of pesticide exposure on immunological, hematological and biochemical parameters in Thai orchid farmers - a cross-sectional study.
authors: ['Aroonvilairat S', 'Kespichayawattana W', 'Sornprachum T', 'Chaisuriya P', 'Siwadune T', 'Ratanabanangkoon K']
source: Int J Environ Res Public Health. 2015 May 27;12(6):5846-61. doi: 10.3390/ijerph120605846.
title: Phylogeny and classification of the East Asian Amitostigma alliance (Orchidaceae: Orchideae) based on six DNA markers.
authors: ['Tang Y', 'Yukawa T', 'Bateman RM', 'Jiang H', 'Peng H']
source: BMC Evol Biol. 2015 May 26;15:96. doi: 10.1186/s12862-015-0376-3.
title: Combinations of beta-Lactam Antibiotics Currently in Clinical Trials Are Efficacious in a DHP-I-Deficient Mouse Model of Tuberculosis Infection.
authors: ['Rullas J', 'Dhar N', 'McKinney JD', 'Garcia-Perez A', 'Lelievre J', 'Diacon AH', 'Hugonnet JE', 'Arthur M', 'Angulo-Barturen I', 'Barros-Aguirre D', 'Ballell L']
source: Antimicrob Agents Chemother. 2015 Aug;59(8):4997-9. doi: 10.1128/AAC.01063-15. Epub 2015 May 18.
title: A de novo floral transcriptome reveals clues into Phalaenopsis orchid flower development.
authors: ['Huang JZ', 'Lin CP', 'Cheng TC', 'Chang BC', 'Cheng SY', 'Chen YW', 'Lee CY', 'Chin SW', 'Chen FC']
source: PLoS One. 2015 May 13;10(5):e0123474. doi: 10.1371/journal.pone.0123474. eCollection 2015.
title: Mycorrhizal diversity, seed germination and long-term changes in population size across nine populations of the terrestrial orchid Neottia ovata.
authors: ['Jacquemyn H', 'Waud M', 'Merckx VS', 'Lievens B', 'Brys R']
source: Mol Ecol. 2015 Jul;24(13):3269-80. doi: 10.1111/mec.13236. Epub 2015 Jun 5.
title: Potential osteogenic activity of ethanolic extract and oxoflavidin isolated from Pholidota articulata Lindley.
authors: ['Sharma C', 'Dixit M', 'Singh R', 'Agrawal M', 'Mansoori MN', 'Kureel J', 'Singh D', 'Narender T', 'Arya KR']
source: J Ethnopharmacol. 2015 Jul 21;170:57-65. doi: 10.1016/j.jep.2015.04.045. Epub 2015 May 8.
title: Transitions between self-compatibility and self-incompatibility and the evolution of reproductive isolation in the large and diverse tropical genus Dendrobium (Orchidaceae).
authors: ['Pinheiro F', 'Cafasso D', 'Cozzolino S', 'Scopece G']
source: Ann Bot. 2015 Sep;116(3):457-67. doi: 10.1093/aob/mcv057. Epub 2015 May 7.
title: A new species of Cnemaspis (Sauria: Gekkonidae) from Northern Karnataka, India.
authors: ['Srinivasulu C', 'Kumar GC', 'Srinivasulu B']
source: Zootaxa. 2015 Apr 14;3947(1):85-98. doi: 10.11646/zootaxa.3947.1.5.
title: Visual profile of students in integrated schools in Malawi.
authors: ['Kaphle D', 'Marasini S', 'Kalua K', 'Reading A', 'Naidoo KS']
source: Clin Exp Optom. 2015 Jul;98(4):370-4. doi: 10.1111/cxo.12269. Epub 2015 May 5.
title: A direct assessment of realized seed and pollen flow within and between two isolated populations of the food-deceptive orchid Orchis mascula.
authors: ['Helsen K', 'Meekers T', 'Vranckx G', 'Roldan-Ruiz I', 'Vandepitte K', 'Honnay O']
source: Plant Biol (Stuttg). 2015 May 4. doi: 10.1111/plb.12342.
title: Challenges of flow-cytometric estimation of nuclear genome size in orchids, a plant group with both whole-genome and progressively partial endoreplication.
authors: ['Travnicek P', 'Ponert J', 'Urfus T', 'Jersakova J', 'Vrana J', 'Hribova E', 'Dolezel J', 'Suda J']
source: Cytometry A. 2015 Oct;87(10):958-66. doi: 10.1002/cyto.a.22681. Epub 2015 Apr 30.
title: Transcriptome-wide analysis of the MADS-box gene family in the orchid Erycina pusilla.
authors: ['Lin CS', 'Hsu CT', 'Liao C', 'Chang WJ', 'Chou ML', 'Huang YT', 'Chen JJ', 'Ko SS', 'Chan MT', 'Shih MC']
source: Plant Biotechnol J. 2015 Apr 28. doi: 10.1111/pbi.12383.
title: An informational diversity framework, illustrated with sexually deceptive orchids in early stages of speciation.
authors: ['Smouse PE', 'Whitehead MR', 'Peakall R']
source: Mol Ecol Resour. 2015 Nov;15(6):1375-84. doi: 10.1111/1755-0998.12422. Epub 2015 May 20.
title: A new myco-heterotrophic genus, Yunorchis, and the molecular phylogenetic relationships of the tribe Calypsoeae (Epidendroideae, Orchidaceae) inferred from plastid and nuclear DNA sequences.
authors: ['Zhang GQ', 'Li MH', 'Su YY', 'Chen LJ', 'Lan SR', 'Liu ZJ']
source: PLoS One. 2015 Apr 22;10(4):e0123382. doi: 10.1371/journal.pone.0123382. eCollection 2015.
title: Phylogenetic placement and taxonomy of the genus Hederorkis (Orchidaceae).
authors: ['Mytnik-Ejsmont J', 'Szlachetko DL', 'Baranow P', 'Jolliffe K', 'Gorniak M']
source: PLoS One. 2015 Apr 22;10(4):e0122306. doi: 10.1371/journal.pone.0122306. eCollection 2015.
title: Species distribution modelling for conservation of an endangered endemic orchid.
authors: ['Wang HH', 'Wonkka CL', 'Treglia ML', 'Grant WE', 'Smeins FE', 'Rogers WE']
source: AoB Plants. 2015 Apr 21;7. pii: plv039. doi: 10.1093/aobpla/plv039.
title: Floral miniaturisation and autogamy in boreal-arctic plants are epitomised by Iceland's most frequent orchid, Platanthera hyperborea.
authors: ['Bateman RM', 'Sramko G', 'Rudall PJ']
source: PeerJ. 2015 Apr 14;3:e894. doi: 10.7717/peerj.894. eCollection 2015.
title: Highly diversified fungi are associated with the achlorophyllous orchid Gastrodia flavilabella.
authors: ['Liu T', 'Li CM', 'Han YL', 'Chiang TY', 'Chiang YC', 'Sung HM']
source: BMC Genomics. 2015 Mar 14;16:185. doi: 10.1186/s12864-015-1422-7.
title: Floral nectary anatomy and ultrastructure in mycoheterotrophic plant, Epipogium aphyllum Sw. (Orchidaceae).
authors: ['Swieczkowska E', 'Kowalkowska AK']
source: ScientificWorldJournal. 2015;2015:201702. doi: 10.1155/2015/201702. Epub 2015 Mar 25.
title: The complete chloroplast genome sequence of Anoectochilus roxburghii.
authors: ['Yu CW', 'Lian Q', 'Wu KC', 'Yu SH', 'Xie LY', 'Wu ZJ']
source: Mitochondrial DNA. 2015 Apr 13:1-2.
title: Effects of droplet-vitrification cryopreservation based on physiological and antioxidant enzyme activities of Brassidium shooting star orchid.
authors: ['Rahmah S', 'Ahmad Mubbarakh S', 'Soo Ping K', 'Subramaniam S']
source: ScientificWorldJournal. 2015;2015:961793. doi: 10.1155/2015/961793. Epub 2015 Mar 11.
title: Pollinator behaviour on a food-deceptive orchid Calypso bulbosa and coflowering species.
authors: ['Tuomi J', 'Lamsa J', 'Wannas L', 'Abeli T', 'Jakalaniemi A']
source: ScientificWorldJournal. 2015;2015:482161. doi: 10.1155/2015/482161. Epub 2015 Mar 12.
title: Reticulate evolution and sea-level fluctuations together drove species diversification of slipper orchids (Paphiopedilum) in South-East Asia.
authors: ['Guo YY', 'Luo YB', 'Liu ZJ', 'Wang XQ']
source: Mol Ecol. 2015 Jun;24(11):2838-55. doi: 10.1111/mec.13189. Epub 2015 May 7.
title: Crystal structure of 2-{[(naphthalen-1-yl)oxy]methyl}-5-(2,4,5-trifluorophenyl)-1,3,4-oxadiazole.
authors: ['Govindhan M', 'Subramanian K', 'Viswanathan V', 'Velmurugan D']
source: Acta Crystallogr E Crystallogr Commun. 2015 Feb 21;71(Pt 3):o190-1. doi: 10.1107/S2056989015003205. eCollection 2015 Mar 1.
title: The effect of mealybug Pseudococcus longispinus (Targioni Tozzetti) infestation of different density on physiological responses of Phalaenopsis x hybridum 'Innocence'.
authors: ['Kot I', 'Kmiec K', 'Gorska-Drabik E', 'Golan K', 'Rubinowska K', 'Lagowska B']
source: Bull Entomol Res. 2015 Jun;105(3):373-80. doi: 10.1017/S000748531500022X. Epub 2015 Apr 1.
title: The Genome of Dendrobium officinale Illuminates the Biology of the Important Traditional Chinese Orchid Herb.
authors: ['Yan L', 'Wang X', 'Liu H', 'Tian Y', 'Lian J', 'Yang R', 'Hao S', 'Wang X', 'Yang S', 'Li Q', 'Qi S', 'Kui L', 'Okpekum M', 'Ma X', 'Zhang J', 'Ding Z', 'Zhang G', 'Wang W', 'Dong Y', 'Sheng J']
source: Mol Plant. 2015 Jun;8(6):922-34. doi: 10.1016/j.molp.2014.12.011. Epub 2014 Dec 24.
title: Recurrent polymorphic mating type variation in Madagascan species (Orchidaceae) exemplifies a high incidence of auto-pollination in tropical orchids.
authors: ['Gamisch A', 'Fischer GA', 'Comes HP']
source: Bot J Linn Soc. 2014 Jun;175(2):242-258. Epub 2014 May 20.
title: When stable-stage equilibrium is unlikely: integrating transient population dynamics improves asymptotic methods.
authors: ['Tremblay RL', 'Raventos J', 'Ackerman JD']
source: Ann Bot. 2015 Sep;116(3):381-90. doi: 10.1093/aob/mcv031. Epub 2015 Mar 26.
title: Antibiotic susceptibility pattern of Enterobacteriaceae and non-fermenter Gram-negative clinical isolates of microbial resource orchid.
authors: ['Hariharan P', 'Bharani T', 'Franklyne JS', 'Biswas P', 'Solanki SS', 'Paul-Satyaseela M']
source: J Nat Sci Biol Med. 2015 Jan-Jun;6(1):198-201. doi: 10.4103/0976-9668.149121.
title: Pollination system and the effect of inflorescence size on fruit set in the deceptive orchid Cephalanthera falcata.
authors: ['Suetsugu K', 'Naito RS', 'Fukushima S', 'Kawakita A', 'Kato M']
source: J Plant Res. 2015 Jul;128(4):585-94. doi: 10.1007/s10265-015-0716-9. Epub 2015 Mar 24.
title: Polysaccharide hydrogel combined with mesenchymal stem cells promotes the healing of corneal alkali burn in rats.
authors: ['Ke Y', 'Wu Y', 'Cui X', 'Liu X', 'Yu M', 'Yang C', 'Li X']
source: PLoS One. 2015 Mar 19;10(3):e0119725. doi: 10.1371/journal.pone.0119725. eCollection 2015.
title: Ex situ germination as a method for seed viability assessment in a peatland orchid, Platanthera blephariglottis.
authors: ['Lemay MA', 'De Vriendt L', 'Pellerin S', 'Poulin M']
source: Am J Bot. 2015 Mar;102(3):390-5. doi: 10.3732/ajb.1400441. Epub 2015 Mar 1.
title: Preliminary findings on identification of mycorrhizal fungi from diverse orchids in the Central Highlands of Madagascar.
authors: ['Yokoya K', 'Zettler LW', 'Kendon JP', 'Bidartondo MI', 'Stice AL', 'Skarha S', 'Corey LL', 'Knight AC', 'Sarasan V']
source: Mycorrhiza. 2015 Nov;25(8):611-25. doi: 10.1007/s00572-015-0635-6. Epub 2015 Mar 14.
title: Pollination biology in the dioecious orchid Catasetum uncatum: How does floral scent influence the behaviour of pollinators?
authors: ['Milet-Pinheiro P', 'Navarro DM', 'Dotterl S', 'Carvalho AT', 'Pinto CE', 'Ayasse M', 'Schlindwein C']
source: Phytochemistry. 2015 Aug;116:149-61. doi: 10.1016/j.phytochem.2015.02.027. Epub 2015 Mar 11.
title: The location and translocation of ndh genes of chloroplast origin in the Orchidaceae family.
authors: ['Lin CS', 'Chen JJ', 'Huang YT', 'Chan MT', 'Daniell H', 'Chang WJ', 'Hsu CT', 'Liao DC', 'Wu FH', 'Lin SY', 'Liao CF', 'Deyholos MK', 'Wong GK', 'Albert VA', 'Chou ML', 'Chen CY', 'Shih MC']
source: Sci Rep. 2015 Mar 12;5:9040. doi: 10.1038/srep09040.
title: Genetic structure is associated with phenotypic divergence in floral traits and reproductive investment in a high-altitude orchid from the Iron Quadrangle, southeastern Brazil.
authors: ['Leles B', 'Chaves AV', 'Russo P', 'Batista JA', 'Lovato MB']
source: PLoS One. 2015 Mar 10;10(3):e0120645. doi: 10.1371/journal.pone.0120645. eCollection 2015.
title: A molecular phylogeny of Aeridinae (Orchidaceae: Epidendroideae) inferred from multiple nuclear and chloroplast regions.
authors: ['Zou LH', 'Huang JX', 'Zhang GQ', 'Liu ZJ', 'Zhuang XY']
source: Mol Phylogenet Evol. 2015 Apr;85:247-54. doi: 10.1016/j.ympev.2015.02.014. Epub 2015 Feb 26.
title: Corrigendum: The genome sequence of the orchid Phalaenopsis equestris.
authors: ['Cai J', 'Liu X', 'Vanneste K', 'Proost S', 'Tsai WC', 'Liu KW', 'Chen LJ', 'He Y', 'Xu Q', 'Bian C', 'Zheng Z', 'Sun F', 'Liu W', 'Hsiao YY', 'Pan ZJ', 'Hsu CC', 'Yang YP', 'Hsu YC', 'Chuang YC', 'Dievart A', 'Dufayard JF', 'Xu X', 'Wang JY', 'Wang J', 'Xiao XJ', 'Zhao XM', 'Du R', 'Zhang GQ', 'Wang M', 'Su YY', 'Xie GC', 'Liu GH', 'Li LQ', 'Huang LQ', 'Luo YB', 'Chen HH', 'Van de Peer Y', 'Liu ZJ']
source: Nat Genet. 2015 Mar;47(3):304. doi: 10.1038/ng0315-304a.
title: Convergent losses of decay mechanisms and rapid turnover of symbiosis genes in mycorrhizal mutualists.
authors: ['Kohler A', 'Kuo A', 'Nagy LG', 'Morin E', 'Barry KW', 'Buscot F', 'Canback B', 'Choi C', 'Cichocki N', 'Clum A', 'Colpaert J', 'Copeland A', 'Costa MD', 'Dore J', 'Floudas D', 'Gay G', 'Girlanda M', 'Henrissat B', 'Herrmann S', 'Hess J', 'Hogberg N', 'Johansson T', 'Khouja HR', 'LaButti K', 'Lahrmann U', 'Levasseur A', 'Lindquist EA', 'Lipzen A', 'Marmeisse R', 'Martino E', 'Murat C', 'Ngan CY', 'Nehls U', 'Plett JM', 'Pringle A', 'Ohm RA', 'Perotto S', 'Peter M', 'Riley R', 'Rineau F', 'Ruytinx J', 'Salamov A', 'Shah F', 'Sun H', 'Tarkka M', 'Tritt A', 'Veneault-Fourrey C', 'Zuccaro A', 'Tunlid A', 'Grigoriev IV', 'Hibbett DS', 'Martin F']
source: Nat Genet. 2015 Apr;47(4):410-5. doi: 10.1038/ng.3223. Epub 2015 Feb 23.
title: Chemical and morphological filters in a specialized floral mimicry system.
authors: ['Martos F', 'Cariou ML', 'Pailler T', 'Fournel J', 'Bytebier B', 'Johnson SD']
source: New Phytol. 2015 Jul;207(1):225-34. doi: 10.1111/nph.13350. Epub 2015 Feb 20.
title: Modeling the two-locus architecture of divergent pollinator adaptation: how variation in SAD paralogs affects fitness and evolutionary divergence in sexually deceptive orchids.
authors: ['Xu S', 'Schluter PM']
source: Ecol Evol. 2015 Jan;5(2):493-502. doi: 10.1002/ece3.1378. Epub 2015 Jan 4.
title: Pollination ecology of two species of Elleanthus (Orchidaceae): novel mechanisms and underlying adaptations to hummingbird pollination.
authors: ['Nunes CE', 'Amorim FW', 'Mayer JL', 'Sazima M']
source: Plant Biol (Stuttg). 2015 Feb 11. doi: 10.1111/plb.12312.
title: Sequential decarboxylative azide-alkyne cycloaddition and dehydrogenative coupling reactions: one-pot synthesis of polycyclic fused triazoles.
authors: ['Bharathimohan K', 'Ponpandian T', 'Ahamed AJ', 'Bhuvanesh N']
source: Beilstein J Org Chem. 2014 Dec 17;10:3031-7. doi: 10.3762/bjoc.10.321. eCollection 2014.
title: Are tetraploids more successful? Floral signals, reproductive success and floral isolation in mixed-ploidy populations of a terrestrial orchid.
authors: ['Gross K', 'Schiestl FP']
source: Ann Bot. 2015 Feb;115(2):263-73. doi: 10.1093/aob/mcu244.
title: Setting the pace of life: membrane composition of flight muscle varies with metabolic rate of hovering orchid bees.
authors: ['Rodriguez E', 'Weber JM', 'Page B', 'Roubik DW', 'Suarez RK', 'Darveau CA']
source: Proc Biol Sci. 2015 Mar 7;282(1802). pii: 20142232. doi: 10.1098/rspb.2014.2232.
title: Mycorrhizal ecology and evolution: the past, the present, and the future.
authors: ['van der Heijden MG', 'Martin FM', 'Selosse MA', 'Sanders IR']
source: New Phytol. 2015 Mar;205(4):1406-23. doi: 10.1111/nph.13288. Epub 2015 Feb 2.
title: The orchid-bee fauna (Hymenoptera: Apidae) of a forest remnant in the southern portion of the Brazilian Amazon.
authors: ['Santos Junior JE', 'Ferrari RR', 'Nemesio A']
source: Braz J Biol. 2014 Aug;74(3 Suppl 1):S184-90. doi: 10.1590/1519-6984.25712.
title: Is the "Centro de Endemismo Pernambuco" a biodiversity hotspot for orchid bees?
authors: ['Nemesio A', 'Santos Junior JE']
source: Braz J Biol. 2014 Aug;74(3 Suppl 1):S78-92. doi: 10.1590/1519-6984.26412.
title: Sampling a biodiversity hotspot: the orchid-bee fauna (Hymenoptera: Apidae) of Tarapoto, northeastern Peru, the richest and most diverse site of the Neotropics.
authors: ['Nemesio A', 'Rasmussen C']
source: Braz J Biol. 2014 Aug;74(3 Suppl 1):S33-44. doi: 10.1590/1519-6984.20412.
title: Mismatch in the distribution of floral ecotypes and pollinators: insights into the evolution of sexually deceptive orchids.
authors: ['Phillips RD', 'Bohman B', 'Anthony JM', 'Krauss SL', 'Dixon KW', 'Peakall R']
source: J Evol Biol. 2015 Mar;28(3):601-12. doi: 10.1111/jeb.12593. Epub 2015 Feb 20.
title: Mycorrhizal networks and coexistence in species-rich orchid communities.
authors: ['Jacquemyn H', 'Brys R', 'Waud M', 'Busschaert P', 'Lievens B']
source: New Phytol. 2015 May;206(3):1127-34. doi: 10.1111/nph.13281. Epub 2015 Jan 23.
title: Two widespread green Neottia species (Orchidaceae) show mycorrhizal preference for Sebacinales in various habitats and ontogenetic stages.
authors: ['Tesitelova T', 'Kotilinek M', 'Jersakova J', 'Joly FX', 'Kosnar J', 'Tatarenko I', 'Selosse MA']
source: Mol Ecol. 2015 Mar;24(5):1122-34. doi: 10.1111/mec.13088. Epub 2015 Feb 16.
title: Genetic stability and phytochemical analysis of the in vitro regenerated plants of Dendrobium nobile Lindl., an endangered medicinal orchid.
authors: ['Bhattacharyya P', 'Kumaria S', 'Diengdoh R', 'Tandon P']
source: Meta Gene. 2014 Jul 15;2:489-504. doi: 10.1016/j.mgene.2014.06.003. eCollection 2014 Dec.
title: Complete chloroplast genome of the orchid Cattleya crispata (Orchidaceae: Laeliinae), a Neotropical rupiculous species.
authors: ['da Rocha Perini V', 'Leles B', 'Furtado C', 'Prosdocimi F']
source: Mitochondrial DNA. 2015 Jan 20:1-3.
title: RNA/DNA co-analysis from human skin and contact traces--results of a sixth collaborative EDNAP exercise.
authors: ['Haas C', 'Hanson E', 'Banemann R', 'Bento AM', 'Berti A', 'Carracedo A', 'Courts C', 'De Cock G', 'Drobnic K', 'Fleming R', 'Franchi C', 'Gomes I', 'Hadzic G', 'Harbison SA', 'Hjort B', 'Hollard C', 'Hoff-Olsen P', 'Keyser C', 'Kondili A', 'Maronas O', 'McCallum N', 'Miniati P', 'Morling N', 'Niederstatter H', 'Noel F', 'Parson W', 'Porto MJ', 'Roeder AD', 'Sauer E', 'Schneider PM', 'Shanthan G', 'Sijen T', 'Syndercombe Court D', 'Turanska M', 'van den Berge M', 'Vennemann M', 'Vidaki A', 'Zatkalikova L', 'Ballantyne J']
source: Forensic Sci Int Genet. 2015 May;16:139-47. doi: 10.1016/j.fsigen.2015.01.002. Epub 2015 Jan 7.
title: Ethylene and pollination decrease transcript abundance of an ethylene receptor gene in Dendrobium petals.
authors: ['Thongkum M', 'Burns P', 'Bhunchoth A', 'Warin N', 'Chatchawankanphanich O', 'van Doorn WG']
source: J Plant Physiol. 2015 Mar 15;176:96-100. doi: 10.1016/j.jplph.2014.12.008. Epub 2014 Dec 18.
title: Pollen limitation and the contribution of autonomous selfing to fruit and seed set in a rewarding orchid.
authors: ['Jacquemyn H', 'Brys R']
source: Am J Bot. 2015 Jan;102(1):67-72. doi: 10.3732/ajb.1400449. Epub 2014 Dec 22.
title: In vitro propagation of Paphiopedilum orchids.
authors: ['Zeng S', 'Huang W', 'Wu K', 'Zhang J', 'Teixeira da Silva JA', 'Duan J']
source: Crit Rev Biotechnol. 2015 Sep 11:1-14.
title: Vanillin-bioconversion and bioengineering of the most popular plant flavor and its de novo biosynthesis in the vanilla orchid.
authors: ['Gallage NJ', 'Moller BL']
source: Mol Plant. 2015 Jan;8(1):40-57. doi: 10.1016/j.molp.2014.11.008. Epub 2014 Dec 11.
title: Crystallographic investigations of select cathinones: emerging illicit street drugs known as 'bath salts'.
authors: ['Wood MR', 'Lalancette RA', 'Bernal I']
source: Acta Crystallogr C Struct Chem. 2015 Jan;71(Pt 1):32-8. doi: 10.1107/S2053229614025637. Epub 2015 Jan 1.
title: A genome to unveil the mysteries of orchids.
authors: ['Albert VA', 'Carretero-Paulet L']
source: Nat Genet. 2015 Jan;47(1):3-4. doi: 10.1038/ng.3179.
title: Temporal patterns of orchid mycorrhizal fungi in meadows and forests as revealed by 454 pyrosequencing.
authors: ['Oja J', 'Kohout P', 'Tedersoo L', 'Kull T', 'Koljalg U']
source: New Phytol. 2015 Mar;205(4):1608-18. doi: 10.1111/nph.13223. Epub 2014 Dec 24.
title: Interests shape how adolescents pay attention: the interaction of motivation and top-down attentional processes in biasing sensory activations to anticipated events.
authors: ['Banerjee S', 'Frey HP', 'Molholm S', 'Foxe JJ']
source: Eur J Neurosci. 2015 Mar;41(6):818-34. doi: 10.1111/ejn.12810. Epub 2014 Dec 26.
title: Are carbon and nitrogen exchange between fungi and the orchid Goodyera repens affected by irradiance?
authors: ['Liebel HT', 'Bidartondo MI', 'Gebauer G']
source: Ann Bot. 2015 Feb;115(2):251-61. doi: 10.1093/aob/mcu240. Epub 2014 Dec 22.
title: Taxonomic notes and distribution extension of Durga Das's leaf-nosed bat Hipposideros durgadasi Khajuria, 1970 (Chiroptera: Hipposideridae) from south India.
authors: ['Kaur H', 'Chelmala S', 'Srinivasulu B', 'Shah TA', 'Devender G', 'Srinivasulu A']
source: Biodivers Data J. 2014 Nov 20;(2):e4127. doi: 10.3897/BDJ.2.e4127. eCollection 2014.
title: Characterization and expression analysis of somatic embryogenesis receptor-like kinase genes from Phalaenopsis.
authors: ['Huang YW', 'Tsai YJ', 'Chen FC']
source: Genet Mol Res. 2014 Dec 18;13(4):10690-703. doi: 10.4238/2014.December.18.11.
title: [Effects of different fungi on symbiotic seed germination of two Dendrobium species].
authors: ['Zi XM', 'Gao JY']
source: Zhongguo Zhong Yao Za Zhi. 2014 Sep;39(17):3238-44.
title: Conditioned Medium Reconditions Hippocampal Neurons against Kainic Acid Induced Excitotoxicity: An In Vitro Study.
authors: ['Bevinahal PK', 'Venugopal C', 'Yencharla HC', 'Chandanala S', 'Trichur RR', 'Talakad SN', 'Bhonde RR', 'Dhanushkodi A']
source: J Toxicol. 2014;2014:194967. doi: 10.1155/2014/194967. Epub 2014 Nov 23.
title: Histone acetylation accompanied with promoter sequences displaying differential expression profiles of B-class MADS-box genes for Phalaenopsis floral morphogenesis.
authors: ['Hsu CC', 'Wu PS', 'Chen TC', 'Yu CW', 'Tsai WC', 'Wu K', 'Wu WL', 'Chen WH', 'Chen HH']
source: PLoS One. 2014 Dec 11;9(12):e106033. doi: 10.1371/journal.pone.0106033. eCollection 2014.
title: The treatment of displaced intra-articular distal radius fractures in elderly patients.
authors: ['Bartl C', 'Stengel D', 'Bruckner T', 'Gebhard F']
source: Dtsch Arztebl Int. 2014 Nov 14;111(46):779-87. doi: 10.3238/arztebl.2014.0779.
title: "Double-trick" visual and chemical mimicry by the juvenile orchid mantis hymenopus coronatus used in predation of the oriental honeybee apis cerana.
authors: ['Mizuno T', 'Yamaguchi S', 'Yamamoto I', 'Yamaoka R', 'Akino T']
source: Zoolog Sci. 2014 Dec;31(12):795-801. doi: 10.2108/zs140126.
title: Role of auxin in orchid development.
authors: ['Novak SD', 'Luna LJ', 'Gamage RN']
source: Plant Signal Behav. 2014;9(10):e972277. doi: 10.4161/psb.32169.
title: Orchid mating: the anther steps onto the stigma.
authors: ['Chen LJ', 'Liu ZJ']
source: Plant Signal Behav. 2014;9(11):e976484. doi: 10.4161/15592324.2014.976484.
title: Plant and fungal gene expression in mycorrhizal protocorms of the orchid Serapias vomeracea colonized by Tulasnella calospora.
authors: ['Balestrini R', 'Nerva L', 'Sillo F', 'Girlanda M', 'Perotto S']
source: Plant Signal Behav. 2014;9(11):e977707. doi: 10.4161/15592324.2014.977707.
title: Development of Cymbidium ensifolium genic-SSR markers and their utility in genetic diversity and population structure analysis in cymbidiums.
authors: ['Li X', 'Jin F', 'Jin L', 'Jackson A', 'Huang C', 'Li K', 'Shu X']
source: BMC Genet. 2014 Dec 5;15:124. doi: 10.1186/s12863-014-0124-5.
title: Antinociceptive and cytotoxic activities of an epiphytic medicinal orchid: Vanda tessellata Roxb.
authors: ['Chowdhury MA', 'Rahman MM', 'Chowdhury MR', 'Uddin MJ', 'Sayeed MA', 'Hossain MA']
source: BMC Complement Altern Med. 2014 Dec 3;14:464. doi: 10.1186/1472-6882-14-464.
title: Climate change: bees and orchids lose touch.
authors: ['Willmer P']
source: Curr Biol. 2014 Dec 1;24(23):R1133-5. doi: 10.1016/j.cub.2014.10.061. Epub 2014 Dec 1.
title: Mudskipper genomes provide insights into the terrestrial adaptation of amphibious fishes.
authors: ['You X', 'Bian C', 'Zan Q', 'Xu X', 'Liu X', 'Chen J', 'Wang J', 'Qiu Y', 'Li W', 'Zhang X', 'Sun Y', 'Chen S', 'Hong W', 'Li Y', 'Cheng S', 'Fan G', 'Shi C', 'Liang J', 'Tom Tang Y', 'Yang C', 'Ruan Z', 'Bai J', 'Peng C', 'Mu Q', 'Lu J', 'Fan M', 'Yang S', 'Huang Z', 'Jiang X', 'Fang X', 'Zhang G', 'Zhang Y', 'Polgar G', 'Yu H', 'Li J', 'Liu Z', 'Zhang G', 'Ravi V', 'Coon SL', 'Wang J', 'Yang H', 'Venkatesh B', 'Wang J', 'Shi Q']
source: Nat Commun. 2014 Dec 2;5:5594. doi: 10.1038/ncomms6594.
title: Traditional uses of medicinal plants in gastrointestinal disorders in Nepal.
authors: ['Rokaya MB', 'Uprety Y', 'Poudel RC', 'Timsina B', 'Munzbergova Z', 'Asselin H', 'Tiwari A', 'Shrestha SS', 'Sigdel SR']
source: J Ethnopharmacol. 2014 Dec 2;158 Pt A:221-9. doi: 10.1016/j.jep.2014.10.014. Epub 2014 Oct 18.
title: Potential disruption of pollination in a sexually deceptive orchid by climatic change.
authors: ['Robbirt KM', 'Roberts DL', 'Hutchings MJ', 'Davy AJ']
source: Curr Biol. 2014 Dec 1;24(23):2845-9. doi: 10.1016/j.cub.2014.10.033. Epub 2014 Nov 6.
title: CLL2-1, a chemical derivative of orchid 1,4-phenanthrenequinones, inhibits human platelet aggregation through thiol modification of calcium-diacylglycerol guanine nucleotide exchange factor-I (CalDAG-GEFI).
authors: ['Liao CY', 'Lee CL', 'Wang HC', 'Liang SS', 'Kung PH', 'Wu YC', 'Chang FR', 'Wu CC']
source: Free Radic Biol Med. 2015 Jan;78:101-10. doi: 10.1016/j.freeradbiomed.2014.10.512. Epub 2014 Oct 29.
title: Individualizing hospital care for children and young people with learning disabilities: it's the little things that make the difference.
authors: ['Oulton K', 'Sell D', 'Kerry S', 'Gibson F']
source: J Pediatr Nurs. 2015 Jan-Feb;30(1):78-86. doi: 10.1016/j.pedn.2014.10.006. Epub 2014 Oct 23.
title: Ethanolic extract of Coelogyne cristata Lindley (Orchidaceae) and its compound coelogin promote osteoprotective activity in ovariectomized estrogen deficient mice.
authors: ['Sharma C', 'Mansoori MN', 'Dixit M', 'Shukla P', 'Kumari T', 'Bhandari SP', 'Narender T', 'Singh D', 'Arya KR']
source: Phytomedicine. 2014 Oct 15;21(12):1702-7. doi: 10.1016/j.phymed.2014.08.008. Epub 2014 Sep 16.
title: Virus resistance in orchids.
authors: ['Koh KW', 'Lu HC', 'Chan MT']
source: Plant Sci. 2014 Nov;228:26-38. doi: 10.1016/j.plantsci.2014.04.015. Epub 2014 Apr 28.
title: In vitro regeneration and ploidy level analysis of Eulophia ochreata Lindl.
authors: ['Shriram V', 'Nanekar V', 'Kumar V', 'Kavi Kishor PB']
source: Indian J Exp Biol. 2014 Nov;52(11):1112-21.
title: A novel animal model of metabolic syndrome with non-alcoholic fatty liver disease and skin inflammation.
authors: ['Kulkarni NM', 'Jaji MS', 'Shetty P', 'Kurhe YV', 'Chaudhary S', 'Vijaykant G', 'Raghul J', 'Vishwakarma SL', 'Rajesh BN', 'Mookkan J', 'Krishnan UM', 'Narayanan S']
source: Pharm Biol. 2015 Aug;53(8):1110-7. doi: 10.3109/13880209.2014.960944. Epub 2014 Nov 28.
title: Identification and Molecular Characterization of Nuclear Citrus leprosis virus, a Member of the Proposed Dichorhavirus Genus Infecting Multiple Citrus Species in Mexico.
authors: ['Roy A', 'Stone AL', 'Shao J', 'Otero-Colina G', 'Wei G', 'Choudhary N', 'Achor D', 'Levy L', 'Nakhla MK', 'Hartung JS', 'Schneider WL', 'Brlansky RH']
source: Phytopathology. 2015 Apr;105(4):564-75. doi: 10.1094/PHYTO-09-14-0245-R.
title: Raising the sugar content--orchid bees overcome the constraints of suction feeding through manipulation of nectar and pollen provisions.
authors: ['Pokorny T', 'Lunau K', 'Eltz T']
source: PLoS One. 2014 Nov 25;9(11):e113823. doi: 10.1371/journal.pone.0113823. eCollection 2014.
title: Using ecological niche models and niche analyses to understand speciation patterns: the case of sister neotropical orchid bees.
authors: ['Silva DP', 'Vilela B', 'De Marco P Jr', 'Nemesio A']
source: PLoS One. 2014 Nov 25;9(11):e113246. doi: 10.1371/journal.pone.0113246. eCollection 2014.
title: Rapid cytolysis of Mycobacterium tuberculosis by faropenem, an orally bioavailable beta-lactam antibiotic.
authors: ['Dhar N', 'Dubee V', 'Ballell L', 'Cuinet G', 'Hugonnet JE', 'Signorino-Gelo F', 'Barros D', 'Arthur M', 'McKinney JD']
source: Antimicrob Agents Chemother. 2015 Feb;59(2):1308-19. doi: 10.1128/AAC.03461-14. Epub 2014 Nov 24.
title: The genome sequence of the orchid Phalaenopsis equestris.
authors: ['Cai J', 'Liu X', 'Vanneste K', 'Proost S', 'Tsai WC', 'Liu KW', 'Chen LJ', 'He Y', 'Xu Q', 'Bian C', 'Zheng Z', 'Sun F', 'Liu W', 'Hsiao YY', 'Pan ZJ', 'Hsu CC', 'Yang YP', 'Hsu YC', 'Chuang YC', 'Dievart A', 'Dufayard JF', 'Xu X', 'Wang JY', 'Wang J', 'Xiao XJ', 'Zhao XM', 'Du R', 'Zhang GQ', 'Wang M', 'Su YY', 'Xie GC', 'Liu GH', 'Li LQ', 'Huang LQ', 'Luo YB', 'Chen HH', 'Van de Peer Y', 'Liu ZJ']
source: Nat Genet. 2015 Jan;47(1):65-72. doi: 10.1038/ng.3149. Epub 2014 Nov 24.
title: Establishment of an efficient in vitro regeneration protocol for rapid and mass propagation of Dendrobium chrysotoxum Lindl. using seed culture.
authors: ['Nongdam P', 'Tikendra L']
source: ScientificWorldJournal. 2014;2014:740150. doi: 10.1155/2014/740150. Epub 2014 Oct 20.
title: Characterization of arbuscular mycorrhizal fungus communities of Aquilaria crassna and Tectona grandis roots and soils in Thailand plantations.
authors: ['Chaiyasen A', 'Young JP', 'Teaumroong N', 'Gavinlertvatana P', 'Lumyong S']
source: PLoS One. 2014 Nov 14;9(11):e112591. doi: 10.1371/journal.pone.0112591. eCollection 2014.
title: Floral isolation is the major reproductive barrier between a pair of rewarding orchid sister species.
authors: ['Sun M', 'Schluter PM', 'Gross K', 'Schiestl FP']
source: J Evol Biol. 2015 Jan;28(1):117-29. doi: 10.1111/jeb.12544. Epub 2015 Jan 5.
title: Temporal variation in mycorrhizal diversity and carbon and nitrogen stable isotope abundance in the wintergreen meadow orchid Anacamptis morio.
authors: ['Ercole E', 'Adamo M', 'Rodda M', 'Gebauer G', 'Girlanda M', 'Perotto S']
source: New Phytol. 2015 Feb;205(3):1308-19. doi: 10.1111/nph.13109. Epub 2014 Nov 10.
title: A deep transcriptomic analysis of pod development in the vanilla orchid (Vanilla planifolia).
authors: ['Rao X', 'Krom N', 'Tang Y', 'Widiez T', 'Havkin-Frenkel D', 'Belanger FC', 'Dixon RA', 'Chen F']
source: BMC Genomics. 2014 Nov 7;15:964. doi: 10.1186/1471-2164-15-964.
title: The life of phi: the development of phi thickenings in roots of the orchids of the genus Miltoniopsis.
authors: ['Idris NA', 'Collings DA']
source: Planta. 2015 Feb;241(2):489-506. doi: 10.1007/s00425-014-2194-z. Epub 2014 Nov 7.
title: Genic rather than genome-wide differences between sexually deceptive Ophrys orchids with different pollinators.
authors: ['Sedeek KE', 'Scopece G', 'Staedler YM', 'Schonenberger J', 'Cozzolino S', 'Schiestl FP', 'Schluter PM']
source: Mol Ecol. 2014 Dec;23(24):6192-205. doi: 10.1111/mec.12992. Epub 2014 Nov 27.
title: Comparative proteomic analysis of labellum and inner lateral petals in Cymbidium ensifolium flowers.
authors: ['Li X', 'Xu W', 'Chowdhury MR', 'Jin F']
source: Int J Mol Sci. 2014 Oct 31;15(11):19877-97. doi: 10.3390/ijms151119877.
title: In vitro propagation and reintroduction of the endangered Renanthera imschootiana Rolfe.
authors: ['Wu K', 'Zeng S', 'Lin D', 'Teixeira da Silva JA', 'Bu Z', 'Zhang J', 'Duan J']
source: PLoS One. 2014 Oct 28;9(10):e110033. doi: 10.1371/journal.pone.0110033. eCollection 2014.
title: The velamen protects photosynthetic orchid roots against UV-B damage, and a large dated phylogeny implies multiple gains and losses of this function during the Cenozoic.
authors: ['Chomicki G', 'Bidel LP', 'Ming F', 'Coiro M', 'Zhang X', 'Wang Y', 'Baissac Y', 'Jay-Allemand C', 'Renner SS']
source: New Phytol. 2015 Feb;205(3):1330-41. doi: 10.1111/nph.13106. Epub 2014 Oct 23.
title: Prolonged exposure to elevated temperature induces floral transition via up-regulation of cytosolic ascorbate peroxidase 1 and subsequent reduction of the ascorbate redox ratio in Oncidium hybrid orchid.
authors: ['Chin DC', 'Shen CH', 'SenthilKumar R', 'Yeh KW']
source: Plant Cell Physiol. 2014 Dec;55(12):2164-76. doi: 10.1093/pcp/pcu146. Epub 2014 Oct 14.
title: Mycorrhizal fungal diversity and community composition in a lithophytic and epiphytic orchid.
authors: ['Xing X', 'Gai X', 'Liu Q', 'Hart MM', 'Guo S']
source: Mycorrhiza. 2015 May;25(4):289-96. doi: 10.1007/s00572-014-0612-5. Epub 2014 Oct 17.
title: Topical atorvastatin ameliorates 12-O-tetradecanoylphorbol-13-acetate induced skin inflammation by reducing cutaneous cytokine levels and NF-kappaB activation.
authors: ['Kulkarni NM', 'Muley MM', 'Jaji MS', 'Vijaykanth G', 'Raghul J', 'Reddy NK', 'Vishwakarma SL', 'Rajesh NB', 'Mookkan J', 'Krishnan UM', 'Narayanan S']
source: Arch Pharm Res. 2015 Jun;38(6):1238-47. doi: 10.1007/s12272-014-0496-0. Epub 2014 Oct 14.
title: Crystal structure of 3-methyl-2,6-bis(4-methyl-1,3-thiazol-5-yl)piperidin-4-one.
authors: ['Manimaran A', 'Sethusankar K', 'Ganesan S', 'Ananthan S']
source: Acta Crystallogr Sect E Struct Rep Online. 2014 Aug 30;70(Pt 9):o1055. doi: 10.1107/S1600536814018856. eCollection 2014 Sep 1.
title: Molecular phylogeny and evolutionary history of the Eurasiatic orchid genus Himantoglossum s.l. (Orchidaceae).
authors: ['Sramko G', 'Molnar V A', 'Hawkins JA', 'Bateman RM']
source: Ann Bot. 2014 Dec;114(8):1609-26. doi: 10.1093/aob/mcu179. Epub 2014 Oct 7.
title: A flavonoid isolated from Streptomyces sp. (ERINLG-4) induces apoptosis in human lung cancer A549 cells through p53 and cytochrome c release caspase dependant pathway.
authors: ['Balachandran C', 'Sangeetha B', 'Duraipandiyan V', 'Raj MK', 'Ignacimuthu S', 'Al-Dhabi NA', 'Balakrishna K', 'Parthasarathy K', 'Arulmozhi NM', 'Arasu MV']
source: Chem Biol Interact. 2014 Dec 5;224:24-35. doi: 10.1016/j.cbi.2014.09.019. Epub 2014 Oct 5.
title: The folklore medicinal orchids of Sikkim.
authors: ['Panda AK', 'Mandal D']
source: Anc Sci Life. 2013 Oct;33(2):92-6. doi: 10.4103/0257-7941.139043.
title: Effect of cryopreservation on in vitro seed germination and protocorm growth of Mediterranean orchids.
authors: ['Pirondini A', 'Sgarbi E']
source: Cryo Letters. 2014 Jul-Aug;35(4):327-35.
title: Do chlorophyllous orchids heterotrophically use mycorrhizal fungal carbon?
authors: ['Selosse MA', 'Martos F']
source: Trends Plant Sci. 2014 Nov;19(11):683-5.
title: Authenticity and traceability of vanilla flavors by analysis of stable isotopes of carbon and hydrogen.
authors: ['Hansen AM', 'Fromberg A', 'Frandsen HL']
source: J Agric Food Chem. 2014 Oct 22;62(42):10326-31. doi: 10.1021/jf503055k. Epub 2014 Oct 13.
title: Are winter-active species vulnerable to climate warming? A case study with the wintergreen terrestrial orchid, Tipularia discolor.
authors: ['Marchin RM', 'Dunn RR', 'Hoffmann WA']
source: Oecologia. 2014 Dec;176(4):1161-72. doi: 10.1007/s00442-014-3074-8. Epub 2014 Sep 26.
title: Antimicrobial compounds from leaf extracts of Jatropha curcas, Psidium guajava, and Andrographis paniculata.
authors: ['Rahman MM', 'Ahmad SH', 'Mohamed MT', 'Ab Rahman MZ']
source: ScientificWorldJournal. 2014;2014:635240. doi: 10.1155/2014/635240. Epub 2014 Aug 26.
title: Eugenol synthase genes in floral scent variation in Gymnadenia species.
authors: ['Gupta AK', 'Schauvinhold I', 'Pichersky E', 'Schiestl FP']
source: Funct Integr Genomics. 2014 Dec;14(4):779-88. doi: 10.1007/s10142-014-0397-9. Epub 2014 Sep 20.
title: Bavituximab plus paclitaxel and carboplatin for the treatment of advanced non-small-cell lung cancer.
authors: ['Digumarti R', 'Bapsy PP', 'Suresh AV', 'Bhattacharyya GS', 'Dasappa L', 'Shan JS', 'Gerber DE']
source: Lung Cancer. 2014 Nov;86(2):231-6. doi: 10.1016/j.lungcan.2014.08.010. Epub 2014 Aug 24.
title: Isolation and differential expression of a novel MAP kinase gene DoMPK4 in Dendrobium officinale.
authors: ['Zhang G', 'Li YM', 'Hu BX', 'Zhang DW', 'Guo SX']
source: Yao Xue Xue Bao. 2014 Jul;49(7):1076-83.
title: RNA interference-based gene silencing of phytoene synthase impairs growth, carotenoids, and plastid phenotype in Oncidium hybrid orchid.
authors: ['Liu JX', 'Chiou CY', 'Shen CH', 'Chen PJ', 'Liu YC', 'Jian CD', 'Shen XL', 'Shen FQ', 'Yeh KW']
source: Springerplus. 2014 Aug 28;3:478. doi: 10.1186/2193-1801-3-478. eCollection 2014.
title: Development of phylogenetic markers for Sebacina (Sebacinaceae) mycorrhizal fungi associated with Australian orchids.
authors: ['Ruibal MP', 'Peakall R', 'Foret S', 'Linde CC']
source: Appl Plant Sci. 2014 Jun 4;2(6). pii: apps.1400015. doi: 10.3732/apps.1400015. eCollection 2014 Jun.
title: Characterization of 13 microsatellite markers for Diuris basaltica (Orchidaceae) and related species.
authors: ['Ahrens CW', 'James EA']
source: Appl Plant Sci. 2014 Jan 7;2(1). pii: apps.1300069. doi: 10.3732/apps.1300069. eCollection 2014 Jan.
title: Pregnancy influences the plasma pharmacokinetics but not the cerebrospinal fluid pharmacokinetics of raltegravir: a preclinical investigation.
authors: ['Mahat MY', 'Thippeswamy BS', 'Khan FR', 'Edunuri R', 'Nidhyanandan S', 'Chaudhary S']
source: Eur J Pharm Sci. 2014 Dec 18;65:38-44. doi: 10.1016/j.ejps.2014.08.012. Epub 2014 Sep 6.
title: Temporal and spatial regulation of anthocyanin biosynthesis provide diverse flower colour intensities and patterning in Cymbidium orchid.
authors: ['Wang L', 'Albert NW', 'Zhang H', 'Arathoon S', 'Boase MR', 'Ngo H', 'Schwinn KE', 'Davies KM', 'Lewis DH']
source: Planta. 2014 Nov;240(5):983-1002. doi: 10.1007/s00425-014-2152-9. Epub 2014 Sep 3.
title: Deep sequencing-based comparative transcriptional profiles of Cymbidium hybridum roots in response to mycorrhizal and non-mycorrhizal beneficial fungi.
authors: ['Zhao X', 'Zhang J', 'Chen C', 'Yang J', 'Zhu H', 'Liu M', 'Lv F']
source: BMC Genomics. 2014 Aug 31;15:747. doi: 10.1186/1471-2164-15-747.
title: A microfluidic system integrated with buried optical fibers for detection of Phalaenopsis orchid pathogens.
authors: ['Lin CL', 'Chang WH', 'Wang CH', 'Lee CH', 'Chen TY', 'Jan FJ', 'Lee GB']
source: Biosens Bioelectron. 2015 Jan 15;63:572-9. doi: 10.1016/j.bios.2014.08.013. Epub 2014 Aug 17.
title: Predicting progression of Alzheimer's disease using ordinal regression.
authors: ['Doyle OM', 'Westman E', 'Marquand AF', 'Mecocci P', 'Vellas B', 'Tsolaki M', 'Kloszewska I', 'Soininen H', 'Lovestone S', 'Williams SC', 'Simmons A']
source: PLoS One. 2014 Aug 20;9(8):e105542. doi: 10.1371/journal.pone.0105542. eCollection 2014.
title: Identity and specificity of Rhizoctonia-like fungi from different populations of Liparis japonica (Orchidaceae) in Northeast China.
authors: ['Ding R', 'Chen XH', 'Zhang LJ', 'Yu XD', 'Qu B', 'Duan R', 'Xu YF']
source: PLoS One. 2014 Aug 20;9(8):e105573. doi: 10.1371/journal.pone.0105573. eCollection 2014.
title: Labellar anatomy and secretion in Bulbophyllum Thouars (Orchidaceae: Bulbophyllinae) sect. Racemosae Benth. & Hook. f.
authors: ['Davies KL', 'Stpiczynska M']
source: Ann Bot. 2014 Oct;114(5):889-911. doi: 10.1093/aob/mcu153. Epub 2014 Aug 13.
title: Molecular characterization and functional analysis of a Flowering locus T homolog gene from a Phalaenopsis orchid.
authors: ['Li DM', 'Lu FB', 'Zhu GF', 'Sun YB', 'Liu HL', 'Liu JW', 'Wang Z']
source: Genet Mol Res. 2014 Aug 7;13(3):5982-94. doi: 10.4238/2014.August.7.14.
title: Composition and conservation of Orchidaceae on an inselberg in the Brazilian Atlantic Forest and floristic relationships with areas of Eastern Brazil.
authors: ['Pessanha AS', 'Menini Neto L', 'Forzza RC', 'Nascimento MT']
source: Rev Biol Trop. 2014 Jun;62(2):829-41.
title: The complete chloroplast genome of Phalaenopsis "Tiny Star".
authors: ['Kim GB', 'Kwon Y', 'Yu HJ', 'Lim KB', 'Seo JH', 'Mun JH']
source: Mitochondrial DNA. 2016 Mar;27(2):1300-2. doi: 10.3109/19401736.2014.945566. Epub 2014 Aug 5.
title: The cultural and ecological impacts of aboriginal tourism: a case study on Taiwan's Tao tribe.
authors: ['Liu TM', 'Lu DJ']
source: Springerplus. 2014 Jul 8;3:347. doi: 10.1186/2193-1801-3-347. eCollection 2014.
title: Verifying likelihoods for low template DNA profiles using multiple replicates.
authors: ['Steele CD', 'Greenhalgh M', 'Balding DJ']
source: Forensic Sci Int Genet. 2014 Nov;13:82-9. doi: 10.1016/j.fsigen.2014.06.018. Epub 2014 Jul 10.
title: Development of microsatellite markers of vandaceous orchids for species and variety identification.
authors: ['Peyachoknagul S', 'Nettuwakul C', 'Phuekvilai P', 'Wannapinpong S', 'Srikulnath K']
source: Genet Mol Res. 2014 Jul 24;13(3):5441-5. doi: 10.4238/2014.July.24.23.
title: Bayesian estimates of transition probabilities in seven small lithophytic orchid populations: maximizing data availability from many small samples.
authors: ['Tremblay RL', 'McCarthy MA']
source: PLoS One. 2014 Jul 28;9(7):e102859. doi: 10.1371/journal.pone.0102859. eCollection 2014.
title: Occurrence of Bacillus amyloliquefaciens as a systemic endophyte of vanilla orchids.
authors: ['White JF Jr', 'Torres MS', 'Sullivan RF', 'Jabbour RE', 'Chen Q', 'Tadych M', 'Irizarry I', 'Bergen MS', 'Havkin-Frenkel D', 'Belanger FC']
source: Microsc Res Tech. 2014 Nov;77(11):874-85. doi: 10.1002/jemt.22410. Epub 2014 Jul 25.
title: The orchid-bee faunas (Hymenoptera: Apidae) of "Reserva Ecologica Michelin", "RPPN Serra Bonita" and one Atlantic Forest remnant in the state of Bahia, Brazil, with new geographic records.
authors: ['Nemesio A']
source: Braz J Biol. 2014 Feb;74(1):16-22.
title: The corbiculate bees arose from New World oil-collecting bees: implications for the origin of pollen baskets.
authors: ['Martins AC', 'Melo GA', 'Renner SS']
source: Mol Phylogenet Evol. 2014 Nov;80:88-94. doi: 10.1016/j.ympev.2014.07.003. Epub 2014 Jul 15.
title: Evaluation of the predacious mite Hemicheyletia wellsina (Acari: Cheyletidae) as a predator of arthropod pests of orchids.
authors: ['Ray HA', 'Hoy MA']
source: Exp Appl Acarol. 2014 Nov;64(3):287-98. doi: 10.1007/s10493-014-9833-8. Epub 2014 Jul 18.
title: De novo transcriptome assembly from inflorescence of Orchis italica: analysis of coding and non-coding transcripts.
authors: ['De Paolo S', 'Salvemini M', 'Gaudio L', 'Aceto S']
source: PLoS One. 2014 Jul 15;9(7):e102155. doi: 10.1371/journal.pone.0102155. eCollection 2014.
title: Desiccation tolerance, longevity and seed-siring ability of entomophilous pollen from UK native orchid species.
authors: ['Marks TR', 'Seaton PT', 'Pritchard HW']
source: Ann Bot. 2014 Sep;114(3):561-9. doi: 10.1093/aob/mcu139. Epub 2014 Jul 8.
title: High levels of effective long-distance dispersal may blur ecotypic divergence in a rare terrestrial orchid.
authors: ['Vanden Broeck A', 'Van Landuyt W', 'Cox K', 'De Bruyn L', 'Gyselings R', 'Oostermeijer G', 'Valentin B', 'Bozic G', 'Dolinar B', 'Illyes Z', 'Mergeay J']
source: BMC Ecol. 2014 Jul 7;14:20. doi: 10.1186/1472-6785-14-20.
title: Conservation genetics of an endangered lady's slipper orchid: Cypripedium japonicum in China.
authors: ['Qian X', 'Li QJ', 'Liu F', 'Gong MJ', 'Wang CX', 'Tian M']
source: Int J Mol Sci. 2014 Jun 30;15(7):11578-96. doi: 10.3390/ijms150711578.
title: Multiplex RT-PCR detection of three common viruses infecting orchids.
authors: ['Ali RN', 'Dann AL', 'Cross PA', 'Wilson CR']
source: Arch Virol. 2014 Nov;159(11):3095-9. doi: 10.1007/s00705-014-2161-9. Epub 2014 Jul 1.
title: Agrobacterium-mediated transformation of the recalcitrant Vanda Kasem's Delight orchid with higher efficiency.
authors: ['Gnasekaran P', 'Antony JJ', 'Uddain J', 'Subramaniam S']
source: ScientificWorldJournal. 2014;2014:583934. doi: 10.1155/2014/583934. Epub 2014 Apr 8.
title: Cold response in Phalaenopsis aphrodite and characterization of PaCBF1 and PaICE1.
authors: ['Peng PH', 'Lin CH', 'Tsai HW', 'Lin TY']
source: Plant Cell Physiol. 2014 Sep;55(9):1623-35. doi: 10.1093/pcp/pcu093. Epub 2014 Jun 27.
title: Volatile fingerprint of Italian populations of orchids using solid phase microextraction and gas chromatography coupled with mass spectrometry.
authors: ['Manzo A', 'Panseri S', 'Vagge I', 'Giorgi A']
source: Molecules. 2014 Jun 11;19(6):7913-36. doi: 10.3390/molecules19067913.
title: [Molecular characterization of a HMG-CoA reductase gene from a rare and endangered medicinal plant, Dendrobium officinale].
authors: ['Zhang L', 'Wang JT', 'Zhang DW', 'Zhang G', 'Guo SX']
source: Yao Xue Xue Bao. 2014 Mar;49(3):411-8.
title: Antitumor activity of ethanolic extract of Dendrobium formosum in T-cell lymphoma: an in vitro and in vivo study.
authors: ['Prasad R', 'Koch B']
source: Biomed Res Int. 2014;2014:753451. doi: 10.1155/2014/753451. Epub 2014 May 18.
title: New insight into the regulation of floral morphogenesis.
authors: ['Tsai WC', 'Pan ZJ', 'Su YY', 'Liu ZJ']
source: Int Rev Cell Mol Biol. 2014;311:157-82. doi: 10.1016/B978-0-12-800179-0.00003-9.
title: Effects of pollination limitation and seed predation on female reproductive success of a deceptive orchid.
authors: ['Walsh RP', 'Arnold PM', 'Michaels HJ']
source: AoB Plants. 2014 Jun 9;6. pii: plu031. doi: 10.1093/aobpla/plu031.
title: Multiple isoforms of phosphoenolpyruvate carboxylase in the Orchidaceae (subtribe Oncidiinae): implications for the evolution of crassulacean acid metabolism.
authors: ['Silvera K', 'Winter K', 'Rodriguez BL', 'Albion RL', 'Cushman JC']
source: J Exp Bot. 2014 Jul;65(13):3623-36. doi: 10.1093/jxb/eru234. Epub 2014 Jun 9.
title: Comparative chloroplast genomes of photosynthetic orchids: insights into evolution of the Orchidaceae and development of molecular markers for phylogenetic applications.
authors: ['Luo J', 'Hou BW', 'Niu ZT', 'Liu W', 'Xue QY', 'Ding XY']
source: PLoS One. 2014 Jun 9;9(6):e99016. doi: 10.1371/journal.pone.0099016. eCollection 2014.
title: Speciation via floral heterochrony and presumed mycorrhizal host switching of endemic butterfly orchids on the Azorean archipelago.
authors: ['Bateman RM', 'Rudall PJ', 'Bidartondo MI', 'Cozzolino S', 'Tranchida-Lombardo V', 'Carine MA', 'Moura M']
source: Am J Bot. 2014 Jun 6;101(6):979-1001.
title: Pollen competition between two sympatric Orchis species (Orchidaceae): the overtaking of conspecific over heterospecific pollen as a reproductive barrier.
authors: ['Luca A', 'Palermo AM', 'Bellusci F', 'Pellegrino G']
source: Plant Biol (Stuttg). 2015 Jan;17(1):219-25. doi: 10.1111/plb.12199. Epub 2014 May 30.
title: In vitro conservation of Dendrobium germplasm.
authors: ['Teixeira da Silva JA', 'Zeng S', 'Galdiano RF Jr', 'Dobranszki J', 'Cardoso JC', 'Vendrame WA']
source: Plant Cell Rep. 2014 Sep;33(9):1413-23. doi: 10.1007/s00299-014-1631-6. Epub 2014 May 21.
title: Gigantol, a bibenzyl from Dendrobium draconis, inhibits the migratory behavior of non-small cell lung cancer cells.
authors: ['Charoenrungruang S', 'Chanvorachote P', 'Sritularak B', 'Pongrakhananon V']
source: J Nat Prod. 2014 Jun 27;77(6):1359-66. doi: 10.1021/np500015v. Epub 2014 May 20.
title: The analysis of the inflorescence miRNome of the orchid Orchis italica reveals a DEF-like MADS-box gene as a new miRNA target.
authors: ['Aceto S', 'Sica M', 'De Paolo S', "D'Argenio V", 'Cantiello P', 'Salvatore F', 'Gaudio L']
source: PLoS One. 2014 May 15;9(5):e97839. doi: 10.1371/journal.pone.0097839. eCollection 2014.
title: Factors affecting the distribution pattern of wild plants with extremely small populations in Hainan Island, China.
authors: ['Chen Y', 'Yang X', 'Yang Q', 'Li D', 'Long W', 'Luo W']
source: PLoS One. 2014 May 15;9(5):e97751. doi: 10.1371/journal.pone.0097751. eCollection 2014.
title: [Temporal and spatial variations of soil NO(3-)-N in Orychophragmus violaceus/spring maize rotation system in North China].
authors: ['Xiong J', 'Wang GL', 'Cao WD', 'Bai JS', 'Zeng NH', 'Yang L', 'Gao SJ', 'Shimizu K']
source: Ying Yong Sheng Tai Xue Bao. 2014 Feb;25(2):467-73.
title: Chemical composition, potential toxicity, and quality control procedures of the crude drug of Cyrtopodium macrobulbon.
authors: ['Morales-Sanchez V', 'Rivero-Cruz I', 'Laguna-Hernandez G', 'Salazar-Chavez G', 'Mata R']
source: J Ethnopharmacol. 2014 Jul 3;154(3):790-7. doi: 10.1016/j.jep.2014.05.006. Epub 2014 May 10.
title: Nutritional regulation in mixotrophic plants: new insights from Limodorum abortivum.
authors: ['Bellino A', 'Alfani A', 'Selosse MA', 'Guerrieri R', 'Borghetti M', 'Baldantoni D']
source: Oecologia. 2014 Jul;175(3):875-85. doi: 10.1007/s00442-014-2940-8. Epub 2014 May 11.
title: Evaluation of internal control for gene expression in Phalaenopsis by quantitative real-time PCR.
authors: ['Yuan XY', 'Jiang SH', 'Wang MF', 'Ma J', 'Zhang XY', 'Cui B']
source: Appl Biochem Biotechnol. 2014 Jul;173(6):1431-45. doi: 10.1007/s12010-014-0951-x. Epub 2014 May 9.
title: Male interference with pollination efficiency in a hermaphroditic orchid.
authors: ['Duffy KJ', 'Johnson SD']
source: J Evol Biol. 2014 Aug;27(8):1751-6. doi: 10.1111/jeb.12395. Epub 2014 May 6.
title: Biodegradation of polystyrene-graft-starch copolymers in three different types of soil.
authors: ['Nikolic V', 'Velickovic S', 'Popovic A']
source: Environ Sci Pollut Res Int. 2014;21(16):9877-86. doi: 10.1007/s11356-014-2946-0. Epub 2014 May 3.
title: Mycorrhizal compatibility and symbiotic reproduction of Gavilea australis, an endangered terrestrial orchid from south Patagonia.
authors: ['Fracchia S', 'Aranda-Rickert A', 'Flachsland E', 'Terada G', 'Sede S']
source: Mycorrhiza. 2014 Nov;24(8):627-34. doi: 10.1007/s00572-014-0579-2. Epub 2014 Apr 30.
title: Bioguided identification of antifungal and antiproliferative compounds from the Brazilian orchid Miltonia flavescens Lindl.
authors: ['Porte LF', 'Santin SM', 'Chiavelli LU', 'Silva CC', 'Faria TJ', 'Faria RT', 'Ruiz AL', 'Carvalho JE', 'Pomini AM']
source: Z Naturforsch C. 2014 Jan-Feb;69(1-2):46-52.
title: Helvolic acid, an antibacterial nortriterpenoid from a fungal endophyte, sp. of orchid endemic to Sri Lanka.
authors: ['Ratnaweera PB', 'Williams DE', 'de Silva ED', 'Wijesundera RL', 'Dalisay DS', 'Andersen RJ']
source: Mycology. 2014 Mar;5(1):23-28. Epub 2014 Mar 25.
title: Gene expression in mycorrhizal orchid protocorms suggests a friendly plant-fungus relationship.
authors: ['Perotto S', 'Rodda M', 'Benetti A', 'Sillo F', 'Ercole E', 'Rodda M', 'Girlanda M', 'Murat C', 'Balestrini R']
source: Planta. 2014 Jun;239(6):1337-49. doi: 10.1007/s00425-014-2062-x. Epub 2014 Apr 24.
title: Synthesis and mechanistic studies of a novel homoisoflavanone inhibitor of endothelial cell growth.
authors: ['Basavarajappa HD', 'Lee B', 'Fei X', 'Lim D', 'Callaghan B', 'Mund JA', 'Case J', 'Rajashekhar G', 'Seo SY', 'Corson TW']
source: PLoS One. 2014 Apr 21;9(4):e95694. doi: 10.1371/journal.pone.0095694. eCollection 2014.
title: A new phylogenetic analysis sheds new light on the relationships in the Calanthe alliance (Orchidaceae) in China.
authors: ['Zhai JW', 'Zhang GQ', 'Li L', 'Wang M', 'Chen LJ', 'Chung SW', 'Rodriguez FJ', 'Francisco-Ortega J', 'Lan SR', 'Xing FW', 'Liu ZJ']
source: Mol Phylogenet Evol. 2014 Aug;77:216-22. doi: 10.1016/j.ympev.2014.04.005. Epub 2014 Apr 18.
title: Molecular systematics of subtribe Orchidinae and Asian taxa of Habenariinae (Orchideae, Orchidaceae) based on plastid matK, rbcL and nuclear ITS.
authors: ['Jin WT', 'Jin XH', 'Schuiteman A', 'Li DZ', 'Xiang XG', 'Huang WC', 'Li JW', 'Huang LQ']
source: Mol Phylogenet Evol. 2014 Aug;77:41-53. doi: 10.1016/j.ympev.2014.04.004. Epub 2014 Apr 16.
title: Memory for expectation-violating concepts: the effects of agents and cultural familiarity.
authors: ['Porubanova M', 'Shaw DJ', 'McKay R', 'Xygalatas D']
source: PLoS One. 2014 Apr 8;9(4):e90684. doi: 10.1371/journal.pone.0090684. eCollection 2014.
title: Discovery of pyrazines as pollinator sex pheromones and orchid semiochemicals: implications for the evolution of sexual deception.
authors: ['Bohman B', 'Phillips RD', 'Menz MH', 'Berntsson BW', 'Flematti GR', 'Barrow RA', 'Dixon KW', 'Peakall R']
source: New Phytol. 2014 Aug;203(3):939-52. doi: 10.1111/nph.12800. Epub 2014 Apr 3.
title: Floral colleters in Pleurothallidinae (Epidendroideae: Orchidaceae).
authors: ['Cardoso-Gustavson P', 'Campbell LM', 'Mazzoni-Viveiros SC', 'de Barros F']
source: Am J Bot. 2014 Apr;101(4):587-97. doi: 10.3732/ajb.1400012. Epub 2014 Mar 31.
title: Expression of paralogous SEP-, FUL-, AG- and STK-like MADS-box genes in wild-type and peloric Phalaenopsis flowers.
authors: ['Acri-Nunes-Miranda R', 'Mondragon-Palomino M']
source: Front Plant Sci. 2014 Mar 12;5:76. doi: 10.3389/fpls.2014.00076. eCollection 2014.
title: Spatio-temporal Genetic Structure of a Tropical Bee Species Suggests High Dispersal Over a Fragmented Landscape.
authors: ['Suni SS', 'Bronstein JL', 'Brosi BJ']
source: Biotropica. 2014 Mar 1;46(2):202-209.
title: 2,2'-[Benzene-1,2-diylbis(iminomethanediyl)]diphenol derivative bearing two amine and hydroxyl groups as fluorescent receptor for Zinc(II) ion.
authors: ['Tayade K', 'Sahoo SK', 'Patil R', 'Singh N', 'Attarde S', 'Kuwar A']
source: Spectrochim Acta A Mol Biomol Spectrosc. 2014 May 21;126:312-6. doi: 10.1016/j.saa.2014.02.003. Epub 2014 Feb 19.
title: Climate, physiological tolerance and sex-biased dispersal shape genetic structure of Neotropical orchid bees.
authors: ['Lopez-Uribe MM', 'Zamudio KR', 'Cardoso CF', 'Danforth BN']
source: Mol Ecol. 2014 Apr;23(7):1874-90. doi: 10.1111/mec.12689. Epub 2014 Mar 18.
title: There is more to pollinator-mediated selection than pollen limitation.
authors: ['Sletvold N', 'Agren J']
source: Evolution. 2014 Jul;68(7):1907-18. doi: 10.1111/evo.12405. Epub 2014 Apr 16.
title: Three new bioactive phenolic glycosides from Liparis odorata.
authors: ['Li B', 'Liu H', 'Zhang D', 'Lai X', 'Liu B', 'Xu X', 'Xu P']
source: Nat Prod Res. 2014;28(8):522-9. doi: 10.1080/14786419.2014.880916. Epub 2014 Mar 17.
title: The evolution of floral deception in Epipactis veratrifolia (Orchidaceae): from indirect defense to pollination.
authors: ['Jin XH', 'Ren ZX', 'Xu SZ', 'Wang H', 'Li DZ', 'Li ZY']
source: BMC Plant Biol. 2014 Mar 12;14:63. doi: 10.1186/1471-2229-14-63.
title: Sexual safety practices of massage parlor-based sex workers and their clients.
authors: ['Kolar K', 'Atchison C', 'Bungay V']
source: AIDS Care. 2014;26(9):1100-4. doi: 10.1080/09540121.2014.894611. Epub 2014 Mar 12.
title: Identification of warm day and cool night conditions induced flowering-related genes in a Phalaenopsis orchid hybrid by suppression subtractive hybridization.
authors: ['Li DM', 'Lu FB', 'Zhu GF', 'Sun YB', 'Xu YC', 'Jiang MD', 'Liu JW', 'Wang Z']
source: Genet Mol Res. 2014 Feb 14;13(3):7037-51. doi: 10.4238/2014.February.14.7.
title: Transcriptional mapping of the messenger and leader RNAs of orchid fleck virus, a bisegmented negative-strand RNA virus.
authors: ['Kondo H', 'Maruyama K', 'Chiba S', 'Andika IB', 'Suzuki N']
source: Virology. 2014 Mar;452-453:166-74. doi: 10.1016/j.virol.2014.01.007. Epub 2014 Feb 4.
title: Flower development of Phalaenopsis orchid involves functionally divergent SEPALLATA-like genes.
authors: ['Pan ZJ', 'Chen YY', 'Du JS', 'Chen YY', 'Chung MC', 'Tsai WC', 'Wang CN', 'Chen HH']
source: New Phytol. 2014 May;202(3):1024-42. doi: 10.1111/nph.12723. Epub 2014 Feb 14.
title: In situ seed baiting to isolate germination-enhancing fungi for an epiphytic orchid, Dendrobium aphyllum (Orchidaceae).
authors: ['Zi XM', 'Sheng CL', 'Goodale UM', 'Shao SC', 'Gao JY']
source: Mycorrhiza. 2014 Oct;24(7):487-99. doi: 10.1007/s00572-014-0565-8. Epub 2014 Feb 23.
title: Combination of vildagliptin and rosiglitazone ameliorates nonalcoholic fatty liver disease in C57BL/6 mice.
authors: ['Mookkan J', 'De S', 'Shetty P', 'Kulkarni NM', 'Devisingh V', 'Jaji MS', 'Lakshmi VP', 'Chaudhary S', 'Kulathingal J', 'Rajesh NB', 'Narayanan S']
source: Indian J Pharmacol. 2014 Jan-Feb;46(1):46-50. doi: 10.4103/0253-7613.125166.
title: The colonization patterns of different fungi on roots of Cymbidium hybridum plantlets and their respective inoculation effects on growth and nutrient uptake of orchid plantlets.
authors: ['Zhao XL', 'Yang JZ', 'Liu S', 'Chen CL', 'Zhu HY', 'Cao JX']
source: World J Microbiol Biotechnol. 2014 Jul;30(7):1993-2003. doi: 10.1007/s11274-014-1623-2. Epub 2014 Feb 16.
title: Pollinator specificity drives strong prepollination reproductive isolation in sympatric sexually deceptive orchids.
authors: ['Whitehead MR', 'Peakall R']
source: Evolution. 2014 Jun;68(6):1561-75. doi: 10.1111/evo.12382. Epub 2014 Mar 26.
title: Floral scent emitted by white and coloured morphs in orchids.
authors: ['Dormont L', 'Delle-Vedove R', 'Bessiere JM', 'Schatz B']
source: Phytochemistry. 2014 Apr;100:51-9. doi: 10.1016/j.phytochem.2014.01.009. Epub 2014 Feb 10.
title: HIV risk behaviors of male injecting drug users and associated non-condom use with regular female sexual partners in north-east India.
authors: ['Mishra RK', 'Ganju D', 'Ramesh S', 'Lalmuanpuii M', 'Biangtung L', 'Humtsoe C', 'Saggurti N']
source: Harm Reduct J. 2014 Feb 13;11:5. doi: 10.1186/1477-7517-11-5.
title: Antimicrobial activity of cold and hot successive pseudobulb extracts of Flickingeria nodosa (Dalz.) Seidenf.
authors: ['Nagananda GS', 'Satishchandra N']
source: Pak J Biol Sci. 2013 Oct 15;16(20):1189-93.
title: Evidence for isolation-by-habitat among populations of an epiphytic orchid species on a small oceanic island.
authors: ['Mallet B', 'Martos F', 'Blambert L', 'Pailler T', 'Humeau L']
source: PLoS One. 2014 Feb 3;9(2):e87469. doi: 10.1371/journal.pone.0087469. eCollection 2014.
title: Stable isotope cellular imaging reveals that both live and degenerating fungal pelotons transfer carbon and nitrogen to orchid protocorms.
authors: ['Kuga Y', 'Sakamoto N', 'Yurimoto H']
source: New Phytol. 2014 Apr;202(2):594-605. doi: 10.1111/nph.12700. Epub 2014 Feb 3.
title: Isolation and characterisation of degradation impurities in the cefazolin sodium drug substance.
authors: ['Sivakumar B', 'Parthasarathy K', 'Murugan R', 'Jeyasudha R', 'Murugan S', 'Saranghdar RJ']
source: Sci Pharm. 2013 Jun 4;81(4):933-50. doi: 10.3797/scipharm.1304-14. eCollection 2013 Dec.
title: Antimycobacterial evaluation of novel hybrid arylidene thiazolidine-2,4-diones.
authors: ['Ponnuchamy S', 'Kanchithalaivan S', 'Ranjith Kumar R', 'Ali MA', 'Choon TS']
source: Bioorg Med Chem Lett. 2014 Feb 15;24(4):1089-93. doi: 10.1016/j.bmcl.2014.01.007. Epub 2014 Jan 11.
title: A framework for assessing supply-side wildlife conservation.
authors: ['Phelps J', 'Carrasco LR', 'Webb EL']
source: Conserv Biol. 2014 Feb;28(1):244-57. doi: 10.1111/cobi.12160. Epub 2013 Nov 1.
title: Impact of primer choice on characterization of orchid mycorrhizal communities using 454 pyrosequencing.
authors: ['Waud M', 'Busschaert P', 'Ruyters S', 'Jacquemyn H', 'Lievens B']
source: Mol Ecol Resour. 2014 Jul;14(4):679-99. doi: 10.1111/1755-0998.12229. Epub 2014 Mar 14.
title: 3-Isopropyl-1-{2-[(1-methyl-1H-tetrazol-5-yl)sulfanyl]acetyl}-2,6-diphenylpiperidin-4-one hemihydrate.
authors: ['Ganesan S', 'Sugumar P', 'Ananthan S', 'Ponnuswamy MN']
source: Acta Crystallogr Sect E Struct Rep Online. 2013 Oct 2;69(Pt 11):o1598. doi: 10.1107/S1600536813026500. eCollection 2013 Oct 2.
title: Carbon and nitrogen gain during the growth of orchid seedlings in nature.
authors: ['Stockel M', 'Tesitelova T', 'Jersakova J', 'Bidartondo MI', 'Gebauer G']
source: New Phytol. 2014 Apr;202(2):606-15. doi: 10.1111/nph.12688. Epub 2014 Jan 21.
title: Growth promotion-related miRNAs in Oncidium orchid roots colonized by the endophytic fungus Piriformospora indica.
authors: ['Ye W', 'Shen CH', 'Lin Y', 'Chen PJ', 'Xu X', 'Oelmuller R', 'Yeh KW', 'Lai Z']
source: PLoS One. 2014 Jan 7;9(1):e84920. doi: 10.1371/journal.pone.0084920. eCollection 2014.
title: Seedling development and evaluation of genetic stability of cryopreserved Dendrobium hybrid mature seeds.
authors: ['Galdiano RF Jr', 'de Macedo Lemos EG', 'de Faria RT', 'Vendrame WA']
source: Appl Biochem Biotechnol. 2014 Mar;172(5):2521-9. doi: 10.1007/s12010-013-0699-8. Epub 2014 Jan 10.
title: Systematic revision of Platanthera in the Azorean archipelago: not one but three species, including arguably Europe's rarest orchid.
authors: ['Bateman RM', 'Rudall PJ', 'Moura M']
source: PeerJ. 2013 Dec 10;1:e218. doi: 10.7717/peerj.218. eCollection 2013.
title: Deep sequencing-based analysis of the Cymbidium ensifolium floral transcriptome.
authors: ['Li X', 'Luo J', 'Yan T', 'Xiang L', 'Jin F', 'Qin D', 'Sun C', 'Xie M']
source: PLoS One. 2013 Dec 31;8(12):e85480. doi: 10.1371/journal.pone.0085480. eCollection 2013.
title: Pyrazines Attract Catocheilus Thynnine Wasps.
authors: ['Bohman B', 'Peakall R']
source: Insects. 2014 Jun 19;5(2):474-87. doi: 10.3390/insects5020474.
title: Comparison of hypoglycemic and antioxidative effects of polysaccharides from four different Dendrobium species.
authors: ['Pan LH', 'Li XF', 'Wang MN', 'Zha XQ', 'Yang XF', 'Liu ZJ', 'Luo YB', 'Luo JP']
source: Int J Biol Macromol. 2014 Mar;64:420-7. doi: 10.1016/j.ijbiomac.2013.12.024. Epub 2013 Dec 24.
title: Caught in the act: pollination of sexually deceptive trap-flowers by fungus gnats in Pterostylis (Orchidaceae).
authors: ['Phillips RD', 'Scaccabarozzi D', 'Retter BA', 'Hayes C', 'Brown GR', 'Dixon KW', 'Peakall R']
source: Ann Bot. 2014 Mar;113(4):629-41. doi: 10.1093/aob/mct295. Epub 2013 Dec 22.
title: Pollinator deception in the orchid mantis.
authors: ["O'Hanlon JC", 'Holwell GI', 'Herberstein ME']
source: Am Nat. 2014 Jan;183(1):126-32. doi: 10.1086/673858. Epub 2013 Sep 23.
title: Coexisting orchid species have distinct mycorrhizal communities and display strong spatial segregation.
authors: ['Jacquemyn H', 'Brys R', 'Merckx VS', 'Waud M', 'Lievens B', 'Wiegand T']
source: New Phytol. 2014 Apr;202(2):616-27. doi: 10.1111/nph.12640. Epub 2013 Dec 11.
title: Structurally characterized arabinogalactan from Anoectochilus formosanus as an immuno-modulator against CT26 colon cancer in BALB/c mice.
authors: ['Yang LC', 'Hsieh CC', 'Lu TJ', 'Lin WC']
source: Phytomedicine. 2014 Apr 15;21(5):647-55. doi: 10.1016/j.phymed.2013.10.032. Epub 2013 Dec 4.
title: Proteome changes in Oncidium sphacelatum (Orchidaceae) at different trophic stages of symbiotic germination.
authors: ['Valadares RB', 'Perotto S', 'Santos EC', 'Lambais MR']
source: Mycorrhiza. 2014 Jul;24(5):349-60. doi: 10.1007/s00572-013-0547-2. Epub 2013 Dec 6.
title: First flowering hybrid between autotrophic and mycoheterotrophic plant species: breakthrough in molecular biology of mycoheterotrophy.
authors: ['Ogura-Tsujita Y', 'Miyoshi K', 'Tsutsumi C', 'Yukawa T']
source: J Plant Res. 2014 Mar;127(2):299-305. doi: 10.1007/s10265-013-0612-0. Epub 2013 Dec 6.
title: Relative importance of pollen and seed dispersal across a Neotropical mountain landscape for an epiphytic orchid.
authors: ['Kartzinel TR', 'Shefferson RP', 'Trapnell DW']
source: Mol Ecol. 2013 Dec;22(24):6048-59. doi: 10.1111/mec.12551. Epub 2013 Nov 8.
title: DNA barcoding of Orchidaceae in Korea.
authors: ['Kim HM', 'Oh SH', 'Bhandari GS', 'Kim CS', 'Park CW']
source: Mol Ecol Resour. 2014 May;14(3):499-507. doi: 10.1111/1755-0998.12207. Epub 2013 Dec 16.
title: A modified ABCDE model of flowering in orchids based on gene expression profiling studies of the moth orchid Phalaenopsis aphrodite.
authors: ['Su CL', 'Chen WC', 'Lee AY', 'Chen CY', 'Chang YC', 'Chao YT', 'Shih MC']
source: PLoS One. 2013 Nov 12;8(11):e80462. doi: 10.1371/journal.pone.0080462. eCollection 2013.
title: Mycorrhizal preferences and fine spatial structure of the epiphytic orchid Epidendrum rhopalostele.
authors: ['Riofrio ML', 'Cruz D', 'Torres E', 'de la Cruz M', 'Iriondo JM', 'Suarez JP']
source: Am J Bot. 2013 Dec;100(12):2339-48. doi: 10.3732/ajb.1300069. Epub 2013 Nov 19.
title: Combining microtomy and confocal laser scanning microscopy for structural analyses of plant-fungus associations.
authors: ['Rath M', 'Grolig F', 'Haueisen J', 'Imhof S']
source: Mycorrhiza. 2014 May;24(4):293-300. doi: 10.1007/s00572-013-0530-y. Epub 2013 Nov 19.
title: High-resolution secondary ion mass spectrometry analysis of carbon dynamics in mycorrhizas formed by an obligately myco-heterotrophic orchid.
authors: ['Bougoure J', 'Ludwig M', 'Brundrett M', 'Cliff J', 'Clode P', 'Kilburn M', 'Grierson P']
source: Plant Cell Environ. 2014 May;37(5):1223-30. doi: 10.1111/pce.12230. Epub 2013 Dec 12.
title: Donkey orchid symptomless virus: a viral 'platypus' from Australian terrestrial orchids.
authors: ['Wylie SJ', 'Li H', 'Jones MG']
source: PLoS One. 2013 Nov 5;8(11):e79587. doi: 10.1371/journal.pone.0079587. eCollection 2013.
title: Genome-wide annotation, expression profiling, and protein interaction studies of the core cell-cycle genes in Phalaenopsis aphrodite.
authors: ['Lin HY', 'Chen JC', 'Wei MJ', 'Lien YC', 'Li HH', 'Ko SS', 'Liu ZH', 'Fang SC']
source: Plant Mol Biol. 2014 Jan;84(1-2):203-26. doi: 10.1007/s11103-013-0128-y. Epub 2013 Sep 25.
title: Effect of plasmolysis on protocorm-like bodies of Dendrobium Bobby Messina orchid following cryopreservation with encapsulation-dehydration method.
authors: ['Antony JJ', 'Mubbarakh SA', 'Mahmood M', 'Subramaniam S']
source: Appl Biochem Biotechnol. 2014 Feb;172(3):1433-44. doi: 10.1007/s12010-013-0636-x. Epub 2013 Nov 12.
title: The orchid-bee fauna (Hymenoptera: Apidae) of 'RPPN Feliciano Miguel Abdala' revisited: relevant changes in community composition.
authors: ['Nemesio A', 'Paula IR']
source: Braz J Biol. 2013 Aug;73(3):515-20. doi: 10.1590/S1519-69842013000300008.
title: Community of orchid bees (Hymenoptera: Apidae) in transitional vegetation between Cerrado and Atlantic Forest in southeastern Brazil.
authors: ['Pires EP', 'Morgado LN', 'Souza B', 'Carvalho CF', 'Nemesio A']
source: Braz J Biol. 2013 Aug;73(3):507-13. doi: 10.1590/S1519-69842013000300007.
title: The AP2-like gene OitaAP2 is alternatively spliced and differentially expressed in inflorescence and vegetative tissues of the orchid Orchis italica.
authors: ['Salemme M', 'Sica M', 'Iazzetti G', 'Gaudio L', 'Aceto S']
source: PLoS One. 2013 Oct 21;8(10):e77454. doi: 10.1371/journal.pone.0077454. eCollection 2013.
title: Identification and characterization of the microRNA transcriptome of a moth orchid Phalaenopsis aphrodite.
authors: ['Chao YT', 'Su CL', 'Jean WH', 'Chen WC', 'Chang YC', 'Shih MC']
source: Plant Mol Biol. 2014 Mar;84(4-5):529-48. doi: 10.1007/s11103-013-0150-0. Epub 2013 Oct 31.
title: [Some aspects of underground organs of spotleaf orchis growth and phenolic compound accumulation at the generative stage of ontogenesis].
authors: ['Marakaev OA', 'Tselebrovskii MV', 'Nikolaeva TN', 'Zagoskina NV']
source: Izv Akad Nauk Ser Biol. 2013 May-Jun;(3):315-23.
title: Floral elaiophores in Lockhartia Hook. (Orchidaceae: Oncidiinae): their distribution, diversity and anatomy.
authors: ['Blanco MA', 'Davies KL', 'Stpiczynska M', 'Carlsward BS', 'Ionta GM', 'Gerlach G']
source: Ann Bot. 2013 Dec;112(9):1775-91. doi: 10.1093/aob/mct232. Epub 2013 Oct 29.
title: Pollinator shifts and the evolution of spur length in the moth-pollinated orchid Platanthera bifolia.
authors: ['Boberg E', 'Alexandersson R', 'Jonsson M', 'Maad J', 'Agren J', 'Nilsson LA']
source: Ann Bot. 2014 Jan;113(2):267-75. doi: 10.1093/aob/mct217. Epub 2013 Oct 29.
title: Conservation value and permeability of neotropical oil palm landscapes for orchid bees.
authors: ['Livingston G', 'Jha S', 'Vega A', 'Gilbert L']
source: PLoS One. 2013 Oct 17;8(10):e78523. doi: 10.1371/journal.pone.0078523. eCollection 2013.
title: First records and description of metallic red females of Euglossa (Alloglossura) gorgonensis Cheesman, with notes on color variation within the species (Hymenoptera, Apidae).
authors: ['Hinojosa-Diaz IA', 'Brosi BJ']
source: Zookeys. 2013 Sep 25;(335):113-9. doi: 10.3897/zookeys.335.6134. eCollection 2013.
title: Cryopreservation of Brassidium Shooting Star orchid using the PVS3 method supported with preliminary histological analysis.
authors: ['Mubbarakh SA', 'Rahmah S', 'Rahman ZA', 'Sah NN', 'Subramaniam S']
source: Appl Biochem Biotechnol. 2014 Jan;172(2):1131-45. doi: 10.1007/s12010-013-0597-0. Epub 2013 Oct 23.
title: Niche conservatism and the future potential range of Epipactis helleborine (Orchidaceae).
authors: ['Kolanowska M']
source: PLoS One. 2013 Oct 15;8(10):e77352. doi: 10.1371/journal.pone.0077352. eCollection 2013.
title: Orchid protocorm-like bodies are somatic embryos.
authors: ['Lee YI', 'Hsu ST', 'Yeung EC']
source: Am J Bot. 2013 Nov;100(11):2121-31. doi: 10.3732/ajb.1300193. Epub 2013 Oct 17.
title: Genetic diversity and population differentiation of Calanthe tsoongiana, a rare and endemic orchid in China.
authors: ['Qian X', 'Wang CX', 'Tian M']
source: Int J Mol Sci. 2013 Oct 14;14(10):20399-413. doi: 10.3390/ijms141020399.
title: Highly diverse and spatially heterogeneous mycorrhizal symbiosis in a rare epiphyte is unrelated to broad biogeographic or environmental features.
authors: ['Kartzinel TR', 'Trapnell DW', 'Shefferson RP']
source: Mol Ecol. 2013 Dec;22(23):5949-61. doi: 10.1111/mec.12536. Epub 2013 Nov 6.
title: The IGSF1 deficiency syndrome: characteristics of male and female patients.
authors: ['Joustra SD', 'Schoenmakers N', 'Persani L', 'Campi I', 'Bonomi M', 'Radetti G', 'Beck-Peccoz P', 'Zhu H', 'Davis TM', 'Sun Y', 'Corssmit EP', 'Appelman-Dijkstra NM', 'Heinen CA', 'Pereira AM', 'Varewijck AJ', 'Janssen JA', 'Endert E', 'Hennekam RC', 'Lombardi MP', 'Mannens MM', 'Bak B', 'Bernard DJ', 'Breuning MH', 'Chatterjee K', 'Dattani MT', 'Oostdijk W', 'Biermasz NR', 'Wit JM', 'van Trotsenburg AS']
source: J Clin Endocrinol Metab. 2013 Dec;98(12):4942-52. doi: 10.1210/jc.2013-2743. Epub 2013 Oct 9.
title: A pollinator shift explains floral divergence in an orchid species complex in South Africa.
authors: ['Peter CI', 'Johnson SD']
source: Ann Bot. 2014 Jan;113(2):277-88. doi: 10.1093/aob/mct216. Epub 2013 Oct 9.
title: Floral adaptation to local pollinator guilds in a terrestrial orchid.
authors: ['Sun M', 'Gross K', 'Schiestl FP']
source: Ann Bot. 2014 Jan;113(2):289-300. doi: 10.1093/aob/mct219. Epub 2013 Oct 9.
title: Challenges and prospects in the telemetry of insects.
authors: ['Daniel Kissling W', 'Pattemore DE', 'Hagen M']
source: Biol Rev Camb Philos Soc. 2014 Aug;89(3):511-30. doi: 10.1111/brv.12065. Epub 2013 Oct 8.
title: Identification and characterization of process-related impurities of trans-resveratrol.
authors: ['Sivakumar B', 'Murugan R', 'Baskaran A', 'Khadangale BP', 'Murugan S', 'Senthilkumar UP']
source: Sci Pharm. 2013 Mar 17;81(3):683-95. doi: 10.3797/scipharm.1301-17. eCollection 2013.
title: Vanilla--its science of cultivation, curing, chemistry, and nutraceutical properties.
authors: ['Anuradha K', 'Shyamala BN', 'Naidu MM']
source: Crit Rev Food Sci Nutr. 2013;53(12):1250-76. doi: 10.1080/10408398.2011.563879.
title: Dichorhavirus: a proposed new genus for Brevipalpus mite-transmitted, nuclear, bacilliform, bipartite, negative-strand RNA plant viruses.
authors: ['Dietzgen RG', 'Kuhn JH', 'Clawson AN', 'Freitas-Astua J', 'Goodin MM', 'Kitajima EW', 'Kondo H', 'Wetzel T', 'Whitfield AE']
source: Arch Virol. 2014 Mar;159(3):607-19. doi: 10.1007/s00705-013-1834-0. Epub 2013 Oct 1.
title: Moscatilin induces apoptosis and mitotic catastrophe in human esophageal cancer cells.
authors: ['Chen CA', 'Chen CC', 'Shen CC', 'Chang HH', 'Chen YJ']
source: J Med Food. 2013 Oct;16(10):869-77. doi: 10.1089/jmf.2012.2617. Epub 2013 Sep 27.
title: Composition of Cypripedium calceolus (Orchidaceae) seeds analyzed by attenuated total reflectance IR spectroscopy: in search of understanding longevity in the ground.
authors: ['Barsberg S', 'Rasmussen HN', 'Kodahl N']
source: Am J Bot. 2013 Oct;100(10):2066-73. doi: 10.3732/ajb.1200646. Epub 2013 Sep 26.
title: Perspectives on MADS-box expression during orchid flower evolution and development.
authors: ['Mondragon-Palomino M']
source: Front Plant Sci. 2013 Sep 23;4:377. doi: 10.3389/fpls.2013.00377. eCollection 2013.
title: Simple-sequence repeat markers of Cattleya coccinea (Orchidaceae), an endangered species of the Brazilian Atlantic Forest.
authors: ['Novello M', 'Rodrigues JF', 'Pinheiro F', 'Oliveira GC', 'Veasey EA', 'Koehler S']
source: Genet Mol Res. 2013 Sep 3;12(3):3274-8. doi: 10.4238/2013.September.3.3.
title: Floral odour chemistry defines species boundaries and underpins strong reproductive isolation in sexually deceptive orchids.
authors: ['Peakall R', 'Whitehead MR']
source: Ann Bot. 2014 Jan;113(2):341-55. doi: 10.1093/aob/mct199. Epub 2013 Sep 19.
title: Identification and symbiotic ability of Psathyrellaceae fungi isolated from a photosynthetic orchid, Cremastra appendiculata (Orchidaceae).
authors: ['Yagame T', 'Funabiki E', 'Nagasawa E', 'Fukiharu T', 'Iwase K']
source: Am J Bot. 2013 Sep;100(9):1823-30. doi: 10.3732/ajb.1300099. Epub 2013 Sep 11.
title: Promoting role of an endophyte on the growth and contents of kinsenosides and flavonoids of Anoectochilus formosanus Hayata, a rare and threatened medicinal Orchidaceae plant.
authors: ['Zhang FS', 'Lv YL', 'Zhao Y', 'Guo SX']
source: J Zhejiang Univ Sci B. 2013 Sep;14(9):785-92. doi: 10.1631/jzus.B1300056.
title: Collection and trade of wild-harvested orchids in Nepal.
authors: ['Subedi A', 'Kunwar B', 'Choi Y', 'Dai Y', 'van Andel T', 'Chaudhary RP', 'de Boer HJ', 'Gravendeel B']
source: J Ethnobiol Ethnomed. 2013 Aug 31;9(1):64. doi: 10.1186/1746-4269-9-64.
title: Direct detection of orchid viruses using nanorod-based fiber optic particle plasmon resonance immunosensor.
authors: ['Lin HY', 'Huang CH', 'Lu SH', 'Kuo IT', 'Chau LK']
source: Biosens Bioelectron. 2014 Jan 15;51:371-8. doi: 10.1016/j.bios.2013.08.009. Epub 2013 Aug 17.
title: The evolution of floral nectaries in Disa (Orchidaceae: Disinae): recapitulation or diversifying innovation?
authors: ['Hobbhahn N', 'Johnson SD', 'Bytebier B', 'Yeung EC', 'Harder LD']
source: Ann Bot. 2013 Nov;112(7):1303-19. doi: 10.1093/aob/mct197. Epub 2013 Aug 29.
title: [Molecular cloning and characterization of S-adenosyl-L-methionine decarboxylase gene (DoSAMDC1) in Dendrobium officinale].
authors: ['Zhao MM', 'Zhang G', 'Zhang DW', 'Guo SX']
source: Yao Xue Xue Bao. 2013 Jun;48(6):946-52.
title: Preliminary genetic linkage maps of Chinese herb Dendrobium nobile and D. moniliforme.
authors: ['Feng S', 'Zhao H', 'Lu J', 'Liu J', 'Shen B', 'Wang H']
source: J Genet. 2013;92(2):205-12.
title: Evidence of separate karyotype evolutionary pathway in Euglossa orchid bees by cytogenetic analyses.
authors: ['Fernandes A', 'Werneck HA', 'Pompolo SG', 'Lopes DM']
source: An Acad Bras Cienc. 2013 Sep;85(3):937-44. doi: 10.1590/S0001-37652013005000050.
title: Histological and micro-CT evidence of stigmatic rostellum receptivity promoting auto-pollination in the Madagascan orchid Bulbophyllum bicoloratum.
authors: ['Gamisch A', 'Staedler YM', 'Schonenberger J', 'Fischer GA', 'Comes HP']
source: PLoS One. 2013 Aug 13;8(8):e72688. doi: 10.1371/journal.pone.0072688. eCollection 2013.
title: Virus-induced gene silencing unravels multiple transcription factors involved in floral growth and development in Phalaenopsis orchids.
authors: ['Hsieh MH', 'Pan ZJ', 'Lai PH', 'Lu HC', 'Yeh HH', 'Hsu CC', 'Wu WL', 'Chung MC', 'Wang SS', 'Chen WH', 'Chen HH']
source: J Exp Bot. 2013 Sep;64(12):3869-84. doi: 10.1093/jxb/ert218.
title: Habitat fragmentation effects on the orchid bee communities in remnant forests of southeastern Brazil.
authors: ['Knoll Fdo R', 'Penatti NC']
source: Neotrop Entomol. 2012 Oct;41(5):355-65. doi: 10.1007/s13744-012-0057-5. Epub 2012 Jun 29.
title: Spatial-temporal variation in orchid bee communities (Hymenoptera: Apidae) in remnants of arboreal Caatinga in the Chapada Diamantina region, state of Bahia, Brazil.
authors: ['Andrade-Silva AC', 'Nemesio A', 'de Oliveira FF', 'Nascimento FS']
source: Neotrop Entomol. 2012 Aug;41(4):296-305. doi: 10.1007/s13744-012-0053-9. Epub 2012 Jun 26.
title: Old Fragments of Forest Inside an Urban Area Are Able to Keep Orchid Bee (Hymenoptera: Apidae: Euglossini) Assemblages? The Case of a Brazilian Historical City.
authors: ['Ferreira RP', 'Martins C', 'Dutra MC', 'Mentone CB', 'Antonini Y']
source: Neotrop Entomol. 2013 Jul 16.
title: The ecological basis for biogeographic classification: an example in orchid bees (Apidae: Euglossini).
authors: ['Parra-H A', 'Nates-Parra G']
source: Neotrop Entomol. 2012 Dec;41(6):442-9. doi: 10.1007/s13744-012-0069-1. Epub 2012 Aug 24.
title: Benefits to rare plants and highway safety from annual population reductions of a "native invader," white-tailed deer, in a Chicago-area woodland.
authors: ['Engeman RM', 'Guerrant T', 'Dunn G', 'Beckerman SF', 'Anchor C']
source: Environ Sci Pollut Res Int. 2014 Jan;21(2):1592-7. doi: 10.1007/s11356-013-2056-4. Epub 2013 Aug 14.
title: Start Codon Targeted (SCoT) marker reveals genetic diversity of Dendrobium nobile Lindl., an endangered medicinal orchid species.
authors: ['Bhattacharyya P', 'Kumaria S', 'Kumar S', 'Tandon P']
source: Gene. 2013 Oct 15;529(1):21-6. doi: 10.1016/j.gene.2013.07.096. Epub 2013 Aug 11.
title: Vegetation context influences the strength and targets of pollinator-mediated selection in a deceptive orchid.
authors: ['Sletvold N', 'Grindeland JM', 'Agren J']
source: Ecology. 2013 Jun;94(6):1236-42.
title: Pollination and floral ecology of Arundina graminifolia (Orchidaceae) at the northern border of the species' natural distribution.
authors: ['Sugiura N']
source: J Plant Res. 2014;127(1):131-9. doi: 10.1007/s10265-013-0587-x. Epub 2013 Aug 7.
title: The orchid-bee faunas (Hymenoptera: Apidae) of 'Parque Nacional do Monte Pascoal', 'Parque Nacional do Descobrimento' and three other Atlantic Forest remnants in southern Bahia, eastern Brazil.
authors: ['Nemesio A']
source: Braz J Biol. 2013 May;73(2):437-46. doi: 10.1590/S1519-69842013000200028.
title: The orchid-bee faunas (Hymenoptera: Apidae) of two Atlantic Forest remnants in southern Bahia, eastern Brazil.
authors: ['Nemesio A']
source: Braz J Biol. 2013 May;73(2):375-81. doi: 10.1590/S1519-69842013000200018.
title: Are orchid bees at risk? First comparative survey suggests declining populations of forest-dependent species.
authors: ['Nemesio A']
source: Braz J Biol. 2013 May;73(2):367-74. doi: 10.1590/S1519-69842013000200017.
title: The orchid-bee fauna (Hymenoptera: Apidae) of 'Reserva Biologica de Una', a hotspot in the Atlantic Forest of southern Bahia, eastern Brazil.
authors: ['Nemesio A']
source: Braz J Biol. 2013 May;73(2):347-52. doi: 10.1590/S1519-69842013000200014.
title: Ancestral deceit and labile evolution of nectar production in the African orchid genus Disa.
authors: ['Johnson SD', 'Hobbhahn N', 'Bytebier B']
source: Biol Lett. 2013 Jul 31;9(5):20130500. doi: 10.1098/rsbl.2013.0500. Print 2013 Oct 23.
title: Orchid bee (Hymenoptera: Apidae) community from a gallery forest in the Brazilian Cerrado.
authors: ['Silva FS']
source: Rev Biol Trop. 2012 Jun;60(2):625-33.
title: Genome assembly of citrus leprosis virus nuclear type reveals a close association with orchid fleck virus.
authors: ['Roy A', 'Stone A', 'Otero-Colina G', 'Wei G', 'Choudhary N', 'Achor D', 'Shao J', 'Levy L', 'Nakhla MK', 'Hollingsworth CR', 'Hartung JS', 'Schneider WL', 'Brlansky RH']
source: Genome Announc. 2013 Jul 25;1(4). pii: e00519-13. doi: 10.1128/genomeA.00519-13.
title: Complete genome sequence of Habenaria mosaic virus, a new potyvirus infecting a terrestrial orchid (Habenaria radiata) in Japan.
authors: ['Kondo H', 'Maeda T', 'Gara IW', 'Chiba S', 'Maruyama K', 'Tamada T', 'Suzuki N']
source: Arch Virol. 2014 Jan;159(1):163-6. doi: 10.1007/s00705-013-1784-6. Epub 2013 Jul 16.
title: Modulation of physical environment makes placental mesenchymal stromal cells suitable for therapy.
authors: ['Mathew SA', 'Rajendran S', 'Gupta PK', 'Bhonde R']
source: Cell Biol Int. 2013 Nov;37(11):1197-204. doi: 10.1002/cbin.10154. Epub 2013 Aug 13.
title: Genetic inference of epiphytic orchid colonization; it may only take one.
authors: ['Trapnell DW', 'Hamrick JL', 'Ishibashi CD', 'Kartzinel TR']
source: Mol Ecol. 2013 Jul;22(14):3680-92. doi: 10.1111/mec.12338.
title: Microbial diversity in the floral nectar of seven Epipactis (Orchidaceae) species.
authors: ['Jacquemyn H', 'Lenaerts M', 'Tyteca D', 'Lievens B']
source: Microbiologyopen. 2013 Aug;2(4):644-58. doi: 10.1002/mbo3.103. Epub 2013 Jul 8.
title: [Sequence analysis of LEAFY homologous gene from Dendrobium moniliforme and application for identification of medicinal Dendrobium].
authors: ['Xing WR', 'Hou BW', 'Guan JJ', 'Luo J', 'Ding XY']
source: Yao Xue Xue Bao. 2013 Apr;48(4):597-603.
title: Components of reproductive isolation between Orchis mascula and Orchis pauciflora.
authors: ['Scopece G', 'Croce A', 'Lexer C', 'Cozzolino S']
source: Evolution. 2013 Jul;67(7):2083-93. doi: 10.1111/evo.12091. Epub 2013 Mar 29.
title: Phylogeographic structure and outbreeding depression reveal early stages of reproductive isolation in the neotropical orchid Epidendrum denticulatum.
authors: ['Pinheiro F', 'Cozzolino S', 'de Barros F', 'Gouveia TM', 'Suzuki RM', 'Fay MF', 'Palma-Silva C']
source: Evolution. 2013 Jul;67(7):2024-39. doi: 10.1111/evo.12085. Epub 2013 Mar 21.
title: Asymbiotic seed germination and in vitro conservation of Coelogyne nervosa A. Rich. an endemic orchid to Western Ghats.
authors: ['Abraham S', 'Augustine J', 'Thomas TD']
source: Physiol Mol Biol Plants. 2012 Jul;18(3):245-51. doi: 10.1007/s12298-012-0118-6.
title: Endophytic and mycorrhizal fungi associated with roots of endangered native orchids from the Atlantic Forest, Brazil.
authors: ['Oliveira SF', 'Bocayuva MF', 'Veloso TG', 'Bazzolli DM', 'da Silva CC', 'Pereira OL', 'Kasuya MC']
source: Mycorrhiza. 2014 Jan;24(1):55-64. doi: 10.1007/s00572-013-0512-0. Epub 2013 Jun 30.
title: Convergent evolution of floral signals underlies the success of Neotropical orchids.
authors: ['Papadopulos AS', 'Powell MP', 'Pupulin F', 'Warner J', 'Hawkins JA', 'Salamin N', 'Chittka L', 'Williams NH', 'Whitten WM', 'Loader D', 'Valente LM', 'Chase MW', 'Savolainen V']
source: Proc Biol Sci. 2013 Jun 26;280(1765):20130960. doi: 10.1098/rspb.2013.0960. Print 2013 Aug 22.
title: 3,3'-{[(Biphenyl-2,2'-diyl)bis(methylene)]bis(oxy)}bis[N-(4-chlorophenyl)benzamide].
authors: ['Rajadurai R', 'Padmanabhan R', 'Meenakshi Sundaram SS', 'Ananthan S']
source: Acta Crystallogr Sect E Struct Rep Online. 2013 May 18;69(Pt 6):o914-5. doi: 10.1107/S160053681301009X. Print 2013 Jun 1.
title: 3,5-Dimethyl-1-{2-[(5-methyl-1,3,4-thiadiazol-2-yl)sulfanyl]acetyl}-2,6-diphenylpiperidin-4-one.
authors: ['Ganesan S', 'Sugumar P', 'Ananthan S', 'Ponnuswamy MN']
source: Acta Crystallogr Sect E Struct Rep Online. 2013 May 11;69(Pt 6):o845. doi: 10.1107/S1600536813012014. Print 2013 Jun 1.
title: Symbiotic seed germination and protocorm development of Aa achalensis Schltr., a terrestrial orchid endemic from Argentina.
authors: ['Sebastian F', 'Vanesa S', 'Eduardo F', 'Graciela T', 'Silvana S']
source: Mycorrhiza. 2014 Jan;24(1):35-43. doi: 10.1007/s00572-013-0510-2. Epub 2013 Jun 20.
title: [Sibling species of rein orchid (Gymnadenia: Orchidaceae, Magnoliophyta) in Russia].
authors: ['Efimov PG']
source: Genetika. 2013 Mar;49(3):343-54.
title: Detection of viruses directly from the fresh leaves of a Phalaenopsis orchid using a microfluidic system.
authors: ['Chang WH', 'Yang SY', 'Lin CL', 'Wang CH', 'Li PC', 'Chen TY', 'Jan FJ', 'Lee GB']
source: Nanomedicine. 2013 Nov;9(8):1274-82. doi: 10.1016/j.nano.2013.05.016. Epub 2013 Jun 8.
title: Floral visual signal increases reproductive success in a sexually deceptive orchid.
authors: ['Rakosy D', 'Streinzer M', 'Paulus HF', 'Spaethe J']
source: Arthropod Plant Interact. 2012 Dec 1;6(4):671-681.
title: Moscatilin inhibits lung cancer cell motility and invasion via suppression of endogenous reactive oxygen species.
authors: ['Kowitdamrong A', 'Chanvorachote P', 'Sritularak B', 'Pongrakhananon V']
source: Biomed Res Int. 2013;2013:765894. doi: 10.1155/2013/765894. Epub 2013 May 8.
title: Transcriptome and proteome data reveal candidate genes for pollinator attraction in sexually deceptive orchids.
authors: ['Sedeek KE', 'Qi W', 'Schauer MA', 'Gupta AK', 'Poveda L', 'Xu S', 'Liu ZJ', 'Grossniklaus U', 'Schiestl FP', 'Schluter PM']
source: PLoS One. 2013 May 29;8(5):e64621. doi: 10.1371/journal.pone.0064621. Print 2013.
title: Beyond orchids and dandelions: testing the 5-HTT "risky" allele for evidence of phenotypic capacitance and frequency-dependent selection.
authors: ['Conley D', 'Rauscher E', 'Siegal ML']
source: Biodemography Soc Biol. 2013;59(1):37-56. doi: 10.1080/19485565.2013.774620.
title: Molecular cloning and spatiotemporal expression of an APETALA1/FRUITFULL-like MADS-box gene from the orchid (Cymbidium faberi).
authors: ['Tian Y', 'Yuan X', 'Jiang S', 'Cui B', 'Su J']
source: Sheng Wu Gong Cheng Xue Bao. 2013 Feb;29(2):203-13.
title: Antigenotoxic effect, composition and antioxidant activity of Dendrobium speciosum.
authors: ['Moretti M', 'Cossignani L', 'Messina F', 'Dominici L', 'Villarini M', 'Curini M', 'Marcotullio MC']
source: Food Chem. 2013 Oct 15;140(4):660-5. doi: 10.1016/j.foodchem.2012.10.022. Epub 2012 Nov 2.
title: Exposure to HIV prevention programmes associated with improved condom use and uptake of HIV testing by female sex workers in Nagaland, Northeast India.
authors: ['Armstrong G', 'Medhi GK', 'Kermode M', 'Mahanta J', 'Goswami P', 'Paranjape R']
source: BMC Public Health. 2013 May 15;13:476. doi: 10.1186/1471-2458-13-476.
title: Significant spatial aggregation and fine-scale genetic structure in the homosporous fern Cyrtomium falcatum (Dryopteridaceae).
authors: ['Chung MY', 'Chung MG']
source: New Phytol. 2013 Aug;199(3):663-72. doi: 10.1111/nph.12293. Epub 2013 May 7.
title: Catalase and superoxide dismutase activities and the total protein content of protocorm-like bodies of Dendrobium sonia-28 subjected to vitrification.
authors: ['Poobathy R', 'Sinniah UR', 'Xavier R', 'Subramaniam S']
source: Appl Biochem Biotechnol. 2013 Jul;170(5):1066-79. doi: 10.1007/s12010-013-0241-z. Epub 2013 May 3.
title: Organ homologies in orchid flowers re-interpreted using the Musk Orchid as a model.
authors: ['Rudall PJ', 'Perl CD', 'Bateman RM']
source: PeerJ. 2013 Feb 12;1:e26. doi: 10.7717/peerj.26. Print 2013.
title: Accessing local knowledge to identify where species of conservation concern occur in a tropical forest landscape.
authors: ['Padmanaba M', 'Sheil D', 'Basuki I', 'Liswanti N']
source: Environ Manage. 2013 Aug;52(2):348-59. doi: 10.1007/s00267-013-0051-7. Epub 2013 May 1.
title: Spatial patterns of photosynthesis in thin- and thick-leaved epiphytic orchids: unravelling C3-CAM plasticity in an organ-compartmented way.
authors: ['Rodrigues MA', 'Matiz A', 'Cruz AB', 'Matsumura AT', 'Takahashi CA', 'Hamachi L', 'Felix LM', 'Pereira PN', 'Latansio-Aidar SR', 'Aidar MP', 'Demarco D', 'Freschi L', 'Mercier H', 'Kerbauy GB']
source: Ann Bot. 2013 Jul;112(1):17-29. doi: 10.1093/aob/mct090. Epub 2013 Apr 25.
title: Transcriptome analysis of Cymbidium sinense and its application to the identification of genes associated with floral development.
authors: ['Zhang J', 'Wu K', 'Zeng S', 'Teixeira da Silva JA', 'Zhao X', 'Tian CE', 'Xia H', 'Duan J']
source: BMC Genomics. 2013 Apr 24;14:279. doi: 10.1186/1471-2164-14-279.
title: Orchid fleck virus structural proteins N and P form intranuclear viroplasm-like structures in the absence of viral infection.
authors: ['Kondo H', 'Chiba S', 'Andika IB', 'Maruyama K', 'Tamada T', 'Suzuki N']
source: J Virol. 2013 Jul;87(13):7423-34. doi: 10.1128/JVI.00270-13. Epub 2013 Apr 24.
title: Scale-up of a comprehensive harm reduction programme for people injecting opioids: lessons from north-eastern India.
authors: ['Lalmuanpuii M', 'Biangtung L', 'Mishra RK', 'Reeve MJ', 'Tzudier S', 'Singh AL', 'Sinate R', 'Sgaier SK']
source: Bull World Health Organ. 2013 Apr 1;91(4):306-12. doi: 10.2471/BLT.12.108274. Epub 2013 Feb 20.
title: Complete chloroplast genome of the genus Cymbidium: lights into the species identification, phylogenetic implications and population genetic analyses.
authors: ['Yang JB', 'Tang M', 'Li HT', 'Zhang ZR', 'Li DZ']
source: BMC Evol Biol. 2013 Apr 18;13:84. doi: 10.1186/1471-2148-13-84.
title: A novel aphrodisiac compound from an orchid that activates nitric oxide synthases.
authors: ['Subramoniam A', 'Gangaprasad A', 'Sureshkumar PK', 'Radhika J', 'Arun KB']
source: Int J Impot Res. 2013 Nov-Dec;25(6):212-6. doi: 10.1038/ijir.2013.18. Epub 2013 Apr 18.
title: A new orchid genus, Danxiaorchis, and phylogenetic analysis of the tribe Calypsoeae.
authors: ['Zhai JW', 'Zhang GQ', 'Chen LJ', 'Xiao XJ', 'Liu KW', 'Tsai WC', 'Hsiao YY', 'Tian HZ', 'Zhu JQ', 'Wang MN', 'Wang FG', 'Xing FW', 'Liu ZJ']
source: PLoS One. 2013 Apr 4;8(4):e60371. doi: 10.1371/journal.pone.0060371. Print 2013.
title: Detection of ancestry informative HLA alleles confirms the admixed origins of Japanese population.
authors: ['Nakaoka H', 'Mitsunaga S', 'Hosomichi K', 'Shyh-Yuh L', 'Sawamoto T', 'Fujiwara T', 'Tsutsui N', 'Suematsu K', 'Shinagawa A', 'Inoko H', 'Inoue I']
source: PLoS One. 2013;8(4):e60793. doi: 10.1371/journal.pone.0060793. Epub 2013 Apr 5.
title: A new molecular phylogeny and a new genus, Pendulorchis, of the Aerides-Vanda alliance (Orchidaceae: Epidendroideae).
authors: ['Zhang GQ', 'Liu KW', 'Chen LJ', 'Xiao XJ', 'Zhai JW', 'Li LQ', 'Cai J', 'Hsiao YY', 'Rao WH', 'Huang J', 'Ma XY', 'Chung SW', 'Huang LQ', 'Tsai WC', 'Liu ZJ']
source: PLoS One. 2013;8(4):e60097. doi: 10.1371/journal.pone.0060097. Epub 2013 Apr 5.
title: Catalog of Erycina pusilla miRNA and categorization of reproductive phase-related miRNAs and their target gene families.
authors: ['Lin CS', 'Chen JJ', 'Huang YT', 'Hsu CT', 'Lu HC', 'Chou ML', 'Chen LC', 'Ou CI', 'Liao DC', 'Yeh YY', 'Chang SB', 'Shen SC', 'Wu FH', 'Shih MC', 'Chan MT']
source: Plant Mol Biol. 2013 May;82(1-2):193-204. doi: 10.1007/s11103-013-0055-y. Epub 2013 Apr 11.
title: The rare terrestrial orchid Nervilia nipponica consistently associates with a single group of novel mycobionts.
authors: ['Nomura N', 'Ogura-Tsujita Y', 'Gale SW', 'Maeda A', 'Umata H', 'Hosaka K', 'Yukawa T']
source: J Plant Res. 2013 Sep;126(5):613-23. doi: 10.1007/s10265-013-0552-8. Epub 2013 Apr 6.
title: Compatible fungi, suitable medium, and appropriate developmental stage essential for stable association of Dendrobium chrysanthum.
authors: ['Hajong S', 'Kumaria S', 'Tandon P']
source: J Basic Microbiol. 2013 Dec;53(12):1025-33. doi: 10.1002/jobm.201200411. Epub 2013 Apr 2.
title: The host bias of three epiphytic Aeridinae orchid species is reflected, but not explained, by mycorrhizal fungal associations.
authors: ['Gowland KM', 'van der Merwe MM', 'Linde CC', 'Clements MA', 'Nicotra AB']
source: Am J Bot. 2013 Apr;100(4):764-77. doi: 10.3732/ajb.1200411. Epub 2013 Apr 1.
title: Tubastatin, a selective histone deacetylase 6 inhibitor shows anti-inflammatory and anti-rheumatic effects.
authors: ['Vishwakarma S', 'Iyer LR', 'Muley M', 'Singh PK', 'Shastry A', 'Saxena A', 'Kulathingal J', 'Vijaykanth G', 'Raghul J', 'Rajesh N', 'Rathinasamy S', 'Kachhadia V', 'Kilambi N', 'Rajgopal S', 'Balasubramanian G', 'Narayanan S']
source: Int Immunopharmacol. 2013 May;16(1):72-8. doi: 10.1016/j.intimp.2013.03.016. Epub 2013 Mar 27.
title: Microsatellite markers in the western prairie fringed orchid, Platanthera praeclara (Orchidaceae).
authors: ['Ross AA', 'Aldrich-Wolfe L', 'Lance S', 'Glenn T', 'Travers SE']
source: Appl Plant Sci. 2013 Mar 22;1(4). pii: apps.1200413. doi: 10.3732/apps.1200413. eCollection 2013 Apr.
title: Discovery of adamantane based highly potent HDAC inhibitors.
authors: ['Gopalan B', 'Ponpandian T', 'Kachhadia V', 'Bharathimohan K', 'Vignesh R', 'Sivasudar V', 'Narayanan S', 'Mandar B', 'Praveen R', 'Saranya N', 'Rajagopal S', 'Rajagopal S']
source: Bioorg Med Chem Lett. 2013 May 1;23(9):2532-7. doi: 10.1016/j.bmcl.2013.03.002. Epub 2013 Mar 14.
title: Mycorrhizas alter nitrogen acquisition by the terrestrial orchid Cymbidium goeringii.
authors: ['Wu J', 'Ma H', 'Xu X', 'Qiao N', 'Guo S', 'Liu F', 'Zhang D', 'Zhou L']
source: Ann Bot. 2013 Jun;111(6):1181-7. doi: 10.1093/aob/mct062. Epub 2013 Mar 26.
title: Variation in nutrient-acquisition patterns by mycorrhizal fungi of rare and common orchids explains diversification in a global biodiversity hotspot.
authors: ['Nurfadilah S', 'Swarts ND', 'Dixon KW', 'Lambers H', 'Merritt DJ']
source: Ann Bot. 2013 Jun;111(6):1233-41. doi: 10.1093/aob/mct064. Epub 2013 Mar 26.
title: Bioanalytical method development, validation and quantification of dorsomorphin in rat plasma by LC-MS/MS.
authors: ['Karthikeyan K', 'Mahat MY', 'Chandrasekaran S', 'Gopal K', 'Franklin PX', 'Sivakumar BJ', 'Singh G', 'Narayanan S', 'Gopalan B', 'Khan AA']
source: Biomed Chromatogr. 2013 Aug;27(8):1018-26. doi: 10.1002/bmc.2899. Epub 2013 Mar 21.
title: Cryopreservation of orchid mycorrhizal fungi: a tool for the conservation of endangered species.
authors: ['Ercole E', 'Rodda M', 'Molinatti M', 'Voyron S', 'Perotto S', 'Girlanda M']
source: J Microbiol Methods. 2013 May;93(2):134-7. doi: 10.1016/j.mimet.2013.03.003. Epub 2013 Mar 18.
title: Climate warming alters effects of management on population viability of threatened species: results from a 30-year experimental study on a rare orchid.
authors: ['Sletvold N', 'Dahlgren JP', 'Oien DI', 'Moen A', 'Ehrlen J']
source: Glob Chang Biol. 2013 Sep;19(9):2729-38. doi: 10.1111/gcb.12167. Epub 2013 Jul 14.
title: [Molecular characterization of a mitogen-activated protein kinase gene DoMPK1 in Dendrobium officinale].
authors: ['Zhang G', 'Zhao MM', 'Song C', 'Zhang DW', 'Li B', 'Guo SX']
source: Yao Xue Xue Bao. 2012 Dec;47(12):1703-9.
title: Contributions of covariance: decomposing the components of stochastic population growth in Cypripedium calceolus.
authors: ['Davison R', 'Nicole F', 'Jacquemyn H', 'Tuljapurkar S']
source: Am Nat. 2013 Mar;181(3):410-20. doi: 10.1086/669155. Epub 2013 Jan 18.
title: Phylogenetic and microsatellite markers for Tulasnella (Tulasnellaceae) mycorrhizal fungi associated with Australian orchids.
authors: ['Ruibal MP', 'Peakall R', 'Smith LM', 'Linde CC']
source: Appl Plant Sci. 2013 Mar 5;1(3). pii: apps.1200394. doi: 10.3732/apps.1200394. eCollection 2013 Mar.
title: Ascertaining the role of Taiwan as a source for the Austronesian expansion.
authors: ['Mirabal S', 'Cadenas AM', 'Garcia-Bertrand R', 'Herrera RJ']
source: Am J Phys Anthropol. 2013 Apr;150(4):551-64. doi: 10.1002/ajpa.22226. Epub 2013 Feb 26.
title: Acquisition of species-specific perfume blends: influence of habitat-dependent compound availability on odour choices of male orchid bees (Euglossa spp.).
authors: ['Pokorny T', 'Hannibal M', 'Quezada-Euan JJ', 'Hedenstrom E', 'Sjoberg N', 'Bang J', 'Eltz T']
source: Oecologia. 2013 Jun;172(2):417-25. doi: 10.1007/s00442-013-2620-0. Epub 2013 Feb 27.
title: Fertilizing ability of cryopreserved pollinia of Luisia macrantha, an endemic orchid of Western Ghats.
authors: ['Ajeeshkumar S', 'Decruse SW']
source: Cryo Letters. 2013 Jan-Feb;34(1):20-9.
title: A narrowly endemic photosynthetic orchid is non-specific in its mycorrhizal associations.
authors: ['Pandey M', 'Sharma J', 'Taylor DL', 'Yadon VL']
source: Mol Ecol. 2013 Apr;22(8):2341-54. doi: 10.1111/mec.12249. Epub 2013 Feb 21.
title: Global transcriptome analysis and identification of a CONSTANS-like gene family in the orchid Erycina pusilla.
authors: ['Chou ML', 'Shih MC', 'Chan MT', 'Liao SY', 'Hsu CT', 'Huang YT', 'Chen JJ', 'Liao DC', 'Wu FH', 'Lin CS']
source: Planta. 2013 Jun;237(6):1425-41. doi: 10.1007/s00425-013-1850-z. Epub 2013 Feb 16.
title: Overexpression of DOSOC1, an ortholog of Arabidopsis SOC1, promotes flowering in the orchid Dendrobium Chao Praya Smile.
authors: ['Ding L', 'Wang Y', 'Yu H']
source: Plant Cell Physiol. 2013 Apr;54(4):595-608. doi: 10.1093/pcp/pct026. Epub 2013 Feb 8.
title: SPAR methods revealed high genetic diversity within populations and high gene flow of Vanda coerulea Griff ex Lindl (Blue Vanda), an endangered orchid species.
authors: ['Manners V', 'Kumaria S', 'Tandon P']
source: Gene. 2013 Apr 25;519(1):91-7. doi: 10.1016/j.gene.2013.01.037. Epub 2013 Feb 8.
title: Oriental orchid (Cymbidium floribundum) attracts the Japanese honeybee (Apis cerana japonica) with a mixture of 3-hydroxyoctanoic acid and 10-hydroxy-(E)-2-decenoic acid.
authors: ['Sugahara M', 'Izutsu K', 'Nishimura Y', 'Sakamoto F']
source: Zoolog Sci. 2013 Feb;30(2):99-104. doi: 10.2108/zsj.30.99.
title: [Cloning and expression analysis of a calcium-dependent protein kinase gene in Dendrobium officinale in response to mycorrhizal fungal infection].
authors: ['Zhang G', 'Zhao MM', 'Li B', 'Song C', 'Zhang DW', 'Guo SX']
source: Yao Xue Xue Bao. 2012 Nov;47(11):1548-54.
title: Sperm elution: an improved two phase recovery method for sexual assault samples.
authors: ['Hulme P', 'Lewis J', 'Davidson G']
source: Sci Justice. 2013 Mar;53(1):28-33. doi: 10.1016/j.scijus.2012.05.003. Epub 2012 May 28.
title: Marital satisfaction and physical health: evidence for an orchid effect.
authors: ['South SC', 'Krueger RF']
source: Psychol Sci. 2013 Mar 1;24(3):373-8. doi: 10.1177/0956797612453116. Epub 2013 Jan 28.
title: Optimizing virus-induced gene silencing efficiency with Cymbidium mosaic virus in Phalaenopsis flower.
authors: ['Hsieh MH', 'Lu HC', 'Pan ZJ', 'Yeh HH', 'Wang SS', 'Chen WH', 'Chen HH']
source: Plant Sci. 2013 Mar;201-202:25-41. doi: 10.1016/j.plantsci.2012.11.003. Epub 2012 Nov 26.
title: Diversity and evolutionary patterns of bacterial gut associates of corbiculate bees.
authors: ['Koch H', 'Abrol DP', 'Li J', 'Schmid-Hempel P']
source: Mol Ecol. 2013 Apr;22(7):2028-44. doi: 10.1111/mec.12209. Epub 2013 Jan 24.
title: Fungal host specificity is not a bottleneck for the germination of Pyroleae species (Ericaceae) in a Bavarian forest.
authors: ['Hynson NA', 'Weiss M', 'Preiss K', 'Gebauer G', 'Treseder KK']
source: Mol Ecol. 2013 Mar;22(5):1473-81. doi: 10.1111/mec.12180. Epub 2013 Jan 24.
title: Morphological, ecological and genetic aspects associated with endemism in the Fly Orchid group.
authors: ['Triponez Y', 'Arrigo N', 'Pellissier L', 'Schatz B', 'Alvarez N']
source: Mol Ecol. 2013 Mar;22(5):1431-46. doi: 10.1111/mec.12169. Epub 2013 Jan 21.
title: Australian orchids and the doctors they commemorate.
authors: ['Pearn JH']
source: Med J Aust. 2013 Jan 21;198(1):52-4.
title: Orchidstra: an integrated orchid functional genomics database.
authors: ['Su CL', 'Chao YT', 'Yen SH', 'Chen CY', 'Chen WC', 'Chang YC', 'Shih MC']
source: Plant Cell Physiol. 2013 Feb;54(2):e11. doi: 10.1093/pcp/pct004. Epub 2013 Jan 16.
title: First Report of Sclerotium Rot on Cymbidium Orchids Caused by Sclerotium rolfsii in Korea.
authors: ['Han KS', 'Lee SC', 'Lee JS', 'Soh JW', 'Kim S']
source: Mycobiology. 2012 Dec;40(4):263-4. doi: 10.5941/MYCO.2012.40.4.263. Epub 2012 Dec 26.
title: Genetic linkage map of EST-SSR and SRAP markers in the endangered Chinese endemic herb Dendrobium (Orchidaceae).
authors: ['Lu JJ', 'Wang S', 'Zhao HY', 'Liu JJ', 'Wang HZ']
source: Genet Mol Res. 2012 Dec 21;11(4):4654-67. doi: 10.4238/2012.December.21.1.
title: OrchidBase 2.0: comprehensive collection of Orchidaceae floral transcriptomes.
authors: ['Tsai WC', 'Fu CH', 'Hsiao YY', 'Huang YM', 'Chen LJ', 'Wang M', 'Liu ZJ', 'Chen HH']
source: Plant Cell Physiol. 2013 Feb;54(2):e7. doi: 10.1093/pcp/pcs187. Epub 2013 Jan 10.
title: Adding perches for cross-pollination ensures the reproduction of a self-incompatible orchid.
authors: ['Liu ZJ', 'Chen LJ', 'Liu KW', 'Li LQ', 'Rao WH', 'Zhang YT', 'Tang GD', 'Huang LQ']
source: PLoS One. 2013;8(1):e53695. doi: 10.1371/journal.pone.0053695. Epub 2013 Jan 7.
title: Aerial roots of epiphytic orchids: the velamen radicum and its role in water and nutrient uptake.
authors: ['Zotz G', 'Winkler U']
source: Oecologia. 2013 Mar;171(3):733-41. doi: 10.1007/s00442-012-2575-6. Epub 2013 Jan 6.
title: The OitaAG and OitaSTK genes of the orchid Orchis italica: a comparative analysis with other C- and D-class MADS-box genes.
authors: ['Salemme M', 'Sica M', 'Gaudio L', 'Aceto S']
source: Mol Biol Rep. 2013 May;40(5):3523-35. doi: 10.1007/s11033-012-2426-x. Epub 2013 Jan 1.
title: Mycorrhizal preference promotes habitat invasion by a native Australian orchid: Microtis media.
authors: ['De Long JR', 'Swarts ND', 'Dixon KW', 'Egerton-Warburton LM']
source: Ann Bot. 2013 Mar;111(3):409-18. doi: 10.1093/aob/mcs294. Epub 2012 Dec 28.
title: [Plant rhabdoviruses with bipartite genomes].
authors: ['Kondo H']
source: Uirusu. 2013;63(2):143-54.
title: Functional characterization of Candida albicans Hos2 histone deacetylase.
authors: ['Karthikeyan G', 'Paul-Satyaseela M', 'Dhatchana Moorthy N', 'Gopalaswamy R', 'Narayanan S']
source: F1000Res. 2013 Nov 11;2:238. doi: 10.12688/f1000research.2-238.v3. eCollection 2013.
title: Eufriesea zhangi sp. n. (Hymenoptera: Apidae: Euglossina), a new orchid bee from Brazil revealed by molecular and morphological characters.
authors: ['Nemesio A', 'Junior JE', 'Santos FR']
source: Zootaxa. 2013 Feb 4;3609:568-82. doi: 10.11646/zootaxa.3609.6.2.
title: Specificity and preference of mycorrhizal associations in two species of the genus Dendrobium (Orchidaceae).
authors: ['Xing X', 'Ma X', 'Deng Z', 'Chen J', 'Wu F', 'Guo S']
source: Mycorrhiza. 2013 May;23(4):317-24. doi: 10.1007/s00572-012-0473-8. Epub 2012 Dec 28.
title: Transcriptomic analysis of floral organs from Phalaenopsis orchid by using oligonucleotide microarray.
authors: ['Hsiao YY', 'Huang TH', 'Fu CH', 'Huang SC', 'Chen YJ', 'Huang YM', 'Chen WH', 'Tsai WC', 'Chen HH']
source: Gene. 2013 Apr 10;518(1):91-100. doi: 10.1016/j.gene.2012.11.069. Epub 2012 Dec 20.
title: mRNA profiling using a minimum of five mRNA markers per body fluid and a novel scoring method for body fluid identification.
authors: ['Roeder AD', 'Haas C']
source: Int J Legal Med. 2013 Jul;127(4):707-21. doi: 10.1007/s00414-012-0794-3. Epub 2012 Dec 20.
title: Monophyly or paraphyly--the taxonomy of Holcoglossum (Aeridinae: Orchidaceae).
authors: ['Xiang X', 'Li D', 'Jin X', 'Hu H', 'Zhou H', 'Jin W', 'Lai Y']
source: PLoS One. 2012;7(12):e52050. doi: 10.1371/journal.pone.0052050. Epub 2012 Dec 14.
title: A comparative study of vitrification and encapsulation-vitrification for cryopreservation of protocorms of Cymbidium eburneum L., a threatened and vulnerable orchid of India.
authors: ['Gogoi K', 'Kumaria S', 'Tandon P']
source: Cryo Letters. 2012 Nov-Dec;33(6):443-52.
title: Understanding the association between injecting and sexual risk behaviors of injecting drug users in Manipur and Nagaland, India.
authors: ['Suohu K', 'Humtsoe C', 'Saggurti N', 'Sabarwal S', 'Mahapatra B', 'Kermode M']
source: Harm Reduct J. 2012 Dec 18;9:40. doi: 10.1186/1477-7517-9-40.
title: A new antibacterial phenanthrenequinone from Dendrobium sinense.
authors: ['Chen XJ', 'Mei WL', 'Zuo WJ', 'Zeng YB', 'Guo ZK', 'Song XQ', 'Dai HF']
source: J Asian Nat Prod Res. 2013;15(1):67-70. doi: 10.1080/10286020.2012.740473. Epub 2012 Dec 11.
title: The Doctrine of Signatures, Materia Medica of Orchids, and the Contributions of Doctor - Orchidologists.
authors: ['Pearn J']
source: Vesalius. 2012 Dec;18(2):99-106.
title: Reference-free comparative genomics of 174 chloroplasts.
authors: ['Kua CS', 'Ruan J', 'Harting J', 'Ye CX', 'Helmus MR', 'Yu J', 'Cannon CH']
source: PLoS One. 2012;7(11):e48995. doi: 10.1371/journal.pone.0048995. Epub 2012 Nov 20.
title: Multiple shoot induction from axillary bud cultures of the medicinal orchid, Dendrobium longicornu.
authors: ['Dohling S', 'Kumaria S', 'Tandon P']
source: AoB Plants. 2012;2012:pls032. doi: 10.1093/aobpla/pls032. Epub 2012 Nov 5.
title: Germination failure is not a critical stage of reproductive isolation between three congeneric orchid species.
authors: ['De Hert K', 'Honnay O', 'Jacquemyn H']
source: Am J Bot. 2012 Nov;99(11):1884-90. doi: 10.3732/ajb.1200381. Epub 2012 Nov 6.
title: Three new cryptic species of Euglossa from Brazil (Hymenoptera, Apidae).
authors: ['Nemesio A', 'Engel MS']
source: Zookeys. 2012;(222):47-68. doi: 10.3897/zookeys.222.3382. Epub 2012 Sep 21.
title: Two new species of Euglossa from South America, with notes on their taxonomic affinities (Hymenoptera, Apidae).
authors: ['Hinojosa-Diaz IA', 'Nemesio A', 'Engel MS']
source: Zookeys. 2012;(221):63-79. doi: 10.3897/zookeys.221.3659. Epub 2012 Sep 13.
title: Genetic variation and structure within 3 endangered Calanthe species (Orchidaceae) from Korea: inference of population-establishment history and implications for conservation.
authors: ['Chung MY', 'Lopez-Pujol J', 'Maki M', 'Moon MO', 'Hyun JO', 'Chung MG']
source: J Hered. 2013 Mar;104(2):248-62. doi: 10.1093/jhered/ess088. Epub 2012 Nov 1.
title: Briacavatolides D-F, new briaranes from the Taiwanese octocoral Briareum excavatum.
authors: ['Wang SK', 'Yeh TT', 'Duh CY']
source: Mar Drugs. 2012 Sep;10(9):2103-10. doi: 10.3390/md10092103. Epub 2012 Sep 24.
title: Observational Research in Childhood Infectious Diseases (ORChID): a dynamic birth cohort study.
authors: ['Lambert SB', 'Ware RS', 'Cook AL', 'Maguire FA', 'Whiley DM', 'Bialasiewicz S', 'Mackay IM', 'Wang D', 'Sloots TP', 'Nissen MD', 'Grimwood K']
source: BMJ Open. 2012 Oct 31;2(6). pii: e002134. doi: 10.1136/bmjopen-2012-002134. Print 2012.
title: Microsatellite primers for the neotropical epiphyte Epidendrum firmum (Orchidaceae).
authors: ['Kartzinel TR', 'Trapnell DW', 'Glenn TC']
source: Am J Bot. 2012 Nov;99(11):e450-2. doi: 10.3732/ajb.1200232. Epub 2012 Oct 31.
title: The production of a key floral volatile is dependent on UV light in a sexually deceptive orchid.
authors: ['Falara V', 'Amarasinghe R', 'Poldy J', 'Pichersky E', 'Barrow RA', 'Peakall R']
source: Ann Bot. 2013 Jan;111(1):21-30. doi: 10.1093/aob/mcs228. Epub 2012 Oct 22.
title: Exotic and indigenous viruses infect wild populations and captive collections of temperate terrestrial orchids (Diuris species) in Australia.
authors: ['Wylie SJ', 'Li H', 'Dixon KW', 'Richards H', 'Jones MG']
source: Virus Res. 2013 Jan;171(1):22-32. doi: 10.1016/j.virusres.2012.10.003. Epub 2012 Oct 23.
title: Orchid fleck virus: an unclassified bipartite, negative-sense RNA plant virus.
authors: ['Peng de W', 'Zheng GH', 'Zheng ZZ', 'Tong QX', 'Ming YL']
source: Arch Virol. 2013 Feb;158(2):313-23. doi: 10.1007/s00705-012-1506-5. Epub 2012 Oct 16.
title: Untangling above- and belowground mycorrhizal fungal networks in tropical orchids.
authors: ['Leake JR', 'Cameron DD']
source: Mol Ecol. 2012 Oct;21(20):4921-4. doi: 10.1111/j.1365-294X.2012.05718.x.
title: Pre-adaptations and the evolution of pollination by sexual deception: Cope's rule of specialization revisited.
authors: ['Vereecken NJ', 'Wilson CA', 'Hotling S', 'Schulz S', 'Banketov SA', 'Mardulyn P']
source: Proc Biol Sci. 2012 Dec 7;279(1748):4786-94. doi: 10.1098/rspb.2012.1804. Epub 2012 Oct 10.
title: The mirror crack'd: both pigment and structure contribute to the glossy blue appearance of the mirror orchid, Ophrys speculum.
authors: ['Vignolini S', 'Davey MP', 'Bateman RM', 'Rudall PJ', 'Moyroud E', 'Tratt J', 'Malmgren S', 'Steiner U', 'Glover BJ']
source: New Phytol. 2012 Dec;196(4):1038-47. doi: 10.1111/j.1469-8137.2012.04356.x. Epub 2012 Oct 9.
###Markdown
The output for this looks like:

```
title: Sex pheromone mimicry in the early spider orchid (ophrys sphegodes):
patterns of hydrocarbons as the key mechanism for pollination by sexual
deception [In Process Citation]
authors: ['Schiestl FP', 'Ayasse M', 'Paulus HF', 'Lofstedt C', 'Hansson BS',
'Ibarra F', 'Francke W']
source: J Comp Physiol [A] 2000 Jun;186(6):567-74
```

Especially interesting to note is the list of authors, which is returned as a standard Python list. This makes it easy to manipulate and search using standard Python tools. For instance, we could loop through a whole bunch of entries searching for a particular author with code like the following:
###Code
search_author = "Waits T"
for record in records:
    if "AU" not in record:
        # skip records that have no author list at all
        continue
    if search_author in record["AU"]:
        print("Author %s found: %s" % (search_author, record["SO"]))
###Output
_____no_output_____
###Markdown
Hopefully this section gave you an idea of the power and flexibility of the Entrez and Medline interfaces and how they can be used together.

Searching, downloading, and parsing Entrez Nucleotide records

Here we'll show a simple example of performing a remote Entrez query. In section \[sec:orchids\] of the parsing examples, we talked about using NCBI's Entrez website to search the NCBI nucleotide databases for info on Cypripedioideae, our friends the lady slipper orchids. Now, we'll look at how to automate that process using a Python script. In this example, we'll just show how to connect, get the results, and parse them, with the Entrez module doing all of the work.

First, we use EGQuery to find out the number of results we will get before actually downloading them. EGQuery will tell us how many search results were found in each of the databases, but for this example we are only interested in nucleotides:
###Code
from Bio import Entrez
Entrez.email = "[email protected]" # Always tell NCBI who you are
handle = Entrez.egquery(term="Cypripedioideae")
record = Entrez.read(handle)
for row in record["eGQueryResult"]:
if row["DbName"]=="nuccore":
print(row["Count"])
###Output
4247
###Markdown
So, we expect to find 814 Entrez Nucleotide records (this is the number I obtained in 2008; as the count of 4247 in the output above shows, it has indeed increased since then). If you find some ridiculously high number of hits, you may want to reconsider if you really want to download all of them, which is our next step:
###Code
from Bio import Entrez
# retmax caps how many matching IDs are returned (814 was the full count in 2008)
handle = Entrez.esearch(db="nucleotide", term="Cypripedioideae", retmax=814)
record = Entrez.read(handle)
###Output
_____no_output_____
###Markdown
Here, `record` is a Python dictionary containing the search results and some auxiliary information. Just for information, let's look at what is stored in this dictionary:
###Code
print(record.keys())
###Output
dict_keys(['TranslationStack', 'IdList', 'TranslationSet', 'QueryTranslation', 'Count', 'RetStart', 'RetMax'])
###Markdown
First, let’s check how many results were found:
###Code
print(record["Count"])
###Output
4247
###Markdown
which matches the count EGQuery reported above. Note, however, that because we passed `retmax=814` to `esearch`, only the first 814 of those results are stored in `record['IdList']`:
###Code
len(record["IdList"])
###Output
_____no_output_____
###Markdown
Let’s look at the first five results:
###Code
record["IdList"][:5]
###Output
_____no_output_____
###Markdown
We can download these records using `efetch`. While you could download these records one by one, to reduce the load on NCBI's servers, it is better to fetch a bunch of records at the same time, as shown below. However, in this situation you should ideally be using the history feature described later in Section [History and WebEnv](Using-the-history-and-WebEnv); a minimal sketch of that approach follows this example.
###Code
# Fetch the first five records in one request by joining their IDs with commas
idlist = ",".join(record["IdList"][:5])
print(idlist)
handle = Entrez.efetch(db="nucleotide", id=idlist, retmode="xml")
records = Entrez.read(handle)
len(records)
###Output
_____no_output_____
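###Markdown
To give a flavor of the history feature just mentioned, here is a minimal sketch (not part of the original example) of batched downloading with `usehistory`; the batch size of 200 is an arbitrary placeholder choice:

```python
from Bio import Entrez

Entrez.email = "[email protected]"  # Always tell NCBI who you are
# usehistory="y" asks Entrez to keep the matching IDs on the NCBI server
search_handle = Entrez.esearch(db="nucleotide", term="Cypripedioideae", usehistory="y")
search_results = Entrez.read(search_handle)
search_handle.close()
count = int(search_results["Count"])
webenv = search_results["WebEnv"]
query_key = search_results["QueryKey"]
batch_size = 200
for start in range(0, count, batch_size):
    # Each efetch call refers back to the server-side result set
    fetch_handle = Entrez.efetch(
        db="nucleotide",
        rettype="gb",
        retmode="text",
        retstart=start,
        retmax=batch_size,
        webenv=webenv,
        query_key=query_key,
    )
    data = fetch_handle.read()
    fetch_handle.close()
```

See Section [History and WebEnv](Using-the-history-and-WebEnv) for the full treatment.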
###Markdown
Each of these records corresponds to one GenBank record.
###Code
print(records[0].keys())
print(records[0]["GBSeq_primary-accession"])
print(records[0]["GBSeq_other-seqids"])
print(records[0]["GBSeq_definition"])
print(records[0]["GBSeq_organism"])
###Output
Cypripedium calceolus
###Markdown
You could use this to quickly set up searches – but for heavy usage, see Section [History and WebEnv](Using-the-history-and-WebEnv).

Searching, downloading, and parsing GenBank records

The GenBank record format is a very popular method of holding information about sequences, sequence features, and other associated sequence information. The format is a good way to get information from the NCBI databases at https://www.ncbi.nlm.nih.gov/. In this example we'll show how to query the NCBI databases, retrieve the records from the query, and then parse them using `Bio.SeqIO` - something touched on in Section \[sec:SeqIO_GenBank_Online\]. For simplicity, this example *does not* take advantage of the WebEnv history feature – see Section [History and WebEnv](Using-the-history-and-WebEnv) for this.

First, we want to make a query and find out the ids of the records to retrieve. Here we'll do a quick search for one of our favorite organisms, *Opuntia* (prickly-pear cacti). We can do a quick search and get back the GIs (GenBank identifiers) for all of the corresponding records. First we check how many records there are:
###Code
from Bio import Entrez
Entrez.email = "[email protected]" # Always tell NCBI who you are
handle = Entrez.egquery(term="Opuntia AND rpl16")
record = Entrez.read(handle)
for row in record["eGQueryResult"]:
if row["DbName"]=="nuccore":
print(row["Count"])
###Output
26
###Markdown
Now we download the list of GenBank identifiers:
###Code
handle = Entrez.esearch(db="nuccore", term="Opuntia AND rpl16")
record = Entrez.read(handle)
gi_list = record["IdList"]
gi_list
###Output
_____no_output_____
###Markdown
Now we use these GIs to download the GenBank records - note that with older versions of Biopython you had to supply a comma-separated list of GI numbers to Entrez; as of Biopython 1.59 you can pass a list and this is converted for you:
###Code
gi_str = ",".join(gi_list)
handle = Entrez.efetch(db="nuccore", id=gi_str, rettype="gb", retmode="text")
###Output
_____no_output_____
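###Markdown
Before looking at the raw text, a quick aside: the introduction above promised parsing with `Bio.SeqIO`. Here is a minimal sketch (not part of the original example) that assumes `gi_str` from the cell above; it opens its own handle so the `handle` used below is left untouched:

```python
from Bio import Entrez, SeqIO

Entrez.email = "[email protected]"  # Always tell NCBI who you are
fetch_handle = Entrez.efetch(db="nuccore", id=gi_str, rettype="gb", retmode="text")
# "gb" tells SeqIO to expect GenBank flat-file records
for seq_record in SeqIO.parse(fetch_handle, "gb"):
    print("%s, length %i" % (seq_record.id, len(seq_record)))
fetch_handle.close()
```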
###Markdown
If you want to look at the raw GenBank files, you can read from this handle and print out the result:
###Code
text = handle.read()
print(text)
###Output
LOCUS HQ621368 399 bp DNA linear PLN 26-FEB-2012
DEFINITION Opuntia decumbens voucher Martinez & Eggli 146a (ZSS) ribosomal
protein L16 (rpl16) gene, partial cds; chloroplast.
ACCESSION HQ621368
VERSION HQ621368.1 GI:377581039
KEYWORDS .
SOURCE chloroplast Opuntia decumbens
ORGANISM Opuntia decumbens
Eukaryota; Viridiplantae; Streptophyta; Embryophyta; Tracheophyta;
Spermatophyta; Magnoliophyta; eudicotyledons; Gunneridae;
Pentapetalae; Caryophyllales; Cactineae; Cactaceae; Opuntioideae;
Opuntia.
REFERENCE 1 (bases 1 to 399)
AUTHORS Arakaki,M., Christin,P.A., Nyffeler,R., Lendel,A., Eggli,U.,
Ogburn,R.M., Spriggs,E., Moore,M.J. and Edwards,E.J.
TITLE Contemporaneous and recent radiations of the world's major
succulent plant lineages
JOURNAL Proc. Natl. Acad. Sci. U.S.A. 108 (20), 8379-8384 (2011)
PUBMED 21536881
REFERENCE 2 (bases 1 to 399)
AUTHORS Arakaki,M., Christin,P.-A., Nyffeler,R., Eggli,U., Ogburn,R.M.,
Spriggs,E., Moore,M.J. and Edwards,E.J.
TITLE Direct Submission
JOURNAL Submitted (15-NOV-2010) Department of Ecology and Evolutionary
Biology, Brown University, 80 Waterman St., Providence, RI 02912,
USA
COMMENT ##Assembly-Data-START##
Assembly Method :: MIRA V3rc4; Geneious v. 4.8
Sequencing Technology :: 454
##Assembly-Data-END##
FEATURES Location/Qualifiers
source 1..399
/organism="Opuntia decumbens"
/organelle="plastid:chloroplast"
/mol_type="genomic DNA"
/specimen_voucher="Martinez & Eggli 146a (ZSS)"
/db_xref="taxon:867482"
/tissue_type="stem"
/note="authority: Opuntia decumbens Salm-Dyck"
gene <1..>399
/gene="rpl16"
CDS <1..>399
/gene="rpl16"
/codon_start=1
/transl_table=11
/product="ribosomal protein L16"
/protein_id="AFB70658.1"
/db_xref="GI:377581040"
/translation="NPKRTRFCKQHRGRMKGISYRGNRICFGRYALQALEPAWITSRQ
IEAGRRAMTRNARRGGKIWVRIFPDKPVTVKSAESRMGSGKGSHLYWVVVVKPGRILY
EISGVSENIARRAISIAASKMPVRTQFIISG"
ORIGIN
1 aaccccaaaa gaaccagatt ctgtaaacaa catagaggaa gaatgaaggg aatatcttat
61 cgggggaatc gtatttgttt cggaagatat gctcttcagg cacttgagcc tgcttggatc
121 acgtctagac aaatagaagc aggtcggcga gcaatgacgc gaaatgcacg ccgcggtgga
181 aaaatatggg tacgtatatt tccagacaaa ccagttacag taaaatctgc ggaaagccgt
241 atgggttcgg ggaaaggatc ccacctatat tgggtagttg ttgtcaaacc cggtcgaata
301 ctttatgaaa taagcggagt atcagaaaat atagcccgaa gggctatctc gatagcggca
361 tctaaaatgc ctgtacgaac tcaattcatt atttcagga
//
LOCUS HM041482 1197 bp DNA linear PLN 03-MAY-2011
DEFINITION Cylindropuntia tunicata ribosomal protein L16-like (rpl16) gene,
partial sequence; chloroplast.
ACCESSION HM041482
VERSION HM041482.1 GI:330887241
KEYWORDS .
SOURCE chloroplast Cylindropuntia tunicata
ORGANISM Cylindropuntia tunicata
Eukaryota; Viridiplantae; Streptophyta; Embryophyta; Tracheophyta;
Spermatophyta; Magnoliophyta; eudicotyledons; Gunneridae;
Pentapetalae; Caryophyllales; Cactineae; Cactaceae; Opuntioideae;
Cylindropuntia.
REFERENCE 1 (bases 1 to 1197)
AUTHORS Hernandez-Hernandez,T., Hernandez,H.M., De-Nova,J.A., Puente,R.,
Eguiarte,L.E. and Magallon,S.
TITLE Phylogenetic relationships and evolution of growth form in
Cactaceae (Caryophyllales, Eudicotyledoneae)
JOURNAL Am. J. Bot. 98 (1), 44-61 (2011)
PUBMED 21613084
REFERENCE 2 (bases 1 to 1197)
AUTHORS Hernandez-Hernandez,T., Magallon,S.A., Hernandez,H.M., De-Nova,A.,
Puente,R. and Eguiarte,L.E.
TITLE Direct Submission
JOURNAL Submitted (17-MAR-2010) Departamento de Botanica, Instituto de
Biologia, Universidad Nacional Autonoma de Mexico, 3er Circuito de
Ciudad Universitaria, Ciudad Universitaria, Coyoacan, Distrito
Federal C.P. 04510, Mexico
FEATURES Location/Qualifiers
source 1..1197
/organism="Cylindropuntia tunicata"
/organelle="plastid:chloroplast"
/mol_type="genomic DNA"
/db_xref="taxon:766221"
gene <1..>1197
/gene="rpl16"
misc_feature <1094..>1197
/gene="rpl16"
/note="similar to ribosomal protein L16"
ORIGIN
1 gtgatatacg aaacagtaag agcccatagt atgaagtatg aactaataac tatagaacta
61 ataaccaact catcgcatca cattatctgg atccaaagaa gcagtcaaga taggatattt
121 tggtcctatc attgcagcaa ctgaattttt tttttcataa acaagaaatc gaatgagttg
181 tcaagcaaaa gaaaaaaaaa aaaagaaaaa tatacnttaa aggaggggga tgcggataaa
241 tggaaaggcg aaagaaagaa aaaaatgaat ctaaatgata tacgattcca ctatgtaagg
301 tctttgaatc atatcataaa agacaatgta ataaagcatg aatacagatt cacacataat
361 tatctgatat gaatctattc atagaaaaaa gaaaaaagta agagcctccg gccaataaag
421 actaagaggg gttggctcaa aaacaaagtt cattaagagc tcccattgta gaattcagac
481 ctaatcatta atcaagaagc gatgggaacg atgtaatcca tgaatacaga agattcaatt
541 gaaaaaagaa tcctaatgat tcattgggga ggatggcgga acgaaccaga gaccaattca
601 tctattctga aaagtgataa actaatccta taaaactaaa atagatattg aaagagtaaa
661 tattcgcccg cgaaaattcc ttttttatta aattgctcat attttatttt agcaatgcaa
721 tctaataaaa tatatctata caaaaaaaca tagacaaact atatatataa tatttcaaat
781 tcccttatat atccaaatat aaaaatatct aataaattag atgaatatca aagaatctat
841 tgatttagtg tattattaaa tgtatatctt aattcaatat tattattcta ttcattttta
901 ttcattttca aatttataat atattaatct atatattaat ttagaattct attctaattc
961 gaattcaatt tttaaatatt cattcatatt caattaaaat tgaaattttt tcattcgcga
1021 ggagccggat gagaagaaac tctcatgtcc ggttctgtag tagagatgga attaagaaaa
1081 aaccatcaac tataacccca aaagaaccag attctgtaaa caacatagag gaagaatgaa
1141 gggaatatct tatcggggga atcgtatttg tttcggaaaa tatgctctca ggcacga
//
LOCUS HM041481 1200 bp DNA linear PLN 03-MAY-2011
DEFINITION Opuntia palmadora ribosomal protein L16-like (rpl16) gene, partial
sequence; chloroplast.
ACCESSION HM041481
VERSION HM041481.1 GI:330887240
KEYWORDS .
SOURCE chloroplast Opuntia palmadora
ORGANISM Opuntia palmadora
Eukaryota; Viridiplantae; Streptophyta; Embryophyta; Tracheophyta;
Spermatophyta; Magnoliophyta; eudicotyledons; Gunneridae;
Pentapetalae; Caryophyllales; Cactineae; Cactaceae; Opuntioideae;
Opuntia.
REFERENCE 1 (bases 1 to 1200)
AUTHORS Hernandez-Hernandez,T., Hernandez,H.M., De-Nova,J.A., Puente,R.,
Eguiarte,L.E. and Magallon,S.
TITLE Phylogenetic relationships and evolution of growth form in
Cactaceae (Caryophyllales, Eudicotyledoneae)
JOURNAL Am. J. Bot. 98 (1), 44-61 (2011)
PUBMED 21613084
REFERENCE 2 (bases 1 to 1200)
AUTHORS Hernandez-Hernandez,T., Magallon,S.A., Hernandez,H.M., De-Nova,A.,
Puente,R. and Eguiarte,L.E.
TITLE Direct Submission
JOURNAL Submitted (17-MAR-2010) Departamento de Botanica, Instituto de
Biologia, Universidad Nacional Autonoma de Mexico, 3er Circuito de
Ciudad Universitaria, Ciudad Universitaria, Coyoacan, Distrito
Federal C.P. 04510, Mexico
FEATURES Location/Qualifiers
source 1..1200
/organism="Opuntia palmadora"
/organelle="plastid:chloroplast"
/mol_type="genomic DNA"
/db_xref="taxon:1001118"
gene <1..>1200
/gene="rpl16"
misc_feature <1098..>1200
/gene="rpl16"
/note="similar to ribosomal protein L16"
ORIGIN
1 tgatatacga aaagtaagag cccatagtat gaagtatgaa ctaataacta tagaactaat
61 aaccaactca tcgcatcaca ttatctggat ccaaagaagc agtcaagata ggatattttg
121 gtcctatcat tgcagcaact gaattttttt ttcataaaca agaaatcaaa tgagttgtca
181 agcaaaagaa aaaaaaaaga aaaatatacn ttaaaggagg gggatgcgga taaatggaaa
241 ggcgaaagaa agaaaaaaat gaatctaaat gatatacgat tccactatgt aaggtctttg
301 aatcatatca taaaagacaa tgtaataaag catgaataca gattcacaca taattatctg
361 atatgaatct attcatagaa aaaagaaaaa agtaagagcc tccgggccaa taaagactaa
421 gagggttggg ctcaagaaca aagttcatta agagctccat tgtagaattc agacctaatc
481 attaatcaag aagcgatggg aacgatgtaa tccatgaata cagaagattc aattgaaaaa
541 gaatcctaat gattcattgg gaaggatggc ggaacgaacc agagaccaat tcatctattc
601 tgaaaagtga taaactaatc ctataaaact aaaatagata ttgaaagagt aaatattcgc
661 ccgcgaaaat tcctttttta ttaaattgct cacattttat tttagcaatg caatctaata
721 aaatatatct atacaaaaaa atatagacaa actatatata taatatattt caaatttcct
781 tatatatcct aatataaaaa tatctaataa attagatgaa tatcaaagaa tctattgatt
841 tagtgtatta ttaaatgtat atcttaattc aatattatta ttctattcat ttttattatt
901 catttttatt cattttcaaa tttagaatat attaatctat atattaattt agaattctat
961 tctaattcga attcaatttt taaatattca tattcaatta aaattgaaat tttttcattc
1021 gcgaggagcc ggatgagaag aaactctcac gtccggttct gtagtagagg tggaattaag
1081 aaaaaaccat caactataac cccaaaagaa ccagattctg taaacaacat agaggaagaa
1141 tgaagggaat atcttatcgg gggaatcgta tttgtttcgg aagatatgct ctcagcacga
//
LOCUS HM041480 1153 bp DNA linear PLN 03-MAY-2011
DEFINITION Opuntia microdasys ribosomal protein L16-like (rpl16) gene, partial
sequence; chloroplast.
ACCESSION HM041480
VERSION HM041480.1 GI:330887239
KEYWORDS .
SOURCE chloroplast Opuntia microdasys
ORGANISM Opuntia microdasys
Eukaryota; Viridiplantae; Streptophyta; Embryophyta; Tracheophyta;
Spermatophyta; Magnoliophyta; eudicotyledons; Gunneridae;
Pentapetalae; Caryophyllales; Cactineae; Cactaceae; Opuntioideae;
Opuntia.
REFERENCE 1 (bases 1 to 1153)
AUTHORS Hernandez-Hernandez,T., Hernandez,H.M., De-Nova,J.A., Puente,R.,
Eguiarte,L.E. and Magallon,S.
TITLE Phylogenetic relationships and evolution of growth form in
Cactaceae (Caryophyllales, Eudicotyledoneae)
JOURNAL Am. J. Bot. 98 (1), 44-61 (2011)
PUBMED 21613084
REFERENCE 2 (bases 1 to 1153)
AUTHORS Hernandez-Hernandez,T., Magallon,S.A., Hernandez,H.M., De-Nova,A.,
Puente,R. and Eguiarte,L.E.
TITLE Direct Submission
JOURNAL Submitted (17-MAR-2010) Departamento de Botanica, Instituto de
Biologia, Universidad Nacional Autonoma de Mexico, 3er Circuito de
Ciudad Universitaria, Ciudad Universitaria, Coyoacan, Distrito
Federal C.P. 04510, Mexico
FEATURES Location/Qualifiers
source 1..1153
/organism="Opuntia microdasys"
/organelle="plastid:chloroplast"
/mol_type="genomic DNA"
/db_xref="taxon:169217"
gene <1..>1153
/gene="rpl16"
misc_feature <1079..>1153
/gene="rpl16"
/note="similar to ribosomal protein L16"
ORIGIN
1 gcccatagta tgaagtatga actaataact atagaactaa taaccaactc atcgcatcac
61 attatctgga tccaaagaag cagtcaagat aggatatttt ggtcctatca ttgcagcaac
121 tgaatttttt ttttcataaa caagaaatca aatgagttgt caagcaaaag aaaaaaaaaa
181 aaaaaaatat actttaaggg ggggggatgg ggataaaggg aaaggggaaa aaaaaaaaaa
241 aatgaatcta aatgatatac aattccacta tgaaaggtct ttgaatcata tcaaaaaaaa
301 caatgtaata aagcaggaat acagattccc acataattat ctgatatgaa tcttttcata
361 aaaaaaaaaa aaaagtaaga gcctccggcc aataaagact aagagggttg gctcaagaac
421 aaagttcatt aagggctcca ttgtagaatt cagacctaat cattaatcaa gaggcgatgg
481 gaacgatgta atccatgaat acagaagatt caattgaaaa agaatcctaa tgattcattg
541 ggaaggatgg cggaacgaac cagagaccaa ttcatctatt ctgaaaagtg aaaaactaat
601 cctataaaac taaaatagat attgaaagag taaatattcg cccgcgaaaa ttcctttttt
661 attaaattgc tcacatttta ttttagcaat gcaatctaat aaaatatatc tatacaaaaa
721 aatatagaca aactatatat ataatatatt tcaaatttcc ttatatatcc taatataaaa
781 atatctaata aattagatga atatcaaaga atctattgat ttagtgtatt attaaatgta
841 tatcttaatt caatattatt attctattca tttttattat tcatttttat tcattttcaa
901 atttagaata tattaatcta tatattaatt tataattcta ttctaattcg aattcaattt
961 ttaaatattc atattcaatt aaaattgaaa ttttttcatt cgcgaggagc cggatgagaa
1021 gaaactctca cgtccggttc tgtagtagag gtggaattaa gaaaaaacca tcaactataa
1081 ccccaaaaga accagattct gtaaacaaca tagaggaaga atgaagggaa tatcttatcg
1141 ggggatatcg tat
//
LOCUS HM041479 1197 bp DNA linear PLN 03-MAY-2011
DEFINITION Opuntia megasperma ribosomal protein L16-like (rpl16) gene, partial
sequence; chloroplast.
ACCESSION HM041479
VERSION HM041479.1 GI:330887238
KEYWORDS .
SOURCE chloroplast Opuntia megasperma
ORGANISM Opuntia megasperma
Eukaryota; Viridiplantae; Streptophyta; Embryophyta; Tracheophyta;
Spermatophyta; Magnoliophyta; eudicotyledons; Gunneridae;
Pentapetalae; Caryophyllales; Cactineae; Cactaceae; Opuntioideae;
Opuntia.
REFERENCE 1 (bases 1 to 1197)
AUTHORS Hernandez-Hernandez,T., Hernandez,H.M., De-Nova,J.A., Puente,R.,
Eguiarte,L.E. and Magallon,S.
TITLE Phylogenetic relationships and evolution of growth form in
Cactaceae (Caryophyllales, Eudicotyledoneae)
JOURNAL Am. J. Bot. 98 (1), 44-61 (2011)
PUBMED 21613084
REFERENCE 2 (bases 1 to 1197)
AUTHORS Hernandez-Hernandez,T., Magallon,S.A., Hernandez,H.M., De-Nova,A.,
Puente,R. and Eguiarte,L.E.
TITLE Direct Submission
JOURNAL Submitted (17-MAR-2010) Departamento de Botanica, Instituto de
Biologia, Universidad Nacional Autonoma de Mexico, 3er Circuito de
Ciudad Universitaria, Ciudad Universitaria, Coyoacan, Distrito
Federal C.P. 04510, Mexico
FEATURES Location/Qualifiers
source 1..1197
/organism="Opuntia megasperma"
/organelle="plastid:chloroplast"
/mol_type="genomic DNA"
/db_xref="taxon:1001117"
gene <1..>1197
/gene="rpl16"
misc_feature <1098..>1197
/gene="rpl16"
/note="similar to ribosomal protein L16"
ORIGIN
1 gatatacgaa aagtaagagc ccatagtatg aagtatgaac taataactat agaactaata
61 accaactcat cgcatcacat tatccggatc caaagaagca gtcaagatag gatattttgg
121 tcctatcatt gcagcaactg aatttttttt tcataaacaa gaaatcaaat gagttgtcaa
181 gcaaaagaaa aaaaaaaaag aaaaatatac tttaaaggag ggggatgcgg ataaatggaa
241 aggcgaaaga aagaaaaaaa tgaatctaaa tgatatacga ttccnctatg taaggtcttt
301 gaatcatatc ataaaagaca atgtaataaa gcatgaatac agattcacac ataattatct
361 gatatgaatc tattcataga aaaaagaaaa aagtaagagc ctccgggcca ataaagacta
421 agagggttgg ctcaagaaca aagttcatta agagctccat tgtagaattc agacctaatc
481 attaatcaag aagcgatggg aacgatgtaa tccatgaata cagaagattc aattgaaaaa
541 gaatcctaat gattcattgg gaaggatggc ggaacgaacc agagaccaat tcatctattc
601 tgaaaagtga taaactaatc ctataaaact aaaatagata ttgaaagagt aaatattcgc
661 ccgcgaaaat tcctttttta ttaaattgct cacattttat tttagcaatg caatctaata
721 aaatatatct atacaaaaaa atatagacaa actatatata taatatattt caaatttcct
781 tatatatcct aatataaaaa tatctaataa attagatgaa tatcaaagaa tctattgatt
841 tagtgtatta ttaaatgtat atcttaattc aatattttta ttctattcat ttttattatt
901 catttttatt cattttcaaa tttagaatat attaatctat atattaattt agaattctat
961 tctaattcga attcaatttt taaatattca tattcaatta aaattgaaat tttttcattc
1021 gcgaggagcc ggatgagaag aaactctcac gtccggttct gtagtagagg tggaattaag
1081 aaaaaaccat caactataac cccaaaagaa ccagattctg taaacaacat agaggaagaa
1141 tgaagggaat atcttatcgg gggaatcgta tttgtttcgg aagatatgct ctcagca
//
LOCUS HM041478 1187 bp DNA linear PLN 03-MAY-2011
DEFINITION Opuntia macbridei ribosomal protein L16-like (rpl16) gene, partial
sequence; chloroplast.
ACCESSION HM041478
VERSION HM041478.1 GI:330887237
KEYWORDS .
SOURCE chloroplast Opuntia macbridei
ORGANISM Opuntia macbridei
Eukaryota; Viridiplantae; Streptophyta; Embryophyta; Tracheophyta;
Spermatophyta; Magnoliophyta; eudicotyledons; Gunneridae;
Pentapetalae; Caryophyllales; Cactineae; Cactaceae; Opuntioideae;
Opuntia.
REFERENCE 1 (bases 1 to 1187)
AUTHORS Hernandez-Hernandez,T., Hernandez,H.M., De-Nova,J.A., Puente,R.,
Eguiarte,L.E. and Magallon,S.
TITLE Phylogenetic relationships and evolution of growth form in
Cactaceae (Caryophyllales, Eudicotyledoneae)
JOURNAL Am. J. Bot. 98 (1), 44-61 (2011)
PUBMED 21613084
REFERENCE 2 (bases 1 to 1187)
AUTHORS Hernandez-Hernandez,T., Magallon,S.A., Hernandez,H.M., De-Nova,A.,
Puente,R. and Eguiarte,L.E.
TITLE Direct Submission
JOURNAL Submitted (17-MAR-2010) Departamento de Botanica, Instituto de
Biologia, Universidad Nacional Autonoma de Mexico, 3er Circuito de
Ciudad Universitaria, Ciudad Universitaria, Coyoacan, Distrito
Federal C.P. 04510, Mexico
FEATURES Location/Qualifiers
source 1..1187
/organism="Opuntia macbridei"
/organelle="plastid:chloroplast"
/mol_type="genomic DNA"
/db_xref="taxon:1001116"
gene <1..>1187
/gene="rpl16"
misc_feature <1090..>1187
/gene="rpl16"
/note="similar to ribosomal protein L16"
ORIGIN
1 aaaagtaaga gcccatagta tgaagtatga actaataact atagaactaa taaccaactc
61 atcgcatcac attatctgga tccaaagaag cagtcaagat aggatatttt ggtcctatca
121 ttgcagcaac tgaatttttt tttcataaac aagaaatcaa atgagttgtc aagcaaaaga
181 aaaaaaaaaa agaaaaatat acattaaagg agggggatgc ggataaatgg aaaggcgaaa
241 gaaagaaaaa aatgaatcta aatgatatac gattccacta tgtaaggtct ttgaatcata
301 tcataaaaga caatgtaata aagcatgaat acagattcac acataattat ctgatatgaa
361 tctattcata gaaaaaagaa aaaagtaaga gcctccggcc aataaagact aagagggttg
421 gctcaagaac aaagttcatt aagggctcca tttgtagaat tcagacctaa tcattaatca
481 agaagcgatg ggaacgatgt aattccatga atacagaaga ttcaattgaa aaagatccta
541 atgattcatt gggaaggatg gcggacgaac cagagaccaa ttcatctatt ctgaaaagtg
601 ataaactaat cctataaaac taaaatagat attgaaagag taaatattcg cccgcgaaaa
661 ttcctttttt attaaattgc tcacatttta ttttagcaat gcaatctaat aaaatatatc
721 tatacaaaaa aaatatagac aaactatata tataatatat ttcaaatttc cttatatatc
781 ctaatataaa aatatctaat aatttagatg aatatcaaag aatctattga tttagtgtat
841 tattaaatgt atatcttaat tcaatattat tattctattc atttttatta ttcattttta
901 ttcattttca aatttagaat atattaatct atatattaat ttagaattct attctaattc
961 gaattcaatt tttaaatatt catattcaat taaaattgaa attttttcat tcgcgaggag
1021 ccggatgaga agaaactctc acgtccggtt ctgtagtaga ggtggaatta agaaaaaacc
1081 atcaactata accccaaaag aaccagattc tgtaaacaac atagaggaag aatgaaggga
1141 atatcttatc gggggaatcg tatttgtttc ggaagatatg ctctcag
//
LOCUS HM041477 1197 bp DNA linear PLN 03-MAY-2011
DEFINITION Cylindropuntia leptocaulis ribosomal protein L16-like (rpl16) gene,
partial sequence; chloroplast.
ACCESSION HM041477
VERSION HM041477.1 GI:330887236
KEYWORDS .
SOURCE chloroplast Cylindropuntia leptocaulis
ORGANISM Cylindropuntia leptocaulis
Eukaryota; Viridiplantae; Streptophyta; Embryophyta; Tracheophyta;
Spermatophyta; Magnoliophyta; eudicotyledons; Gunneridae;
Pentapetalae; Caryophyllales; Cactineae; Cactaceae; Opuntioideae;
Cylindropuntia.
REFERENCE 1 (bases 1 to 1197)
AUTHORS Hernandez-Hernandez,T., Hernandez,H.M., De-Nova,J.A., Puente,R.,
Eguiarte,L.E. and Magallon,S.
TITLE Phylogenetic relationships and evolution of growth form in
Cactaceae (Caryophyllales, Eudicotyledoneae)
JOURNAL Am. J. Bot. 98 (1), 44-61 (2011)
PUBMED 21613084
REFERENCE 2 (bases 1 to 1197)
AUTHORS Hernandez-Hernandez,T., Magallon,S.A., Hernandez,H.M., De-Nova,A.,
Puente,R. and Eguiarte,L.E.
TITLE Direct Submission
JOURNAL Submitted (17-MAR-2010) Departamento de Botanica, Instituto de
Biologia, Universidad Nacional Autonoma de Mexico, 3er Circuito de
Ciudad Universitaria, Ciudad Universitaria, Coyoacan, Distrito
Federal C.P. 04510, Mexico
FEATURES Location/Qualifiers
source 1..1197
/organism="Cylindropuntia leptocaulis"
/organelle="plastid:chloroplast"
/mol_type="genomic DNA"
/db_xref="taxon:866983"
gene <1..>1197
/gene="rpl16"
misc_feature <1096..>1197
/gene="rpl16"
/note="similar to ribosomal protein L16"
ORIGIN
1 ttgtgngnct cctgaagagt aggagcccct agtatgaagt atgaactaat aactatagaa
61 ctaataacca actcatcgca tcacattatc cggatccaaa aaagcagtca agataggata
121 ttttggtcct atcattgcag caactgaatt ttttttttca taaacaagaa atcgaatgag
181 ttgtcaagca aaagaaaaaa aaagaaaaat atactttaaa ggagggggat gcggataaat
241 ggaaaggcga aagaaagaaa aaaatgaatc taaatgatat aggattcccc tatgtaaggt
301 ctttgaatca tatcataaaa gacaatgtaa taaagcatga atacagattc ccacataatt
361 atctgatatg aatctattcc tagaaaaaag aaaaaagtaa gagcctccgg ccaataaaga
421 ctaagagggt tggctcaaga acaaagttca ttaaaagctc ccttgtagaa ttcagaccta
481 atcnttaatc aagaagcgat gggaacgatg taatccctga atacagaaga ttcaattgaa
541 aaagaatcct aatgattcat tgggaaggat ggcggaacga accagagacc aattcatcta
601 ttctgaaaag tgataaacta atcctataaa actaaaatag atattgaaag agtaaatatt
661 cgcccgcgaa atttcctttt ttattaaatt gctcatattt ttttttagca atgcaatcta
721 ataaaatata tctctacaaa aaaacataga caaactatat atatatatat atataatatt
781 tcaaattccc ttatatatcc aaatataaaa atatctaata aattagatga atatcaaaga
841 atctattgat ttagtgtatt attaaatgta tatcttaatt caatattatt attctattca
901 tttttattca ttttcaaatt tataatatat taatctatat attaatatag aattctattc
961 taattcgaat tcaattttta aatattcata ttcaattaaa attgaaattt tttcattcgc
1021 gaggagccgg atgagaagaa actctcatgt ccggttctgt agtagagatg gaattaagaa
1081 aaaaccatca actataaccc caaaagaacc ggattctgta aacaacatag aggaagaatg
1141 aagggaatat cttgtcgggg gaatcgatnn gtncggaant natgntcgcn gcgcgcc
//
LOCUS HM041476 1205 bp DNA linear PLN 03-MAY-2011
DEFINITION Opuntia lasiacantha ribosomal protein L16-like (rpl16) gene,
partial sequence; chloroplast.
ACCESSION HM041476
VERSION HM041476.1 GI:330887235
KEYWORDS .
SOURCE chloroplast Opuntia lasiacantha
ORGANISM Opuntia lasiacantha
Eukaryota; Viridiplantae; Streptophyta; Embryophyta; Tracheophyta;
Spermatophyta; Magnoliophyta; eudicotyledons; Gunneridae;
Pentapetalae; Caryophyllales; Cactineae; Cactaceae; Opuntioideae;
Opuntia.
REFERENCE 1 (bases 1 to 1205)
AUTHORS Hernandez-Hernandez,T., Hernandez,H.M., De-Nova,J.A., Puente,R.,
Eguiarte,L.E. and Magallon,S.
TITLE Phylogenetic relationships and evolution of growth form in
Cactaceae (Caryophyllales, Eudicotyledoneae)
JOURNAL Am. J. Bot. 98 (1), 44-61 (2011)
PUBMED 21613084
REFERENCE 2 (bases 1 to 1205)
AUTHORS Hernandez-Hernandez,T., Magallon,S.A., Hernandez,H.M., De-Nova,A.,
Puente,R. and Eguiarte,L.E.
TITLE Direct Submission
JOURNAL Submitted (17-MAR-2010) Departamento de Botanica, Instituto de
Biologia, Universidad Nacional Autonoma de Mexico, 3er Circuito de
Ciudad Universitaria, Ciudad Universitaria, Coyoacan, Distrito
Federal C.P. 04510, Mexico
FEATURES Location/Qualifiers
source 1..1205
/organism="Opuntia lasiacantha"
/organelle="plastid:chloroplast"
/mol_type="genomic DNA"
/db_xref="taxon:547104"
gene <1..>1205
/gene="rpl16"
misc_feature <1103..>1205
/gene="rpl16"
/note="similar to ribosomal protein L16"
ORIGIN
1 gggcccnnna ngangaaaag tagagcccat agtatgaagt atgaactaat aactatagaa
61 ctaataacca actcatcgca tcacattatc tggatccaaa gaagcagtca agataggata
121 ttttggtcct atcattgcag caactgaatt ttttttttca taaacaagaa atcaaatgag
181 ttgtcaagca aaagaaaaaa aaaaagaaaa atatccttta aaggaggggg atgcggataa
241 atggaaaggc gaaagaaaga aaaaaatgaa tctaaatgat atacgattcc cctatgtaag
301 gtctttgaat catatcataa aagacaatgt aataaagcat gaatacagat tcccccataa
361 ttatctgata tgaatctatt cctagaaaaa agaaaaaagt aagagcctcc ggccaataaa
421 gactaagagg gttggctcaa gaacaaagtt cattaagggc tccattgtag aattcagacc
481 taatcattaa tcaagaggcg atgggaacga tgtaatccat gaatacagaa gattcaattg
541 aaaaagaatc ctaatgattc attgggaagg atggcggaac gaaccagaga ccaattcatc
601 tattctgaaa agtgataaac taatcctata aaactaaaat agatattgaa agagtaaata
661 ttcgcccgcg aaaattcctt ttttattaaa ttgctcacat tttattttag caatgcaatc
721 taataaaatc tatctataca aaaaaatata gacaaactat atatataata tatttcaaat
781 ttccttatat atcctaatat aaaaatatct aataaattag atgaatatca aagaatctat
841 tgatttagtg tattattaaa tgtatatctt aattcaatat tattattcta ttcattttta
901 ttattcattt ttattcattt tcaaatttag aatatattaa tctatatatt aatttataat
961 tctattctaa ttcgaattca atttttaaat attcatattc aattaaaatt gaaatttttt
1021 cattcgcgag gagccggatg agaagaaact ctcacgtccg gttctgtagt agaggtggaa
1081 ttaagaaaaa accatcaact ataaccccaa aagaaccaga ttctgtaaac aacatagagg
1141 aagaatgaag ggaatatctt atcgagggaa tcgtatttgt ttcggaagat agtnctngcn
1201 nggtg
//
LOCUS HM041474 1163 bp DNA linear PLN 03-MAY-2011
DEFINITION Opuntia helleri ribosomal protein L16-like (rpl16) gene, partial
sequence; chloroplast.
ACCESSION HM041474
VERSION HM041474.1 GI:330887233
KEYWORDS .
SOURCE chloroplast Opuntia helleri
ORGANISM Opuntia helleri
Eukaryota; Viridiplantae; Streptophyta; Embryophyta; Tracheophyta;
Spermatophyta; Magnoliophyta; eudicotyledons; Gunneridae;
Pentapetalae; Caryophyllales; Cactineae; Cactaceae; Opuntioideae;
Opuntia.
REFERENCE 1 (bases 1 to 1163)
AUTHORS Hernandez-Hernandez,T., Hernandez,H.M., De-Nova,J.A., Puente,R.,
Eguiarte,L.E. and Magallon,S.
TITLE Phylogenetic relationships and evolution of growth form in
Cactaceae (Caryophyllales, Eudicotyledoneae)
JOURNAL Am. J. Bot. 98 (1), 44-61 (2011)
PUBMED 21613084
REFERENCE 2 (bases 1 to 1163)
AUTHORS Hernandez-Hernandez,T., Magallon,S.A., Hernandez,H.M., De-Nova,A.,
Puente,R. and Eguiarte,L.E.
TITLE Direct Submission
JOURNAL Submitted (17-MAR-2010) Departamento de Botanica, Instituto de
Biologia, Universidad Nacional Autonoma de Mexico, 3er Circuito de
Ciudad Universitaria, Ciudad Universitaria, Coyoacan, Distrito
Federal C.P. 04510, Mexico
FEATURES Location/Qualifiers
source 1..1163
/organism="Opuntia helleri"
/organelle="plastid:chloroplast"
/mol_type="genomic DNA"
/db_xref="taxon:1001115"
gene <1..>1163
/gene="rpl16"
misc_feature <1081..>1163
/gene="rpl16"
/note="similar to ribosomal protein L16"
ORIGIN
1 gagcccatag tatgaagtat gaactaataa ctatagaact aataaccaac tcatcgcatc
61 acattatccg gatccaaaga agcagtcaag ataggatatt ttggtcctat cattgcagca
121 actgaatttt tttttcataa acaagaaatc aaatgagttg tcaagcaaaa gaaaaaaaaa
181 aaagaaaaat atacattaaa ggagggggat gcggataaat ggaaaggcga aagaaagaaa
241 aaaatgaatc taaatgatat acgattccnc tatgtaaggt ctttgaatca tatcataaaa
301 gacaatgtaa taaagcatga atacagattc acacataatt atctgatatg aatctattca
361 tagaaaaaag aaaaaagtaa gagcctccgg ccaataaaga ctaagagggt tggctcaaga
421 acaaagttca ttaagggctc cattgtagaa ttcagaccta atcattaatc aagaagcgat
481 gggaacgatg taatccatga atacagaaga ttcaattgaa aaagaatcct aatgattcat
541 tgggaaggat ggcggaacga accagagacc aattcatcta ttctgaaaag tgataaacta
601 atcctataaa actaaaatag atattgaaag agtaaatatt cgcccgcgaa aattcctttt
661 ttattaaatt gctcacattt tattttagca atgcaatcta ataaaatata tctatacaaa
721 aaaatataga caaactatat atataatata tttaaaattt ccttatatat cctaatataa
781 aaatatctaa taaattagat gaatatcaaa gaatctattg atttagtgta ttattaaatg
841 tatatcttaa ttcaatattt ttattctatt catttttatt attcattttt attcattttc
901 aaatttagaa tatattaatc tatatattaa tttagaattc tattctaatt cgaattcaat
961 ttttaaatat tcatattcaa ttaaaattga aattttttca ttcgcgagga gccggatgag
1021 aagaaactct cacgtccggt tctgtagtag aggtggaatt aagaaaaaac catcaactat
1081 aaccccaaaa gaaccagatt ctgtaaacaa catagaggaa gaatgaaggg aatatcttat
1141 cgggggaatc gtatttgttt cgg
//
LOCUS HM041473 1203 bp DNA linear PLN 03-MAY-2011
DEFINITION Opuntia excelsa ribosomal protein L16-like (rpl16) gene, partial
sequence; chloroplast.
ACCESSION HM041473
VERSION HM041473.1 GI:330887232
KEYWORDS .
SOURCE chloroplast Opuntia excelsa
ORGANISM Opuntia excelsa
Eukaryota; Viridiplantae; Streptophyta; Embryophyta; Tracheophyta;
Spermatophyta; Magnoliophyta; eudicotyledons; Gunneridae;
Pentapetalae; Caryophyllales; Cactineae; Cactaceae; Opuntioideae;
Opuntia.
REFERENCE 1 (bases 1 to 1203)
AUTHORS Hernandez-Hernandez,T., Hernandez,H.M., De-Nova,J.A., Puente,R.,
Eguiarte,L.E. and Magallon,S.
TITLE Phylogenetic relationships and evolution of growth form in
Cactaceae (Caryophyllales, Eudicotyledoneae)
JOURNAL Am. J. Bot. 98 (1), 44-61 (2011)
PUBMED 21613084
REFERENCE 2 (bases 1 to 1203)
AUTHORS Hernandez-Hernandez,T., Magallon,S.A., Hernandez,H.M., De-Nova,A.,
Puente,R. and Eguiarte,L.E.
TITLE Direct Submission
JOURNAL Submitted (17-MAR-2010) Departamento de Botanica, Instituto de
Biologia, Universidad Nacional Autonoma de Mexico, 3er Circuito de
Ciudad Universitaria, Ciudad Universitaria, Coyoacan, Distrito
Federal C.P. 04510, Mexico
FEATURES Location/Qualifiers
source 1..1203
/organism="Opuntia excelsa"
/organelle="plastid:chloroplast"
/mol_type="genomic DNA"
/db_xref="taxon:867487"
gene <1..>1203
/gene="rpl16"
misc_feature <1103..>1203
/gene="rpl16"
/note="similar to ribosomal protein L16"
ORIGIN
1 ccgnncnttg nnanacagaa nagtagagcc cnttntntga agtatgaact aatcactatt
61 gaactaatcc ccnactcatc gcatcacatt atctggatcc aaagaagcag tcaagatagg
121 atattttggt cctatcattg cagcaactga attttttttt tcctaaacaa gaaatcaaat
181 gagttgtcaa gcaaaagaaa aaaaagaaaa atatacatta aaggaggggg atgcggataa
241 atggaaaggc gaaagaaaga aaaaaatgaa tctaaatgat atacgattcc cctatgtaag
301 gtctttgaat catatcataa aagacaatgt aataaagcat gaatacagat tcccacataa
361 ttatctgata tgaatctatt catagaaaaa agaaaaaagt aagagcctcc ggccaataaa
421 gactaaaagg gttggctcaa gaacaaagtt cattaagggc tccattgtaa aattcagacc
481 taatcattaa tcaagaggcg atgggaacga tgtaatccat gaatacagaa gattcaattg
541 aaaaagaatc ctaatgattc attgggaagg atggcggaac gaaccagaga ccaattcatc
601 tattctgaaa agtgataaac taatcctata aaactaaaat agatattgaa agagtaaata
661 ttcgcccgcg aaaattcctt ttttattaaa ttgctcacat tttattttag caatgcaatc
721 taataaaatc tatctataca aaaaaatata gacaaactat atatataata tatttcaaat
781 ttccttatat atcctaatat aaaaatatct aataaattag atgaatatca aagaatctat
841 tgatttagtg tattattaaa tgtatatctt aattcaatat tattattcta ttcattttta
901 ttattcattt ttattcattt tcaaatttag aatatattaa tctatatatt aatttagaat
961 tctattctaa ttcgaattca atttttaaat attcatattc aattaaaatt gaaatttttt
1021 cattcgcgag gagccggatg agaagaaact ctcacgtccg gttctgtagt agaggtggaa
1081 ttaagaaaaa accatcaact ataaccccaa aagaaccaga ttctgtaaac aacatagagg
1141 aagaatgaag ggaatatctt atcgggggaa tcgtatngtg cnggctngtg cancgcgggc
1201 nng
//
LOCUS HM041472 1182 bp DNA linear PLN 03-MAY-2011
DEFINITION Opuntia echios ribosomal protein L16-like (rpl16) gene, partial
sequence; chloroplast.
ACCESSION HM041472
VERSION HM041472.1 GI:330887231
KEYWORDS .
SOURCE chloroplast Opuntia echios
ORGANISM Opuntia echios
Eukaryota; Viridiplantae; Streptophyta; Embryophyta; Tracheophyta;
Spermatophyta; Magnoliophyta; eudicotyledons; Gunneridae;
Pentapetalae; Caryophyllales; Cactineae; Cactaceae; Opuntioideae;
Opuntia.
REFERENCE 1 (bases 1 to 1182)
AUTHORS Hernandez-Hernandez,T., Hernandez,H.M., De-Nova,J.A., Puente,R.,
Eguiarte,L.E. and Magallon,S.
TITLE Phylogenetic relationships and evolution of growth form in
Cactaceae (Caryophyllales, Eudicotyledoneae)
JOURNAL Am. J. Bot. 98 (1), 44-61 (2011)
PUBMED 21613084
REFERENCE 2 (bases 1 to 1182)
AUTHORS Hernandez-Hernandez,T., Magallon,S.A., Hernandez,H.M., De-Nova,A.,
Puente,R. and Eguiarte,L.E.
TITLE Direct Submission
JOURNAL Submitted (17-MAR-2010) Departamento de Botanica, Instituto de
Biologia, Universidad Nacional Autonoma de Mexico, 3er Circuito de
Ciudad Universitaria, Ciudad Universitaria, Coyoacan, Distrito
Federal C.P. 04510, Mexico
FEATURES Location/Qualifiers
source 1..1182
/organism="Opuntia echios"
/organelle="plastid:chloroplast"
/mol_type="genomic DNA"
/db_xref="taxon:412453"
gene <1..>1182
/gene="rpl16"
misc_feature <1085..>1182
/gene="rpl16"
/note="similar to ribosomal protein L16"
ORIGIN
1 gtaagagccc atagtatgaa gtatgaacta ataactatag aactaataac caactcatcg
61 catcacatta tccggatcca aagaagcagt caagatagga tattttggtc ctatcattgc
121 agcaactgaa tttttttttc ataaacaaga aatcaaatga gttgtcaagc aaaagaaaaa
181 aaaaaaaaaa aaatatacat taaaggaggg ggatgcggat aaatggaaag gcgaaagaaa
241 gaaaaaaatg aatctaaatg atatacgatt ccactatgta aggtctttga atcatatcat
301 aaaagacaat gtaataaagc atgaatacag attcacacat aattatctga tatgaatcta
361 ttcatagaaa aaagaaaaaa gtaagagcct ccggccaata aagactaaga ggttgggctc
421 aagaacaaag ttcattaagg gctccattgt agaattcaga cctaatcatt aatcaagaag
481 cgatgggaac gatgtaatcc atgaatacag aagattcaat tgaaaaagaa tcctaatgat
541 tcattgggaa ggatggcgga acgaaccaga gaccaattca tctattctga aaagtgataa
601 actaatccta taaaactaaa atagatattg aaagagtaaa tattcgcccg cgaaaattcc
661 ttttttatta aattgctcac attttatttt agcaatgcaa tctaataaaa tatatctata
721 caaaaaaata tagacaaact atatatataa tatatttcaa atttccttat atatcctaat
781 ataaaaatat ctaataaatt agatgaatat caaagaatct attgatttag tgtattatta
841 aatgtatatc ttaattcaat attattattc tattcatttt tattattcat ttttattcat
901 tttcaaattt agaatatatt aatctatata ttaatttaga attctattct aattcgaatt
961 caatttttaa atattcatat tcaattaaaa ttgaaatttt ttcattcgcg aggagccgga
1021 tgagaagaaa ctctcacgtc cggttctgta gtagaggtgg aattaagaaa aaaccatcaa
1081 ctataacccc aaaagaacca gattctgtaa acaacataga ggaagaatga agggaatatc
1141 ttatcggggg aatcgtattt gtttcggaag atggctacta ta
//
LOCUS HM041469 1189 bp DNA linear PLN 03-MAY-2011
DEFINITION Nopalea sp. THH-2011 ribosomal protein L16-like (rpl16) gene,
partial sequence; chloroplast.
ACCESSION HM041469
VERSION HM041469.1 GI:330887228
KEYWORDS .
SOURCE chloroplast Opuntia sp. THH-2011
ORGANISM Opuntia sp. THH-2011
Eukaryota; Viridiplantae; Streptophyta; Embryophyta; Tracheophyta;
Spermatophyta; Magnoliophyta; eudicotyledons; Gunneridae;
Pentapetalae; Caryophyllales; Cactineae; Cactaceae; Opuntioideae;
Opuntia.
REFERENCE 1 (bases 1 to 1189)
AUTHORS Hernandez-Hernandez,T., Hernandez,H.M., De-Nova,J.A., Puente,R.,
Eguiarte,L.E. and Magallon,S.
TITLE Phylogenetic relationships and evolution of growth form in
Cactaceae (Caryophyllales, Eudicotyledoneae)
JOURNAL Am. J. Bot. 98 (1), 44-61 (2011)
PUBMED 21613084
REFERENCE 2 (bases 1 to 1189)
AUTHORS Hernandez-Hernandez,T., Magallon,S.A., Hernandez,H.M., De-Nova,A.,
Puente,R. and Eguiarte,L.E.
TITLE Direct Submission
JOURNAL Submitted (17-MAR-2010) Departamento de Botanica, Instituto de
Biologia, Universidad Nacional Autonoma de Mexico, 3er Circuito de
Ciudad Universitaria, Ciudad Universitaria, Coyoacan, Distrito
Federal C.P. 04510, Mexico
FEATURES Location/Qualifiers
source 1..1189
/organism="Opuntia sp. THH-2011"
/organelle="plastid:chloroplast"
/mol_type="genomic DNA"
/db_xref="taxon:1001114"
gene <1..>1189
/gene="rpl16"
misc_feature <1096..>1189
/gene="rpl16"
/note="similar to ribosomal protein L16"
ORIGIN
1 atatacgaaa aagtagagcc catagtatga agtatgaact aataactata gaactaataa
61 ccaactcatc gcatcacatt atctggatcc aaagaagcag tcaagatagg atattttggt
121 cctatcattg cagcaactga attttttttt cataaacaag aaatcaaatg agttgtcaag
181 caaaagaaaa aaaaaaaaga aaaatatact ttaagggagg gggatgcgga taaatggaaa
241 ggcgaaagaa agaaaaaaat gaatctaaat gatatacgat tccactatgt aaggtctttg
301 aatcatatca taaaagacaa tgtaataaag catgaataca gattcacaca taattatctg
361 gtatgaatct attcatagaa aaaagaaaaa agtaagaccc tccggccaat aaagactaag
421 agggttggct caagaacaaa gttcattaag ggctccattg tagaattcag acctaatcat
481 taatcaagaa gcgatgggaa cgatgtaatc catgaataca gaagattcaa ttgaaaaaga
541 atcctaatga ttcattggga aggatggcgg aacgaaccag agaccaattc atctattctg
601 aaaagtgata aactaatcct ataaaactaa aatagatatt gaaagagtaa atattcgccc
661 gcgaaaattc cttttttatt aaattgctca cattttattt tagcaatgca atctaataaa
721 atatatctat acaaaaaaat atagacaaac tatatatata atatatttca aatttcctta
781 tatatcctaa tataaaaata tctaataaat tagatgaata tcaaagaatc tattgattta
841 gtgtattatt aaatgtatat cttaattcaa tattattatt ctattcattt ttattattca
901 tttttattca ttttcaaatt tagaatatat taatctatat attaatttag aattctattc
961 taattcgaat tcaattttta aatattcata ttcaattaaa attgaaattt tttcattcgc
1021 gaggagccgg atgagaagaa actctcacgt ccggttctgt agtagaggtg gaattaagaa
1081 aaaaccatca actataaccc caaaagaacc agattctgta aacaacatag aggaagaatg
1141 aagggaatat cttatcgggg gaatcgtatt tgtttcggaa gatatgctc
//
LOCUS HM041468 1202 bp DNA linear PLN 03-MAY-2011
DEFINITION Nopalea lutea ribosomal protein L16-like (rpl16) gene, partial
sequence; chloroplast.
ACCESSION HM041468
VERSION HM041468.1 GI:330887227
KEYWORDS .
SOURCE chloroplast Opuntia lutea
ORGANISM Opuntia lutea
Eukaryota; Viridiplantae; Streptophyta; Embryophyta; Tracheophyta;
Spermatophyta; Magnoliophyta; eudicotyledons; Gunneridae;
Pentapetalae; Caryophyllales; Cactineae; Cactaceae; Opuntioideae;
Opuntia.
REFERENCE 1 (bases 1 to 1202)
AUTHORS Hernandez-Hernandez,T., Hernandez,H.M., De-Nova,J.A., Puente,R.,
Eguiarte,L.E. and Magallon,S.
TITLE Phylogenetic relationships and evolution of growth form in
Cactaceae (Caryophyllales, Eudicotyledoneae)
JOURNAL Am. J. Bot. 98 (1), 44-61 (2011)
PUBMED 21613084
REFERENCE 2 (bases 1 to 1202)
AUTHORS Hernandez-Hernandez,T., Magallon,S.A., Hernandez,H.M., De-Nova,A.,
Puente,R. and Eguiarte,L.E.
TITLE Direct Submission
JOURNAL Submitted (17-MAR-2010) Departamento de Botanica, Instituto de
Biologia, Universidad Nacional Autonoma de Mexico, 3er Circuito de
Ciudad Universitaria, Ciudad Universitaria, Coyoacan, Distrito
Federal C.P. 04510, Mexico
FEATURES Location/Qualifiers
source 1..1202
/organism="Opuntia lutea"
/organelle="plastid:chloroplast"
/mol_type="genomic DNA"
/db_xref="taxon:1001113"
gene <1..>1202
/gene="rpl16"
misc_feature <1099..>1202
/gene="rpl16"
/note="similar to ribosomal protein L16"
ORIGIN
1 gatatacgaa aaagtaagag cccatagtat gaagtatgaa ctaataacta tagaactaat
61 aaccaactca tcgcatcaca ttatctggat ccaaagaagc agtcaagata ggatattttg
121 gtcctatcat tgcagcaact gaattttttt ttcataaaca agaaatcaaa tgagttgtca
181 agcaaaagaa aaaaaaaaaa gaaaaatata ctttaaggga gggggatgcg gataaatgga
241 aaggcgaaag aaagaaaaaa atgaatctaa atgatatacg attcccccta tgtaaggtct
301 ttgaatcata tcataaaaga caatgtaata aagcatgaat acagattcac acataattat
361 ctgatatgaa tctattcata gaaaaaagaa aaaagtaaga ccctccggcc aataaagact
421 aagagggttg gctcaagaac aaagttcatt aagggctcca ttgtagaatt cagacctaat
481 cattaatcaa gaagcgatgg gaacgatgta atccatgaat acagaagatt caattgaaaa
541 agaatcctaa tgattcattg ggaaggatgg cggaacgaac cagagaccaa ttcatctatt
601 ctgaaaagtg ataaactaat cctataaaac taaaatagat attgaaagag taaatattcg
661 cccgcgaaaa ttcctttttt attaaattgc tcacatttta ttttagcaat gcaatctaat
721 aaaatatatc tatacaaaaa aatatagaca aactatatat ataatatatt tcaaatttcc
781 ttatatatcc taatataaaa atatctaata aattagatga atatcaaaga atctattgat
841 ttagtgtatt attaaatgta tatcttaatt caatattatt attctattca tttttattat
901 tcatttttat tcattttcaa atttagaata tattaatcta tatattaatt tagaattcta
961 ttctaattcg aattcaattt ttaaatattc atattcaatt aaaattgaaa ttttttcatt
1021 cgcgaggagc cggatgagaa gaaactctca cgtccggttc tgtagtagag gtggaattaa
1081 gaaaaaacca tcaactataa ccccaaaaga accagattct gtaaacaaca tagaggaaga
1141 atgaagggaa tatcttatcg ggggaatcgt atttgtttcg gaagatatgc tctcaggcac
1201 ga
//
LOCUS HM041467 1199 bp DNA linear PLN 03-MAY-2011
DEFINITION Nopalea karwinskiana ribosomal protein L16-like (rpl16) gene,
partial sequence; chloroplast.
ACCESSION HM041467
VERSION HM041467.1 GI:330887226
KEYWORDS .
SOURCE chloroplast Opuntia karwinskiana
ORGANISM Opuntia karwinskiana
Eukaryota; Viridiplantae; Streptophyta; Embryophyta; Tracheophyta;
Spermatophyta; Magnoliophyta; eudicotyledons; Gunneridae;
Pentapetalae; Caryophyllales; Cactineae; Cactaceae; Opuntioideae;
Opuntia.
REFERENCE 1 (bases 1 to 1199)
AUTHORS Hernandez-Hernandez,T., Hernandez,H.M., De-Nova,J.A., Puente,R.,
Eguiarte,L.E. and Magallon,S.
TITLE Phylogenetic relationships and evolution of growth form in
Cactaceae (Caryophyllales, Eudicotyledoneae)
JOURNAL Am. J. Bot. 98 (1), 44-61 (2011)
PUBMED 21613084
REFERENCE 2 (bases 1 to 1199)
AUTHORS Hernandez-Hernandez,T., Magallon,S.A., Hernandez,H.M., De-Nova,A.,
Puente,R. and Eguiarte,L.E.
TITLE Direct Submission
JOURNAL Submitted (17-MAR-2010) Departamento de Botanica, Instituto de
Biologia, Universidad Nacional Autonoma de Mexico, 3er Circuito de
Ciudad Universitaria, Ciudad Universitaria, Coyoacan, Distrito
Federal C.P. 04510, Mexico
FEATURES Location/Qualifiers
source 1..1199
/organism="Opuntia karwinskiana"
/organelle="plastid:chloroplast"
/mol_type="genomic DNA"
/db_xref="taxon:1001112"
gene <1..>1199
/gene="rpl16"
misc_feature <1098..>1199
/gene="rpl16"
/note="similar to ribosomal protein L16"
ORIGIN
1 gtgatatcga aaaagtagag cccatagtat gaagtatgaa ctaataacta tagaactaat
61 aaccaactca tcgcatcaca ttatctggat ccaaagaagc agtcaagata ggatattttg
121 gtcctatcat tgcagcaact gaattttttt ttcataaaca agaaatcaaa tgagttgtca
181 agcaaaagaa aaaaaaaaaa gaaaaattta ctttaaggga gggggatgcg gataaatgga
241 aaggcgaaag aaagaaaaaa atgaatctaa atgatatacg attcccctat gtagggtctt
301 tgaatcatat cataaaaaac aatgtaataa agcatgaata cagattcccc cataattatc
361 tggtatgaat cttttcatag aaaaaaaaaa aaagtaagag cctccggcca ataaaaacta
421 aaagggttgg ctcaagaaca aagttcatta agggctccat tgtagaattc agacctaatc
481 nttaatcaag aagcgatggg aacgatgtaa tccatgaata cagaagattc aattgaaaaa
541 gaatcctaat gattcattgg gaaggatggc ggaacgaacc agagaccaat tcatctattc
601 tgaaaagtga taaactaatc ctataaaact aaaatagata ttgaaagagt aaatattcgc
661 ccgcgaaaat tcctttttta ttaaattgct cacattttat tttagcaatg caatctaata
721 aaatatatct atacaaaaaa atatagacaa actatatata taatatattt caaatttcct
781 tatatatcct aatataaaaa tatctaataa attagatgaa tatcaaagaa tctattgatt
841 tagtgtatta ttaaatgtat atcttaattc aatattatta ttctattcat ttttattatt
901 catttttatt cattttcaaa tttagaatat attaatctat atattaattt agaattctat
961 tctaattcga attcaatttt taaatattca tattcaatta aaattgaaat tttttcattc
1021 gcgaggagcc ggatgagaag aaactctcac gtccggttct gtagtagagg tggaattaag
1081 aaaaaaccat caactataac cccaaaagaa ccagattctg taaacaacat agaggaagaa
1141 tgaagggaat atcttatgcg ggggaatcgt attgtttcgg aagatatgct ctgcggccc
//
LOCUS HM041466 1205 bp DNA linear PLN 03-MAY-2011
DEFINITION Nopalea gaumeri ribosomal protein L16-like (rpl16) gene, partial
sequence; chloroplast.
ACCESSION HM041466
VERSION HM041466.1 GI:330887225
KEYWORDS .
SOURCE chloroplast Nopalea gaumeri
ORGANISM Nopalea gaumeri
Eukaryota; Viridiplantae; Streptophyta; Embryophyta; Tracheophyta;
Spermatophyta; Magnoliophyta; eudicotyledons; Gunneridae;
Pentapetalae; Caryophyllales; Cactineae; Cactaceae; Opuntioideae;
Opuntia.
REFERENCE 1 (bases 1 to 1205)
AUTHORS Hernandez-Hernandez,T., Hernandez,H.M., De-Nova,J.A., Puente,R.,
Eguiarte,L.E. and Magallon,S.
TITLE Phylogenetic relationships and evolution of growth form in
Cactaceae (Caryophyllales, Eudicotyledoneae)
JOURNAL Am. J. Bot. 98 (1), 44-61 (2011)
PUBMED 21613084
REFERENCE 2 (bases 1 to 1205)
AUTHORS Hernandez-Hernandez,T., Magallon,S.A., Hernandez,H.M., De-Nova,A.,
Puente,R. and Eguiarte,L.E.
TITLE Direct Submission
JOURNAL Submitted (17-MAR-2010) Departamento de Botanica, Instituto de
Biologia, Universidad Nacional Autonoma de Mexico, 3er Circuito de
Ciudad Universitaria, Ciudad Universitaria, Coyoacan, Distrito
Federal C.P. 04510, Mexico
FEATURES Location/Qualifiers
source 1..1205
/organism="Nopalea gaumeri"
/organelle="plastid:chloroplast"
/mol_type="genomic DNA"
/db_xref="taxon:1001111"
gene <1..>1205
/gene="rpl16"
misc_feature <1103..>1205
/gene="rpl16"
/note="similar to ribosomal protein L16"
ORIGIN
1 gctgtgatat acgaaanagt aagagcccat agtatgaagt atgaactaat aactatagaa
61 ctaataacca actcatcgca tcacattatc tggatccaaa gaagcagtca agataggata
121 ttttggtcct atcattgcag caactgaatt tttttttcat aaacaagaaa tcaaatgagt
181 tgtcaagcaa aagaaaaaaa aaaaaaaaaa tatacattaa aggaggggga tgcggataaa
241 tggaaaggcg aaagaaagaa aaaaatgaat ctaaatgata tacgattcca ctatgtaagg
301 tctttgaatc atatcataaa agacaatgta ataaagcatg aatacagatt cacacataat
361 tatctgaata tgaatctatt catagaaaaa agaaaaaagt aagaccctcc ggccaataaa
421 gactaaaggg gttggctcaa gaacaaagtt cattaagggc tccattgtag aattcagacc
481 taatcattaa tcaagaagcg atgggaacga tgtaatccat gaatacagaa gattcaattg
541 aaaaagaatc ctaatgattc attgggaagg atggcggaac gaaccagaga ccaattcatc
601 tattctgaaa agtgataaac taatcctata aaactaaaat agatattgaa agagtaaata
661 ttcgcccgcg aaaattcctt ttttattaaa ttgctcacat tttattttag caatgcaatc
721 taataaaata tatctataca aaaaaatata gacaaactat atatataata tatttcaaat
781 ttccttatat atcctaatat aaaaatatct aataaattag atgaatatca aagaatctat
841 tgatttagtg tattattaaa tgtatatctt aattcaatat tattattcta ttcattttta
901 ttattcattt ttattcattt tcaaatttag aatatattaa tctatatatt aatttagaat
961 tctattctaa ttcgaattca atttttaaat attcatattc aattaaaatt gaaatttttt
1021 cattcgcgag gagccggatg agaagaaact ctcacgtccg gttctgtagt agaggtggaa
1081 ttaagaaaaa accatcaact ataaccccaa aagaaccaga ttctgtaaac aacatagagg
1141 aagaatgaag ggaatatctt atcgggggaa tcgtatttgt ttcggaagat atgctctcag
1201 cacga
//
LOCUS HM041465 1190 bp DNA linear PLN 03-MAY-2011
DEFINITION Nopalea dejecta ribosomal protein L16-like (rpl16) gene, partial
sequence; chloroplast.
ACCESSION HM041465
VERSION HM041465.1 GI:330887224
KEYWORDS .
SOURCE chloroplast Opuntia dejecta
ORGANISM Opuntia dejecta
Eukaryota; Viridiplantae; Streptophyta; Embryophyta; Tracheophyta;
Spermatophyta; Magnoliophyta; eudicotyledons; Gunneridae;
Pentapetalae; Caryophyllales; Cactineae; Cactaceae; Opuntioideae;
Opuntia.
REFERENCE 1 (bases 1 to 1190)
AUTHORS Hernandez-Hernandez,T., Hernandez,H.M., De-Nova,J.A., Puente,R.,
Eguiarte,L.E. and Magallon,S.
TITLE Phylogenetic relationships and evolution of growth form in
Cactaceae (Caryophyllales, Eudicotyledoneae)
JOURNAL Am. J. Bot. 98 (1), 44-61 (2011)
PUBMED 21613084
REFERENCE 2 (bases 1 to 1190)
AUTHORS Hernandez-Hernandez,T., Magallon,S.A., Hernandez,H.M., De-Nova,A.,
Puente,R. and Eguiarte,L.E.
TITLE Direct Submission
JOURNAL Submitted (17-MAR-2010) Departamento de Botanica, Instituto de
Biologia, Universidad Nacional Autonoma de Mexico, 3er Circuito de
Ciudad Universitaria, Ciudad Universitaria, Coyoacan, Distrito
Federal C.P. 04510, Mexico
FEATURES Location/Qualifiers
source 1..1190
/organism="Opuntia dejecta"
/organelle="plastid:chloroplast"
/mol_type="genomic DNA"
/db_xref="taxon:1001110"
gene <1..>1190
/gene="rpl16"
misc_feature <1096..>1190
/gene="rpl16"
/note="similar to ribosomal protein L16"
ORIGIN
1 tgatatacga aanagtaaga gcccatagta tgaagtatga actaataact atagaactaa
61 taaccaactc atcgcatcac attatctgga tccaaagaag cagtcaagat aggatatttt
121 ggtcctatca ttgcagcaac tgaatttttt tttcataaac aagaaatcaa atgagttgtc
181 aagcaaaaga aaaaaaaaaa aaaaaatata ctttaangga gggggatgcg gataaatgga
241 aaggcgaaag aaagaaaaaa atgaatctaa atgatatacg attccactat gtaaggtctt
301 tgaatcatat cataaaagac aatgtaataa agcatgaata cagattcaca cataattatc
361 tgtatgatct attcatagaa aaaagaaaaa agtaagagcc tccggccaat aaagactaag
421 agggttggct caagaacaaa gttcattaag ggctccattg tagaattcag acctaatcat
481 taatcaagaa gcgatgggaa cgatgtaatc catgaataca gaagattcaa ttgaaaaaga
541 atcctaatga ttcattggga aggatggcgg aacgaaccag agaccaattc atctattctg
601 aaaagtgata aactaatcct ataaaactaa aatagatatt gaaagagtaa atattcgccc
661 gcgaaaattc cttttttatt aaattgctca cattttattt tagcaatgca atctaataaa
721 atatatctat acaaaaaaat atagacaaac tatatatata atatatttca aatttcctta
781 tatatcctaa tataaaaata tctaataaat tagatgaata tcaaagaatc tattgattta
841 gtgtattatt aaatgtatat cttaattcaa tattattatt ctattcattt ttattattca
901 tttttattca ttttcaaatt tagaatatat taatctatat attaatttag aattctattc
961 taattcgaat tcaattttta aatattcata ttcaattaaa attgaaattt tttcattcgc
1021 gaggagccgg atgagaagaa actctcacgt ccggttctgt agtagaggtg gaattaagaa
1081 aaaaccatca actataaccc caaaagaacc agattctgta aacaacatag aggaagaatg
1141 aagggaatat cttatcgggg gaatcgtatt tgtttcggaa gatatgctct
//
LOCUS HM041464 1184 bp DNA linear PLN 03-MAY-2011
DEFINITION Nopalea cochenillifera ribosomal protein L16-like (rpl16) gene,
partial sequence; chloroplast.
ACCESSION HM041464
VERSION HM041464.1 GI:330887223
KEYWORDS .
SOURCE chloroplast Opuntia cochenillifera
ORGANISM Opuntia cochenillifera
Eukaryota; Viridiplantae; Streptophyta; Embryophyta; Tracheophyta;
Spermatophyta; Magnoliophyta; eudicotyledons; Gunneridae;
Pentapetalae; Caryophyllales; Cactineae; Cactaceae; Opuntioideae;
Opuntia.
REFERENCE 1 (bases 1 to 1184)
AUTHORS Hernandez-Hernandez,T., Hernandez,H.M., De-Nova,J.A., Puente,R.,
Eguiarte,L.E. and Magallon,S.
TITLE Phylogenetic relationships and evolution of growth form in
Cactaceae (Caryophyllales, Eudicotyledoneae)
JOURNAL Am. J. Bot. 98 (1), 44-61 (2011)
PUBMED 21613084
REFERENCE 2 (bases 1 to 1184)
AUTHORS Hernandez-Hernandez,T., Magallon,S.A., Hernandez,H.M., De-Nova,A.,
Puente,R. and Eguiarte,L.E.
TITLE Direct Submission
JOURNAL Submitted (17-MAR-2010) Departamento de Botanica, Instituto de
Biologia, Universidad Nacional Autonoma de Mexico, 3er Circuito de
Ciudad Universitaria, Ciudad Universitaria, Coyoacan, Distrito
Federal C.P. 04510, Mexico
FEATURES Location/Qualifiers
source 1..1184
/organism="Opuntia cochenillifera"
/organelle="plastid:chloroplast"
/mol_type="genomic DNA"
/db_xref="taxon:338184"
gene <1..>1184
/gene="rpl16"
misc_feature <1102..>1184
/gene="rpl16"
/note="similar to ribosomal protein L16"
ORIGIN
1 gctgtgatat acgaaaaagt aagagcccat agtatgaagt atgaactaac aactatagaa
61 ctaataacca actcatcgca tcacattatc tggatccaaa gaagcagtca agataggata
121 ttttggtcct atcattgcag caactgaatt tttttttcat aaacaagaaa tcaaatgagt
181 tgtcaagcaa aagaaaaaaa aaaaaaaaaa tatactttaa aggaggggga tgcggataaa
241 tggaaaggcg aaagaaagaa aaaaatgaat ctaaatgata tacgattcca ctatgtaagg
301 tctttgaatc atatcataaa agacaatgta ataaagcatg aatacagatt cccacataat
361 tatctgatat gaatctattc atagaaaaaa gaaaaaagta agagcctccg gccaataaag
421 actaagaggg ttggctcaag aacaaagttc attaagggct ccattgtaga attcagacct
481 aatcattaat caagaagcga tgggaacgat gtaatccatg aatacagaag attcaattga
541 aaaagaatcc taatgattca ttgggaagga tggcggaacg aaccagagac caattcatct
601 attctgaaaa gtgataaact aatcctataa aactaaaata gatattgaaa gagtaaatat
661 tcgcccgcga aaattccttt tttattaaat tgctcacatt ttattttagc aatgcaatct
721 aataaaatat atctatacaa aaaaatatag acaaactata tatataatat atttcaaatt
781 tccttatata tcctaatata aaaatatcta ataaattaga tgaatatcaa agaatctatt
841 gatttagtgt attattaaat gtatatctta attcaatatt attattctat tcatttttat
901 tattcatttt tattcatttt caaatttaga atatattaat ctatatatta atttagaatt
961 ctattctaat tcgaattcaa tttttaaata ttcatattca attaaaattg aaattttttc
1021 attcgcgagg agccggatga gaagaaactc tcacgtccgg ttctgtagta gaggtggaat
1081 taagaaaaaa ccatcaacta taaccccaaa agaaccagat tctgtaaaca acatagagga
1141 agaatgaagg gaatatctta tcgggggaat cgtatttgtt tcgg
//
LOCUS AY851612 892 bp DNA linear PLN 10-APR-2007
DEFINITION Opuntia subulata rpl16 gene, intron; chloroplast.
ACCESSION AY851612
VERSION AY851612.1 GI:57240072
KEYWORDS .
SOURCE chloroplast Austrocylindropuntia subulata
ORGANISM Austrocylindropuntia subulata
Eukaryota; Viridiplantae; Streptophyta; Embryophyta; Tracheophyta;
Spermatophyta; Magnoliophyta; eudicotyledons; Gunneridae;
Pentapetalae; Caryophyllales; Cactineae; Cactaceae; Opuntioideae;
Austrocylindropuntia.
REFERENCE 1 (bases 1 to 892)
AUTHORS Butterworth,C.A. and Wallace,R.S.
TITLE Molecular Phylogenetics of the Leafy Cactus Genus Pereskia
(Cactaceae)
JOURNAL Syst. Bot. 30 (4), 800-808 (2005)
REFERENCE 2 (bases 1 to 892)
AUTHORS Butterworth,C.A. and Wallace,R.S.
TITLE Direct Submission
JOURNAL Submitted (10-DEC-2004) Desert Botanical Garden, 1201 North Galvin
Parkway, Phoenix, AZ 85008, USA
FEATURES Location/Qualifiers
source 1..892
/organism="Austrocylindropuntia subulata"
/organelle="plastid:chloroplast"
/mol_type="genomic DNA"
/db_xref="taxon:106982"
gene <1..>892
/gene="rpl16"
intron <1..>892
/gene="rpl16"
ORIGIN
1 cattaaagaa gggggatgcg gataaatgga aaggcgaaag aaagaaaaaa atgaatctaa
61 atgatatacg attccactat gtaaggtctt tgaatcatat cataaaagac aatgtaataa
121 agcatgaata cagattcaca cataattatc tgatatgaat ctattcatag aaaaaagaaa
181 aaagtaagag cctccggcca ataaagacta agagggttgg ctcaagaaca aagttcatta
241 agagctccat tgtagaattc agacctaatc attaatcaag aagcgatggg aacgatgtaa
301 tccatgaata cagaagattc aattgaaaaa gatcctaatg atcattggga aggatggcgg
361 aacgaaccag agaccaattc atctattctg aaaagtgata aactaatcct ataaaactaa
421 aatagatatt gaaagagtaa atattcgccc gcgaaaattc cttttttatt aaattgctca
481 tattttattt tagcaatgca atctaataaa atatatctat acaaaaaaat atagacaaac
541 tatatatata taatatattt caaatttcct tatataccca aatataaaaa tatctaataa
601 attagatgaa tatcaaagaa tctattgatt tagtgtatta ttaaatgtat atcttaattc
661 aatattatta ttctattcat ttttattcat tttcaaattt ataatatatt aatctatata
721 ttaatttata attctattct aattcgaatt caatttttaa atattcatat tcaattaaaa
781 ttgaaatttt ttcattcgcg aggagccgga tgagaagaaa ctctcatgtc cggttctgta
841 gtagagatgg aattaagaaa aaaccatcaa ctataacccc aagagaacca ga
//
LOCUS AY851611 881 bp DNA linear PLN 10-APR-2007
DEFINITION Opuntia polyacantha rpl16 gene, intron; chloroplast.
ACCESSION AY851611
VERSION AY851611.1 GI:57240071
KEYWORDS .
SOURCE chloroplast Opuntia polyacantha
ORGANISM Opuntia polyacantha
Eukaryota; Viridiplantae; Streptophyta; Embryophyta; Tracheophyta;
Spermatophyta; Magnoliophyta; eudicotyledons; Gunneridae;
Pentapetalae; Caryophyllales; Cactineae; Cactaceae; Opuntioideae;
Opuntia.
REFERENCE 1 (bases 1 to 881)
AUTHORS Butterworth,C.A. and Wallace,R.S.
TITLE Molecular Phylogenetics of the Leafy Cactus Genus Pereskia
(Cactaceae)
JOURNAL Syst. Bot. 30 (4), 800-808 (2005)
REFERENCE 2 (bases 1 to 881)
AUTHORS Butterworth,C.A. and Wallace,R.S.
TITLE Direct Submission
JOURNAL Submitted (10-DEC-2004) Desert Botanical Garden, 1201 North Galvin
Parkway, Phoenix, AZ 85008, USA
FEATURES Location/Qualifiers
source 1..881
/organism="Opuntia polyacantha"
/organelle="plastid:chloroplast"
/mol_type="genomic DNA"
/db_xref="taxon:307728"
gene <1..>881
/gene="rpl16"
intron <1..>881
/gene="rpl16"
ORIGIN
1 cattaaagga gggggatgcg gataaatgga aaggcgaaag aaagaaaaaa atgaatctaa
61 atgatatacg attccactat gtaaggtctt tgaatcatat cataaaagac aatgtaataa
121 agcatgaata cagattcaca cataattatc tgatatgaat ctattcatag aaaaaagaaa
181 aaagtaagag cctccggcca ataaagacta agagggttgg ctcaagaaca aagttcatta
241 agggctccat tgtagaattc agacctaatc attaatcaag aagcgatggg aacgatgtaa
301 tccatgaata cagaagattc aattgaaaaa gatcctaatg atcattggga aggatggcgg
361 aacgaaccag agaccaattc atctattctg aaaagtgata aactaatcct ataaaactaa
421 aatagatatt gaaagagtaa atattcgccc gcgaaaattc cttttttatt aaattgctca
481 cattttattt tagcaatgca atctaataaa atatatctat acaaaaaaat atagacaaac
541 tctatatata atatatttca aatttcctta tatatcctaa tataaaaata tctaataaat
601 tagatgaata tcaaagaatc tattgattta gtgtattatt aaatgtatat cttaattcaa
661 tattattatt ctattcattt tcaaatttag aatatattaa tctatatatt aatttagaat
721 tctattctaa ttcgaattca atttttaaat attcatattc aattaaaatt gaaatttttt
781 cattcgcgag gagccggatg agaagaaact ctcacgtccg gttactgtag tagaggtgga
841 attaagaaaa aaccatcaac tataacccca aaagaaccag a
//
LOCUS AF191661 895 bp DNA linear PLN 07-NOV-1999
DEFINITION Opuntia kuehnrichiana rpl16 gene; chloroplast gene for chloroplast
product, partial intron sequence.
ACCESSION AF191661
VERSION AF191661.1 GI:6273287
KEYWORDS .
SOURCE chloroplast Cumulopuntia sphaerica
ORGANISM Cumulopuntia sphaerica
Eukaryota; Viridiplantae; Streptophyta; Embryophyta; Tracheophyta;
Spermatophyta; Magnoliophyta; eudicotyledons; Gunneridae;
Pentapetalae; Caryophyllales; Cactineae; Cactaceae; Opuntioideae;
Cumulopuntia.
REFERENCE 1 (bases 1 to 895)
AUTHORS Dickie,S.L. and Wallace,R.S.
TITLE Phylogeny of the subfamily Opuntioideae (Cactaceae)
JOURNAL Unpublished
REFERENCE 2 (bases 1 to 895)
AUTHORS Dickie,S.L. and Wallace,R.S.
TITLE Direct Submission
JOURNAL Submitted (28-SEP-1999) Botany, Iowa State University, 353 Bessey
Hall, Ames, IA 50011-1020, USA
FEATURES Location/Qualifiers
source 1..895
/organism="Cumulopuntia sphaerica"
/organelle="plastid:chloroplast"
/mol_type="genomic DNA"
/db_xref="taxon:106979"
/note="subfamily Opuntioideae; synonym: Cumulopuntia
kuenrichiana"
gene <1..>895
/gene="rpl16"
intron <1..>895
/gene="rpl16"
ORIGIN
1 tatacattaa agaaggggga tgcggataaa tggaaaggcg aaagaaagaa aaaaatgaat
61 ctaaatgata tacgattcca ctatgtaagg tctttgaatc atatcataaa agacaatgta
121 ataaagcatg aatacagatt cacacataat tatctgatat gaatctattc atagaaaaaa
181 gaaaaaagta agagcctccg gccaataaag actaagaggg ttggctcaag aacaaagttc
241 attaagagct ccattgtaga attcagacct aatcattaat caagaagcga tgggaacgat
301 gtaatccatg aatacagaag attcaattga aaaagatcct atgatccatt gggaaggatg
361 gcggaacgaa ccagagacca attcatctat tctgaaaagt gataaactaa tcctataaaa
421 ctaaaataga tattgaaaga gtaaatattc gcccgcgaaa attccttttt tttttaaatt
481 gctcatattt tattttagca atgcaatcta ataaaatata tctatacaaa aaaataaaga
541 caaactatat atataatata tttcaaattt ccttatatat ccaaatataa aaatatctaa
601 taaattagat gaatatcaaa gaatctattg atttagtgta ttattaaatg tatatcttaa
661 ttcaatatta ttattctatt catttttatt cattttcaat tttataatat attaatctat
721 atattaattt ataattctat tctaattcga attcaatttt taaatattca tattcaatta
781 aaattgaaat tttttcattc gcgaggagcc ggatgagaag aaactctcat gtccggttct
841 gtagtagaga tggaattaag aaaaaaccat caactataac cccaagagaa ccaga
//
###Markdown
In this case, we are just getting the raw records. To get the records in a more Python-friendly form, we can use `Bio.SeqIO` to parse the GenBank data into `SeqRecord` objects, including `SeqFeature` objects (see Chapter \[chapter:Bio.SeqIO\]):
###Code
from Bio import SeqIO
handle = Entrez.efetch(db="nuccore", id=gi_str, rettype="gb", retmode="text")
records = SeqIO.parse(handle, "gb")
###Output
_____no_output_____
###Markdown
We can now step through the records and look at the information we are interested in:
###Code
for record in records:
print("%s, length %i, with %i features" \
% (record.name, len(record), len(record.features)))
###Output
HQ621368, length 399, with 3 features
HM041482, length 1197, with 3 features
HM041481, length 1200, with 3 features
HM041480, length 1153, with 3 features
HM041479, length 1197, with 3 features
HM041478, length 1187, with 3 features
HM041477, length 1197, with 3 features
HM041476, length 1205, with 3 features
HM041474, length 1163, with 3 features
HM041473, length 1203, with 3 features
HM041472, length 1182, with 3 features
HM041469, length 1189, with 3 features
HM041468, length 1202, with 3 features
HM041467, length 1199, with 3 features
HM041466, length 1205, with 3 features
HM041465, length 1190, with 3 features
HM041464, length 1184, with 3 features
AY851612, length 892, with 3 features
AY851611, length 881, with 3 features
AF191661, length 895, with 3 features
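###Markdown
If you plan to re-run this analysis, it is worth saving the fetched records once and parsing the local copy on later runs. A minimal sketch of that workflow (the output filename here is illustrative):

```
from Bio import Entrez, SeqIO

handle = Entrez.efetch(db="nuccore", id=gi_str, rettype="gb", retmode="text")
with open("opuntia_rpl16.gb", "w") as out_handle:
    out_handle.write(handle.read())
handle.close()

# Later runs can parse the saved file without touching the network:
records = list(SeqIO.parse("opuntia_rpl16.gb", "gb"))
```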
###Markdown
Using this automated query retrieval functionality is a big plus over doing things by hand. Although the module should obey the NCBI's max three queries per second rule, the NCBI have other recommendations like avoiding peak hours. See Section \[sec:entrez-guidelines\]. In particular, please note that for simplicity, this example does not use the WebEnv history feature. You should use this for any non-trivial search and download work, see Section [History and WebEnv](Using-the-history-and-WebEnv).

Finally, if you plan to repeat your analysis, rather than downloading the files from the NCBI and parsing them immediately (as shown in this example), you should just download the records *once* and save them to your hard disk, and then parse the local file (as sketched above).

Finding the lineage of an organism

Staying with a plant example, let's now find the lineage of the Cypripedioideae orchid family. First, we search the Taxonomy database for Cypripedioideae, which yields exactly one NCBI taxonomy identifier:
###Code
from Bio import Entrez
Entrez.email = "[email protected]" # Always tell NCBI who you are
handle = Entrez.esearch(db="Taxonomy", term="Cypripedioideae")
record = Entrez.read(handle)
record["IdList"]
record["IdList"][0]
###Output
_____no_output_____
###Markdown
Now, we use `efetch` to download this entry in the Taxonomy database, and then parse it:
###Code
handle = Entrez.efetch(db="Taxonomy", id="158330", retmode="xml")
records = Entrez.read(handle)
###Output
_____no_output_____
###Markdown
Again, this record stores lots of information:
###Code
records[0].keys()
###Output
_____no_output_____
###Markdown
We can get the lineage directly from this record:
###Code
records[0]["Lineage"]
###Output
_____no_output_____
###Markdown
The record data contains much more than just the information shown here - for example look under `LineageEx` instead of `Lineage` and you'll get the NCBI taxon identifiers of the lineage entries too.

Using the history and WebEnv
----------------------------

Often you will want to make a series of linked queries. Most typically, running a search, perhaps refining the search, and then retrieving detailed search results. You *can* do this by making a series of separate calls to Entrez. However, the NCBI prefer you to take advantage of their history support - for example combining ESearch and EFetch.

Another typical use of the history support would be to combine EPost and EFetch. You use EPost to upload a list of identifiers, which starts a new history session. You then download the records with EFetch by referring to the session (instead of the identifiers); a minimal EPost sketch follows the search output below.

Searching for and downloading sequences using the history

Suppose we want to search and download all the *Opuntia* rpl16 nucleotide sequences, and store them in a FASTA file. As shown in Section \[sec:entrez-search-fetch-genbank\], we can naively combine `Bio.Entrez.esearch()` to get a list of GI numbers, and then call `Bio.Entrez.efetch()` to download them all.

However, the approved approach is to run the search with the history feature. Then, we can fetch the results by reference to the search results - which the NCBI can anticipate and cache.

To do this, call `Bio.Entrez.esearch()` as normal, but with the additional argument of `usehistory="y"`,
###Code
from Bio import Entrez
Entrez.email = "[email protected]"
search_handle = Entrez.esearch(db="nucleotide",term="Opuntia[orgn] and rpl16", usehistory="y")
search_results = Entrez.read(search_handle)
search_handle.close()
###Output
_____no_output_____
###Markdown
When you get the XML output back, it will still include the usual search results. However, you are also given two additional pieces of information: the `WebEnv` session cookie, and the `QueryKey`:
###Code
gi_list = search_results["IdList"]
count = int(search_results["Count"])
assert count == len(gi_list)
print("The WebEnv is {}".format(search_results["WebEnv"]))
print("The QueryKey is {}".format(search_results["QueryKey"]))
###Output
The WebEnv is NCID_1_946410500_130.14.18.34_9001_1452651901_1799213676_0MetA0_S_MegaStore_F_1
The QueryKey is 1
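###Markdown
As an aside, the EPost route mentioned earlier yields the same two values. A minimal sketch, using two of the GI numbers from the records shown above (any identifier list you already hold would do):

```
post_handle = Entrez.epost("nucleotide", id=",".join(["330887225", "330887224"]))
post_results = Entrez.read(post_handle)
post_handle.close()
webenv = post_results["WebEnv"]
query_key = post_results["QueryKey"]
# EFetch can now refer to this session instead of listing the identifiers.
```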
###Markdown
Having stored these values in the variables `webenv` and `query_key`, we can use them as parameters to `Bio.Entrez.efetch()` instead of giving the GI numbers as identifiers.

While for small searches you might be OK downloading everything at once, it is better to download in batches. You use the `retstart` and `retmax` parameters to specify which range of search results you want returned (starting entry using zero-based counting, and maximum number of results to return). Sometimes you will get intermittent errors from Entrez, HTTPError 5XX, so we use a try-except-pause-retry block to address this. For example,

```
from Bio import Entrez
import time
try:
    from urllib.error import HTTPError  # for Python 3
except ImportError:
    from urllib2 import HTTPError  # for Python 2

webenv = search_results["WebEnv"]
query_key = search_results["QueryKey"]
batch_size = 3
out_handle = open("orchid_rpl16.fasta", "w")
for start in range(0, count, batch_size):
    end = min(count, start + batch_size)
    print("Going to download record %i to %i" % (start + 1, end))
    attempt = 1
    while attempt <= 3:
        try:
            fetch_handle = Entrez.efetch(db="nucleotide",
                                         rettype="fasta", retmode="text",
                                         retstart=start, retmax=batch_size,
                                         webenv=webenv, query_key=query_key)
            break  # success, stop retrying
        except HTTPError as err:
            if 500 <= err.code <= 599:
                print("Received error from server %s" % err)
                print("Attempt %i of 3" % attempt)
                attempt += 1
                time.sleep(15)
            else:
                raise
    data = fetch_handle.read()
    fetch_handle.close()
    out_handle.write(data)
out_handle.close()
```

For illustrative purposes, this example downloaded the FASTA records in batches of three. Unless you are downloading genomes or chromosomes, you would normally pick a larger batch size.

Searching for and downloading abstracts using the history

Here is another history example, searching for papers published in the last year about the *Opuntia*, and then downloading them into a file in MedLine format:

```
from Bio import Entrez
import time
try:
    from urllib.error import HTTPError  # for Python 3
except ImportError:
    from urllib2 import HTTPError  # for Python 2

Entrez.email = "[email protected]"
search_results = Entrez.read(Entrez.esearch(db="pubmed",
                                            term="Opuntia[ORGN]",
                                            reldate=365, datetype="pdat",
                                            usehistory="y"))
count = int(search_results["Count"])
print("Found %i results" % count)

batch_size = 10
out_handle = open("recent_orchid_papers.txt", "w")
for start in range(0, count, batch_size):
    end = min(count, start + batch_size)
    print("Going to download record %i to %i" % (start + 1, end))
    attempt = 1
    while attempt <= 3:
        try:
            fetch_handle = Entrez.efetch(db="pubmed", rettype="medline",
                                         retmode="text", retstart=start,
                                         retmax=batch_size,
                                         webenv=search_results["WebEnv"],
                                         query_key=search_results["QueryKey"])
            break  # success, stop retrying
        except HTTPError as err:
            if 500 <= err.code <= 599:
                print("Received error from server %s" % err)
                print("Attempt %i of 3" % attempt)
                attempt += 1
                time.sleep(15)
            else:
                raise
    data = fetch_handle.read()
    fetch_handle.close()
    out_handle.write(data)
out_handle.close()
```

At the time of writing, this gave 28 matches - but because this is a date dependent search, this will of course vary. As described in Section \[subsec:entrez-and-medline\] above, you can then use `Bio.Medline` to parse the saved records.

Searching for citations {sec:elink-citations}

Back in Section \[sec:elink\] we mentioned ELink can be used to search for citations of a given paper. Unfortunately this only covers journals indexed for PubMed Central (doing it for all the journals in PubMed would mean a lot more work for the NIH). Let's try this for the Biopython PDB parser paper, PubMed ID 14630660:
###Code
from Bio import Entrez
Entrez.email = "[email protected]"
pmid = "14630660"
results = Entrez.read(Entrez.elink(dbfrom="pubmed", db="pmc",
LinkName="pubmed_pmc_refs", from_uid=pmid))
pmc_ids = [link["Id"] for link in results[0]["LinkSetDb"][0]["Link"]]
pmc_ids
###Output
_____no_output_____
###Markdown
Great - eleven articles. But why hasn't the Biopython application note been found (PubMed ID 19304878)? Well, as you might have guessed from the variable names, these are not actually PubMed IDs, but PubMed Central IDs. Our application note is the third citing paper in that list, PMCID 2682512.

So, what if (like me) you'd rather get back a list of PubMed IDs? Well, we can call ELink again to translate them. This becomes a two-step process, so by now you should expect to use the history feature to accomplish it (Section [History and WebEnv](Using-the-history-and-WebEnv)).

But first, taking the more straightforward approach of making a second (separate) call to ELink:
###Code
results2 = Entrez.read(Entrez.elink(dbfrom="pmc", db="pubmed", LinkName="pmc_pubmed",
from_uid=",".join(pmc_ids)))
pubmed_ids = [link["Id"] for link in results2[0]["LinkSetDb"][0]["Link"]]
pubmed_ids
###Output
_____no_output_____
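###Markdown
To sanity-check the translated identifiers, one option is `Bio.Entrez.esummary()`. A minimal sketch - the `"Id"` and `"Title"` keys are what PubMed document summaries normally carry, but it is worth printing one parsed summary to confirm the field names on your Biopython version:

```
summary_handle = Entrez.esummary(db="pubmed", id=",".join(pubmed_ids))
summaries = Entrez.read(summary_handle)
summary_handle.close()
for docsum in summaries:
    print(docsum["Id"], docsum["Title"])
```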
h2o_glrm_sampled_dataset.ipynb
###Markdown
Impute missing values using a generalized low rank model on H2O

Initiate h2o cluster
###Code
library(h2o)
h2o.init(nthreads = -1, max_mem_size = '10G')
# localH2O <- h2o.init(ip = 'localhost', port = 54321, max_mem_size = '24G', nthreads=-1)
###Output
----------------------------------------------------------------------
Your next step is to start H2O:
> h2o.init()
For H2O package documentation, ask for help:
> ??h2o
After starting H2O, you can use the Web UI at http://localhost:54321
For more information visit http://docs.h2o.ai
----------------------------------------------------------------------
Attaching package: ‘h2o’
The following objects are masked from ‘package:stats’:
cor, sd, var
The following objects are masked from ‘package:base’:
&&, %*%, %in%, ||, apply, as.factor, as.numeric, colnames,
colnames<-, ifelse, is.character, is.factor, is.numeric, log,
log10, log1p, log2, round, signif, trunc
###Markdown
Import sample data to h2o
###Code
data.hex <- h2o.importFile(path = normalizePath(
"PHBsample14_cleaned_small.csv"
# "lowrank_archetypes.csv"
)
, destination_frame = "data.hex")
data.hex$C1 <- NULL
dim(data.hex)
h2o.names(data.hex)
h2o.str(data.hex)
###Output
Class 'H2OFrame' <environment: 0x7fd236c0e780>
- attr(*, "op")= chr "cols"
- attr(*, "eval")= logi TRUE
- attr(*, "id")= chr "RTMP_sid_92f5_1"
- attr(*, "nrow")= int 6000
- attr(*, "ncol")= int 515
- attr(*, "types")=List of 515
..$ : chr "int"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "int"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "int"
..$ : chr "int"
..$ : chr "int"
..$ : chr "real"
..$ : chr "real"
..$ : chr "int"
..$ : chr "real"
..$ : chr "real"
..$ : chr "int"
..$ : chr "int"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
..$ : chr "real"
.. [list output truncated]
- attr(*, "data")='data.frame': 10 obs. of 515 variables:
..$ X : num 54383 57508 12008 56042 24142 ...
..$ Gender : Factor w/ 2 levels "F","M": 1 1 1 1 2 1 2 1 1 1
..$ JointInd : Factor w/ 2 levels "N","Y": 1 1 2 1 1 1 2 1 1 1
..$ SCPeriod : num 3 7 4 3 6 4 8 6 8 4
..$ GMDBInd : Factor w/ 2 levels "N","Y": 1 1 2 2 1 1 1 1 1 1
..$ Dist : Factor w/ 5 levels "BK","CA","IA",..: 1 3 5 5 1 5 2 5 3 5
..$ Comm : Factor w/ 5 levels "A","B","C","D",..: 2 5 3 2 1 1 2 2 1 3
..$ OriginalOwner_C1 : Factor w/ 3 levels "","N","Y": 2 2 2 2 2 2 2 3 2 2
..$ Match1 : Factor w/ 2 levels "N","Y": 2 2 2 2 2 2 2 2 2 2
..$ Nielsen.County.Size.Code_C3 : Factor w/ 6 levels "","A","B","C",..: 5 3 2 2 3 2 4 3 2 2
..$ Dwell.Type.Indicator_C3 : Factor w/ 3 levels "","N","S": 3 3 3 3 3 3 3 3 3 3
..$ Home.Owner.Renter.Indicator_C3 : Factor w/ 4 levels "","H","N","S": 4 4 4 4 4 4 4 4 4 4
..$ Household.Education.Indicator_C3 : Factor w/ 4 levels "","H","N","S": 2 2 2 2 2 2 4 4 2 2
..$ Number.of.Adults.Indicator_C3 : Factor w/ 4 levels "","H","N","S": 4 4 4 4 4 2 4 4 4 4
..$ Length.of.Residence.Level_C3 : Factor w/ 3 levels "","N","S": 3 3 3 3 3 3 3 3 3 3
..$ Household.Age.Indicator_C3 : Factor w/ 4 levels "","H","N","S": 4 4 4 2 4 4 4 4 4 4
..$ Presence.of.Kids.Indicator_C3 : Factor w/ 4 levels "","H","N","S": 2 2 2 2 2 2 2 2 2 2
..$ Household.Size.Indicator_C3 : Factor w/ 4 levels "","H","N","S": 4 4 4 2 4 4 4 4 4 4
..$ Target.Narrow.Band.Income.Indicator_C3 : Factor w/ 4 levels "","H","N","S": 4 4 4 2 4 4 4 4 2 4
..$ Match3 : Factor w/ 2 levels "N","Y": 2 2 2 2 2 2 2 2 2 2
..$ Match4 : Factor w/ 2 levels "N","Y": 2 2 2 2 2 2 2 2 2 2
..$ Qual : Factor w/ 2 levels "N","Q": 1 2 2 2 1 1 1 2 2 1
..$ EligibleInd : Factor w/ 3 levels "","N","Y": 3 3 3 2 1 3 3 3 3 3
..$ FirstEligQInd : Factor w/ 3 levels "","N","Y": 2 2 2 2 1 2 2 2 2 2
..$ UtilizationInd : Factor w/ 3 levels "","N","Y": 2 2 2 2 1 3 2 3 3 2
..$ PolNum_UW : num 43598 215034 368517 433403 505923 ...
..$ IssAgeALB : num 69 64 67 47 66 80 65 67 73 54
..$ Dur : num 4 3 3 6 3 8 2 5 7 5
..$ AV : num 105440 110735 90409 108971 339526 ...
..$ WDtoDate : num 0 0 0 0 0 ...
..$ WDCount : num 0 0 0 0 0 4 0 2 0 0
..$ AVPctEq : num 0.687 NaN NaN 0.695 0.695 ...
..$ LotSize_C1 : num 11543 3125 4000 20000 24829 ...
..$ BuildingArea_C1 : num 1372 1643 2320 2128 2169 ...
..$ EstMarketValue_C1 : num 169000 106000 744000 435000 284000 NaN 0 165000 182000 562000
..$ MarkettoArea_C1 : num 123.2 64.5 320.7 204.4 130.9 ...
..$ CEN_bg_populationDensity : num 142 1082 2262 1041 415 ...
..$ CEN_bg_pctMale : num 0.46 0.501 0.45 0.414 0.505 ...
..$ CEN_bg_age85plus : num 0.0115 0.0114 0.0282 0.0102 0.0206 ...
..$ CEN_bg_ageUnder5 : num 0.0478 0.035 0.0393 0.0735 0.0233 ...
..$ CEN_bg_ageUnder10 : num 0.1396 0.0935 0.0827 0.1726 0.1264 ...
..$ CEN_bg_ageUnder15 : num 0.2036 0.1293 0.0998 0.3293 0.2164 ...
..$ CEN_bg_pctFamilyHH : num 0.578 0.667 0.711 0.95 0.67 ...
..$ CEN_bg_pctMarriedHH : num 0.532 0.581 0.622 0.761 0.646 ...
..$ CEN_bg_pctMalenoWifeHH : num 0.0216 0.0269 0 0 0.0107 ...
..$ CEN_bg_pctFemalenoHusbandHH : num 0.0238 0.0591 0.0894 0.189 0.0136 ...
..$ CEN_bg_pctLiveAloneHH : num 0.4048 0.2616 0.158 0.0501 0.2495 ...
..$ CEN_bg_pctNoHSGrad : num 0.14008 0.07026 0 0.00779 0.01283 ...
..$ CEN_bg_pctHSGrad : num 0.3502 0.4756 0.1534 0.303 0.0363 ...
..$ CEN_bg_pctSomeCollege : num 0.235 0.19 0.207 0.163 0.126 ...
..$ CEN_bg_pctAssociateDegree : num 0.0571 0.0804 0.1382 0.0649 0.0534 ...
..$ CEN_bg_pctBachelorDegree : num 0.0986 0.1436 0.2681 0.2372 0.4233 ...
..$ CEN_bg_pctMastersDegree : num 0.0389 0.0295 0.0773 0.1203 0.2512 ...
..$ CEN_bg_pctProfessionalDegree : num 0.05577 0.00407 0.11241 0.04156 0.02993 ...
..$ CEN_bg_pctDoctorateDegree : num 0.02464 0.00611 0.04333 0.06234 0.06734 ...
..$ CEN_bg_pctHHincomeLT10K : num 0.1039 0.0125 0 0 0.0204 ...
..$ CEN_bg_pctHHincomeLT15K : num 0.1299 0.0376 0.0166 0 0.034 ...
..$ CEN_bg_pctHHincomeGE200K : num 0.0519 0.0125 0.1559 0.2262 0.101 ...
..$ CEN_bg_pctHHWageIncome : num 0.565 0.762 0.742 0.843 0.898 ...
..$ CEN_bg_pctHHSelfEmpIncome : num 0.0779 0.095 0.3119 0.1567 0.0981 ...
..$ CEN_bg_pctHHInvestIncome : num 0.251 0.303 0.524 0.283 0.33 ...
..$ CEN_bg_pctHHSocialSecurityIncome : num 0.444 0.391 0.306 0.158 0.18 ...
..$ CEN_bg_pctHHSuppSocSecInc : num 0 0.0251 0 0 0.0155 ...
..$ CEN_bg_pctHHPublicAssistIncome : num 0 0.0125 0.0249 0 0 ...
..$ CEN_bg_pctHHRetirementIncome : num 0.115 0.276 0.289 0.126 0.215 ...
..$ CEN_bg_pctHHOtherIncome : num 0.1732 0.1362 0.0312 0.1373 0.2019 ...
..$ CEN_bg_pctWorkforceFemale : num 0.378 0.489 0.444 0.58 0.445 ...
..$ CEN_bg_pctWorkforceForProfit : num 0.617 0.83 0.695 0.778 0.471 ...
..$ CEN_bg_pctWorkforceNonProfit : num 0.0443 0.0638 0.0768 0.0558 0.2623 ...
..$ CEN_bg_pctWorkforceGovt : num 0.2083 0.078 0.0543 0.0959 0.2481 ...
..$ CEN_bg_pctWorkforceSelfEmp : num 0.1302 0.0383 0.3614 0.1222 0.0232 ...
..$ CEN_bg_pctWorkforceFamily : num 0 0 0 0 0 ...
..$ CEN_bg_pctVacantHousingUnits : num 0.1523 0.1171 0.2723 0.0535 0 ...
..$ CEN_bg_pctOwnerOccUnits : num 0.798 0.772 0.511 0.803 0.896 ...
..$ CEN_bg_pctRenterOccUnits : num 0.0495 0.1108 0.2163 0.1437 0.1039 ...
..$ CEN_bg_pctSeasonalHousingUnits : num 0 0.0127 0.1952 0 0 ...
..$ CEN_bg_AvgHHSizeOwnerOccup : num 2.32 2.48 2.07 3.43 2.88 3.12 2.42 2.48 3.08 2.9
..$ CEN_bg_AvgHHSizeRenterOccup : num 1.3 1.5 2.04 2.84 2.75 1.88 2.56 1.36 0 1.46
..$ CEN_bg_pctOwnOccValGE500K : num 0.0621 0 0.9645 0.3676 0.1506 ...
..$ CEN_bg_pctOwnOccValGE200K : num 0.4414 0.0574 1 1 0.9848 ...
..$ CEN_bg_pctOwnOccValGE100K : num 0.844 0.609 1 1 0.985 ...
..$ CEN_bg_pctOwnOccValGE50K : num 1 0.949 1 1 0.985 ...
..$ CEN_bg_pctOwnOccNoMortgage : num 0.4529 0.4221 0.1923 0.0629 0.1517 ...
..$ CEN_bg_pctOwnOccSecondMort : num 0.0598 0.168 0.4112 0.3486 0.2947 ...
..$ CEN_bg_pctManagementOcc : num 0.0781 0.1348 0.2903 0.1412 0.276 ...
..$ CEN_bg_pctComputerOcc : num 0.112 0.0482 0.015 0.058 0.0773 ...
..$ CEN_bg_pctEducationOcc : num 0.1016 0.0383 0.1629 0.0622 0.2588 ...
..$ CEN_bg_pctHealthPractitionersOcc : num 0.0443 0.0525 0.1423 0.1296 0.0738 ...
..$ CEN_bg_pctHealthSupportOcc : num 0 0.0241 0.0375 0.0285 0 ...
..$ CEN_bg_pctProtectServiceOcc : num 0 0.00426 0 0.06849 0.01368 ...
..$ CEN_bg_pctFoodOcc : num 0 0.0638 0.03 0.0537 0 ...
..$ CEN_bg_pctCleaningOcc : num 0.02083 0.03262 0.01498 0.0137 0.00892 ...
..$ CEN_bg_pctPersonalCareOcc : num 0 0.0255 0.015 0.0295 0.0369 ...
..$ CEN_bg_pctSalesOcc : num 0.2396 0.1447 0.1704 0.0295 0.0904 ...
..$ CEN_bg_pctAdminOcc : num 0.0365 0.1546 0.0543 0.2476 0.1124 ...
..$ CEN_bg_pctFarmingOcc : num 0 0 0 0 0 ...
..$ CEN_bg_pctConstructionOcc : num 0.138 0.0936 0.0543 0.0938 0.0178 ...
..$ CEN_bg_pctRepairOcc : num 0 0.0298 0 0.0169 0.0137 ...
..$ CEN_bg_pctProductionOcc : num 0.1406 0.0752 0.0131 0.0158 0.0113 ...
.. [list output truncated]
###Markdown
Data Munging

Change Boolean and Categorical variables into factors
###Code
for (col in 2:8) {
data.hex[,col] <- as.factor(as.character(data.hex[,col]))
}
str(data.hex[,2:8])
# Summary of Datatypes:
# PK: 1
# Boolean: 2:8
# Categorical: 9:20
# Positive Integer: 21:135
# Positive Numeric: 136:187
# Percentage Numeric: 188:500
# Real Value Numeric: 501:510
###Output
_____no_output_____
###Markdown
Train GLRM model

First, select K
###Code
glrm_K <- function(K_input){
t0 = Sys.time()
data.glrm <- h2o.glrm(
training_frame = data.hex,
# impute variables except policy number
cols = c(2:ncol(data.hex)),
k = K_input, seed = 1234, init = "SVD", svd_method = "GramSVD",
loss = "Quadratic",
multi_loss = "Categorical",
transform = "NORMALIZE",
impute_original = TRUE,
regularization_x = "Quadratic", regularization_y = "Quadratic",
max_iterations = 2000, min_step_size = 1e-6)
t1 = Sys.time()
return(data.glrm)
}
k_values = c(seq(20, 60, 5))
glrm_results = vector("list", length(k_values))  # pre-allocate one slot per K
i = 1
for (k in k_values){
print(paste0("Training GLRM model with K = ", k))
t0 = Sys.time()
glrm_results[[i]] = glrm_K(k)
t1 = Sys.time()
print("Model trained")
print(t1-t0)
i = i + 1
}
dfSelectK = data.frame()
for (i in seq_along(k_values)){
dfSelectK = rbind(dfSelectK, glrm_results[[i]]@model$model_summary)
}
dfSelectK$K_values = k_values
plot(dfSelectK[, c('K_values', 'final_objective_value')])
###Output
_____no_output_____
###Markdown
When K = 50 the objective function reaches a plateau, so we continue with this rank to train the GLRM
###Code
t0 = Sys.time()
data.glrm <- h2o.glrm(training_frame = data.hex,
# impute variables except policy number
cols = c(2:ncol(data.hex)),
k = 50, seed = 1234, init = "SVD", svd_method = "GramSVD",
# Specify different loss function for different data types
# loss_by_col = c("Ordinal"),
# loss_by_col_idx = c(5),
# loss_by_col_idx = c(3:4, 5, 6:26, 28:ncol(data.hex)),
loss = "Quadratic",
multi_loss = "Categorical",
transform = "NORMALIZE",
impute_original = TRUE,
regularization_x = "Quadratic", regularization_y = "Quadratic",
max_iterations = 200, min_step_size = 1e-6)
t1 = Sys.time()
print(t1-t0)
plot(data.glrm)
###Output
Time difference of 2.777899 mins
###Markdown
Next, export and run the imputation

1. Low rank representation (principal stances)
###Code
rep <- h2o.getFrame(data.glrm@model$representation_name)
h2o.exportFile(rep, path = "lowrank_rep.csv")
rep_rframe <- as.data.frame(rep)
###Output
|
| | 0%
|
|======================================================================| 100%
###Markdown
2. Get archetypes
###Code
archetypes <- h2o.proj_archetypes(data.glrm, data.hex, reverse_transform = TRUE)
h2o.exportFile(archetypes, path = "lowrank_archetypes.csv")
archetypes_rframe <- as.data.frame(archetypes)
###Output
|
| | 0%
|
|======================================================================| 100%
###Markdown
3. Reconstruct the original matrix
###Code
data.pred <- predict(data.glrm, data.hex)
h2o.exportFile(data.pred, path = "recontr_data.csv")
reconstr_rframe <- as.data.frame(data.pred)
###Output
|======================================================================| 100%
|======================================================================| 100%
###Markdown
Shutdown h2o cluster
###Code
h2o.shutdown(prompt = FALSE)
###Output
_____no_output_____
notebooks/WeightImprintingSoftmax/WeightImprintingSoftmax_ImagenetA.ipynb
###Markdown
Mount Drive
###Code
from google.colab import drive
drive.mount('/content/drive')
!pip install -U -q PyDrive
!pip install httplib2==0.15.0
import os
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from pydrive.files import GoogleDriveFileList
from google.colab import auth
from oauth2client.client import GoogleCredentials
from getpass import getpass
import urllib
# 1. Authenticate and create the PyDrive client.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
# Cloning CLIPPER (from the PAL-ML organisation) to access modules.
# Need password to access private repo.
if 'CLIPPER' not in os.listdir():
cmd_string = 'git clone https://github.com/PAL-ML/CLIPPER.git'
os.system(cmd_string)
###Output
Collecting httplib2==0.15.0
[?25l Downloading https://files.pythonhosted.org/packages/be/83/5e006e25403871ffbbf587c7aa4650158c947d46e89f2d50dcaf018464de/httplib2-0.15.0-py3-none-any.whl (94kB)
[K |███▌ | 10kB 22.5MB/s eta 0:00:01
[K |███████ | 20kB 28.8MB/s eta 0:00:01
[K |██████████▍ | 30kB 22.2MB/s eta 0:00:01
[K |█████████████▉ | 40kB 17.0MB/s eta 0:00:01
[K |█████████████████▎ | 51kB 8.2MB/s eta 0:00:01
[K |████████████████████▊ | 61kB 7.6MB/s eta 0:00:01
[K |████████████████████████▏ | 71kB 8.4MB/s eta 0:00:01
[K |███████████████████████████▋ | 81kB 9.2MB/s eta 0:00:01
[K |███████████████████████████████ | 92kB 9.4MB/s eta 0:00:01
[K |████████████████████████████████| 102kB 5.9MB/s
[?25hInstalling collected packages: httplib2
Found existing installation: httplib2 0.17.4
Uninstalling httplib2-0.17.4:
Successfully uninstalled httplib2-0.17.4
Successfully installed httplib2-0.15.0
###Markdown
Installation

Install multi label metrics dependencies
###Code
! pip install scikit-learn==0.24
###Output
Collecting scikit-learn==0.24
[?25l Downloading https://files.pythonhosted.org/packages/b1/ed/ab51a8da34d2b3f4524b21093081e7f9e2ddf1c9eac9f795dcf68ad0a57d/scikit_learn-0.24.0-cp37-cp37m-manylinux2010_x86_64.whl (22.3MB)
[K |████████████████████████████████| 22.3MB 1.3MB/s
[?25hRequirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn==0.24) (1.0.1)
Requirement already satisfied: numpy>=1.13.3 in /usr/local/lib/python3.7/dist-packages (from scikit-learn==0.24) (1.19.5)
Collecting threadpoolctl>=2.0.0
Downloading https://files.pythonhosted.org/packages/f7/12/ec3f2e203afa394a149911729357aa48affc59c20e2c1c8297a60f33f133/threadpoolctl-2.1.0-py3-none-any.whl
Requirement already satisfied: scipy>=0.19.1 in /usr/local/lib/python3.7/dist-packages (from scikit-learn==0.24) (1.4.1)
Installing collected packages: threadpoolctl, scikit-learn
Found existing installation: scikit-learn 0.22.2.post1
Uninstalling scikit-learn-0.22.2.post1:
Successfully uninstalled scikit-learn-0.22.2.post1
Successfully installed scikit-learn-0.24.0 threadpoolctl-2.1.0
###Markdown
Install CLIP dependencies
###Code
import subprocess
CUDA_version = [s for s in subprocess.check_output(["nvcc", "--version"]).decode("UTF-8").split(", ") if s.startswith("release")][0].split(" ")[-1]
print("CUDA version:", CUDA_version)
if CUDA_version == "10.0":
torch_version_suffix = "+cu100"
elif CUDA_version == "10.1":
torch_version_suffix = "+cu101"
elif CUDA_version == "10.2":
torch_version_suffix = ""
else:
torch_version_suffix = "+cu110"
! pip install torch==1.7.1{torch_version_suffix} torchvision==0.8.2{torch_version_suffix} -f https://download.pytorch.org/whl/torch_stable.html ftfy regex
! pip install ftfy regex
! wget https://openaipublic.azureedge.net/clip/bpe_simple_vocab_16e6.txt.gz -O bpe_simple_vocab_16e6.txt.gz
!pip install git+https://github.com/Sri-vatsa/CLIP # using this fork because of visualization capabilities
###Output
Collecting git+https://github.com/Sri-vatsa/CLIP
Cloning https://github.com/Sri-vatsa/CLIP to /tmp/pip-req-build-vuf156kn
Running command git clone -q https://github.com/Sri-vatsa/CLIP /tmp/pip-req-build-vuf156kn
Requirement already satisfied: ftfy in /usr/local/lib/python3.7/dist-packages (from clip==1.0) (6.0.3)
Requirement already satisfied: regex in /usr/local/lib/python3.7/dist-packages (from clip==1.0) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from clip==1.0) (4.41.1)
Requirement already satisfied: torch~=1.7.1 in /usr/local/lib/python3.7/dist-packages (from clip==1.0) (1.7.1+cu110)
Requirement already satisfied: torchvision~=0.8.2 in /usr/local/lib/python3.7/dist-packages (from clip==1.0) (0.8.2+cu110)
Requirement already satisfied: wcwidth in /usr/local/lib/python3.7/dist-packages (from ftfy->clip==1.0) (0.2.5)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch~=1.7.1->clip==1.0) (3.7.4.3)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from torch~=1.7.1->clip==1.0) (1.19.5)
Requirement already satisfied: pillow>=4.1.1 in /usr/local/lib/python3.7/dist-packages (from torchvision~=0.8.2->clip==1.0) (7.1.2)
Building wheels for collected packages: clip
Building wheel for clip (setup.py) ... [?25l[?25hdone
Created wheel for clip: filename=clip-1.0-cp37-none-any.whl size=1368623 sha256=5a9e91320cc8ffbb2d79efddf20437108b9ddd0e828c033c5db36f16ca537503
Stored in directory: /tmp/pip-ephem-wheel-cache-6ll7l0x4/wheels/cc/55/69/0d411dabbd5009fd069d47b47cf7839c54e595dc61725b307b
Successfully built clip
Installing collected packages: clip
Successfully installed clip-1.0
###Markdown
Install clustering dependencies
###Code
!pip -q install umap-learn>=0.3.7
###Output
_____no_output_____
###Markdown
Install dataset manager dependencies
###Code
!pip install wget
###Output
Collecting wget
Downloading https://files.pythonhosted.org/packages/47/6a/62e288da7bcda82b935ff0c6cfe542970f04e29c756b0e147251b2fb251f/wget-3.2.zip
Building wheels for collected packages: wget
Building wheel for wget (setup.py) ... [?25l[?25hdone
Created wheel for wget: filename=wget-3.2-cp37-none-any.whl size=9681 sha256=b097749b55adf99d36a44a9095d431bca49f830d513519fccee098d61a40e79e
Stored in directory: /root/.cache/pip/wheels/40/15/30/7d8f7cea2902b4db79e3fea550d7d7b85ecb27ef992b618f3f
Successfully built wget
Installing collected packages: wget
Successfully installed wget-3.2
###Markdown
Imports
###Code
# ML Libraries
import tensorflow as tf
import tensorflow_hub as hub
import torch
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms as transforms
import keras
# Data processing
import PIL
import base64
import imageio
import pandas as pd
import numpy as np
import json
from PIL import Image
import cv2
from sklearn.feature_extraction.image import extract_patches_2d
# Plotting
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from IPython.core.display import display, HTML
from matplotlib import cm
import matplotlib.image as mpimg
# Models
import clip
# Datasets
import tensorflow_datasets as tfds
# Clustering
# import umap
from sklearn import metrics
from sklearn.cluster import KMeans
#from yellowbrick.cluster import KElbowVisualizer
# Misc
import progressbar
import logging
from abc import ABC, abstractmethod
import time
import urllib.request
import os
from sklearn.metrics import jaccard_score, hamming_loss, accuracy_score, f1_score
from sklearn.preprocessing import MultiLabelBinarizer
# Modules
from CLIPPER.code.ExperimentModules import embedding_models
from CLIPPER.code.ExperimentModules.dataset_manager import DatasetManager
from CLIPPER.code.ExperimentModules.weight_imprinting_classifier import WeightImprintingClassifier
from CLIPPER.code.ExperimentModules import simclr_data_augmentations
from CLIPPER.code.ExperimentModules.utils import (save_npy, load_npy,
get_folder_id,
create_expt_dir,
save_to_drive,
load_all_from_drive_folder,
download_file_by_name,
delete_file_by_name)
logging.getLogger('googleapicliet.discovery_cache').setLevel(logging.ERROR)
###Output
_____no_output_____
###Markdown
Initialization & Constants

Dataset details
###Code
dataset_name = 'ImagenetA'
folder_name = "ImagenetA-Embeddings-28-02-21"
# Change parentid to match that of experiments root folder in gdrive
parentid = '1bK72W-Um20EQDEyChNhNJthUNbmoSEjD'
# Filepaths
test_labels_filename = "test_labels.npz"
test_embeddings_filename_suffix = "_embeddings_test.npz"
# Initialize sepcific experiment folder in drive
folderid = create_expt_dir(drive, parentid, folder_name)
###Output
title: ImagenetA-Embeddings-28-02-21, id: 13IXmLLCxY96gh9FQMR6Il_dXDB2LsYcR
Experiment folder already exists. WARNING: Following with this run might overwrite existing results stored.
###Markdown
Few shot learning parameters
###Code
num_ways = 5 # [5, 20]
num_shot = 5 # [5, 1]
num_eval = 15 # [5, 10, 15, 19]
num_episodes = 100
shuffle = False
###Output
_____no_output_____
###Markdown
Image embedding and augmentations
###Code
embedding_model = embedding_models.CLIPEmbeddingWrapper()
num_augmentations = 0 # [0, 5, 10]
trivial=False # [True, False]
###Output
_____no_output_____
###Markdown
Training parameters
###Code
# List of number of epochs to train over, e.g. [5, 10, 15, 20]. [0] indicates no training.
train_epochs_arr = [0]
# Single label (softmax) parameters
multi_label= False # [True, False] i.e. sigmoid or softmax
metrics_val = ['accuracy', 'ap', 'map', 'c_f1', 'o_f1', 'c_precision', 'o_precision', 'c_recall', 'o_recall', 'top1_accuracy', 'top5_accuracy', 'classwise_accuracy', 'c_accuracy']
###Output
_____no_output_____
###Markdown
Load data
###Code
def get_ndarray_from_drive(drive, folderid, filename):
download_file_by_name(drive, folderid, filename)
return np.load(filename)['data']
test_labels = get_ndarray_from_drive(drive, folderid, test_labels_filename)
dm = DatasetManager()
test_data_generator = dm.load_dataset('imagenet_a', split='test')
class_names = dm.get_class_names()
print(class_names)
###Output
Downloading and preparing dataset imagenet_a/0.1.0 (download: 655.70 MiB, generated: 650.87 MiB, total: 1.28 GiB) to /root/tensorflow_datasets/imagenet_a/0.1.0...
###Markdown
Create label dictionary
###Code
unique_labels = np.unique(test_labels)
print(len(unique_labels))
label_dictionary = {la:[] for la in unique_labels}
for i in range(len(test_labels)):
la = test_labels[i]
label_dictionary[la].append(i)
###Output
_____no_output_____
###Markdown
Weight Imprinting models on train data embeddings

Function definitions
###Code
def start_progress_bar(bar_len):
widgets = [
' [',
progressbar.Timer(format= 'elapsed time: %(elapsed)s'),
'] ',
progressbar.Bar('*'),' (',
progressbar.ETA(), ') ',
]
pbar = progressbar.ProgressBar(
max_value=bar_len, widgets=widgets
).start()
return pbar
def prepare_indices(
num_ways,
num_shot,
num_eval,
num_episodes,
label_dictionary,
labels,
shuffle=False
):
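    """Sample `num_episodes` few-shot episodes.

    For each episode, pick `num_ways` classes, then draw `num_shot` support
    and `num_eval` query indices per class. Returns (support indices,
    query indices, support labels, query labels), one entry per episode.
    """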
eval_indices = []
train_indices = []
wi_y = []
eval_y = []
label_dictionary = {la:label_dictionary[la] for la in label_dictionary if len(label_dictionary[la]) >= (num_shot+num_eval)}
unique_labels = list(label_dictionary.keys())
pbar = start_progress_bar(num_episodes)
for s in range(num_episodes):
# Setting random seed for replicability
np.random.seed(s)
_train_indices = []
_eval_indices = []
selected_labels = np.random.choice(unique_labels, size=num_ways, replace=False)
for la in selected_labels:
la_indices = label_dictionary[la]
select = np.random.choice(la_indices, size = num_shot+num_eval, replace=False)
tr_idx = list(select[:num_shot])
ev_idx = list(select[num_shot:])
_train_indices = _train_indices + tr_idx
_eval_indices = _eval_indices + ev_idx
if shuffle:
np.random.shuffle(_train_indices)
np.random.shuffle(_eval_indices)
train_indices.append(_train_indices)
eval_indices.append(_eval_indices)
        _wi_y = labels[_train_indices]
        _eval_y = labels[_eval_indices]
wi_y.append(_wi_y)
eval_y.append(_eval_y)
pbar.update(s+1)
return train_indices, eval_indices, wi_y, eval_y
def embed_images(
embedding_model,
train_indices,
num_augmentations,
trivial=False
):
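    """Embed every unique support image with `embedding_model`, optionally
    applying `num_augmentations` SimCLR-style augmentations per image.
    Returns the sorted unique indices and the embeddings aligned with them.
    """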
def augment_image(image, num_augmentations, trivial):
""" Perform SimCLR augmentations on the image
"""
if np.max(image) > 1:
image = image/255
augmented_images = [image]
def _run_filters(image):
width = image.shape[1]
height = image.shape[0]
image_aug = simclr_data_augmentations.random_crop_with_resize(
image,
height,
width
)
image_aug = tf.image.random_flip_left_right(image_aug)
image_aug = simclr_data_augmentations.random_color_jitter(image_aug)
image_aug = simclr_data_augmentations.random_blur(
image_aug,
height,
width
)
image_aug = tf.reshape(image_aug, [image.shape[0], image.shape[1], 3])
image_aug = tf.clip_by_value(image_aug, 0., 1.)
return image_aug.numpy()
for _ in range(num_augmentations):
if trivial:
aug_image = image
else:
aug_image = _run_filters(image)
augmented_images.append(aug_image)
augmented_images = np.stack(augmented_images)
return augmented_images
embedding_model.load_model()
unique_indices = np.unique(np.array(train_indices))
ds = dm.load_dataset('imagenet_a', split='test')
embeddings = []
IMAGE_IDX = 'image'
pbar = start_progress_bar(unique_indices.size+1)
num_done=0
for idx, item in enumerate(ds):
if idx in unique_indices:
image = item[IMAGE_IDX]
if num_augmentations > 0:
aug_images = augment_image(image, num_augmentations, trivial)
else:
aug_images = image
processed_images = embedding_model.preprocess_data(aug_images)
embedding = embedding_model.embed_images(processed_images)
embeddings.append(embedding)
num_done += 1
pbar.update(num_done+1)
if idx == unique_indices[-1]:
break
embeddings = np.stack(embeddings)
return unique_indices, embeddings
def train_model_for_episode(
indices_and_embeddings,
train_indices,
wi_y,
num_augmentations,
train_epochs=None,
train_batch_size=5,
multi_label=True
):
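    """Imprint dense-layer weights from the support embeddings (per-class
    means), then optionally fine-tune the layer for `train_epochs` epochs.
    """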
train_embeddings = []
train_labels = []
ind = indices_and_embeddings[0]
emb = indices_and_embeddings[1]
for idx, tr_idx in enumerate(train_indices):
train_embeddings.append(emb[np.argwhere(ind==tr_idx)[0][0]])
train_labels += [wi_y[idx] for _ in range(num_augmentations+1)]
train_embeddings = np.concatenate(train_embeddings)
train_embeddings = WeightImprintingClassifier.preprocess_input(train_embeddings)
wi_weights, label_mapping = WeightImprintingClassifier.get_imprinting_weights(
train_embeddings, train_labels, False, multi_label
)
wi_parameters = {
"num_classes": num_ways,
"input_dims": train_embeddings.shape[-1],
"scale": False,
"dense_layer_weights": wi_weights,
"multi_label": multi_label
}
wi_cls = WeightImprintingClassifier(wi_parameters)
if train_epochs:
# ep_y = train_labels
rev_label_mapping = {label_mapping[val]:val for val in label_mapping}
train_y = np.zeros((len(train_labels), num_ways))
for idx_y, l in enumerate(train_labels):
if multi_label:
for _l in l:
train_y[idx_y, rev_label_mapping[_l]] = 1
else:
train_y[idx_y, rev_label_mapping[l]] = 1
wi_cls.train(train_embeddings, train_y, train_epochs, train_batch_size)
return wi_cls, label_mapping
def evaluate_model_for_episode(
model,
eval_x,
eval_y,
label_mapping,
metrics=['hamming', 'jaccard', 'subset_accuracy', 'ap', 'map', 'c_f1', 'o_f1', 'c_precision', 'o_precision', 'c_recall', 'o_recall', 'top1_accuracy', 'top5_accuracy', 'classwise_accuracy', 'c_accuracy'],
threshold=0.7,
multi_label=True
):
eval_x = WeightImprintingClassifier.preprocess_input(eval_x)
logits = model.predict_scores(eval_x).tolist()
if multi_label:
pred_y = model.predict_multi_label(eval_x, threshold)
pred_y = [[label_mapping[v] for v in l] for l in pred_y]
met = model.evaluate_multi_label_metrics(
eval_x, eval_y, label_mapping, threshold, metrics
)
else:
pred_y = model.predict_single_label(eval_x)
pred_y = [label_mapping[l] for l in pred_y]
met = model.evaluate_single_label_metrics(
eval_x, eval_y, label_mapping, metrics
)
return pred_y, met, logits
def run_episode_through_model(
indices_and_embeddings,
train_indices,
eval_indices,
wi_y,
eval_y,
thresholds=None,
num_augmentations=0,
train_epochs=None,
train_batch_size=5,
metrics=['hamming', 'jaccard', 'subset_accuracy', 'ap', 'map', 'c_f1', 'o_f1', 'c_precision', 'o_precision', 'c_recall', 'o_recall', 'top1_accuracy', 'top5_accuracy', 'classwise_accuracy', 'c_accuracy'],
embeddings=None,
multi_label=True
):
metrics_values = {m:[] for m in metrics}
wi_cls, label_mapping = train_model_for_episode(
indices_and_embeddings,
train_indices,
wi_y,
num_augmentations,
train_epochs,
train_batch_size,
multi_label=multi_label
)
eval_x = embeddings[eval_indices]
ep_logits = []
if multi_label:
for t in thresholds:
pred_labels, met, logits = evaluate_model_for_episode(
wi_cls,
eval_x,
eval_y,
label_mapping,
threshold=t,
metrics=metrics,
multi_label=True
)
ep_logits.append(logits)
for m in metrics:
metrics_values[m].append(met[m])
else:
pred_labels, metrics_values, logits = evaluate_model_for_episode(
wi_cls,
eval_x,
eval_y,
label_mapping,
metrics=metrics,
multi_label=False
)
ep_logits = logits
return metrics_values, ep_logits
def run_evaluations(
indices_and_embeddings,
train_indices,
eval_indices,
wi_y,
eval_y,
num_episodes,
num_ways,
thresholds,
verbose=True,
normalize=True,
train_epochs=None,
train_batch_size=5,
metrics=['hamming', 'jaccard', 'subset_accuracy', 'ap', 'map', 'c_f1', 'o_f1', 'c_precision', 'o_precision', 'c_recall', 'o_recall', 'top1_accuracy', 'top5_accuracy', 'classwise_accuracy', 'c_accuracy'],
embeddings=None,
num_augmentations=0,
multi_label=True
):
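    """Run every episode through imprinting and evaluation, collecting
    per-episode metric values and raw logits.
    """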
metrics_values = {m:[] for m in metrics}
all_logits = []
if verbose:
pbar = start_progress_bar(num_episodes)
for idx_ep in range(num_episodes):
_train_indices = train_indices[idx_ep]
_eval_indices = eval_indices[idx_ep]
if multi_label:
_wi_y = [[label] for label in wi_y[idx_ep]]
_eval_y = [[label] for label in eval_y[idx_ep]]
else:
_wi_y = wi_y[idx_ep]
_eval_y = eval_y[idx_ep]
met, ep_logits = run_episode_through_model(
indices_and_embeddings,
_train_indices,
_eval_indices,
_wi_y,
_eval_y,
num_augmentations=num_augmentations,
train_epochs=train_epochs,
train_batch_size=train_batch_size,
embeddings=embeddings,
thresholds=thresholds,
metrics=metrics,
multi_label=multi_label
)
all_logits.append(ep_logits)
for m in metrics:
metrics_values[m].append(met[m])
if verbose:
pbar.update(idx_ep+1)
return metrics_values, all_logits
def get_max_mean_jaccard_index_by_threshold(metrics_thresholds):
max_mean_jaccard = np.max([np.mean(mt['jaccard']) for mt in metrics_thresholds])
return max_mean_jaccard
def get_max_mean_jaccard_index_with_threshold(metrics_thresholds):
arr = np.array(metrics_thresholds['jaccard'])
max_mean_jaccard = np.max(np.mean(arr, 0))
threshold = np.argmax(np.mean(arr, 0))
return max_mean_jaccard, threshold
def get_max_mean_f1_score_with_threshold(metrics_thresholds):
arr = np.array(metrics_thresholds['f1_score'])
max_mean_jaccard = np.max(np.mean(arr, 0))
threshold = np.argmax(np.mean(arr, 0))
return max_mean_jaccard, threshold
def get_mean_max_jaccard_index_by_episode(metrics_thresholds):
mean_max_jaccard = np.mean(np.max(np.array([mt['jaccard'] for mt in metrics_thresholds]), axis=0))
return mean_max_jaccard
def plot_metrics_by_threshold(
    metrics_thresholds,
    thresholds,
    metrics=['hamming', 'jaccard', 'subset_accuracy', 'ap', 'map', 'c_f1', 'o_f1', 'c_precision', 'o_precision', 'c_recall', 'o_recall', 'top1_accuracy', 'top5_accuracy', 'classwise_accuracy', 'c_accuracy'],
    title_suffix=""
):
    # (metric key, legend label, line color, how the optimal threshold is picked),
    # in the original plotting order
    metric_specs = [
        ('jaccard', 'Jaccard Index', 'blue', np.argmax),
        ('hamming', 'Hamming Score', 'green', np.argmin),
        ('map', 'mAP', 'red', np.argmax),
        ('o_f1', 'OF1', 'yellow', np.argmax),
        ('c_f1', 'CF1', 'orange', np.argmax),
        ('o_precision', 'OP', 'purple', np.argmax),
        ('c_precision', 'CP', 'cyan', np.argmax),
        ('o_recall', 'OR', 'brown', np.argmin),
        ('c_recall', 'CR', 'pink', np.argmin),
        ('c_accuracy', 'CACC', 'maroon', np.argmax),
        ('top1_accuracy', 'TOP1', 'magenta', np.argmax),
        ('top5_accuracy', 'TOP5', 'slategray', np.argmax),
    ]
    legend = []
    fig = plt.figure(figsize=(10,10))
    for key, label, color, pick in metric_specs:
        if key not in metrics:
            continue
        # average the metric curve over episodes, then mark the optimal threshold
        mean_by_threshold = np.mean(np.array(metrics_thresholds[key]), axis=0)
        opt_threshold = thresholds[pick(mean_by_threshold)]
        plt.plot(thresholds, mean_by_threshold, c=color)
        plt.axvline(opt_threshold, ls="--", c=color)
        legend.append(label)
        legend.append(opt_threshold)
    plt.xlabel('Threshold')
    plt.ylabel('Value')
    plt.legend(legend)
    title = title_suffix+"\nMulti label metrics by threshold"
    plt.title(title)
    plt.grid()
    fname = os.path.join(PLOT_DIR, title_suffix)
    plt.savefig(fname)
    plt.show()
###Output
_____no_output_____
###Markdown
Setting multiple thresholds
###Code
# No threshold for softmax
thresholds = None
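# For sigmoid (multi-label) scoring, a sweep grid would be set here instead;
# illustrative values only, not the grid used in this run:
# thresholds = np.linspace(0.05, 0.95, 19)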
###Output
_____no_output_____
###Markdown
Main: picking indices
###Code
train_indices, eval_indices, wi_y, eval_y = prepare_indices(
num_ways, num_shot, num_eval, num_episodes, label_dictionary, test_labels, shuffle
)
indices, embeddings = embed_images(
embedding_model,
train_indices,
num_augmentations,
trivial=trivial
)
###Output
100%|████████████████████████████████████████| 354M/354M [00:01<00:00, 191MiB/s]
[elapsed time: 0:00:42] |********************************* | (ETA: 0:00:00)
###Markdown
CLIP
###Code
clip_embeddings_test_fn = "clip" + test_embeddings_filename_suffix
clip_embeddings_test = get_ndarray_from_drive(drive, folderid, clip_embeddings_test_fn)
import warnings
warnings.filterwarnings('ignore')
# Build the results filename; "0t" marks the zero-train-epoch (no fine-tuning) runs
epoch_tag = "0t" if train_epochs_arr == [0] else ""
trivial_tag = "_trivial" if trivial else ""
results_filename = ("new_metrics" + dataset_name + "_softmax_" + epoch_tag
                    + str(num_ways) + "w" + str(num_shot) + "s"
                    + str(num_augmentations) + "a" + trivial_tag
                    + "_metrics_with_logits.json")
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
download_file_by_name(drive, folderid, results_filename)
if results_filename in os.listdir():
with open(results_filename, 'r') as f:
json_loaded = json.load(f)
clip_metrics_over_train_epochs = json_loaded['metrics']
logits_over_train_epochs = json_loaded["logits"]
else:
clip_metrics_over_train_epochs = []
logits_over_train_epochs = []
for idx, train_epochs in enumerate(train_epochs_arr):
if idx < len(clip_metrics_over_train_epochs):
continue
print(train_epochs)
clip_metrics_thresholds, all_logits = run_evaluations(
(indices, embeddings),
train_indices,
eval_indices,
wi_y,
eval_y,
num_episodes,
num_ways,
thresholds,
train_epochs=train_epochs,
num_augmentations=num_augmentations,
embeddings=clip_embeddings_test,
multi_label=multi_label,
metrics=metrics_val
)
clip_metrics_over_train_epochs.append(clip_metrics_thresholds)
logits_over_train_epochs.append(all_logits)
# Convert the episode labels to plain strings so they can be JSON-serialized
fin_list = []
for a1 in wi_y:
    fin_a1_list = []
    for a2 in a1:
        fin_a1_list.append(str(a2))
    fin_list.append(fin_a1_list)
with open(results_filename, 'w') as f:
results = {'metrics': clip_metrics_over_train_epochs,
"logits": logits_over_train_epochs,
"true_labels": fin_list}
json.dump(results, f)
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
delete_file_by_name(drive, folderid, results_filename)
save_to_drive(drive, folderid, results_filename)
def get_best_metric_and_threshold(mt, metric_name, thresholds, optimal='max'):
    # average the metric over episodes, then pick the best threshold
    mean_by_threshold = np.mean(np.array(mt[metric_name]), axis=0)
    if optimal == 'max':
        opt_value = np.max(mean_by_threshold)
        opt_threshold = thresholds[np.argmax(mean_by_threshold)]
    elif optimal == 'min':
        opt_value = np.min(mean_by_threshold)
        opt_threshold = thresholds[np.argmin(mean_by_threshold)]
    else:
        raise ValueError("optimal must be 'max' or 'min'")
    return opt_value, opt_threshold
def get_avg_metric(mt, metric_name):
opt_value = np.mean(np.array(mt[metric_name]), axis=0)
return opt_value
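# Example usage (sketch; assumes a threshold sweep was run, i.e. `thresholds` is a grid
# rather than the None used for softmax above):
# best_cf1, best_t = get_best_metric_and_threshold(clip_metrics_thresholds, 'c_f1', thresholds, optimal='max')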
all_metrics = ['accuracy', 'map', 'c_f1', 'o_f1', 'c_precision', 'o_precision', 'c_recall', 'o_recall', 'top1_accuracy', 'top5_accuracy', 'c_accuracy']
final_dict = {}
for ind_metric in all_metrics:
    per_epoch_means = []
    for mt in clip_metrics_over_train_epochs:
        # np.mean returns numpy types, which json cannot serialize; convert to plain lists
        per_epoch_means.append(np.asarray(get_avg_metric(mt, ind_metric)).tolist())
    final_dict[ind_metric] = per_epoch_means
epoch_tag = "0t" if train_epochs_arr == [0] else ""
trivial_tag = "_trivial" if trivial else ""
graph_filename = ("new_metrics" + dataset_name + "_softmax_" + epoch_tag
                  + str(num_ways) + "w" + str(num_shot) + "s"
                  + str(num_augmentations) + "a" + trivial_tag
                  + "_metrics_graphs.json")
with open(graph_filename, 'w') as f:
json.dump(final_dict, f)
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
delete_file_by_name(drive, folderid, graph_filename)
save_to_drive(drive, folderid, graph_filename)
final_dict
###Output
_____no_output_____ |
Lecture7/blood cell CNN.ipynb | ###Markdown
Show example image
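Note: `train_gen`, `test_gen`, and `train_dir` are used below but their construction is not captured here. A minimal setup sketch; the directory paths, batch size, and rescaling are assumptions, not the original configuration:
###Code
# Hypothetical data pipeline: one sub-folder per class under each directory,
# images resized to 256x256 to match the model input below
import os
import tensorflow as tf
train_dir = 'blood_cells/TRAIN'  # assumed path
test_dir = 'blood_cells/TEST'    # assumed path
gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255)
train_gen = gen.flow_from_directory(train_dir, target_size=(256, 256),
                                    batch_size=32, class_mode='categorical')
test_gen = gen.flow_from_directory(test_dir, target_size=(256, 256),
                                   batch_size=32, class_mode='categorical')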
###Code
x, y = next(train_gen)
x.shape, y.shape
import matplotlib.pyplot as plt
import numpy as np
train_gen.class_indices
np.argmax(y[0])
plt.imshow(x[0])
plt.show()
###Output
_____no_output_____
###Markdown
Define our CNN model
###Code
kernel_size = (3, 3)
pool_size = (4, 4)
x_in = tf.keras.layers.Input(shape=(256, 256, 3))
x = tf.keras.layers.Conv2D(filters=8, kernel_size=kernel_size, activation='relu')(x_in)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.MaxPool2D(pool_size=pool_size)(x)
x = tf.keras.layers.Conv2D(filters=16, kernel_size=kernel_size, activation='relu')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.MaxPool2D(pool_size=pool_size)(x)
x = tf.keras.layers.Conv2D(filters=32, kernel_size=kernel_size, activation='relu')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.MaxPool2D(pool_size=pool_size)(x)
x_flatten = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(64, activation='relu')(x_flatten)
x_out = tf.keras.layers.Dense(4, activation='softmax')(x)
model = tf.keras.Model(x_in, x_out)
model.summary()
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['acc'])
model.fit_generator(
train_gen,
steps_per_epoch=len(train_gen),
epochs=10,
validation_data=test_gen,
validation_steps=len(test_gen),
workers=4,
)
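# Note: fit_generator is deprecated in newer tf.keras releases; Model.fit accepts
# generators directly. Equivalent modern call (same assumed generators):
# model.fit(train_gen, steps_per_epoch=len(train_gen), epochs=10,
#           validation_data=test_gen, validation_steps=len(test_gen))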
os.listdir(train_dir)
train_gen.class_indices
from skimage.io import imread
imgs = []
for folder in os.listdir(train_dir):
label = train_gen.class_indices[folder]
for path in os.listdir(os.path.join(train_dir, folder)):
img = imread(os.path.join(train_dir, folder, path))
imgs.append(img)
imgs = np.asarray(imgs)
imgs.shape
###Output
_____no_output_____
###Markdown
Transfer learning with InceptionV3
###Code
from tensorflow.keras.applications.inception_v3 import InceptionV3
base_model = InceptionV3(weights='imagenet', include_top=False)
base_model.summary()
x = base_model.output
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dense(128, activation='relu')(x)
predictions = tf.keras.layers.Dense(4, activation='softmax')(x)
model_transfer = tf.keras.Model(base_model.input, predictions)
for layer in base_model.layers:
layer.trainable = False
model_transfer.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['acc'])
model_transfer.fit_generator(
train_gen,
steps_per_epoch=len(train_gen),
epochs=10,
validation_data=test_gen,
validation_steps=len(test_gen),
workers=4,
)
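# A common second stage (sketch, not part of the original notebook): unfreeze the top
# of the base network and continue training with a small learning rate, e.g.
# for layer in base_model.layers[-30:]:
#     layer.trainable = True
# model_transfer.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
#                        loss='categorical_crossentropy', metrics=['acc'])
# model_transfer.fit(train_gen, epochs=5, validation_data=test_gen)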
x, y = next(test_gen)
x.shape, y.shape
y_pred = model_transfer.predict(x)
y_pred.shape
y_pred[0]
###Output
_____no_output_____ |
compareSpacingSpell.ipynb | ###Markdown
Predict google map review dataset

Model
- kcbert
- fine-tuned with naver shopping review dataset (200,000 samples)
- trained for 5 epochs
- 0.97 accuracy

Dataset
- google map reviews of tourist places in Daejeon, Korea
###Code
import torch
from torch import nn, Tensor
from torch.optim import Optimizer
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler, DistributedSampler, random_split
from torch.nn import CrossEntropyLoss
from pytorch_lightning import LightningModule, Trainer, seed_everything
from pytorch_lightning.metrics.functional import accuracy, precision, recall
from transformers import AdamW, BertForSequenceClassification, BertConfig, AutoTokenizer, BertTokenizer, TrainingArguments, AutoModelWithLMHead
from keras.preprocessing.sequence import pad_sequences
import random
import numpy as np
import time
import datetime
import pandas as pd
import os
from tqdm import tqdm
if torch.cuda.is_available():
device = torch.device("cuda")
print('There are %d GPU(s) available.' % torch.cuda.device_count())
print('We will use the GPU:', torch.cuda.get_device_name(0))
else:
print('No GPU available, using the CPU instead.')
device = torch.device("cpu")
pj_path = os.getenv('HOME') + '/Projects/JeongCheck'
data_path = pj_path + '/compare'
data_list = os.listdir(data_path)
print(len(data_list))
data_list
file_list = os.listdir(data_path)
file_list
spacing = pd.read_csv(data_path + f'/{file_list[0]}')
spell = pd.read_csv(data_path + f'/{file_list[1]}')
spacing.head()
spell.head()
len(spacing), len(spell)
print(spacing.isna().sum())
print('\n')
print(spell.isna().sum())
print(set(spacing.label))
print(set(spell.label))
print(len(spacing[spacing.label==2]))
print(len(spell[spell.label==2]))
test_spac = spacing.copy()
test_spel = spell.copy()
print(len(test_spac), len(test_spel))
###Output
8421 8420
###Markdown
Exclude neutral data (label == 2)
###Code
test_spac = test_spac[test_spac.label != 2]
print(len(test_spac))
test_spel = test_spel[test_spel.label != 2]
print(len(test_spel))
from transformers import BertForSequenceClassification, AdamW, BertConfig
tokenizer = AutoTokenizer.from_pretrained("beomi/kcbert-base")
# Load BertForSequenceClassification, the pretrained BERT model with a single
# linear classification layer on top.
model = BertForSequenceClassification.from_pretrained(
pj_path + "/bert_model/checkpoint-2000",
num_labels = 2,
output_attentions = False, # Whether the model returns attentions weights.
output_hidden_states = False, # Whether the model returns all hidden-states.
)
params = list(model.named_parameters())
print('The BERT model has {:} different named parameters.\n'.format(len(params)))
print('==== Embedding Layer ====\n')
for p in params[0:5]:
print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))
print('\n==== First Transformer ====\n')
for p in params[5:21]:
print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))
print('\n==== Output Layer ====\n')
for p in params[-4:]:
print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))
def convert_input_data(sentences):
    tokenized_texts = [tokenizer.tokenize(sent) for sent in sentences]
    MAX_LEN = 64
    # Convert tokens to numeric indices
    input_ids = [tokenizer.convert_tokens_to_ids(x) for x in tokenized_texts]
    # Truncate sentences to MAX_LEN and pad the remainder with zeros
    input_ids = pad_sequences(input_ids, maxlen=MAX_LEN, dtype="long", truncating="post", padding="post")
    # Initialize the attention masks
    attention_masks = []
    # Set the attention mask to 1 for real tokens and 0 for padding
    for seq in input_ids:
        seq_mask = [float(i>0) for i in seq]
        attention_masks.append(seq_mask)
    inputs = torch.tensor(input_ids)
    masks = torch.tensor(attention_masks)
    return inputs, masks
def test_sentences(sentences):
    # Switch the model to evaluation mode
    model.eval()
    inputs, masks = convert_input_data(sentences)
    # Move the data to the GPU
    b_input_ids = inputs.to(device)
    b_input_mask = masks.to(device)
    # Disable gradient computation
    with torch.no_grad():
        # Run the forward pass
        outputs = model(b_input_ids,
                        token_type_ids=None,
                        attention_mask=b_input_mask)
    # Extract the logits
    logits = outputs[0]
    # Move the logits back to the CPU
    logits = logits.detach().cpu().numpy()
    return logits
device = "cuda:0"
model = model.to(device)
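# Quick smoke test (sketch; the sample sentence is illustrative, not from the dataset):
# logits = test_sentences(["배송도 빠르고 제품도 좋아요"])
# print(np.argmax(logits))  # 0 = negative, 1 = positive (see make_predicted_label below)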
###Output
_____no_output_____
###Markdown
Data conversion
###Code
def preprocessing(df):
    # Keep only Latin and Korean characters in the comments
    df['document'] = df.comment.str.replace('[^A-Za-zㄱ-ㅎㅏ-ㅣ가-힣]+', '', regex=True)
    return df
# result = preprocessing(gr_data)
# result = result.dropna()
# print(result)
# Extract the comments to run sentiment analysis on
def export_com(preprocessed_df):
    sens = []
    for sen in preprocessed_df.comment:
        sens.append(sen)
    print('check length :', len(sens), len(preprocessed_df))  # sanity-check the counts
    print('sample sentence :', sens[1])
    return sens
def make_predicted_label(sen):
sen = [sen]
score = test_sentences(sen)
result = np.argmax(score)
if result == 0: # negative
return 0
elif result == 1: # positive
return 1
def predict_label(model, df, place_name):
result = preprocessing(df)
result = result.dropna()
sens = export_com(result)
scores_data=[]
for sen in sens:
scores_data.append(make_predicted_label(sen))
df['pred'] = scores_data
cor = df[df.label == df.pred]
uncor = df[df.label != df.pred]
print('correct prediction num :', len(cor))
    print('incorrect prediction num :', len(uncor))
print('correct label check :' ,set(cor.label))
# df.to_csv(pj_path + f'/sentiment_data/{place_name}_pred_kcbert.csv')
return df
print('### spacing ###')
predict_spac = predict_label(model, test_spac, 'total')
print('### spell ###')
predict_spel = predict_label(model, test_spel, 'total')
###Output
### spacing ###
check length : 8389 8389
sample sentence : 코로나 때문에 너무 오랜만에 가본 예술의 전당이였고 공연도 너무 좋았습니다 직원 분들도 너무 친절했구요 있는 공연들 최대한 아이들과 많이 가보려구요 가시는 분들 모두 좋은 시간 보내세요
correct prediction num : 6708
incorrect prediction num : 1681
correct label check : {0, 1}
### spell ###
check length : 8386 8386
sample sentence : 역대 엑스포 개최 도시의 기념품과 희귀한 아이템을 한곳에서 볼 수 있습니다
###Markdown
Loss (RMSE)
###Code
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
import math
def rmse(y, y_pred):
from sklearn.metrics import mean_squared_error
import math
print(len(y), len(y_pred))
rmse_label = math.sqrt(mean_squared_error(y, y_pred))
print('rmse of label :', rmse_label)
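# For binary 0/1 labels the squared error equals the 0-1 loss, so this RMSE is just
# sqrt(error rate): e.g. RMSE 0.4476 -> 0.4476**2 ~ 0.2004, i.e. ~20% of predictions
# wrong, which matches the ~80% accuracy reported below.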
###Output
_____no_output_____
###Markdown
Accuracy
###Code
def acc(y, y_pred, total):
correct = (y_pred == y).sum().item()
print(f'Accuracy of the network on the {total} test text: %d %%' % (
100 * correct / total))
def cal_perform(df):
y = df.label
y_pred = df.pred
    print('##### label length check #####')
if len(y) == len(y_pred):
total = len(y)
print(total)
else:
print('different length')
rmse(y, y_pred)
acc(y, y_pred, total)
print('### spacing ###')
cal_perform(predict_spac)
print('### spell ###')
cal_perform(predict_spel)
###Output
### spacing ###
##### label length check #####
8389
8389 8389
rmse of label : 0.44763986853418153
Accuracy of the network on the 8389 test text: 79 %
### spell ###
##### label length check #####
8386
8386 8386
rmse of label : 0.44638623696243956
Accuracy of the network on the 8386 test text: 80 %
###Markdown
Predict google map review dataset

Model
- kcbert
- fine-tuned with naver shopping review dataset (200,000 samples)
- trained for 5 epochs
- 0.97 accuracy

Dataset
- google map reviews of tourist places in Daejeon, Korea
###Code
import torch
from torch import nn, Tensor
from torch.optim import Optimizer
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler, DistributedSampler, random_split
from torch.nn import CrossEntropyLoss
from pytorch_lightning import LightningModule, Trainer, seed_everything
from pytorch_lightning.metrics.functional import accuracy, precision, recall
from transformers import AdamW, BertForSequenceClassification, BertConfig, AutoTokenizer, BertTokenizer, TrainingArguments, AutoModelWithLMHead
from keras.preprocessing.sequence import pad_sequences
import random
import numpy as np
import time
import datetime
import pandas as pd
import os
from tqdm import tqdm
if torch.cuda.is_available():
device = torch.device("cuda")
print('There are %d GPU(s) available.' % torch.cuda.device_count())
print('We will use the GPU:', torch.cuda.get_device_name(0))
else:
print('No GPU available, using the CPU instead.')
device = torch.device("cpu")
pj_path = os.getenv('HOME') + '/Projects/JeongCheck'
data_path = pj_path + '/compare'
data_list = os.listdir(data_path)
print(len(data_list))
data_list
file_list = os.listdir(data_path)
file_list
spacing = pd.read_csv(data_path + f'/{file_list[0]}')
spell = pd.read_csv(data_path + f'/{file_list[1]}')
spacing.head()
spell.head()
len(spacing), len(spell)
print(spacing.isna().sum())
print('\n')
print(spell.isna().sum())
print(set(spacing.label))
print(set(spell.label))
print(len(spacing[spacing.label==2]))
print(len(spell[spell.label==2]))
test_spac = spacing.copy()
test_spel = spell.copy()
print(len(test_spac), len(test_spel))
###Output
8421 8420
###Markdown
Exclude neutral data (label == 2)
###Code
test_spac = test_spac[test_spac.label != 2]
print(len(test_spac))
test_spel = test_spel[test_spel.label != 2]
print(len(test_spel))
from transformers import BertForSequenceClassification, AdamW, BertConfig
tokenizer = AutoTokenizer.from_pretrained("beomi/kcbert-base")
# Load BertForSequenceClassification, the pretrained BERT model with a single
# linear classification layer on top.
model = BertForSequenceClassification.from_pretrained(
pj_path + "/bert_model/checkpoint-2000",
num_labels = 2,
output_attentions = False, # Whether the model returns attentions weights.
output_hidden_states = False, # Whether the model returns all hidden-states.
)
params = list(model.named_parameters())
print('The BERT model has {:} different named parameters.\n'.format(len(params)))
print('==== Embedding Layer ====\n')
for p in params[0:5]:
print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))
print('\n==== First Transformer ====\n')
for p in params[5:21]:
print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))
print('\n==== Output Layer ====\n')
for p in params[-4:]:
print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))
def convert_input_data(sentences):
    tokenized_texts = [tokenizer.tokenize(sent) for sent in sentences]
    MAX_LEN = 64
    # Convert tokens to numeric indices
    input_ids = [tokenizer.convert_tokens_to_ids(x) for x in tokenized_texts]
    # Truncate sentences to MAX_LEN and pad the remainder with zeros
    input_ids = pad_sequences(input_ids, maxlen=MAX_LEN, dtype="long", truncating="post", padding="post")
    # Initialize the attention masks
    attention_masks = []
    # Set the attention mask to 1 for real tokens and 0 for padding
    for seq in input_ids:
        seq_mask = [float(i>0) for i in seq]
        attention_masks.append(seq_mask)
    inputs = torch.tensor(input_ids)
    masks = torch.tensor(attention_masks)
    return inputs, masks
def test_sentences(sentences):
    # Switch the model to evaluation mode
    model.eval()
    inputs, masks = convert_input_data(sentences)
    # Move the data to the GPU
    b_input_ids = inputs.to(device)
    b_input_mask = masks.to(device)
    # Disable gradient computation
    with torch.no_grad():
        # Run the forward pass
        outputs = model(b_input_ids,
                        token_type_ids=None,
                        attention_mask=b_input_mask)
    # Extract the logits
    logits = outputs[0]
    # Move the logits back to the CPU
    logits = logits.detach().cpu().numpy()
    return logits
device = "cuda:0"
model = model.to(device)
###Output
_____no_output_____
###Markdown
Data conversion
###Code
def preprocessing(df):
    # Keep only Latin and Korean characters in the comments
    df['document'] = df.comment.str.replace('[^A-Za-zㄱ-ㅎㅏ-ㅣ가-힣]+', '', regex=True)
    return df
# result = preprocessing(gr_data)
# result = result.dropna()
# print(result)
# Extract the comments to run sentiment analysis on
def export_com(preprocessed_df):
    sens = []
    for sen in preprocessed_df.comment:
        sens.append(sen)
    print('check length :', len(sens), len(preprocessed_df))  # sanity-check the counts
    print('sample sentence :', sens[1])
    return sens
def make_predicted_label(sen):
sen = [sen]
score = test_sentences(sen)
result = np.argmax(score)
if result == 0: # negative
return 0
elif result == 1: # positive
return 1
def predict_label(model, df, place_name):
result = preprocessing(df)
result = result.dropna()
sens = export_com(result)
scores_data=[]
for sen in sens:
scores_data.append(make_predicted_label(sen))
df['pred'] = scores_data
cor = df[df.label == df.pred]
uncor = df[df.label != df.pred]
print('correct prediction num :', len(cor))
    print('incorrect prediction num :', len(uncor))
print('correct label check :' ,set(cor.label))
# df.to_csv(pj_path + f'/sentiment_data/{place_name}_pred_kcbert.csv')
return df
print('### spacing ###')
predict_spac = predict_label(model, test_spac, 'total')
print('### spell ###')
predict_spel = predict_label(model, test_spel, 'total')
###Output
### spacing ###
check length : 8389 8389
sample sentence : 코로나 때문에 너무 오랜만에 가본 예술의 전당이였고 공연도 너무 좋았습니다 직원 분들도 너무 친절했구요 있는 공연들 최대한 아이들과 많이 가보려구요 가시는 분들 모두 좋은 시간 보내세요
###Markdown
Loss (RMSE)
###Code
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
import math
def rmse(y, y_pred):
from sklearn.metrics import mean_squared_error
import math
    print('length check (origin, prediction):', len(y), len(y_pred))
rmse_label = math.sqrt(mean_squared_error(y, y_pred))
print('rmse of label :', rmse_label)
###Output
_____no_output_____
###Markdown
Accuracy
###Code
def acc(y, y_pred, total):
correct = (y_pred == y).sum().item()
print(f'Accuracy of the network on the {total} test text: %d %%' % (
100 * correct / total))
###Output
_____no_output_____
###Markdown
f1-score
###Code
from sklearn.metrics import f1_score, classification_report
def f1(y, y_pred):
score = f1_score(y, y_pred)
report = classification_report(y, y_pred)
print('f1 score :', score)
print('===== classification report =====')
print(report)
###Output
_____no_output_____
###Markdown
Calculate performance
- RMSE
- Accuracy
- f1-score
###Code
def cal_perform(df):
y = df.label
y_pred = df.pred
if len(y) == len(y_pred):
total = len(y)
print('label length :', total)
else:
        print('Label and prediction lengths differ!')
rmse(y, y_pred)
acc(y, y_pred, total)
f1(y, y_pred)
print('===== spacing =====')
cal_perform(predict_spac)
print('===== spell =====')
cal_perform(predict_spel)
###Output
===== spacing =====
label length : 8389
length check (origin, prediction): 8389 8389
rmse of label : 0.44763986853418153
Accuracy of the network on the 8389 test text: 79 %
f1 score : 0.872699734948883
===== classification report =====
precision recall f1-score support
0 0.41 0.74 0.53 1274
1 0.95 0.81 0.87 7115
accuracy 0.80 8389
macro avg 0.68 0.78 0.70 8389
weighted avg 0.86 0.80 0.82 8389
===== spell =====
label length : 8386
length check (origin, prediction): 8386 8386
rmse of label : 0.44638623696243956
Accuracy of the network on the 8386 test text: 80 %
f1 score : 0.8733035105011752
===== classification report =====
precision recall f1-score support
0 0.41 0.75 0.53 1276
1 0.95 0.81 0.87 7110
accuracy 0.80 8386
macro avg 0.68 0.78 0.70 8386
weighted avg 0.87 0.80 0.82 8386
|
ReinforcementLearning/DAT257x/library/LabFiles/Module 4/Ex4.2 Policy Evaluation In-Place.ipynb | ###Markdown
DAT257x: Reinforcement Learning Explained

Lab 4: Dynamic Programming

Exercise 4.2: Policy Evaluation using the in-place method

Policy Evaluation calculates the value function for a policy, given the policy and the full definition of the associated Markov Decision Process. The full definition of an MDP is the set of states, the set of available actions for each state, the set of rewards, the discount factor, and the state/reward transition function.
###Code
import test_dp # required for testing and grading your code
import gridworld_mdp as gw # defines the MDP for a 4x4 gridworld
###Output
_____no_output_____
###Markdown
**Implement the algorithm for Iterative Policy Evaluation using the in-place approach**. In the in-place approach, one array holds the values being estimated for each state, and that same array supplies the estimates of any states the algorithm needs. An empty function **policy_eval_in_place** is provided below; implement the body of the function to correctly calculate the value of the policy using the in-place approach. The function defines 5 parameters - a definition of each parameter is given in the comment block for the function. For sample parameter values, see the calling code in the cell following the function.
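Concretely, each sweep applies the Bellman expectation backup in place, so states updated earlier in a sweep are already visible to later updates:

$$V(s) \leftarrow \sum_{a} \pi(a \mid s) \sum_{s',\,r} p(s', r \mid s, a)\left[r + \gamma V(s')\right]$$

and the sweeps stop once $\max_s |V_{\text{old}}(s) - V_{\text{new}}(s)| < \theta$.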
###Code
def policy_eval_in_place(state_count, gamma, theta, get_policy, get_transitions):
    """
    This function uses the in-place approach to evaluate the specified policy for the specified MDP:
    'state_count' is the total number of states in the MDP. States are represented as 0-relative numbers.
    'gamma' is the MDP discount factor for rewards.
    'theta' is the small number threshold to signal convergence of the value function (see Iterative Policy Evaluation algorithm).
    'get_policy' is the stochastic policy function - it takes a state parameter and returns a list of tuples,
        where each tuple is of the form: (action, probability). It represents the policy being evaluated.
    'get_transitions' is the state/reward transition function. It accepts two parameters, state and action, and returns
        a list of tuples, where each tuple is of the form: (next_state, reward, probability).
    """
    V = state_count*[0]
    while True:
        delta = 0
        state = 0
        while state < state_count:
            hist = V[state]  # remember the old value for the convergence test (and for self-transitions)
            V[state] = 0
            for action, action_probability in get_policy(state):
                for next_state, reward, probability in get_transitions(state, action):
                    if next_state == state:
                        # self-transition: use the value from before this update began
                        V[state] += action_probability * probability * (reward + gamma * hist)
                    else:
                        V[state] += action_probability * probability * (reward + gamma * V[next_state])
                    print("nxt_rew: " + str(V[next_state]) + ", nxt: " + str(next_state) + ", state:" + str(state) + ", V: " + str(V[state]))  # debug trace of each backup
            delta = max(delta, abs(hist - V[state]))
            print(str(state) + " " + str(V[state]))
            state += 1
        if delta < theta:
            break
    return V
###Output
_____no_output_____
###Markdown
First, test our function using the MDP defined by gw.* functions.
###Code
def get_equal_policy(state):
# build a simple policy where all 4 actions have the same probability, ignoring the specified state
policy = ( ("up", .25), ("right", .25), ("down", .25), ("left", .25))
return policy
n_states = gw.get_state_count()
# test our function
values = policy_eval_in_place(state_count=n_states, gamma=.9, theta=.001, get_policy=get_equal_policy, \
get_transitions=gw.get_transitions)
print("Values=", values)
###Output
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
0 0.0
nxt_rew: -0.25, nxt: 1, state:1, V: -0.25
nxt_rew: 0, nxt: 2, state:1, V: -0.5
nxt_rew: 0, nxt: 5, state:1, V: -0.75
nxt_rew: 0.0, nxt: 0, state:1, V: -1.0
1 -1.0
nxt_rew: -0.25, nxt: 2, state:2, V: -0.25
nxt_rew: 0, nxt: 3, state:2, V: -0.5
nxt_rew: 0, nxt: 6, state:2, V: -0.75
nxt_rew: -1.0, nxt: 1, state:2, V: -1.225
2 -1.225
nxt_rew: -0.25, nxt: 3, state:3, V: -0.25
nxt_rew: -0.5, nxt: 3, state:3, V: -0.5
nxt_rew: 0, nxt: 7, state:3, V: -0.75
nxt_rew: -1.225, nxt: 2, state:3, V: -1.275625
3 -1.275625
nxt_rew: 0.0, nxt: 0, state:4, V: -0.25
nxt_rew: 0, nxt: 5, state:4, V: -0.5
nxt_rew: 0, nxt: 8, state:4, V: -0.75
nxt_rew: -1.0, nxt: 4, state:4, V: -1.0
4 -1.0
nxt_rew: -1.0, nxt: 1, state:5, V: -0.475
nxt_rew: 0, nxt: 6, state:5, V: -0.725
nxt_rew: 0, nxt: 9, state:5, V: -0.975
nxt_rew: -1.0, nxt: 4, state:5, V: -1.45
5 -1.45
nxt_rew: -1.225, nxt: 2, state:6, V: -0.525625
nxt_rew: 0, nxt: 7, state:6, V: -0.775625
nxt_rew: 0, nxt: 10, state:6, V: -1.025625
nxt_rew: -1.45, nxt: 5, state:6, V: -1.601875
6 -1.601875
nxt_rew: -1.275625, nxt: 3, state:7, V: -0.537015625
nxt_rew: -0.787015625, nxt: 7, state:7, V: -0.787015625
nxt_rew: 0, nxt: 11, state:7, V: -1.037015625
nxt_rew: -1.601875, nxt: 6, state:7, V: -1.6474375
7 -1.6474375
nxt_rew: -1.0, nxt: 4, state:8, V: -0.475
nxt_rew: 0, nxt: 9, state:8, V: -0.725
nxt_rew: 0, nxt: 12, state:8, V: -0.975
nxt_rew: -1.225, nxt: 8, state:8, V: -1.225
8 -1.225
nxt_rew: -1.45, nxt: 5, state:9, V: -0.5762499999999999
nxt_rew: 0, nxt: 10, state:9, V: -0.8262499999999999
nxt_rew: 0, nxt: 13, state:9, V: -1.07625
nxt_rew: -1.225, nxt: 8, state:9, V: -1.601875
9 -1.601875
nxt_rew: -1.601875, nxt: 6, state:10, V: -0.610421875
nxt_rew: 0, nxt: 11, state:10, V: -0.860421875
nxt_rew: 0, nxt: 14, state:10, V: -1.1104218750000001
nxt_rew: -1.601875, nxt: 9, state:10, V: -1.7208437500000002
10 -1.7208437500000002
nxt_rew: -1.6474375, nxt: 7, state:11, V: -0.6206734375
nxt_rew: -0.8706734375, nxt: 11, state:11, V: -0.8706734375
nxt_rew: 0.0, nxt: 0, state:11, V: -1.1206734375
nxt_rew: -1.7208437500000002, nxt: 10, state:11, V: -1.7578632812500001
11 -1.7578632812500001
nxt_rew: -1.225, nxt: 8, state:12, V: -0.525625
nxt_rew: 0, nxt: 13, state:12, V: -0.775625
nxt_rew: -1.025625, nxt: 12, state:12, V: -1.025625
nxt_rew: -1.275625, nxt: 12, state:12, V: -1.275625
12 -1.275625
nxt_rew: -1.601875, nxt: 9, state:13, V: -0.610421875
nxt_rew: 0, nxt: 14, state:13, V: -0.860421875
nxt_rew: -1.1104218750000001, nxt: 13, state:13, V: -1.1104218750000001
nxt_rew: -1.275625, nxt: 12, state:13, V: -1.6474375
13 -1.6474375
nxt_rew: -1.7208437500000002, nxt: 10, state:14, V: -0.6371898437500001
nxt_rew: 0.0, nxt: 0, state:14, V: -0.8871898437500001
nxt_rew: -1.13718984375, nxt: 14, state:14, V: -1.13718984375
nxt_rew: -1.6474375, nxt: 13, state:14, V: -1.7578632812500001
14 -1.7578632812500001
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
0 0.0
nxt_rew: -0.475, nxt: 1, state:1, V: -0.475
nxt_rew: -1.225, nxt: 2, state:1, V: -1.0006249999999999
nxt_rew: -1.45, nxt: 5, state:1, V: -1.5768749999999998
nxt_rew: 0.0, nxt: 0, state:1, V: -1.8268749999999998
1 -1.8268749999999998
nxt_rew: -0.525625, nxt: 2, state:2, V: -0.525625
nxt_rew: -1.275625, nxt: 3, state:2, V: -1.062640625
nxt_rew: -1.601875, nxt: 6, state:2, V: -1.6730625
nxt_rew: -1.8268749999999998, nxt: 1, state:2, V: -2.3341093749999997
2 -2.3341093749999997
nxt_rew: -0.537015625, nxt: 3, state:3, V: -0.537015625
nxt_rew: -1.07403125, nxt: 3, state:3, V: -1.07403125
nxt_rew: -1.6474375, nxt: 7, state:3, V: -1.6947046875
nxt_rew: -2.3341093749999997, nxt: 2, state:3, V: -2.469879296875
3 -2.469879296875
nxt_rew: 0.0, nxt: 0, state:4, V: -0.25
nxt_rew: -1.45, nxt: 5, state:4, V: -0.8262499999999999
nxt_rew: -1.225, nxt: 8, state:4, V: -1.351875
nxt_rew: -1.8268749999999998, nxt: 4, state:4, V: -1.8268749999999998
4 -1.8268749999999998
nxt_rew: -1.8268749999999998, nxt: 1, state:5, V: -0.661046875
nxt_rew: -1.601875, nxt: 6, state:5, V: -1.27146875
nxt_rew: -1.601875, nxt: 9, state:5, V: -1.881890625
nxt_rew: -1.8268749999999998, nxt: 4, state:5, V: -2.5429375
5 -2.5429375
nxt_rew: -2.3341093749999997, nxt: 2, state:6, V: -0.7751746093749999
nxt_rew: -1.6474375, nxt: 7, state:6, V: -1.3958480468749999
nxt_rew: -1.7208437500000002, nxt: 10, state:6, V: -2.0330378906249997
nxt_rew: -2.5429375, nxt: 5, state:6, V: -2.855198828125
6 -2.855198828125
nxt_rew: -2.469879296875, nxt: 3, state:7, V: -0.805722841796875
nxt_rew: -1.426396279296875, nxt: 7, state:7, V: -1.426396279296875
nxt_rew: -1.7578632812500001, nxt: 11, state:7, V: -2.071915517578125
nxt_rew: -2.855198828125, nxt: 6, state:7, V: -2.96433525390625
7 -2.96433525390625
nxt_rew: -1.8268749999999998, nxt: 4, state:8, V: -0.661046875
nxt_rew: -1.601875, nxt: 9, state:8, V: -1.27146875
nxt_rew: -1.275625, nxt: 12, state:8, V: -1.808484375
nxt_rew: -2.3341093749999997, nxt: 8, state:8, V: -2.3341093749999997
8 -2.3341093749999997
nxt_rew: -2.5429375, nxt: 5, state:9, V: -0.8221609375
nxt_rew: -1.7208437500000002, nxt: 10, state:9, V: -1.45935078125
nxt_rew: -1.6474375, nxt: 13, state:9, V: -2.08002421875
nxt_rew: -2.3341093749999997, nxt: 8, state:9, V: -2.8551988281250003
9 -2.8551988281250003
nxt_rew: -2.855198828125, nxt: 6, state:10, V: -0.892419736328125
nxt_rew: -1.7578632812500001, nxt: 11, state:10, V: -1.537938974609375
nxt_rew: -1.7578632812500001, nxt: 14, state:10, V: -2.183458212890625
nxt_rew: -2.8551988281250003, nxt: 9, state:10, V: -3.07587794921875
10 -3.07587794921875
nxt_rew: -2.96433525390625, nxt: 7, state:11, V: -0.9169754321289063
nxt_rew: -1.5624946704101563, nxt: 11, state:11, V: -1.5624946704101563
nxt_rew: 0.0, nxt: 0, state:11, V: -1.8124946704101563
nxt_rew: -3.07587794921875, nxt: 10, state:11, V: -2.7545672089843753
11 -2.7545672089843753
nxt_rew: -2.3341093749999997, nxt: 8, state:12, V: -0.7751746093749999
nxt_rew: -1.6474375, nxt: 13, state:12, V: -1.3958480468749999
nxt_rew: -1.9328636718749999, nxt: 12, state:12, V: -1.9328636718749999
nxt_rew: -2.469879296875, nxt: 12, state:12, V: -2.469879296875
12 -2.469879296875
nxt_rew: -2.8551988281250003, nxt: 9, state:13, V: -0.8924197363281251
nxt_rew: -1.7578632812500001, nxt: 14, state:13, V: -1.537938974609375
nxt_rew: -2.158612412109375, nxt: 13, state:13, V: -2.158612412109375
nxt_rew: -2.469879296875, nxt: 12, state:13, V: -2.96433525390625
13 -2.96433525390625
nxt_rew: -3.07587794921875, nxt: 10, state:14, V: -0.9420725385742188
nxt_rew: 0.0, nxt: 0, state:14, V: -1.1920725385742188
nxt_rew: -1.8375917768554688, nxt: 14, state:14, V: -1.8375917768554688
nxt_rew: -2.96433525390625, nxt: 13, state:14, V: -2.7545672089843753
14 -2.7545672089843753
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
0 0.0
nxt_rew: -0.661046875, nxt: 1, state:1, V: -0.661046875
nxt_rew: -2.3341093749999997, nxt: 2, state:1, V: -1.4362214843749999
nxt_rew: -2.5429375, nxt: 5, state:1, V: -2.258382421875
nxt_rew: 0.0, nxt: 0, state:1, V: -2.508382421875
1 -2.508382421875
nxt_rew: -0.7751746093749999, nxt: 2, state:2, V: -0.7751746093749999
nxt_rew: -2.469879296875, nxt: 3, state:2, V: -1.580897451171875
nxt_rew: -2.855198828125, nxt: 6, state:2, V: -2.4733171875
nxt_rew: -2.508382421875, nxt: 1, state:2, V: -3.287703232421875
2 -3.287703232421875
nxt_rew: -0.805722841796875, nxt: 3, state:3, V: -0.805722841796875
nxt_rew: -1.61144568359375, nxt: 3, state:3, V: -1.61144568359375
nxt_rew: -2.96433525390625, nxt: 7, state:3, V: -2.528421115722656
nxt_rew: -3.287703232421875, nxt: 2, state:3, V: -3.518154343017578
3 -3.518154343017578
nxt_rew: 0.0, nxt: 0, state:4, V: -0.25
nxt_rew: -2.5429375, nxt: 5, state:4, V: -1.0721609375
nxt_rew: -2.3341093749999997, nxt: 8, state:4, V: -1.8473355468750001
nxt_rew: -2.508382421875, nxt: 4, state:4, V: -2.508382421875
4 -2.508382421875
nxt_rew: -2.508382421875, nxt: 1, state:5, V: -0.814386044921875
nxt_rew: -2.855198828125, nxt: 6, state:5, V: -1.70680578125
nxt_rew: -2.8551988281250003, nxt: 9, state:5, V: -2.599225517578125
nxt_rew: -2.508382421875, nxt: 4, state:5, V: -3.4136115625
5 -3.4136115625
nxt_rew: -3.287703232421875, nxt: 2, state:6, V: -0.989733227294922
nxt_rew: -2.96433525390625, nxt: 7, state:6, V: -1.9067086594238283
nxt_rew: -3.07587794921875, nxt: 10, state:6, V: -2.848781197998047
nxt_rew: -3.4136115625, nxt: 5, state:6, V: -3.866843799560547
6 -3.866843799560547
nxt_rew: -3.518154343017578, nxt: 3, state:7, V: -1.0415847271789551
nxt_rew: -1.9585601593078614, nxt: 7, state:7, V: -1.9585601593078614
nxt_rew: -2.7545672089843753, nxt: 11, state:7, V: -2.828337781329346
nxt_rew: -3.866843799560547, nxt: 6, state:7, V: -3.948377636230469
7 -3.948377636230469
nxt_rew: -2.508382421875, nxt: 4, state:8, V: -0.814386044921875
nxt_rew: -2.8551988281250003, nxt: 9, state:8, V: -1.7068057812500002
nxt_rew: -2.469879296875, nxt: 12, state:8, V: -2.512528623046875
nxt_rew: -3.287703232421875, nxt: 8, state:8, V: -3.287703232421875
8 -3.287703232421875
nxt_rew: -3.4136115625, nxt: 5, state:9, V: -1.0180626015624998
nxt_rew: -3.07587794921875, nxt: 10, state:9, V: -1.9601351401367186
nxt_rew: -2.96433525390625, nxt: 13, state:9, V: -2.877110572265625
nxt_rew: -3.287703232421875, nxt: 8, state:9, V: -3.866843799560547
9 -3.866843799560547
nxt_rew: -3.866843799560547, nxt: 6, state:10, V: -1.120039854901123
nxt_rew: -2.7545672089843753, nxt: 11, state:10, V: -1.9898174769226076
nxt_rew: -2.7545672089843753, nxt: 14, state:10, V: -2.859595098944092
nxt_rew: -3.866843799560547, nxt: 9, state:10, V: -3.979634953845215
10 -3.979634953845215
nxt_rew: -3.948377636230469, nxt: 7, state:11, V: -1.1383849681518554
nxt_rew: -2.00816259017334, nxt: 11, state:11, V: -2.00816259017334
nxt_rew: 0.0, nxt: 0, state:11, V: -2.25816259017334
nxt_rew: -3.979634953845215, nxt: 10, state:11, V: -3.4035804547885133
11 -3.4035804547885133
nxt_rew: -3.287703232421875, nxt: 8, state:12, V: -0.989733227294922
nxt_rew: -2.96433525390625, nxt: 13, state:12, V: -1.9067086594238283
nxt_rew: -2.712431501220703, nxt: 12, state:12, V: -2.712431501220703
nxt_rew: -3.5181543430175783, nxt: 12, state:12, V: -3.5181543430175783
12 -3.5181543430175783
nxt_rew: -3.866843799560547, nxt: 9, state:13, V: -1.120039854901123
nxt_rew: -2.7545672089843753, nxt: 14, state:13, V: -1.9898174769226076
nxt_rew: -2.906792909051514, nxt: 13, state:13, V: -2.906792909051514
nxt_rew: -3.5181543430175783, nxt: 12, state:13, V: -3.948377636230469
13 -3.948377636230469
nxt_rew: -3.979634953845215, nxt: 10, state:14, V: -1.1454178646151734
nxt_rew: 0.0, nxt: 0, state:14, V: -1.3954178646151734
nxt_rew: -2.265195486636658, nxt: 14, state:14, V: -2.265195486636658
nxt_rew: -3.948377636230469, nxt: 13, state:14, V: -3.4035804547885133
14 -3.4035804547885133
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
0 0.0
nxt_rew: -0.814386044921875, nxt: 1, state:1, V: -0.814386044921875
nxt_rew: -3.287703232421875, nxt: 2, state:1, V: -1.8041192722167971
nxt_rew: -3.4136115625, nxt: 5, state:1, V: -2.822181873779297
nxt_rew: 0.0, nxt: 0, state:1, V: -3.072181873779297
1 -3.072181873779297
nxt_rew: -0.989733227294922, nxt: 2, state:2, V: -0.989733227294922
nxt_rew: -3.518154343017578, nxt: 3, state:2, V: -2.0313179544738773
nxt_rew: -3.866843799560547, nxt: 6, state:2, V: -3.1513578093750003
nxt_rew: -3.072181873779297, nxt: 1, state:2, V: -4.092598730975342
2 -4.092598730975342
nxt_rew: -1.0415847271789551, nxt: 3, state:3, V: -1.0415847271789551
nxt_rew: -2.0831694543579102, nxt: 3, state:3, V: -2.0831694543579102
nxt_rew: -3.948377636230469, nxt: 7, state:3, V: -3.2215544225097656
nxt_rew: -4.092598730975342, nxt: 2, state:3, V: -4.392389136979218
3 -4.392389136979218
nxt_rew: 0.0, nxt: 0, state:4, V: -0.25
nxt_rew: -3.4136115625, nxt: 5, state:4, V: -1.2680626015624998
nxt_rew: -3.287703232421875, nxt: 8, state:4, V: -2.2577958288574216
nxt_rew: -3.0721818737792965, nxt: 4, state:4, V: -3.0721818737792965
4 -3.0721818737792965
nxt_rew: -3.072181873779297, nxt: 1, state:5, V: -0.9412409216003418
nxt_rew: -3.866843799560547, nxt: 6, state:5, V: -2.061280776501465
nxt_rew: -3.866843799560547, nxt: 9, state:5, V: -3.181320631402588
nxt_rew: -3.0721818737792965, nxt: 4, state:5, V: -4.12256155300293
5 -4.12256155300293
nxt_rew: -4.092598730975342, nxt: 2, state:6, V: -1.170834714469452
nxt_rew: -3.948377636230469, nxt: 7, state:6, V: -2.3092196826213076
nxt_rew: -3.979634953845215, nxt: 10, state:6, V: -3.454637547236481
nxt_rew: -4.12256155300293, nxt: 5, state:6, V: -4.63221389666214
6 -4.63221389666214
nxt_rew: -4.392389136979218, nxt: 3, state:7, V: -1.2382875558203241
nxt_rew: -2.3766725239721795, nxt: 7, state:7, V: -2.3766725239721795
nxt_rew: -3.4035804547885133, nxt: 11, state:7, V: -3.3924781262995953
nxt_rew: -4.63221389666214, nxt: 6, state:7, V: -4.684726253048577
7 -4.684726253048577
nxt_rew: -3.0721818737792965, nxt: 4, state:8, V: -0.9412409216003418
nxt_rew: -3.866843799560547, nxt: 9, state:8, V: -2.061280776501465
nxt_rew: -3.5181543430175783, nxt: 12, state:8, V: -3.10286550368042
nxt_rew: -4.092598730975342, nxt: 8, state:8, V: -4.092598730975342
8 -4.092598730975342
nxt_rew: -4.12256155300293, nxt: 5, state:9, V: -1.1775763494256593
nxt_rew: -3.979634953845215, nxt: 10, state:9, V: -2.3229942140408326
nxt_rew: -3.948377636230469, nxt: 13, state:9, V: -3.461379182192688
nxt_rew: -4.092598730975342, nxt: 8, state:9, V: -4.63221389666214
9 -4.63221389666214
nxt_rew: -4.63221389666214, nxt: 6, state:10, V: -1.2922481267489816
nxt_rew: -3.4035804547885133, nxt: 11, state:10, V: -2.308053729076397
nxt_rew: -3.4035804547885133, nxt: 14, state:10, V: -3.3238593314038125
nxt_rew: -4.63221389666214, nxt: 9, state:10, V: -4.616107458152794
10 -4.616107458152794
nxt_rew: -4.684726253048577, nxt: 7, state:11, V: -1.3040634069359298
nxt_rew: -2.3198690092633454, nxt: 11, state:11, V: -2.3198690092633454
nxt_rew: 0.0, nxt: 0, state:11, V: -2.5698690092633454
nxt_rew: -4.616107458152794, nxt: 10, state:11, V: -3.858493187347724
11 -3.858493187347724
nxt_rew: -4.092598730975342, nxt: 8, state:12, V: -1.170834714469452
nxt_rew: -3.948377636230469, nxt: 13, state:12, V: -2.3092196826213076
nxt_rew: -3.3508044098002627, nxt: 12, state:12, V: -3.3508044098002627
nxt_rew: -4.392389136979218, nxt: 12, state:12, V: -4.392389136979218
12 -4.392389136979218
nxt_rew: -4.63221389666214, nxt: 9, state:13, V: -1.2922481267489816
nxt_rew: -3.4035804547885133, nxt: 14, state:13, V: -2.308053729076397
nxt_rew: -3.4464386972282526, nxt: 13, state:13, V: -3.4464386972282526
nxt_rew: -4.392389136979218, nxt: 12, state:13, V: -4.684726253048577
13 -4.684726253048577
nxt_rew: -4.616107458152794, nxt: 10, state:14, V: -1.2886241780843788
nxt_rew: 0.0, nxt: 0, state:14, V: -1.5386241780843788
nxt_rew: -2.5544297804117946, nxt: 14, state:14, V: -2.5544297804117946
nxt_rew: -4.684726253048577, nxt: 13, state:14, V: -3.858493187347724
14 -3.858493187347724
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
0 0.0
nxt_rew: -0.9412409216003418, nxt: 1, state:1, V: -0.9412409216003418
nxt_rew: -4.092598730975342, nxt: 2, state:1, V: -2.1120756360697936
nxt_rew: -4.12256155300293, nxt: 5, state:1, V: -3.289651985495453
nxt_rew: 0.0, nxt: 0, state:1, V: -3.539651985495453
1 -3.539651985495453
nxt_rew: -1.170834714469452, nxt: 2, state:2, V: -1.170834714469452
nxt_rew: -4.392389136979218, nxt: 3, state:2, V: -2.4091222702897763
nxt_rew: -4.63221389666214, nxt: 6, state:2, V: -3.701370397038758
nxt_rew: -3.539651985495453, nxt: 1, state:2, V: -4.747792093775235
2 -4.747792093775235
nxt_rew: -1.2382875558203241, nxt: 3, state:3, V: -1.2382875558203241
nxt_rew: -2.4765751116406483, nxt: 3, state:3, V: -2.4765751116406483
nxt_rew: -4.684726253048577, nxt: 7, state:3, V: -3.780638518576578
nxt_rew: -4.747792093775235, nxt: 2, state:3, V: -5.098891739676006
3 -5.098891739676006
nxt_rew: 0.0, nxt: 0, state:4, V: -0.25
nxt_rew: -4.12256155300293, nxt: 5, state:4, V: -1.4275763494256593
nxt_rew: -4.092598730975342, nxt: 8, state:4, V: -2.598411063895111
nxt_rew: -3.539651985495453, nxt: 4, state:4, V: -3.539651985495453
4 -3.539651985495453
nxt_rew: -3.539651985495453, nxt: 1, state:5, V: -1.046421696736477
nxt_rew: -4.63221389666214, nxt: 6, state:5, V: -2.338669823485459
nxt_rew: -4.63221389666214, nxt: 9, state:5, V: -3.6309179502344406
nxt_rew: -3.539651985495453, nxt: 4, state:5, V: -4.677339646970918
5 -4.677339646970918
nxt_rew: -4.747792093775235, nxt: 2, state:6, V: -1.318253221099428
nxt_rew: -4.684726253048577, nxt: 7, state:6, V: -2.6223166280353576
nxt_rew: -4.616107458152794, nxt: 10, state:6, V: -3.9109408061197364
nxt_rew: -4.677339646970918, nxt: 5, state:6, V: -5.213342226688193
6 -5.213342226688193
nxt_rew: -5.098891739676006, nxt: 3, state:7, V: -1.3972506414271013
nxt_rew: -2.701314048363031, nxt: 7, state:7, V: -2.701314048363031
nxt_rew: -3.858493187347724, nxt: 11, state:7, V: -3.819475015516269
nxt_rew: -5.213342226688193, nxt: 6, state:7, V: -5.2424770165211125
7 -5.2424770165211125
nxt_rew: -3.539651985495453, nxt: 4, state:8, V: -1.046421696736477
nxt_rew: -4.63221389666214, nxt: 9, state:8, V: -2.338669823485459
nxt_rew: -4.392389136979218, nxt: 12, state:8, V: -3.576957379305783
nxt_rew: -4.747792093775235, nxt: 8, state:8, V: -4.747792093775235
8 -4.747792093775235
nxt_rew: -4.677339646970918, nxt: 5, state:9, V: -1.3024014205684564
nxt_rew: -4.616107458152794, nxt: 10, state:9, V: -2.591025598652835
nxt_rew: -4.684726253048577, nxt: 13, state:9, V: -3.8950890055887646
nxt_rew: -4.747792093775235, nxt: 8, state:9, V: -5.213342226688193
9 -5.213342226688193
nxt_rew: -5.213342226688193, nxt: 6, state:10, V: -1.4230020010048434
nxt_rew: -3.858493187347724, nxt: 11, state:10, V: -2.5411629681580816
nxt_rew: -3.858493187347724, nxt: 14, state:10, V: -3.6593239353113196
nxt_rew: -5.213342226688193, nxt: 9, state:10, V: -5.082325936316163
10 -5.082325936316163
nxt_rew: -5.2424770165211125, nxt: 7, state:11, V: -1.4295573287172503
nxt_rew: -2.5477182958704883, nxt: 11, state:11, V: -2.5477182958704883
nxt_rew: 0.0, nxt: 0, state:11, V: -2.7977182958704883
nxt_rew: -5.082325936316163, nxt: 10, state:11, V: -4.191241631541625
11 -4.191241631541625
nxt_rew: -4.747792093775235, nxt: 8, state:12, V: -1.318253221099428
nxt_rew: -4.684726253048577, nxt: 13, state:12, V: -2.6223166280353576
nxt_rew: -3.8606041838556817, nxt: 12, state:12, V: -3.8606041838556817
nxt_rew: -5.098891739676006, nxt: 12, state:12, V: -5.098891739676006
12 -5.098891739676006
nxt_rew: -5.213342226688193, nxt: 9, state:13, V: -1.4230020010048434
nxt_rew: -3.858493187347724, nxt: 14, state:13, V: -2.5411629681580816
nxt_rew: -3.845226375094011, nxt: 13, state:13, V: -3.845226375094011
nxt_rew: -5.098891739676006, nxt: 12, state:13, V: -5.2424770165211125
13 -5.2424770165211125
nxt_rew: -5.082325936316163, nxt: 10, state:14, V: -1.3935233356711367
nxt_rew: 0.0, nxt: 0, state:14, V: -1.6435233356711367
nxt_rew: -2.7616843028243747, nxt: 14, state:14, V: -2.7616843028243747
nxt_rew: -5.2424770165211125, nxt: 13, state:14, V: -4.191241631541625
14 -4.191241631541625
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
0 0.0
nxt_rew: -1.046421696736477, nxt: 1, state:1, V: -1.046421696736477
nxt_rew: -4.747792093775235, nxt: 2, state:1, V: -2.364674917835905
nxt_rew: -4.677339646970918, nxt: 5, state:1, V: -3.667076338404361
nxt_rew: 0.0, nxt: 0, state:1, V: -3.917076338404361
1 -3.917076338404361
nxt_rew: -1.318253221099428, nxt: 2, state:2, V: -1.318253221099428
nxt_rew: -5.098891739676006, nxt: 3, state:2, V: -2.7155038625265293
nxt_rew: -5.213342226688193, nxt: 6, state:2, V: -4.138505863531373
nxt_rew: -3.917076338404361, nxt: 1, state:2, V: -5.269848039672354
2 -5.269848039672354
nxt_rew: -1.3972506414271013, nxt: 3, state:3, V: -1.3972506414271013
nxt_rew: -2.7945012828542026, nxt: 3, state:3, V: -2.7945012828542026
nxt_rew: -5.2424770165211125, nxt: 7, state:3, V: -4.2240586115714525
nxt_rew: -5.269848039672354, nxt: 2, state:3, V: -5.659774420497732
3 -5.659774420497732
nxt_rew: 0.0, nxt: 0, state:4, V: -0.25
nxt_rew: -4.677339646970918, nxt: 5, state:4, V: -1.5524014205684564
nxt_rew: -4.747792093775235, nxt: 8, state:4, V: -2.870654641667884
nxt_rew: -3.917076338404361, nxt: 4, state:4, V: -3.917076338404361
4 -3.917076338404361
nxt_rew: -3.917076338404361, nxt: 1, state:5, V: -1.1313421761409814
nxt_rew: -5.213342226688193, nxt: 6, state:5, V: -2.5543441771458246
nxt_rew: -5.213342226688193, nxt: 9, state:5, V: -3.9773461781506683
nxt_rew: -3.917076338404361, nxt: 4, state:5, V: -5.108688354291649
5 -5.108688354291649
nxt_rew: -5.269848039672354, nxt: 2, state:6, V: -1.4357158089262796
nxt_rew: -5.2424770165211125, nxt: 7, state:6, V: -2.8652731376435296
nxt_rew: -5.082325936316163, nxt: 10, state:6, V: -4.258796473314666
nxt_rew: -5.108688354291649, nxt: 5, state:6, V: -5.658251353030288
6 -5.658251353030288
nxt_rew: -5.659774420497732, nxt: 3, state:7, V: -1.5234492446119898
nxt_rew: -2.9530065733292403, nxt: 7, state:7, V: -2.9530065733292403
nxt_rew: -4.191241631541625, nxt: 11, state:7, V: -4.146035940426106
nxt_rew: -5.658251353030288, nxt: 6, state:7, V: -5.669142494857921
7 -5.669142494857921
nxt_rew: -3.917076338404361, nxt: 4, state:8, V: -1.1313421761409814
nxt_rew: -5.213342226688193, nxt: 9, state:8, V: -2.5543441771458246
nxt_rew: -5.098891739676006, nxt: 12, state:8, V: -3.951594818572926
nxt_rew: -5.269848039672354, nxt: 8, state:8, V: -5.269848039672354
8 -5.269848039672354
nxt_rew: -5.108688354291649, nxt: 5, state:9, V: -1.3994548797156212
nxt_rew: -5.082325936316163, nxt: 10, state:9, V: -2.792978215386758
nxt_rew: -5.2424770165211125, nxt: 13, state:9, V: -4.222535544104009
nxt_rew: -5.269848039672354, nxt: 8, state:9, V: -5.6582513530302885
9 -5.6582513530302885
nxt_rew: -5.658251353030288, nxt: 6, state:10, V: -1.5231065544318148
nxt_rew: -4.191241631541625, nxt: 11, state:10, V: -2.7161359215286804
nxt_rew: -4.191241631541625, nxt: 14, state:10, V: -3.9091652886255464
nxt_rew: -5.6582513530302885, nxt: 9, state:10, V: -5.432271843057361
10 -5.432271843057361
nxt_rew: -5.669142494857921, nxt: 7, state:11, V: -1.5255570613430323
nxt_rew: -2.718586428439898, nxt: 11, state:11, V: -2.718586428439898
nxt_rew: 0.0, nxt: 0, state:11, V: -2.968586428439898
nxt_rew: -5.432271843057361, nxt: 10, state:11, V: -4.440847593127804
11 -4.440847593127804
nxt_rew: -5.269848039672354, nxt: 8, state:12, V: -1.4357158089262796
nxt_rew: -5.2424770165211125, nxt: 13, state:12, V: -2.8652731376435296
nxt_rew: -4.262523779070631, nxt: 12, state:12, V: -4.262523779070631
nxt_rew: -5.659774420497732, nxt: 12, state:12, V: -5.659774420497732
12 -5.659774420497732
nxt_rew: -5.6582513530302885, nxt: 9, state:13, V: -1.523106554431815
nxt_rew: -4.191241631541625, nxt: 14, state:13, V: -2.7161359215286804
nxt_rew: -4.14569325024593, nxt: 13, state:13, V: -4.14569325024593
nxt_rew: -5.659774420497732, nxt: 12, state:13, V: -5.66914249485792
13 -5.66914249485792
nxt_rew: -5.432271843057361, nxt: 10, state:14, V: -1.4722611646879062
nxt_rew: 0.0, nxt: 0, state:14, V: -1.7222611646879062
nxt_rew: -2.915290531784772, nxt: 14, state:14, V: -2.915290531784772
nxt_rew: -5.66914249485792, nxt: 13, state:14, V: -4.440847593127804
14 -4.440847593127804
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
0 0.0
nxt_rew: -1.1313421761409814, nxt: 1, state:1, V: -1.1313421761409814
nxt_rew: -5.269848039672354, nxt: 2, state:1, V: -2.567057985067261
nxt_rew: -5.108688354291649, nxt: 5, state:1, V: -3.966512864782882
nxt_rew: 0.0, nxt: 0, state:1, V: -4.216512864782882
1 -4.216512864782882
nxt_rew: -1.4357158089262796, nxt: 2, state:2, V: -1.4357158089262796
nxt_rew: -5.659774420497732, nxt: 3, state:2, V: -2.9591650535382694
nxt_rew: -5.658251353030288, nxt: 6, state:2, V: -4.482271607970084
nxt_rew: -4.216512864782882, nxt: 1, state:2, V: -5.680987002546233
2 -5.680987002546233
nxt_rew: -1.5234492446119898, nxt: 3, state:3, V: -1.5234492446119898
nxt_rew: -3.0468984892239797, nxt: 3, state:3, V: -3.0468984892239797
nxt_rew: -5.669142494857921, nxt: 7, state:3, V: -4.572455550567012
nxt_rew: -5.680987002546233, nxt: 2, state:3, V: -6.100677626139914
3 -6.100677626139914
nxt_rew: 0.0, nxt: 0, state:4, V: -0.25
nxt_rew: -5.108688354291649, nxt: 5, state:4, V: -1.6494548797156212
nxt_rew: -5.269848039672354, nxt: 8, state:4, V: -3.0851706886419006
nxt_rew: -4.216512864782882, nxt: 4, state:4, V: -4.216512864782882
4 -4.216512864782882
nxt_rew: -4.216512864782882, nxt: 1, state:5, V: -1.1987153945761484
nxt_rew: -5.658251353030288, nxt: 6, state:5, V: -2.721821949007963
nxt_rew: -5.6582513530302885, nxt: 9, state:5, V: -4.244928503439779
nxt_rew: -4.216512864782882, nxt: 4, state:5, V: -5.443643898015927
5 -5.443643898015927
nxt_rew: -5.680987002546233, nxt: 2, state:6, V: -1.5282220755729023
nxt_rew: -5.669142494857921, nxt: 7, state:6, V: -3.0537791369159346
nxt_rew: -5.432271843057361, nxt: 10, state:6, V: -4.526040301603841
nxt_rew: -5.443643898015927, nxt: 5, state:6, V: -6.000860178657424
6 -6.000860178657424
nxt_rew: -6.100677626139914, nxt: 3, state:7, V: -1.6226524658814807
nxt_rew: -3.148209527224513, nxt: 7, state:7, V: -3.148209527224513
nxt_rew: -4.440847593127804, nxt: 11, state:7, V: -4.397400235678269
nxt_rew: -6.000860178657424, nxt: 6, state:7, V: -5.99759377587619
7 -5.99759377587619
nxt_rew: -4.216512864782882, nxt: 4, state:8, V: -1.1987153945761484
nxt_rew: -5.6582513530302885, nxt: 9, state:8, V: -2.721821949007963
nxt_rew: -5.659774420497732, nxt: 12, state:8, V: -4.245271193619953
nxt_rew: -5.680987002546233, nxt: 8, state:8, V: -5.680987002546233
8 -5.680987002546233
nxt_rew: -5.443643898015927, nxt: 5, state:9, V: -1.4748198770535836
nxt_rew: -5.432271843057361, nxt: 10, state:9, V: -2.94708104174149
nxt_rew: -5.66914249485792, nxt: 13, state:9, V: -4.472638103084522
nxt_rew: -5.680987002546233, nxt: 8, state:9, V: -6.000860178657424
9 -6.000860178657424
nxt_rew: -6.000860178657424, nxt: 6, state:10, V: -1.6001935401979206
nxt_rew: -4.440847593127804, nxt: 11, state:10, V: -2.8493842486516767
nxt_rew: -4.440847593127804, nxt: 14, state:10, V: -4.098574957105432
nxt_rew: -6.000860178657424, nxt: 9, state:10, V: -5.698768497303353
10 -5.698768497303353
nxt_rew: -5.99759377587619, nxt: 7, state:11, V: -1.5994585995721429
nxt_rew: -2.848649308025899, nxt: 11, state:11, V: -2.848649308025899
nxt_rew: 0.0, nxt: 0, state:11, V: -3.098649308025899
nxt_rew: -5.698768497303353, nxt: 10, state:11, V: -4.630872219919153
11 -4.630872219919153
nxt_rew: -5.680987002546233, nxt: 8, state:12, V: -1.5282220755729023
nxt_rew: -5.66914249485792, nxt: 13, state:12, V: -3.053779136915934
nxt_rew: -4.577228381527924, nxt: 12, state:12, V: -4.577228381527924
nxt_rew: -6.100677626139913, nxt: 12, state:12, V: -6.100677626139913
12 -6.100677626139913
nxt_rew: -6.000860178657424, nxt: 9, state:13, V: -1.6001935401979206
nxt_rew: -4.440847593127804, nxt: 14, state:13, V: -2.8493842486516767
nxt_rew: -4.374941309994709, nxt: 13, state:13, V: -4.374941309994709
nxt_rew: -6.100677626139913, nxt: 12, state:13, V: -5.997593775876189
13 -5.997593775876189
nxt_rew: -5.698768497303353, nxt: 10, state:14, V: -1.5322229118932547
nxt_rew: 0.0, nxt: 0, state:14, V: -1.7822229118932547
nxt_rew: -3.031413620347011, nxt: 14, state:14, V: -3.031413620347011
nxt_rew: -5.997593775876189, nxt: 13, state:14, V: -4.630872219919153
14 -4.630872219919153
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
0 0.0
nxt_rew: -1.1987153945761484, nxt: 1, state:1, V: -1.1987153945761484
nxt_rew: -5.680987002546233, nxt: 2, state:1, V: -2.7269374701490507
nxt_rew: -5.443643898015927, nxt: 5, state:1, V: -4.201757347202634
nxt_rew: 0.0, nxt: 0, state:1, V: -4.451757347202634
1 -4.451757347202634
nxt_rew: -1.5282220755729023, nxt: 2, state:2, V: -1.5282220755729023
nxt_rew: -6.100677626139914, nxt: 3, state:2, V: -3.150874541454383
nxt_rew: -6.000860178657424, nxt: 6, state:2, V: -4.751068081652304
nxt_rew: -4.451757347202634, nxt: 1, state:2, V: -6.0027134847728965
2 -6.0027134847728965
nxt_rew: -1.6226524658814807, nxt: 3, state:3, V: -1.6226524658814807
nxt_rew: -3.2453049317629614, nxt: 3, state:3, V: -3.2453049317629614
nxt_rew: -5.99759377587619, nxt: 7, state:3, V: -4.844763531335104
nxt_rew: -6.0027134847728965, nxt: 2, state:3, V: -6.4453740654090055
3 -6.4453740654090055
nxt_rew: 0.0, nxt: 0, state:4, V: -0.25
nxt_rew: -5.443643898015927, nxt: 5, state:4, V: -1.7248198770535836
nxt_rew: -5.680987002546233, nxt: 8, state:4, V: -3.2530419526264858
nxt_rew: -4.451757347202634, nxt: 4, state:4, V: -4.451757347202634
4 -4.451757347202634
nxt_rew: -4.451757347202634, nxt: 1, state:5, V: -1.2516454031205928
nxt_rew: -6.000860178657424, nxt: 6, state:5, V: -2.8518389433185134
nxt_rew: -6.000860178657424, nxt: 9, state:5, V: -4.452032483516434
nxt_rew: -4.451757347202634, nxt: 4, state:5, V: -5.703677886637027
5 -5.703677886637027
nxt_rew: -6.0027134847728965, nxt: 2, state:6, V: -1.6006105340739016
nxt_rew: -5.99759377587619, nxt: 7, state:6, V: -3.2000691336460445
nxt_rew: -5.698768497303353, nxt: 10, state:6, V: -4.7322920455393
nxt_rew: -5.703677886637027, nxt: 5, state:6, V: -6.265619570032631
6 -6.265619570032631
nxt_rew: -6.4453740654090055, nxt: 3, state:7, V: -1.7002091647170263
nxt_rew: -3.299667764289169, nxt: 7, state:7, V: -3.299667764289169
nxt_rew: -4.630872219919153, nxt: 11, state:7, V: -4.591614013770979
nxt_rew: -6.265619570032631, nxt: 6, state:7, V: -6.251378417028321
7 -6.251378417028321
nxt_rew: -4.451757347202634, nxt: 4, state:8, V: -1.2516454031205928
nxt_rew: -6.000860178657424, nxt: 9, state:8, V: -2.8518389433185134
nxt_rew: -6.100677626139913, nxt: 12, state:8, V: -4.474491409199993
nxt_rew: -6.002713484772896, nxt: 8, state:8, V: -6.002713484772896
8 -6.002713484772896
nxt_rew: -5.703677886637027, nxt: 5, state:9, V: -1.533327524493331
nxt_rew: -5.698768497303353, nxt: 10, state:9, V: -3.0655504363865855
nxt_rew: -5.997593775876189, nxt: 13, state:9, V: -4.665009035958728
nxt_rew: -6.002713484772896, nxt: 8, state:9, V: -6.26561957003263
9 -6.26561957003263
nxt_rew: -6.265619570032631, nxt: 6, state:10, V: -1.659764403257342
nxt_rew: -4.630872219919153, nxt: 11, state:10, V: -2.9517106527391515
nxt_rew: -4.630872219919153, nxt: 14, state:10, V: -4.243656902220961
nxt_rew: -6.26561957003263, nxt: 9, state:10, V: -5.903421305478302
10 -5.903421305478302
nxt_rew: -6.251378417028321, nxt: 7, state:11, V: -1.6565601438313722
nxt_rew: -2.9485063933131817, nxt: 11, state:11, V: -2.9485063933131817
nxt_rew: 0.0, nxt: 0, state:11, V: -3.1985063933131817
nxt_rew: -5.903421305478302, nxt: 10, state:11, V: -4.776776187045799
11 -4.776776187045799
nxt_rew: -6.002713484772896, nxt: 8, state:12, V: -1.6006105340739016
nxt_rew: -5.997593775876189, nxt: 13, state:12, V: -3.2000691336460445
nxt_rew: -4.822721599527525, nxt: 12, state:12, V: -4.822721599527525
nxt_rew: -6.4453740654090055, nxt: 12, state:12, V: -6.4453740654090055
12 -6.4453740654090055
nxt_rew: -6.26561957003263, nxt: 9, state:13, V: -1.6597644032573418
nxt_rew: -4.630872219919153, nxt: 14, state:13, V: -2.951710652739151
nxt_rew: -4.551169252311293, nxt: 13, state:13, V: -4.551169252311293
nxt_rew: -6.4453740654090055, nxt: 12, state:13, V: -6.25137841702832
13 -6.25137841702832
nxt_rew: -5.903421305478302, nxt: 10, state:14, V: -1.578269793732618
nxt_rew: 0.0, nxt: 0, state:14, V: -1.828269793732618
nxt_rew: -3.1202160432144277, nxt: 14, state:14, V: -3.1202160432144277
nxt_rew: -6.25137841702832, nxt: 13, state:14, V: -4.7767761870458
14 -4.7767761870458
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
0 0.0
nxt_rew: -1.2516454031205928, nxt: 1, state:1, V: -1.2516454031205928
nxt_rew: -6.0027134847728965, nxt: 2, state:1, V: -2.8522559371944944
nxt_rew: -5.703677886637027, nxt: 5, state:1, V: -4.385583461687825
nxt_rew: 0.0, nxt: 0, state:1, V: -4.635583461687825
1 -4.635583461687825
nxt_rew: -1.6006105340739016, nxt: 2, state:2, V: -1.6006105340739016
nxt_rew: -6.4453740654090055, nxt: 3, state:2, V: -3.300819698790928
nxt_rew: -6.265619570032631, nxt: 6, state:2, V: -4.96058410204827
nxt_rew: -4.635583461687825, nxt: 1, state:2, V: -6.253590380928031
2 -6.253590380928031
nxt_rew: -1.7002091647170263, nxt: 3, state:3, V: -1.7002091647170263
nxt_rew: -3.4004183294340526, nxt: 3, state:3, V: -3.4004183294340526
nxt_rew: -6.251378417028321, nxt: 7, state:3, V: -5.056978473265425
nxt_rew: -6.253590380928031, nxt: 2, state:3, V: -6.714036308974232
3 -6.714036308974232
nxt_rew: 0.0, nxt: 0, state:4, V: -0.25
nxt_rew: -5.703677886637027, nxt: 5, state:4, V: -1.783327524493331
nxt_rew: -6.002713484772896, nxt: 8, state:4, V: -3.3839380585672325
nxt_rew: -4.635583461687825, nxt: 4, state:4, V: -4.635583461687825
4 -4.635583461687825
nxt_rew: -4.635583461687825, nxt: 1, state:5, V: -1.2930062788797607
nxt_rew: -6.265619570032631, nxt: 6, state:5, V: -2.9527706821371025
nxt_rew: -6.26561957003263, nxt: 9, state:5, V: -4.612535085394445
nxt_rew: -4.635583461687825, nxt: 4, state:5, V: -5.905541364274205
5 -5.905541364274205
nxt_rew: -6.253590380928031, nxt: 2, state:6, V: -1.657057835708807
nxt_rew: -6.251378417028321, nxt: 7, state:6, V: -3.3136179795401794
nxt_rew: -5.903421305478302, nxt: 10, state:6, V: -4.8918877732727974
nxt_rew: -5.905541364274205, nxt: 5, state:6, V: -6.470634580234494
6 -6.470634580234494
nxt_rew: -6.714036308974232, nxt: 3, state:7, V: -1.7606581695192023
nxt_rew: -3.4172183133505745, nxt: 7, state:7, V: -3.4172183133505745
nxt_rew: -4.776776187045799, nxt: 11, state:7, V: -4.7419929554358795
nxt_rew: -6.470634580234494, nxt: 6, state:7, V: -6.44788573598864
7 -6.44788573598864
nxt_rew: -4.635583461687825, nxt: 4, state:8, V: -1.2930062788797607
nxt_rew: -6.26561957003263, nxt: 9, state:8, V: -2.9527706821371025
nxt_rew: -6.4453740654090055, nxt: 12, state:8, V: -4.652979846854128
nxt_rew: -6.2535903809280295, nxt: 8, state:8, V: -6.2535903809280295
8 -6.2535903809280295
nxt_rew: -5.905541364274205, nxt: 5, state:9, V: -1.5787468069616961
nxt_rew: -5.903421305478302, nxt: 10, state:9, V: -3.1570166006943143
nxt_rew: -6.25137841702832, nxt: 13, state:9, V: -4.813576744525687
nxt_rew: -6.2535903809280295, nxt: 8, state:9, V: -6.470634580234494
9 -6.470634580234494
nxt_rew: -6.470634580234494, nxt: 6, state:10, V: -1.705892780552761
nxt_rew: -4.776776187045799, nxt: 11, state:10, V: -3.030667422638066
nxt_rew: -4.7767761870458, nxt: 14, state:10, V: -4.355442064723372
nxt_rew: -6.470634580234494, nxt: 9, state:10, V: -6.061334845276132
10 -6.061334845276132
nxt_rew: -6.44788573598864, nxt: 7, state:11, V: -1.7007742905974441
nxt_rew: -3.025548932682749, nxt: 11, state:11, V: -3.025548932682749
nxt_rew: 0.0, nxt: 0, state:11, V: -3.275548932682749
nxt_rew: -6.061334845276132, nxt: 10, state:11, V: -4.889349272869879
11 -4.889349272869879
nxt_rew: -6.2535903809280295, nxt: 8, state:12, V: -1.6570578357088066
nxt_rew: -6.25137841702832, nxt: 13, state:12, V: -3.3136179795401786
nxt_rew: -5.013827144257204, nxt: 12, state:12, V: -5.013827144257204
nxt_rew: -6.714036308974231, nxt: 12, state:12, V: -6.714036308974231
12 -6.714036308974231
nxt_rew: -6.470634580234494, nxt: 9, state:13, V: -1.705892780552761
nxt_rew: -4.7767761870458, nxt: 14, state:13, V: -3.030667422638066
nxt_rew: -4.6872275664694385, nxt: 13, state:13, V: -4.6872275664694385
nxt_rew: -6.714036308974231, nxt: 12, state:13, V: -6.44788573598864
13 -6.44788573598864
nxt_rew: -6.061334845276132, nxt: 10, state:14, V: -1.6138003401871297
nxt_rew: 0.0, nxt: 0, state:14, V: -1.8638003401871297
nxt_rew: -3.188574982272435, nxt: 14, state:14, V: -3.188574982272435
nxt_rew: -6.44788573598864, nxt: 13, state:14, V: -4.889349272869879
14 -4.889349272869879
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
0 0.0
nxt_rew: -1.2930062788797607, nxt: 1, state:1, V: -1.2930062788797607
nxt_rew: -6.253590380928031, nxt: 2, state:1, V: -2.9500641145885678
nxt_rew: -5.905541364274205, nxt: 5, state:1, V: -4.528810921550264
nxt_rew: 0.0, nxt: 0, state:1, V: -4.778810921550264
1 -4.778810921550264
nxt_rew: -1.657057835708807, nxt: 2, state:2, V: -1.657057835708807
nxt_rew: -6.714036308974232, nxt: 3, state:2, V: -3.4177160052280096
nxt_rew: -6.470634580234494, nxt: 6, state:2, V: -5.123608785780771
nxt_rew: -4.778810921550264, nxt: 1, state:2, V: -6.44884124312958
2 -6.44884124312958
nxt_rew: -1.7606581695192023, nxt: 3, state:3, V: -1.7606581695192023
nxt_rew: -3.5213163390384046, nxt: 3, state:3, V: -3.5213163390384046
nxt_rew: -6.44788573598864, nxt: 7, state:3, V: -5.222090629635849
nxt_rew: -6.44884124312958, nxt: 2, state:3, V: -6.923079909340005
3 -6.923079909340005
nxt_rew: 0.0, nxt: 0, state:4, V: -0.25
nxt_rew: -5.905541364274205, nxt: 5, state:4, V: -1.8287468069616961
nxt_rew: -6.2535903809280295, nxt: 8, state:4, V: -3.4858046426705025
nxt_rew: -4.778810921550264, nxt: 4, state:4, V: -4.778810921550264
4 -4.778810921550264
nxt_rew: -4.778810921550264, nxt: 1, state:5, V: -1.3252324573488095
nxt_rew: -6.470634580234494, nxt: 6, state:5, V: -3.0311252379015707
nxt_rew: -6.470634580234494, nxt: 9, state:5, V: -4.737018018454332
nxt_rew: -4.778810921550264, nxt: 4, state:5, V: -6.0622504758031415
5 -6.0622504758031415
nxt_rew: -6.44884124312958, nxt: 2, state:6, V: -1.7009892797041557
nxt_rew: -6.44788573598864, nxt: 7, state:6, V: -3.4017635703015996
nxt_rew: -6.061334845276132, nxt: 10, state:6, V: -5.015563910488729
nxt_rew: -6.0622504758031415, nxt: 5, state:6, V: -6.629570267544436
6 -6.629570267544436
nxt_rew: -6.923079909340005, nxt: 3, state:7, V: -1.807692979601501
nxt_rew: -3.5084672701989454, nxt: 7, state:7, V: -3.5084672701989454
nxt_rew: -4.889349272869879, nxt: 11, state:7, V: -4.858570856594668
nxt_rew: -6.629570267544436, nxt: 6, state:7, V: -6.600224166792167
7 -6.600224166792167
nxt_rew: -4.778810921550264, nxt: 4, state:8, V: -1.3252324573488095
nxt_rew: -6.470634580234494, nxt: 9, state:8, V: -3.0311252379015707
nxt_rew: -6.714036308974231, nxt: 12, state:8, V: -4.791783407420773
nxt_rew: -6.44884124312958, nxt: 8, state:8, V: -6.44884124312958
8 -6.44884124312958
nxt_rew: -6.0622504758031415, nxt: 5, state:9, V: -1.6140063570557068
nxt_rew: -6.061334845276132, nxt: 10, state:9, V: -3.2278066972428365
nxt_rew: -6.44788573598864, nxt: 13, state:9, V: -4.928580987840281
nxt_rew: -6.44884124312958, nxt: 8, state:9, V: -6.629570267544437
9 -6.629570267544437
nxt_rew: -6.629570267544436, nxt: 6, state:10, V: -1.741653310197498
nxt_rew: -4.889349272869879, nxt: 11, state:10, V: -3.0917568965932207
nxt_rew: -4.889349272869879, nxt: 14, state:10, V: -4.441860482988943
nxt_rew: -6.629570267544437, nxt: 9, state:10, V: -6.1835137931864415
10 -6.1835137931864415
nxt_rew: -6.600224166792167, nxt: 7, state:11, V: -1.7350504375282376
nxt_rew: -3.0851540239239603, nxt: 11, state:11, V: -3.0851540239239603
nxt_rew: 0.0, nxt: 0, state:11, V: -3.3351540239239603
nxt_rew: -6.1835137931864415, nxt: 10, state:11, V: -4.97644462739091
11 -4.97644462739091
nxt_rew: -6.44884124312958, nxt: 8, state:12, V: -1.7009892797041557
nxt_rew: -6.44788573598864, nxt: 13, state:12, V: -3.4017635703015996
nxt_rew: -5.162421739820802, nxt: 12, state:12, V: -5.162421739820802
nxt_rew: -6.923079909340004, nxt: 12, state:12, V: -6.923079909340004
12 -6.923079909340004
nxt_rew: -6.629570267544437, nxt: 9, state:13, V: -1.7416533101974983
nxt_rew: -4.889349272869879, nxt: 14, state:13, V: -3.091756896593221
nxt_rew: -4.792531187190665, nxt: 13, state:13, V: -4.792531187190665
nxt_rew: -6.923079909340004, nxt: 12, state:13, V: -6.600224166792166
13 -6.600224166792166
nxt_rew: -6.1835137931864415, nxt: 10, state:14, V: -1.6412906034669494
nxt_rew: 0.0, nxt: 0, state:14, V: -1.8912906034669494
nxt_rew: -3.241394189862672, nxt: 14, state:14, V: -3.241394189862672
nxt_rew: -6.600224166792166, nxt: 13, state:14, V: -4.97644462739091
14 -4.97644462739091
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
0 0.0
nxt_rew: -1.3252324573488095, nxt: 1, state:1, V: -1.3252324573488095
nxt_rew: -6.44884124312958, nxt: 2, state:1, V: -3.026221737052965
nxt_rew: -6.0622504758031415, nxt: 5, state:1, V: -4.6402280941086715
nxt_rew: 0.0, nxt: 0, state:1, V: -4.8902280941086715
1 -4.8902280941086715
nxt_rew: -1.7009892797041557, nxt: 2, state:2, V: -1.7009892797041557
nxt_rew: -6.923079909340005, nxt: 3, state:2, V: -3.508682259305657
nxt_rew: -6.629570267544436, nxt: 6, state:2, V: -5.250335569503155
nxt_rew: -4.8902280941086715, nxt: 1, state:2, V: -6.600636890677606
2 -6.600636890677606
nxt_rew: -1.807692979601501, nxt: 3, state:3, V: -1.807692979601501
nxt_rew: -3.615385959203002, nxt: 3, state:3, V: -3.615385959203002
nxt_rew: -6.600224166792167, nxt: 7, state:3, V: -5.35043639673124
nxt_rew: -6.600636890677606, nxt: 2, state:3, V: -7.085579697133701
3 -7.085579697133701
nxt_rew: 0.0, nxt: 0, state:4, V: -0.25
nxt_rew: -6.0622504758031415, nxt: 5, state:4, V: -1.8640063570557068
nxt_rew: -6.44884124312958, nxt: 8, state:4, V: -3.5649956367598623
nxt_rew: -4.8902280941086715, nxt: 4, state:4, V: -4.8902280941086715
4 -4.8902280941086715
nxt_rew: -4.8902280941086715, nxt: 1, state:5, V: -1.3503013211744512
nxt_rew: -6.629570267544436, nxt: 6, state:5, V: -3.0919546313719493
nxt_rew: -6.629570267544437, nxt: 9, state:5, V: -4.833607941569447
nxt_rew: -4.8902280941086715, nxt: 4, state:5, V: -6.1839092627438985
5 -6.1839092627438985
nxt_rew: -6.600636890677606, nxt: 2, state:6, V: -1.7351433004024615
nxt_rew: -6.600224166792167, nxt: 7, state:6, V: -3.470193737930699
nxt_rew: -6.1835137931864415, nxt: 10, state:6, V: -5.111484341397649
nxt_rew: -6.1839092627438985, nxt: 5, state:6, V: -6.752863925515026
6 -6.752863925515026
nxt_rew: -7.085579697133701, nxt: 3, state:7, V: -1.8442554318550828
nxt_rew: -3.5793058693833206, nxt: 7, state:7, V: -3.5793058693833206
nxt_rew: -4.97644462739091, nxt: 11, state:7, V: -4.949005910546275
nxt_rew: -6.752863925515026, nxt: 6, state:7, V: -6.718400293787156
7 -6.718400293787156
nxt_rew: -4.8902280941086715, nxt: 4, state:8, V: -1.3503013211744512
nxt_rew: -6.629570267544437, nxt: 9, state:8, V: -3.0919546313719497
nxt_rew: -6.923079909340004, nxt: 12, state:8, V: -4.89964761097345
nxt_rew: -6.600636890677606, nxt: 8, state:8, V: -6.600636890677606
8 -6.600636890677606
nxt_rew: -6.1839092627438985, nxt: 5, state:9, V: -1.6413795841173773
nxt_rew: -6.1835137931864415, nxt: 10, state:9, V: -3.2826701875843267
nxt_rew: -6.600224166792166, nxt: 13, state:9, V: -5.017720625112564
nxt_rew: -6.600636890677606, nxt: 8, state:9, V: -6.752863925515026
9 -6.752863925515026
nxt_rew: -6.752863925515026, nxt: 6, state:10, V: -1.769394383240881
nxt_rew: -4.97644462739091, nxt: 11, state:10, V: -3.1390944244038357
nxt_rew: -4.97644462739091, nxt: 14, state:10, V: -4.50879446556679
nxt_rew: -6.752863925515026, nxt: 9, state:10, V: -6.278188848807671
10 -6.278188848807671
nxt_rew: -6.718400293787156, nxt: 7, state:11, V: -1.76164006610211
nxt_rew: -3.131340107265065, nxt: 11, state:11, V: -3.131340107265065
nxt_rew: 0.0, nxt: 0, state:11, V: -3.381340107265065
nxt_rew: -6.278188848807671, nxt: 10, state:11, V: -5.043932598246791
11 -5.043932598246791
nxt_rew: -6.600636890677606, nxt: 8, state:12, V: -1.7351433004024615
nxt_rew: -6.600224166792166, nxt: 13, state:12, V: -3.470193737930699
nxt_rew: -5.2778867175321995, nxt: 12, state:12, V: -5.2778867175321995
nxt_rew: -7.0855796971337, nxt: 12, state:12, V: -7.0855796971337
12 -7.0855796971337
nxt_rew: -6.752863925515026, nxt: 9, state:13, V: -1.769394383240881
nxt_rew: -4.97644462739091, nxt: 14, state:13, V: -3.1390944244038357
nxt_rew: -4.8741448619320735, nxt: 13, state:13, V: -4.8741448619320735
nxt_rew: -7.0855796971337, nxt: 12, state:13, V: -6.718400293787156
13 -6.718400293787156
nxt_rew: -6.278188848807671, nxt: 10, state:14, V: -1.662592490981726
nxt_rew: 0.0, nxt: 0, state:14, V: -1.912592490981726
nxt_rew: -3.2822925321446808, nxt: 14, state:14, V: -3.2822925321446808
nxt_rew: -6.718400293787156, nxt: 13, state:14, V: -5.043932598246791
14 -5.043932598246791
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
0 0.0
nxt_rew: -1.3503013211744512, nxt: 1, state:1, V: -1.3503013211744512
nxt_rew: -6.600636890677606, nxt: 2, state:1, V: -3.0854446215769125
nxt_rew: -6.1839092627438985, nxt: 5, state:1, V: -4.72682420569429
nxt_rew: 0.0, nxt: 0, state:1, V: -4.97682420569429
1 -4.97682420569429
###Markdown
**Expected output from running above cell:**

`Values= [0.0, -5.275906485600302, -7.125803667372325, -7.647729922717661, -5.275906485600302, -6.604213913250977, -7.1785079112764745, -7.126384243656092, -7.125803667372325, -7.178507911276475, -6.604678371775787, -5.276663994322859, -7.647729922717662, -7.1263842436560925, -5.27666399432286]`

Now, test our function using the `test_dp` helper. The helper also uses the gw MDP, but with a different gamma value. If our function passes all tests, a passcode will be printed.
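Each debug line in the traces shows the running value of the backup for the state being updated, one action's term at a time, following the in-place Bellman expectation backup

$V(s) \leftarrow \sum_a \pi(a \mid s) \sum_{s'} p(s' \mid s, a)\,\big[r(s, a, s') + \gamma V(s')\big]$

For example, the first state-1 line of the test trace below, `V: -0.25`, is consistent with a uniform random policy and a step reward of $-1$: $0.25 \times (-1 + \gamma \cdot 0)$. Here is a minimal sketch of that update, assuming a Gym-style `env.P[s][a]` list of `(prob, next_state, reward, done)` tuples; the names are illustrative, not the graded `policy_eval_in_place` solution:

```python
import numpy as np

def policy_eval_in_place_sketch(policy, env, gamma=1.0, theta=1e-8):
    """In-place iterative policy evaluation (illustrative sketch)."""
    V = np.zeros(env.nS)
    while True:
        delta = 0.0
        for s in range(env.nS):
            v_new = 0.0
            for a, pi_a in enumerate(policy[s]):
                for prob, nxt, reward, _ in env.P[s][a]:
                    # one term of the Bellman expectation backup;
                    # V[nxt] may already hold this sweep's value (in-place)
                    v_new += pi_a * prob * (reward + gamma * V[nxt])
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new  # overwrite immediately -- the "in-place" part
        if delta < theta:
            return V
```

Because `V[s]` is overwritten as soon as it is computed, states visited later in a sweep already see this sweep's values, which is why in-place evaluation typically converges in fewer sweeps than the two-array variant.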
###Code
# test our function using the test_dp helper
test_dp.policy_eval_in_place_test(policy_eval_in_place)
###Output
Testing: Policy Evaluation (in-place)
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
0 0.0
nxt_rew: -0.25, nxt: 1, state:1, V: -0.25
nxt_rew: 0, nxt: 5, state:1, V: -0.5
nxt_rew: 0.0, nxt: 0, state:1, V: -0.75
nxt_rew: 0, nxt: 2, state:1, V: -1.0
1 -1.0
nxt_rew: -0.25, nxt: 2, state:2, V: -0.25
nxt_rew: 0, nxt: 6, state:2, V: -0.5
nxt_rew: -1.0, nxt: 1, state:2, V: -1.0
nxt_rew: 0, nxt: 3, state:2, V: -1.25
2 -1.25
nxt_rew: -0.25, nxt: 3, state:3, V: -0.25
nxt_rew: 0, nxt: 7, state:3, V: -0.5
nxt_rew: -1.25, nxt: 2, state:3, V: -1.0625
nxt_rew: -1.3125, nxt: 3, state:3, V: -1.3125
3 -1.3125
nxt_rew: 0.0, nxt: 0, state:4, V: -0.25
nxt_rew: 0, nxt: 8, state:4, V: -0.5
nxt_rew: -0.75, nxt: 4, state:4, V: -0.75
nxt_rew: 0, nxt: 5, state:4, V: -1.0
4 -1.0
nxt_rew: -1.0, nxt: 1, state:5, V: -0.5
nxt_rew: 0, nxt: 9, state:5, V: -0.75
nxt_rew: -1.0, nxt: 4, state:5, V: -1.25
nxt_rew: 0, nxt: 6, state:5, V: -1.5
5 -1.5
nxt_rew: -1.25, nxt: 2, state:6, V: -0.5625
nxt_rew: 0, nxt: 10, state:6, V: -0.8125
nxt_rew: -1.5, nxt: 5, state:6, V: -1.4375
nxt_rew: 0, nxt: 7, state:6, V: -1.6875
6 -1.6875
nxt_rew: -1.3125, nxt: 3, state:7, V: -0.578125
nxt_rew: 0, nxt: 11, state:7, V: -0.828125
nxt_rew: -1.6875, nxt: 6, state:7, V: -1.5
nxt_rew: -1.75, nxt: 7, state:7, V: -1.75
7 -1.75
nxt_rew: -1.0, nxt: 4, state:8, V: -0.5
nxt_rew: 0, nxt: 12, state:8, V: -0.75
nxt_rew: -1.0, nxt: 8, state:8, V: -1.0
nxt_rew: 0, nxt: 9, state:8, V: -1.25
8 -1.25
nxt_rew: -1.5, nxt: 5, state:9, V: -0.625
nxt_rew: 0, nxt: 13, state:9, V: -0.875
nxt_rew: -1.25, nxt: 8, state:9, V: -1.4375
nxt_rew: 0, nxt: 10, state:9, V: -1.6875
9 -1.6875
nxt_rew: -1.6875, nxt: 6, state:10, V: -0.671875
nxt_rew: 0, nxt: 14, state:10, V: -0.921875
nxt_rew: -1.6875, nxt: 9, state:10, V: -1.59375
nxt_rew: 0, nxt: 11, state:10, V: -1.84375
10 -1.84375
nxt_rew: -1.75, nxt: 7, state:11, V: -0.6875
nxt_rew: 0.0, nxt: 0, state:11, V: -0.9375
nxt_rew: -1.84375, nxt: 10, state:11, V: -1.6484375
nxt_rew: -1.8984375, nxt: 11, state:11, V: -1.8984375
11 -1.8984375
nxt_rew: -1.25, nxt: 8, state:12, V: -0.5625
nxt_rew: -0.8125, nxt: 12, state:12, V: -0.8125
nxt_rew: -1.0625, nxt: 12, state:12, V: -1.0625
nxt_rew: 0, nxt: 13, state:12, V: -1.3125
12 -1.3125
nxt_rew: -1.6875, nxt: 9, state:13, V: -0.671875
nxt_rew: -0.921875, nxt: 13, state:13, V: -0.921875
nxt_rew: -1.3125, nxt: 12, state:13, V: -1.5
nxt_rew: 0, nxt: 14, state:13, V: -1.75
13 -1.75
nxt_rew: -1.84375, nxt: 10, state:14, V: -0.7109375
nxt_rew: -0.9609375, nxt: 14, state:14, V: -0.9609375
nxt_rew: -1.75, nxt: 13, state:14, V: -1.6484375
nxt_rew: 0.0, nxt: 0, state:14, V: -1.8984375
14 -1.8984375
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
0 0.0
nxt_rew: -0.5, nxt: 1, state:1, V: -0.5
nxt_rew: -1.5, nxt: 5, state:1, V: -1.125
nxt_rew: 0.0, nxt: 0, state:1, V: -1.375
nxt_rew: -1.25, nxt: 2, state:1, V: -1.9375
1 -1.9375
nxt_rew: -0.5625, nxt: 2, state:2, V: -0.5625
nxt_rew: -1.6875, nxt: 6, state:2, V: -1.234375
nxt_rew: -1.9375, nxt: 1, state:2, V: -1.96875
nxt_rew: -1.3125, nxt: 3, state:2, V: -2.546875
2 -2.546875
nxt_rew: -0.578125, nxt: 3, state:3, V: -0.578125
nxt_rew: -1.75, nxt: 7, state:3, V: -1.265625
nxt_rew: -2.546875, nxt: 2, state:3, V: -2.15234375
nxt_rew: -2.73046875, nxt: 3, state:3, V: -2.73046875
3 -2.73046875
nxt_rew: 0.0, nxt: 0, state:4, V: -0.25
nxt_rew: -1.25, nxt: 8, state:4, V: -0.8125
nxt_rew: -1.3125, nxt: 4, state:4, V: -1.3125
nxt_rew: -1.5, nxt: 5, state:4, V: -1.9375
4 -1.9375
nxt_rew: -1.9375, nxt: 1, state:5, V: -0.734375
nxt_rew: -1.6875, nxt: 9, state:5, V: -1.40625
nxt_rew: -1.9375, nxt: 4, state:5, V: -2.140625
nxt_rew: -1.6875, nxt: 6, state:5, V: -2.8125
5 -2.8125
nxt_rew: -2.546875, nxt: 2, state:6, V: -0.88671875
nxt_rew: -1.84375, nxt: 10, state:6, V: -1.59765625
nxt_rew: -2.8125, nxt: 5, state:6, V: -2.55078125
nxt_rew: -1.75, nxt: 7, state:6, V: -3.23828125
6 -3.23828125
nxt_rew: -2.73046875, nxt: 3, state:7, V: -0.9326171875
nxt_rew: -1.8984375, nxt: 11, state:7, V: -1.6572265625
nxt_rew: -3.23828125, nxt: 6, state:7, V: -2.716796875
nxt_rew: -3.404296875, nxt: 7, state:7, V: -3.404296875
7 -3.404296875
nxt_rew: -1.9375, nxt: 4, state:8, V: -0.734375
nxt_rew: -1.3125, nxt: 12, state:8, V: -1.3125
nxt_rew: -1.875, nxt: 8, state:8, V: -1.875
nxt_rew: -1.6875, nxt: 9, state:8, V: -2.546875
8 -2.546875
nxt_rew: -2.8125, nxt: 5, state:9, V: -0.953125
nxt_rew: -1.75, nxt: 13, state:9, V: -1.640625
nxt_rew: -2.546875, nxt: 8, state:9, V: -2.52734375
nxt_rew: -1.84375, nxt: 10, state:9, V: -3.23828125
9 -3.23828125
nxt_rew: -3.23828125, nxt: 6, state:10, V: -1.0595703125
nxt_rew: -1.8984375, nxt: 14, state:10, V: -1.7841796875
nxt_rew: -3.23828125, nxt: 9, state:10, V: -2.84375
nxt_rew: -1.8984375, nxt: 11, state:10, V: -3.568359375
10 -3.568359375
nxt_rew: -3.404296875, nxt: 7, state:11, V: -1.10107421875
nxt_rew: 0.0, nxt: 0, state:11, V: -1.35107421875
nxt_rew: -3.568359375, nxt: 10, state:11, V: -2.4931640625
nxt_rew: -3.2177734375, nxt: 11, state:11, V: -3.2177734375
11 -3.2177734375
nxt_rew: -2.546875, nxt: 8, state:12, V: -0.88671875
nxt_rew: -1.46484375, nxt: 12, state:12, V: -1.46484375
nxt_rew: -2.04296875, nxt: 12, state:12, V: -2.04296875
nxt_rew: -1.75, nxt: 13, state:12, V: -2.73046875
12 -2.73046875
nxt_rew: -3.23828125, nxt: 9, state:13, V: -1.0595703125
nxt_rew: -1.7470703125, nxt: 13, state:13, V: -1.7470703125
nxt_rew: -2.73046875, nxt: 12, state:13, V: -2.6796875
nxt_rew: -1.8984375, nxt: 14, state:13, V: -3.404296875
13 -3.404296875
nxt_rew: -3.568359375, nxt: 10, state:14, V: -1.14208984375
nxt_rew: -1.86669921875, nxt: 14, state:14, V: -1.86669921875
nxt_rew: -3.404296875, nxt: 13, state:14, V: -2.9677734375
nxt_rew: 0.0, nxt: 0, state:14, V: -3.2177734375
14 -3.2177734375
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
0 0.0
nxt_rew: -0.734375, nxt: 1, state:1, V: -0.734375
nxt_rew: -2.8125, nxt: 5, state:1, V: -1.6875
nxt_rew: 0.0, nxt: 0, state:1, V: -1.9375
nxt_rew: -2.546875, nxt: 2, state:1, V: -2.82421875
1 -2.82421875
nxt_rew: -0.88671875, nxt: 2, state:2, V: -0.88671875
nxt_rew: -3.23828125, nxt: 6, state:2, V: -1.9462890625
nxt_rew: -2.82421875, nxt: 1, state:2, V: -2.90234375
nxt_rew: -2.73046875, nxt: 3, state:2, V: -3.8349609375
2 -3.8349609375
nxt_rew: -0.9326171875, nxt: 3, state:3, V: -0.9326171875
nxt_rew: -3.404296875, nxt: 7, state:3, V: -2.03369140625
nxt_rew: -3.8349609375, nxt: 2, state:3, V: -3.242431640625
nxt_rew: -4.175048828125, nxt: 3, state:3, V: -4.175048828125
3 -4.175048828125
nxt_rew: 0.0, nxt: 0, state:4, V: -0.25
nxt_rew: -2.546875, nxt: 8, state:4, V: -1.13671875
nxt_rew: -1.87109375, nxt: 4, state:4, V: -1.87109375
nxt_rew: -2.8125, nxt: 5, state:4, V: -2.82421875
4 -2.82421875
nxt_rew: -2.82421875, nxt: 1, state:5, V: -0.9560546875
nxt_rew: -3.23828125, nxt: 9, state:5, V: -2.015625
nxt_rew: -2.82421875, nxt: 4, state:5, V: -2.9716796875
nxt_rew: -3.23828125, nxt: 6, state:5, V: -4.03125
5 -4.03125
nxt_rew: -3.8349609375, nxt: 2, state:6, V: -1.208740234375
nxt_rew: -3.568359375, nxt: 10, state:6, V: -2.350830078125
nxt_rew: -4.03125, nxt: 5, state:6, V: -3.608642578125
nxt_rew: -3.404296875, nxt: 7, state:6, V: -4.709716796875
6 -4.709716796875
nxt_rew: -4.175048828125, nxt: 3, state:7, V: -1.29376220703125
nxt_rew: -3.2177734375, nxt: 11, state:7, V: -2.34820556640625
nxt_rew: -4.709716796875, nxt: 6, state:7, V: -3.775634765625
nxt_rew: -4.876708984375, nxt: 7, state:7, V: -4.876708984375
7 -4.876708984375
nxt_rew: -2.82421875, nxt: 4, state:8, V: -0.9560546875
nxt_rew: -2.73046875, nxt: 12, state:8, V: -1.888671875
nxt_rew: -2.775390625, nxt: 8, state:8, V: -2.775390625
nxt_rew: -3.23828125, nxt: 9, state:8, V: -3.8349609375
8 -3.8349609375
nxt_rew: -4.03125, nxt: 5, state:9, V: -1.2578125
nxt_rew: -3.404296875, nxt: 13, state:9, V: -2.35888671875
nxt_rew: -3.8349609375, nxt: 8, state:9, V: -3.567626953125
nxt_rew: -3.568359375, nxt: 10, state:9, V: -4.709716796875
9 -4.709716796875
nxt_rew: -4.709716796875, nxt: 6, state:10, V: -1.42742919921875
nxt_rew: -3.2177734375, nxt: 14, state:10, V: -2.48187255859375
nxt_rew: -4.709716796875, nxt: 9, state:10, V: -3.9093017578125
nxt_rew: -3.2177734375, nxt: 11, state:10, V: -4.9637451171875
10 -4.9637451171875
nxt_rew: -4.876708984375, nxt: 7, state:11, V: -1.46917724609375
nxt_rew: 0.0, nxt: 0, state:11, V: -1.71917724609375
nxt_rew: -4.9637451171875, nxt: 10, state:11, V: -3.210113525390625
nxt_rew: -4.264556884765625, nxt: 11, state:11, V: -4.264556884765625
11 -4.264556884765625
nxt_rew: -3.8349609375, nxt: 8, state:12, V: -1.208740234375
nxt_rew: -2.141357421875, nxt: 12, state:12, V: -2.141357421875
nxt_rew: -3.073974609375, nxt: 12, state:12, V: -3.073974609375
nxt_rew: -3.404296875, nxt: 13, state:12, V: -4.175048828125
12 -4.175048828125
nxt_rew: -4.709716796875, nxt: 9, state:13, V: -1.42742919921875
nxt_rew: -2.52850341796875, nxt: 13, state:13, V: -2.52850341796875
nxt_rew: -4.175048828125, nxt: 12, state:13, V: -3.822265625
nxt_rew: -3.2177734375, nxt: 14, state:13, V: -4.876708984375
13 -4.876708984375
nxt_rew: -4.9637451171875, nxt: 10, state:14, V: -1.490936279296875
nxt_rew: -2.545379638671875, nxt: 14, state:14, V: -2.545379638671875
nxt_rew: -4.876708984375, nxt: 13, state:14, V: -4.014556884765625
nxt_rew: 0.0, nxt: 0, state:14, V: -4.264556884765625
14 -4.264556884765625
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
0 0.0
nxt_rew: -0.9560546875, nxt: 1, state:1, V: -0.9560546875
nxt_rew: -4.03125, nxt: 5, state:1, V: -2.2138671875
nxt_rew: 0.0, nxt: 0, state:1, V: -2.4638671875
nxt_rew: -3.8349609375, nxt: 2, state:1, V: -3.672607421875
1 -3.672607421875
nxt_rew: -1.208740234375, nxt: 2, state:2, V: -1.208740234375
nxt_rew: -4.709716796875, nxt: 6, state:2, V: -2.63616943359375
nxt_rew: -3.672607421875, nxt: 1, state:2, V: -3.8043212890625
nxt_rew: -4.175048828125, nxt: 3, state:2, V: -5.09808349609375
2 -5.09808349609375
nxt_rew: -1.29376220703125, nxt: 3, state:3, V: -1.29376220703125
nxt_rew: -4.876708984375, nxt: 7, state:3, V: -2.762939453125
nxt_rew: -5.09808349609375, nxt: 2, state:3, V: -4.2874603271484375
nxt_rew: -5.5812225341796875, nxt: 3, state:3, V: -5.5812225341796875
3 -5.5812225341796875
nxt_rew: 0.0, nxt: 0, state:4, V: -0.25
nxt_rew: -3.8349609375, nxt: 8, state:4, V: -1.458740234375
nxt_rew: -2.414794921875, nxt: 4, state:4, V: -2.414794921875
nxt_rew: -4.03125, nxt: 5, state:4, V: -3.672607421875
4 -3.672607421875
nxt_rew: -3.672607421875, nxt: 1, state:5, V: -1.16815185546875
nxt_rew: -4.709716796875, nxt: 9, state:5, V: -2.5955810546875
nxt_rew: -3.672607421875, nxt: 4, state:5, V: -3.76373291015625
nxt_rew: -4.709716796875, nxt: 6, state:5, V: -5.191162109375
5 -5.191162109375
nxt_rew: -5.09808349609375, nxt: 2, state:6, V: -1.5245208740234375
nxt_rew: -4.9637451171875, nxt: 10, state:6, V: -3.0154571533203125
nxt_rew: -5.191162109375, nxt: 5, state:6, V: -4.5632476806640625
nxt_rew: -4.876708984375, nxt: 7, state:6, V: -6.0324249267578125
6 -6.0324249267578125
nxt_rew: -5.5812225341796875, nxt: 3, state:7, V: -1.6453056335449219
nxt_rew: -4.264556884765625, nxt: 11, state:7, V: -2.961444854736328
nxt_rew: -6.0324249267578125, nxt: 6, state:7, V: -4.719551086425781
nxt_rew: -6.188728332519531, nxt: 7, state:7, V: -6.188728332519531
7 -6.188728332519531
nxt_rew: -3.672607421875, nxt: 4, state:8, V: -1.16815185546875
nxt_rew: -4.175048828125, nxt: 12, state:8, V: -2.4619140625
nxt_rew: -3.670654296875, nxt: 8, state:8, V: -3.670654296875
nxt_rew: -4.709716796875, nxt: 9, state:8, V: -5.09808349609375
8 -5.09808349609375
nxt_rew: -5.191162109375, nxt: 5, state:9, V: -1.54779052734375
nxt_rew: -4.876708984375, nxt: 13, state:9, V: -3.0169677734375
nxt_rew: -5.09808349609375, nxt: 8, state:9, V: -4.5414886474609375
nxt_rew: -4.9637451171875, nxt: 10, state:9, V: -6.0324249267578125
9 -6.0324249267578125
nxt_rew: -6.0324249267578125, nxt: 6, state:10, V: -1.7581062316894531
nxt_rew: -4.264556884765625, nxt: 14, state:10, V: -3.0742454528808594
nxt_rew: -6.0324249267578125, nxt: 9, state:10, V: -4.8323516845703125
nxt_rew: -4.264556884765625, nxt: 11, state:10, V: -6.148490905761719
10 -6.148490905761719
nxt_rew: -6.188728332519531, nxt: 7, state:11, V: -1.7971820831298828
nxt_rew: 0.0, nxt: 0, state:11, V: -2.047182083129883
nxt_rew: -6.148490905761719, nxt: 10, state:11, V: -3.8343048095703125
nxt_rew: -5.150444030761719, nxt: 11, state:11, V: -5.150444030761719
11 -5.150444030761719
nxt_rew: -5.09808349609375, nxt: 8, state:12, V: -1.5245208740234375
nxt_rew: -2.8182830810546875, nxt: 12, state:12, V: -2.8182830810546875
nxt_rew: -4.1120452880859375, nxt: 12, state:12, V: -4.1120452880859375
nxt_rew: -4.876708984375, nxt: 13, state:12, V: -5.5812225341796875
12 -5.5812225341796875
nxt_rew: -6.0324249267578125, nxt: 9, state:13, V: -1.7581062316894531
nxt_rew: -3.227283477783203, nxt: 13, state:13, V: -3.227283477783203
nxt_rew: -5.5812225341796875, nxt: 12, state:13, V: -4.872589111328125
nxt_rew: -4.264556884765625, nxt: 14, state:13, V: -6.188728332519531
13 -6.188728332519531
nxt_rew: -6.148490905761719, nxt: 10, state:14, V: -1.7871227264404297
nxt_rew: -3.103261947631836, nxt: 14, state:14, V: -3.103261947631836
nxt_rew: -6.188728332519531, nxt: 13, state:14, V: -4.900444030761719
nxt_rew: 0.0, nxt: 0, state:14, V: -5.150444030761719
14 -5.150444030761719
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
0 0.0
nxt_rew: -1.16815185546875, nxt: 1, state:1, V: -1.16815185546875
nxt_rew: -5.191162109375, nxt: 5, state:1, V: -2.7159423828125
nxt_rew: 0.0, nxt: 0, state:1, V: -2.9659423828125
nxt_rew: -5.09808349609375, nxt: 2, state:1, V: -4.4904632568359375
1 -4.4904632568359375
nxt_rew: -1.5245208740234375, nxt: 2, state:2, V: -1.5245208740234375
nxt_rew: -6.0324249267578125, nxt: 6, state:2, V: -3.2826271057128906
nxt_rew: -4.4904632568359375, nxt: 1, state:2, V: -4.655242919921875
nxt_rew: -5.5812225341796875, nxt: 3, state:2, V: -6.300548553466797
2 -6.300548553466797
nxt_rew: -1.6453056335449219, nxt: 3, state:3, V: -1.6453056335449219
nxt_rew: -6.188728332519531, nxt: 7, state:3, V: -3.4424877166748047
nxt_rew: -6.300548553466797, nxt: 2, state:3, V: -5.267624855041504
nxt_rew: -6.912930488586426, nxt: 3, state:3, V: -6.912930488586426
3 -6.912930488586426
nxt_rew: 0.0, nxt: 0, state:4, V: -0.25
nxt_rew: -5.09808349609375, nxt: 8, state:4, V: -1.7745208740234375
nxt_rew: -2.9426727294921875, nxt: 4, state:4, V: -2.9426727294921875
nxt_rew: -5.191162109375, nxt: 5, state:4, V: -4.4904632568359375
4 -4.4904632568359375
nxt_rew: -4.4904632568359375, nxt: 1, state:5, V: -1.3726158142089844
nxt_rew: -6.0324249267578125, nxt: 9, state:5, V: -3.1307220458984375
nxt_rew: -4.4904632568359375, nxt: 4, state:5, V: -4.503337860107422
nxt_rew: -6.0324249267578125, nxt: 6, state:5, V: -6.261444091796875
5 -6.261444091796875
nxt_rew: -6.300548553466797, nxt: 2, state:6, V: -1.8251371383666992
nxt_rew: -6.148490905761719, nxt: 10, state:6, V: -3.612259864807129
nxt_rew: -6.261444091796875, nxt: 5, state:6, V: -5.427620887756348
nxt_rew: -6.188728332519531, nxt: 7, state:6, V: -7.2248029708862305
6 -7.2248029708862305
nxt_rew: -6.912930488586426, nxt: 3, state:7, V: -1.9782326221466064
nxt_rew: -5.150444030761719, nxt: 11, state:7, V: -3.515843629837036
nxt_rew: -7.2248029708862305, nxt: 6, state:7, V: -5.572044372558594
nxt_rew: -7.369226455688477, nxt: 7, state:7, V: -7.369226455688477
7 -7.369226455688477
nxt_rew: -4.4904632568359375, nxt: 4, state:8, V: -1.3726158142089844
nxt_rew: -5.5812225341796875, nxt: 12, state:8, V: -3.0179214477539062
nxt_rew: -4.542442321777344, nxt: 8, state:8, V: -4.542442321777344
nxt_rew: -6.0324249267578125, nxt: 9, state:8, V: -6.300548553466797
8 -6.300548553466797
nxt_rew: -6.261444091796875, nxt: 5, state:9, V: -1.8153610229492188
nxt_rew: -6.188728332519531, nxt: 13, state:9, V: -3.6125431060791016
nxt_rew: -6.300548553466797, nxt: 8, state:9, V: -5.437680244445801
nxt_rew: -6.148490905761719, nxt: 10, state:9, V: -7.2248029708862305
9 -7.2248029708862305
nxt_rew: -7.2248029708862305, nxt: 6, state:10, V: -2.0562007427215576
nxt_rew: -5.150444030761719, nxt: 14, state:10, V: -3.5938117504119873
nxt_rew: -7.2248029708862305, nxt: 9, state:10, V: -5.650012493133545
nxt_rew: -5.150444030761719, nxt: 11, state:10, V: -7.187623500823975
10 -7.187623500823975
nxt_rew: -7.369226455688477, nxt: 7, state:11, V: -2.092306613922119
nxt_rew: 0.0, nxt: 0, state:11, V: -2.342306613922119
nxt_rew: -7.187623500823975, nxt: 10, state:11, V: -4.389212489128113
nxt_rew: -5.9268234968185425, nxt: 11, state:11, V: -5.9268234968185425
11 -5.9268234968185425
nxt_rew: -6.300548553466797, nxt: 8, state:12, V: -1.8251371383666992
nxt_rew: -3.470442771911621, nxt: 12, state:12, V: -3.470442771911621
nxt_rew: -5.115748405456543, nxt: 12, state:12, V: -5.115748405456543
nxt_rew: -6.188728332519531, nxt: 13, state:12, V: -6.912930488586426
12 -6.912930488586426
nxt_rew: -7.2248029708862305, nxt: 9, state:13, V: -2.0562007427215576
nxt_rew: -3.8533828258514404, nxt: 13, state:13, V: -3.8533828258514404
nxt_rew: -6.912930488586426, nxt: 12, state:13, V: -5.831615447998047
nxt_rew: -5.150444030761719, nxt: 14, state:13, V: -7.369226455688477
13 -7.369226455688477
nxt_rew: -7.187623500823975, nxt: 10, state:14, V: -2.0469058752059937
nxt_rew: -3.5845168828964233, nxt: 14, state:14, V: -3.5845168828964233
nxt_rew: -7.369226455688477, nxt: 13, state:14, V: -5.6768234968185425
nxt_rew: 0.0, nxt: 0, state:14, V: -5.9268234968185425
14 -5.9268234968185425
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
0 0.0
nxt_rew: -1.3726158142089844, nxt: 1, state:1, V: -1.3726158142089844
nxt_rew: -6.261444091796875, nxt: 5, state:1, V: -3.187976837158203
nxt_rew: 0.0, nxt: 0, state:1, V: -3.437976837158203
nxt_rew: -6.300548553466797, nxt: 2, state:1, V: -5.263113975524902
1 -5.263113975524902
nxt_rew: -1.8251371383666992, nxt: 2, state:2, V: -1.8251371383666992
nxt_rew: -7.2248029708862305, nxt: 6, state:2, V: -3.881337881088257
nxt_rew: -5.263113975524902, nxt: 1, state:2, V: -5.447116374969482
nxt_rew: -6.912930488586426, nxt: 3, state:2, V: -7.425348997116089
2 -7.425348997116089
nxt_rew: -1.9782326221466064, nxt: 3, state:3, V: -1.9782326221466064
nxt_rew: -7.369226455688477, nxt: 7, state:3, V: -4.070539236068726
nxt_rew: -7.425348997116089, nxt: 2, state:3, V: -6.176876485347748
nxt_rew: -8.155109107494354, nxt: 3, state:3, V: -8.155109107494354
3 -8.155109107494354
nxt_rew: 0.0, nxt: 0, state:4, V: -0.25
nxt_rew: -6.300548553466797, nxt: 8, state:4, V: -2.075137138366699
nxt_rew: -3.4477529525756836, nxt: 4, state:4, V: -3.4477529525756836
nxt_rew: -6.261444091796875, nxt: 5, state:4, V: -5.263113975524902
4 -5.263113975524902
nxt_rew: -5.263113975524902, nxt: 1, state:5, V: -1.5657784938812256
nxt_rew: -7.2248029708862305, nxt: 9, state:5, V: -3.621979236602783
nxt_rew: -5.263113975524902, nxt: 4, state:5, V: -5.187757730484009
nxt_rew: -7.2248029708862305, nxt: 6, state:5, V: -7.243958473205566
5 -7.243958473205566
nxt_rew: -7.425348997116089, nxt: 2, state:6, V: -2.106337249279022
nxt_rew: -7.187623500823975, nxt: 10, state:6, V: -4.153243124485016
nxt_rew: -7.243958473205566, nxt: 5, state:6, V: -6.2142327427864075
nxt_rew: -7.369226455688477, nxt: 7, state:6, V: -8.306539356708527
6 -8.306539356708527
nxt_rew: -8.155109107494354, nxt: 3, state:7, V: -2.2887772768735886
nxt_rew: -5.9268234968185425, nxt: 11, state:7, V: -4.020483151078224
nxt_rew: -8.306539356708527, nxt: 6, state:7, V: -6.347117990255356
nxt_rew: -8.439424604177475, nxt: 7, state:7, V: -8.439424604177475
7 -8.439424604177475
nxt_rew: -5.263113975524902, nxt: 4, state:8, V: -1.5657784938812256
nxt_rew: -6.912930488586426, nxt: 12, state:8, V: -3.544011116027832
nxt_rew: -5.369148254394531, nxt: 8, state:8, V: -5.369148254394531
nxt_rew: -7.2248029708862305, nxt: 9, state:8, V: -7.425348997116089
8 -7.425348997116089
nxt_rew: -7.243958473205566, nxt: 5, state:9, V: -2.0609896183013916
nxt_rew: -7.369226455688477, nxt: 13, state:9, V: -4.153296232223511
nxt_rew: -7.425348997116089, nxt: 8, state:9, V: -6.259633481502533
nxt_rew: -7.187623500823975, nxt: 10, state:9, V: -8.306539356708527
9 -8.306539356708527
nxt_rew: -8.306539356708527, nxt: 6, state:10, V: -2.3266348391771317
nxt_rew: -5.9268234968185425, nxt: 14, state:10, V: -4.058340713381767
nxt_rew: -8.306539356708527, nxt: 9, state:10, V: -6.384975552558899
nxt_rew: -5.9268234968185425, nxt: 11, state:10, V: -8.116681426763535
10 -8.116681426763535
nxt_rew: -8.439424604177475, nxt: 7, state:11, V: -2.3598561510443687
nxt_rew: 0.0, nxt: 0, state:11, V: -2.6098561510443687
nxt_rew: -8.116681426763535, nxt: 10, state:11, V: -4.889026507735252
nxt_rew: -6.620732381939888, nxt: 11, state:11, V: -6.620732381939888
11 -6.620732381939888
nxt_rew: -7.425348997116089, nxt: 8, state:12, V: -2.106337249279022
nxt_rew: -4.084569871425629, nxt: 12, state:12, V: -4.084569871425629
nxt_rew: -6.062802493572235, nxt: 12, state:12, V: -6.062802493572235
nxt_rew: -7.369226455688477, nxt: 13, state:12, V: -8.155109107494354
12 -8.155109107494354
nxt_rew: -8.306539356708527, nxt: 9, state:13, V: -2.3266348391771317
nxt_rew: -4.418941453099251, nxt: 13, state:13, V: -4.418941453099251
nxt_rew: -8.155109107494354, nxt: 12, state:13, V: -6.707718729972839
nxt_rew: -5.9268234968185425, nxt: 14, state:13, V: -8.439424604177475
13 -8.439424604177475
nxt_rew: -8.116681426763535, nxt: 10, state:14, V: -2.2791703566908836
nxt_rew: -4.010876230895519, nxt: 14, state:14, V: -4.010876230895519
nxt_rew: -8.439424604177475, nxt: 13, state:14, V: -6.370732381939888
nxt_rew: 0.0, nxt: 0, state:14, V: -6.620732381939888
14 -6.620732381939888
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
0 0.0
nxt_rew: -1.5657784938812256, nxt: 1, state:1, V: -1.5657784938812256
nxt_rew: -7.243958473205566, nxt: 5, state:1, V: -3.626768112182617
nxt_rew: 0.0, nxt: 0, state:1, V: -3.876768112182617
nxt_rew: -7.425348997116089, nxt: 2, state:1, V: -5.983105361461639
1 -5.983105361461639
nxt_rew: -2.106337249279022, nxt: 2, state:2, V: -2.106337249279022
nxt_rew: -8.306539356708527, nxt: 6, state:2, V: -4.432972088456154
nxt_rew: -5.983105361461639, nxt: 1, state:2, V: -6.178748428821564
nxt_rew: -8.155109107494354, nxt: 3, state:2, V: -8.467525705695152
2 -8.467525705695152
nxt_rew: -2.2887772768735886, nxt: 3, state:3, V: -2.2887772768735886
nxt_rew: -8.439424604177475, nxt: 7, state:3, V: -4.648633427917957
nxt_rew: -8.467525705695152, nxt: 2, state:3, V: -7.015514854341745
nxt_rew: -9.304292131215334, nxt: 3, state:3, V: -9.304292131215334
3 -9.304292131215334
nxt_rew: 0.0, nxt: 0, state:4, V: -0.25
nxt_rew: -7.425348997116089, nxt: 8, state:4, V: -2.356337249279022
nxt_rew: -3.922115743160248, nxt: 4, state:4, V: -3.922115743160248
nxt_rew: -7.243958473205566, nxt: 5, state:4, V: -5.983105361461639
4 -5.983105361461639
nxt_rew: -5.983105361461639, nxt: 1, state:5, V: -1.7457763403654099
nxt_rew: -8.306539356708527, nxt: 9, state:5, V: -4.0724111795425415
nxt_rew: -5.983105361461639, nxt: 4, state:5, V: -5.818187519907951
nxt_rew: -8.306539356708527, nxt: 6, state:5, V: -8.144822359085083
5 -8.144822359085083
nxt_rew: -8.467525705695152, nxt: 2, state:6, V: -2.366881426423788
nxt_rew: -8.116681426763535, nxt: 10, state:6, V: -4.646051783114672
nxt_rew: -8.144822359085083, nxt: 5, state:6, V: -6.9322573728859425
nxt_rew: -8.439424604177475, nxt: 7, state:6, V: -9.292113523930311
6 -9.292113523930311
nxt_rew: -9.304292131215334, nxt: 3, state:7, V: -2.5760730328038335
nxt_rew: -6.620732381939888, nxt: 11, state:7, V: -4.4812561282888055
nxt_rew: -9.292113523930311, nxt: 6, state:7, V: -7.054284509271383
nxt_rew: -9.414140660315752, nxt: 7, state:7, V: -9.414140660315752
7 -9.414140660315752
nxt_rew: -5.983105361461639, nxt: 4, state:8, V: -1.7457763403654099
nxt_rew: -8.155109107494354, nxt: 12, state:8, V: -4.034553617238998
nxt_rew: -6.140890866518021, nxt: 8, state:8, V: -6.140890866518021
nxt_rew: -8.306539356708527, nxt: 9, state:8, V: -8.467525705695152
8 -8.467525705695152
nxt_rew: -8.144822359085083, nxt: 5, state:9, V: -2.2862055897712708
nxt_rew: -8.439424604177475, nxt: 13, state:9, V: -4.6460617408156395
nxt_rew: -8.467525705695152, nxt: 8, state:9, V: -7.012943167239428
nxt_rew: -8.116681426763535, nxt: 10, state:9, V: -9.292113523930311
9 -9.292113523930311
nxt_rew: -9.292113523930311, nxt: 6, state:10, V: -2.573028380982578
nxt_rew: -6.620732381939888, nxt: 14, state:10, V: -4.47821147646755
nxt_rew: -9.292113523930311, nxt: 9, state:10, V: -7.051239857450128
nxt_rew: -6.620732381939888, nxt: 11, state:10, V: -8.9564229529351
10 -8.9564229529351
nxt_rew: -9.414140660315752, nxt: 7, state:11, V: -2.603535165078938
nxt_rew: 0.0, nxt: 0, state:11, V: -2.853535165078938
nxt_rew: -8.9564229529351, nxt: 10, state:11, V: -5.342640903312713
nxt_rew: -7.247823998797685, nxt: 11, state:11, V: -7.247823998797685
11 -7.247823998797685
nxt_rew: -8.467525705695152, nxt: 8, state:12, V: -2.366881426423788
nxt_rew: -4.655658703297377, nxt: 12, state:12, V: -4.655658703297377
nxt_rew: -6.944435980170965, nxt: 12, state:12, V: -6.944435980170965
nxt_rew: -8.439424604177475, nxt: 13, state:12, V: -9.304292131215334
12 -9.304292131215334
nxt_rew: -9.292113523930311, nxt: 9, state:13, V: -2.573028380982578
nxt_rew: -4.9328845320269465, nxt: 13, state:13, V: -4.9328845320269465
nxt_rew: -9.304292131215334, nxt: 12, state:13, V: -7.50895756483078
nxt_rew: -6.620732381939888, nxt: 14, state:13, V: -9.414140660315752
13 -9.414140660315752
nxt_rew: -8.9564229529351, nxt: 10, state:14, V: -2.489105738233775
nxt_rew: -4.394288833718747, nxt: 14, state:14, V: -4.394288833718747
nxt_rew: -9.414140660315752, nxt: 13, state:14, V: -6.997823998797685
nxt_rew: 0.0, nxt: 0, state:14, V: -7.247823998797685
14 -7.247823998797685
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
0 0.0
nxt_rew: -1.7457763403654099, nxt: 1, state:1, V: -1.7457763403654099
nxt_rew: -8.144822359085083, nxt: 5, state:1, V: -4.031981930136681
nxt_rew: 0.0, nxt: 0, state:1, V: -4.281981930136681
nxt_rew: -8.467525705695152, nxt: 2, state:1, V: -6.648863356560469
1 -6.648863356560469
nxt_rew: -2.366881426423788, nxt: 2, state:2, V: -2.366881426423788
nxt_rew: -9.292113523930311, nxt: 6, state:2, V: -4.939909807406366
nxt_rew: -6.648863356560469, nxt: 1, state:2, V: -6.852125646546483
nxt_rew: -9.304292131215334, nxt: 3, state:2, V: -9.428198679350317
2 -9.428198679350317
nxt_rew: -2.5760730328038335, nxt: 3, state:3, V: -2.5760730328038335
nxt_rew: -9.414140660315752, nxt: 7, state:3, V: -5.1796081978827715
nxt_rew: -9.428198679350317, nxt: 2, state:3, V: -7.786657867720351
nxt_rew: -10.362730900524184, nxt: 3, state:3, V: -10.362730900524184
3 -10.362730900524184
nxt_rew: 0.0, nxt: 0, state:4, V: -0.25
nxt_rew: -8.467525705695152, nxt: 8, state:4, V: -2.616881426423788
nxt_rew: -4.362657766789198, nxt: 4, state:4, V: -4.362657766789198
nxt_rew: -8.144822359085083, nxt: 5, state:4, V: -6.648863356560469
4 -6.648863356560469
nxt_rew: -6.648863356560469, nxt: 1, state:5, V: -1.9122158391401172
nxt_rew: -9.292113523930311, nxt: 9, state:5, V: -4.485244220122695
nxt_rew: -6.648863356560469, nxt: 4, state:5, V: -6.397460059262812
nxt_rew: -9.292113523930311, nxt: 6, state:5, V: -8.97048844024539
5 -8.97048844024539
nxt_rew: -9.428198679350317, nxt: 2, state:6, V: -2.607049669837579
nxt_rew: -8.9564229529351, nxt: 10, state:6, V: -5.096155408071354
nxt_rew: -8.97048844024539, nxt: 5, state:6, V: -7.5887775181327015
nxt_rew: -9.414140660315752, nxt: 7, state:6, V: -10.19231268321164
6 -10.19231268321164
nxt_rew: -10.362730900524184, nxt: 3, state:7, V: -2.840682725131046
nxt_rew: -7.247823998797685, nxt: 11, state:7, V: -4.902638724830467
nxt_rew: -10.19231268321164, nxt: 6, state:7, V: -7.700716895633377
nxt_rew: -10.304252060712315, nxt: 7, state:7, V: -10.304252060712315
7 -10.304252060712315
nxt_rew: -6.648863356560469, nxt: 4, state:8, V: -1.9122158391401172
nxt_rew: -9.304292131215334, nxt: 12, state:8, V: -4.488288871943951
nxt_rew: -6.855170298367739, nxt: 8, state:8, V: -6.855170298367739
nxt_rew: -9.292113523930311, nxt: 9, state:8, V: -9.428198679350317
8 -9.428198679350317
nxt_rew: -8.97048844024539, nxt: 5, state:9, V: -2.4926221100613475
nxt_rew: -9.414140660315752, nxt: 13, state:9, V: -5.0961572751402855
nxt_rew: -9.428198679350317, nxt: 8, state:9, V: -7.703206944977865
nxt_rew: -8.9564229529351, nxt: 10, state:9, V: -10.19231268321164
9 -10.19231268321164
nxt_rew: -10.19231268321164, nxt: 6, state:10, V: -2.79807817080291
nxt_rew: -7.247823998797685, nxt: 14, state:10, V: -4.860034170502331
nxt_rew: -10.19231268321164, nxt: 9, state:10, V: -7.658112341305241
nxt_rew: -7.247823998797685, nxt: 11, state:10, V: -9.720068341004662
10 -9.720068341004662
nxt_rew: -10.304252060712315, nxt: 7, state:11, V: -2.826063015178079
nxt_rew: 0.0, nxt: 0, state:11, V: -3.076063015178079
nxt_rew: -9.720068341004662, nxt: 10, state:11, V: -5.756080100429244
nxt_rew: -7.818036100128666, nxt: 11, state:11, V: -7.818036100128666
11 -7.818036100128666
nxt_rew: -9.428198679350317, nxt: 8, state:12, V: -2.607049669837579
nxt_rew: -5.183122702641413, nxt: 12, state:12, V: -5.183122702641413
nxt_rew: -7.759195735445246, nxt: 12, state:12, V: -7.759195735445246
nxt_rew: -9.414140660315752, nxt: 13, state:12, V: -10.362730900524184
12 -10.362730900524184
nxt_rew: -10.19231268321164, nxt: 9, state:13, V: -2.79807817080291
nxt_rew: -5.401613335881848, nxt: 13, state:13, V: -5.401613335881848
nxt_rew: -10.362730900524184, nxt: 12, state:13, V: -8.242296061012894
nxt_rew: -7.247823998797685, nxt: 14, state:13, V: -10.304252060712315
13 -10.304252060712315
nxt_rew: -9.720068341004662, nxt: 10, state:14, V: -2.6800170852511656
nxt_rew: -4.741973084950587, nxt: 14, state:14, V: -4.741973084950587
nxt_rew: -10.304252060712315, nxt: 13, state:14, V: -7.568036100128666
nxt_rew: 0.0, nxt: 0, state:14, V: -7.818036100128666
14 -7.818036100128666
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
0 0.0
nxt_rew: -1.9122158391401172, nxt: 1, state:1, V: -1.9122158391401172
nxt_rew: -8.97048844024539, nxt: 5, state:1, V: -4.404837949201465
nxt_rew: 0.0, nxt: 0, state:1, V: -4.654837949201465
nxt_rew: -9.428198679350317, nxt: 2, state:1, V: -7.261887619039044
1 -7.261887619039044
nxt_rew: -2.607049669837579, nxt: 2, state:2, V: -2.607049669837579
nxt_rew: -10.19231268321164, nxt: 6, state:2, V: -5.405127840640489
nxt_rew: -7.261887619039044, nxt: 1, state:2, V: -7.47059974540025
nxt_rew: -10.362730900524184, nxt: 3, state:2, V: -10.311282470531296
2 -10.311282470531296
nxt_rew: -2.840682725131046, nxt: 3, state:3, V: -2.840682725131046
nxt_rew: -10.304252060712315, nxt: 7, state:3, V: -5.666745740309125
nxt_rew: -10.311282470531296, nxt: 2, state:3, V: -8.494566357941949
nxt_rew: -11.335249083072995, nxt: 3, state:3, V: -11.335249083072995
3 -11.335249083072995
nxt_rew: 0.0, nxt: 0, state:4, V: -0.25
nxt_rew: -9.428198679350317, nxt: 8, state:4, V: -2.857049669837579
nxt_rew: -4.769265508977696, nxt: 4, state:4, V: -4.769265508977696
nxt_rew: -8.97048844024539, nxt: 5, state:4, V: -7.261887619039044
4 -7.261887619039044
nxt_rew: -7.261887619039044, nxt: 1, state:5, V: -2.065471904759761
nxt_rew: -10.19231268321164, nxt: 9, state:5, V: -4.863550075562671
nxt_rew: -7.261887619039044, nxt: 4, state:5, V: -6.929021980322432
nxt_rew: -10.19231268321164, nxt: 6, state:5, V: -9.727100151125342
5 -9.727100151125342
nxt_rew: -10.311282470531296, nxt: 2, state:6, V: -2.827820617632824
nxt_rew: -9.720068341004662, nxt: 10, state:6, V: -5.5078377028839896
nxt_rew: -9.727100151125342, nxt: 5, state:6, V: -8.189612740665325
nxt_rew: -10.304252060712315, nxt: 7, state:6, V: -11.015675755843404
6 -11.015675755843404
nxt_rew: -11.335249083072995, nxt: 3, state:7, V: -3.0838122707682487
nxt_rew: -7.818036100128666, nxt: 11, state:7, V: -5.288321295800415
nxt_rew: -11.015675755843404, nxt: 6, state:7, V: -8.292240234761266
nxt_rew: -11.118303249939345, nxt: 7, state:7, V: -11.118303249939345
7 -11.118303249939345
nxt_rew: -7.261887619039044, nxt: 4, state:8, V: -2.065471904759761
nxt_rew: -10.362730900524184, nxt: 12, state:8, V: -4.906154629890807
nxt_rew: -7.513204299728386, nxt: 8, state:8, V: -7.513204299728386
nxt_rew: -10.19231268321164, nxt: 9, state:8, V: -10.311282470531296
8 -10.311282470531296
nxt_rew: -9.727100151125342, nxt: 5, state:9, V: -2.6817750377813354
nxt_rew: -10.304252060712315, nxt: 13, state:9, V: -5.507838052959414
nxt_rew: -10.311282470531296, nxt: 8, state:9, V: -8.335658670592238
nxt_rew: -9.720068341004662, nxt: 10, state:9, V: -11.015675755843404
9 -11.015675755843404
nxt_rew: -11.015675755843404, nxt: 6, state:10, V: -3.003918938960851
nxt_rew: -7.818036100128666, nxt: 14, state:10, V: -5.208427963993017
nxt_rew: -11.015675755843404, nxt: 9, state:10, V: -8.212346902953868
nxt_rew: -7.818036100128666, nxt: 11, state:10, V: -10.416855927986035
10 -10.416855927986035
nxt_rew: -11.118303249939345, nxt: 7, state:11, V: -3.029575812484836
nxt_rew: 0.0, nxt: 0, state:11, V: -3.279575812484836
nxt_rew: -10.416855927986035, nxt: 10, state:11, V: -6.133789794481345
nxt_rew: -8.338298819513511, nxt: 11, state:11, V: -8.338298819513511
11 -8.338298819513511
nxt_rew: -10.311282470531296, nxt: 8, state:12, V: -2.827820617632824
nxt_rew: -5.66850334276387, nxt: 12, state:12, V: -5.66850334276387
nxt_rew: -8.509186067894916, nxt: 12, state:12, V: -8.509186067894916
nxt_rew: -10.304252060712315, nxt: 13, state:12, V: -11.335249083072995
12 -11.335249083072995
nxt_rew: -11.015675755843404, nxt: 9, state:13, V: -3.003918938960851
nxt_rew: -5.82998195413893, nxt: 13, state:13, V: -5.82998195413893
nxt_rew: -11.335249083072995, nxt: 12, state:13, V: -8.913794224907178
nxt_rew: -7.818036100128666, nxt: 14, state:13, V: -11.118303249939345
13 -11.118303249939345
nxt_rew: -10.416855927986035, nxt: 10, state:14, V: -2.8542139819965087
nxt_rew: -5.058723007028675, nxt: 14, state:14, V: -5.058723007028675
nxt_rew: -11.118303249939345, nxt: 13, state:14, V: -8.088298819513511
nxt_rew: 0.0, nxt: 0, state:14, V: -8.338298819513511
14 -8.338298819513511
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
0 0.0
nxt_rew: -2.065471904759761, nxt: 1, state:1, V: -2.065471904759761
nxt_rew: -9.727100151125342, nxt: 5, state:1, V: -4.747246942541096
nxt_rew: 0.0, nxt: 0, state:1, V: -4.997246942541096
nxt_rew: -10.311282470531296, nxt: 2, state:1, V: -7.82506756017392
1 -7.82506756017392
nxt_rew: -2.827820617632824, nxt: 2, state:2, V: -2.827820617632824
nxt_rew: -11.015675755843404, nxt: 6, state:2, V: -5.831739556593675
nxt_rew: -7.82506756017392, nxt: 1, state:2, V: -8.038006446637155
nxt_rew: -11.335249083072995, nxt: 3, state:2, V: -11.121818717405404
2 -11.121818717405404
nxt_rew: -3.0838122707682487, nxt: 3, state:3, V: -3.0838122707682487
nxt_rew: -11.118303249939345, nxt: 7, state:3, V: -6.113388083253085
nxt_rew: -11.121818717405404, nxt: 2, state:3, V: -9.143842762604436
nxt_rew: -12.227655033372685, nxt: 3, state:3, V: -12.227655033372685
3 -12.227655033372685
nxt_rew: 0.0, nxt: 0, state:4, V: -0.25
nxt_rew: -10.311282470531296, nxt: 8, state:4, V: -3.077820617632824
nxt_rew: -5.143292522392585, nxt: 4, state:4, V: -5.143292522392585
nxt_rew: -9.727100151125342, nxt: 5, state:4, V: -7.82506756017392
4 -7.82506756017392
nxt_rew: -7.82506756017392, nxt: 1, state:5, V: -2.20626689004348
nxt_rew: -11.015675755843404, nxt: 9, state:5, V: -5.210185829004331
nxt_rew: -7.82506756017392, nxt: 4, state:5, V: -7.416452719047811
nxt_rew: -11.015675755843404, nxt: 6, state:5, V: -10.420371658008662
5 -10.420371658008662
nxt_rew: -11.121818717405404, nxt: 2, state:6, V: -3.030454679351351
nxt_rew: -10.416855927986035, nxt: 10, state:6, V: -5.88466866134786
nxt_rew: -10.420371658008662, nxt: 5, state:6, V: -8.739761575850025
nxt_rew: -11.118303249939345, nxt: 7, state:6, V: -11.769337388334861
6 -11.769337388334861
nxt_rew: -12.227655033372685, nxt: 3, state:7, V: -3.306913758343171
nxt_rew: -8.338298819513511, nxt: 11, state:7, V: -5.641488463221549
nxt_rew: -11.769337388334861, nxt: 6, state:7, V: -8.833822810305264
nxt_rew: -11.8633986227901, nxt: 7, state:7, V: -11.8633986227901
7 -11.8633986227901
nxt_rew: -7.82506756017392, nxt: 4, state:8, V: -2.20626689004348
nxt_rew: -11.335249083072995, nxt: 12, state:8, V: -5.290079160811729
nxt_rew: -8.117899778444553, nxt: 8, state:8, V: -8.117899778444553
nxt_rew: -11.015675755843404, nxt: 9, state:8, V: -11.121818717405404
8 -11.121818717405404
nxt_rew: -10.420371658008662, nxt: 5, state:9, V: -2.8550929145021655
nxt_rew: -11.118303249939345, nxt: 13, state:9, V: -5.884668726987002
nxt_rew: -11.121818717405404, nxt: 8, state:9, V: -8.915123406338353
nxt_rew: -10.416855927986035, nxt: 10, state:9, V: -11.769337388334861
9 -11.769337388334861
nxt_rew: -11.769337388334861, nxt: 6, state:10, V: -3.1923343470837153
nxt_rew: -8.338298819513511, nxt: 14, state:10, V: -5.526909051962093
nxt_rew: -11.769337388334861, nxt: 9, state:10, V: -8.719243399045808
nxt_rew: -8.338298819513511, nxt: 11, state:10, V: -11.053818103924186
10 -11.053818103924186
nxt_rew: -11.8633986227901, nxt: 7, state:11, V: -3.215849655697525
nxt_rew: 0.0, nxt: 0, state:11, V: -3.465849655697525
nxt_rew: -11.053818103924186, nxt: 10, state:11, V: -6.479304181678572
nxt_rew: -8.81387888655695, nxt: 11, state:11, V: -8.81387888655695
11 -8.81387888655695
nxt_rew: -11.121818717405404, nxt: 8, state:12, V: -3.030454679351351
nxt_rew: -6.1142669501196, nxt: 12, state:12, V: -6.1142669501196
nxt_rew: -9.198079220887848, nxt: 12, state:12, V: -9.198079220887848
nxt_rew: -11.118303249939345, nxt: 13, state:12, V: -12.227655033372685
12 -12.227655033372685
nxt_rew: -11.769337388334861, nxt: 9, state:13, V: -3.1923343470837153
nxt_rew: -6.2219101595685515, nxt: 13, state:13, V: -6.2219101595685515
nxt_rew: -12.227655033372685, nxt: 12, state:13, V: -9.528823917911723
nxt_rew: -8.338298819513511, nxt: 14, state:13, V: -11.8633986227901
13 -11.8633986227901
nxt_rew: -11.053818103924186, nxt: 10, state:14, V: -3.0134545259810466
nxt_rew: -5.348029230859424, nxt: 14, state:14, V: -5.348029230859424
nxt_rew: -11.8633986227901, nxt: 13, state:14, V: -8.56387888655695
nxt_rew: 0.0, nxt: 0, state:14, V: -8.81387888655695
14 -8.81387888655695
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
0 0.0
nxt_rew: -2.20626689004348, nxt: 1, state:1, V: -2.20626689004348
nxt_rew: -10.420371658008662, nxt: 5, state:1, V: -5.061359804545646
nxt_rew: 0.0, nxt: 0, state:1, V: -5.311359804545646
nxt_rew: -11.121818717405404, nxt: 2, state:1, V: -8.341814483896997
1 -8.341814483896997
nxt_rew: -3.030454679351351, nxt: 2, state:2, V: -3.030454679351351
nxt_rew: -11.769337388334861, nxt: 6, state:2, V: -6.222789026435066
nxt_rew: -8.341814483896997, nxt: 1, state:2, V: -8.558242647409315
nxt_rew: -12.227655033372685, nxt: 3, state:2, V: -11.865156405752487
2 -11.865156405752487
nxt_rew: -3.306913758343171, nxt: 3, state:3, V: -3.306913758343171
nxt_rew: -11.8633986227901, nxt: 7, state:3, V: -6.522763414040696
nxt_rew: -11.865156405752487, nxt: 2, state:3, V: -9.739052515478818
nxt_rew: -13.045966273821989, nxt: 3, state:3, V: -13.045966273821989
3 -13.045966273821989
nxt_rew: 0.0, nxt: 0, state:4, V: -0.25
nxt_rew: -11.121818717405404, nxt: 8, state:4, V: -3.280454679351351
nxt_rew: -5.486721569394831, nxt: 4, state:4, V: -5.486721569394831
nxt_rew: -10.420371658008662, nxt: 5, state:4, V: -8.341814483896997
4 -8.341814483896997
nxt_rew: -8.341814483896997, nxt: 1, state:5, V: -2.335453620974249
nxt_rew: -11.769337388334861, nxt: 9, state:5, V: -5.5277879680579645
nxt_rew: -8.341814483896997, nxt: 4, state:5, V: -7.863241589032214
nxt_rew: -11.769337388334861, nxt: 6, state:5, V: -11.055575936115929
5 -11.055575936115929
nxt_rew: -11.865156405752487, nxt: 2, state:6, V: -3.2162891014381216
nxt_rew: -11.053818103924186, nxt: 10, state:6, V: -6.229743627419168
nxt_rew: -11.055575936115929, nxt: 5, state:6, V: -9.24363761144815
nxt_rew: -11.8633986227901, nxt: 7, state:6, V: -12.459487267145676
6 -12.459487267145676
nxt_rew: -13.045966273821989, nxt: 3, state:7, V: -3.5114915684554973
nxt_rew: -8.81387888655695, nxt: 11, state:7, V: -5.964961290094735
nxt_rew: -12.459487267145676, nxt: 6, state:7, V: -9.329833106881154
nxt_rew: -12.545682762578679, nxt: 7, state:7, V: -12.545682762578679
7 -12.545682762578679
nxt_rew: -8.341814483896997, nxt: 4, state:8, V: -2.335453620974249
nxt_rew: -12.227655033372685, nxt: 12, state:8, V: -5.64236737931742
nxt_rew: -8.672822058668771, nxt: 8, state:8, V: -8.672822058668771
nxt_rew: -11.769337388334861, nxt: 9, state:8, V: -11.865156405752487
8 -11.865156405752487
nxt_rew: -11.055575936115929, nxt: 5, state:9, V: -3.0138939840289822
nxt_rew: -11.8633986227901, nxt: 13, state:9, V: -6.229743639726507
nxt_rew: -11.865156405752487, nxt: 8, state:9, V: -9.446032741164629
nxt_rew: -11.053818103924186, nxt: 10, state:9, V: -12.459487267145676
9 -12.459487267145676
nxt_rew: -12.459487267145676, nxt: 6, state:10, V: -3.364871816786419
nxt_rew: -8.81387888655695, nxt: 14, state:10, V: -5.818341538425656
nxt_rew: -12.459487267145676, nxt: 9, state:10, V: -9.183213355212075
nxt_rew: -8.81387888655695, nxt: 11, state:10, V: -11.636683076851313
10 -11.636683076851313
nxt_rew: -12.545682762578679, nxt: 7, state:11, V: -3.3864206906446697
nxt_rew: 0.0, nxt: 0, state:11, V: -3.6364206906446697
nxt_rew: -11.636683076851313, nxt: 10, state:11, V: -6.795591459857498
nxt_rew: -9.249061181496735, nxt: 11, state:11, V: -9.249061181496735
11 -9.249061181496735
nxt_rew: -11.865156405752487, nxt: 8, state:12, V: -3.2162891014381216
nxt_rew: -6.523202859781293, nxt: 12, state:12, V: -6.523202859781293
nxt_rew: -9.830116618124464, nxt: 12, state:12, V: -9.830116618124464
nxt_rew: -11.8633986227901, nxt: 13, state:12, V: -13.045966273821989
12 -13.045966273821989
nxt_rew: -12.459487267145676, nxt: 9, state:13, V: -3.364871816786419
nxt_rew: -6.580721472483944, nxt: 13, state:13, V: -6.580721472483944
nxt_rew: -13.045966273821989, nxt: 12, state:13, V: -10.092213040939441
nxt_rew: -8.81387888655695, nxt: 14, state:13, V: -12.545682762578679
13 -12.545682762578679
nxt_rew: -11.636683076851313, nxt: 10, state:14, V: -3.159170769212828
nxt_rew: -5.6126404908520655, nxt: 14, state:14, V: -5.6126404908520655
nxt_rew: -12.545682762578679, nxt: 13, state:14, V: -8.999061181496735
nxt_rew: 0.0, nxt: 0, state:14, V: -9.249061181496735
14 -9.249061181496735
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
0 0.0
nxt_rew: -2.335453620974249, nxt: 1, state:1, V: -2.335453620974249
nxt_rew: -11.055575936115929, nxt: 5, state:1, V: -5.349347605003231
nxt_rew: 0.0, nxt: 0, state:1, V: -5.599347605003231
nxt_rew: -11.865156405752487, nxt: 2, state:1, V: -8.815636706441353
1 -8.815636706441353
nxt_rew: -3.2162891014381216, nxt: 2, state:2, V: -3.2162891014381216
nxt_rew: -12.459487267145676, nxt: 6, state:2, V: -6.5811609182245405
nxt_rew: -8.815636706441353, nxt: 1, state:2, V: -9.035070094834879
nxt_rew: -13.045966273821989, nxt: 3, state:2, V: -12.546561663290376
2 -12.546561663290376
nxt_rew: -3.5114915684554973, nxt: 3, state:3, V: -3.5114915684554973
nxt_rew: -12.545682762578679, nxt: 7, state:3, V: -6.897912259100167
nxt_rew: -12.546561663290376, nxt: 2, state:3, V: -10.284552674922761
nxt_rew: -13.796044243378258, nxt: 3, state:3, V: -13.796044243378258
3 -13.796044243378258
nxt_rew: 0.0, nxt: 0, state:4, V: -0.25
nxt_rew: -11.865156405752487, nxt: 8, state:4, V: -3.4662891014381216
nxt_rew: -5.801742722412371, nxt: 4, state:4, V: -5.801742722412371
nxt_rew: -11.055575936115929, nxt: 5, state:4, V: -8.815636706441353
4 -8.815636706441353
nxt_rew: -8.815636706441353, nxt: 1, state:5, V: -2.4539091766103382
nxt_rew: -12.459487267145676, nxt: 9, state:5, V: -5.818780993396757
nxt_rew: -8.815636706441353, nxt: 4, state:5, V: -8.272690170007095
nxt_rew: -12.459487267145676, nxt: 6, state:5, V: -11.637561986793514
5 -11.637561986793514
nxt_rew: -12.546561663290376, nxt: 2, state:6, V: -3.386640415822594
nxt_rew: -11.636683076851313, nxt: 10, state:6, V: -6.545811185035422
nxt_rew: -11.637561986793514, nxt: 5, state:6, V: -9.7052016817338
nxt_rew: -12.545682762578679, nxt: 7, state:6, V: -13.09162237237847
6 -13.09162237237847
nxt_rew: -13.796044243378258, nxt: 3, state:7, V: -3.6990110608445645
nxt_rew: -9.249061181496735, nxt: 11, state:7, V: -6.261276356218748
nxt_rew: -13.09162237237847, nxt: 6, state:7, V: -9.784181949313366
nxt_rew: -13.170602639958036, nxt: 7, state:7, V: -13.170602639958036
7 -13.170602639958036
nxt_rew: -8.815636706441353, nxt: 4, state:8, V: -2.4539091766103382
nxt_rew: -13.045966273821989, nxt: 12, state:8, V: -5.9654007450658355
nxt_rew: -9.181689846503957, nxt: 8, state:8, V: -9.181689846503957
nxt_rew: -12.459487267145676, nxt: 9, state:8, V: -12.546561663290376
8 -12.546561663290376
nxt_rew: -11.637561986793514, nxt: 5, state:9, V: -3.1593904966983786
nxt_rew: -12.545682762578679, nxt: 13, state:9, V: -6.545811187343048
nxt_rew: -12.546561663290376, nxt: 8, state:9, V: -9.932451603165642
nxt_rew: -11.636683076851313, nxt: 10, state:9, V: -13.09162237237847
9 -13.09162237237847
nxt_rew: -13.09162237237847, nxt: 6, state:10, V: -3.5229055930946176
nxt_rew: -9.249061181496735, nxt: 14, state:10, V: -6.085170888468801
nxt_rew: -13.09162237237847, nxt: 9, state:10, V: -9.608076481563419
nxt_rew: -9.249061181496735, nxt: 11, state:10, V: -12.170341776937603
10 -12.170341776937603
nxt_rew: -13.170602639958036, nxt: 7, state:11, V: -3.542650659989509
nxt_rew: 0.0, nxt: 0, state:11, V: -3.792650659989509
nxt_rew: -12.170341776937603, nxt: 10, state:11, V: -7.08523610422391
nxt_rew: -9.647501399598093, nxt: 11, state:11, V: -9.647501399598093
11 -9.647501399598093
nxt_rew: -12.546561663290376, nxt: 8, state:12, V: -3.386640415822594
nxt_rew: -6.898131984278091, nxt: 12, state:12, V: -6.898131984278091
nxt_rew: -10.409623552733589, nxt: 12, state:12, V: -10.409623552733589
nxt_rew: -12.545682762578679, nxt: 13, state:12, V: -13.796044243378258
12 -13.796044243378258
nxt_rew: -13.09162237237847, nxt: 9, state:13, V: -3.5229055930946176
nxt_rew: -6.909326283739287, nxt: 13, state:13, V: -6.909326283739287
nxt_rew: -13.796044243378258, nxt: 12, state:13, V: -10.608337344583852
nxt_rew: -9.249061181496735, nxt: 14, state:13, V: -13.170602639958036
13 -13.170602639958036
nxt_rew: -12.170341776937603, nxt: 10, state:14, V: -3.2925854442344007
nxt_rew: -5.854850739608585, nxt: 14, state:14, V: -5.854850739608585
nxt_rew: -13.170602639958036, nxt: 13, state:14, V: -9.397501399598093
nxt_rew: 0.0, nxt: 0, state:14, V: -9.647501399598093
14 -9.647501399598093
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
nxt_rew: 0.0, nxt: 0, state:0, V: 0.0
0 0.0
nxt_rew: -2.4539091766103382, nxt: 1, state:1, V: -2.4539091766103382
nxt_rew: -11.637561986793514, nxt: 5, state:1, V: -5.613299673308717
nxt_rew: 0.0, nxt: 0, state:1, V: -5.863299673308717
nxt_rew: -12.546561663290376, nxt: 2, state:1, V: -9.24994008913131
1 -9.24994008913131
nxt_rew: -3.386640415822594, nxt: 2, state:2, V: -3.386640415822594
|
tutorials/hosting_capacity.ipynb | ###Markdown
Hosting Capacity

The term PV hosting capacity is defined as the maximum PV capacity which can be connected to a specific grid, while still complying with relevant grid codes and grid planning principles. Here we will introduce a basic algorithm to calculate PV hosting capacity with pandapower.

The basic idea of calculating hosting capacity is to increase the PV installation until a violation of any planning principle or constraint occurs. To analyse hosting capacity, we need three basic building blocks:

1. Evaluating constraint violations
2. Choosing connection points for new PV plants
3. Defining the installed power of new PV plants

Evaluation of constraint violations

Our example function that evaluates constraint violations is defined as:
###Code
import pandapower as pp
def violations(net):
pp.runpp(net)
if net.res_line.loading_percent.max() > 50:
return (True, "Line \n Overloading")
elif net.res_trafo.loading_percent.max() > 50:
return (True, "Transformer \n Overloading")
elif net.res_bus.vm_pu.max() > 1.04:
return (True, "Voltage \n Violation")
else:
return (False, None)
###Output
_____no_output_____
###Markdown
The function runs a power flow and then checks for line loading and transformer loading (both of which have to be below 50%) and for voltage rise (which has to be below 1.04 pu). The function returns a boolean flag that signals whether any constraint is violated, as well as a string that indicates the type of constraint violation.

Choosing a connection bus

If new PV plants are installed, a connection bus has to be chosen. Here, we choose one bus at random from among the buses that have a load connected:
###Code
from numpy.random import choice
def chose_bus(net):
return choice(net.load.bus.values)
###Output
_____no_output_____
###Markdown
Choosing a PV plant size

The function that returns a plant size is given as:
###Code
from numpy.random import normal
def get_plant_size_mw():
return normal(loc=0.5, scale=0.05)
###Output
_____no_output_____
###Markdown
This function returns a random value from a normal distribution with a mean of 0.5 MW and a standard deviation of 0.05 MW. Depending on the available information, it would also be possible to use other probability distributions, such as a Weibull distribution (a minimal, illustrative sketch follows the network-loading cell below), or to draw values from existing plant sizes.

Evaluating Hosting Capacity

We now use these building blocks to evaluate hosting capacity in a generic network. We use the MV Oberrhein network from the pandapower networks package as an example:
###Code
import pandapower.networks as nw
def load_network():
return nw.mv_oberrhein(scenario="generation")
###Output
_____no_output_____
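###Markdown
As mentioned above, the normal-distribution sampler could be replaced by another distribution. The following is a minimal, purely illustrative sketch of a Weibull-based alternative; the shape parameter k=2.0 and the 0.5 MW mean are assumptions chosen for this example, not values prescribed by the tutorial:
###Code
from math import gamma
from numpy.random import weibull
def get_plant_size_weibull_mw(k=2.0, mean_mw=0.5):
    # numpy's weibull(k) draws from a Weibull with scale 1 and mean gamma(1 + 1/k),
    # so the sample is rescaled to obtain the desired mean in MW
    return mean_mw * weibull(k) / gamma(1 + 1/k)
###Output
_____no_output_____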
###Markdown
The hosting capacity is then evaluated like this:
###Code
import pandas as pd
iterations = 50
results = pd.DataFrame(columns=["installed", "violation"])
for i in range(iterations):
net = load_network()
installed_mw = 0
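    # Keep adding PV plants until the first constraint violation occurs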
while 1:
violated, violation_type = violations(net)
if violated:
results.loc[i] = [installed_mw, violation_type]
break
else:
plant_size = get_plant_size_mw()
pp.create_sgen(net, chose_bus(net), p_mw=plant_size, q_mvar=0)
installed_mw += plant_size
###Output
_____no_output_____
###Markdown
This algorithm adds new PV plants until a violation of any constraint occurs. Then, it saves the installed PV capacity. This is carried out for a number of iterations (here: 50) to get a distribution of hosting capacity values depending on connection points and plant sizes.

The results can be visualized using matplotlib and seaborn:
###Code
import matplotlib.pyplot as plt
%matplotlib inline
plt.rc('xtick', labelsize=18) # fontsize of the tick labels
plt.rc('ytick', labelsize=18) # fontsize of the tick labels
plt.rc('legend', fontsize=18) # fontsize of the legend
plt.rc('axes', labelsize=20) # fontsize of the axis labels
plt.rcParams['font.size'] = 20
import seaborn as sns
sns.set_style("whitegrid", {'axes.grid' : False})
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(10,5))
ax = axes[0]
sns.boxplot(results.installed, width=.1, ax=ax, orient="v")
ax.set_xticklabels([""])
ax.set_ylabel("Installed Capacity [MW]")
ax = axes[1]
ax.axis("equal")
results.violation.value_counts().plot(kind="pie", ax=ax, autopct=lambda x:"%.0f %%"%x)
ax.set_ylabel("")
ax.set_xlabel("")
sns.despine()
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Hosting Capacity

The term PV hosting capacity is defined as the maximum PV capacity which can be connected to a specific grid, while still complying with relevant grid codes and grid planning principles. Here we will introduce a basic algorithm to calculate PV hosting capacity with pandapower.

The basic idea of calculating hosting capacity is to increase the PV installation until a violation of any planning principle or constraint occurs. To analyse hosting capacity, we need three basic building blocks:

1. Evaluating constraint violations
2. Choosing connection points for new PV plants
3. Defining the installed power of new PV plants

Evaluation of constraint violations

Our example function that evaluates constraint violations is defined as:
###Code
import pandapower as pp
def violations(net):
pp.runpp(net)
if net.res_line.loading_percent.max() > 50:
return (True, "Line \n Overloading")
elif net.res_trafo.loading_percent.max() > 50:
return (True, "Transformer \n Overloading")
elif net.res_bus.vm_pu.max() > 1.04:
return (True, "Voltage \n Violation")
else:
return (False, None)
###Output
_____no_output_____
###Markdown
The function runs a power flow and then checks for line loading and transformer loading (both of which have to be below 50%) and for voltage rise (which has to be below 1.04 pu). The function returns a boolean flag that signals whether any constraint is violated, as well as a string that indicates the type of constraint violation.

Choosing a connection bus

If new PV plants are installed, a connection bus has to be chosen. Here, we choose one bus at random from among the buses that have a load connected:
###Code
from numpy.random import choice
def chose_bus(net):
return choice(net.load.bus.values)
###Output
_____no_output_____
###Markdown
Choosing a PV plant size

The function that returns a plant size is given as:
###Code
from numpy.random import normal
def get_plant_size_kw():
return normal(loc=500, scale=50)
###Output
_____no_output_____
###Markdown
This function returns a random value from a normal distribution with a mean of 500 kW and a standard deviation of 50 kW. Depending on the available information, it would also be possible to use other probability distributions, such as a Weibull distribution, or to draw values from existing plant sizes.

Evaluating Hosting Capacity

We now use these building blocks to evaluate hosting capacity in a generic network. We use the MV Oberrhein network from the pandapower networks package as an example:
###Code
import pandapower.networks as nw
def load_network():
return nw.mv_oberrhein(scenario="generation")
###Output
_____no_output_____
###Markdown
The hosting capacity is then evaluated like this:
###Code
import pandas as pd
iterations = 50
results = pd.DataFrame(columns=["installed", "violation"])
for i in range(iterations):
net = load_network()
installed_kw = 0
while 1:
violated, violation_type = violations(net)
if violated:
results.loc[i] = [installed_kw, violation_type]
break
else:
plant_size = get_plant_size_kw()
pp.create_sgen(net, chose_bus(net), p_kw=-plant_size, q_kvar=0)
installed_kw += plant_size
###Output
_____no_output_____
###Markdown
This algorithm adds new PV plants until a violation of any constraint occurs. Then, it saves the installed PV capacity. This is carried out for a number of iterations (here: 50) to get a distribution of hosting capacity values depending on connection points and plant sizes.

The results can be visualized using matplotlib and seaborn:
###Code
import matplotlib.pyplot as plt
%matplotlib inline
plt.rc('xtick', labelsize=18) # fontsize of the tick labels
plt.rc('ytick', labelsize=18) # fontsize of the tick labels
plt.rc('legend', fontsize=18) # fontsize of the legend
plt.rc('axes', labelsize=20) # fontsize of the axis labels
plt.rcParams['font.size'] = 20
import seaborn as sns
sns.set_style("whitegrid", {'axes.grid' : False})
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(10,5))
ax = axes[0]
sns.boxplot(results, width=.1, ax=ax)
ax.set_xticklabels([""])
ax.set_ylabel("Installed Capacity [kW]")
ax = axes[1]
results.violation.value_counts().plot(kind="pie", ax=ax, autopct=lambda x:"%.0f %%"%x)
ax.set_ylabel("")
ax.set_xlabel("")
sns.despine()
plt.tight_layout()
###Output
D:\Python\Anaconda\lib\site-packages\seaborn\categorical.py:2171: UserWarning: The boxplot API has been changed. Attempting to adjust your arguments for the new API (which might not work). Please update your code. See the version 0.6 release notes for more info.
warnings.warn(msg, UserWarning)
###Markdown
Hosting Capacity

The term PV hosting capacity is defined as the maximum PV capacity which can be connected to a specific grid, while still complying with relevant grid codes and grid planning principles. Here we will introduce a basic algorithm to calculate PV hosting capacity with pandapower.

The basic idea of calculating hosting capacity is to increase the PV installation until a violation of any planning principle or constraint occurs. To analyse hosting capacity, we need three basic building blocks:

1. Evaluating constraint violations
2. Choosing connection points for new PV plants
3. Defining the installed power of new PV plants

Evaluation of constraint violations

Our example function that evaluates constraint violations is defined as:
###Code
import pandapower as pp
def violations(net):
pp.runpp(net)
if net.res_line.loading_percent.max() > 50:
return (True, "Line \n Overloading")
elif net.res_trafo.loading_percent.max() > 50:
return (True, "Transformer \n Overloading")
elif net.res_bus.vm_pu.max() > 1.04:
return (True, "Voltage \n Violation")
else:
return (False, None)
###Output
_____no_output_____
###Markdown
The function runs a power flow and then checks for line loading and transformer loading (both of which have to be below 50%) and for voltage rise (which has to be below 1.04 pu). The function returns a boolean flag that signals whether any constraint is violated, as well as a string that indicates the type of constraint violation.

Choosing a connection bus

If new PV plants are installed, a connection bus has to be chosen. Here, we choose one bus at random from among the buses that have a load connected:
###Code
from numpy.random import choice
def chose_bus(net):
return choice(net.load.bus.values)
###Output
_____no_output_____
###Markdown
Choosing a PV plant size

The function that returns a plant size is given as:
###Code
from numpy.random import normal
def get_plant_size_mw():
return normal(loc=0.5, scale=0.05)
###Output
_____no_output_____
###Markdown
This function returns a random value from a normal distribution with a mean of 0.5 MW and a standard deviation of 0.05 MW. Depending on the available information, it would also be possible to use other probability distributions, such as a Weibull distribution, or to draw values from existing plant sizes.

Evaluating Hosting Capacity

We now use these building blocks to evaluate hosting capacity in a generic network. We use the MV Oberrhein network from the pandapower networks package as an example:
###Code
import pandapower.networks as nw
def load_network():
return nw.mv_oberrhein(scenario="generation")
###Output
_____no_output_____
###Markdown
The hosting capacity is then evaluated like this:
###Code
import pandas as pd
iterations = 50
results = pd.DataFrame(columns=["installed", "violation"])
for i in range(iterations):
net = load_network()
    installed_mw = 0
while 1:
violated, violation_type = violations(net)
if violated:
            results.loc[i] = [installed_mw, violation_type]
break
else:
            plant_size = get_plant_size_mw()
            pp.create_sgen(net, chose_bus(net), p_mw=plant_size, q_mvar=0)
            installed_mw += plant_size
###Output
_____no_output_____
###Markdown
This algorithm adds new PV plants until a violation of any constraint occurs. Then, it saves the installed PV capacity. This is carried out for a number of iterations (here: 50) to get a distribution of hosting capacity values depending on connection points and plant sizes.

The results can be visualized using matplotlib and seaborn:
###Code
import matplotlib.pyplot as plt
%matplotlib inline
plt.rc('xtick', labelsize=18) # fontsize of the tick labels
plt.rc('ytick', labelsize=18) # fontsize of the tick labels
plt.rc('legend', fontsize=18) # fontsize of the legend
plt.rc('axes', labelsize=20) # fontsize of the axis labels
plt.rcParams['font.size'] = 20
import seaborn as sns
sns.set_style("whitegrid", {'axes.grid' : False})
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(10,5))
ax = axes[0]
sns.boxplot(results.installed, width=.1, ax=ax, orient="v")
ax.set_xticklabels([""])
ax.set_ylabel("Installed Capacity [kW]")
ax = axes[1]
ax.axis("equal")
results.violation.value_counts().plot(kind="pie", ax=ax, autopct=lambda x:"%.0f %%"%x)
ax.set_ylabel("")
ax.set_xlabel("")
sns.despine()
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Hosting Capacity

The term PV hosting capacity is defined as the maximum PV capacity which can be connected to a specific grid, while still complying with relevant grid codes and grid planning principles. Here we will introduce a basic algorithm to calculate PV hosting capacity with pandapower.

The basic idea of calculating hosting capacity is to increase the PV installation until a violation of any planning principle or constraint occurs. To analyse hosting capacity, we need three basic building blocks:

1. Evaluating constraint violations
2. Choosing connection points for new PV plants
3. Defining the installed power of new PV plants

Evaluation of constraint violations

Our example function that evaluates constraint violations is defined as:
###Code
import pandapower as pp
def violations(net):
pp.runpp(net)
if net.res_line.loading_percent.max() > 50:
return (True, "Line \n Overloading")
elif net.res_trafo.loading_percent.max() > 50:
return (True, "Transformer \n Overloading")
elif net.res_bus.vm_pu.max() > 1.04:
return (True, "Voltage \n Violation")
else:
return (False, None)
###Output
_____no_output_____
###Markdown
The function runs a power flow and then checks for line loading and transformer loading (both of which have to be below 50%) and for voltage rise (which has to be below 1.04 pu). The function returns a boolean flag that signals whether any constraint is violated, as well as a string that indicates the type of constraint violation.

Choosing a connection bus

If new PV plants are installed, a connection bus has to be chosen. Here, we choose one bus at random from among the buses that have a load connected:
###Code
from numpy.random import choice
def chose_bus(net):
return choice(net.load.bus.values)
###Output
_____no_output_____
###Markdown
Choosing a PV plant size

The function that returns a plant size is given as:
###Code
from numpy.random import normal
def get_plant_size_mw():
return normal(loc=0.5, scale=0.05)
###Output
_____no_output_____
###Markdown
This function returns a random value from a normal distribution with a mean of 0.5 MW and a standard deviation of 0.05 MW. Depending on the existing information, it would also be possible to use other probability distributions, such as a Weibull distribution, or to draw values from existing plant sizes.

Evaluating Hosting Capacity

We now use these building blocks to evaluate hosting capacity in a generic network. We use the MV Oberrhein network from the pandapower networks package as an example:
###Code
import pandapower.networks as nw
def load_network():
return nw.mv_oberrhein(scenario="generation")
###Output
_____no_output_____
###Markdown
The hosting capacity is then evaluated like this:
###Code
import pandas as pd
iterations = 50
results = pd.DataFrame(columns=["installed", "violation"])
for i in range(iterations):
net = load_network()
installed_mw = 0
while 1:
violated, violation_type = violations(net)
if violated:
results.loc[i] = [installed_mw, violation_type]
break
else:
plant_size = get_plant_size_mw()
pp.create_sgen(net, chose_bus(net), p_mw=plant_size, q_mvar=0)
installed_mw += plant_size
###Output
_____no_output_____
###Markdown
This algorithm adds new PV plants until a violation of any constraint occurs. Then, it saves the installed PV capacity. This is carried out for a number of iterations (here: 50) to get a distribution of hosting capacity values depending on connection points and plant sizes.

The results can be visualized using matplotlib and seaborn:
###Code
import matplotlib.pyplot as plt
%matplotlib inline
plt.rc('xtick', labelsize=18) # fontsize of the tick labels
plt.rc('ytick', labelsize=18) # fontsize of the tick labels
plt.rc('legend', fontsize=18) # fontsize of the legend
plt.rc('axes', labelsize=20) # fontsize of the axis labels
plt.rcParams['font.size'] = 20
import seaborn as sns
sns.set_style("whitegrid", {'axes.grid' : False})
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(10,5))
ax = axes[0]
sns.boxplot(results.installed, width=.1, ax=ax, orient="v")
ax.set_xticklabels([""])
ax.set_ylabel("Installed Capacity [MW]")
ax = axes[1]
ax.axis("equal")
results.violation.value_counts().plot(kind="pie", ax=ax, autopct=lambda x:"%.0f %%"%x)
ax.set_ylabel("")
ax.set_xlabel("")
sns.despine()
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Hosting Capacity

The term PV hosting capacity is defined as the maximum PV capacity which can be connected to a specific grid, while still complying with relevant grid codes and grid planning principles. Here we will introduce a basic algorithm to calculate PV hosting capacity with pandapower.

The basic idea of calculating hosting capacity is to increase the PV installation until a violation of any planning principle or constraint occurs. To analyse hosting capacity, we need three basic building blocks:

1. Evaluating constraint violations
2. Choosing connection points for new PV plants
3. Defining the installed power of new PV plants

Evaluation of constraint violations

Our example function that evaluates constraint violations is defined as:
###Code
import pandapower as pp
def violations(net):
pp.runpp(net)
if net.res_line.loading_percent.max() > 50:
return (True, "Line \n Overloading")
elif net.res_trafo.loading_percent.max() > 50:
return (True, "Transformer \n Overloading")
elif net.res_bus.vm_pu.max() > 1.04:
return (True, "Voltage \n Violation")
else:
return (False, None)
###Output
_____no_output_____
###Markdown
The function runs a power flow and then checks for line loading and transformer loading (both of which have to be below 50%) and for voltage rise (which has to be below 1.04 pu). The function returns a boolean flag that signals whether any constraint is violated, as well as a string that indicates the type of constraint violation.

Choosing a connection bus

If new PV plants are installed, a connection bus has to be chosen. Here, we choose one bus at random from among the buses that have a load connected:
###Code
from numpy.random import choice
def chose_bus(net):
return choice(net.load.bus.values)
###Output
_____no_output_____
###Markdown
Choosing a PV plant size

The function that returns a plant size is given as:
###Code
from numpy.random import normal
def get_plant_size_mw():
return normal(loc=0.5, scale=0.05)
###Output
_____no_output_____
###Markdown
This function returns a random value from a normal distribution with a mean of 0.5 MW and a standard deviation of 0.05 MW. Depending on the existing information, it would also be possible to use other probability distributions, such as a Weibull distribution, or to draw values from existing plant sizes.

Evaluating Hosting Capacity

We now use these building blocks to evaluate hosting capacity in a generic network. We use the MV Oberrhein network from the pandapower networks package as an example:
###Code
import pandapower.networks as nw
def load_network():
return nw.mv_oberrhein(scenario="generation")
###Output
_____no_output_____
###Markdown
The hosting capacity is then evaluated like this:
###Code
import pandas as pd
iterations = 50
results = pd.DataFrame(columns=["installed", "violation"])
for i in range(iterations):
net = load_network()
installed_mw = 0
while 1:
violated, violation_type = violations(net)
if violated:
results.loc[i] = [installed_mw, violation_type]
break
else:
plant_size = get_plant_size_mw()
pp.create_sgen(net, chose_bus(net), p_mw=plant_size, q_mvar=0)
installed_mw += plant_size
###Output
_____no_output_____
###Markdown
This algorithm adds new PV plants until a violation of any constraint occurs. Then, it saves the installed PV capacity. This is carried out for a number of iterations (here: 50) to get a distribution of hosting capacity values depending on connection points and plant sizes.

The results can be visualized using matplotlib and seaborn:
###Code
import matplotlib.pyplot as plt
%matplotlib inline
plt.rc('xtick', labelsize=18) # fontsize of the tick labels
plt.rc('ytick', labelsize=18) # fontsize of the tick labels
plt.rc('legend', fontsize=18) # fontsize of the legend
plt.rc('axes', labelsize=20) # fontsize of the axis labels
plt.rcParams['font.size'] = 20
import seaborn as sns
sns.set_style("whitegrid", {'axes.grid' : False})
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(10,5))
ax = axes[0]
sns.boxplot(results.installed, width=.1, ax=ax, orient="v")
ax.set_xticklabels([""])
ax.set_ylabel("Installed Capacity [MW]")
ax = axes[1]
ax.axis("equal")
results.violation.value_counts().plot(kind="pie", ax=ax, autopct=lambda x:"%.0f %%"%x)
ax.set_ylabel("")
ax.set_xlabel("")
sns.despine()
plt.tight_layout()
###Output
C:\ProgramData\Anaconda3\lib\site-packages\seaborn\_decorators.py:36: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
warnings.warn(
C:\ProgramData\Anaconda3\lib\site-packages\seaborn\_core.py:1326: UserWarning: Vertical orientation ignored with only `x` specified.
warnings.warn(single_var_warning.format("Vertical", "x"))
C:\Users\KRONTI~1\AppData\Local\Temp/ipykernel_21244/1743167715.py:15: UserWarning: FixedFormatter should only be used together with FixedLocator
ax.set_xticklabels([""])
###Markdown
Hosting Capacity

The term PV hosting capacity is defined as the maximum PV capacity which can be connected to a specific grid, while still complying with relevant grid codes and grid planning principles. Here we will introduce a basic algorithm to calculate PV hosting capacity with pandapower.

The basic idea of calculating hosting capacity is to increase the PV installation until a violation of any planning principle or constraint occurs. To analyse hosting capacity, we need three basic building blocks:

1. Evaluating constraint violations
2. Choosing connection points for new PV plants
3. Defining the installed power of new PV plants

Evaluation of constraint violations

Our example function that evaluates constraint violations is defined as:
###Code
import pandapower as pp
def violations(net):
pp.runpp(net)
if net.res_line.loading_percent.max() > 50:
return (True, "Line \n Overloading")
elif net.res_trafo.loading_percent.max() > 50:
return (True, "Transformer \n Overloading")
elif net.res_bus.vm_pu.max() > 1.04:
return (True, "Voltage \n Violation")
else:
return (False, None)
###Output
_____no_output_____
###Markdown
The function runs a power flow and then checks for line loading and transformer loading (both of which have to be below 50%) and for voltage rise (which has to be below 1.04 pu). The function returns a boolean flag that signals whether any constraint is violated, as well as a string that indicates the type of constraint violation.

Choosing a connection bus

If new PV plants are installed, a connection bus has to be chosen. Here, we choose one bus at random from among the buses that have a load connected:
###Code
from numpy.random import choice
def chose_bus(net):
return choice(net.load.bus.values)
###Output
_____no_output_____
###Markdown
Choosing a PV plant size

The function that returns a plant size is given as:
###Code
from numpy.random import normal
def get_plant_size_kw():
return normal(loc=500, scale=50)
###Output
_____no_output_____
###Markdown
This function returns a random value from a normal distribution with a mean of 500 kW and a standard deviation of 50 kW. Depending on the available information, it would also be possible to use other probability distributions, such as a Weibull distribution, or to draw values from existing plant sizes.

Evaluating Hosting Capacity

We now use these building blocks to evaluate hosting capacity in a generic network. We use the MV Oberrhein network from the pandapower networks package as an example:
###Code
import pandapower.networks as nw
def load_network():
return nw.mv_oberrhein(scenario="generation")
###Output
_____no_output_____
###Markdown
The hosting capacity is then evaluated like this:
###Code
import pandas as pd
iterations = 50
results = pd.DataFrame(columns=["installed", "violation"])
for i in range(iterations):
net = load_network()
installed_kw = 0
while 1:
violated, violation_type = violations(net)
if violated:
results.loc[i] = [installed_kw, violation_type]
break
else:
plant_size = get_plant_size_kw()
pp.create_sgen(net, chose_bus(net), p_kw=-plant_size, q_kvar=0)
installed_kw += plant_size
###Output
_____no_output_____
###Markdown
This algorithm adds new PV plants until a violation of any constraint occurs. Then, it saves the installed PV capacity. This is carried out for a number of iterations (here: 50) to get a distribution of hosting capacity values depending on connection points and plant sizes.

The results can be visualized using matplotlib and seaborn:
###Code
import matplotlib.pyplot as plt
%matplotlib inline
plt.rc('xtick', labelsize=18) # fontsize of the tick labels
plt.rc('ytick', labelsize=18) # fontsize of the tick labels
plt.rc('legend', fontsize=18) # fontsize of the legend
plt.rc('axes', labelsize=20) # fontsize of the axis labels
plt.rcParams['font.size'] = 20
import seaborn as sns
sns.set_style("whitegrid", {'axes.grid' : False})
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(10,5))
ax = axes[0]
sns.boxplot(results.installed, width=.1, ax=ax, orient="v")
ax.set_xticklabels([""])
ax.set_ylabel("Installed Capacity [kW]")
ax = axes[1]
ax.axis("equal")
results.violation.value_counts().plot(kind="pie", ax=ax, autopct=lambda x:"%.0f %%"%x)
ax.set_ylabel("")
ax.set_xlabel("")
sns.despine()
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Hosting Capacity

The term PV hosting capacity is defined as the maximum PV capacity which can be connected to a specific grid, while still complying with relevant grid codes and grid planning principles. Here we will introduce a basic algorithm to calculate PV hosting capacity with pandapower.

The basic idea of calculating hosting capacity is to increase the PV installation until a violation of any planning principle or constraint occurs. To analyse hosting capacity, we need three basic building blocks:

1. Evaluating constraint violations
2. Choosing connection points for new PV plants
3. Defining the installed power of new PV plants

Evaluation of constraint violations

Our example function that evaluates constraint violations is defined as:
###Code
import pandapower as pp
def violations(net):
pp.runpp(net)
if net.res_line.loading_percent.max() > 50:
return (True, "Line \n Overloading")
elif net.res_trafo.loading_percent.max() > 50:
return (True, "Transformer \n Overloading")
elif net.res_bus.vm_pu.max() > 1.04:
return (True, "Voltage \n Violation")
else:
return (False, None)
###Output
_____no_output_____
###Markdown
The function runs a power flow and then checks for line loading and transformer loading (both of which have to be below 50%) and for voltage rise (which has to be below 1.04 pu). The function returns a boolean flag that signals whether any constraint is violated, as well as a string that indicates the type of constraint violation.

Choosing a connection bus

If new PV plants are installed, a connection bus has to be chosen. Here, we choose one bus at random from among the buses that have a load connected:
###Code
from numpy.random import choice
def chose_bus(net):
return choice(net.load.bus.values)
###Output
_____no_output_____
###Markdown
Choosing a PV plant size

The function that returns a plant size is given as:
###Code
from numpy.random import normal
def get_plant_size_mw():
return normal(loc=0.5, scale=0.05)
###Output
_____no_output_____
###Markdown
This function returns a random value from a normal distribution with a mean of 0.5 MW and a standard deviation of 0.05 MW. Depending on the existing information, it would also be possible to use other probability distributions, such as a Weibull distribution, or to draw values from existing plant sizes.

Evaluating Hosting Capacity

We now use these building blocks to evaluate hosting capacity in a generic network. We use the MV Oberrhein network from the pandapower networks package as an example:
###Code
import pandapower.networks as nw
def load_network():
return nw.mv_oberrhein(scenario="generation")
###Output
_____no_output_____
###Markdown
The hosting capacity is then evaluated like this:
###Code
import pandas as pd
iterations = 50
results = pd.DataFrame(columns=["installed", "violation"])
for i in range(iterations):
net = load_network()
installed_mw = 0
while 1:
violated, violation_type = violations(net)
if violated:
results.loc[i] = [installed_mw, violation_type]
break
else:
plant_size = get_plant_size_mw()
pp.create_sgen(net, chose_bus(net), p_mw=plant_size, q_mvar=0)
installed_mw += plant_size
###Output
_____no_output_____
###Markdown
This algorithm adds new PV plants until a violation of any constraint occurs. Then, it saves the installed PV capacity. This is carried out for a number of iterations (here: 50) to get a distribution of hosting capacity values depending on connection points and plant sizes.

The results can be visualized using matplotlib and seaborn:
###Code
import matplotlib.pyplot as plt
%matplotlib inline
plt.rc('xtick', labelsize=18) # fontsize of the tick labels
plt.rc('ytick', labelsize=18) # fontsize of the tick labels
plt.rc('legend', fontsize=18) # fontsize of the legend
plt.rc('axes', labelsize=20) # fontsize of the axis labels
plt.rcParams['font.size'] = 20
import seaborn as sns
sns.set_style("whitegrid", {'axes.grid' : False})
"""
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(10,5))
ax = axes[0]
sns.boxplot(results.installed, width=.1, ax=ax, orient="v")
ax.set_xticklabels([""])
ax.set_ylabel("Installed Capacity [MW]")
"""
ax = axes[1]
ax.axis("equal")
results.violation.value_counts().plot(kind="pie", ax=ax, autopct=lambda x:"%.0f %%"%x)
ax.set_ylabel("")
ax.set_xlabel("")
sns.despine()
plt.tight_layout()
###Output
C:\ProgramData\Anaconda3\lib\site-packages\seaborn\_decorators.py:36: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
warnings.warn(
C:\ProgramData\Anaconda3\lib\site-packages\seaborn\_core.py:1303: UserWarning: Vertical orientation ignored with only `x` specified.
warnings.warn(single_var_warning.format("Vertical", "x"))
<ipython-input-7-dc62282704be>:15: UserWarning: FixedFormatter should only be used together with FixedLocator
ax.set_xticklabels([""])
|
notebooks/tg/ttn/general/real/amplitude/mnist_gt_4.ipynb | ###Markdown
Imports
###Code
import math
import pandas as pd
import pennylane as qml
import time
from keras.datasets import mnist
from matplotlib import pyplot as plt
from pennylane import numpy as np
from pennylane.templates import AmplitudeEmbedding, AngleEmbedding
from pennylane.templates.subroutines import ArbitraryUnitary
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
###Output
_____no_output_____
###Markdown
Model Params
###Code
np.random.seed(131)
initial_params = np.random.random([13])
INITIALIZATION_METHOD = 'Amplitude'
BATCH_SIZE = 20
EPOCHS = 400
STEP_SIZE = 0.01
BETA_1 = 0.9
BETA_2 = 0.99
EPSILON = 0.00000001
TRAINING_SIZE = 0.78
VALIDATION_SIZE = 0.07
TEST_SIZE = 1-TRAINING_SIZE-VALIDATION_SIZE
initial_time = time.time()
###Output
_____no_output_____
###Markdown
Import dataset
###Code
(train_X, train_y), (test_X, test_y) = mnist.load_data()
examples = np.append(train_X, test_X, axis=0)
examples = examples.reshape(70000, 28*28)
classes = np.append(train_y, test_y)
x = []
y = []
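# Binarize the labels: digits 0-3 are mapped to class -1, digits 4-9 to class +1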
for (example, label) in zip(examples, classes):
if label in [0, 1, 2, 3]:
x.append(example)
y.append(-1)
else:
x.append(example)
y.append(1)
x = np.array(x)
y = np.array(y)
# Normalize pixel values
x = x / 255
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=TEST_SIZE, shuffle=True)
# Draw validation indices at random (with replacement; the drawn examples also remain in the training set)
validation_indexes = np.random.randint(0, len(X_train), size=(math.floor(len(X_train)*VALIDATION_SIZE),))
X_validation = [X_train[n] for n in validation_indexes]
y_validation = [y_train[n] for n in validation_indexes]
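# Project down to 8 features so that the data fits the 2**3 = 8 amplitudes of the 3-qubit register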
pca = PCA(n_components=8)
pca.fit(X_train)
X_train = pca.transform(X_train)
X_validation = pca.transform(X_validation)
X_test = pca.transform(X_test)
preprocessing_time = time.time()
###Output
_____no_output_____
###Markdown
Circuit creation
###Code
device = qml.device("default.qubit", wires=3)
def unitary(params, wire1, wire2):
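    # Real-valued two-qubit building block of the TTN: RY rotations plus CNOTs only
    # (the RZ gates are left commented out because the amplitude-encoded data is real)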
# qml.RZ(0, wires=wire1)
qml.RY(params[0], wires=wire1)
# qml.RZ(0, wires=wire1)
# qml.RZ(0, wires=wire2)
qml.RY(params[1], wires=wire2)
# qml.RZ(0, wires=wire2)
qml.CNOT(wires=[wire2, wire1])
# qml.RZ(0, wires=wire1)
qml.RY(params[2], wires=wire2)
qml.CNOT(wires=[wire1, wire2])
qml.RY(params[3], wires=wire2)
qml.CNOT(wires=[wire2, wire1])
# qml.RZ(0, wires=wire1)
qml.RY(params[4], wires=wire1)
# qml.RZ(0, wires=wire1)
# qml.RZ(0, wires=wire2)
qml.RY(params[5], wires=wire2)
# qml.RZ(0, wires=wire2)
@qml.qnode(device)
def circuit(features, params):
# Load state
if INITIALIZATION_METHOD == 'Amplitude':
AmplitudeEmbedding(features=features, wires=range(3), normalize=True, pad_with=0.)
else:
AngleEmbedding(features=features, wires=range(3), rotation='Y')
# First layer
unitary(params[0:6], 0, 1)
# Second layer
unitary(params[6:12], 1, 2)
# Third layer
qml.RY(params[12], wires=2)
# Measurement
return qml.expval(qml.PauliZ(2))
###Output
_____no_output_____
###Markdown
Circuit example
###Code
features = X_train[0]
print(f"Inital parameters: {initial_params}\n")
print(f"Example features: {features}\n")
print(f"Expectation value: {circuit(features, initial_params)}\n")
print(circuit.draw())
###Output
Initial parameters: [0.65015361 0.94810917 0.38802889 0.64129616 0.69051205 0.12660931
0.23946678 0.25415707 0.42644165 0.83900255 0.74503365 0.38067928
0.26169292]
Example features: [-2.13370975 -1.89235125 2.16298094 0.8329378 1.03914201 2.89509077
-2.82213403 0.1741292 ]
Expectation value: 0.5596790711246593
0: ──╭QubitStateVector(M0)──RY(0.65)───╭X─────────────╭C─────────────╭X──RY(0.691)─────────────────────────────────────────────────────────────────────┤
1: ──├QubitStateVector(M0)──RY(0.948)──╰C──RY(0.388)──╰X──RY(0.641)──╰C──RY(0.127)──RY(0.239)──╭X─────────────╭C─────────────╭X──RY(0.745)─────────────┤
2: ──╰QubitStateVector(M0)──RY(0.254)──────────────────────────────────────────────────────────╰C──RY(0.426)──╰X──RY(0.839)──╰C──RY(0.381)──RY(0.262)──┤ ⟨Z⟩
M0 =
[-0.38346 +0.j -0.34008421+0.j 0.38872047+0.j 0.14969155+0.j
0.18674958+0.j 0.52029171+0.j -0.50718028+0.j 0.03129366+0.j]
###Markdown
Accuracy test definition
###Code
def measure_accuracy(x, y, circuit_params):
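    # Count a prediction as correct when the sign of the circuit's expectation value matches the class label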
class_errors = 0
for example, example_class in zip(x, y):
predicted_value = circuit(example, circuit_params)
if (example_class > 0 and predicted_value <= 0) or (example_class <= 0 and predicted_value > 0):
class_errors += 1
return 1 - (class_errors/len(y))
###Output
_____no_output_____
###Markdown
Training
###Code
params = initial_params
opt = qml.AdamOptimizer(stepsize=STEP_SIZE, beta1=BETA_1, beta2=BETA_2, eps=EPSILON)
test_accuracies = []
best_validation_accuracy = 0.0
best_params = []
for i in range(len(X_train)):
features = X_train[i]
expected_value = y_train[i]
def cost(circuit_params):
value = circuit(features, circuit_params)
return ((expected_value - value) ** 2)/len(X_train)
params = opt.step(cost, params)
if i % BATCH_SIZE == 0:
print(f"epoch {i//BATCH_SIZE}")
if i % (10*BATCH_SIZE) == 0:
current_accuracy = measure_accuracy(X_validation, y_validation, params)
test_accuracies.append(current_accuracy)
print(f"accuracy: {current_accuracy}")
if current_accuracy > best_validation_accuracy:
print("best accuracy so far!")
best_validation_accuracy = current_accuracy
best_params = params
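        # Early stopping: once 30 validation checks are in the window, stop and restore the
        # best parameters if accuracy has not improved on the window's oldest entry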
if len(test_accuracies) == 30:
print(f"test_accuracies: {test_accuracies}")
if np.allclose(best_validation_accuracy, test_accuracies[0]):
params = best_params
break
del test_accuracies[0]
print("Optimized rotation angles: {}".format(params))
training_time = time.time()
###Output
Optimized rotation angles: [ 0.26563633 1.66502588 1.62440585 0.10308565 0.69051205 0.19301864
0.30587611 -0.71334516 2.2251851 0.40209308 0.74503365 0.02780758
-0.09117878]
###Markdown
Testing
###Code
accuracy = measure_accuracy(X_test, y_test, params)
print(accuracy)
test_time = time.time()
print(f"pre-processing time: {preprocessing_time-initial_time}")
print(f"training time: {training_time - preprocessing_time}")
print(f"test time: {test_time - training_time}")
print(f"total time: {test_time - initial_time}")
###Output
pre-processing time: 14.912296533584595
training time: 1695.5147542953491
test time: 62.28123927116394
total time: 1772.7082901000977
|
draft/amazon-studio-demos/dataset.ipynb | ###Markdown
Download data set
###Code
%%sh
wget -N https://archive.ics.uci.edu/ml/machine-learning-databases/00222/bank-additional.zip
unzip -o bank-additional.zip
import pandas as pd
data = pd.read_csv('./bank-additional/bank-additional-full.csv', sep=';')
pd.set_option('display.max_columns', 500) # Make sure we can see all of the columns
pd.set_option('display.max_rows', 50) # Keep the output on one page
data[:10] # Show the first 10 lines
###Output
_____no_output_____
###Markdown
Split data set
###Code
import numpy as np
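# Shuffle, then split: 95% of the rows for training and the remaining 5% for testing (the third, empty slice is discarded)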
train_data, test_data, _ = np.split(data.sample(frac=1, random_state=123),
[int(0.95 * len(data)), int(len(data))])
# Save to CSV files
train_data.to_csv('automl-train.csv', index=False, header=True, sep=',') # Need to keep column names
test_data.to_csv('automl-test.csv', index=False, header=True, sep=',')
%%sh
ls -l *.csv
###Output
-rw-r--r-- 1 root root 257339 Dec 9 18:10 automl-test.csv
-rw-r--r-- 1 root root 4889516 Dec 9 18:10 automl-train.csv
###Markdown
Upload data set to Amazon S3
###Code
import sagemaker
prefix = 'sagemaker/DEMO-automl-dm/input'
sess = sagemaker.Session()
uri = sess.upload_data(path="automl-train.csv", key_prefix=prefix)
print(uri)
###Output
s3://sagemaker-us-east-2-308412838853/sagemaker/DEMO-automl-dm/input/automl-train.csv
###Markdown
Predict the test data set
###Code
ep_name = 'chzar-studio-demo'
import boto3,sys
sm_rt = boto3.Session().client('runtime.sagemaker')
tp = tn = fp = fn = count = 0
with open('automl-test.csv') as f:
lines = f.readlines()
for l in lines[1:]: # Skip header
l = l.split(',') # Split CSV line into features
label = l[-1] # Store 'yes'/'no' label
l = l[:-1] # Remove label
l = ','.join(l) # Rebuild CSV line without label
response = sm_rt.invoke_endpoint(EndpointName=ep_name, ContentType='text/csv', Accept='text/csv', Body=l)
response = response['Body'].read().decode("utf-8")
#print ("label %s response %s" %(label,response))
if 'yes' in label:
# Sample is positive
if 'yes' in response:
# True positive
tp=tp+1
else:
# False negative
fn=fn+1
else:
# Sample is negative
if 'no' in response:
# True negative
tn=tn+1
else:
# False positive
fp=fp+1
count = count+1
if (count % 100 == 0):
sys.stdout.write(str(count)+' ')
print ("Done")
print ("%d %d" % (tn, fp))
print ("%d %d" % (fn, tp))
accuracy = (tp+tn)/(tp+tn+fp+fn)
precision = tp/(tp+fp)
recall = tp/(tp+fn)
f1 = (2*precision*recall)/(precision+recall)
print ("%.4f %.4f %.4f %.4f" % (accuracy, precision, recall, f1))
###Output
_____no_output_____ |
notebooks/animation.ipynb | ###Markdown
DeepSVG animation between user-drawn images
###Code
device = torch.device("cuda:0"if torch.cuda.is_available() else "cpu")
###Output
_____no_output_____
###Markdown
Load the pretrained model and dataset
###Code
pretrained_path = "./pretrained/hierarchical_ordered.pth.tar"
# pretrained_path = "./pretrained/treevis_model.pth.tar"
from configs.deepsvg.hierarchical_ordered import Config
cfg = Config()
cfg.model_cfg.dropout = 0. # for faster convergence
model = cfg.make_model().to(device)
utils.load_model(pretrained_path, model)
model.eval();
dataset = load_dataset(cfg)
def load_svg(filename):
svg = SVG.load_svg(filename)
svg = dataset.simplify(svg)
svg = dataset.preprocess(svg, mean=True)
return svg
def easein_easeout(t):
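    # Rational ease-in/ease-out curve: maps [0, 1] to [0, 1] with zero slope at both endpoints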
return t*t / (2. * (t*t - t) + 1.);
def interpolate(z1, z2, n=25, filename=None, ease=True, do_display=True):
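    # Blend the two latent codes linearly, decode each intermediate point, and render a back-and-forth GIF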
alphas = torch.linspace(0., 1., n)
if ease:
alphas = easein_easeout(alphas)
z_list = [(1-a) * z1 + a * z2 for a in alphas]
img_list = [decode(z, do_display=False, return_png=True) for z in z_list]
to_gif(img_list + img_list[::-1], file_path=filename, frame_duration=1/12)
def encode(data):
model_args = batchify((data[key] for key in cfg.model_args), device)
with torch.no_grad():
z = model(*model_args, encode_mode=True)
return z
def encode_icon(idx):
data = dataset.get(id=idx, random_aug=False)
return encode(data)
def encode_svg(svg):
data = dataset.get(svg=svg)
return encode(data)
def decode(z, do_display=True, return_svg=False, return_png=False):
commands_y, args_y = model.greedy_sample(z=z)
tensor_pred = SVGTensor.from_cmd_args(commands_y[0].cpu(), args_y[0].cpu())
svg_path_sample = SVG.from_tensor(tensor_pred.data, viewbox=Bbox(256), allow_empty=True).normalize().split_paths().set_color("random")
if return_svg:
return svg_path_sample
return svg_path_sample.draw(do_display=do_display, return_png=return_png)
def interpolate_icons(idx1=None, idx2=None, n=25, *args, **kwargs):
z1, z2 = encode_icon(idx1), encode_icon(idx2)
interpolate(z1, z2, n=n, *args, **kwargs)
###Output
_____no_output_____
###Markdown
Loading user-drawn frames
###Code
tree39199 = load_svg("docs/frames/39199.svg")
tree39203 = load_svg("docs/frames/39203.svg")
tree39199.draw_colored();tree39203.draw_colored()
finetune_dataset = SVGFinetuneDataset(dataset, [tree39199, tree39203], frac=1.0, nb_augmentations=750)
lego1 = load_svg("docs/frames/lego_1.svg")
lego2 = load_svg("docs/frames/lego_2.svg")
###Output
_____no_output_____
###Markdown
`draw_colored` lets you see the individual paths in an SVG icon.
###Code
lego1.draw_colored(); lego2.draw_colored()
bird1 = load_svg("docs/frames/bird_1.svg")
bird2 = load_svg("docs/frames/bird_2.svg"); bird2.permute([1, 0, 2]);
###Output
_____no_output_____
###Markdown
When path orders don't match between the two frames, just manually change the order using the `permute` method. For best results, keep in mind that the model was trained on paths sorted in lexicographical order (top to bottom, left to right).

Colors are in this order:

- deepskyblue
- lime
- deeppink
- gold
- coral
- darkviolet
- royalblue
- darkmagenta
###Code
bird1.draw_colored(); bird2.draw_colored()
face1 = load_svg("docs/frames/face_1.svg"); face1.permute([1, 0, 2, 3, 4, 5]);
face2 = load_svg("docs/frames/face_2.svg"); face2.permute([5, 0, 1, 2, 3, 4]); face2[0].reverse();
###Output
_____no_output_____
###Markdown
Sometimes, the orientations (clockwise/counter-clockwise) of the paths don't match. Fix this using the `reverse` method.
###Code
face1.draw_colored(); face2.draw_colored()
football1 = load_svg("docs/frames/football_1.svg"); football1.permute([0, 1, 4, 2, 3, 5, 6, 7]); football1[3].reverse(); football1[4].reverse();
football2 = load_svg("docs/frames/football_2.svg"); football2.permute([0, 2, 3, 5, 4, 7, 6, 1]);
football1.draw_colored(); football2.draw_colored()
pencil1 = load_svg("docs/frames/pencil_1.svg")
pencil2 = load_svg("docs/frames/pencil_2.svg"); pencil2.permute([1, 0, 2, 3, 4, 5]);
pencil1.draw_colored(); pencil2.draw_colored()
ship1 = load_svg("docs/frames/ship_1.svg"); ship1.permute([0, 1, 3, 2]);
ship2 = load_svg("docs/frames/ship_2.svg")
ship1.draw_colored(); ship2.draw_colored()
###Output
_____no_output_____
###Markdown
Finetune the model on those additional SVG icons for a few steps (~10-30 seconds).
###Code
finetune_dataset = SVGFinetuneDataset(dataset, [lego1, lego2, football1, football2, bird1, bird2, ship1, ship2, pencil1, pencil2, face1, face2],
frac=1.0, nb_augmentations=750)
dataloader = DataLoader(finetune_dataset, batch_size=cfg.batch_size, shuffle=True, drop_last=True,
num_workers=cfg.loader_num_workers, collate_fn=cfg.collate_fn)
# Optimizer, lr & warmup schedulers
optimizers = cfg.make_optimizers(model)
scheduler_lrs = cfg.make_schedulers(optimizers, epoch_size=len(dataloader))
scheduler_warmups = cfg.make_warmup_schedulers(optimizers, scheduler_lrs)
loss_fns = [l.to(device) for l in cfg.make_losses()]
epoch = 0
for step, data in enumerate(dataloader):
model.train()
model_args = [data[arg].to(device) for arg in cfg.model_args]
labels = data["label"].to(device) if "label" in data else None
params_dict, weights_dict = cfg.get_params(step, epoch), cfg.get_weights(step, epoch)
for i, (loss_fn, optimizer, scheduler_lr, scheduler_warmup, optimizer_start) in enumerate(
zip(loss_fns, optimizers, scheduler_lrs, scheduler_warmups, cfg.optimizer_starts), 1):
optimizer.zero_grad()
output = model(*model_args, params=params_dict)
loss_dict = loss_fn(output, labels, weights=weights_dict)
loss_dict["loss"].backward()
if cfg.grad_clip is not None:
nn.utils.clip_grad_norm_(model.parameters(), cfg.grad_clip)
optimizer.step()
if scheduler_lr is not None:
scheduler_lr.step()
if scheduler_warmup is not None:
scheduler_warmup.step()
if step % 20 == 0:
print(f"Step {step}: loss: {loss_dict['loss']}")
model.eval();
###Output
_____no_output_____
###Markdown
Display the interpolations! 🚀
###Code
z_tree39199, z_tree39203 = encode_svg(tree39199), encode_svg(tree39203)
interpolate(z_tree39199, z_tree39203)
z_lego1, z_lego2 = encode_svg(lego1), encode_svg(lego2)
interpolate(z_lego1, z_lego2)
z_face1, z_face2 = encode_svg(face1), encode_svg(face2)
interpolate(z_face1, z_face2)
z_bird1, z_bird2 = encode_svg(bird1), encode_svg(bird2)
interpolate(z_bird1, z_bird2)
z_football1, z_football2 = encode_svg(football1), encode_svg(football2)
interpolate(z_football1, z_football2)
z_pencil1, z_pencil2 = encode_svg(pencil1), encode_svg(pencil2)
interpolate(z_pencil1, z_pencil2)
z_ship1, z_ship2 = encode_svg(ship1), encode_svg(ship2)
interpolate(z_ship1, z_ship2)
###Output
_____no_output_____
###Markdown
DeepSVG animation between user-drawn images
###Code
device = torch.device("cuda:0"if torch.cuda.is_available() else "cpu")
###Output
_____no_output_____
###Markdown
Load the pretrained model and dataset
###Code
pretrained_path = "./pretrained/hierarchical_ordered.pth.tar"
from configs.deepsvg.hierarchical_ordered import Config
cfg = Config()
cfg.model_cfg.dropout = 0. # for faster convergence
model = cfg.make_model().to(device)
utils.load_model(pretrained_path, model)
model.eval();
dataset = load_dataset(cfg)
def load_svg(filename):
svg = SVG.load_svg(filename)
svg = dataset.simplify(svg)
svg = dataset.preprocess(svg, mean=True)
return svg
def easein_easeout(t):
    # smooth ease-in/ease-out ramp: 0 at t=0, 1 at t=1, with zero slope at both ends
    return t*t / (2. * (t*t - t) + 1.)
def interpolate(z1, z2, n=25, filename=None, ease=True, do_display=True):
alphas = torch.linspace(0., 1., n)
if ease:
alphas = easein_easeout(alphas)
z_list = [(1-a) * z1 + a * z2 for a in alphas]
img_list = [decode(z, do_display=False, return_png=True) for z in z_list]
to_gif(img_list + img_list[::-1], file_path=filename, frame_duration=1/12)
def encode(data):
model_args = batchify((data[key] for key in cfg.model_args), device)
with torch.no_grad():
z = model(*model_args, encode_mode=True)
return z
def encode_icon(idx):
data = dataset.get(id=idx, random_aug=False)
return encode(data)
def encode_svg(svg):
data = dataset.get(svg=svg)
return encode(data)
def decode(z, do_display=True, return_svg=False, return_png=False):
commands_y, args_y = model.greedy_sample(z=z)
tensor_pred = SVGTensor.from_cmd_args(commands_y[0].cpu(), args_y[0].cpu())
svg_path_sample = SVG.from_tensor(tensor_pred.data, viewbox=Bbox(256), allow_empty=True).normalize().split_paths().set_color("random")
if return_svg:
return svg_path_sample
return svg_path_sample.draw(do_display=do_display, return_png=return_png)
def interpolate_icons(idx1=None, idx2=None, n=25, *args, **kwargs):
z1, z2 = encode_icon(idx1), encode_icon(idx2)
interpolate(z1, z2, n=n, *args, **kwargs)
###Output
_____no_output_____
###Markdown
Loading user-drawn frames
###Code
lego1 = load_svg("docs/frames/lego_1.svg")
lego2 = load_svg("docs/frames/lego_2.svg")
###Output
_____no_output_____
###Markdown
`draw_colored` lets you see the individual paths in an SVG icon.
###Code
lego1.draw_colored(); lego2.draw_colored()
bird1 = load_svg("docs/frames/bird_1.svg")
bird2 = load_svg("docs/frames/bird_2.svg"); bird2.permute([1, 0, 2]);
###Output
_____no_output_____
###Markdown
When path orders don't match between the two frames, just manually change the order using the `permute` method. For best results, keep in mind that the model was trained using paths ordered lexicographically (top to bottom, left to right). Colors are shown in this order:
- deepskyblue
- lime
- deeppink
- gold
- coral
- darkviolet
- royalblue
- darkmagenta
###Code
bird1.draw_colored(); bird2.draw_colored()
face1 = load_svg("docs/frames/face_1.svg"); face1.permute([1, 0, 2, 3, 4, 5]);
face2 = load_svg("docs/frames/face_2.svg"); face2.permute([5, 0, 1, 2, 3, 4]); face2[0].reverse();
###Output
_____no_output_____
###Markdown
Sometimes, the orientations (clockwise/counter-clockwise) of paths don't match. Fix this using the `reverse` method.
###Code
face1.draw_colored(); face2.draw_colored()
football1 = load_svg("docs/frames/football_1.svg"); football1.permute([0, 1, 4, 2, 3, 5, 6, 7]); football1[3].reverse(); football1[4].reverse();
football2 = load_svg("docs/frames/football_2.svg"); football2.permute([0, 2, 3, 5, 4, 7, 6, 1]);
football1.draw_colored(); football2.draw_colored()
pencil1 = load_svg("docs/frames/pencil_1.svg")
pencil2 = load_svg("docs/frames/pencil_2.svg"); pencil2.permute([1, 0, 2, 3, 4, 5]);
pencil1.draw_colored(); pencil2.draw_colored()
ship1 = load_svg("docs/frames/ship_1.svg"); ship1.permute([0, 1, 3, 2]);
ship2 = load_svg("docs/frames/ship_2.svg")
ship1.draw_colored(); ship2.draw_colored()
###Output
_____no_output_____
###Markdown
Fine-tune the model on these additional SVG icons for a few steps (~10-30 seconds).
###Code
finetune_dataset = SVGFinetuneDataset(dataset, [lego1, lego2, football1, football2, bird1, bird2, ship1, ship2, pencil1, pencil2, face1, face2],
frac=1.0, nb_augmentations=750)
dataloader = DataLoader(finetune_dataset, batch_size=cfg.batch_size, shuffle=True, drop_last=True,
num_workers=cfg.loader_num_workers, collate_fn=cfg.collate_fn)
# Optimizer, lr & warmup schedulers
optimizers = cfg.make_optimizers(model)
scheduler_lrs = cfg.make_schedulers(optimizers, epoch_size=len(dataloader))
scheduler_warmups = cfg.make_warmup_schedulers(optimizers, scheduler_lrs)
loss_fns = [l.to(device) for l in cfg.make_losses()]
epoch = 0
for step, data in enumerate(dataloader):
model.train()
model_args = [data[arg].to(device) for arg in cfg.model_args]
labels = data["label"].to(device) if "label" in data else None
params_dict, weights_dict = cfg.get_params(step, epoch), cfg.get_weights(step, epoch)
for i, (loss_fn, optimizer, scheduler_lr, scheduler_warmup, optimizer_start) in enumerate(
zip(loss_fns, optimizers, scheduler_lrs, scheduler_warmups, cfg.optimizer_starts), 1):
optimizer.zero_grad()
output = model(*model_args, params=params_dict)
loss_dict = loss_fn(output, labels, weights=weights_dict)
loss_dict["loss"].backward()
if cfg.grad_clip is not None:
nn.utils.clip_grad_norm_(model.parameters(), cfg.grad_clip)
optimizer.step()
if scheduler_lr is not None:
scheduler_lr.step()
if scheduler_warmup is not None:
scheduler_warmup.step()
if step % 20 == 0:
print(f"Step {step}: loss: {loss_dict['loss']}")
model.eval();
###Output
_____no_output_____
###Markdown
Display the interpolations! 🚀
###Code
z_lego1, z_lego2 = encode_svg(lego1), encode_svg(lego2)
interpolate(z_lego1, z_lego2)
z_face1, z_face2 = encode_svg(face1), encode_svg(face2)
interpolate(z_face1, z_face2)
z_bird1, z_bird2 = encode_svg(bird1), encode_svg(bird2)
interpolate(z_bird1, z_bird2)
z_football1, z_football2 = encode_svg(football1), encode_svg(football2)
interpolate(z_football1, z_football2)
z_pencil1, z_pencil2 = encode_svg(pencil1), encode_svg(pencil2)
interpolate(z_pencil1, z_pencil2)
z_ship1, z_ship2 = encode_svg(ship1), encode_svg(ship2)
interpolate(z_ship1, z_ship2)
###Output
_____no_output_____
###Markdown
Animation

Toyplot can also create animated figures by recording changes to a figure over time. Assume you've set up the following scatterplot:
###Code
import numpy
x = numpy.random.normal(size=100)
y = numpy.random.normal(size=len(x))
import toyplot
canvas = toyplot.Canvas(300, 300)
axes = canvas.cartesian()
mark = axes.scatterplot(x, y, size=10)
###Output
_____no_output_____
###Markdown
Suppose we want to show the order in which the samples were drawn from some distribution. We could use the `fill` parameter to map each sample's index to a color, but an animation can be more intuitive. We can use :meth:`toyplot.canvas.Canvas.frames` to add a sequence of animation frames to the canvas: we pass the number of frames as an argument and iterate over the frame objects it returns. Each frame object can be used to retrieve information about the frame and to record any changes that should be made to the canvas at that frame. In the example below, we set the opacity of each scatterplot datum to 5% in the first frame, then change them back to 100% over the course of the animation:
###Code
canvas = toyplot.Canvas(300, 300)
axes = canvas.cartesian()
mark = axes.scatterplot(x, y, size=10)
for frame in canvas.frames(len(x) + 1):
if frame.number == 0:
for i in range(len(x)):
frame.set_datum_style(mark, 0, i, style={"opacity":0.1})
else:
frame.set_datum_style(mark, 0, frame.number - 1, style={"opacity":1.0})
###Output
_____no_output_____
###Markdown
Let's try animating something other than a datum style - in the following example, we add a text mark to the canvas, and use it to display information about the frame:
###Code
canvas = toyplot.Canvas(300, 300)
axes = canvas.cartesian()
mark = axes.scatterplot(x, y, size=10)
text = canvas.text(150, 20, " ")
for frame in canvas.frames(len(x) + 1):
label = "%s/%s <small>(%.2f s)</small>" % (frame.number + 1, frame.count, frame.begin)
frame.set_datum_text(text, 0, 0, label)
if frame.number == 0:
for i in range(len(x)):
frame.set_datum_style(mark, 0, i, style={"opacity":0.05})
else:
frame.set_datum_style(mark, 0, frame.number - 1, style={"opacity":1.0})
###Output
_____no_output_____
###Markdown
Note from this example that each frame has a zero-based frame index, along with begin and end times, which are measured in seconds. If you look closely, you'll see that the difference between the begin and end times is 0.03 seconds for each frame, which corresponds to the default 30 frames per second. If we want to control the framerate, we can pass a (frames, framerate) tuple when we call :meth:`toyplot.canvas.Canvas.frames` (note that the playback is slower, and the times for the frames are changed):
###Code
canvas = toyplot.Canvas(300, 300)
axes = canvas.cartesian()
mark = axes.scatterplot(x, y, size=10)
text = canvas.text(150, 20, " ")
for frame in canvas.frames((len(x) + 1, 5)):
label = "%s/%s <small>(%.2f s)</small>" % (frame.number + 1, frame.count, frame.begin)
frame.set_datum_text(text, 0, 0, label)
if frame.number == 0:
for i in range(len(x)):
frame.set_datum_style(mark, 0, i, style={"opacity":0.05})
else:
frame.set_datum_style(mark, 0, frame.number - 1, style={"opacity":1.0})
###Output
_____no_output_____
###Markdown
Sometimes this frame-by-frame approach to animation is awkward, particularly if you simply have a one-time "event" that needs to happen in the middle of the animation. In this case, you can use :meth:`toyplot.canvas.Canvas.frame` to record changes at individual points in time:
###Code
canvas = toyplot.Canvas(300, 300)
axes = canvas.cartesian()
mark = axes.scatterplot(x, y, size=10)
text = canvas.text(150, 20, " ")
text2 = canvas.text(150, 35, " ")
for frame in canvas.frames(len(x) + 1):
label = "%s/%s <small>(%.2f s)</small>" % (frame.number + 1, frame.count, frame.begin)
frame.set_datum_text(text, 0, 0, label)
if frame.number == 0:
for i in range(len(x)):
frame.set_datum_style(mark, 0, i, style={"opacity":0.05})
else:
frame.set_datum_style(mark, 0, frame.number - 1, style={"opacity":1.0})
canvas.frame(0.0).set_datum_text(text2, 0, 0, "Halfway There!", style={"font-weight":"bold", "opacity":0.2})
canvas.frame(50.0 * (1.0 / 30.0)).set_datum_text(text2, 0, 0, "Halfway There!", style={"font-weight":"bold", "fill":"blue"})
###Output
_____no_output_____
###Markdown
Note that when you combine :meth:`toyplot.canvas.Canvas.frames` and :meth:`toyplot.canvas.Canvas.frame`, you don't have to force the "frames" to line up ... you can record events in any order and at any point in time, whether there are existing frames at those times or not. In fact, you can call :meth:`toyplot.canvas.Canvas.frames` multiple times if you want to animate events happening at different rates:
###Code
canvas = toyplot.Canvas(100, 100, style={"background-color":"ivory"})
t1 = canvas.text(50, 33, " ")
t2 = canvas.text(50, 66, " ")
for frame in canvas.frames((10, 2)):
frame.set_datum_text(t1, 0, 0, "1 hz", style={"opacity":frame.number % 2.0})
for frame in canvas.frames((20, 4)):
frame.set_datum_text(t2, 0, 0, "2 hz", style={"opacity":frame.number % 2.0})
import toyplot.mp4
toyplot.mp4.render(canvas, "test.mp4", progress=print)
###Output
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
|
lectures-labs/labs/05_conv_nets_2/Fully_Convolutional_Neural_Networks.ipynb | ###Markdown
Fully Convolutional Neural Networks

Objectives:
- Load a CNN model pre-trained on ImageNet
- Transform the network into a Fully Convolutional Network
- Apply the network to perform weak segmentation on images
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# Load a pre-trained ResNet50
# We use include_top = False for now,
# as we'll import output Dense Layer later
import keras
from keras.applications.resnet50 import ResNet50
base_model = ResNet50(include_top=False)
print(base_model.output_shape)
# print(base_model.summary())
res5c = base_model.layers[-2]
type(res5c)
res5c.output_shape
avg_pool = base_model.layers[-1]
type(avg_pool)
avg_pool.pool_size, avg_pool.strides
avg_pool.output_shape
###Output
_____no_output_____
###Markdown
Fully convolutional ResNet

- Out of the `res5c` residual block, the ResNet outputs a tensor of shape $W \times H \times 2048$.
- For the default ImageNet input, $224 \times 224$, the output size is $7 \times 7 \times 2048$.
- After this block, the ResNet uses an average pooling `AveragePooling2D(pool_size=(7, 7))` with `(7, 7)` strides, which divides the width and height by 7.

Regular ResNet layers

The regular ResNet head after the base model is as follows:

```py
x = base_model.output
x = Flatten()(x)
x = Dense(1000)(x)
x = Softmax()(x)
```

Here is the full definition of the model: https://github.com/fchollet/keras/blob/master/keras/applications/resnet50.py

Our Version

- To keep as much spatial information as possible, we will remove the average pooling.
- We want to retrieve the label information, which is stored in the Dense layer. We will load these weights afterwards.
- We will change the Dense layer to a Convolution2D layer to keep spatial information, so that the output is $W \times H \times 1000$.
- We can use a kernel size of (1, 1) for that new Convolution2D layer to pass the spatial organization of the previous layer through unchanged.
- We want to apply a softmax only on the last dimension so as to preserve the $W \times H$ spatial information.

A custom Softmax

We build the following custom layer to apply a softmax only to the last dimension of a tensor:
###Code
import keras
from keras.engine import Layer
import keras.backend as K
# A custom layer in Keras must implement the four following methods:
class SoftmaxMap(Layer):
# Init function
def __init__(self, axis=-1, **kwargs):
self.axis = axis
super(SoftmaxMap, self).__init__(**kwargs)
# There's no parameter, so we don't need this one
    def build(self, input_shape):
pass
# This is the layer we're interested in:
    # it is very similar to the regular softmax, but note additionally
    # that we accept x.shape == (batch_size, w, h, n_classes),
# which is not the case in Keras by default.
def call(self, x, mask=None):
e = K.exp(x - K.max(x, axis=self.axis, keepdims=True))
s = K.sum(e, axis=self.axis, keepdims=True)
return e / s
# The output shape is the same as the input shape
def compute_output_shape(self, input_shape):
return input_shape
###Output
_____no_output_____
###Markdown
Let's check that we can use this layer to normalize the class probabilities of some random spatial predictions:
###Code
n_samples, w, h, n_classes = 10, 3, 4, 5
random_data = np.random.randn(n_samples, w, h, n_classes)
random_data.shape
###Output
_____no_output_____
###Markdown
Because those predictions are random, if we sum across the class dimension we get random values instead of class probabilities, which would need to sum to 1:
###Code
random_data[0].sum(axis=-1)
###Output
_____no_output_____
###Markdown
Let's wrap the `SoftmaxMap` class into a test model to process our test data:
###Code
from keras.models import Sequential
model = Sequential([SoftmaxMap(input_shape=(w, h, n_classes))])
model.output_shape
softmax_mapped_data = model.predict(random_data)
softmax_mapped_data.shape
###Output
_____no_output_____
###Markdown
All the values are now in the [0, 1] range:
###Code
softmax_mapped_data[0]
###Output
_____no_output_____
###Markdown
The last dimension now approximately sums to one; it can therefore be used as class probabilities (or as the parameters of a multinoulli distribution):
###Code
softmax_mapped_data[0].sum(axis=-1)
###Output
_____no_output_____
###Markdown
Note that the highest activated channel for each spatial location is still the same before and after the softmax map. The ranking of the activations is preserved as softmax is a monotonic function (when considered element-wise):
###Code
random_data[0].argmax(axis=-1)
softmax_mapped_data[0].argmax(axis=-1)
###Output
_____no_output_____
###Markdown
**Exercise**
- What is the shape of the convolution kernel we want to apply to replace the Dense layer?
- Build the fully convolutional model as described above. We want the output to preserve the spatial dimensions but output 1000 channels (one channel per class).
- You may introspect the last elements of `base_model.layers` to find which layer to remove.
- You may use the Keras `Convolution2D(output_channels, filter_w, filter_h)` layer and our `SoftmaxMap` to normalize the result as per-class probabilities.
- For now, ignore the weights of the new layer(s) (leave them initialized at random): just focus on building the right architecture with the right output shape. A possible construction is sketched after the cell below.
###Code
from keras.layers import Convolution2D
from keras.models import Model
input = base_model.layers[0].input
# TODO: compute per-area class probabilities
output = input
fully_conv_ResNet = Model(inputs=input, outputs=output)
# %load solutions/fully_conv.py
###Output
_____no_output_____
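###Markdown
One possible construction (a sketch based on the hints above, not necessarily identical to `solutions/fully_conv.py`): drop the final average pooling, map the 2048 channels to 1000 class channels with a 1x1 convolution, and normalize with our `SoftmaxMap`.
###Code
# Sketch of the fully convolutional head (the new layer's weights stay random for now)
input = base_model.layers[0].input
x = base_model.layers[-2].output      # output of res5c: (batch, w, h, 2048)
x = Convolution2D(1000, (1, 1))(x)    # per-location class scores
output = SoftmaxMap(axis=-1)(x)       # per-location class probabilities
fully_conv_ResNet = Model(inputs=input, outputs=output)
###Output
_____no_output_____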
###Markdown
You can use the following random data to check that it's possible to run a forward pass on a random RGB image:
###Code
prediction_maps = fully_conv_ResNet.predict(np.random.randn(1, 200, 300, 3))
prediction_maps.shape
###Output
_____no_output_____
###Markdown
How do you explain the resulting output shape? (Hint: ResNet50 reduces the spatial resolution by a factor of about 32 overall.) The class probabilities should sum to one in each area of the output map:
###Code
prediction_maps.sum(axis=-1)
###Output
_____no_output_____
###Markdown
Loading Dense weights

- We provide the weights and bias of the last Dense layer of ResNet50 in the file `weights_dense.h5`.
- Our last layer is now a 1x1 convolutional layer instead of a fully connected layer.
###Code
import h5py
with h5py.File('weights_dense.h5', 'r') as h5f:
w = h5f['w'][:]
b = h5f['b'][:]
last_layer = fully_conv_ResNet.layers[-2]
print("Loaded weight shape:", w.shape)
print("Last conv layer weights shape:", last_layer.get_weights()[0].shape)
# reshape the weights
w_reshaped = w.reshape((1, 1, 2048, 1000))
# set the conv layer weights
last_layer.set_weights([w_reshaped, b])
###Output
_____no_output_____
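###Markdown
Why is this reshape valid? Applying the Dense kernel independently at every spatial position is exactly a 1x1 convolution. Here is a small NumPy sanity check (illustrative only: the random arrays below are stand-ins for the real weights and features):
###Code
w2d = np.random.randn(2048, 1000)    # stand-in Dense kernel: (input_dim, n_classes)
feat = np.random.randn(7, 7, 2048)   # stand-in spatial feature map
dense_per_pos = feat.reshape(-1, 2048) @ w2d      # Dense applied at each of the 49 positions
conv_1x1 = np.einsum('hwc,co->hwo', feat, w2d)    # the same kernel used as a 1x1 convolution
print(np.allclose(dense_per_pos.reshape(7, 7, 1000), conv_1x1))  # True
###Output
_____no_output_____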
###Markdown
A forward pass

- We define the following function to test our new network.
- It resizes the input to a given size, then uses `model.predict` to compute the output.
###Code
from skimage.io import imread
from skimage.transform import resize
from keras.applications.imagenet_utils import preprocess_input
def forward_pass_resize(img_path, img_size):
img_raw = imread(img_path)
print("Image shape before resizing: %s" % (img_raw.shape,))
img = resize(img_raw, img_size, mode='reflect', preserve_range=True)
img = preprocess_input(img[np.newaxis])
print("Image batch size shape before forward pass:", img.shape)
z = fully_conv_ResNet.predict(img)
return z
output = forward_pass_resize("dog.jpg", (800, 600))
print("prediction map shape", output.shape)
###Output
_____no_output_____
###Markdown
Finding dog-related classes

ImageNet uses an ontology of concepts, from which classes are derived. A synset corresponds to a node in the ontology. For example, all species of dogs are children of the synset [n02084071](http://image-net.org/synset?wnid=n02084071) (Dog, domestic dog, Canis familiaris):
###Code
# Helper file for importing synsets from imagenet
import imagenet_tool
synset = "n02084071" # synset corresponding to dogs
ids = imagenet_tool.synset_to_dfs_ids(synset)
print("All dog classes ids (%d):" % len(ids))
print(ids)
for dog_id in ids[:10]:
print(imagenet_tool.id_to_words(dog_id))
print('...')
###Output
_____no_output_____
###Markdown
Unsupervised heatmap of the class "dog"

The following function builds a heatmap from a forward pass. It sums the representations for all ids corresponding to a synset.
###Code
def build_heatmap(z, synset):
class_ids = imagenet_tool.synset_to_dfs_ids(synset)
    class_ids = np.array([id_ for id_ in class_ids if id_ is not None])
x = z[0, :, :, class_ids].sum(axis=0)
print("size of heatmap: " + str(x.shape))
return x
def display_img_and_heatmap(img_path, heatmap):
dog = imread(img_path)
fig, (ax0, ax1) = plt.subplots(ncols=2, figsize=(12, 8))
ax0.imshow(dog)
ax0.axis('off')
ax1.imshow(heatmap, interpolation='nearest', cmap="viridis")
ax1.axis('off')
###Output
_____no_output_____
###Markdown
**Exercise**
- What is the size of the heatmap compared to the input image?
- Build 3 dog heatmaps from `"dog.jpg"`, with the following sizes:
  - `(400, 640)`
  - `(800, 1280)`
  - `(1600, 2560)`
- What do you observe?

You may plot a heatmap using the function `display_img_and_heatmap` defined above. You might also want to reuse `forward_pass_resize` to compute the class maps themselves. One possible shape for the solution is sketched after the cell below.
###Code
# dog synset
s = "n02084071"
# TODO
# %load solutions/build_heatmaps.py
###Output
_____no_output_____
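###Markdown
One possible shape for the solution (a sketch that reuses the helper functions above, not the contents of `solutions/build_heatmaps.py`):
###Code
# Sketch: compute and display a dog heatmap at three input scales
for size in [(400, 640), (800, 1280), (1600, 2560)]:
    probas = forward_pass_resize("dog.jpg", size)
    heatmap = build_heatmap(probas, synset=s)
    display_img_and_heatmap("dog.jpg", heatmap)
###Output
_____no_output_____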
###Markdown
Combining the 3 heatmaps

By combining the heatmaps at different scales, we obtain much better information about the location of the dog.

**Bonus**
- Combine the three heatmaps by resizing them to a similar shape, and averaging them.
- A geometric average will work better than a standard arithmetic average!
###Code
# %load solutions/geom_avg.py
###Output
_____no_output_____ |
SongRecommenderEngine.ipynb | ###Markdown
Song Recommender Model

Dataset: [Million Song Dataset](http://millionsongdataset.com/)

Built using **Turi**. Built by **Vishal Sharma**.

Installing and Importing Libraries
###Code
!pip install turicreate
import turicreate as tc
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
###Output
_____no_output_____
###Markdown
Loading and Pre-processing Dataset
###Code
# train_file = 'https://static.turi.com/datasets/millionsong/10000.txt'
#fetching the triplets file and songs metadata file
triplets_file = 'https://static.turi.com/datasets/millionsong/10000.txt'
# songs_metadata_file = 'https://static.turi.com/datasets/millionsong/song_data.csv'
# read the triplets file with pandas and name the columns accordingly
song_df_1 = pd.read_csv(triplets_file,header=None, sep='\t')
song_df_1.columns = ['user_id', 'song_id', 'listen_count']
# song_df_1 = song_df_1[:1000]
# song_df_1 = song_df_1.truncate(before=0, after=2000)
#Read song metadata of songs
# song_df_2 = pd.read_csv(songs_metadata_file)
# Merge the two dataframes above on their shared song_id column to create the input dataframe for the recommender
# song_df = pd.merge(song_df_1, song_df_2.drop_duplicates(['song_id']), on="song_id", how="left")
# sf = tc.SFrame.read_csv(song_df, header=False, delimiter='\t', verbose=False)
# sf.rename({'X1':'user_id', 'X2':'song_id', 'X3':'listen_count'}).show()
sf = tc.SFrame(song_df_1)
sf.explore()
sf.show()
###Output
_____no_output_____
###Markdown
Train-test split
###Code
train_set, test_set = tc.recommender.util.random_split_by_user(sf, 'user_id', 'song_id', item_test_proportion=0.2)
###Output
_____no_output_____
###Markdown
Building Models
###Code
popularity_model = tc.popularity_recommender.create(train_set, 'user_id', 'song_id', target = 'listen_count')
item_sim_model = tc.item_similarity_recommender.create(train_set, 'user_id', 'song_id', target = 'listen_count')
fac_model = tc.factorization_recommender.create(train_set, 'user_id', 'song_id', target = 'listen_count')
###Output
_____no_output_____
###Markdown
Model Evaluation and Comparison
###Code
popularity_eval = popularity_model.evaluate(test_set)
item_sim_eval = item_sim_model.evaluate(test_set)
fac_eval = fac_model.evaluate(test_set)
# fac_model['precision_recall_overall']['cutoff']
plt.figure(figsize=(16, 12))
plt.plot(fac_eval['precision_recall_overall']['precision'], fac_eval['precision_recall_overall']['recall'], label = "Factorization Model")
plt.plot(popularity_eval['precision_recall_overall']['precision'], popularity_eval['precision_recall_overall']['recall'], label = "Popularity Model")
plt.plot(item_sim_eval['precision_recall_overall']['precision'], item_sim_eval['precision_recall_overall']['recall'], label = "Item Similarity Model")
plt.xlabel('Precision')
plt.ylabel('Recall')
plt.title('Model Comparison')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Results and Recommendations

Based on the precision-recall graph, we can conclude that the item similarity model performed best.
###Code
K = 10
users = tc.SArray(sf['user_id'].unique().head(100))
recs = item_sim_model.recommend(users=users, k=K)
recs.head()
###Output
_____no_output_____
###Markdown
Merging metadata of all songs
###Code
# Get the meta data of the songs
# The below will download a 75 MB file.
songs = tc.SFrame.read_csv('https://static.turi.com/datasets/millionsong/song_data.csv', verbose=False)
songs = songs[['song_id', 'title', 'artist_name']]
results = recs.join(songs, on='song_id', how='inner')
# Populate observed user-song data with song info
userset = frozenset(users)
ix = sf['user_id'].apply(lambda x: x in userset, int)
user_data = sf[ix]
user_data = user_data.join(songs, on='song_id')[['user_id', 'title', 'artist_name']]
# Print out some recommendations
for i in range(5):
user = list(users)[i]
print("User: " + str(i + 1))
user_obs = user_data[user_data['user_id'] == user].head(K)
del user_obs['user_id']
user_recs = results[results['user_id'] == str(user)][['title', 'artist_name']]
print("Songs liked by User: ")
print(user_obs.head(K))
print("Further song recommendation by our model:")
print(user_recs.head(K))
print("")
###Output
_____no_output_____ |
examples/examples-cpu/snowflake/snowflake-dask.ipynb | ###Markdown
Snowflake + Dask

How to load data from a Snowflake table or query into a Dask dataframe.

Connect to Snowflake

See [README](README.md) for more details on how to set up the credentials environment variables SNOWFLAKE_ACCOUNT, SNOWFLAKE_USER, and SNOWFLAKE_PASSWORD. The other variables can be set on your Jupyter server or overwritten below, based on the Snowflake warehouse and schema you used when running `load-data.sql`. Note that in order to update environment variables, your Jupyter server will need to be stopped.
###Code
import os
SNOWFLAKE_ACCOUNT = os.environ['SNOWFLAKE_ACCOUNT']
SNOWFLAKE_USER = os.environ['SNOWFLAKE_USER']
SNOWFLAKE_PASSWORD = os.environ['SNOWFLAKE_PASSWORD']
SNOWFLAKE_WAREHOUSE = os.environ['SNOWFLAKE_WAREHOUSE']
TAXI_DATABASE = os.environ['TAXI_DATABASE']
TAXI_SCHEMA = os.environ['TAXI_SCHEMA']
import snowflake.connector
conn_info = {
'account': SNOWFLAKE_ACCOUNT,
'user': SNOWFLAKE_USER,
'password': SNOWFLAKE_PASSWORD,
'warehouse': SNOWFLAKE_WAREHOUSE,
'database': TAXI_DATABASE,
'schema': TAXI_SCHEMA,
}
conn = snowflake.connector.connect(**conn_info)
###Output
_____no_output_____
###Markdown
Set up a query template

We need to set up a query template containing a bind variable; this will result in Dask issuing multiple queries that each extract a slice of the taxi data based on the pickup_datetime column. These slices will become our partitions in a Dask dataframe. We use a [binding for the Snowflake query](https://docs.snowflake.com/en/user-guide/python-connector-example.html#binding-data) so that we can pass different date values at execution time.
###Code
query = """
SELECT *
FROM taxi_yellow
WHERE
date(pickup_datetime) = %s
"""
###Output
_____no_output_____
###Markdown
Validate that the query works with pandas:
###Code
cur = conn.cursor().execute(query, '2019-01-01')
df = cur.fetch_pandas_all()
len(df), df.memory_usage().sum() / 1e6 # memory size in MB
###Output
_____no_output_____
###Markdown
Initialize Dask cluster
###Code
import dask
from dask.distributed import Client, wait
from dask_saturn import SaturnCluster
n_workers = 3
cluster = SaturnCluster(
n_workers=n_workers,
scheduler_size='medium',
worker_size='large',
nthreads=2
)
client = Client(cluster)
cluster
###Output
_____no_output_____
###Markdown
If you initialized your cluster from right here in this notebook, it might take a few minutes for all your nodes to become available. You can run the chunk below to block until all nodes are ready.

> **Pro tip:** Create and/or start your cluster from the "Dask" page in Saturn if you want to get a head start!
###Code
client.wait_for_workers(n_workers=n_workers)
###Output
_____no_output_____
###Markdown
Load larger data with Dask!

We set up a function with `dask.delayed`. `@delayed` is a decorator that turns a Python function into a function suitable for running on the Dask cluster. When you execute a delayed function, instead of executing the operation, it returns a delayed result that represents what the return value of the function will be. `dask.dataframe.from_delayed` takes a list of these delayed objects and concatenates them into a Dask dataframe.
###Code
import dask.dataframe as dd
print(query)
@dask.delayed
def load(conn_info, query, day):
conn = snowflake.connector.connect(**conn_info)
cur = conn.cursor().execute(query, str(day))
return cur.fetch_pandas_all()
out = load(conn_info, query, '2019-01-01')
out
###Output
_____no_output_____
###Markdown
We can call `compute()` to execute the function and see the output (in this case a Pandas dataframe)
###Code
type(out.compute())
###Output
_____no_output_____
###Markdown
Now, let's load more days using Dask! First we want to pull the range of dates for which we know data exists. We can run a quick Snowflake query for that:
###Code
date_query = """
SELECT
DISTINCT(DATE(pickup_datetime)) as date
FROM taxi_yellow
WHERE
pickup_datetime BETWEEN '2019-01-01' and '2019-01-31'
"""
dates_df = conn.cursor().execute(date_query).fetch_pandas_all()
dates = dates_df['DATE'].tolist()
dates[:5]
###Output
_____no_output_____
###Markdown
Then, we build up a list of delayed objects that call the `load()` function we created
###Code
delayed_obs = [load(conn_info, query, day) for day in dates]
delayed_obs[:5]
###Output
_____no_output_____
###Markdown
Finally, create a Dask Dataframe!
###Code
ddf = dd.from_delayed(delayed_obs)
ddf
###Output
_____no_output_____
###Markdown
Notice that the above command ran pretty quickly. This is because Dask only executes the task graph when you perform certain actions, such as writing a file or getting the `len` of the DataFrame
###Code
len(ddf)
###Output
_____no_output_____
###Markdown
We can use `repartition()` to introduce more parallelism. This helps downstream processes execute faster by splitting the work across more cores.
###Code
ddf = ddf.repartition(npartitions=100)
ddf
len(ddf)
###Output
_____no_output_____
###Markdown
The cell below will execute the Snowflake queries across the cluster, compute the row count and size of each partition in parallel, and then aggregate the results to present the row count and size of the entire Dask dataframe.
###Code
print(f'Num rows: {len(ddf)}, Size: {ddf.memory_usage(deep=True).sum().compute() / 1e6} MB')
###Output
_____no_output_____
###Markdown
The partitions in the Dask dataframe are pandas dataframes
###Code
ddf_part = ddf.partitions[0].compute()
type(ddf_part)
###Output
_____no_output_____
###Markdown
If we plan on performing a lot of operations using this Dask dataframe (such as training a machine learning model), and the data will fit in the memory of the _cluster_, we should `persist()` the dataframe to perform all the loading up-front.
###Code
from dask.distributed import wait
ddf = ddf.persist()
_ = wait(ddf)
###Output
_____no_output_____
###Markdown
The following cell should execute much faster than previously, because all the data is loaded into memory
###Code
print(f'Num rows: {len(ddf)}, Size: {ddf.memory_usage(deep=True).sum().compute() / 1e6} MB')
###Output
_____no_output_____ |
examples/toy/model-repressilator.ipynb | ###Markdown
Repressilator: Synthetic biological oscillator

This example shows how the [Repressilator model](http://pints.readthedocs.io/en/latest/toy/repressilator_model.html) can be used. The model, formulated as an ODE, has 6 state variables: 3 mRNA concentrations and 3 protein concentrations. Only the mRNA concentrations are visible. In the example below we'll call the three outputs `m-lacI`, `m-tetR`, and `m-cl`. The model has 4 parameters: `alpha_0`, `alpha`, `beta`, and `n`. For an analysis using ABC, see [Toni et al.](http://rsif.royalsocietypublishing.org/content/6/31/187.short).
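For orientation, the mRNA dynamics take the standard repressilator form $\frac{dm_i}{dt} = -m_i + \frac{\alpha}{1 + p_j^n} + \alpha_0$, where each mRNA $m_i$ is repressed by the protein $p_j$ of the preceding gene in the cycle, and the proteins follow $\frac{dp_i}{dt} = -\beta (p_i - m_i)$. (This is the usual formulation; see the model documentation linked above for the exact parameterization used by PINTS.)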
###Code
import pints
import pints.toy
import pints.plot
import numpy as np
import matplotlib.pyplot as plt
# Create a model
model = pints.toy.RepressilatorModel()
# Run a simulation
parameters = model.suggested_parameters()
times = model.suggested_times()
values = model.simulate(parameters, times)
print('Parameters:')
print(parameters)
# Plot the results
plt.figure()
plt.xlabel('Time')
plt.ylabel('Concentration')
plt.plot(times, values)
plt.legend(['m-lacI', 'm-tetR', 'm-cl'])
plt.show()
###Output
Parameters:
[ 1 1000 5 2]
###Markdown
We now set up a simple inference problem with the model. First, we add some noise to the simulated data:
###Code
# First add some noise
sigma = 5
noisy = values + np.random.normal(0, sigma, values.shape)
# Plot the results
plt.figure()
plt.xlabel('Time')
plt.ylabel('Concentration')
plt.plot(times, noisy)
plt.show()
###Output
_____no_output_____
###Markdown
Next, we set up a problem. Because this model has three outputs, we use a [MultiOutputProblem](http://pints.readthedocs.io/en/latest/core_classes_and_methods.htmlmulti-output-problem).
###Code
problem = pints.MultiOutputProblem(model, times, noisy)
loglikelihood = pints.GaussianKnownSigmaLogLikelihood(problem, sigma)
###Output
_____no_output_____
###Markdown
Now we're ready to try some inference, for example MCMC:
###Code
# Initial guesses
x0 = [
[2, 800, 3, 3],
[1, 1200, 6, 1],
[3, 2000, 1, 4],
]
mcmc = pints.MCMCController(loglikelihood, 3, x0)
mcmc.set_log_to_screen(False)
mcmc.set_max_iterations(6000)
chains = mcmc.run()
###Output
/home/mrobins/git/pints/pints/toy/_repressilator_model.py:96: RuntimeWarning: invalid value encountered in double_scalars
dy[2] = -y[2] + alpha / (1 + y[4]**n) + alpha_0
/home/mrobins/git/pints/pints/toy/_repressilator_model.py:95: RuntimeWarning: invalid value encountered in double_scalars
dy[1] = -y[1] + alpha / (1 + y[3]**n) + alpha_0
/home/mrobins/git/pints/pints/toy/_repressilator_model.py:94: RuntimeWarning: invalid value encountered in double_scalars
dy[0] = -y[0] + alpha / (1 + y[5]**n) + alpha_0
###Markdown
We can use the R-hat criterion and look at the trace plot to see whether the chains have converged:
###Code
# Check convergence using rhat criterion
print('R-hat:')
print(pints.rhat_all_params(chains))
plt.figure()
pints.plot.trace(chains, ref_parameters=parameters)
plt.show()
###Output
R-hat:
[1.080485387361437, 1.4665654941931392, 1.0589477266478617, 1.0584576277670263]
###Markdown
So it seems MCMC gets there in the end! We can use the final 1000 samples to plot the model predictions:
###Code
samples = chains[1][-1000:]
plt.figure(figsize=(12, 6))
pints.plot.series(samples, problem)
plt.show()
###Output
_____no_output_____ |
2d_correlation_plot.ipynb | ###Markdown
Correlation via 2D Histograms

* show that a density plot is necessary with too many data points
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import spearmanr, gaussian_kde
###Output
_____no_output_____
###Markdown
Utility functions
###Code
def create_2_fake_signals(duration=500, noise_level=1):
""" Create two Sine with additional noise """
time = np.linspace(0, 4*np.pi, duration)
signal1 = np.sin(time)+np.random.rand(time.size) * noise_level
signal2 = np.sin(time)+np.random.rand(time.size) * noise_level
return time, signal1, signal2
def show_plots(time, signal1, signal2):
""" Visualize correlation of two signals """
fig1 = plt.figure()
plt.plot(time, signal1,'.')
plt.plot(time, signal2,'.')
plt.xlabel("time [a.u.]")
plt.ylabel("signal [a.u.]")
plt.title("two sine with different noise")
corr, _ = spearmanr(signal1, signal2)
fig2 = plt.figure()
plt.plot(signal1,signal2,'.')
plt.xlabel("signal 1 [a.u.]")
plt.ylabel("signal 2 [a.u.]")
plt.title(f"scatter plot - coorelation: {corr:.3f}")
fig3 = plt.figure()
x, y, z = correlation_density(signal1, signal2, nbins=300)
plt.pcolormesh(x,y,z)
plt.xlabel("signal 1 [a.u.]")
plt.ylabel("signal 2 [a.u.]")
plt.title(f"2D histogram - correlation: {corr:.3f}")
def correlation_density(x, y, nbins=300):
    """ Estimate the joint density of x and y on an nbins x nbins grid via Gaussian KDE """
    X, Y = np.mgrid[x.min():x.max():nbins*1j, y.min():y.max():nbins*1j]  # regular evaluation grid
    positions = np.vstack([X.ravel(), Y.ravel()])  # grid points as a 2 x N array
    values = np.vstack([x, y])                     # observed samples as a 2 x N array
    kernel = gaussian_kde(values)                  # fit the kernel density estimate
    Z = np.reshape(kernel(positions).T, X.shape)   # evaluate it on the grid
    return X, Y, Z
###Output
_____no_output_____
###Markdown
Correlate

* low noise level
###Code
noise_level = 1
duration = 5000
time, singal1, signal2 = create_2_fake_signals(duration=duration, noise_level=noise_level)
show_plots(time, singal1, signal2)
###Output
_____no_output_____
###Markdown
* high noise level
###Code
noise_level = 7
duration = 5000
time, singal1, signal2 = create_2_fake_signals(duration=duration, noise_level=noise_level)
show_plots(time, singal1, signal2)
###Output
_____no_output_____ |
05_Nearest_Neighbor_Methods/03_Working_with_Text_Distances/03_text_distances.ipynb | ###Markdown
Text Distances

This notebook illustrates how to use the Levenshtein distance (edit distance) in TensorFlow. Get the required library and start a TensorFlow session.
###Code
import tensorflow as tf
sess = tf.Session()
###Output
_____no_output_____
###Markdown
First compute the edit distance between 'bear' and 'beers'
###Code
hypothesis = list('bear')
truth = list('beers')
h1 = tf.SparseTensor([[0,0,0], [0,0,1], [0,0,2], [0,0,3]],
hypothesis,
[1,1,1])
t1 = tf.SparseTensor([[0,0,0], [0,0,1], [0,0,2], [0,0,3],[0,0,4]],
truth,
[1,1,1])
print(sess.run(tf.edit_distance(h1, t1, normalize=False)))
###Output
[[ 2.]]
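###Markdown
As a cross-check, a minimal pure-Python Levenshtein distance (a standard dynamic-programming sketch, independent of TensorFlow) gives the same value:
###Code
def levenshtein(a, b):
    # classic dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

print(levenshtein('bear', 'beers'))  # 2, matching tf.edit_distance above
###Output
_____no_output_____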
###Markdown
Compute the edit distance between ('bear','beer') and 'beers':
###Code
hypothesis2 = list('bearbeer')
truth2 = list('beersbeers')
h2 = tf.SparseTensor([[0,0,0], [0,0,1], [0,0,2], [0,0,3], [0,1,0], [0,1,1], [0,1,2], [0,1,3]],
hypothesis2,
[1,2,4])
t2 = tf.SparseTensor([[0,0,0], [0,0,1], [0,0,2], [0,0,3], [0,0,4], [0,1,0], [0,1,1], [0,1,2], [0,1,3], [0,1,4]],
truth2,
[1,2,5])
print(sess.run(tf.edit_distance(h2, t2, normalize=True)))
###Output
[[ 0.40000001 0.2 ]]
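###Markdown
With `normalize=True`, each edit distance is divided by the length of the truth string: 'bear' -> 'beers' needs 2 edits against a truth of length 5 (0.4), while 'beer' -> 'beers' needs only 1 (0.2).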
###Markdown
Now compute the distance between four words and 'beers' more efficiently:
###Code
hypothesis_words = ['bear','bar','tensor','flow']
truth_word = ['beers']
num_h_words = len(hypothesis_words)
h_indices = [[xi, 0, yi] for xi,x in enumerate(hypothesis_words) for yi,y in enumerate(x)]
h_chars = list(''.join(hypothesis_words))
h3 = tf.SparseTensor(h_indices, h_chars, [num_h_words,1,1])
truth_word_vec = truth_word*num_h_words
t_indices = [[xi, 0, yi] for xi,x in enumerate(truth_word_vec) for yi,y in enumerate(x)]
t_chars = list(''.join(truth_word_vec))
t3 = tf.SparseTensor(t_indices, t_chars, [num_h_words,1,1])
print(sess.run(tf.edit_distance(h3, t3, normalize=True)))
###Output
[[ 0.40000001]
[ 0.60000002]
[ 1. ]
[ 1. ]]
###Markdown
Text Distances

This notebook illustrates how to use the Levenshtein distance (edit distance) in TensorFlow. Get the required library and start a TensorFlow session.
###Code
import tensorflow as tf
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
sess = tf.Session()
###Output
_____no_output_____
###Markdown
First compute the edit distance between 'bear' and 'beers'
###Code
hypothesis = list('bear')
truth = list('beers')
h1 = tf.SparseTensor([[0,0,0], [0,0,1], [0,0,2], [0,0,3]],
hypothesis,
[1,1,1])
t1 = tf.SparseTensor([[0,0,0], [0,0,1], [0,0,2], [0,0,3],[0,0,4]],
truth,
[1,1,1])
print(sess.run(tf.edit_distance(h1, t1, normalize=False)))
###Output
[[2.]]
###Markdown
Compute the edit distance between ('bear','beer') and 'beers':
###Code
hypothesis2 = list('bearbeer')
truth2 = list('beersbeers')
h2 = tf.SparseTensor([[0,0,0], [0,0,1], [0,0,2], [0,0,3], [0,1,0], [0,1,1], [0,1,2], [0,1,3]],
hypothesis2,
[1,2,4])
t2 = tf.SparseTensor([[0,0,0], [0,0,1], [0,0,2], [0,0,3], [0,0,4], [0,1,0], [0,1,1], [0,1,2], [0,1,3], [0,1,4]],
truth2,
[1,2,5])
print(sess.run(tf.edit_distance(h2, t2, normalize=True)))
###Output
[[ 0.40000001 0.2 ]]
###Markdown
Now compute the distance between four words and 'beers' more efficiently:
###Code
hypothesis_words = ['bear','bar','tensor','flow']
truth_word = ['beers']
num_h_words = len(hypothesis_words)
h_indices = [[xi, 0, yi] for xi,x in enumerate(hypothesis_words) for yi,y in enumerate(x)]
h_chars = list(''.join(hypothesis_words))
h3 = tf.SparseTensor(h_indices, h_chars, [num_h_words,1,1])
truth_word_vec = truth_word*num_h_words
t_indices = [[xi, 0, yi] for xi,x in enumerate(truth_word_vec) for yi,y in enumerate(x)]
t_chars = list(''.join(truth_word_vec))
t3 = tf.SparseTensor(t_indices, t_chars, [num_h_words,1,1])
print(sess.run(tf.edit_distance(h3, t3, normalize=True)))
###Output
[[ 0.40000001]
[ 0.60000002]
[ 1. ]
[ 1. ]]
|
ch08-analysis-and-viz.ipynb | ###Markdown
Analysis and Visualization

Data analysis and visualization are essential to science. This chapter will teach you the best ways to perform data analysis and visualization on the computer, saving time and allowing for more publications. Scientists encounter many types of data. Once those data have been collected and prepared, they must be loaded into the computer.

Loading Data

There are numerous Python packages for loading data into memory-accessible structures. These will be discussed in detail in chapter 11. Here, we will focus on four tools: NumPy, PyTables, Pandas, and Blaze. Numerous factors determine the right tool for data analysis. The most important factor is often the size of the data.

NumPy

For small data that can be loaded into memory all at once, NumPy is often a good choice. We will begin our discussion there. NumPy arranges data into an array of numbers. NumPy arrays are very common and very powerful. Below is code that tabulates the results of a count of a decaying isotope. The left-hand column holds the independent variable, time, and the right-hand column holds the dependent variable, the observed number of decays. The data are loaded by NumPy from a comma-separated value file with the following code:
###Code
import numpy as np # Imports numpy with alias np
decays_arr = np.loadtxt('data/decays.csv', delimiter=",", skiprows=1) # Creates an object with the loadtxt() function
decays_arr
###Output
_____no_output_____
###Markdown
Pandas

Pandas is a very flexible tool that provides a good alternative to NumPy or PyTables in many cases. It is very easy to load data into pandas. Observe:
###Code
import pandas as pd # Import pandas and alias it as pd
decays_df = pd.read_csv('data/decays.csv') # Creates a data frame object to hold the data loaded by read_csv()
decays_df
###Output
_____no_output_____
###Markdown
We can also use pandas to change the format of data. This code would create an HDF5 file called _decays.h5_ with a group node called _experimental_ if we ran it:
###Code
# import pandas as pd
# decays_df = pd.read_csv('data/decays.csv')
# decays_df.to_hdf('decays.h5', 'experimental')
###Output
_____no_output_____
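###Markdown
If you did run it, the table could be loaded back with pandas' `read_hdf` (shown commented out like the cell above; reading HDF5 with pandas requires the PyTables package):
###Code
# import pandas as pd
# decays_df = pd.read_hdf('decays.h5', 'experimental') # returns the stored DataFrame
###Output
_____no_output_____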
###Markdown
Blaze

Blaze is another tool. Similar to Pandas, it can easily convert data between different formats. However, Blaze is still in active development and not fully stable, so be cautious if you decide to use it. The following code takes the CSV file and turns it into a data descriptor, which it then transforms into a Blaze Table:
###Code
import blaze as bz # Imports blaze and aliases as bz
csv_data = bz.CSV('data/decays.csv') # Uses the CSV() constructor to transform the csv into blaze data
decays_tb = bz.Table(csv_data) # Transforms the data descriptor csv_data into a blaze table
###Output
_____no_output_____
###Markdown
Cleaning and Munging Data

Data munging refers to many things, but broadly means dealing with data. Typically, munging means converting data from its raw form to a well-structured format that can be used for plotting. Suppose you performed an experiment counting the decay rate of a radioactive source. However, a few things went wrong with the experiment, and you cannot repeat it due to time or financial constraints. In particular, let's imagine that during the measurement, a colleague walked through the laboratory with a stronger, more stable source, so that many of the measurements are biased by this strong source. Additionally, the lab lost power for a few seconds towards the end of the measurement, so some measurements are nonexistent. First, let's use Pandas to remove the rows with missing data from our table:
###Code
decay_df = pd.read_csv("data/many_decays.csv")
decay_df.count() # The count() method ignores the NaN values
decay_df = decay_df.dropna() # The dropna() method returns the dataframe without the NaN rows; assign it to keep the cleaned data
###Output
_____no_output_____
###Markdown
Visualization

Now that the data are a bit cleaner, let's plot them.

Matplotlib

Matplotlib is an amazing plotting tool for scientific computing. The following Python script will create a plot of the decay data:
###Code
import numpy as np # Imports and aliases NumPy
# as in the previous example, load decays.csv into a NumPy array
decaydata = np.loadtxt('data/decays.csv', delimiter=",", skiprows=1)
# provide handles for the x and y columns
time = decaydata[:,0]
decays = decaydata[:,1]
# import the matplotlib plotting functionality
import matplotlib
%matplotlib inline
import pylab as plt
plt.plot(time, decays) # Generates a plot of decays vs time
plt.xlabel('Time (s)')
plt.ylabel('Decays')
plt.title('Decays')
plt.grid(True) # Adds gridlines
#plt.savefig("decays_matplotlib.png") # saves the figure as a png
###Output
_____no_output_____
###Markdown
Here is an example of a rather long script to make a nice flyer for a talk about MatPlotLib:
###Code
# Import various necessary Python and matplotlib packages
import numpy as np
import matplotlib.cm as cm # Imports the colormaps library
from matplotlib.pyplot import figure, show, rc # Imports other useful libraries
from matplotlib.patches import Ellipse # We need the ellipse shape for our text boxes
# Create a square figure on which to place the plot
fig = figure(figsize=(8,8))
# Create square axes to hold the circular polar plot
ax = fig.add_axes([0.1, 0.1, 0.8, 0.8], polar=True)
# Generate 20 colored, angular wedges for the polar plot
N = 20
theta = np.arange(0.0, 2*np.pi, 2*np.pi/N)
radii = 10*np.random.rand(N)
width = np.pi/4*np.random.rand(N)
bars = ax.bar(theta, radii, width=width, bottom=0.0)
for r,bar in zip(radii, bars):
bar.set_facecolor(cm.jet(r/10.))
bar.set_alpha(0.5)
# Using dictionaries, create a color scheme for the text boxes
bbox_args = dict(boxstyle="round, pad=0.9", fc="green", alpha=0.5)
bbox_white = dict(boxstyle="round, pad=0.9", fc="1", alpha=0.9)
patch_white = dict(boxstyle="round, pad=1", fc="1", ec="1")
# Create various boxes with text annotations in them at specific
# x and y coordinates
ax.annotate(" ",
xy=(.5,.93), # Places an annotation box at the desired x amd y coordinates
xycoords='figure fraction', # Tells python to read those annotations as fractions of figure height and width
ha="center", va="center", # Aligns the text to the center of the box
bbox=patch_white) # Makes the box white
ax.annotate('Matplotlib and the Python Ecosystem for Scientific Computing',
xy=(.5,.95),
xycoords='figure fraction',
xytext=(0, 0), textcoords='offset points',
size=15,
ha="center", va="center",
bbox=bbox_args)
ax.annotate('Author and Lead Developer \n of Matplotlib ',
xy=(.5,.82),
xycoords='figure fraction',
xytext=(0, 0), textcoords='offset points',
ha="center", va="center",
bbox=bbox_args)
ax.annotate('John D. Hunter',
xy=(.5,.89),
xycoords='figure fraction',
xytext=(0, 0), textcoords='offset points',
size=15,
ha="center", va="center",
bbox=bbox_white)
ax.annotate('Friday November 5th \n 2:00 pm \n1106ME ',
xy=(.5,.25),
xycoords='figure fraction',
xytext=(0, 0), textcoords='offset points',
size=15,
ha="center", va="center",
bbox=bbox_args)
ax.annotate('Sponsored by: \n The Hacker Within, \n'
'The University Lectures Committee, \n The Department of '
'Medical Physics\n and \n The American Nuclear Society',
xy=(.78,.1),
xycoords='figure fraction',
xytext=(0, 0), textcoords='offset points',
size=9,
ha="center", va="center",
bbox=bbox_args)
#fig.savefig("plot.pdf")
###Output
_____no_output_____
###Markdown
Further cool examples of plots made with matplotlib may be found in the matplotlib gallery.

Bokeh

Bokeh is another plotting tool that is quite similar to matplotlib, but specialized for generating interactive plots for the web. The following script makes an HTML file holding the plot of the decay data:
###Code
import numpy as np
# import the Bokeh plotting tools
from bokeh import plotting as bp
# as in the matplotlib example, load decays.csv into a NumPy array
decaydata = np.loadtxt('data/decays.csv',delimiter=",",skiprows=1)
# provide handles for the x and y columns
time = decaydata[:,0]
decays = decaydata[:,1]
# define some output file metadata
bp.output_file("decays.html", title="Experiment 1 Radioactivity")
# create a figure with fun Internet-friendly features (optional)
bp.figure(tools="pan,wheel_zoom,box_zoom,reset,previewsave")
# on that figure, create a line plot
bp.figure().line(time, decays, x_axis_label="Time (s)", y_axis_label="Decays (#)",
color='#1F78B4', legend='Decays per second')
# additional customization to the figure can be specified separately
bp.curplot().title = "Decays"
bp.grid().grid_line_alpha=0.3
# open a browser
bp.show()
###Output
_____no_output_____ |
notebooks/figure1.ipynb | ###Markdown
Contents

Reload data from `CAISO_efs.ipynb` and `UK_efs.ipynb` to generate figure 1 according to the Joule submission guidelines.
###Code
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
COLORS = [(0.12109375, 0.46484375, 0.703125),
(0.99609375, 0.49609375, 0.0546875),
(0.171875, 0.625, 0.171875),
(0.8359375, 0.15234375, 0.15625),
(0.578125, 0.40234375, 0.73828125),
(0.546875, 0.3359375, 0.29296875),
(0.88671875, 0.46484375, 0.7578125),
(0.49609375, 0.49609375, 0.49609375),
(0.734375, 0.73828125, 0.1328125),
(0.08984375, 0.7421875, 0.80859375)]
import calendar
# Set font sizes
SMALL_SIZE = 7
MEDIUM_SIZE = 8
BIGGER_SIZE = 9
# column sizes
cm_to_in = 0.393701
col_width3 = cm_to_in * 17.2
col_width2 = cm_to_in * 11.2
col_width1 = cm_to_in * 5.3
from matplotlib import rcParams
rcParams['font.family'] = 'sans-serif'
rcParams['font.sans-serif'] = ['Avenir']
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
plt.rc('axes', linewidth=.5)       # axis spine line width
plt.rc('xtick.minor', width=.5)    # tick mark line widths
plt.rc('xtick.major', width=.5)
plt.rc('ytick.minor', width=.5)
plt.rc('ytick.major', width=.5)
CARBON_INTENSITY = {"biomass": 18, "hydro": 4, "nuclear": 16,
"solar": 46, "gas": 469, "wind": 12,
"coal": 1001, "oil": 840}
# Reload data
CA_mefs = pd.read_csv('figures/CA_mefs.csv', index_col=0)
CA_aefs = pd.read_csv('figures/CA_aefs.csv', index_col=0)
UK_mefs = pd.read_csv('figures/UK_mefs.csv', index_col=0)
UK_aefs = pd.read_csv('figures/UK_aefs.csv', index_col=0)
for df in [CA_mefs, CA_aefs, UK_mefs, UK_aefs]:
df.columns = [int(col) for col in df.columns]
fig, ax = plt.subplots(figsize=(col_width3, 2.5))
ax.axis('off')
width = 0.35
height = 0.65
left1 = 0.08
left2 = 0.63
bottom = 0.2
ax1 = fig.add_axes([left1, bottom, width, height])
ax2 = fig.add_axes([left2, bottom, width, height])
lw=1
ms=1.5
ax1.plot([], [], label="AEFs", color=(.33,.33,.33), marker='o',
ms=ms, lw=lw)
ax1.plot([], [], label="MEFs", color=(.33,.33,.33), lw=lw)
for ax, aefs, mefs, ax_title, ylim, yticks, leglab in zip(
[ax1, ax2], [CA_aefs, UK_aefs],
[CA_mefs, UK_mefs],
['(a) California: hourly AEFs and MEFs',
'(b) Great Britain: hourly AEFs and MEFs'],
[[0,500], [0,800]],
[[0, 200, 400], [0, 150, 300, 600, 750]],
[True, False]):
ax.axvspan(0, 6, facecolor='b', alpha=0.05)
ax.axvspan(19, 23, facecolor='b', alpha=0.05)
ax.axvspan(6, 19, facecolor='y', alpha=0.05)
ax.text(.5, .05, "Day", fontsize=BIGGER_SIZE, ha='center', transform=ax.transAxes)
ax.text(.02, .05, "Night", fontsize=BIGGER_SIZE, ha='left', transform=ax.transAxes)
ax.text(.98, .05, "Night", fontsize=BIGGER_SIZE, ha='right', transform=ax.transAxes)
for i, y in enumerate(range(2015, 2019)):
label = '__nolegend__'
if leglab:
label = str(y)
ax.plot(mefs[y], label=label, color=COLORS[i], lw=lw)
ax.plot(aefs[y], color=COLORS[i], marker='o',
label='__nolegend__', lw=lw, ms=ms)
ax.grid(True)
ax.set_xlim([0,23])
ax.set_ylim(ylim)
ax.set_xlabel('Hour of the day')
ax.set_title(ax_title, fontsize=BIGGER_SIZE);
ax.set_ylabel('kg/MWh');
ax.set_yticks(yticks)
ax.set_yticks([CARBON_INTENSITY["solar"], CARBON_INTENSITY["gas"]], minor=True)
ax.set_yticklabels(["Solar", "Gas"], minor=True, fontweight='bold')
ax.grid(linewidth=.5)
i += 1
ax1.plot(CA_mefs[2025], label=str(2025), color=COLORS[i], lw=lw)
ax1.plot(CA_aefs[2025], label="__nolegend__", marker='o',
color=COLORS[i], lw=lw, ms=ms)
ax1.legend(loc=2, bbox_to_anchor=(1.05, .8))
#plt.savefig('figures/fig1.pdf', bbox_inches='tight')
plt.savefig('figures/fig1.pdf', dpi=300)
plt.savefig('figures/fig1.png', dpi=300)
###Output
_____no_output_____
###Markdown
PoRB-NET
###Code
sys.path.append('../src/porbnet')
import networks_porbnet
import util_porbnet
torch.manual_seed(0)
dir_out = 'figure1/porbnet'
if not os.path.exists(dir_out):
os.makedirs(dir_out)
# Model parameters
C = [-2,2] # Poisson process region \mathcal{C}
intensity = 25 # uniform intensity value \lambda (fixed in this case)
s2_0 = 1. # reference scale s^2_0
prior_w_sig2 = .1 # prior variance of output weights \sigma^2_w
prior_b_sig2 = .1 # prior variance of output bias \sigma^2_b
intensity_func = util_porbnet.Piecewise(np.array([C[0],C[1]]),np.array([intensity]))
dim_hidden_initial = int((C[1]-C[0])*intensity)
porbnet = networks_porbnet.RBFN(dim_hidden_initial=dim_hidden_initial, \
dim_hidden_max=3*dim_hidden_initial, \
intensity=intensity_func, \
s2_0=s2_0, \
prior_w_sig2 = prior_w_sig2*np.sqrt(np.pi/s2_0), \
prior_b_sig2 = prior_b_sig2, \
sig2 = .0025)
# Burn-in
writer = SummaryWriter(os.path.join(dir_out, 'log/burnin'))
_, _, eps_adjust = porbnet.sample_posterior(x=x, \
y=y, \
n_samp=500, \
x_plot=x_plot, \
eps=.001, \
n_adapt_eps=25, \
n_rep_resize=0, \
n_bigwrite=5, \
writer=writer, \
record=True, \
n_print=5)
writer.close()
# Samples
writer = SummaryWriter(os.path.join(dir_out, 'log/samples'))
accept, samples, _ = porbnet.sample_posterior(x=x, \
y=y, \
n_samp=10000, \
x_plot=x_plot, \
eps=eps_adjust, \
n_print=5, \
n_rep_resize=1, \
n_bigwrite=2, \
writer=writer, \
record=True)
writer.close()
fig, ax = plt.subplots()
y_plot = samples['y_plot_pred'].numpy()[:,:,np.newaxis]
plot_posterior_predictive(x_plot.numpy(), y_plot, ax=ax, x=x.numpy(),y=y.numpy(), sig2=1e-4, s=2)
###Output
_____no_output_____
###Markdown
BNN
###Code
sys.path.append('../src/bnn')
import networks_bnn
import util_bnn
dir_out = 'figure1/bnn'
if not os.path.exists(dir_out):
os.makedirs(dir_out)
bnn = networks_bnn.BNN(dim_in = 1,\
dim_hidden=100, \
dim_out = 1, \
prior_w1_sig2 = 10., \
prior_b1_sig2 = 1., \
prior_w2_sig2 = .1, \
prior_b2_sig2 = .1, \
sig2=.0025)
f_samp = bnn.sample_functions_prior(x_plot, n_samp=1000).detach()
util_bnn.plot_prior_predictive(x_plot.numpy(), f_samp.numpy(), bins=20, plot_all_functions=True)
# Burn-in
writer = SummaryWriter(os.path.join(dir_out, 'log/burnin'))
_, _, eps_adjust = bnn.sample_posterior(x=x, \
y=y, \
n_samp=500, \
x_plot=x_plot, \
eps=.0001, \
n_adapt_eps=25, \
n_bigwrite=5, \
writer=writer, \
record=True, \
n_print=5)
writer.close()
# Samples
writer = SummaryWriter(os.path.join(dir_out, 'log/samples'))
accept_bnn, samples_bnn, _ = bnn.sample_posterior(x=x, \
y=y, \
n_samp=1000, \
x_plot=x_plot, \
eps=eps_adjust, \
n_print=5, \
n_bigwrite=5, \
writer=writer,\
record=True)
writer.close()
fig, ax = plt.subplots()
y_plot = samples_bnn['y_plot_pred'].numpy()[:,:,np.newaxis]
plot_posterior_predictive(x_plot.numpy(), y_plot, ax=ax, x=x.numpy(),y=y.numpy(), s=2)
plt.rcParams.update({'font.size': 4})
plt.rcParams.update({'legend.fontsize': 5})
plt.rcParams.update({'axes.labelsize': 5})
plt.rcParams.update({'axes.titlesize': 8})
fig, ax = plt.subplots(1,2, sharey=True, sharex=True)
ax[0].set_title('BNN')
y_plot = samples_bnn['y_plot_pred'].numpy()[:,:,np.newaxis]
plot_posterior_predictive(x_plot.numpy(), y_plot, ax=ax[0], x=x.numpy(),y=y.numpy(), s=2)
ax[0].set_xlabel(r'$x$')
ax[1].set_title('PoRB-Net')
y_plot = samples['y_plot_pred'].numpy()[:,:,np.newaxis]
plot_posterior_predictive(x_plot.numpy(), y_plot, ax=ax[1], x=x.numpy(),y=y.numpy(), s=2)
ax[1].set_xlabel(r'$x$')
fig.set_size_inches(2.9, 1.5)
fig.tight_layout()
fig.savefig('figure1/figure1.pdf', bbox_inches='tight', pad_inches=.01)
fig.savefig('figure1/figure1.png', bbox_inches='tight', pad_inches=.01)
###Output
_____no_output_____
###Markdown
PICIGS youth unemployment
###Code
# read data
df = pd.read_csv('../data/estat_une_rt_m.csv')
# set code as index
df.set_index('freq;s_adj;age;unit;sex;geo\TIME_PERIOD', inplace=True)
# convert column names into datetime
df.columns = df.columns.to_series().apply(lambda s: pd.to_datetime(s.strip(), format='%Y-%m'))
# filter to: from January 2008 to December 2016 (matches the slice below)
df = df.loc[:,'2008-01':'2016-12']
country_codes = {
'Portugal': 'PT',
'Ireland': 'IE',
'Cyprus': 'CY',
'Italy': 'IT',
'Greece': 'EL',
'Spain': 'ES',
'EU': 'EU27_2007' # 27 countries (EU 2007-2013)
}
def filter_seasonally_adjusted(index):
return index.to_series().apply(lambda s: s.split(';')[1] == 'SA')
df = df[filter_seasonally_adjusted(df.index)]
def filter_youth(index):
return index.to_series().apply(lambda s: s.split(';')[2] == 'Y_LT25')
df = df[filter_youth(df.index)]
def filter_unit(index):
return index.to_series().apply(lambda s: s.split(';')[3] == 'PC_ACT')
df = df[filter_unit(df.index)]
def filter_sex(index):
return index.to_series().apply(lambda s: s.split(';')[4] == 'T')
df = df[filter_sex(df.index)]
def filter_picigs(index):
return index.to_series().apply(lambda s: s.split(';')[-1] in country_codes.values()
and s.split(';')[-1] != 'EU27_2007')
piigs = df[filter_picigs(df.index)]
piigs = piigs.astype(float)
print(f'Average youth unemployment in PICIGS between 01/2008 and 12/2016 is {np.round(piigs.mean().mean(),2)} %.')
def filter_countries(index):
return index.to_series().apply(lambda s: s.split(';')[-1] in country_codes.values())
df = df[filter_countries(df.index)]
df = df.astype(float)
unemployment_df = pd.DataFrame(
data = np.hstack(
[df.loc[:,f'{year}-01':f'{year}-12'].mean(axis=1).to_numpy().reshape(-1,1)
for year in np.arange(2008,2017)]
),
index = df.index,
columns = pd.to_datetime(np.arange(2008,2017), format='%Y')
)
unemployment_df
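# Aside (a sketch, not from the original): the yearly means above can also be
# computed by resampling, assuming df's columns are the monthly DatetimeIndex
# constructed earlier in this section.
unemployment_alt = df.T.resample('YS').mean().T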
labels = ['Cyprus', 'Greece', 'Spain', 'EU', 'Ireland', 'Italy', 'Portugal']
markers = ['1','o', '+', '^', 'v', 's', '*']
fig, ax = plt.subplots(figsize=(10,6))
for i,series in df.reset_index(drop=True).iterrows():
ax.plot(series[::3], marker=markers[i], label=labels[i])
_ = ax.set_xticks(series.index[::12])
_ = ax.set_xticklabels(series.index[::12].to_series().apply(lambda s: pd.to_datetime(s).strftime('%m/%y')), rotation=45)
ax.set_ylabel('youth unemployment')
ax.set_xlabel('month')
ax.legend(loc='upper left')
ax.grid()
#fig.savefig('../figures/picigs-youth_unemployment.png')
###Output
_____no_output_____
###Markdown
Greece: emigration
###Code
# read data
df = pd.read_csv('../data/ELSTAT-greece-emigration.csv', index_col=0)
df = pd.DataFrame(
data=np.vectorize(lambda s: s.replace(',',''))(df.values.flatten()).reshape(df.shape).astype(int),
index=df.index,
columns=df.columns.to_series().apply(lambda s: pd.to_datetime(s, format='%Y'))
)
twenties = df.loc[:,'2008':'2016'].iloc[5:7].sum()  # age groups covering ~20-30
rest = df.loc[:,'2008':'2016'].iloc[np.r_[1:5,7:19]].sum()  # all remaining age groups
fig, ax = plt.subplots()
ax.plot(twenties, marker='+', label='20-30')
ax.plot(rest, marker='o', label='rest')
ax.legend(loc='upper left')
fig, ax = plt.subplots()
ax.plot(df.loc['TOTAL','2008':'2016'], marker='o', label='total')
ax.legend(loc='upper left')
ax.set_yticks(np.arange(40000,120001,20000))
ax.grid()
emmigration_series = df.loc['TOTAL','2008':'2016'].copy()
###Output
_____no_output_____
###Markdown
Trust in the ECB
###Code
# read data
df = pd.read_csv('../data/estat_sdg_16_60.csv', index_col=0)
# convert columns to datetime
df.columns = df.columns.to_series().apply(lambda s: pd.to_datetime(s.strip(), format='%Y'))
# filter to 2008-2016
df = df.loc[:,'2008':'2016']
# filter to ECB
df = df[df.index.to_series().apply(lambda s: s.split(';')[2] == 'ECB')]
picigs = {
'Portugal': 'PT',
'Ireland': 'IE',
'Cyprus': 'CY',
'Italy': 'IT',
'Greece': 'EL',
'Spain': 'ES',
}
eu = {
# picigs
'Portugal': 'PT',
'Ireland': 'IE',
'Cyprus': 'CY',
'Italy': 'IT',
'Greece': 'EL',
'Spain': 'ES',
# rest
'Austria': 'AT',
'Belgium': 'BE',
'Bulgaria': 'BG',
'Czechia': 'CZ',
'Denmark': 'DK',
'Estonia': 'EE',
'Finland': 'FI',
'France': 'FR',
'Germany': 'DE',
'Hungary': 'HU',
'Latvia': 'LV',
'Lithuania': 'LT',
'Luxembourg': 'LU',
'Malta': 'MT',
'Netherlands': 'NL',
'Poland': 'PL',
'Romania': 'RO',
'Slovakia': 'SK',
'Slovenia': 'SI',
'Sweden': 'SE',
'United Kingdom': 'UK'
}
# drop non-EU states and EU mean
df = df[df.index.to_series().apply(lambda s: s.split(';')[-1] in eu.values())]
df = df.astype(float)
trust_eu_series = df.mean()
trust_picigs_series = df[df.index.to_series().apply(lambda s: s.split(';')[-1] in picigs.values())].mean()
trust_greece_series = df[df.index.to_series().apply(lambda s: s.split(';')[-1] == 'EL')].T.iloc[:,0]
fig, ax = plt.subplots()
ax.plot(trust_eu_series, label='EU', marker='o')
ax.plot(trust_picigs_series, label='PICIGS', marker='v')
#_ = ax.set_xticks(series.index[::12])
#_ = ax.set_xticklabels(series.index[::12].to_series().apply(lambda s: pd.to_datetime(s).strftime('%m/%y')), rotation=45)
#ax.set_ylabel('youth unemployment')
#ax.set_xlabel('month')
ax.legend(loc='upper right')
ax.grid()
#fig.savefig('../figures/piigs-youth_unemployment.png')
###Output
_____no_output_____
###Markdown
Greece: electoral rise of Golden Dawn
###Code
hellenic_parliament = pd.Series(
data=[0.3, 7, 6.9, 6.4, 7],
index=pd.to_datetime(['10/09', '05/12', '06/12', '01/15', '09/15'], format='%m/%y')
)
eu_parliament = pd.Series(
data=[0.5, 9.4],
index=pd.to_datetime(['06/09', '05/14'], format='%m/%y')
)
xticks = np.sort(np.unique(np.concatenate((hellenic_parliament.index.to_numpy(), eu_parliament.index.to_numpy()))))
xticks = np.delete(xticks, 2)
fig, ax = plt.subplots()
ax.plot(hellenic_parliament, marker='o', label='Hellenic Parliament')
ax.plot(eu_parliament, marker='v', label='European Parliament')
_ = ax.set_xticks(xticks)
_ = ax.set_xticklabels(pd.Series(xticks).apply(lambda s: pd.to_datetime(s).strftime('%m/%y')), rotation=45)
ax.set_ylabel('%', rotation='horizontal')
ax.legend()
ax.grid()
###Output
_____no_output_____
###Markdown
Combined
###Code
sorted_labels = ['Cyprus','Portugal', 'Ireland', 'Italy', 'Greece', 'Spain', 'EU']
unemployment_df = unemployment_df.set_index(pd.Index(labels)).reindex(sorted_labels)
sorted_markers = ['*', 'v', 's', 'o', '+', '^']
plt.rcParams.update({'font.size': 16})
fig, axes = plt.subplots(2, 2, figsize=(12,10))
lines = []
line_labels = []
#ax1 = plt.subplot2grid((3,2), (0,0), colspan=2)
#ax2 = plt.subplot2grid((3,2), (1,0), colspan=2)
#ax3 = axes[2,0]
#ax4 = axes[2,1]
ax1 = axes[0,0]
ax2 = axes[0,1]
ax3 = axes[1,0]
ax4 = axes[1,1]
#############################
### A: youth unemployment ###
#############################
# greece
l, = ax1.plot(unemployment_df[unemployment_df.index == 'Greece'].T, marker='o', label='Greece')
lines.append(l)
line_labels.append('Greece')
# picigs
l, = ax1.plot(unemployment_df[unemployment_df.index != 'EU'].mean(), marker='v', label='PICIGS')
lines.append(l)
line_labels.append('PICIGS')
# EU
l, = ax1.plot(unemployment_df[unemployment_df.index == 'EU'].T, marker='s', label='EU')
lines.append(l)
line_labels.append('EU')
_ = ax1.set_xticks(unemployment_df.columns)
_ = ax1.set_xticklabels(unemployment_df.columns.to_series().apply(lambda s: s.strftime('%y')), rotation=45)
ax1.set_ylabel('youth unemployment (%)', rotation='vertical')
#ax1.legend(loc='upper left')
ax1.grid()
ax1.set_title('A')
#######################
### B: trust in ECB ###
#######################
ax2.plot(trust_greece_series, label='Greece', marker='o', color='C0')
ax2.plot(trust_eu_series, label='EU', marker='s', color='C2') # keep marker and color consistent with ax1
ax2.plot(trust_picigs_series, label='PICIGS', marker='v', color='C1') # keep marker and color consistent with ax1
#lines.append(l)
#line_labels.append('PIIGS')
_ = ax2.set_xticks(trust_eu_series.index)
_ = ax2.set_xticklabels(trust_eu_series.index.to_series().apply(lambda s: s.strftime('%y')), rotation=45)
#_ = ax3.set_yticks(np.arange(30,51,10))
#_ = ax3.set_yticklabels(['40k','60k','80k','100k','120k'])
ax2.set_ylabel('trust in ECB (%)', rotation='vertical')
#ax3.legend(loc='upper right')
ax2.grid()
ax2.set_title('B')
#############################
### C: Greece emigration ###
#############################
l, = ax3.plot(emmigration_series, marker='o', label='Greece', color='C0') # keep marker and color consistent with ax1
# don't add to legend because already present
_ = ax3.set_xticks(emmigration_series.index)
_ = ax3.set_xticklabels(emmigration_series.index.to_series().apply(lambda s: s.strftime('%y')), rotation=45)
_ = ax3.set_yticks(np.arange(40000,120001,20000))
_ = ax3.set_yticklabels(['40k','60k','80k','100k','120k'])
ax3.set_ylabel('emigration', rotation='vertical')
#ax3.legend(loc='upper left')
ax3.grid()
ax3.set_title('C')
########################################
### D: electoral rise of golden dawn ###
########################################
ax4.plot(hellenic_parliament, marker='o', color='C0', linestyle='dashed', label='Hellenic Parliament')
ax4.plot(eu_parliament, marker='o', color='C0', linestyle='dotted', label='European Parliament')
_ = ax4.set_xticks(xticks)
_ = ax4.set_xticklabels(pd.Series(xticks).apply(lambda s: pd.to_datetime(s).strftime('%m/%y')), rotation=45)
ax4.set_ylabel('Golden Dawn: total votes (%)', rotation='vertical')
ax4.legend()
ax4.grid()
ax4.set_title('D')
#########################
### figure aesthetics ###
#########################
# shared legend
fig.legend(handles=lines, # The line objects
labels=line_labels, # The labels for each line
loc="center right", # Position of legend
borderaxespad=0.1, # Small spacing around legend box
#title="Legend Title" # Title for the legend
)
fig.tight_layout()
fig.subplots_adjust(right=0.85)
fig.savefig('../figures/figure1.png')
###Output
_____no_output_____ |
permutation_test.ipynb | ###Markdown
Testing differences between groups
###Code
# Import numerical, data and plotting libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
# Only show 4 decimals when printing
np.set_printoptions(precision=4)
# Show the plots in the notebook
%matplotlib inline
###Output
_____no_output_____
###Markdown
Imagine we have some measures of psychopathy in 12 students. 4 students are from Berkeley, and 4 students are from MIT.
###Code
psychos = pd.read_csv('psycho_students.csv')
psychos
###Output
_____no_output_____
###Markdown
We find that the mean score for the Berkeley students is different from the mean score for the MIT students:
###Code
berkeley_students = psychos[psychos['university'] == 'Berkeley']
berkeley_students
mit_students = psychos[psychos['university'] == 'MIT']
mit_students
berkeley_scores = berkeley_students['psychopathy']
mit_scores = mit_students['psychopathy']
berkeley_scores.mean(), mit_scores.mean()
###Output
_____no_output_____
###Markdown
Here is the difference between the means:
###Code
mean_diff = berkeley_scores.mean() - mit_scores.mean()
mean_diff
###Output
_____no_output_____
###Markdown
That's the difference we see. But if we take any 8 students from a single university, take the mean of the first four and the mean of the second four, there will almost certainly be a difference in the means, just because there is some difference across individuals in the psychopathy score. Is the difference we see unusual compared to the differences we would see if we took eight students from the same university and compared the means of the first four and the second four? For a moment, let us pretend that all our Berkeley and MIT students come from the same university. Then I can pool the Berkeley and MIT students together.
###Code
pooled = pd.concat([berkeley_scores, mit_scores]).values
pooled
###Output
_____no_output_____
###Markdown
If there is no difference between Berkeley and MIT, then it should be OK to just shuffle the students to a random order, like this:
###Code
np.random.shuffle(pooled)
pooled
###Output
_____no_output_____
###Markdown
Now I can just pretend that the first four students are from one university, and the last four are from another university. Then I can compare the means.
###Code
fake_berkeley = pooled[:4]
fake_mit = pooled[4:]
np.mean(fake_berkeley) - np.mean(fake_mit)
###Output
_____no_output_____
###Markdown
This is one difference in means I might see, if there was no real difference between the groups. Put more formally, the difference in means is my *statistic*. The value of the statistic above is one value from the distribution of all possible values, that would arise from random sampling. This distribution is called the *sampling distribution* of the statistic.Now let us build up this distribution by repeating the procedure 10000 times.
###Code
fake_differences = np.zeros(10000)
for i in range(10000):
np.random.shuffle(pooled)
diff = np.mean(pooled[:4]) - np.mean(pooled[4:])
fake_differences[i] = diff
###Output
_____no_output_____
###Markdown
The 10000 values we calculated form the *sampling distribution*. Let's have a look:
###Code
plt.hist(fake_differences)
plt.title("Sampling distribution of mean difference");
###Output
_____no_output_____
###Markdown
Where does the value we actually see sit in this histogram? More specifically, how many of the values in this histogram are less than or equal to the value we actually see?
###Code
# We will count the number of fake_differences <= our observed
count = 0
# Go through each of the 10000 values one by one
for diff in fake_differences:
if diff <= mean_diff:
count = count + 1
proportion = count / 10000
proportion
###Output
_____no_output_____
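###Markdown
As an aside (not part of the original analysis), the counting loop above can be vectorized; a minimal sketch reusing `fake_differences` and `mean_diff` from the cells above:
###Code
fake = np.array(fake_differences)
# proportion of permuted differences at or below the observed one
p_left = np.count_nonzero(fake <= mean_diff) / len(fake)
# a two-sided variant, for comparison
p_two = np.count_nonzero(np.abs(fake) >= abs(mean_diff)) / len(fake)
p_left, p_two
###Output
_____no_output_____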
###Markdown
KS-test
###Code
# imports needed by this and the following cells; ic_df and shuffled_ic_df
# are assumed to be loaded beforehand (they are not defined above)
import seaborn as sns
from scipy.stats import ks_2samp
from statsmodels.distributions.empirical_distribution import ECDF
from IPython.display import display

sns.set(context="notebook", style="ticks", font="Helvetica")
def get_norm(arr):
return {
"weights": np.ones(len(arr)) / len(arr)
}
ks_summaries = []
for stretch in [7,14,21]:
n_bins=50
display("Stretch size {}".format(stretch))
sequential = ic_df[(ic_df.stretch == stretch) & (ic_df.n_genes >= stretch / 7 * 6)].ic
    shuffled = shuffled_ic_df[(shuffled_ic_df.stretch == stretch) & (shuffled_ic_df.n_genes >= stretch / 7 * 6)].ic
display(ks_2samp(sequential, shuffled))
ks_summaries += [{
"stretch_size": stretch,
"ks": ks_2samp(sequential, shuffled)
}]
bins=np.histogram(shuffled.dropna(), bins=n_bins)[1]
sns.distplot(sequential.dropna(), kde=False, hist_kws=get_norm(sequential.dropna()), label="Original", bins=bins)
sns.distplot(shuffled.dropna(), kde=False, hist_kws=get_norm(shuffled.dropna()), label="Shuffled", bins=bins)
plt.legend()
# plt.arrow(2.18,1.5,0,-1,head_width=0.02, fc='k', ec='k')
# plt.arrow(2.23,1.5,0,-1,head_width=0.02, fc='k', ec='k')
# plt.arrow(2.57,1.2,0,-1,head_width=0.05, fc='k', ec='k')
# plt.arrow(2.63,1.2,0,-1,head_width=0.05, fc='k', ec='k')
plt.show()
df_ecdf = ECDF(sequential)
shuffled_df_ecdf = ECDF(shuffled)
x = np.arange(0,5,0.01)
sns.lineplot(x, df_ecdf(x), drawstyle='steps-post')
sns.lineplot(x, shuffled_df_ecdf(x), drawstyle='steps-post')
sns.lineplot(x, shuffled_df_ecdf(x) - df_ecdf(x), drawstyle='steps-post')
plt.show()
###Output
_____no_output_____
###Markdown
For the figure
###Code
stretch=14
n_bins=50
sns.set(font_scale=1.5, style="ticks", font="Arial")
display("Stretch size {}".format(stretch))
sequential = ic_df[(ic_df.stretch == stretch) & (ic_df.n_genes >= stretch / 7 * 6)].ic
shuffled = shuffled_ic_df[(shuffled_ic_df.stretch == stretch) & (shuffled_ic_df.n_genes >= stretch / 7 * 6)].ic
display(ks_2samp(sequential, shuffled))
ks_summaries += [{
"stretch_size": stretch,
"ks": ks_2samp(sequential, shuffled)
}]
bins=np.histogram(shuffled.dropna(), bins=n_bins)[1]
sns.distplot(sequential.dropna(), kde=False, hist_kws=get_norm(sequential.dropna()), label="Original", bins=bins)
sns.distplot(shuffled.dropna(), kde=False, hist_kws=get_norm(shuffled.dropna()), label="Shuffled", bins=bins)
plt.legend()
plt.xlabel("IC")
plt.ylabel("Frequency")
# plt.arrow(2.18,1.5,0,-1,head_width=0.02, fc='k', ec='k')
# plt.arrow(2.23,1.5,0,-1,head_width=0.02, fc='k', ec='k')
# plt.arrow(2.57,1.2,0,-1,head_width=0.05, fc='k', ec='k')
# plt.arrow(2.63,1.2,0,-1,head_width=0.05, fc='k', ec='k')
plt.show()
pd.DataFrame([{"stretch": x['stretch_size'], "pvalue": x["ks"][1]} for x in ks_summaries]).set_index("stretch").T.to_csv("{}/chr_ks.csv".format(dataset), index=False)
###Output
_____no_output_____
###Markdown
Permutation tests (genes shuffled)
###Code
from tqdm import tqdm
tqdm.pandas()  # registers the .progress_apply used below

def get_statistics(df):
ics = df.ic
return pd.Series({
"median": ics.median(),
"percentile_90": ics.quantile(0.9),
"percentile_10": ics.quantile(0.1),
"percentile_97.5": ics.quantile(0.975),
"percentile_02.5": ics.quantile(0.025),
"quantile_ratio": ics.quantile(0.9) / ics.quantile(0.1),
# "skew": ics.skew()
})
sns.set(context="notebook", style="ticks", font="Helvetica")
permutation_summaries = []
for stretch in [7,14,21]:
display("Stretch "+str(stretch))
orig = ic_df[(ic_df.stretch == stretch) & (ic_df.n_genes >= stretch / 7 * 6)]
shuffled = shuffled_ic_df[(shuffled_ic_df.stretch == stretch) & (shuffled_ic_df.n_genes >= stretch / 7 * 6)]
orig_statistics = get_statistics(orig)
shuffled_statistics = shuffled.groupby("iteration").progress_apply(get_statistics)
    # total_shuffled_statistic = get_statistics(shuffled)  # WRONG: pooling all iterations gives the statistic of the pooled data, not the median over iterations
total_shuffled_statistic = shuffled_statistics.median()
# lower_count = (shuffled_statistics <= orig_statistics).sum()
# upper_count = (shuffled_statistics >= orig_statistics).sum()
# lower_pvalue = (lower_count + 1) / (shuffled_statistics.shape[0] + 1)
# upper_pvalue = (upper_count + 1) / (shuffled_statistics.shape[0] + 1)
shuf_mean = shuffled_statistics.mean(axis=0)
orig_diff = np.abs(orig_statistics - shuf_mean)
shuf_diff = shuffled_statistics.subtract(shuf_mean).abs()
pvalue = ((shuf_diff >= orig_diff).sum(axis=0) + 1) / (shuffled_statistics.shape[0] + 1)
print("shuf_mean",shuf_mean)
print("OrigDiff",orig_diff)
print("shuf_diff",shuf_diff)
pvalues = pd.DataFrame({
"orig_value": orig_statistics,
"shuffled_value": total_shuffled_statistic,
# "lower_count": lower_count,
# "lower_pvalue": lower_pvalue,
# "upper_count": upper_count,
# "upper_pvalue": upper_pvalue,
"pvalue": pvalue,
})
# pvalues["significance"] = pvalues.apply(lambda x: "LOWER" if x.lower_pvalue <= 0.025 else ("HIGHER" if x.upper_pvalue <= 0.025 else "-----"), axis=1)
permutation_summaries += [pvalues]
display(pvalues)
_, axs = plt.subplots(3,3,figsize=(15,12))
for ax, statistic in zip(np.array(axs).flatten(), orig_statistics.index):
sns.distplot(shuffled_statistics[statistic], ax=ax, kde=False, rug=False, label="Shuffled")
sns.distplot([orig_statistics[statistic]], ax=ax, kde=False, hist=False, rug=True, rug_kws={"height": 0.5}, label="Original")
sns.distplot([total_shuffled_statistic[statistic]], ax=ax, kde=False, hist=False, rug=True, rug_kws={"height": 0.5}, label="Median Shuffled")
ax.legend()
plt.show()
statistic = "quantile_ratio"
sns.distplot(shuffled_statistics[statistic], kde=False, rug=False, label="Shuffled")
sns.distplot([orig_statistics[statistic]], kde=False, hist=False, rug=True, rug_kws={"height": 0.95}, label="Original")
sns.distplot([total_shuffled_statistic[statistic]], kde=False, hist=False, rug=True, rug_kws={"height": 0.95}, label="Median Shuffled")
plt.legend()
plt.xlabel("Quantile Ratio")
plt.show()
statistic = "percentile_10"
sns.distplot(shuffled_statistics[statistic], kde=False, rug=False, label="Shuffled")
sns.distplot([orig_statistics[statistic]], kde=False, hist=False, rug=True, rug_kws={"height": 0.95}, label="Original")
sns.distplot([total_shuffled_statistic[statistic]], kde=False, hist=False, rug=True, rug_kws={"height": 0.95}, label="Median Shuffled")
plt.legend()
plt.xlabel("10th Percentile")
plt.show()
statistic = "percentile_90"
sns.distplot(shuffled_statistics[statistic], kde=False, rug=False, label="Shuffled")
sns.distplot([orig_statistics[statistic]], kde=False, hist=False, rug=True, rug_kws={"height": 0.95}, label="Original")
sns.distplot([total_shuffled_statistic[statistic]], kde=False, hist=False, rug=True, rug_kws={"height": 0.95}, label="Median Shuffled")
plt.legend()
plt.xlabel("90th Percentile")
plt.show()
df = pd.DataFrame([
["Quantile Ratio", pvalues.loc["quantile_ratio", "orig_value"], "Original"],
["Quantile Ratio", pvalues.loc["quantile_ratio", "shuffled_value"], "Median Shuffled"],
["90%", pvalues.loc["percentile_90", "orig_value"], "Original"],
["90%", pvalues.loc["percentile_90", "shuffled_value"], "Median Shuffled"],
["97.5%", pvalues.loc["percentile_97.5", "orig_value"], "Original"],
["97.5%", pvalues.loc["percentile_97.5", "shuffled_value"], "Median Shuffled"],
["2.5%", pvalues.loc["percentile_02.5", "orig_value"], "Original"],
["2.5%", pvalues.loc["percentile_02.5", "shuffled_value"], "Median Shuffled"],
["10%", pvalues.loc["percentile_10", "orig_value"], "Original"],
["10%", pvalues.loc["percentile_10", "shuffled_value"], "Median Shuffled"],
], columns=["metric", "value", "Distribution"])
sns.catplot(data=df, x="metric", y="value", hue="Distribution", kind="bar", sharey=False)
plt.xlabel("")
plt.ylabel("")
plt.show()
# if stretch == 14:
pvalues.index.name = "metric"
pvalues.to_csv("{}/chr_stat_test_pvalues_{}.csv".format(dataset, stretch))
###Output
_____no_output_____
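###Markdown
A quick self-contained sanity check of the smoothed two-sided permutation p-value used above (the data here are a synthetic stand-in, not the real `shuffled_statistics`):
###Code
rng = np.random.RandomState(0)
null_stats = pd.DataFrame({"median": rng.normal(0, 1, 999)})  # synthetic null (assumption)
obs_stats = pd.Series({"median": 2.5})                        # synthetic observation
null_mean = null_stats.mean(axis=0)
obs_diff = (obs_stats - null_mean).abs()
null_diff = null_stats.subtract(null_mean).abs()
p = ((null_diff >= obs_diff).sum(axis=0) + 1) / (null_stats.shape[0] + 1)
p  # never exactly zero, thanks to the +1 correction
###Output
_____no_output_____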
###Markdown
Testing differences between groups
###Code
# Import numerical, data and plotting libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
# Only show 4 decimals when printing
np.set_printoptions(precision=4)
# Show the plots in the notebook
%matplotlib inline
###Output
_____no_output_____
###Markdown
Imagine we have some measures of psychopathy in 12 students. 4 students are from Berkeley, and 4 students are from MIT.
###Code
psychos = pd.read_csv('psycho_students.csv')
psychos
###Output
_____no_output_____
###Markdown
We find that the mean score for the Berkeley students is different from the mean score for the MIT students:
###Code
berkeley_students = psychos[psychos['university'] == 'Berkeley']
berkeley_students
mit_students = psychos[psychos['university'] == 'MIT']
mit_students
berkeley_scores = berkeley_students['psychopathy']
mit_scores = mit_students['psychopathy']
berkeley_scores.mean(), mit_scores.mean()
###Output
_____no_output_____
###Markdown
Here is the difference between the means:
###Code
mean_diff = berkeley_scores.mean() - mit_scores.mean()
mean_diff
###Output
_____no_output_____
###Markdown
That's the difference we see. But if we take any 8 students from a single university, take the mean of the first four and the mean of the second four, there will almost certainly be a difference in the means, just because there is some difference across individuals in the psychopathy score. Is the difference we see unusual compared to the differences we would see if we took eight students from the same university and compared the means of the first four and the second four? For a moment, let us pretend that all our Berkeley and MIT students come from the same university. Then I can pool the Berkeley and MIT students together.
###Code
all_pooled = list(berkeley_scores) + list(mit_scores)
all_pooled
###Output
_____no_output_____
###Markdown
If there is no difference between Berkeley and MIT, then it should be OK to just shuffle the students to a random order, like this:
###Code
from random import shuffle
shuffle(all_pooled)
all_pooled
###Output
_____no_output_____
###Markdown
Now I can just pretend that the first four students are from one university, and the last four are from another university. Then I can compare the means.
###Code
fake_berkeley = all_pooled[:4]
fake_mit = all_pooled[4:]
np.mean(fake_berkeley) - np.mean(fake_mit)
###Output
_____no_output_____
###Markdown
This is one difference in means I might see, if there was no real difference between the groups. Now let's do that 10000 times.
###Code
fake_differences = []
for i in range(10000):
np.random.shuffle(all_pooled)
diff = np.mean(all_pooled[:4]) - np.mean(all_pooled[4:])
fake_differences.append(diff)
###Output
_____no_output_____
###Markdown
The 10000 values we calculated form the *sampling distribution*. Let's have a look:
###Code
plt.hist(fake_differences)
plt.title("Sampling distribution of mean difference");
###Output
_____no_output_____
###Markdown
Where does the value we actually see sit in this histogram? More specifically, how many of the values in this histogram are less than or equal to the value we actually see?
###Code
# We will count the number of fake_differences <= our observed
count = 0
# Go through each of the 10000 values one by one
for diff in fake_differences:
if diff <= mean_diff:
count = count + 1
proportion = count / 10000
proportion
###Output
_____no_output_____
###Markdown
KS-test
###Code
# imports needed by this and the following cells; ic_df and shuffled_ic_df
# are assumed to be loaded beforehand (they are not defined above)
import seaborn as sns
from scipy.stats import ks_2samp
from statsmodels.distributions.empirical_distribution import ECDF
from IPython.display import display

sns.set(context="notebook", style="ticks", font="Helvetica")
def get_norm(arr):
return {
"weights": np.ones(len(arr)) / len(arr)
}
ks_summaries = []
for stretch in [7,14,21]:
n_bins=50
display("Stretch size {}".format(stretch))
sequential = ic_df[(ic_df.stretch == stretch) & (ic_df.n_genes >= stretch / 7 * 6)].ic
    shuffled = shuffled_ic_df[(shuffled_ic_df.stretch == stretch) & (shuffled_ic_df.n_genes >= stretch / 7 * 6)].ic
display(ks_2samp(sequential, shuffled))
ks_summaries += [{
"stretch_size": stretch,
"ks": ks_2samp(sequential, shuffled)
}]
bins=np.histogram(shuffled.dropna(), bins=n_bins)[1]
sns.distplot(sequential.dropna(), kde=False, hist_kws=get_norm(sequential.dropna()), label="Original", bins=bins)
sns.distplot(shuffled.dropna(), kde=False, hist_kws=get_norm(shuffled.dropna()), label="Shuffled", bins=bins)
plt.legend()
# plt.arrow(2.18,1.5,0,-1,head_width=0.02, fc='k', ec='k')
# plt.arrow(2.23,1.5,0,-1,head_width=0.02, fc='k', ec='k')
# plt.arrow(2.57,1.2,0,-1,head_width=0.05, fc='k', ec='k')
# plt.arrow(2.63,1.2,0,-1,head_width=0.05, fc='k', ec='k')
plt.show()
df_ecdf = ECDF(sequential)
shuffled_df_ecdf = ECDF(shuffled)
x = np.arange(0,5,0.01)
sns.lineplot(x, df_ecdf(x), drawstyle='steps-post')
sns.lineplot(x, shuffled_df_ecdf(x), drawstyle='steps-post')
sns.lineplot(x, shuffled_df_ecdf(x) - df_ecdf(x), drawstyle='steps-post')
plt.show()
###Output
_____no_output_____
###Markdown
For the figure
###Code
stretch=14
n_bins=50
sns.set(font_scale=1.5, style="ticks", font="Arial")
display("Stretch size {}".format(stretch))
sequential = ic_df[(ic_df.stretch == stretch) & (ic_df.n_genes >= stretch / 7 * 6)].ic
shuffled = shuffled_ic_df[(shuffled_ic_df.stretch == stretch) & (shuffled_ic_df.n_genes >= stretch / 7 * 6)].ic
display(ks_2samp(sequential, shuffled))
ks_summaries += [{
"stretch_size": stretch,
"ks": ks_2samp(sequential, shuffled)
}]
bins=np.histogram(shuffled.dropna(), bins=n_bins)[1]
sns.distplot(sequential.dropna(), kde=False, hist_kws=get_norm(sequential.dropna()), label="Original", bins=bins)
sns.distplot(shuffled.dropna(), kde=False, hist_kws=get_norm(shuffled.dropna()), label="Shuffled", bins=bins)
plt.legend()
plt.xlabel("IC")
plt.ylabel("Frequency")
# plt.arrow(2.18,1.5,0,-1,head_width=0.02, fc='k', ec='k')
# plt.arrow(2.23,1.5,0,-1,head_width=0.02, fc='k', ec='k')
# plt.arrow(2.57,1.2,0,-1,head_width=0.05, fc='k', ec='k')
# plt.arrow(2.63,1.2,0,-1,head_width=0.05, fc='k', ec='k')
plt.show()
pd.DataFrame([{"stretch": x['stretch_size'], "pvalue": x["ks"][1]} for x in ks_summaries]).set_index("stretch").T.to_csv("{}/chr_ks.csv".format(dataset), index=False)
###Output
_____no_output_____
###Markdown
Permutation tests (genes shuffled)
###Code
from tqdm import tqdm
tqdm.pandas()  # registers the .progress_apply used below

def get_statistics(df):
ics = df.ic
return pd.Series({
"median": ics.median(),
"percentile_90": ics.quantile(0.9),
"percentile_10": ics.quantile(0.1),
"percentile_97.5": ics.quantile(0.975),
"percentile_02.5": ics.quantile(0.025),
"quantile_ratio": ics.quantile(0.9) / ics.quantile(0.1),
# "skew": ics.skew()
})
sns.set(context="notebook", style="ticks", font="Helvetica")
permutation_summaries = []
for stretch in [7,14,21]:
display("Stretch "+str(stretch))
orig = ic_df[(ic_df.stretch == stretch) & (ic_df.n_genes >= stretch / 7 * 6)]
shuffled = shuffled_ic_df[(shuffled_ic_df.stretch == stretch) & (shuffled_ic_df.n_genes >= stretch / 7 * 6)]
orig_statistics = get_statistics(orig)
shuffled_statistics = shuffled.groupby("iteration").progress_apply(get_statistics)
    # total_shuffled_statistic = get_statistics(shuffled)  # WRONG: pooling all iterations gives the statistic of the pooled data, not the median over iterations
total_shuffled_statistic = shuffled_statistics.median()
# lower_count = (shuffled_statistics <= orig_statistics).sum()
# upper_count = (shuffled_statistics >= orig_statistics).sum()
# lower_pvalue = (lower_count + 1) / (shuffled_statistics.shape[0] + 1)
# upper_pvalue = (upper_count + 1) / (shuffled_statistics.shape[0] + 1)
shuf_mean = shuffled_statistics.mean(axis=0)
orig_diff = np.abs(orig_statistics - shuf_mean)
shuf_diff = shuffled_statistics.subtract(shuf_mean).abs()
pvalue = ((shuf_diff >= orig_diff).sum(axis=0) + 1) / (shuffled_statistics.shape[0] + 1)
print("shuf_mean",shuf_mean)
print("OrigDiff",orig_diff)
print("shuf_diff",shuf_diff)
pvalues = pd.DataFrame({
"orig_value": orig_statistics,
"shuffled_value": total_shuffled_statistic,
# "lower_count": lower_count,
# "lower_pvalue": lower_pvalue,
# "upper_count": upper_count,
# "upper_pvalue": upper_pvalue,
"pvalue": pvalue,
})
# pvalues["significance"] = pvalues.apply(lambda x: "LOWER" if x.lower_pvalue <= 0.025 else ("HIGHER" if x.upper_pvalue <= 0.025 else "-----"), axis=1)
permutation_summaries += [pvalues]
display(pvalues)
_, axs = plt.subplots(3,3,figsize=(15,12))
for ax, statistic in zip(np.array(axs).flatten(), orig_statistics.index):
sns.distplot(shuffled_statistics[statistic], ax=ax, kde=False, rug=False, label="Shuffled")
sns.distplot([orig_statistics[statistic]], ax=ax, kde=False, hist=False, rug=True, rug_kws={"height": 0.5}, label="Original")
sns.distplot([total_shuffled_statistic[statistic]], ax=ax, kde=False, hist=False, rug=True, rug_kws={"height": 0.5}, label="Median Shuffled")
ax.legend()
plt.show()
statistic = "quantile_ratio"
sns.distplot(shuffled_statistics[statistic], kde=False, rug=False, label="Shuffled")
sns.distplot([orig_statistics[statistic]], kde=False, hist=False, rug=True, rug_kws={"height": 0.95}, label="Original")
sns.distplot([total_shuffled_statistic[statistic]], kde=False, hist=False, rug=True, rug_kws={"height": 0.95}, label="Median Shuffled")
plt.legend()
plt.xlabel("Quantile Ratio")
plt.show()
statistic = "percentile_10"
sns.distplot(shuffled_statistics[statistic], kde=False, rug=False, label="Shuffled")
sns.distplot([orig_statistics[statistic]], kde=False, hist=False, rug=True, rug_kws={"height": 0.95}, label="Original")
sns.distplot([total_shuffled_statistic[statistic]], kde=False, hist=False, rug=True, rug_kws={"height": 0.95}, label="Median Shuffled")
plt.legend()
plt.xlabel("10th Percentile")
plt.show()
statistic = "percentile_90"
sns.distplot(shuffled_statistics[statistic], kde=False, rug=False, label="Shuffled")
sns.distplot([orig_statistics[statistic]], kde=False, hist=False, rug=True, rug_kws={"height": 0.95}, label="Original")
sns.distplot([total_shuffled_statistic[statistic]], kde=False, hist=False, rug=True, rug_kws={"height": 0.95}, label="Median Shuffled")
plt.legend()
plt.xlabel("90th Percentile")
plt.show()
df = pd.DataFrame([
["Quantile Ratio", pvalues.loc["quantile_ratio", "orig_value"], "Original"],
["Quantile Ratio", pvalues.loc["quantile_ratio", "shuffled_value"], "Median Shuffled"],
["90%", pvalues.loc["percentile_90", "orig_value"], "Original"],
["90%", pvalues.loc["percentile_90", "shuffled_value"], "Median Shuffled"],
["97.5%", pvalues.loc["percentile_97.5", "orig_value"], "Original"],
["97.5%", pvalues.loc["percentile_97.5", "shuffled_value"], "Median Shuffled"],
["2.5%", pvalues.loc["percentile_02.5", "orig_value"], "Original"],
["2.5%", pvalues.loc["percentile_02.5", "shuffled_value"], "Median Shuffled"],
["10%", pvalues.loc["percentile_10", "orig_value"], "Original"],
["10%", pvalues.loc["percentile_10", "shuffled_value"], "Median Shuffled"],
], columns=["metric", "value", "Distribution"])
sns.catplot(data=df, x="metric", y="value", hue="Distribution", kind="bar", sharey=False)
plt.xlabel("")
plt.ylabel("")
plt.show()
# if stretch == 14:
pvalues.index.name = "metric"
pvalues.to_csv("{}/chr_stat_test_pvalues_{}.csv".format(dataset, stretch))
###Output
_____no_output_____ |
ala2-lag-time/comparison-dt/ala_dt-rates.ipynb | ###Markdown
Lag-time dependency of lifetimes from MD
* load MD transition paths and transition-based state assignments

Calculate lifetimes MD
###Code
# Imports assumed for this notebook; `dt_key_l`, the color cycle `cl`, and
# analysis helpers (possible_transitions, loop_dwell_trans_temp, fit_plot_cdf,
# check_moments_w, weights_first_second_moment, the two-state lag-time fit
# functions) are provided elsewhere in the project and are not defined here.
import glob
import numpy as np
import pandas as pd
import pyprind
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.optimize import curve_fit

md_dt_tp_d = {}
md_dt_state_d = {}
#for key in md_r_dt_d.keys():
for key in dt_key_l:
#print "../dt{}/md*/*blNone_tp.pickle".format(dt)
#dt_tp_l = [glob.glob("../dt{}/md*/md_st{}*blNone_tp.pickle".format(dt,st))[0] for st in range(1,4)]
_dt_tp_l = []
_dt_st_l = []
for st in range(1,4):
_dt_tp_l.extend(glob.glob("../dt{}/md*_st{}/md_st{}md*_27jan16_blNone_tp.pickle".format(key,st,st)))
_dt_st_l.extend(glob.glob("../dt{}/md*_st{}/md_st{}md*_27jan16_blNone_tba.pickle".format(key,st,st)))
# print _dt_st_l
md_dt_tp_d[key] = _dt_tp_l
md_dt_state_d[key] = _dt_st_l
trans_from_h = [t for t in possible_transitions(4) if t[:2] == (0,0,)]
trans_from_c = [t for t in possible_transitions(4) if t[:2] == (0,1,)]
trans_from_11 = [t for t in possible_transitions(4) if t[:2] == (1,1,)]
trans_from_10 = [t for t in possible_transitions(4) if t[:2] == (1,0,)]
md_dt_hdw_d = {}
md_dt_cdw_d = {}
md_dt_11dw_d = {}
md_dt_10dw_d = {}
bar = pyprind.ProgBar(len(dt_key_l)*3)
for k, v in md_dt_tp_d.items():
_tba = md_dt_state_d[k]
print k
md_h_l = []
md_c_l = []
md_11_l = []
md_10_l = []
for i, _tp_fn in enumerate(v):
_st_fn = _tba[i]
_tp_df = pd.read_pickle(_tp_fn)
_st_df = pd.read_pickle(_st_fn)
_tp_df0 = _tp_df[_tp_df.temperature==0]
_st_df0 = _st_df[_st_df.temperature==0]
_dw_h = loop_dwell_trans_temp(_tp_df0,
_st_df0, trans_from_h, tu=float(k))
md_h_l.append(_dw_h)
_dw_c = loop_dwell_trans_temp(_tp_df0,
_st_df0, trans_from_c, tu=float(k))
md_c_l.append(_dw_c)
# lifetimes in the minor states
_dw_10 = loop_dwell_trans_temp(_tp_df0,
_st_df0, trans_from_10, tu=float(k))
md_10_l.append(_dw_10)
_dw_11 = loop_dwell_trans_temp(_tp_df0,
_st_df0, trans_from_11, tu=float(k))
md_11_l.append(_dw_11)
md_dt_hdw_d[k] = pd.concat(md_h_l)
md_dt_cdw_d[k] = pd.concat(md_c_l)
md_dt_10dw_d[k] = pd.concat(md_10_l)
md_dt_11dw_d[k] = pd.concat(md_11_l)
bar.update()
###Output
0% 100%
[ ]
###Markdown
Quick comparison of MD lifetime distributions
###Code
_blog = np.logspace(-3,1, 5000)
fig, ax = plt.subplots(2,2, figsize=(8,4))
tf = 1.0 / 1000
_ = fit_plot_cdf(ax[0,0], md_dt_hdw_d['1'].wait_T * tf, bins=_blog,
dist_label="$h$, dt=1 ps", )
_ = fit_plot_cdf(ax[0,1], md_dt_hdw_d['5'].wait_T * tf, bins=_blog, plot_fit=False,
dist_label="$h$, dt=5 ps")
_ = fit_plot_cdf(ax[1,0], md_dt_hdw_d['10'].wait_T * tf, bins=_blog, plot_fit=True,
dist_label="$h$, dt=10 ps")
_ = fit_plot_cdf(ax[1,1], md_dt_hdw_d['25'].wait_T * tf, bins=_blog, plot_fit=True,
dist_label="$h$, dt=25 ps")
#_ = fit_plot_cdf(ax[2,1], md_dt_hdw_d['50'].wait_T * tf, bins=_blog, plot_fit=True)
_ = fit_plot_cdf(ax[0,0], md_dt_cdw_d['1'].wait_T * tf, bins=_blog,
dist_label="$c$, dt=1 ps")
_ = fit_plot_cdf(ax[0,1], md_dt_cdw_d['5'].wait_T * tf, bins=_blog, plot_fit=True,
dist_label="$c$, dt=5 ps")
_ = fit_plot_cdf(ax[1,0], md_dt_cdw_d['10'].wait_T * tf, bins=_blog, plot_fit=True,
dist_label="$c$, dt=10 ps")
_ = fit_plot_cdf(ax[1,1], md_dt_cdw_d['25'].wait_T * tf, bins=_blog, plot_fit=True,
dist_label="$c$, dt=25 ps")
#_ = fit_plot_cdf(ax[2,1], md_dt_cdw_d['50'].wait_T * tf, bins=_blog, plot_fit=True)
for a in ax.flat:
a.semilogx()
a.legend(loc=2)
###Output
_____no_output_____
###Markdown
Uncertainty in $\tau$ for MD from counts
###Code
md_r_dt_d = {}
for dt_fn_name in glob.glob("../dt*/c_md_dt*/rates_md_st1-3_sym_ln__27jan16.pickle"):
dt = dt_fn_name.split("/")[1][2:]
md_r_dt_d[dt] = pd.read_pickle(dt_fn_name)
###Output
_____no_output_____
###Markdown
Lag-time dependency of lifetimes from REMD Calculate
###Code
remd_dt_tp_l = glob.glob("../dt*/remd_dt*/*_27jan16_tp.pickle")
remd_dt_st_l = glob.glob("../dt*/remd_dt*/*_27jan16_tba.pickle")
remd_dt_tp_d = {}
remd_dt_st_d = {}
for fn in remd_dt_tp_l:
dt = fn.split("/")[1][2:]
#print dt
remd_dt_tp_d[dt] = pd.read_pickle(fn)
for fn in remd_dt_st_l:
dt = fn.split("/")[1][2:]
remd_dt_st_d[dt] = pd.read_pickle(fn)
remd_dt_hdw_d = {}
remd_dt_cdw_d = {}
remd_dt_10w_d = {}
remd_dt_11dw_d = {}
bar = pyprind.ProgBar(len(remd_dt_tp_l)*4)
for k, _tp in remd_dt_tp_d.items():
_tba = remd_dt_st_d[k]
_tp0 = _tp[_tp.temperature==0]
_tba0 = _tba[_tba.temperature==0]
_dw_h = loop_dwell_trans_temp(_tp0,
_tba0, trans_from_h, tu=float(k))
_dw_c = loop_dwell_trans_temp(_tp0,
_tba0, trans_from_c, tu=float(k))
remd_dt_hdw_d[k] = _dw_h
remd_dt_cdw_d[k] = _dw_c
remd_dt_10w_d[k] = loop_dwell_trans_temp(_tp0, _tba0,
trans_from_10,
tu=float(k))
remd_dt_11dw_d[k] = loop_dwell_trans_temp(_tp0, _tba0,
trans_from_11,
tu=float(k))
bar.update()
###Output
0% 100%
[####### ] | ETA: 00:03:32
###Markdown
Uncertainty in $\tau$ for REMD from counts
###Code
remd_r_dt_d = {}
for dt_fn_name in glob.glob("../dt*/remd_dt*/ala_remd_st1_dt*ps__27jan16_rates_sym_ln__27jan16.pickle"):
dt = dt_fn_name.split("/")[1][2:]
#print dt
remd_r_dt_d[dt] = pd.read_pickle(dt_fn_name)
###Output
_____no_output_____
###Markdown
Comparing lag-time dependences from MD and REMD

Lag-time independent rate coefficients

Assume that Ala2 is a quasi two-state system. First fit $k$:
\begin{equation}
\frac{1}{\tau^{U}_{\text{app}}} + \frac{1}{\tau^{F}_{\text{app}}} = \frac{1 - \exp(-k t)}{t}
\end{equation}
Then fit the relative populations:
\begin{equation}
\frac{1}{\tau^{F}_{\text{app}}} = \frac{p_U' \left(1 - \exp(-k t)\right)}{t} = k_{U \leftarrow F}
\end{equation}
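A minimal sketch of the two fit functions used below, consistent with the equations above (the actual definitions live in the project's analysis code and may differ in detail):
###Code
import numpy as np

def sum_inv_lifetimes_two_state_lagtime(t, k):
    # sum of the apparent inverse lifetimes of a two-state system
    # observed with lag time t (first equation above)
    return (1.0 - np.exp(-k * t)) / t

def inv_lifetime_two_state_lagtime(t, p, k):
    # apparent inverse lifetime of one state; p is the fitted
    # population prefactor p' (second equation above)
    return p * (1.0 - np.exp(-k * t)) / t
###Output
_____no_output_____
###Markdown
Rate arrays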
###Code
md_dt_cdw_d['1'].head() # wait in the coil state
md_dt_hdw_d['1'].head() # wait in the helix state
tf = 1.0 / 1000.0
md_lagtime_l = []
md_kc_l, md_kh_l = [], []
for dt in sorted([int(k) for k in md_dt_cdw_d.keys()]):
md_lagtime_l.append(dt)
_c = md_dt_cdw_d[str(dt)]
_h = md_dt_hdw_d[str(dt)]
md_kh_l.extend([_c[_c.temperature==0].wait_T.mean()])
md_kc_l.extend([_h[_h.temperature==0].wait_T.mean()])
md_kc_ar = 1.0 / (np.array(md_kc_l)*tf)
md_kh_ar = 1.0/ (np.array(md_kh_l)*tf)
md_kc_kh_ar = md_kc_ar + md_kh_ar
md_lagtime_ar = np.array(md_lagtime_l)
md_kc_kh_ar
1/ (md_dt_cdw_d['1'][md_dt_cdw_d['1'].temperature==0].wait_T.mean() / 1000.0)
1/ (md_dt_hdw_d['1'][md_dt_hdw_d['1'].temperature==0].wait_T.mean() / 1000.0)
md_r_dt_d['1'][(md_r_dt_d['1'].temperature==0) & (md_r_dt_d['1'].type==(0,1,0,0))]
remd_lagtime_l = []
remd_kc_l, remd_kh_l = [], []
for dt in sorted([int(k) for k in remd_dt_hdw_d.keys()]):
print dt
remd_lagtime_l.append(dt)
_c = remd_dt_cdw_d[str(dt)]
_h = remd_dt_hdw_d[str(dt)]
# tau_av = np.average(v.wait_T/ v.weight, weights=v.weight)
_c0 = _c[_c.temperature==0]
_h0 = _h[_h.temperature==0]
remd_kh_l.extend([np.average(_c0.wait_T / _c0.weight, weights=_c0.weight)])
remd_kc_l.extend([np.average(_h0.wait_T / _h0.weight, weights=_h0.weight)])
remd_kc_ar = 1.0 / (np.array(remd_kc_l)*tf)
remd_kh_ar = 1.0/ (np.array(remd_kh_l)*tf)
remd_kc_kh_ar = remd_kc_ar + remd_kh_ar
remd_lagtime_ar = np.array(remd_lagtime_l)
1/ (remd_dt_hdw_d['1'][remd_dt_hdw_d['1'].temperature==0].wait_T.mean() / 1000.0)
1/ (remd_dt_cdw_d['1'][remd_dt_cdw_d['1'].temperature==0].wait_T.mean() / 1000.0)
md_kc_kh_ar
remd_kc_kh_ar
remd_lagtime_ar
popt_kex_remd = curve_fit(sum_inv_lifetimes_two_state_lagtime,
remd_lagtime_ar[:]*tf, remd_kc_kh_ar[:], p0=10)
kex_fit_remd = popt_kex_remd[0][0]
print popt_kex_remd
popt = curve_fit(sum_inv_lifetimes_two_state_lagtime, md_lagtime_ar[:]*tf, md_kc_kh_ar[:], p0=[10.0])
print popt
k_ex_fit = popt[0][0]
fig, ax = plt.subplots(figsize=(3,3))
plt.plot(md_lagtime_ar / 1000.0, md_kc_kh_ar, ".")
plt.plot(md_lagtime_ar/ 1000.0, sum_inv_lifetimes_two_state_lagtime(md_lagtime_ar*tf, k_ex_fit))
plt.plot(remd_lagtime_ar / 1000.0, remd_kc_kh_ar, ".")
plt.plot(remd_lagtime_ar/ 1000.0, sum_inv_lifetimes_two_state_lagtime(remd_lagtime_ar*tf, kex_fit_remd))
plt.loglog()
###Output
_____no_output_____
###Markdown
$p'$ fit to determine lag-time independent rate coefficients and populations MD
###Code
f = lambda t, p : inv_lifetime_two_state_lagtime(t, p, k_ex_fit)
popt_pc = curve_fit(f, md_lagtime_ar[:] / 1000.0, md_kc_ar[:])
print popt_pc
popt_ph = curve_fit(f, md_lagtime_ar[:]/1000.0, md_kh_ar[:])
print popt_ph
pc = popt_pc[0][0] / (popt_pc[0][0] + popt_ph[0][0])
ph = popt_ph[0][0] / (popt_pc[0][0] + popt_ph[0][0])
k_ex_fit * pc, k_ex_fit * ph
md_r_dt_d['1'][(md_r_dt_d['1'].temperature==0) & (md_r_dt_d['1'].type.isin([(0,0,0,1), (0,1,0,0)]))]
###Output
_____no_output_____
###Markdown
REMD
###Code
f_remd = lambda t, p: inv_lifetime_two_state_lagtime(t, p, kex_fit_remd)
popt_pc_remd = curve_fit(f_remd, remd_lagtime_ar[:]*tf, remd_kc_ar[:])
print popt_pc_remd
popt_ph_remd = curve_fit(f_remd, remd_lagtime_ar[:]*tf, remd_kh_ar[:])
print popt_ph_remd
pc_remd = popt_pc_remd[0][0] / (popt_pc_remd[0][0] + popt_ph_remd[0][0])
ph_remd = popt_ph_remd[0][0] / (popt_pc_remd[0][0] + popt_ph_remd[0][0])
print pc_remd, ph_remd
kex_fit_remd * pc_remd, kex_fit_remd * ph_remd
###Output
_____no_output_____
###Markdown
Load minor state lag-time dependences expected for ideal rate kinetics
###Code
syn_ar = np.genfromtxt("../from-syn-trj/syn_dt_ala_st3_st4_c.txt")
###Output
_____no_output_____
###Markdown
Comparison plot
###Code
! ls ../dt10/c_md_dt10/
! mkdir -p plot
###Output
pt_md_st1-3__27jan16.txt rates_md_st1-3_sym_ln__27jan16.pickle
rates_md_st1-3__27jan16.txt rep_ar_md_st1-3__27jan16.txt
###Markdown
* error estimates from the count statistics
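The multiplicative error bars computed below follow from Poisson counting statistics: with $N$ recorded transitions, the relative uncertainty of a rate (and hence of the lifetime $\tau$) is roughly $1/\sqrt{N}$, giving
\begin{equation}
\tau_{\pm} = \tau \, \exp\left(\pm 1/\sqrt{N}\right),
\end{equation}
which is exactly the `tau*np.exp(1/_N**0.5)` bound in the next cell.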
###Code
remd_r_dt_d['1'][(remd_r_dt_d['1'].type.isin(trans_from_10))].head()
def _count_err_from_rate_calc(tau, r_df, trans):
md_r = r_df[(r_df.type.isin(trans))]
_sum = md_r.rate.sum()
if _sum > 0:
_N = md_r.sym_weight.sum()
return tau*np.exp(1/_N**0.5), tau*np.exp(-1/_N**0.5)
def _count_err_at_temp(tau, r_df, trans, temp):
    # pass tau through to match _count_err_from_rate_calc's signature
    p, m = _count_err_from_rate_calc(tau, r_df[r_df.temperature==temp], trans)
    return m, p
fig, ax = plt.subplots(2,2, figsize=(6.5,6))
sns.set_style("ticks")
for k, v in remd_dt_hdw_d.items():
dt_r = remd_r_dt_d[str(k)]
md_r = dt_r[(dt_r.temperature==0) & (dt_r.type==(0,0,0,1))]
tau_av = np.average(v.wait_T/ v.weight, weights=v.weight)
lg1_remd = ax[0,1].errorbar(int(k)*tf, 1.0/tau_av*1000.0, yerr=[md_r.err_m.values ,
md_r.err_p.values ],c=cl[1], fmt=".",
label=r"$h \rightarrow c$")
for k, v in remd_dt_cdw_d.items():
dt_r = remd_r_dt_d[str(k)]
md_r = dt_r[(dt_r.temperature==0) & (dt_r.type==(0,1,0,0))]
tau_av = np.average(v.wait_T/ v.weight, weights=v.weight)
#lg2_remd, = ax[0,1].plot(int(k)*tf, 1.0 / tau_av*1000.0 , "s", c=cl[3])
lg2_remd = ax[0,1].errorbar(int(k)*tf, 1.0/tau_av*1000.0, yerr=[md_r.err_m.values ,
md_r.err_p.values ],c=cl[3], fmt=".",
label=r"$c \rightarrow h$")
for k, v in remd_dt_10w_d.items():
dt_r = remd_r_dt_d[str(k)]
md_r = dt_r[(dt_r.temperature==0) & (dt_r.type.isin(trans_from_10))]
if np.any(v.wait_T > 0) and md_r.rate.sum() > 0 :
tau_av = np.average(v.wait_T/ v.weight, weights=v.weight)
_m, _p = _count_err_from_rate_calc( 1.0/ tau_av * 1000.0,
md_r, trans_from_10)
ax[1,1].plot([int(k)*tf]*2, [ _m, _p], "-", c=cl[4] )
lg3_remd, = ax[1,1].plot(int(k)*tf, 1.0/ tau_av * 1000.0 , ".", c=cl[4], mec=cl[4]) # mew=1.0, mfc="None"
for k, v in remd_dt_11dw_d.items():
dt_r = remd_r_dt_d[str(k)]
md_r = dt_r[(dt_r.temperature==0) & (dt_r.type.isin(trans_from_11))]
if np.any(v.wait_T.values > 0) and md_r.rate.sum() > 0 :
tau_av = np.average(v.wait_T/ v.weight, weights=v.weight)
_m, _p = _count_err_from_rate_calc( 1.0/ tau_av * 1000.0,
md_r, trans_from_11)
ax[1,1].plot([int(k)*tf]*2, [ _m, _p], "-", c=cl[5] )
lg4_remd, = ax[1,1].plot(int(k)*tf, 1.0/ tau_av * 1000.0 , ".", c=cl[5], mec=cl[5]) # mew=1.0, mfc="None"
# mfpt h
ax[0,1].plot(remd_lagtime_ar*tf, inv_lifetime_two_state_lagtime(
remd_lagtime_ar*tf, popt_ph[0][0], kex_fit_remd),
c=cl[3])
# mfpt c
ax[0,1].plot(remd_lagtime_ar*tf, inv_lifetime_two_state_lagtime(
remd_lagtime_ar*tf, popt_pc[0][0], kex_fit_remd),
c=cl[1])
for k, v in md_dt_hdw_d.items():
dt_r = md_r_dt_d[str(k)]
md_r = dt_r[(dt_r.temperature==0) & (dt_r.type==(0,0,0,1))]
_temp =v.wait_T / v.weight
if np.any(_temp.values > 0):
tau_av, var_tau = weights_first_second_moment(v.weight, v.wait_T)
lg1_md = ax[0,0].errorbar(int(k)*tf, 1.0/tau_av*1000.0, yerr=[md_r.err_m.values ,
md_r.err_p.values ],c=cl[0], fmt=".",
label=r"$h \rightarrow c$")
for k, v in md_dt_cdw_d.items():
dt_r = md_r_dt_d[str(k)]
md_r = dt_r[(dt_r.temperature==0) & (dt_r.type==(0,1,0,0))]
_temp =v.wait_T / v.weight
if np.any(_temp.values > 0):
tau_av, var_tau = weights_first_second_moment(v.weight, v.wait_T)
#lg2_md, = ax[0,0].plot(int(k)*tf, 1.0/tau_av*1000.0, "o", c=cl[2])
lg2_md = ax[0,0].errorbar(int(k)*tf, 1.0/tau_av*1000.0, yerr=[md_r.err_m.values ,
md_r.err_p.values ],c=cl[2], fmt=".",
label=r"$c \rightarrow h$")
ax[0,0].legend([lg1_md, lg2_md], [r"MD $h \rightarrow c$",
r"MD $c \rightarrow h$"],
loc=3)
ax[0,1].legend([lg1_remd, lg2_remd], [r"REMD $h \rightarrow c$",
r"REMD $c \rightarrow h$"], loc=3)
for k, v in md_dt_10dw_d.items():
dt_r = md_r_dt_d[str(k)]
md_r = dt_r[(dt_r.temperature==0) & (dt_r.type.isin(trans_from_10))]
if np.any(v.wait_T > 0) and md_r.rate.sum() > 0 :
tau_av = np.average(v.wait_T/ v.weight, weights=v.weight)
_m, _p = _count_err_from_rate_calc( 1.0/ tau_av * 1000.0,
md_r, trans_from_10)
ax[1,0].plot([int(k)*tf]*2, [ _m, _p], "-", c=cl[4] )
#tau_av, var_tau = weights_first_second_moment(v.weight, v.wait_T)
lg3_md, = ax[1,0].plot(int(k)*tf, 1.0/tau_av*1000.0, ".", c=cl[4], mfc=cl[4], mec=cl[4])
for k, v in md_dt_11dw_d.items():
dt_r = md_r_dt_d[str(k)]
md_r = dt_r[(dt_r.temperature==0) & (dt_r.type.isin(trans_from_11))]
if np.any(v.wait_T > 0) and md_r.rate.sum() > 0 :
tau_av = np.average(v.wait_T/ v.weight, weights=v.weight)
_m, _p = _count_err_from_rate_calc( 1.0/ tau_av * 1000.0,
md_r, trans_from_11)
ax[1,0].plot([int(k)*tf]*2, [ _m, _p], "-", c=cl[5] )
#tau_av, var_tau = weights_first_second_moment(v.weight, v.wait_T)
lg4_md, = ax[1,0].plot(int(k)*tf, 1.0/tau_av*1000.0, ".", c=cl[5], mfc=cl[5], mec=cl[5])
ax[1,0].legend([lg3_md, lg4_md], [r'MD $\sum_{l \neq 3}k_{l3}$',
                                  r'MD $\sum_{l \neq 4}k_{l4}$'],
loc=3, handletextpad=-0.5,handleheight=0.5,
borderaxespad=0.01)
#mfpt h
ax[0,0].plot(md_lagtime_ar*tf, inv_lifetime_two_state_lagtime(
md_lagtime_ar*tf, popt_ph[0][0], k_ex_fit), "-",
c=cl[2])
#mfpt c
ax[0,0].plot(md_lagtime_ar*tf, inv_lifetime_two_state_lagtime(
md_lagtime_ar*tf, popt_pc[0][0], k_ex_fit),
c=cl[0])
ax[1,1].legend([lg3_remd, lg4_remd], [r'REMD $\sum_{l \neq 3}k_{l3}$',
                                      r'REMD $\sum_{l \neq 4}k_{l4}$'],
loc=3, handletextpad=-0.5,handleheight=0.5,
borderaxespad=0.01)
for a in ax.flat:
a.loglog()
a.set_xlabel("$\mathregular{\Delta t \, [ns]}$", fontsize=12)
a.set_xlim(10**-3.2, 10**1)
a.tick_params(axis='both', which='major', labelsize=12)
a.tick_params(axis='both', which='both', top=True , right=True)
for i, a in enumerate(["A", "B"]):
ax.flat[i].text(10**-5, 10**0.7, a, fontsize=20)
ax.flat[i].set_ylabel(r"$\mathregular{k \, [ns^{-1}]}$", fontsize=12)
for i, a in enumerate(["C", "D"]):
ax.flat[i+2].text(10**-5, 10**1.7, a, fontsize=20)
ax.flat[i+2].set_ylabel(r"$\mathregular{k \, [ns^{-1}]}$", fontsize=12)
ax[1,0].plot(syn_ar[:,0], syn_ar[:,1], c='grey')
ax[1,0].plot(syn_ar[:,0], syn_ar[:,2], c='black')
ax[1,1].plot(syn_ar[:,0], syn_ar[:,1], c='grey')
ax[1,1].plot(syn_ar[:,0], syn_ar[:,2], c='black')
for a in ax[0,:]:
a.set_yticks([10**-2, 10**-1, 10**0, 10**1])
a.set_xticks([10**-3, 10**-2, 10**-1, 10**0, 10**1])
for a in ax[1,:]:
a.set_yticks([10**-2, 10**-1, 10**0, 10**1, 10**2])
a.set_xticks([10**-3, 10**-2, 10**-1, 10**0, 10**1])
fig.tight_layout()
fig.savefig("plot/ala_dt-rate.png")
fig.savefig("plot/ala_dt-rate.pdf")
###Output
_____no_output_____
###Markdown
Lag-time dependence of $\mathrm{var}(\tau)$
###Code
var_mcmc_h = np.genfromtxt("../from-syn-trj/var_dt/var_lifetimes_h_ns.txt")
var_mcmc_c = np.genfromtxt("../from-syn-trj/var_dt/var_lifetimes_c_ns.txt")
var_mcmc_st3 = np.genfromtxt("../from-syn-trj/var_dt/var_lifetimes_st3_ns.txt")
var_mcmc_st4 = np.genfromtxt("../from-syn-trj/var_dt/var_lifetimes_st4_ns.txt")
fig_l = ["A", "B", "C", "D"]
fig, ax = plt.subplots(2,2, figsize=(6.5,6))
for k, v in md_dt_hdw_d.items():
_exp = check_moments_w(v.wait_T*tf, v.weight)
lg_md, = ax[0,0].plot(int(k)*tf, _exp[1] , "o", c=cl[0])
for k, v in md_dt_cdw_d.items():
_exp = check_moments_w(v.wait_T*tf, v.weight)
lg2_md, = ax[0,0].plot(int(k)*tf, _exp[1] , "o", c=cl[2], label="$c \rightarrow h$")
for k, v in md_dt_10dw_d.items():
if np.any(v.wait_T > 0) :
_exp = check_moments_w(v.wait_T*tf , v.weight)
lg3_md, = ax[1,0].plot(int(k)*tf, _exp[1], "o", c=cl[4])
for k, v in md_dt_11dw_d.items():
if np.any(v.wait_T > 0) :
_exp = check_moments_w(v.wait_T*tf , v.weight)
lg4_md, = ax[1,0].plot(int(k)*tf, _exp[1], "o", c=cl[5])
# exclude empty arrays
for k, v in remd_dt_hdw_d.items():
_exp = check_moments_w( v.wait_T*tf, v.weight)
lg1_remd, = ax[0,1].plot(int(k)*tf, _exp[1] , "s", c=cl[1])
for k, v in remd_dt_cdw_d.items():
_exp = check_moments_w(v.wait_T*tf , v.weight)
lg2_remd, = ax[0,1].plot(int(k)*tf, _exp[1] , "s", c=cl[3], label="$c \rightarrow h$")
for k, v in remd_dt_10w_d.items():
dt_r = remd_r_dt_d[str(k)]
md_r = dt_r[(dt_r.temperature==0) & (dt_r.type.isin(trans_from_10))]
if np.any(v.wait_T > 0) and md_r.rate.sum() > 0 :
_exp = check_moments_w(v.wait_T*tf , v.weight)
lg3_remd, = ax[1,1].plot(int(k)*tf, _exp[1], "s", c=cl[4])
for k, v in remd_dt_11dw_d.items():
dt_r = remd_r_dt_d[str(k)]
md_r = dt_r[(dt_r.temperature==0) & (dt_r.type.isin(trans_from_11))]
if np.any(v.wait_T > 0) and md_r.rate.sum() > 0 :
_exp = check_moments_w(v.wait_T*tf , v.weight)
lg4_remd, = ax[1,1].plot(int(k)*tf, _exp[1], "s", c=cl[5])
for a in ax[0,:]:
a.plot(var_mcmc_h[:,0]/1000.0, var_mcmc_h[:,2], c="k")
a.plot(var_mcmc_c[:,0]/1000.0, var_mcmc_c[:,2], c="grey")
a.set_ylim(10**-2, 10**3)
for a in ax[1,:]:
a.plot(var_mcmc_st3[:,0]/1000.0, var_mcmc_st3[:,2], "-D", c='k',
mfc="None", mec='k', mew=1.0, zorder=1)
a.plot(var_mcmc_st4[:,0]/1000.0, var_mcmc_st4[:,2], "-D", c='grey',
mfc="None", mec='grey', mew=1.0, zorder=1)
a.set_ylim(10**-4, 10**3)
for a in ax.flat:
a.loglog()
a.set_xlabel("$\mathregular{\Delta t \, [ns]}$", fontsize=12)
a.set_xlim(10**-3.2, 10**1)
a.tick_params(axis='both', which='major', labelsize=12)
a.tick_params(axis='both', which='both', top=True , right=True)
for i, a in enumerate(fig_l):
ax.flat[i].text(10**-5, 10**3, a, fontsize=20)
ax.flat[i].set_ylabel(r"$\mathregular{ var(\tau) \, [ns^2]}$", fontsize=12)
ax.flat[i].set_xticks([10**-3, 10**-2, 10**-1, 10**0, 10**1])
ax[0,1].legend([lg1_remd, lg2_remd], [r"REMD $h \rightarrow c$",
r"REMD $c \rightarrow h$"], loc=2, borderaxespad=-0.1)
ax[0,0].legend([lg1_md, lg2_md], [r"MD $h \rightarrow c$",
r"MD $c \rightarrow h$"], loc=2, borderaxespad=-0.1)
ax[1,1].legend([lg3_remd, lg4_remd], [r"REMD state 3",
r"REMD state 4"], loc=2, borderaxespad=-0.1)
ax[1,0].legend([lg3_md, lg4_md], [r"MD state 3",
r"MD state 4"], loc=2, borderaxespad=-0.1)
fig.tight_layout()
fig.savefig("plot/ala_dt-var_tau.png")
fig.savefig("plot/ala_dt-var_tau.pdf")
###Output
_____no_output_____ |
Code/XGBoost/.ipynb_checkpoints/Part_02_modeling_Pyspark-checkpoint.ipynb | ###Markdown
Part 01 - EDA with Pyspark
Gradient Boosted Trees applied to Fraud detection

Pyspark libraries
###Code
from pyspark.ml import Pipeline
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.sql.functions import col, countDistinct
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, explode, array, lit
# Import VectorAssembler and Vectors
from pyspark.ml.linalg import Vectors
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import GBTClassifier
###Output
_____no_output_____
###Markdown
Python libraries
###Code
import pandas as pd
import matplotlib.pyplot as plt
%config InlineBackend.figure_format = 'retina'
import warnings
warnings.filterwarnings('ignore')
warnings.simplefilter(action='ignore', category=FutureWarning)
spark = SparkSession.builder.appName('FraudTreeMethods').getOrCreate()
###Output
_____no_output_____
###Markdown
Read Data
###Code
# Load and parse the data file, converting it to a DataFrame.
#data = sqlContext.sql("SELECT * FROM fraud_train_sample_csv")
RDD = spark.read.csv('train_sample.csv', inferSchema=True, header=True)
RDD.show(5)
###Output
+------+---+------+---+-------+-------------------+---------------+-------------+
| ip|app|device| os|channel| click_time|attributed_time|is_attributed|
+------+---+------+---+-------+-------------------+---------------+-------------+
| 87540| 12| 1| 13| 497|2017-11-07 09:30:38| null| 0|
|105560| 25| 1| 17| 259|2017-11-07 13:40:27| null| 0|
|101424| 12| 1| 19| 212|2017-11-07 18:05:24| null| 0|
| 94584| 13| 1| 13| 477|2017-11-07 04:58:08| null| 0|
| 68413| 12| 1| 1| 178|2017-11-09 09:00:09| null| 0|
+------+---+------+---+-------+-------------------+---------------+-------------+
only showing top 5 rows
###Markdown
Convert the click time to day and hour and add them to the data.
###Code
import datetime
from pyspark.sql.functions import year, month, dayofmonth
from pyspark.sql.functions import hour, minute, dayofmonth
RDD = RDD.withColumn('hour',hour(RDD.click_time)).\
withColumn('day',dayofmonth(RDD.click_time))
RDD.show(5)
###Output
+------+---+------+---+-------+-------------------+---------------+-------------+----+---+
| ip|app|device| os|channel| click_time|attributed_time|is_attributed|hour|day|
+------+---+------+---+-------+-------------------+---------------+-------------+----+---+
| 87540| 12| 1| 13| 497|2017-11-07 09:30:38| null| 0| 9| 7|
|105560| 25| 1| 17| 259|2017-11-07 13:40:27| null| 0| 13| 7|
|101424| 12| 1| 19| 212|2017-11-07 18:05:24| null| 0| 18| 7|
| 94584| 13| 1| 13| 477|2017-11-07 04:58:08| null| 0| 4| 7|
| 68413| 12| 1| 1| 178|2017-11-09 09:00:09| null| 0| 9| 9|
+------+---+------+---+-------+-------------------+---------------+-------------+----+---+
only showing top 5 rows
###Markdown
Feature engineering
Feature engineering by grouping and merging, as follows. In the Python EDA we did the following:
```python
gp = df[['ip','day','hour','channel']]\
    .groupby(by=['ip','day','hour'])[['channel']]\
    .count().reset_index()\
    .rename(index=str, columns={'channel': '*ip_day_hour_count_channel'})
df = df.merge(gp, on=['ip','day','hour'], how='left')
```
We translate it to Pyspark as follows.
###Code
gp = RDD.select("ip","day","hour", "channel")\
.groupBy("ip","day","hour")\
.agg({"channel":"count"})\
.withColumnRenamed("count(channel)", "*ip_day_hour_count_channel")\
.sort(col("ip"))
RDD = RDD.join(gp, ["ip","day","hour"])\
.sort(col("ip"))
print("RDD Columns name = \n", RDD.columns)
###Output
RDD Columns name =
['ip', 'day', 'hour', 'app', 'device', 'os', 'channel', 'click_time', 'attributed_time', 'is_attributed', '*ip_day_hour_count_channel']
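###Markdown
One caveat in this translation (an aside, not from the original): Spark's `DataFrame.join` defaults to an *inner* join, while the pandas `merge` above used `how='left'`. Because `gp` is derived from `RDD` itself, every key has a match and the two agree here, but the left join can be made explicit:
###Code
# equivalent join with the type spelled out (a sketch; not re-run here)
# RDD = RDD.join(gp, ["ip", "day", "hour"], how="left").sort(col("ip"))
###Output
_____no_output_____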
###Markdown
In the Python EDA we did the following:
```python
gp = df[['ip', 'app', 'channel']].groupby(by=['ip', 'app'])[['channel']].\
    count().reset_index().\
    rename(index=str, columns={'channel': '*ip_app_count_channel'})
df = df.merge(gp, on=['ip','app'], how='left')
```
We translate it to Pyspark as follows.
###Code
gp = RDD.select("ip","app", "channel")\
.groupBy("ip","app")\
.agg({"channel":"count"})\
.withColumnRenamed("count(channel)", "*ip_app_count_channel")\
.sort(col("ip"))
RDD = RDD.join(gp, ["ip","app"])\
.sort(col("ip"))
print("RDD Columns name = \n", RDD.columns)
###Output
RDD Columns name =
['ip', 'app', 'day', 'hour', 'device', 'os', 'channel', 'click_time', 'attributed_time', 'is_attributed', '*ip_day_hour_count_channel', '*ip_app_count_channel']
###Markdown
In the Python EDA we did the following:
```python
gp = df[['ip','app', 'os', 'channel']].\
    groupby(by=['ip', 'app', 'os'])[['channel']].\
    count().reset_index().\
    rename(index=str, columns={'channel': '*ip_app_os_count_channel'})
df = df.merge(gp, on=['ip','app', 'os'], how='left')
```
We translate it to Pyspark as follows.
###Code
gp = RDD.select('ip','app', 'os', 'channel')\
.groupBy('ip', 'app', 'os')\
.agg({"channel":"count"})\
.withColumnRenamed("count(channel)", "*ip_app_os_count_channel")\
.sort(col("ip"))
RDD = RDD.join(gp, ['ip','app', 'os'])\
.sort(col("ip"))
print("RDD Columns name = \n", RDD.columns)
###Output
RDD Columns name =
['ip', 'app', 'os', 'day', 'hour', 'device', 'channel', 'click_time', 'attributed_time', 'is_attributed', '*ip_day_hour_count_channel', '*ip_app_count_channel', '*ip_app_os_count_channel']
###Markdown
In the Python EDA we did the following:
```python
gp = df[['ip','day','hour','channel']].\
    groupby(by=['ip','day','channel'])[['hour']].\
    var().reset_index().\
    rename(index=str, columns={'hour': '*ip_day_chan_var_hour'})
df = df.merge(gp, on=['ip','day','channel'], how='left')
```
We translate it to Pyspark as follows.
###Code
gp = RDD.select('ip','day','hour','channel')\
.groupBy('ip','day','channel')\
.agg({"hour":"variance"})\
.withColumnRenamed("variance(hour)", "*ip_day_chan_var_hour")\
.sort(col("ip"))
###Output
_____no_output_____
###Markdown
Check the number of NaN and null values in `gp`.
###Code
from pyspark.sql.functions import isnan, when, count, col
gp.select([count(when(isnan(c) | col(c).isNull(), c)).alias(c) for c in gp.columns]).show()
###Output
+---+---+-------+---------------------+
| ip|day|channel|*ip_day_chan_var_hour|
+---+---+-------+---------------------+
| 0| 0| 0| 89123|
+---+---+-------+---------------------+
###Markdown
We remember from the Python EDA the following null counts:```pythonip 0app 0device 0os 0channel 0click_time 0is_attributed 0hour 0day 0*ip_day_hour_count_channel 0*ip_app_count_channel 0*ip_app_os_count_channel 0*ip_tchan_count 89123*ip_app_os_var 89715*ip_app_channel_var_day 84834*ip_app_channel_mean_hour 0dtype: int64```Therefore we skip the following groupings (columns):```python*ip_tchan_count 10877 non-null float64*ip_app_os_var 10285 non-null float64*ip_app_channel_var_day 15166 non-null float64```Note that the last gp was not joined into the data. **Let's keep going:** In the Python EDA we did the following:```pythongp = df[['ip','app', 'channel','hour']].\ groupby(by=['ip', 'app', 'channel'])[['hour']].\ mean().reset_index().\ rename(index=str, columns={'hour': '*ip_app_channel_mean_hour'})df = df.merge(gp, on=['ip','app', 'channel'], how='left')```We translate it to PySpark as follows.
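If we instead wanted to keep a variance feature, the nulls (groups with a single click have undefined variance) could be filled before joining. A minimal sketch, assuming the `gp` frame computed above (`gp_filled` is a hypothetical name; `fillna` is standard PySpark):

```python
# Hypothetical: keep the variance feature by filling undefined variances with 0.0.
# Not part of this pipeline, which skips the high-null features instead.
gp_filled = gp.fillna({"*ip_day_chan_var_hour": 0.0})
# RDD = RDD.join(gp_filled, ["ip", "day", "channel"])   # would then add the feature
```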
###Code
gp = RDD.select('ip','app', 'channel','hour')\
.groupBy('ip', 'app', 'channel')\
.agg({"hour":"mean"})\
.withColumnRenamed("avg(hour)", "*ip_app_channel_mean_hour")\
.sort(col("ip"))
RDD = RDD.join(gp, ['ip', 'app', 'channel'])\
.sort(col("ip"))
print("RDD Columns name = \n", RDD.columns)
###Output
RDD Columns name =
['ip', 'app', 'channel', 'os', 'day', 'hour', 'device', 'click_time', 'attributed_time', 'is_attributed', '*ip_day_hour_count_channel', '*ip_app_count_channel', '*ip_app_os_count_channel', '*ip_app_channel_mean_hour']
###Markdown
Get summary
###Code
# data.summary().show()
cols1 = ['ip', 'app', 'channel',
'os', 'day', 'hour']
RDD.describe(cols1).show()
cols2 = ['device', 'click_time',
'attributed_time','is_attributed']
RDD.describe(cols2).show()
cols3 = ['*ip_day_hour_count_channel',
'*ip_app_count_channel',
'*ip_app_os_count_channel']
RDD.describe(cols3).show()
###Output
+-------+--------------------------+---------------------+------------------------+
|summary|*ip_day_hour_count_channel|*ip_app_count_channel|*ip_app_os_count_channel|
+-------+--------------------------+---------------------+------------------------+
| count| 100000| 100000| 100000|
| mean| 1.49328| 3.58026| 1.29488|
| stddev| 2.0205929005014074| 10.553763885539674| 1.6443882831400423|
| min| 1| 1| 1|
| max| 28| 132| 33|
+-------+--------------------------+---------------------+------------------------+
###Markdown
Check the number of unique values for each column in the data.
###Code
from pyspark.sql.functions import col, countDistinct
cols4 = cols1 + cols2
RDD.agg(*(countDistinct(col(c)).alias(c) for c in cols4)).show()
RDD.agg(*(countDistinct(col(c)).alias(c) for c in cols3)).show()
###Output
+--------------------------+---------------------+------------------------+
|*ip_day_hour_count_channel|*ip_app_count_channel|*ip_app_os_count_channel|
+--------------------------+---------------------+------------------------+
| 27| 58| 23|
+--------------------------+---------------------+------------------------+
###Markdown
Oversampling the data* Compute the majority-to-minority ratio* Duplicate the minority rows* Combine the oversampled minority rows with the original majority rows
###Code
# over sampling
from pyspark.sql.functions import explode, array, lit
major_df = RDD.filter(col("is_attributed") == 0)
minor_df = RDD.filter(col("is_attributed") == 1)
ratio = int(major_df.count()/minor_df.count())
print("ratio: {}".format(ratio))
a = range(ratio)
# duplicate the minority rows
oversampled_df = minor_df.withColumn("dummy", explode(array([lit(x) for x in a]))).drop('dummy')
# combine both oversampled minority rows and previous majority rows
RDD = major_df.unionAll(oversampled_df)
print("RDD Columns name = \n", RDD.columns)
###Output
RDD Columns name =
['ip', 'app', 'channel', 'os', 'day', 'hour', 'device', 'click_time', 'attributed_time', 'is_attributed', '*ip_day_hour_count_channel', '*ip_app_count_channel', '*ip_app_os_count_channel', '*ip_app_channel_mean_hour']
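###Markdown
As a side note (not in the original pipeline): the imbalance could also be handled by undersampling the majority class, which avoids inflating the row count. A minimal sketch under the same `major_df`/`minor_df`/`ratio` variables defined above; `sampleBy` is standard PySpark, while `original_df` and `balanced_df` are hypothetical names:

```python
# Hypothetical alternative: stratified undersampling of the majority class,
# applied to the original (pre-oversampling) frame.
original_df = major_df.unionAll(minor_df)
fractions = {0: 1.0 / ratio, 1: 1.0}   # keep roughly one majority row per minority row
balanced_df = original_df.sampleBy("is_attributed", fractions, seed=42)
balanced_df.groupBy("is_attributed").count().show()
```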
###Markdown
Convert the RDD to pandas and use pandas for visualization* First take a sample from the big RDD* Pass the sample into a pandas DataFrame
###Code
import matplotlib.pyplot as plt
sub_RDD = RDD.sample(False, 0.01, 42)
data_pd = sub_RDD.toPandas()
data_pd.hist(bins=50,
figsize=(20,15),
facecolor='green')
plt.show()
data_pd.plot(kind="scatter",
x="app",
y="channel",
alpha=0.1,
figsize=(8,5))
plt.figure(figsize=(20,24))
cols = ['app','device','os',
'channel', 'hour', 'day',
'*ip_day_hour_count_channel', '*ip_app_count_channel',
'*ip_app_os_count_channel', '*ip_app_channel_mean_hour']
sub_attributed_mask = data_pd["is_attributed"] == 1
sub_Not_attributed_mask = data_pd["is_attributed"] == 0
for count, feature in enumerate(cols, 1):  # 'feature' avoids shadowing pyspark's col()
    plt.subplot(4, 3, count)
    plt.hist([data_pd[sub_attributed_mask][feature],
              data_pd[sub_Not_attributed_mask][feature]],
             color=['goldenrod', 'grey'],
             bins=20, ec='k', density=True)
    plt.title('Count distribution by {}'.format(feature), fontsize=12)
    plt.legend(['attributed', 'Not_attributed'])
    plt.xlabel(feature); plt.ylabel('density')
# path = '../Figures/'
# file_name = 'hist_dens_by_par.png'
# plt.savefig(path+file_name)
###Output
_____no_output_____
###Markdown
TransformingApplying the transformations derived from the previous EDA.
###Code
trans_columns = ['app','device','os', 'day',
'*ip_day_hour_count_channel',
'*ip_app_count_channel',
'*ip_app_os_count_channel']
# DataFrames have no .map in Spark 2+; go through .rdd (illustrative check, result unused)
a = RDD.select('app').rdd.map(lambda x: (x, 1))
a
data = RDD.drop('click_time', 'attributed_time')
# Split the data into training and test sets (30% held out for testing)
(trainingData, testData) = data.randomSplit([0.7, 0.3])
from pyspark.ml.feature import VectorAssembler
assembler = VectorAssembler(inputCols=['ip', 'app', 'device', 'os', 'channel'], outputCol="features")
trainingData = assembler.transform(trainingData)
testData = assembler.transform(testData)
###Output
_____no_output_____
###Markdown
Train the model
###Code
from pyspark.ml.classification import GBTClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
# Train a GBT model.
gbt = GBTClassifier(labelCol="is_attributed", featuresCol="features", maxIter=20, maxDepth=4)
# Fit the model to the training data.
model = gbt.fit(trainingData)
# Make predictions.
predictions = model.transform(testData)
# Select example rows to display.
predictions.select("prediction", "is_attributed", "features").show(5)
# Select (prediction, true label) and compute test error
evaluator = MulticlassClassificationEvaluator(labelCol="is_attributed", predictionCol="prediction", metricName="accuracy")
accuracy = evaluator.evaluate(predictions)
print("Test Error = %g" % (1.0 - accuracy))
print("Test accuracy = %g" % (accuracy))
predictions.groupBy('prediction').count().show()
###Output
_____no_output_____
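###Markdown
Because the model was trained on oversampled data, accuracy alone can be misleading; a threshold-free metric such as ROC AUC gives a clearer picture. A minimal sketch, assuming the `predictions` frame above and a Spark version (2.2+) in which `GBTClassifier` emits a `rawPrediction` column:

```python
from pyspark.ml.evaluation import BinaryClassificationEvaluator

# Area under the ROC curve on the held-out test split (threshold-free metric).
auc_evaluator = BinaryClassificationEvaluator(labelCol="is_attributed",
                                              rawPredictionCol="rawPrediction",
                                              metricName="areaUnderROC")
print("Test AUC = %g" % auc_evaluator.evaluate(predictions))
```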
###Markdown
Apply to test, predict
###Code
test = spark.read.csv('test.csv', inferSchema=True, header=True)
#test.show(5)
assembler = VectorAssembler(inputCols=['ip', 'app', 'device', 'os', 'channel'],outputCol="features")
test = assembler.transform(test)
#test.show(3)
predictions = model.transform(test)
#predictions.show(2)
data_to_submit = predictions.select(['click_id','prediction'])
data_to_submit.show(3)
data_to_submit = data_to_submit.withColumnRenamed('prediction','is_attributed')
data_to_submit.show(3)
data_to_submit.groupBy('is_attributed').count().show()
print('it is running now')
###Output
_____no_output_____ |
Informatics/Deep Learning/Deep Learning - deeplearning.ai/5. Sequence Models/week_1/Building_a_Recurrent_Neural_Network_Step_by_Step_v3b.ipynb | ###Markdown
Building your Recurrent Neural Network - Step by StepWelcome to Course 5's first assignment! In this assignment, you will implement key components of a Recurrent Neural Network in numpy.Recurrent Neural Networks (RNN) are very effective for Natural Language Processing and other sequence tasks because they have "memory". They can read inputs $x^{\langle t \rangle}$ (such as words) one at a time, and remember some information/context through the hidden layer activations that get passed from one time-step to the next. This allows a unidirectional RNN to take information from the past to process later inputs. A bidirectional RNN can take context from both the past and the future. **Notation**:- Superscript $[l]$ denotes an object associated with the $l^{th}$ layer. - Superscript $(i)$ denotes an object associated with the $i^{th}$ example. - Superscript $\langle t \rangle$ denotes an object at the $t^{th}$ time-step. - **Sub**script $i$ denotes the $i^{th}$ entry of a vector.Example: - $a^{(2)[3]\langle 4 \rangle}_5$ denotes the activation of the 2nd training example (2), 3rd layer [3], 4th time step $\langle 4 \rangle$, and 5th entry in the vector. Pre-requisites* We assume that you are already familiar with `numpy`. * To refresh your knowledge of numpy, you can review course 1 of this specialization "Neural Networks and Deep Learning". * Specifically, review the week 2 assignment ["Python Basics with numpy (optional)"](https://www.coursera.org/learn/neural-networks-deep-learning/item/Zh0CU). Be careful when modifying the starter code* When working on graded functions, please remember to only modify the code that is between the```Python START CODE HERE```and```Python END CODE HERE```* In particular, be careful to not modify the first line of graded routines. These start with:```Python GRADED FUNCTION: routine_name```* The automatic grader (autograder) needs these to locate the function.* Even a change in spacing will cause issues with the autograder. * It will return 'failed' if these are modified or missing. Updates for 3b If you were working on the notebook before this update...* The current notebook is version "3b".* You can find your original work saved in the notebook with the previous version name ("v3a") * To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory. List of updates* `rnn_cell_backward` - fixed error in equations - harmonize rnn backward diagram with rnn_forward diagram and fixed Wax multiple (changed from at to xt). - clarified dba batch as summing 'm' examples - aligned equations* `lstm_cell_backward` - aligned equations* `lstm_forward` - fixed typo, Wb to bf* `lstm_cell_forward` - changed c_next_tmp.shape to a_next_tmp.shape in test case - clarified dbxx batch as summing 'm' examples Let's first import all the packages that you will need during this assignment.
###Code
import numpy as np
from rnn_utils import *
###Output
_____no_output_____
###Markdown
1 - Forward propagation for the basic Recurrent Neural NetworkLater this week, you will generate music using an RNN. The basic RNN that you will implement has the structure below. In this example, $T_x = T_y$. **Figure 1**: Basic RNN model Dimensions of input $x$ Input with $n_x$ number of units* For a single timestep of a single input example, $x^{(i) \langle t \rangle }$ is a one-dimensional input vector.* Using language as an example, a language with a 5000 word vocabulary could be one-hot encoded into a vector that has 5000 units. So $x^{(i)\langle t \rangle}$ would have the shape (5000,). * We'll use the notation $n_x$ to denote the number of units in a single timestep of a single training example. Time steps of size $T_{x}$* A recurrent neural network has multiple time steps, which we'll index with $t$.* In the lessons, we saw a single training example $x^{(i)}$ consists of multiple time steps $T_x$. For example, if there are 10 time steps, $T_{x} = 10$ Batches of size $m$* Let's say we have mini-batches, each with 20 training examples. * To benefit from vectorization, we'll stack 20 columns of $x^{(i)}$ examples.* For example, this tensor has the shape (5000,20,10). * We'll use $m$ to denote the number of training examples. * So the shape of a mini-batch is $(n_x,m,T_x)$ 3D Tensor of shape $(n_{x},m,T_{x})$* The 3-dimensional tensor $x$ of shape $(n_x,m,T_x)$ represents the input $x$ that is fed into the RNN. Taking a 2D slice for each time step: $x^{\langle t \rangle}$* At each time step, we'll use a mini-batch of training examples (not just a single example).* So, for each time step $t$, we'll use a 2D slice of shape $(n_x,m)$.* We're referring to this 2D slice as $x^{\langle t \rangle}$. The variable name in the code is `xt`. Definition of hidden state $a$* The activation $a^{\langle t \rangle}$ that is passed to the RNN from one time step to another is called a "hidden state." Dimensions of hidden state $a$* Similar to the input tensor $x$, the hidden state for a single training example is a vector of length $n_{a}$.* If we include a mini-batch of $m$ training examples, the shape of a mini-batch is $(n_{a},m)$.* When we include the time step dimension, the shape of the hidden state is $(n_{a}, m, T_x)$* We will loop through the time steps with index $t$, and work with a 2D slice of the 3D tensor. * We'll refer to this 2D slice as $a^{\langle t \rangle}$. * In the code, the variable names we use are either `a_prev` or `a_next`, depending on the function that's being implemented.* The shape of this 2D slice is $(n_{a}, m)$ Dimensions of prediction $\hat{y}$* Similar to the inputs and hidden states, $\hat{y}$ is a 3D tensor of shape $(n_{y}, m, T_{y})$. - $n_{y}$: number of units in the vector representing the prediction. - $m$: number of examples in a mini-batch. - $T_{y}$: number of time steps in the prediction.* For a single time step $t$, a 2D slice $\hat{y}^{\langle t \rangle}$ has shape $(n_{y}, m)$.* In the code, the variable names are: - `y_pred`: $\hat{y}$ - `yt_pred`: $\hat{y}^{\langle t \rangle}$ Here's how you can implement an RNN: **Steps**:1. Implement the calculations needed for one time-step of the RNN.2. Implement a loop over $T_x$ time-steps in order to process all the inputs, one at a time. 1.1 - RNN cellA recurrent neural network can be seen as the repeated use of a single cell. You are first going to implement the computations for a single time-step. The following figure describes the operations for a single time-step of an RNN cell. 
**Figure 2**: Basic RNN cell. Takes as input $x^{\langle t \rangle}$ (current input) and $a^{\langle t - 1\rangle}$ (previous hidden state containing information from the past), and outputs $a^{\langle t \rangle}$ which is given to the next RNN cell and also used to predict $\hat{y}^{\langle t \rangle}$ rnn cell versus rnn_cell_forward* Note that an RNN cell outputs the hidden state $a^{\langle t \rangle}$. * The rnn cell is shown in the figure as the inner box which has solid lines. * The function that we will implement, `rnn_cell_forward`, also calculates the prediction $\hat{y}^{\langle t \rangle}$ * The rnn_cell_forward is shown in the figure as the outer box that has dashed lines. **Exercise**: Implement the RNN-cell described in Figure (2).**Instructions**:1. Compute the hidden state with tanh activation: $a^{\langle t \rangle} = \tanh(W_{aa} a^{\langle t-1 \rangle} + W_{ax} x^{\langle t \rangle} + b_a)$.2. Using your new hidden state $a^{\langle t \rangle}$, compute the prediction $\hat{y}^{\langle t \rangle} = softmax(W_{ya} a^{\langle t \rangle} + b_y)$. We provided the function `softmax`.3. Store $(a^{\langle t \rangle}, a^{\langle t-1 \rangle}, x^{\langle t \rangle}, parameters)$ in a `cache`.4. Return $a^{\langle t \rangle}$ , $\hat{y}^{\langle t \rangle}$ and `cache` Additional Hints* [numpy.tanh](https://www.google.com/search?q=numpy+tanh&rlz=1C5CHFA_enUS854US855&oq=numpy+tanh&aqs=chrome..69i57j0l5.1340j0j7&sourceid=chrome&ie=UTF-8)* We've created a `softmax` function that you can use. It is located in the file 'rnn_utils.py' and has been imported.* For matrix multiplication, use [numpy.dot](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html)
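Shapes are the main source of bugs here. A tiny numpy sketch (illustrative sizes only, not part of the graded code) showing the 3D input tensor and the 2D slice $x^{\langle t \rangle}$ described above:

```python
import numpy as np

n_x, m, T_x = 3, 10, 4            # made-up sizes for illustration
x = np.random.randn(n_x, m, T_x)  # 3D input tensor of shape (n_x, m, T_x)
xt = x[:, :, 1]                   # 2D slice at time step t=1
print(xt.shape)                   # (3, 10), i.e. (n_x, m)
```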
###Code
# GRADED FUNCTION: rnn_cell_forward
def rnn_cell_forward(xt, a_prev, parameters):
"""
Implements a single forward step of the RNN-cell as described in Figure (2)
Arguments:
xt -- your input data at timestep "t", numpy array of shape (n_x, m).
a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
parameters -- python dictionary containing:
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
ba -- Bias, numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a_next -- next hidden state, of shape (n_a, m)
yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
cache -- tuple of values needed for the backward pass, contains (a_next, a_prev, xt, parameters)
"""
# Retrieve parameters from "parameters"
Wax = parameters["Wax"]
Waa = parameters["Waa"]
Wya = parameters["Wya"]
ba = parameters["ba"]
by = parameters["by"]
### START CODE HERE ### (≈2 lines)
# compute next activation state using the formula given above
a_next = np.tanh(np.dot(Waa,a_prev) + np.dot(Wax,xt) + ba)
# compute output of the current cell using the formula given above
yt_pred = softmax(np.dot(Wya,a_next) + by)
### END CODE HERE ###
# store values you need for backward propagation in cache
cache = (a_next, a_prev, xt, parameters)
return a_next, yt_pred, cache
np.random.seed(1)
xt_tmp = np.random.randn(3,10)
a_prev_tmp = np.random.randn(5,10)
parameters_tmp = {}
parameters_tmp['Waa'] = np.random.randn(5,5)
parameters_tmp['Wax'] = np.random.randn(5,3)
parameters_tmp['Wya'] = np.random.randn(2,5)
parameters_tmp['ba'] = np.random.randn(5,1)
parameters_tmp['by'] = np.random.randn(2,1)
a_next_tmp, yt_pred_tmp, cache_tmp = rnn_cell_forward(xt_tmp, a_prev_tmp, parameters_tmp)
print("a_next[4] = \n", a_next_tmp[4])
print("a_next.shape = \n", a_next_tmp.shape)
print("yt_pred[1] =\n", yt_pred_tmp[1])
print("yt_pred.shape = \n", yt_pred_tmp.shape)
###Output
a_next[4] =
[ 0.59584544 0.18141802 0.61311866 0.99808218 0.85016201 0.99980978
-0.18887155 0.99815551 0.6531151 0.82872037]
a_next.shape =
(5, 10)
yt_pred[1] =
[ 0.9888161 0.01682021 0.21140899 0.36817467 0.98988387 0.88945212
0.36920224 0.9966312 0.9982559 0.17746526]
yt_pred.shape =
(2, 10)
###Markdown
**Expected Output**: ```Pythona_next[4] = [ 0.59584544 0.18141802 0.61311866 0.99808218 0.85016201 0.99980978 -0.18887155 0.99815551 0.6531151 0.82872037]a_next.shape = (5, 10)yt_pred[1] = [ 0.9888161 0.01682021 0.21140899 0.36817467 0.98988387 0.88945212 0.36920224 0.9966312 0.9982559 0.17746526]yt_pred.shape = (2, 10)``` 1.2 - RNN forward pass - A recurrent neural network (RNN) is a repetition of the RNN cell that you've just built. - If your input sequence of data is 10 time steps long, then you will re-use the RNN cell 10 times. - Each cell takes two inputs at each time step: - $a^{\langle t-1 \rangle}$: The hidden state from the previous cell. - $x^{\langle t \rangle}$: The current time-step's input data.- It has two outputs at each time step: - A hidden state ($a^{\langle t \rangle}$) - A prediction ($y^{\langle t \rangle}$)- The weights and biases $(W_{aa}, b_{a}, W_{ax}, b_{x})$ are re-used each time step. - They are maintained between calls to rnn_cell_forward in the 'parameters' dictionary. **Figure 3**: Basic RNN. The input sequence $x = (x^{\langle 1 \rangle}, x^{\langle 2 \rangle}, ..., x^{\langle T_x \rangle})$ is carried over $T_x$ time steps. The network outputs $y = (y^{\langle 1 \rangle}, y^{\langle 2 \rangle}, ..., y^{\langle T_x \rangle})$. **Exercise**: Code the forward propagation of the RNN described in Figure (3).**Instructions**:* Create a 3D array of zeros, $a$ of shape $(n_{a}, m, T_{x})$ that will store all the hidden states computed by the RNN.* Create a 3D array of zeros, $\hat{y}$, of shape $(n_{y}, m, T_{x})$ that will store the predictions. - Note that in this case, $T_{y} = T_{x}$ (the prediction and input have the same number of time steps).* Initialize the 2D hidden state `a_next` by setting it equal to the initial hidden state, $a_{0}$.* At each time step $t$: - Get $x^{\langle t \rangle}$, which is a 2D slice of $x$ for a single time step $t$. - $x^{\langle t \rangle}$ has shape $(n_{x}, m)$ - $x$ has shape $(n_{x}, m, T_{x})$ - Update the 2D hidden state $a^{\langle t \rangle}$ (variable name `a_next`), the prediction $\hat{y}^{\langle t \rangle}$ and the cache by running `rnn_cell_forward`. - $a^{\langle t \rangle}$ has shape $(n_{a}, m)$ - Store the 2D hidden state in the 3D tensor $a$, at the $t^{th}$ position. - $a$ has shape $(n_{a}, m, T_{x})$ - Store the 2D $\hat{y}^{\langle t \rangle}$ prediction (variable name `yt_pred`) in the 3D tensor $\hat{y}_{pred}$ at the $t^{th}$ position. - $\hat{y}^{\langle t \rangle}$ has shape $(n_{y}, m)$ - $\hat{y}$ has shape $(n_{y}, m, T_x)$ - Append the cache to the list of caches.* Return the 3D tensor $a$ and $\hat{y}$, as well as the list of caches. Additional Hints- [np.zeros](https://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html)- If you have a 3 dimensional numpy array and are indexing by its third dimension, you can use array slicing like this: `var_name[:,:,i]`.
###Code
# GRADED FUNCTION: rnn_forward
def rnn_forward(x, a0, parameters):
"""
Implement the forward propagation of the recurrent neural network described in Figure (3).
Arguments:
x -- Input data for every time-step, of shape (n_x, m, T_x).
a0 -- Initial hidden state, of shape (n_a, m)
parameters -- python dictionary containing:
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
ba -- Bias numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
y_pred -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
caches -- tuple of values needed for the backward pass, contains (list of caches, x)
"""
# Initialize "caches" which will contain the list of all caches
caches = []
# Retrieve dimensions from shapes of x and parameters["Wya"]
n_x, m, T_x = x.shape
n_y, n_a = parameters["Wya"].shape
### START CODE HERE ###
# initialize "a" and "y_pred" with zeros (≈2 lines)
a = np.zeros((n_a, m, T_x))
y_pred = np.zeros((n_y, m, T_x))
# Initialize a_next (≈1 line)
a_next = a0
# loop over all time-steps of the input 'x' (1 line)
for t in range(T_x):
# Update next hidden state, compute the prediction, get the cache (≈2 lines)
xt = x[:,:,t]
a_next, yt_pred, cache = rnn_cell_forward(xt, a_next, parameters)
# Save the value of the new "next" hidden state in a (≈1 line)
a[:,:,t] = a_next
# Save the value of the prediction in y (≈1 line)
y_pred[:,:,t] = yt_pred
# Append "cache" to "caches" (≈1 line)
caches.append(cache)
### END CODE HERE ###
# store values needed for backward propagation in cache
caches = (caches, x)
return a, y_pred, caches
np.random.seed(1)
x_tmp = np.random.randn(3,10,4)
a0_tmp = np.random.randn(5,10)
parameters_tmp = {}
parameters_tmp['Waa'] = np.random.randn(5,5)
parameters_tmp['Wax'] = np.random.randn(5,3)
parameters_tmp['Wya'] = np.random.randn(2,5)
parameters_tmp['ba'] = np.random.randn(5,1)
parameters_tmp['by'] = np.random.randn(2,1)
a_tmp, y_pred_tmp, caches_tmp = rnn_forward(x_tmp, a0_tmp, parameters_tmp)
print("a[4][1] = \n", a_tmp[4][1])
print("a.shape = \n", a_tmp.shape)
print("y_pred[1][3] =\n", y_pred_tmp[1][3])
print("y_pred.shape = \n", y_pred_tmp.shape)
print("caches[1][1][3] =\n", caches_tmp[1][1][3])
print("len(caches) = \n", len(caches_tmp))
###Output
a[4][1] =
[-0.99999375 0.77911235 -0.99861469 -0.99833267]
a.shape =
(5, 10, 4)
y_pred[1][3] =
[ 0.79560373 0.86224861 0.11118257 0.81515947]
y_pred.shape =
(2, 10, 4)
caches[1][1][3] =
[-1.1425182 -0.34934272 -0.20889423 0.58662319]
len(caches) =
2
###Markdown
**Expected Output**:```Pythona[4][1] = [-0.99999375 0.77911235 -0.99861469 -0.99833267]a.shape = (5, 10, 4)y_pred[1][3] = [ 0.79560373 0.86224861 0.11118257 0.81515947]y_pred.shape = (2, 10, 4)caches[1][1][3] = [-1.1425182 -0.34934272 -0.20889423 0.58662319]len(caches) = 2``` Congratulations! You've successfully built the forward propagation of a recurrent neural network from scratch. Situations when this RNN will perform better:- This will work well enough for some applications, but it suffers from the vanishing gradient problem. - The RNN works best when each output $\hat{y}^{\langle t \rangle}$ can be estimated using "local" context. - "Local" context refers to information that is close to the prediction's time step $t$.- More formally, local context refers to inputs $x^{\langle t' \rangle}$ and predictions $\hat{y}^{\langle t \rangle}$ where $t'$ is close to $t$.In the next part, you will build a more complex LSTM model, which is better at addressing vanishing gradients. The LSTM will be better able to remember a piece of information and keep it saved for many timesteps. 2 - Long Short-Term Memory (LSTM) networkThe following figure shows the operations of an LSTM-cell. **Figure 4**: LSTM-cell. This tracks and updates a "cell state" or memory variable $c^{\langle t \rangle}$ at every time-step, which can be different from $a^{\langle t \rangle}$. Note, the $softmax^{*}$ includes a dense layer and softmax. Similar to the RNN example above, you will start by implementing the LSTM cell for a single time-step. Then you can iteratively call it from inside a "for-loop" to have it process an input with $T_x$ time-steps. Overview of gates and states - Forget gate $\mathbf{\Gamma}_{f}$* Let's assume we are reading words in a piece of text, and plan to use an LSTM to keep track of grammatical structures, such as whether the subject is singular ("puppy") or plural ("puppies"). * If the subject changes its state (from a singular word to a plural word), the memory of the previous state becomes outdated, so we "forget" that outdated state.* The "forget gate" is a tensor containing values that are between 0 and 1. * If a unit in the forget gate has a value close to 0, the LSTM will "forget" the stored state in the corresponding unit of the previous cell state. * If a unit in the forget gate has a value close to 1, the LSTM will mostly remember the corresponding value in the stored state. Equation$$\mathbf{\Gamma}_f^{\langle t \rangle} = \sigma(\mathbf{W}_f[\mathbf{a}^{\langle t-1 \rangle}, \mathbf{x}^{\langle t \rangle}] + \mathbf{b}_f)\tag{1} $$ Explanation of the equation:* $\mathbf{W_{f}}$ contains weights that govern the forget gate's behavior. * The previous time step's hidden state $[a^{\langle t-1 \rangle}$ and current time step's input $x^{\langle t \rangle}]$ are concatenated together and multiplied by $\mathbf{W_{f}}$. * A sigmoid function is used to make each of the gate tensor's values $\mathbf{\Gamma}_f^{\langle t \rangle}$ range from 0 to 1.* The forget gate $\mathbf{\Gamma}_f^{\langle t \rangle}$ has the same dimensions as the previous cell state $c^{\langle t-1 \rangle}$. * This means that the two can be multiplied together, element-wise.* Multiplying the tensors $\mathbf{\Gamma}_f^{\langle t \rangle} * \mathbf{c}^{\langle t-1 \rangle}$ is like applying a mask over the previous cell state.* If a single value in $\mathbf{\Gamma}_f^{\langle t \rangle}$ is 0 or close to 0, then the product is close to 0. 
* This keeps the information stored in the corresponding unit in $\mathbf{c}^{\langle t-1 \rangle}$ from being remembered for the next time step.* Similarly, if one value is close to 1, the product is close to the original value in the previous cell state. * The LSTM will keep the information from the corresponding unit of $\mathbf{c}^{\langle t-1 \rangle}$, to be used in the next time step. Variable names in the codeThe variable names in the code are similar to the equations, with slight differences. * `Wf`: forget gate weight $\mathbf{W}_{f}$* `bf`: forget gate bias $\mathbf{b}_{f}$* `ft`: forget gate $\Gamma_f^{\langle t \rangle}$ Candidate value $\tilde{\mathbf{c}}^{\langle t \rangle}$* The candidate value is a tensor containing information from the current time step that **may** be stored in the current cell state $\mathbf{c}^{\langle t \rangle}$.* Which parts of the candidate value get passed on depends on the update gate.* The candidate value is a tensor containing values that range from -1 to 1.* The tilde "~" is used to differentiate the candidate $\tilde{\mathbf{c}}^{\langle t \rangle}$ from the cell state $\mathbf{c}^{\langle t \rangle}$. Equation$$\mathbf{\tilde{c}}^{\langle t \rangle} = \tanh\left( \mathbf{W}_{c} [\mathbf{a}^{\langle t - 1 \rangle}, \mathbf{x}^{\langle t \rangle}] + \mathbf{b}_{c} \right) \tag{3}$$ Explanation of the equation* The 'tanh' function produces values between -1 and +1. Variable names in the code* `cct`: candidate value $\mathbf{\tilde{c}}^{\langle t \rangle}$ - Update gate $\mathbf{\Gamma}_{i}$* We use the update gate to decide what aspects of the candidate $\tilde{\mathbf{c}}^{\langle t \rangle}$ to add to the cell state $c^{\langle t \rangle}$.* The update gate decides what parts of a "candidate" tensor $\tilde{\mathbf{c}}^{\langle t \rangle}$ are passed onto the cell state $\mathbf{c}^{\langle t \rangle}$.* The update gate is a tensor containing values between 0 and 1. * When a unit in the update gate is close to 1, it allows the value of the candidate $\tilde{\mathbf{c}}^{\langle t \rangle}$ to be passed onto the cell state $\mathbf{c}^{\langle t \rangle}$ * When a unit in the update gate is close to 0, it prevents the corresponding value in the candidate from being passed onto the cell state.* Notice that we use the subscript "i" and not "u", to follow the convention used in the literature. Equation$$\mathbf{\Gamma}_i^{\langle t \rangle} = \sigma(\mathbf{W}_i[a^{\langle t-1 \rangle}, \mathbf{x}^{\langle t \rangle}] + \mathbf{b}_i)\tag{2} $$ Explanation of the equation* Similar to the forget gate, here $\mathbf{\Gamma}_i^{\langle t \rangle}$, the sigmoid produces values between 0 and 1.* The update gate is multiplied element-wise with the candidate, and this product ($\mathbf{\Gamma}_{i}^{\langle t \rangle} * \tilde{c}^{\langle t \rangle}$) is used in determining the cell state $\mathbf{c}^{\langle t \rangle}$. Variable names in code (Please note that they're different than the equations)In the code, we'll use the variable names found in the academic literature. These variables don't use "u" to denote "update".* `Wi` is the update gate weight $\mathbf{W}_i$ (not "Wu") * `bi` is the update gate bias $\mathbf{b}_i$ (not "bu")* `it` is the update gate $\mathbf{\Gamma}_i^{\langle t \rangle}$ (not "ut") - Cell state $\mathbf{c}^{\langle t \rangle}$* The cell state is the "memory" that gets passed onto future time steps.* The new cell state $\mathbf{c}^{\langle t \rangle}$ is a combination of the previous cell state and the candidate value. 
Equation$$ \mathbf{c}^{\langle t \rangle} = \mathbf{\Gamma}_f^{\langle t \rangle}* \mathbf{c}^{\langle t-1 \rangle} + \mathbf{\Gamma}_{i}^{\langle t \rangle} *\mathbf{\tilde{c}}^{\langle t \rangle} \tag{4} $$ Explanation of equation* The previous cell state $\mathbf{c}^{\langle t-1 \rangle}$ is adjusted (weighted) by the forget gate $\mathbf{\Gamma}_{f}^{\langle t \rangle}$* and the candidate value $\tilde{\mathbf{c}}^{\langle t \rangle}$, adjusted (weighted) by the update gate $\mathbf{\Gamma}_{i}^{\langle t \rangle}$ Variable names and shapes in the code* `c`: cell state, including all time steps, $\mathbf{c}$ shape $(n_{a}, m, T)$* `c_next`: new (next) cell state, $\mathbf{c}^{\langle t \rangle}$ shape $(n_{a}, m)$* `c_prev`: previous cell state, $\mathbf{c}^{\langle t-1 \rangle}$, shape $(n_{a}, m)$ - Output gate $\mathbf{\Gamma}_{o}$* The output gate decides what gets sent as the prediction (output) of the time step.* The output gate is like the other gates. It contains values that range from 0 to 1. Equation$$ \mathbf{\Gamma}_o^{\langle t \rangle}= \sigma(\mathbf{W}_o[\mathbf{a}^{\langle t-1 \rangle}, \mathbf{x}^{\langle t \rangle}] + \mathbf{b}_{o})\tag{5}$$ Explanation of the equation* The output gate is determined by the previous hidden state $\mathbf{a}^{\langle t-1 \rangle}$ and the current input $\mathbf{x}^{\langle t \rangle}$* The sigmoid makes the gate range from 0 to 1. Variable names in the code* `Wo`: output gate weight, $\mathbf{W_o}$* `bo`: output gate bias, $\mathbf{b_o}$* `ot`: output gate, $\mathbf{\Gamma}_{o}^{\langle t \rangle}$ - Hidden state $\mathbf{a}^{\langle t \rangle}$* The hidden state gets passed to the LSTM cell's next time step.* It is used to determine the three gates ($\mathbf{\Gamma}_{f}, \mathbf{\Gamma}_{i}, \mathbf{\Gamma}_{o}$) of the next time step.* The hidden state is also used for the prediction $y^{\langle t \rangle}$. Equation$$ \mathbf{a}^{\langle t \rangle} = \mathbf{\Gamma}_o^{\langle t \rangle} * \tanh(\mathbf{c}^{\langle t \rangle})\tag{6} $$ Explanation of equation* The hidden state $\mathbf{a}^{\langle t \rangle}$ is determined by the cell state $\mathbf{c}^{\langle t \rangle}$ in combination with the output gate $\mathbf{\Gamma}_{o}$.* The cell state is passed through the "tanh" function to rescale values between -1 and +1.* The output gate acts like a "mask" that either preserves the values of $\tanh(\mathbf{c}^{\langle t \rangle})$ or keeps those values from being included in the hidden state $\mathbf{a}^{\langle t \rangle}$ Variable names and shapes in the code* `a`: hidden state, including time steps. $\mathbf{a}$ has shape $(n_{a}, m, T_{x})$* `a_prev`: hidden state from previous time step. $\mathbf{a}^{\langle t-1 \rangle}$ has shape $(n_{a}, m)$* `a_next`: hidden state for next time step. $\mathbf{a}^{\langle t \rangle}$ has shape $(n_{a}, m)$ - Prediction $\mathbf{y}^{\langle t \rangle}_{pred}$* The prediction in this use case is a classification, so we'll use a softmax.The equation is:$$\mathbf{y}^{\langle t \rangle}_{pred} = \textrm{softmax}(\mathbf{W}_{y} \mathbf{a}^{\langle t \rangle} + \mathbf{b}_{y})$$ Variable names and shapes in the code* `y_pred`: prediction, including all time steps. $\mathbf{y}_{pred}$ has shape $(n_{y}, m, T_{x})$. Note that $(T_{y} = T_{x})$ for this example.* `yt_pred`: prediction for the current time step $t$. $\mathbf{y}^{\langle t \rangle}_{pred}$ has shape $(n_{y}, m)$ 2.1 - LSTM cell**Exercise**: Implement the LSTM cell described in Figure (4).**Instructions**:1. 
Concatenate the hidden state $a^{\langle t-1 \rangle}$ and input $x^{\langle t \rangle}$ into a single matrix: $$concat = \begin{bmatrix} a^{\langle t-1 \rangle} \\ x^{\langle t \rangle} \end{bmatrix}$$ 2. Compute all the formulas 1 through 6 for the gates, hidden state, and cell state.3. Compute the prediction $y^{\langle t \rangle}$. Additional Hints* You can use [numpy.concatenate](https://docs.scipy.org/doc/numpy/reference/generated/numpy.concatenate.html). Check which value to use for the `axis` parameter.* The functions `sigmoid()` and `softmax` are imported from `rnn_utils.py`.* [numpy.tanh](https://docs.scipy.org/doc/numpy/reference/generated/numpy.tanh.html)* Use [np.dot](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html) for matrix multiplication.* Notice that the variable names `Wi`, `bi` refer to the weights and biases of the **update** gate. There are no variables named "Wu" or "bu" in this function.
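Before coding the full cell, a toy numpy sketch (illustrative values only, not part of the graded code) of how a gate acts as an element-wise mask on the previous cell state, per equation (4):

```python
import numpy as np

gamma_f = np.array([0.01, 0.99])  # forget-gate values near 0 and near 1
c_prev = np.array([5.0, 5.0])     # previous cell state
print(gamma_f * c_prev)           # [0.05 4.95]: near-0 forgets, near-1 keeps
```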
###Code
# GRADED FUNCTION: lstm_cell_forward
def lstm_cell_forward(xt, a_prev, c_prev, parameters):
"""
Implement a single forward step of the LSTM-cell as described in Figure (4)
Arguments:
xt -- your input data at timestep "t", numpy array of shape (n_x, m).
a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
c_prev -- Memory state at timestep "t-1", numpy array of shape (n_a, m)
parameters -- python dictionary containing:
Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
bi -- Bias of the update gate, numpy array of shape (n_a, 1)
Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
bo -- Bias of the output gate, numpy array of shape (n_a, 1)
Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a_next -- next hidden state, of shape (n_a, m)
c_next -- next memory state, of shape (n_a, m)
yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
cache -- tuple of values needed for the backward pass, contains (a_next, c_next, a_prev, c_prev, xt, parameters)
Note: ft/it/ot stand for the forget/update/output gates, cct stands for the candidate value (c tilde),
c stands for the cell state (memory)
"""
# Retrieve parameters from "parameters"
Wf = parameters["Wf"] # forget gate weight
bf = parameters["bf"]
Wi = parameters["Wi"] # update gate weight (notice the variable name)
bi = parameters["bi"] # (notice the variable name)
Wc = parameters["Wc"] # candidate value weight
bc = parameters["bc"]
Wo = parameters["Wo"] # output gate weight
bo = parameters["bo"]
Wy = parameters["Wy"] # prediction weight
by = parameters["by"]
# Retrieve dimensions from shapes of xt and Wy
n_x, m = xt.shape
n_y, n_a = Wy.shape
### START CODE HERE ###
# Concatenate a_prev and xt (≈1 line)
    concat = np.concatenate((a_prev, xt), axis=0)  # shape (n_a + n_x, m)
# Compute values for ft (forget gate), it (update gate),
# cct (candidate value), c_next (cell state),
# ot (output gate), a_next (hidden state) (≈6 lines)
ft = sigmoid(Wf @ concat + bf) # forget gate
it = sigmoid(Wi @ concat + bi) # update gate
cct = np.tanh(Wc @ concat + bc) # candidate value
c_next = ft * c_prev + it * cct # cell state
ot = sigmoid(Wo @ concat + bo) # output gate
a_next = ot * np.tanh(c_next) # hidden state
# Compute prediction of the LSTM cell (≈1 line)
yt_pred = softmax(Wy @ a_next + by)
### END CODE HERE ###
# store values needed for backward propagation in cache
cache = (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters)
return a_next, c_next, yt_pred, cache
np.random.seed(1)
xt_tmp = np.random.randn(3,10)
a_prev_tmp = np.random.randn(5,10)
c_prev_tmp = np.random.randn(5,10)
parameters_tmp = {}
parameters_tmp['Wf'] = np.random.randn(5, 5+3)
parameters_tmp['bf'] = np.random.randn(5,1)
parameters_tmp['Wi'] = np.random.randn(5, 5+3)
parameters_tmp['bi'] = np.random.randn(5,1)
parameters_tmp['Wo'] = np.random.randn(5, 5+3)
parameters_tmp['bo'] = np.random.randn(5,1)
parameters_tmp['Wc'] = np.random.randn(5, 5+3)
parameters_tmp['bc'] = np.random.randn(5,1)
parameters_tmp['Wy'] = np.random.randn(2,5)
parameters_tmp['by'] = np.random.randn(2,1)
a_next_tmp, c_next_tmp, yt_tmp, cache_tmp = lstm_cell_forward(xt_tmp, a_prev_tmp, c_prev_tmp, parameters_tmp)
print("a_next[4] = \n", a_next_tmp[4])
print("a_next.shape = ", a_next_tmp.shape)
print("c_next[2] = \n", c_next_tmp[2])
print("c_next.shape = ", c_next_tmp.shape)
print("yt[1] =", yt_tmp[1])
print("yt.shape = ", yt_tmp.shape)
print("cache[1][3] =\n", cache_tmp[1][3])
print("len(cache) = ", len(cache_tmp))
###Output
a_next[4] =
[-0.66408471 0.0036921 0.02088357 0.22834167 -0.85575339 0.00138482
0.76566531 0.34631421 -0.00215674 0.43827275]
a_next.shape = (5, 10)
c_next[2] =
[ 0.63267805 1.00570849 0.35504474 0.20690913 -1.64566718 0.11832942
0.76449811 -0.0981561 -0.74348425 -0.26810932]
c_next.shape = (5, 10)
yt[1] = [ 0.79913913 0.15986619 0.22412122 0.15606108 0.97057211 0.31146381
0.00943007 0.12666353 0.39380172 0.07828381]
yt.shape = (2, 10)
cache[1][3] =
[-0.16263996 1.03729328 0.72938082 -0.54101719 0.02752074 -0.30821874
0.07651101 -1.03752894 1.41219977 -0.37647422]
len(cache) = 10
###Markdown
**Expected Output**:```Pythona_next[4] = [-0.66408471 0.0036921 0.02088357 0.22834167 -0.85575339 0.00138482 0.76566531 0.34631421 -0.00215674 0.43827275]a_next.shape = (5, 10)c_next[2] = [ 0.63267805 1.00570849 0.35504474 0.20690913 -1.64566718 0.11832942 0.76449811 -0.0981561 -0.74348425 -0.26810932]c_next.shape = (5, 10)yt[1] = [ 0.79913913 0.15986619 0.22412122 0.15606108 0.97057211 0.31146381 0.00943007 0.12666353 0.39380172 0.07828381]yt.shape = (2, 10)cache[1][3] = [-0.16263996 1.03729328 0.72938082 -0.54101719 0.02752074 -0.30821874 0.07651101 -1.03752894 1.41219977 -0.37647422]len(cache) = 10``` 2.2 - Forward pass for LSTMNow that you have implemented one step of an LSTM, you can now iterate over it using a for-loop to process a sequence of $T_x$ inputs. **Figure 5**: LSTM over multiple time-steps. **Exercise:** Implement `lstm_forward()` to run an LSTM over $T_x$ time-steps. **Instructions*** Get the dimensions $n_x, n_a, n_y, m, T_x$ from the shape of the variables: `x` and `parameters`.* Initialize the 3D tensors $a$, $c$ and $y$. - $a$: hidden state, shape $(n_{a}, m, T_{x})$ - $c$: cell state, shape $(n_{a}, m, T_{x})$ - $y$: prediction, shape $(n_{y}, m, T_{x})$ (Note that $T_{y} = T_{x}$ in this example). - **Note** Setting one variable equal to the other is a "copy by reference". In other words, don't do `c = a`, otherwise both these variables point to the same underlying variable.* Initialize the 2D tensor $a^{\langle t \rangle}$ - $a^{\langle t \rangle}$ stores the hidden state for time step $t$. The variable name is `a_next`. - $a^{\langle 0 \rangle}$, the initial hidden state at time step 0, is passed in when calling the function. The variable name is `a0`. - $a^{\langle t \rangle}$ and $a^{\langle 0 \rangle}$ represent a single time step, so they both have the shape $(n_{a}, m)$ - Initialize $a^{\langle t \rangle}$ by setting it to the initial hidden state ($a^{\langle 0 \rangle}$) that is passed into the function.* Initialize $c^{\langle t \rangle}$ with zeros. - The variable name is `c_next`. - $c^{\langle t \rangle}$ represents a single time step, so its shape is $(n_{a}, m)$ - **Note**: create `c_next` as its own variable with its own location in memory. Do not initialize it as a slice of the 3D tensor $c$. In other words, **don't** do `c_next = c[:,:,0]`.* For each time step, do the following: - From the 3D tensor $x$, get a 2D slice $x^{\langle t \rangle}$ at time step $t$. - Call the `lstm_cell_forward` function that you defined previously, to get the hidden state, cell state, prediction, and cache. - Store the hidden state, cell state and prediction (the 2D tensors) inside the 3D tensors. - Also append the cache to the list of caches.
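The "copy by reference" warning above is worth seeing in action. A minimal numpy sketch (not part of the graded code) of why `c = a` and slice views are dangerous here:

```python
import numpy as np

a = np.zeros((2, 3))
c = a               # alias: both names refer to the same array
c[0, 0] = 1.0
print(a[0, 0])      # 1.0 -- 'a' changed too
view = a[:, 0]      # basic slicing returns a view, not a copy
view[:] = 5.0
print(a[:, 0])      # [5. 5.] -- writing through the view mutated 'a'
```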
###Code
# GRADED FUNCTION: lstm_forward
def lstm_forward(x, a0, parameters):
"""
Implement the forward propagation of the recurrent neural network using an LSTM-cell described in Figure (4).
Arguments:
x -- Input data for every time-step, of shape (n_x, m, T_x).
a0 -- Initial hidden state, of shape (n_a, m)
parameters -- python dictionary containing:
Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
bi -- Bias of the update gate, numpy array of shape (n_a, 1)
Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
bo -- Bias of the output gate, numpy array of shape (n_a, 1)
Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
y -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
c -- The value of the cell state, numpy array of shape (n_a, m, T_x)
caches -- tuple of values needed for the backward pass, contains (list of all the caches, x)
"""
# Initialize "caches", which will track the list of all the caches
caches = []
### START CODE HERE ###
Wy = parameters['Wy'] # saving parameters['Wy'] in a local variable in case students use Wy instead of parameters['Wy']
# Retrieve dimensions from shapes of x and parameters['Wy'] (≈2 lines)
n_x, m, T_x = x.shape
n_y, n_a = parameters['Wy'].shape
# initialize "a", "c" and "y" with zeros (≈3 lines)
a = np.zeros((n_a, m, T_x))
c = np.zeros_like(a)
y = np.zeros((n_y, m, T_x))
# Initialize a_next and c_next (≈2 lines)
a_next = a0
c_next = np.zeros((n_a, m))
# loop over all time-steps
for t in range(T_x):
# Get the 2D slice 'xt' from the 3D input 'x' at time step 't'
xt = x[:,:,t]
# Update next hidden state, next memory state, compute the prediction, get the cache (≈1 line)
a_next, c_next, yt, cache = lstm_cell_forward(xt, a_next, c_next, parameters)
# Save the value of the new "next" hidden state in a (≈1 line)
a[:,:,t] = a_next
# Save the value of the next cell state (≈1 line)
c[:,:,t] = c_next
# Save the value of the prediction in y (≈1 line)
y[:,:,t] = yt
# Append the cache into caches (≈1 line)
caches.append(cache)
### END CODE HERE ###
# store values needed for backward propagation in cache
caches = (caches, x)
return a, y, c, caches
np.random.seed(1)
x_tmp = np.random.randn(3,10,7)
a0_tmp = np.random.randn(5,10)
parameters_tmp = {}
parameters_tmp['Wf'] = np.random.randn(5, 5+3)
parameters_tmp['bf'] = np.random.randn(5,1)
parameters_tmp['Wi'] = np.random.randn(5, 5+3)
parameters_tmp['bi']= np.random.randn(5,1)
parameters_tmp['Wo'] = np.random.randn(5, 5+3)
parameters_tmp['bo'] = np.random.randn(5,1)
parameters_tmp['Wc'] = np.random.randn(5, 5+3)
parameters_tmp['bc'] = np.random.randn(5,1)
parameters_tmp['Wy'] = np.random.randn(2,5)
parameters_tmp['by'] = np.random.randn(2,1)
a_tmp, y_tmp, c_tmp, caches_tmp = lstm_forward(x_tmp, a0_tmp, parameters_tmp)
print("a[4][3][6] = ", a_tmp[4][3][6])
print("a.shape = ", a_tmp.shape)
print("y[1][4][3] =", y_tmp[1][4][3])
print("y.shape = ", y_tmp.shape)
print("caches[1][1][1] =\n", caches_tmp[1][1][1])
print("c[1][2][1]", c_tmp[1][2][1])
print("len(caches) = ", len(caches_tmp))
###Output
a[4][3][6] = 0.172117767533
a.shape = (5, 10, 7)
y[1][4][3] = 0.95087346185
y.shape = (2, 10, 7)
caches[1][1][1] =
[ 0.82797464 0.23009474 0.76201118 -0.22232814 -0.20075807 0.18656139
0.41005165]
c[1][2][1] -0.855544916718
len(caches) = 2
###Markdown
**Expected Output**:```Pythona[4][3][6] = 0.172117767533a.shape = (5, 10, 7)y[1][4][3] = 0.95087346185y.shape = (2, 10, 7)caches[1][1][1] = [ 0.82797464 0.23009474 0.76201118 -0.22232814 -0.20075807 0.18656139 0.41005165]c[1][2][1] -0.855544916718len(caches) = 2``` Congratulations! You have now implemented the forward passes for the basic RNN and the LSTM. When using a deep learning framework, implementing the forward pass is sufficient to build systems that achieve great performance. The rest of this notebook is optional, and will not be graded. 3 - Backpropagation in recurrent neural networks (OPTIONAL / UNGRADED)In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers do not need to bother with the details of the backward pass. If however you are an expert in calculus and want to see the details of backprop in RNNs, you can work through this optional portion of the notebook. When in an earlier [course](https://www.coursera.org/learn/neural-networks-deep-learning/lecture/0VSHe/derivatives-with-a-computation-graph) you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost to update the parameters. Similarly, in recurrent neural networks you can calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are quite complicated and we did not derive them in lecture. However, we will briefly present them below. Note that this notebook does not implement the backward path from the Loss 'J' backwards to 'a'. This would have included the dense layer and softmax which are a part of the forward path. This is assumed to be calculated elsewhere and the result passed to rnn_backward in 'da'. It is further assumed that loss has been adjusted for batch size (m) and division by the number of examples is not required here. This section is optional and ungraded. It is more difficult and has fewer details regarding its implementation. This section only implements key elements of the full path. 3.1 - Basic RNN backward passWe will start by computing the backward pass for the basic RNN-cell and then in the following sections, iterate through the cells. **Figure 6**: RNN-cell's backward pass. Just like in a fully-connected neural network, the derivative of the cost function $J$ backpropagates through the time steps of the RNN by following the chain-rule from calculus. Internal to the cell, the chain-rule is also used to calculate $(\frac{\partial J}{\partial W_{ax}},\frac{\partial J}{\partial W_{aa}},\frac{\partial J}{\partial b})$ to update the parameters $(W_{ax}, W_{aa}, b_a)$. The operation can utilize the cached results from the forward path. Recall from lecture, the shorthand for the partial derivative of cost relative to a variable is dVariable. For example, $\frac{\partial J}{\partial W_{ax}}$ is $dW_{ax}$. This will be used throughout the remaining sections. **Figure 7**: This implementation of rnn_cell_backward does **not** include the output dense layer and softmax which are included in rnn_cell_forward. $da_{next}$ is $\frac{\partial{J}}{\partial a^{\langle t \rangle}}$ and includes loss from previous stages and current stage output logic. The addition shown in green will be part of your implementation of rnn_backward. EquationsTo compute the rnn_cell_backward you can utilize the following equations. It is a good exercise to derive them by hand. 
Here, $*$ denotes element-wise multiplication while the absence of a symbol indicates matrix multiplication.\begin{align}\displaystyle a^{\langle t \rangle} &= \tanh(W_{ax} x^{\langle t \rangle} + W_{aa} a^{\langle t-1 \rangle} + b_{a})\tag{-} \\[8pt]\displaystyle \frac{\partial \tanh(x)} {\partial x} &= 1 - \tanh^2(x) \tag{-} \\[8pt]\displaystyle {dW_{ax}} &= (da_{next} * ( 1-\tanh^2(W_{ax}x^{\langle t \rangle}+W_{aa} a^{\langle t-1 \rangle} + b_{a}) )) x^{\langle t \rangle T}\tag{1} \\[8pt]\displaystyle dW_{aa} &= (da_{next} * ( 1-\tanh^2(W_{ax}x^{\langle t \rangle}+W_{aa} a^{\langle t-1 \rangle} + b_{a}) )) a^{\langle t-1 \rangle T}\tag{2} \\[8pt]\displaystyle db_a& = \sum_{batch}( da_{next} * ( 1-\tanh^2(W_{ax}x^{\langle t \rangle}+W_{aa} a^{\langle t-1 \rangle} + b_{a}) ))\tag{3} \\[8pt]\displaystyle dx^{\langle t \rangle} &= { W_{ax}}^T (da_{next} * ( 1-\tanh^2(W_{ax}x^{\langle t \rangle}+W_{aa} a^{\langle t-1 \rangle} + b_{a}) ))\tag{4} \\[8pt]\displaystyle da_{prev} &= { W_{aa}}^T(da_{next} * ( 1-\tanh^2(W_{ax}x^{\langle t \rangle}+W_{aa} a^{\langle t-1 \rangle} + b_{a}) ))\tag{5}\end{align} Implementing rnn_cell_backwardThe results can be computed directly by implementing the equations above. However, the above can optionally be simplified by computing 'dz' and utilizing the chain rule. This can be further simplified by noting that $\tanh(W_{ax}x^{\langle t \rangle}+W_{aa} a^{\langle t-1 \rangle} + b_{a})$ was computed and saved in the forward pass. To calculate dba, the 'batch' above is a sum across all 'm' examples (axis=1). Note that you should use the keepdims=True option.It may be worthwhile to review Course 1 [Derivatives with a computational graph](https://www.coursera.org/learn/neural-networks-deep-learning/lecture/0VSHe/derivatives-with-a-computation-graph) through [Backpropagation Intuition](https://www.coursera.org/learn/neural-networks-deep-learning/lecture/6dDj7/backpropagation-intuition-optional), which decompose the calculation into steps using the chain rule. Matrix vector derivatives are described [here](http://cs231n.stanford.edu/vecDerivs.pdf), though the equations above incorporate the required transformations.Note rnn_cell_backward does __not__ include the calculation of loss from $y^{\langle t \rangle}$; this is incorporated into the incoming da_next. This is a slight mismatch with rnn_cell_forward which includes a dense layer and softmax. Note: in the code: $\displaystyle dx^{\langle t \rangle}$ is represented by dxt, $\displaystyle d W_{ax}$ is represented by dWax, $\displaystyle da_{prev}$ is represented by da_prev, $\displaystyle dW_{aa}$ is represented by dWaa, $\displaystyle db_{a}$ is represented by dba, dz is not derived above but can optionally be derived by students to simplify the repeated calculations.
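As a quick sanity check on the key chain-rule ingredient, the identity $\frac{d}{dz}\tanh(z) = 1 - \tanh^2(z)$ can be verified numerically (illustrative sketch, not part of the graded code):

```python
import numpy as np

z, eps = 0.7, 1e-6
numeric = (np.tanh(z + eps) - np.tanh(z - eps)) / (2 * eps)  # central difference
analytic = 1 - np.tanh(z) ** 2
print(numeric, analytic)  # the two agree to ~1e-10
```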
###Code
def rnn_cell_backward(da_next, cache):
"""
Implements the backward pass for the RNN-cell (single time-step).
Arguments:
da_next -- Gradient of loss with respect to next hidden state
cache -- python dictionary containing useful values (output of rnn_cell_forward())
Returns:
gradients -- python dictionary containing:
dx -- Gradients of input data, of shape (n_x, m)
da_prev -- Gradients of previous hidden state, of shape (n_a, m)
dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
dba -- Gradients of bias vector, of shape (n_a, 1)
"""
# Retrieve values from cache
(a_next, a_prev, xt, parameters) = cache
# Retrieve values from parameters
Wax = parameters["Wax"]
Waa = parameters["Waa"]
Wya = parameters["Wya"]
ba = parameters["ba"]
by = parameters["by"]
### START CODE HERE ###
# compute the gradient of the loss with respect to z (optional) (≈1 line)
    dz = (1 - a_next**2) * da_next
assert dz.shape == a_next.shape
# compute the gradient of the loss with respect to Wax (≈2 lines)
dxt = Wax.T @ dz
dWax = dz @ xt.T
# compute the gradient with respect to Waa (≈2 lines)
da_prev = Waa.T @ dz
dWaa = dz @ a_prev.T
# compute the gradient with respect to b (≈1 line)
    dba = np.sum(dz, axis=1, keepdims=True)  # sum dz over the 'm' examples in the batch
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dxt": dxt, "da_prev": da_prev, "dWax": dWax, "dWaa": dWaa, "dba": dba}
return gradients
np.random.seed(1)
xt_tmp = np.random.randn(3,10)
a_prev_tmp = np.random.randn(5,10)
parameters_tmp = {}
parameters_tmp['Wax'] = np.random.randn(5,3)
parameters_tmp['Waa'] = np.random.randn(5,5)
parameters_tmp['Wya'] = np.random.randn(2,5)
parameters_tmp['ba'] = np.random.randn(5,1)
parameters_tmp['by'] = np.random.randn(2,1)
a_next_tmp, yt_tmp, cache_tmp = rnn_cell_forward(xt_tmp, a_prev_tmp, parameters_tmp)
da_next_tmp = np.random.randn(5,10)
gradients_tmp = rnn_cell_backward(da_next_tmp, cache_tmp)
print("gradients[\"dxt\"][1][2] =", gradients_tmp["dxt"][1][2])
print("gradients[\"dxt\"].shape =", gradients_tmp["dxt"].shape)
print("gradients[\"da_prev\"][2][3] =", gradients_tmp["da_prev"][2][3])
print("gradients[\"da_prev\"].shape =", gradients_tmp["da_prev"].shape)
print("gradients[\"dWax\"][3][1] =", gradients_tmp["dWax"][3][1])
print("gradients[\"dWax\"].shape =", gradients_tmp["dWax"].shape)
print("gradients[\"dWaa\"][1][2] =", gradients_tmp["dWaa"][1][2])
print("gradients[\"dWaa\"].shape =", gradients_tmp["dWaa"].shape)
print("gradients[\"dba\"][4] =", gradients_tmp["dba"][4])
print("gradients[\"dba\"].shape =", gradients_tmp["dba"].shape)
###Output
gradients["dxt"][1][2] = -1.3872130506
gradients["dxt"].shape = (3, 10)
gradients["da_prev"][2][3] = -0.152399493774
gradients["da_prev"].shape = (5, 10)
gradients["dWax"][3][1] = 0.410772824935
gradients["dWax"].shape = (5, 3)
gradients["dWaa"][1][2] = 1.15034506685
gradients["dWaa"].shape = (5, 5)
gradients["dba"][4] = [ 0.03566923]
gradients["dba"].shape = (5, 1)
###Markdown
**Expected Output**: **gradients["dxt"][1][2]** = -1.3872130506 **gradients["dxt"].shape** = (3, 10) **gradients["da_prev"][2][3]** = -0.152399493774 **gradients["da_prev"].shape** = (5, 10) **gradients["dWax"][3][1]** = 0.410772824935 **gradients["dWax"].shape** = (5, 3) **gradients["dWaa"][1][2]** = 1.15034506685 **gradients["dWaa"].shape** = (5, 5) **gradients["dba"][4]** = [ 0.20023491] **gradients["dba"].shape** = (5, 1) Backward pass through the RNNComputing the gradients of the cost with respect to $a^{\langle t \rangle}$ at every time-step $t$ is useful because it is what helps the gradient backpropagate to the previous RNN-cell. To do so, you need to iterate through all the time steps starting at the end, and at each step, you increment the overall $db_a$, $dW_{aa}$, $dW_{ax}$ and you store $dx$.**Instructions**:Implement the `rnn_backward` function. Initialize the return variables with zeros first and then loop through all the time steps while calling the `rnn_cell_backward` at each timestep, updating the other variables accordingly. * Note that this notebook does not implement the backward path from the Loss 'J' backwards to 'a'. * This would have included the dense layer and softmax which are a part of the forward path. * This is assumed to be calculated elsewhere and the result passed to rnn_backward in 'da'. * You must combine this with the loss from the previous stages when calling rnn_cell_backward (see figure 7 above).* It is further assumed that loss has been adjusted for batch size (m). * Therefore, division by the number of examples is not required here.
###Code
def rnn_backward(da, caches):
"""
Implement the backward pass for an RNN over an entire sequence of input data.
Arguments:
da -- Upstream gradients of all hidden states, of shape (n_a, m, T_x)
caches -- tuple containing information from the forward pass (rnn_forward)
Returns:
gradients -- python dictionary containing:
dx -- Gradient w.r.t. the input data, numpy-array of shape (n_x, m, T_x)
da0 -- Gradient w.r.t the initial hidden state, numpy-array of shape (n_a, m)
dWax -- Gradient w.r.t the input's weight matrix, numpy-array of shape (n_a, n_x)
dWaa -- Gradient w.r.t the hidden state's weight matrix, numpy-array of shape (n_a, n_a)
dba -- Gradient w.r.t the bias, of shape (n_a, 1)
"""
### START CODE HERE ###
# Retrieve values from the first cache (t=1) of caches (≈2 lines)
(caches, x) = caches
(a1, a0, x1, parameters) = caches[0]
# Retrieve dimensions from da's and x1's shapes (≈2 lines)
n_a, m, T_x = da.shape
n_x, m = x1.shape
# initialize the gradients with the right sizes (≈6 lines)
dx = np.zeros((n_x,m,T_x))
dWax = np.zeros((n_a,n_x))
dWaa = np.zeros((n_a,n_a))
dba = np.zeros((n_a,1))
da0 = np.zeros((n_a,m))
da_prevt = np.zeros_like(da0)
# Loop through all the time steps
for t in reversed(range(T_x)):
# Compute gradients at time step t.
# Remember to sum gradients from the output path (da) and the previous timesteps (da_prevt) (≈1 line)
gradients = rnn_cell_backward(da[:,:,t] + da_prevt, caches[t])
# Retrieve derivatives from gradients (≈ 1 line)
dxt, da_prevt, dWaxt, dWaat, dbat = gradients["dxt"], gradients["da_prev"], gradients["dWax"], gradients["dWaa"], gradients["dba"]
# Increment global derivatives w.r.t parameters by adding their derivative at time-step t (≈4 lines)
dx[:, :, t] = dxt
dWax += dWaxt
dWaa += dWaat
dba += dbat
# Set da0 to the gradient of a which has been backpropagated through all time-steps (≈1 line)
da0 = da_prevt
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dx": dx, "da0": da0, "dWax": dWax, "dWaa": dWaa,"dba": dba}
return gradients
np.random.seed(1)
x_tmp = np.random.randn(3,10,4)
a0_tmp = np.random.randn(5,10)
parameters_tmp = {}
parameters_tmp['Wax'] = np.random.randn(5,3)
parameters_tmp['Waa'] = np.random.randn(5,5)
parameters_tmp['Wya'] = np.random.randn(2,5)
parameters_tmp['ba'] = np.random.randn(5,1)
parameters_tmp['by'] = np.random.randn(2,1)
a_tmp, y_tmp, caches_tmp = rnn_forward(x_tmp, a0_tmp, parameters_tmp)
da_tmp = np.random.randn(5, 10, 4)
gradients_tmp = rnn_backward(da_tmp, caches_tmp)
print("gradients[\"dx\"][1][2] =", gradients_tmp["dx"][1][2])
print("gradients[\"dx\"].shape =", gradients_tmp["dx"].shape)
print("gradients[\"da0\"][2][3] =", gradients_tmp["da0"][2][3])
print("gradients[\"da0\"].shape =", gradients_tmp["da0"].shape)
print("gradients[\"dWax\"][3][1] =", gradients_tmp["dWax"][3][1])
print("gradients[\"dWax\"].shape =", gradients_tmp["dWax"].shape)
print("gradients[\"dWaa\"][1][2] =", gradients_tmp["dWaa"][1][2])
print("gradients[\"dWaa\"].shape =", gradients_tmp["dWaa"].shape)
print("gradients[\"dba\"][4] =", gradients_tmp["dba"][4])
print("gradients[\"dba\"].shape =", gradients_tmp["dba"].shape)
###Output
gradients["dx"][1][2] = [-2.07101689 -0.59255627 0.02466855 0.01483317]
gradients["dx"].shape = (3, 10, 4)
gradients["da0"][2][3] = -0.314942375127
gradients["da0"].shape = (5, 10)
gradients["dWax"][3][1] = 11.2641044965
gradients["dWax"].shape = (5, 3)
gradients["dWaa"][1][2] = 2.30333312658
gradients["dWaa"].shape = (5, 5)
gradients["dba"][4] = [-0.74747722]
gradients["dba"].shape = (5, 1)
###Markdown
**Expected Output**:

| Variable | Value |
|---|---|
| **gradients["dx"][1][2]** | [-2.07101689 -0.59255627  0.02466855  0.01483317] |
| **gradients["dx"].shape** | (3, 10, 4) |
| **gradients["da0"][2][3]** | -0.314942375127 |
| **gradients["da0"].shape** | (5, 10) |
| **gradients["dWax"][3][1]** | 11.2641044965 |
| **gradients["dWax"].shape** | (5, 3) |
| **gradients["dWaa"][1][2]** | 2.30333312658 |
| **gradients["dWaa"].shape** | (5, 5) |
| **gradients["dba"][4]** | [-0.74747722] |
| **gradients["dba"].shape** | (5, 1) |

3.2 - LSTM backward pass

3.2.1 One Step backward

The LSTM backward pass is slightly more complicated than the forward pass.

**Figure 8**: lstm_cell_backward. Note that the output functions, while part of lstm_cell_forward, are not included in lstm_cell_backward.

The equations for the LSTM backward pass are provided below. (If you enjoy calculus exercises, feel free to try deriving them from scratch yourself.)

3.2.2 Gate derivatives

Note the location of the gate derivatives ($d\gamma_{..}$) between the dense layer and the activation function (see the figure above). This is convenient for computing the parameter derivatives in the next step.

\begin{align}
d\gamma_o^{\langle t \rangle} &= da_{next}*\tanh(c_{next}) * \Gamma_o^{\langle t \rangle}*\left(1-\Gamma_o^{\langle t \rangle}\right)\tag{7} \\[8pt]
dp\widetilde{c}^{\langle t \rangle} &= \left(dc_{next}*\Gamma_u^{\langle t \rangle}+ \Gamma_o^{\langle t \rangle}* (1-\tanh^2(c_{next})) * \Gamma_u^{\langle t \rangle} * da_{next} \right) * \left(1-\left(\widetilde c^{\langle t \rangle}\right)^2\right) \tag{8} \\[8pt]
d\gamma_u^{\langle t \rangle} &= \left(dc_{next}*\widetilde{c}^{\langle t \rangle} + \Gamma_o^{\langle t \rangle}* (1-\tanh^2(c_{next})) * \widetilde{c}^{\langle t \rangle} * da_{next}\right)*\Gamma_u^{\langle t \rangle}*\left(1-\Gamma_u^{\langle t \rangle}\right)\tag{9} \\[8pt]
d\gamma_f^{\langle t \rangle} &= \left(dc_{next}* c_{prev} + \Gamma_o^{\langle t \rangle} * (1-\tanh^2(c_{next})) * c_{prev} * da_{next}\right)*\Gamma_f^{\langle t \rangle}*\left(1-\Gamma_f^{\langle t \rangle}\right)\tag{10}
\end{align}

3.2.3 Parameter derivatives

$ dW_f = d\gamma_f^{\langle t \rangle} \begin{bmatrix} a_{prev} \\ x_t\end{bmatrix}^T \tag{11} $
$ dW_u = d\gamma_u^{\langle t \rangle} \begin{bmatrix} a_{prev} \\ x_t\end{bmatrix}^T \tag{12} $
$ dW_c = dp\widetilde c^{\langle t \rangle} \begin{bmatrix} a_{prev} \\ x_t\end{bmatrix}^T \tag{13} $
$ dW_o = d\gamma_o^{\langle t \rangle} \begin{bmatrix} a_{prev} \\ x_t\end{bmatrix}^T \tag{14} $

To calculate $db_f, db_u, db_c, db_o$ you just need to sum across all 'm' examples (axis=1) on $d\gamma_f^{\langle t \rangle}, d\gamma_u^{\langle t \rangle}, dp\widetilde c^{\langle t \rangle}, d\gamma_o^{\langle t \rangle}$ respectively. Note that you should use the `keepdims = True` option.

$\displaystyle db_f = \sum_{batch}d\gamma_f^{\langle t \rangle}\tag{15}$
$\displaystyle db_u = \sum_{batch}d\gamma_u^{\langle t \rangle}\tag{16}$
$\displaystyle db_c = \sum_{batch}dp\widetilde c^{\langle t \rangle}\tag{17}$
$\displaystyle db_o = \sum_{batch}d\gamma_o^{\langle t \rangle}\tag{18}$

Finally, you will compute the derivatives with respect to the previous hidden state, the previous memory state, and the input.

$ da_{prev} = W_f^T d\gamma_f^{\langle t \rangle} + W_u^T d\gamma_u^{\langle t \rangle}+ W_c^T dp\widetilde c^{\langle t \rangle} + W_o^T d\gamma_o^{\langle t \rangle} \tag{19}$

Here, to account for concatenation, the weights for equation 19 are the first $n_a$ columns (i.e. $W_f = W_f[:,:n_a]$ etc...).

$ dc_{prev} = dc_{next}*\Gamma_f^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} * (1- \tanh^2(c_{next}))*\Gamma_f^{\langle t \rangle}*da_{next} \tag{20}$
$ dx^{\langle t \rangle} = W_f^T d\gamma_f^{\langle t \rangle} + W_u^T d\gamma_u^{\langle t \rangle}+ W_c^T dp\widetilde c^{\langle t \rangle} + W_o^T d\gamma_o^{\langle t \rangle}\tag{21} $

where the weights for equation 21 are the columns from $n_a$ to the end (i.e. $W_f = W_f[:,n_a:]$ etc...).

**Exercise:** Implement `lstm_cell_backward` by implementing equations $7-21$ below.

Note: In the code, $d\gamma_o^{\langle t \rangle}$ is represented by `dot`, $dp\widetilde{c}^{\langle t \rangle}$ by `dcct`, $d\gamma_u^{\langle t \rangle}$ by `dit`, and $d\gamma_f^{\langle t \rangle}$ by `dft`.
###Code
def lstm_cell_backward(da_next, dc_next, cache):
"""
Implement the backward pass for the LSTM-cell (single time-step).
Arguments:
da_next -- Gradients of next hidden state, of shape (n_a, m)
dc_next -- Gradients of next cell state, of shape (n_a, m)
cache -- cache storing information from the forward pass
Returns:
gradients -- python dictionary containing:
dxt -- Gradient of input data at time-step t, of shape (n_x, m)
da_prev -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)
dc_prev -- Gradient w.r.t. the previous memory state, of shape (n_a, m)
dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)
dWo -- Gradient w.r.t. the weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)
dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)
dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)
dbo -- Gradient w.r.t. biases of the output gate, of shape (n_a, 1)
"""
# Retrieve information from "cache"
(a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters) = cache
### START CODE HERE ###
# Retrieve dimensions from xt's and a_next's shape (≈2 lines)
n_x, m = xt.shape
n_a, m = a_next.shape
# Compute gate-related derivatives; their values can be found by looking carefully at equations (7) to (10) (≈4 lines)
dot = da_next * np.tanh(c_next) * ot * (1-ot)
dcct = (dc_next * it + ot * (1-np.tanh(c_next)**2) * it * da_next) * (1-cct**2)
dit = (dc_next * cct + ot * (1-np.tanh(c_next)**2) * cct * da_next) * it * (1-it)
dft = (dc_next * c_prev + ot * (1-np.tanh(c_next)**2) * c_prev * da_next) * ft * (1-ft)
# Compute parameters related derivatives. Use equations (11)-(18) (≈8 lines)
dWf = dft @ np.hstack([a_prev.T, xt.T])  # np.hstack([a_prev.T, xt.T]) has shape (m, n_a + n_x)
dWi = dit @ np.hstack([a_prev.T, xt.T])  # here a_prev is (5,10) and xt is (3,10)
dWc = dcct @ np.hstack([a_prev.T, xt.T])
dWo = dot @ np.hstack([a_prev.T, xt.T])
dbf = np.sum((dft), axis=1, keepdims=True)
dbi = np.sum((dit), axis=1, keepdims=True)
dbc = np.sum((dcct), axis=1, keepdims=True)
dbo = np.sum((dot), axis=1, keepdims=True)
# Compute derivatives w.r.t previous hidden state, previous memory state and input. Use equations (19)-(21). (≈3 lines)
da_prev = parameters['Wf'][:,:n_a].T@dft + parameters['Wi'][:,:n_a].T@dit + parameters['Wc'][:,:n_a].T@dcct + parameters['Wo'][:,:n_a].T@dot
dc_prev = dc_next*ft + ot * (1-np.tanh(c_next)**2) * ft*da_next
dxt = parameters['Wf'][:,n_a:].T@dft + parameters['Wi'][:,n_a:].T@dit + parameters['Wc'][:,n_a:].T@dcct + parameters['Wo'][:,n_a:].T@dot
### END CODE HERE ###
# Save gradients in dictionary
gradients = {"dxt": dxt, "da_prev": da_prev, "dc_prev": dc_prev, "dWf": dWf,"dbf": dbf, "dWi": dWi,"dbi": dbi,
"dWc": dWc,"dbc": dbc, "dWo": dWo,"dbo": dbo}
return gradients
np.random.seed(1)
xt_tmp = np.random.randn(3,10)
a_prev_tmp = np.random.randn(5,10)
c_prev_tmp = np.random.randn(5,10)
parameters_tmp = {}
parameters_tmp['Wf'] = np.random.randn(5, 5+3)
parameters_tmp['bf'] = np.random.randn(5,1)
parameters_tmp['Wi'] = np.random.randn(5, 5+3)
parameters_tmp['bi'] = np.random.randn(5,1)
parameters_tmp['Wo'] = np.random.randn(5, 5+3)
parameters_tmp['bo'] = np.random.randn(5,1)
parameters_tmp['Wc'] = np.random.randn(5, 5+3)
parameters_tmp['bc'] = np.random.randn(5,1)
parameters_tmp['Wy'] = np.random.randn(2,5)
parameters_tmp['by'] = np.random.randn(2,1)
a_next_tmp, c_next_tmp, yt_tmp, cache_tmp = lstm_cell_forward(xt_tmp, a_prev_tmp, c_prev_tmp, parameters_tmp)
da_next_tmp = np.random.randn(5,10)
dc_next_tmp = np.random.randn(5,10)
gradients_tmp = lstm_cell_backward(da_next_tmp, dc_next_tmp, cache_tmp)
print("gradients[\"dxt\"][1][2] =", gradients_tmp["dxt"][1][2])
print("gradients[\"dxt\"].shape =", gradients_tmp["dxt"].shape)
print("gradients[\"da_prev\"][2][3] =", gradients_tmp["da_prev"][2][3])
print("gradients[\"da_prev\"].shape =", gradients_tmp["da_prev"].shape)
print("gradients[\"dc_prev\"][2][3] =", gradients_tmp["dc_prev"][2][3])
print("gradients[\"dc_prev\"].shape =", gradients_tmp["dc_prev"].shape)
print("gradients[\"dWf\"][3][1] =", gradients_tmp["dWf"][3][1])
print("gradients[\"dWf\"].shape =", gradients_tmp["dWf"].shape)
print("gradients[\"dWi\"][1][2] =", gradients_tmp["dWi"][1][2])
print("gradients[\"dWi\"].shape =", gradients_tmp["dWi"].shape)
print("gradients[\"dWc\"][3][1] =", gradients_tmp["dWc"][3][1])
print("gradients[\"dWc\"].shape =", gradients_tmp["dWc"].shape)
print("gradients[\"dWo\"][1][2] =", gradients_tmp["dWo"][1][2])
print("gradients[\"dWo\"].shape =", gradients_tmp["dWo"].shape)
print("gradients[\"dbf\"][4] =", gradients_tmp["dbf"][4])
print("gradients[\"dbf\"].shape =", gradients_tmp["dbf"].shape)
print("gradients[\"dbi\"][4] =", gradients_tmp["dbi"][4])
print("gradients[\"dbi\"].shape =", gradients_tmp["dbi"].shape)
print("gradients[\"dbc\"][4] =", gradients_tmp["dbc"][4])
print("gradients[\"dbc\"].shape =", gradients_tmp["dbc"].shape)
print("gradients[\"dbo\"][4] =", gradients_tmp["dbo"][4])
print("gradients[\"dbo\"].shape =", gradients_tmp["dbo"].shape)
###Output
gradients["dxt"][1][2] = 3.23055911511
gradients["dxt"].shape = (3, 10)
gradients["da_prev"][2][3] = -0.0639621419711
gradients["da_prev"].shape = (5, 10)
gradients["dc_prev"][2][3] = 0.797522038797
gradients["dc_prev"].shape = (5, 10)
gradients["dWf"][3][1] = -0.147954838164
gradients["dWf"].shape = (5, 8)
gradients["dWi"][1][2] = 1.05749805523
gradients["dWi"].shape = (5, 8)
gradients["dWc"][3][1] = 2.30456216369
gradients["dWc"].shape = (5, 8)
gradients["dWo"][1][2] = 0.331311595289
gradients["dWo"].shape = (5, 8)
gradients["dbf"][4] = [ 0.18864637]
gradients["dbf"].shape = (5, 1)
gradients["dbi"][4] = [-0.40142491]
gradients["dbi"].shape = (5, 1)
gradients["dbc"][4] = [ 0.25587763]
gradients["dbc"].shape = (5, 1)
gradients["dbo"][4] = [ 0.13893342]
gradients["dbo"].shape = (5, 1)
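###Markdown
Similarly (an addition, not part of the original assignment), equations (7)-(18) can be spot-checked numerically; here we verify `dbo` with a centered finite difference of the implicit scalar objective.
###Code
# Hypothetical gradient check (added): bo only influences a_next (through the output
# gate), so d/d(bo[i]) of sum(da_next*a_next) + sum(dc_next*c_next) should equal dbo[i].
def _scalar(p):
    a, c, _, _ = lstm_cell_forward(xt_tmp, a_prev_tmp, c_prev_tmp, p)
    return np.sum(da_next_tmp * a) + np.sum(dc_next_tmp * c)
eps = 1e-7
i = 4
parameters_tmp['bo'][i, 0] += eps
plus = _scalar(parameters_tmp)
parameters_tmp['bo'][i, 0] -= 2 * eps
minus = _scalar(parameters_tmp)
parameters_tmp['bo'][i, 0] += eps  # restore the original parameter
assert np.isclose((plus - minus) / (2 * eps), gradients_tmp["dbo"][i, 0], rtol=1e-4, atol=1e-6)
###Output
_____no_output_____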
###Markdown
**Expected Output**:

| Variable | Value |
|---|---|
| **gradients["dxt"][1][2]** | 3.23055911511 |
| **gradients["dxt"].shape** | (3, 10) |
| **gradients["da_prev"][2][3]** | -0.0639621419711 |
| **gradients["da_prev"].shape** | (5, 10) |
| **gradients["dc_prev"][2][3]** | 0.797522038797 |
| **gradients["dc_prev"].shape** | (5, 10) |
| **gradients["dWf"][3][1]** | -0.147954838164 |
| **gradients["dWf"].shape** | (5, 8) |
| **gradients["dWi"][1][2]** | 1.05749805523 |
| **gradients["dWi"].shape** | (5, 8) |
| **gradients["dWc"][3][1]** | 2.30456216369 |
| **gradients["dWc"].shape** | (5, 8) |
| **gradients["dWo"][1][2]** | 0.331311595289 |
| **gradients["dWo"].shape** | (5, 8) |
| **gradients["dbf"][4]** | [ 0.18864637] |
| **gradients["dbf"].shape** | (5, 1) |
| **gradients["dbi"][4]** | [-0.40142491] |
| **gradients["dbi"].shape** | (5, 1) |
| **gradients["dbc"][4]** | [ 0.25587763] |
| **gradients["dbc"].shape** | (5, 1) |
| **gradients["dbo"][4]** | [ 0.13893342] |
| **gradients["dbo"].shape** | (5, 1) |

3.3 Backward pass through the LSTM RNN

This part is very similar to the `rnn_backward` function you implemented above. You first create variables of the same dimensions as your return variables, then iterate over all the time steps starting from the end, calling the one-step function you implemented for the LSTM at each iteration. You then update the parameter gradients by summing them individually. Finally, you return a dictionary with the new gradients.

**Instructions**: Implement the `lstm_backward` function. Create a for loop starting from $T_x$ and going backward. At each step call `lstm_cell_backward` and update your old gradients by adding the new gradients to them. Note that `dxt` is not accumulated but stored.
###Code
def lstm_backward(da, caches):
"""
Implement the backward pass for the RNN with LSTM-cell (over a whole sequence).
Arguments:
da -- Gradients w.r.t the hidden states, numpy-array of shape (n_a, m, T_x)
caches -- cache storing information from the forward pass (lstm_forward)
Returns:
gradients -- python dictionary containing:
dx -- Gradient of inputs, of shape (n_x, m, T_x)
da0 -- Gradient w.r.t. the initial hidden state, numpy array of shape (n_a, m)
dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)
dWo -- Gradient w.r.t. the weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)
dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)
dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)
dbo -- Gradient w.r.t. biases of the output gate, of shape (n_a, 1)
"""
# Retrieve values from the first cache (t=1) of caches.
(caches, x) = caches
(a1, c1, a0, c0, f1, i1, cc1, o1, x1, parameters) = caches[0]
### START CODE HERE ###
# Retrieve dimensions from da's and x1's shapes (≈2 lines)
n_a, m, T_x = da.shape
n_x, m = x1.shape
# initialize the gradients with the right sizes (≈12 lines)
dx = np.zeros((n_x,m,T_x))
da0 = np.zeros((n_a,m))
da_prevt = np.zeros_like(da0)
dc_prevt = np.zeros_like(da0)
dWf = np.zeros((n_a,n_a+n_x))
dWi = np.zeros((n_a,n_a+n_x))
dWc = np.zeros((n_a,n_a+n_x))
dWo = np.zeros((n_a,n_a+n_x))
dbf = np.zeros((n_a,1))
dbi = np.zeros((n_a,1))
dbc = np.zeros((n_a,1))
dbo = np.zeros((n_a,1))
# loop back over the whole sequence
for t in reversed(range(T_x)):
# Compute all gradients using lstm_cell_backward
gradients = lstm_cell_backward(da[:,:,t]+da_prevt, dc_prevt, caches[t])
# Store or add the gradient to the parameters' previous step's gradient
da_prevt = gradients['da_prev']
dc_prevt = gradients['dc_prev']
dx[:,:,t] = gradients['dxt']
dWf += gradients['dWf']
dWi += gradients['dWi']
dWc += gradients['dWc']
dWo += gradients['dWo']
dbf += gradients['dbf']
dbi += gradients['dbi']
dbc += gradients['dbc']
dbo += gradients['dbo']
# Set the first activation's gradient to the backpropagated gradient da_prev.
da0 = gradients['da_prev']
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dx": dx, "da0": da0, "dWf": dWf,"dbf": dbf, "dWi": dWi,"dbi": dbi,
"dWc": dWc,"dbc": dbc, "dWo": dWo,"dbo": dbo}
return gradients
np.random.seed(1)
x_tmp = np.random.randn(3,10,7)
a0_tmp = np.random.randn(5,10)
parameters_tmp = {}
parameters_tmp['Wf'] = np.random.randn(5, 5+3)
parameters_tmp['bf'] = np.random.randn(5,1)
parameters_tmp['Wi'] = np.random.randn(5, 5+3)
parameters_tmp['bi'] = np.random.randn(5,1)
parameters_tmp['Wo'] = np.random.randn(5, 5+3)
parameters_tmp['bo'] = np.random.randn(5,1)
parameters_tmp['Wc'] = np.random.randn(5, 5+3)
parameters_tmp['bc'] = np.random.randn(5,1)
parameters_tmp['Wy'] = np.zeros((2,5)) # unused, but needed for lstm_forward
parameters_tmp['by'] = np.zeros((2,1)) # unused, but needed for lstm_forward
a_tmp, y_tmp, c_tmp, caches_tmp = lstm_forward(x_tmp, a0_tmp, parameters_tmp)
da_tmp = np.random.randn(5, 10, 4)  # note: da covers only the first 4 of the 7 forward time steps
gradients_tmp = lstm_backward(da_tmp, caches_tmp)
print("gradients[\"dx\"][1][2] =", gradients_tmp["dx"][1][2])
print("gradients[\"dx\"].shape =", gradients_tmp["dx"].shape)
print("gradients[\"da0\"][2][3] =", gradients_tmp["da0"][2][3])
print("gradients[\"da0\"].shape =", gradients_tmp["da0"].shape)
print("gradients[\"dWf\"][3][1] =", gradients_tmp["dWf"][3][1])
print("gradients[\"dWf\"].shape =", gradients_tmp["dWf"].shape)
print("gradients[\"dWi\"][1][2] =", gradients_tmp["dWi"][1][2])
print("gradients[\"dWi\"].shape =", gradients_tmp["dWi"].shape)
print("gradients[\"dWc\"][3][1] =", gradients_tmp["dWc"][3][1])
print("gradients[\"dWc\"].shape =", gradients_tmp["dWc"].shape)
print("gradients[\"dWo\"][1][2] =", gradients_tmp["dWo"][1][2])
print("gradients[\"dWo\"].shape =", gradients_tmp["dWo"].shape)
print("gradients[\"dbf\"][4] =", gradients_tmp["dbf"][4])
print("gradients[\"dbf\"].shape =", gradients_tmp["dbf"].shape)
print("gradients[\"dbi\"][4] =", gradients_tmp["dbi"][4])
print("gradients[\"dbi\"].shape =", gradients_tmp["dbi"].shape)
print("gradients[\"dbc\"][4] =", gradients_tmp["dbc"][4])
print("gradients[\"dbc\"].shape =", gradients_tmp["dbc"].shape)
print("gradients[\"dbo\"][4] =", gradients_tmp["dbo"][4])
print("gradients[\"dbo\"].shape =", gradients_tmp["dbo"].shape)
###Output
gradients["dx"][1][2] = [ 0.00218254 0.28205375 -0.48292508 -0.43281115]
gradients["dx"].shape = (3, 10, 4)
gradients["da0"][2][3] = 0.312770310257
gradients["da0"].shape = (5, 10)
gradients["dWf"][3][1] = -0.0809802310938
gradients["dWf"].shape = (5, 8)
gradients["dWi"][1][2] = 0.40512433093
gradients["dWi"].shape = (5, 8)
gradients["dWc"][3][1] = -0.0793746735512
gradients["dWc"].shape = (5, 8)
gradients["dWo"][1][2] = 0.038948775763
gradients["dWo"].shape = (5, 8)
gradients["dbf"][4] = [-0.15745657]
gradients["dbf"].shape = (5, 1)
gradients["dbi"][4] = [-0.50848333]
gradients["dbi"].shape = (5, 1)
gradients["dbc"][4] = [-0.42510818]
gradients["dbc"].shape = (5, 1)
gradients["dbo"][4] = [-0.17958196]
gradients["dbo"].shape = (5, 1)
|
notebooks/05-conditionals.ipynb | ###Markdown
Making Choices
###Code
num = 37
if num > 100:
print('greater')
else:
print('not greater')
print('Done')
num = 57
print('Before conditional')
if num > 100:
print( num, 'is greater than 100')
print('...after conditional')
num = 101
print('Before conditional')
if num > 100:
print( num, 'is greater than 100')
print('...after conditional')
num = -3
if num > 0:
print(num, 'is positive')
elif num == 0:
print(num, 'is zero')
else:
print(num, 'is negative')
if (1>0) and (-1>0):
print('both parts are true')
else:
print('at least one part is false')
if (1>0) or (-1>0):
print('at least one part is true')
if '': # an empty string is falsy, so nothing is printed
print('empty string')
if 0: # zero is falsy, so nothing is printed either
print('zero is true')
import numpy as np
data = np.loadtxt(fname='data/inflammation-03.csv',delimiter=',')
max_inflammation_0 = np.max(data, axis=0)[0]
max_inflammation_20 = np.max(data, axis=0)[20]
if max_inflammation_0 == 0 and max_inflammation_20 == 20:
print('Suspicious looking data')
if np.sum(np.min(data, axis=0))==0:
print('Minima add up to zero')
data = np.loadtxt(fname='data/inflammation-01.csv',delimiter=',')
max_inflammation_0 = np.max(data, axis=0)[0]
max_inflammation_20 = np.max(data, axis=0)[20]
if max_inflammation_0 == 0 and max_inflammation_20 == 20:
print('Suspicious looking data')
elif np.sum(np.min(data, axis=0)) == 0:
print('Minima add up to zero')
else:
print('Seems OK!')
###Output
Suspicious looking data
|
all_dawgs_go_to_heaven/all_dawgs_go_to_heaven.ipynb | ###Markdown
All dawgs go to heaven, but only some go to national championships

This notebook walks through my analysis of the live mascots used at the University of Washington and their winning percentages for football across their careers.

The data used in this analysis comes from:
* [The Seattle Times](https://www.seattletimes.com/sports/uw-husky-football/woo-woos-for-a-weary-world-uws-live-mascot-dubs-ii-spreads-cute-dog-content-to-the-masses/)
* [Go Huskies](https://gohuskies.com/sports/2013/4/18/208229209.aspx)
* [College Football Data API](https://api.collegefootballdata.com/api/docs/?url=/api-docs.json)

------------- Setup
###Code
import pandas as pd
import requests
import matplotlib.pyplot as plt
from matplotlib import rcParams
import matplotlib.ticker as mtick
rcParams.update({'figure.autolayout': True})
###Output
_____no_output_____
###Markdown
Read in data about mascot history:
###Code
mascot_data = pd.read_csv("mascot_data.csv")
mascot_data.head()
###Output
_____no_output_____
###Markdown
--------------- Data preparation. We can calculate the number of seasons each mascot was active for by taking the difference between its start and end years:
###Code
mascot_data["seasons"] = mascot_data["end_date"] - mascot_data["start_date"]
mascot_data.head()
###Output
_____no_output_____
###Markdown
I have to manually code the national championships because some sources claim the '91 title was shared (the reason you'll never see any Dawgs cheering for Miami...)
###Code
national_championships = [1960, 1984, 1990, 1991]
###Output
_____no_output_____
###Markdown
For each year we make an API call to the College Football Database to get the number of wins for that season:
###Code
win_pcts = []
total_wins = []
colors = []
for index, row in mascot_data.iterrows():
wins = 0
total_games = 0
seasons = list(range(row["start_date"], row["end_date"] + 1))
color = "#363C74" # UW purple
for season in seasons:
if season in national_championships:
color = "#E8D3A2" # use UW gold for years in which a national championship was won
parameters = {"year": season,"seasonType": "regular", "team": "Washington"}
response = requests.get("https://api.collegefootballdata.com/games", params=parameters)
games = response.json()
for game in games:
total_games += 1
# check whether UW won as either the home or away team, and increment the win counter if so
if (game["home_team"] == "Washington" and game["home_points"] > game["away_points"]) or (game["away_team"] == "Washington" and game["away_points"] > game["home_points"]):
wins += 1
win_pcts.append(wins / total_games)
colors.append(color)
total_wins.append(wins)
mascot_data["win_percentages"] = win_pcts
mascot_data["colors"] = colors
mascot_data.head()
###Output
_____no_output_____
###Markdown
We can then do some string manipulation to make the dates format nicely:
###Code
mascot_data.loc[mascot_data["mascot_name"] == "Dubs II", "end_date"] = "current"
mascot_data["labels"] = mascot_data["mascot_name"] + "\n (" + mascot_data["start_date"].apply(str) + "-" + mascot_data["end_date"].apply(str) + ")"
###Output
_____no_output_____
###Markdown
We can convert the `win_percentages` to a proper percentage:
###Code
mascot_data["win_percentages"] = mascot_data["win_percentages"] * 100.0
###Output
_____no_output_____
###Markdown
---------- Data visualizationWe can finally create a bar plot for all the mascots using `matplotlib`:
###Code
plt.rcParams.update({'font.size': 14})
fig = plt.figure(figsize=(14,8))
ax = fig.add_subplot(1,1,1)
plt.bar(x=mascot_data.index, height=mascot_data.win_percentages, color=mascot_data.colors)
plt.xticks(mascot_data.index, mascot_data.labels, fontsize=14)
plt.title("Win Percentages for UW Live Mascots", fontsize=20)
plt.xticks(rotation=90)
plt.ylabel("Win Percentage", fontsize=14)
fmt = '%.f%%' # Format you want the ticks, e.g. '40%'
yticks = mtick.FormatStrFormatter(fmt)
ax.yaxis.set_major_formatter(yticks)
plt.savefig("all_dawgs_go_to_heaven.jpg")
plt.show()
###Output
_____no_output_____ |
_pages/Language/R/src/mpg.ipynb | ###Markdown
**[MPG]** Fuel-economy data provided by the ggplot2 package
###Code
install.packages("ggplot2")
library(ggplot2)
head(mpg)
mpg1 <- mpg
avg_mileage <- (mpg1$cty + mpg1$hwy) / 2
avg_mileage
length(avg_mileage)
mpg1$avg_mileage <- avg_mileage
names(mpg1)
summary(mpg1$avg_mileage)
round(mean(mpg1$avg_mileage),3)
table(mpg1$avg_mileage)
hist(mpg1$avg_mileage)
qplot(mpg1$avg_mileage)
hist(mpg1$avg_mileage, xlim=c(0,50), ylim=c(0,100), main="test", border=2)
mpg1$test_result <- ifelse(mpg1$avg_mileage >= 20, "pass", "fail")
head(mpg1,10)
mpg1$grade <- ifelse(mpg1$avg_mileage >= 30, "A", ifelse(mpg1$avg_mileage >=20, "B","C"))
table(mpg1$grade)
###Output
_____no_output_____ |
docs/_downloads/8601ed4b358859cbe0d851d36ba6feb3/plot_projected_density.ipynb | ###Markdown
Plotting projected density obtained from vasp_dos

This example shows how to plot the projected density of states.
###Code
import os
import numpy as np
import matplotlib.pyplot as plt
from pdos_overlap.vasp_dos import get_example_data
from pdos_overlap.vasp_dos import VASP_DOS
from pdos_overlap.plotting_tools import set_figure_settings
###Output
_____no_output_____
###Markdown
Load DOSCAR file
----------------
First we will get the example data, load a DOSCAR file, and use it to instantiate a VASP_DOS object.
###Code
set_figure_settings('paper')
example_path = get_example_data()
DOSCAR = os.path.join(example_path, 'C2H4/DOSCAR')
PDOS = VASP_DOS(DOSCAR)
###Output
_____no_output_____
###Markdown
Obtain projected density
------------------------
We get the site and spin orbital projected density. We sum the individual spin orbital densities to get energy sub-level site projected densities.
###Code
orbitals, projected_density = PDOS.get_site_dos(atom_indices=np.arange(-6,0)\
, orbital_list=['s', 'p', 'd']\
, sum_density = True)
###Output
_____no_output_____
###Markdown
Plot projected density
----------------------
We plot the projected density with the Fermi level indicated.
###Code
plt.figure(figsize=(3,3))
colors = ['b','g','r']
zorder = [2,3,4]
for count, density in enumerate(projected_density):
plt.plot(density, PDOS.get_energies(), colors[count], zorder=zorder[count])
plt.plot([np.min(projected_density), np.max(projected_density)]\
,[PDOS.e_fermi, PDOS.e_fermi],'k--', zorder=1, linewidth=5)
plt.legend([i for i in orbitals]+ ['fermi level'])
plt.xlabel('State density')
plt.ylabel('Energy [eV]')
plt.show()
###Output
_____no_output_____ |
dataScience/02DataMining/02_Data-Computation-Analysis-Visualization/2-6_Cleaning_Data.ipynb | ###Markdown
Cleaning Data

Cleaning and processing data is usually a very important step as well; this section covers that topic.
###Code
# The usual preamble
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
# Make the graphs a bit prettier, and bigger
pd.set_option('display.mpl_style', 'default')
plt.rcParams['figure.figsize'] = (15, 5)
plt.rcParams['font.family'] = 'sans-serif'
# This is necessary to show lots of columns in pandas 0.12.
# Not necessary in pandas 0.13.
pd.set_option('display.width', 5000)
pd.set_option('display.max_columns', 60)
###Output
/opt/anaconda3/envs/python27/lib/python2.7/site-packages/IPython/core/interactiveshell.py:2881: FutureWarning:
mpl_style had been deprecated and will be removed in a future version.
Use `matplotlib.pyplot.style.use` instead.
exec(code_obj, self.user_global_ns, self.user_ns)
###Markdown
What kind of data counts as dirty or problematic? Let's take a look using the NYC 311 service-request data: the dataset is not small, and there are indeed a few things in it worth cleaning up.
###Code
requests = pd.read_csv('./data/311-service-requests.csv')
###Output
_____no_output_____
###Markdown
6.1 How do we find dirty data?

There is no particularly good method; we still have to pull some of the data out and look at it. For example, here we notice that the zip-code field may have problems.

One thing worth mentioning is that the `.unique()` function is very handy: we can list every zip code that has ever appeared (and perhaps look at the distribution later?), which may give us some ideas. Below we put `unique()` to work, and you'll find that problems really do exist, for example:

* why are most values parsed as numbers, while some are parsed as strings?
* lots of missing values (`nan`)
* inconsistent formats: some are `29616-0759`, others are `83`
* some values pandas doesn't recognize, such as 'N/A' or 'NO CLUE'

So what can we do?

* normalize 'N/A' and 'NO CLUE' into the missing-value "bucket"
* figure out what 83 is, and then decide how to handle it
* unify everything by treating all zip codes as strings
###Code
requests['Incident Zip'].unique()
###Output
_____no_output_____
###Markdown
6.3 Handling missing values and the string/float mess

When reading the data with `pd.read_csv`, we can pass it a `na_values` list to clean up part of the dirty data up front, and we can also explicitly require that the zip code column be kept as strings instead of being parsed into numbers.
###Code
na_values = ['NO CLUE', 'N/A', '0']
requests = pd.read_csv('./data/311-service-requests.csv', na_values=na_values, dtype={'Incident Zip': str})
requests['Incident Zip'].unique()
###Output
_____no_output_____
###Markdown
6.4 What on earth are those zip codes joined with a '-'?
###Code
rows_with_dashes = requests['Incident Zip'].str.contains('-').fillna(False)
rows_with_dashes.value_counts()
###Output
_____no_output_____
###Markdown
Really annoying; but there are actually only 5 of them, so let's print them out and see what they are.
###Code
requests[rows_with_dashes]
###Output
_____no_output_____
###Markdown
There are only 5, so the original plan was to simply set them all to the missing value (nan): `requests['Incident Zip'][rows_with_dashes] = np.nan`. But after looking into it, it seems the first 5 digits are probably the real zip codes, so let's just truncate to those instead.
###Code
long_zip_codes = requests['Incident Zip'].str.len() > 5
requests['Incident Zip'][long_zip_codes].value_counts()
requests['Incident Zip'] = requests['Incident Zip'].str.slice(0, 5)
###Output
_____no_output_____
###Markdown
Done! Well... after checking, it turns out 00000 isn't a valid US or Canadian zip code at all, so it can't be handled that way; those values really do need to be reset to missing.
###Code
len(requests[requests['Incident Zip'] == '00000'])
zero_zips = requests['Incident Zip'] == '00000'
requests.loc[zero_zips, 'Incident Zip'] = np.nan
###Output
_____no_output_____
###Markdown
All done!! Let's take another look at what the data looks like now.
###Code
unique_zips = requests['Incident Zip'].unique()
unique_zips.sort()
unique_zips
###Output
_____no_output_____
###Markdown
It looks much cleaner. But are we really done?
###Code
zips = requests['Incident Zip']
# is_close: zip codes starting with 0 or 1, which look plausible
is_close = zips.str.startswith('0') | zips.str.startswith('1')
# is_far: non-missing zip codes that do not start with 0 or 1, which are suspicious
is_far = ~(is_close) & zips.notnull()
zips[is_far].value_counts()
###Output
_____no_output_____
###Markdown
We can sort them and then print out the corresponding fields:
###Code
requests[is_far][['Incident Zip', 'Descriptor', 'City']].sort_values(by=['Incident Zip'])
###Output
_____no_output_____
###Markdown
Ahem, it suddenly occurs to me that the big pile of work we just did really only shows one way to process and fill in the data. In practice you'll notice that simply matching on City would already let us fill in quite a few of these values. Summary: so, to wrap up, here is how we cleaned the zip-code field:
###Code
# The usual preamble
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
# Make the graphs a bit prettier, and bigger
pd.set_option('display.mpl_style', 'default')
plt.rcParams['figure.figsize'] = (15, 5)
plt.rcParams['font.family'] = 'sans-serif'
# This is necessary to show lots of columns in pandas 0.12.
# Not necessary in pandas 0.13.
pd.set_option('display.width', 5000)
pd.set_option('display.max_columns', 60)
def fix_zip_codes(zips):
# Truncate everything to length 5
zips = zips.str.slice(0, 5)
# Set 00000 zip codes to nan
zero_zips = zips == '00000'
zips[zero_zips] = np.nan
print "'00000' values changed to NaN: {}".format(len(zips[zero_zips]))
return zips
# Load data
na_values = ['NO CLUE', 'N/A', '0']
requests = pd.read_csv('./data/311-service-requests.csv',
na_values=na_values,
dtype={'Incident Zip': str})
# Preprocessing
requests['Incident Zip'] = fix_zip_codes(requests['Incident Zip'])
# Print
requests['Incident Zip'].unique()
def merge_zips_by_city(requests):
# group records by City
requests['City'] = requests['City'].str.upper()
city_zips = requests[['Incident Zip', 'City']].groupby('City')\
.aggregate('first').reset_index(drop=False)
city_zips.rename(columns={'Incident Zip':'Normal_zips'}, inplace=True)
# extract the abnormal zip values
zips = requests['Incident Zip']
# missing values
is_null = zips.isnull()
# is_close: zips starting with 0 or 1, which look plausible
is_close = zips.str.startswith('0') | zips.str.startswith('1')
# is_far: non-missing zips that do not start with 0 or 1, considered suspicious
is_far = ~(is_close | is_null)
print "All data: {}".format(len(zips))
print "Nan:{} + Normal:{} + Abnormal:{} = {}".format(
len(zips[is_null]), len(zips[is_close]), len(zips[is_far]),
len(zips[is_null])+len(zips[is_close])+len(zips[is_far])
)
# get the abnormal-zip columns
abnormal_zips = requests[is_far][['Incident Zip', 'City']]
abnormal_zips.rename(columns={'Incident Zip':'Abnormal_zips'}, inplace=True)
# merge on the shared City column
abnormal_zips_res = abnormal_zips.merge(city_zips)
return abnormal_zips_res
merge_zips_by_city(requests)
###Output
All data: 111069
Nan:12265 + Normal:98791 + Abnormal:13 = 111069
|
_build/html/_sources/notebooks/05/bilinear-relaxations.ipynb | ###Markdown
McCormick Envelopes

McCormick Envelope

Let $w = xy$ with upper and lower bounds on $x$ and $y$:

$$
\begin{align*}
x_1 \leq x \leq x_2 \\
y_1 \leq y \leq y_2
\end{align*}
$$

The "McCormick envelope" is a convex region satisfying the constraints

$$
\begin{align*}
w & \geq y_1 x + x_1 y - x_1 y_1 \\
w & \geq y_2 x + x_2 y - x_2 y_2 \\
w & \leq y_2 x + x_1 y - x_1 y_2 \\
w & \leq y_1 x + x_2 y - x_2 y_1
\end{align*}
$$

The following cells attempt to illustrate how this works.
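A quick numerical sanity check (an addition, not part of the original notebook): sample points in the box and confirm that $w = xy$ always lies between the two under-estimators and the two over-estimators. The bounds used here are illustrative, chosen to match the contour plot below.
###Code
# Added sanity check: w = x*y must satisfy all four McCormick inequalities
# at every point of the box [x1, x2] x [y1, y2].
import numpy as np
rng = np.random.default_rng(0)
x1, x2, y1, y2 = 0.5, 10.0, 0.5, 10.0          # assumed (illustrative) bounds
x = rng.uniform(x1, x2, 10000)
y = rng.uniform(y1, y2, 10000)
w = x * y
lower = np.maximum(y1*x + x1*y - x1*y1, y2*x + x2*y - x2*y2)   # under-estimators
upper = np.minimum(y2*x + x1*y - x1*y2, y1*x + x2*y - x2*y1)   # over-estimators
assert (lower <= w + 1e-9).all() and (w <= upper + 1e-9).all()
###Output
_____no_output_____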
###Code
import matplotlib.pyplot as plt
from matplotlib import cm
import numpy as np
from mpl_toolkits import mplot3d
import mpl_toolkits.mplot3d.art3d as art3d
from matplotlib.patches import Rectangle
from mpl_toolkits.mplot3d import axes3d
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
from matplotlib import style
n = 10
x1, x2 = 0.5, 10
y1, y2 = 0.5, 10
X, Y = np.meshgrid(np.linspace(x1, x2, n+1), np.linspace(y1, y2, n+1))
fig, ax = plt.subplots()
cp = ax.contourf(X, Y, X*Y, cmap=cm.cool, levels=n)
fig.colorbar(cp)
ax.axis('equal')
ax.set_xlim(0, x2 + x1)
ax.set_ylim(0, y2 + y1)
ax.plot([x1, x1, x2, x2, x1], [y1, y2, y2, y1, y1])
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_title('Bilinear function x*y')
import matplotlib.pyplot as plt
from matplotlib import cm
import numpy as np
from mpl_toolkits import mplot3d
import mpl_toolkits.mplot3d.art3d as art3d
from matplotlib.patches import Rectangle
from mpl_toolkits.mplot3d import axes3d
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
from matplotlib import style
n = 10
x1, x2 = 0, 1
y1, y2 = 0, 1
X, Y = np.meshgrid(np.linspace(x1, x2, n+1), np.linspace(y1, y2, n+1))
fig, ax = plt.subplots(1, 1, subplot_kw={"projection": "3d"}, figsize=(10,10))
# surface plot
ax.plot_surface(X, Y, X*Y, alpha=1, cmap=cm.cool)
ax.plot_wireframe(X, Y, X*Y, lw=.3)
# annotate axis
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('w = x * y')
ax.view_init(elev=20, azim=-10)
# corner points (clockwise a -> b -> c -> d -> a)
a = np.array([x1, y1, x1*y1])
b = np.array([x1, y2, x1*y2])
c = np.array([x2, y2, x2*y2])
d = np.array([x2, y1, x2*y1])
def plot_line(a, b, color='r'):
ax.plot3D([a[0], b[0]], [a[1], b[1]], [a[2], b[2]], lw=4, color=color, solid_capstyle="round")
# four edges
plot_line(a, b)
plot_line(b, c)
plot_line(c, d)
plot_line(d, a)
# catty corners
plot_line(b, d)
plot_line(a, c)
def show_surf(a, b, c):
x = np.array([a[0], b[0], c[0]])
y = np.array([a[1], b[1], c[1]])
z = np.array([a[2], b[2], c[2]])
ax.plot_trisurf(x, y, z, alpha=0.2)
show_surf(a, b, c)
show_surf(a, b, d)
show_surf(a, c, d)
show_surf(b, c, d)
plot_line([x1, y1, 0], a, 'k')
plot_line([x1, y2, 0], b, 'k')
plot_line([x2, y2, 0], c, 'k')
plot_line([x2, y1, 0], d, 'k')
###Output
_____no_output_____ |
Module1/Lesson_2_Data_Types.ipynb | ###Markdown
**Data Types Revisited**

We'll now take a closer look at all the possible data types in Python so that we can stop worrying about them once and for all.

**Identifiers**

An identifier is simply the name we give to a certain variable. For example, if we were to type ``a=1`` into Python, we'd be defining a variable of type ``int`` and giving it the identifier ``a``.

There are rules which establish what we can and can't use as a valid identifier in Python. Essentially we want a sequence of alphanumeric characters (we can actually use all UTF-8 characters but please, don't) where the first must be a letter or an underscore. It is generally a bad idea to start the name of a variable with an underscore, since these names are often reserved by Python to indicate special functions pertaining to classes (we'll cover them later on). Another rule is that variable identifiers cannot match any of Python's protected keywords, such as ``if``, ``for``, ``break``, ``return``, and so on.

**Numerical data**

We'll now give a comprehensive list of the basic built-in mathematical operators that Python offers:

* ``x+y`` addition of x and y
* ``x-y`` change of sign / subtraction of y from x
* ``x*y`` multiplication of x and y
* ``x**y`` x to the power of y, same as ``pow(x,y)``
* ``x/y`` x divided by y
* ``x//y`` floor division (returns an ``int`` when both operands are ``int``)
* ``x % y`` x modulo y (the remainder of x divided by y; works on both ``int`` and ``float``)
* ``abs(x)`` absolute value of x
* ``x+=y`` is shorthand for ``x = x+y``
* ``x-=y``
* ``x*=y``
* ``x/=y``

A short demonstration follows at the top of the next code cell, after which we turn to strings.

**Strings**

Strings offer us a practical way of transitioning from basic data types to collections of data. In fact, to the eyes of Python a string is nothing but a sequence or list of characters! Let's start with the basics. A string in Python is any sequence of UTF-8 characters delimited by single or double quotation marks:
###Code
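# A quick demonstration of the numerical operators listed above (added; illustrative values):
x, y = 7, 3
x + y, x - y, x * y, x ** y    # -> (10, 4, 21, 343)
x / y, x // y, x % y           # -> (2.333..., 2, 1): true division, floor division, remainder
x += y                         # shorthand for x = x + y; x is now 10
abs(-x)                        # -> 10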
a = "This is a string!"
b = 'This is also a string!'
c = " even 'this' is a string!"
###Output
_____no_output_____
###Markdown
The only important thing is that the two delimiters must be the same. We can access single characters within a string as we would elements in a list. For example:
###Code
len(a)
a[0], a[2], a[5]
###Output
_____no_output_____
###Markdown
Now we introduce some indexing techniques that are often very useful when dealing with aggregate data. All lists in python (as well as arrays, which we will encounter later on) can be accessed from the start, using increasing indices starting from 0, or from the end, using decreasing indices starting from -1. So:
###Code
a[16],a[-1]
###Output
_____no_output_____
###Markdown
We can also access sections of a list via index **slicing**, with the general syntax: ```list[start:end:step]``` We can omit any of these, and they take sensible default values: ``0`` for start, the length of the sequence for end, and ``1`` for step. So, for example:
###Code
a[0:4:1]
a[0:4]
a[:4]
a[0::2]
a[::2]
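a[::-1]  # an extra illustration (added): a negative step walks backwards, reversing the string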
###Output
_____no_output_____
###Markdown
Experiment with these, it takes some getting used to. **Collection Data Types**We'll now discuss data types which consist of a collection of data elements of different classes. We'll start by briefly going back to lists and tuples to then move on to a new, very important, Python data type: the Dictionary. **Sequences**Sequence types are all of those data classes that support the ``len()``method, are ``iterable``(we'll see shortly what this exactly means) and can be sliced using the ``[]``operator. These are essentially ``str``,``tuples``and ``lists``(plus another couple variants of these that are not very commonly used, being ``bytearray``and ``bytes``. As we've already discussed, a ``tuple``is a sequence of data that cannot be modified once it is defined, the sintax for defining a ``tuple``is as follows:
###Code
a = 12
mytuple = (1,a,"hello!!")
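# Added illustration: tuples are immutable, so item assignment raises a TypeError.
# mytuple[0] = 99   # uncommenting this line would raise: TypeError
mytuple[2]          # -> 'hello!!'; elements are read with [] just like lists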
###Output
_____no_output_____
###Markdown
Lists are very similar to tuples, but unlike tuples, we can modify a list once it has been created, meaning we can replace, add, or remove items from it. Lists' and tuples' elements can be accessed via the slicing operator ``[]`` exactly in the same way as strings' characters. Here's a list of useful methods which apply to lists (in all that follows, ``myList`` is a ``list``-type object):

* ``myList.append(x)`` appends ``x`` at the end of the list
* ``myList.count(x)`` counts the number of occurrences of the element ``x`` in the list
* ``myList.remove(x)`` removes the earliest occurrence of item ``x`` from the list
* ``myList.pop(i)`` returns and removes the element at index ``i``
* ``myList.insert(i,x)`` inserts item ``x`` at index ``i`` in the list

For example:
###Code
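# A short demonstration of the list methods above (added; illustrative values):
myList = [1, 2, 3]
myList.append(4)       # [1, 2, 3, 4]
myList.insert(0, 0)    # [0, 1, 2, 3, 4]
myList.remove(2)       # [0, 1, 3, 4]
myList.pop(1)          # returns 1 -> [0, 3, 4]
myList.count(3)        # -> 1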
###Output
_____no_output_____ |
docs_src/utils.mem.ipynb | ###Markdown
Memory management utils

Utility functions for memory management. Currently primarily for GPU.
###Code
from fastai.gen_doc.nbdoc import *
from fastai.utils.mem import *
show_doc(gpu_mem_get)
###Output
_____no_output_____
###Markdown
[`gpu_mem_get`](/utils.mem.htmlgpu_mem_get)* for gpu returns `GPUMemory(total, used, free)`* for cpu returns `GPUMemory(0, 0, 0)`* for invalid gpu id returns `GPUMemory(0, 0, 0)`
###Code
show_doc(gpu_mem_get_all)
###Output
_____no_output_____
###Markdown
[`gpu_mem_get_all`](/utils.mem.htmlgpu_mem_get_all)* for gpu returns `[ GPUMemory(total_0, used_0, free_0), GPUMemory(total_1, used_1, free_1), .... ]`* for cpu returns `[]`
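For example, a quick sketch (illustrative, added here) printing the memory stats of every visible gpu:
```
for gpu_id, mem in enumerate(gpu_mem_get_all()):
    print(gpu_id, mem.total, mem.used, mem.free)
```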
###Code
show_doc(gpu_mem_get_free_no_cache)
###Output
_____no_output_____
###Markdown
[`gpu_mem_get_free_no_cache`](/utils.mem.htmlgpu_mem_get_free_no_cache)
###Code
show_doc(gpu_mem_get_used_no_cache)
###Output
_____no_output_____
###Markdown
[`gpu_mem_get_used_no_cache`](/utils.mem.htmlgpu_mem_get_used_no_cache)
###Code
show_doc(gpu_mem_get_used_fast)
###Output
_____no_output_____
###Markdown
[`gpu_mem_get_used_fast`](/utils.mem.htmlgpu_mem_get_used_fast)
###Code
show_doc(gpu_with_max_free_mem)
###Output
_____no_output_____
###Markdown
[`gpu_with_max_free_mem`](/utils.mem.htmlgpu_with_max_free_mem):* for gpu returns: `gpu_with_max_free_ram_id, its_free_ram`* for cpu returns: `None, 0`
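A possible usage sketch (illustrative, added here; assumes at least one visible CUDA device):
```
gpu_id, free_ram = gpu_with_max_free_mem()
if gpu_id is not None:
    torch.cuda.set_device(gpu_id)  # run on the gpu with the most free memory
```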
###Code
show_doc(preload_pytorch)
###Output
_____no_output_____
###Markdown
[`preload_pytorch`](/utils.mem.htmlpreload_pytorch) is helpful when GPU memory is being measured, since the first time any operation on `cuda` is performed by pytorch, usually about 0.5GB gets used by CUDA context.
###Code
show_doc(GPUMemory, title_level=4)
###Output
_____no_output_____
###Markdown
[`GPUMemory`](/utils.mem.htmlGPUMemory) is a namedtuple that is returned by functions like [`gpu_mem_get`](/utils.mem.htmlgpu_mem_get) and [`gpu_mem_get_all`](/utils.mem.htmlgpu_mem_get_all).
###Code
show_doc(b2mb)
###Output
_____no_output_____
###Markdown
[`b2mb`](/utils.mem.htmlb2mb) is a helper utility that just does `int(bytes/2**20)`

Memory Tracing Utils
###Code
show_doc(GPUMemTrace, title_level=4)
###Output
_____no_output_____
###Markdown
Usage examples:
```
from fastai.utils.mem import GPUMemTrace

memtrace = GPUMemTrace()
memtrace.start()  # start tracing

def some_code(): pass

some_code()
memtrace.report()  # print intermediary cumulative report
delta_used, delta_peaked = memtrace.data()  # same but as data

some_code()
memtrace.report('2nd run')  # print intermediary cumulative report
delta_used, delta_peaked = memtrace.data()

for i in range(10):
    memtrace.reset()
    some_code()
    memtrace.report(f'i={i}')  # report for just the last code run since reset

# combine report+reset
memtrace.reset()
for i in range(10):
    some_code()
    memtrace.report_n_reset(f'i={i}')  # report for just the last code run since reset

memtrace.stop()  # stop the monitor thread
```
It can also be used as a context manager:
```
with GPUMemTrace() as memtrace:
    some_code()
delta_used, delta_peaked = memtrace.data()
memtrace.report("measured in ctx")
```

Workarounds to the leaky ipython traceback on exception

ipython has a feature where it stores the tb with all the `locals()` tied in, which prevents `gc.collect()` from freeing those variables, leading to a leakage. Therefore we cleanse the tb before handing it over to ipython. The 2 ways of doing it are by either using the [`gpu_mem_restore`](/utils.mem.htmlgpu_mem_restore) decorator or the [`gpu_mem_restore_ctx`](/utils.mem.htmlgpu_mem_restore_ctx) context manager, which are described next:
###Code
show_doc(gpu_mem_restore)
###Output
_____no_output_____
###Markdown
[`gpu_mem_restore`](/utils.mem.htmlgpu_mem_restore) is a decorator to be used with any functions that interact with CUDA (top-level is fine):

* under a non-ipython environment it doesn't do anything.
* under ipython it currently strips the tb by default only for the "CUDA out of memory" exception.

The env var `FASTAI_TB_CLEAR_FRAMES` changes this behavior when run under ipython, depending on its value:

* "0": never strip the tb (makes it possible to always use `%debug` magic, but with leaks)
* "1": always strip the tb (never need to worry about leaks, but `%debug` won't work)

e.g. `os.environ['FASTAI_TB_CLEAR_FRAMES']="0"` will set it to 0.
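A minimal usage sketch (illustrative, added here; the function body is a placeholder):
```
@gpu_mem_restore
def run_training():
    ...  # any code that may raise "CUDA out of memory"
```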
###Code
show_doc(gpu_mem_restore_ctx, title_level=4)
###Output
_____no_output_____
###Markdown
Memory management utils

Utility functions for memory management. Currently primarily for GPU.
###Code
from fastai.gen_doc.nbdoc import *
from fastai.utils.mem import *
show_doc(gpu_mem_get)
###Output
_____no_output_____
###Markdown
[`gpu_mem_get`](/utils.mem.htmlgpu_mem_get)* for gpu returns `GPUMemory(total, free, used)`* for cpu returns `GPUMemory(0, 0, 0)`* for invalid gpu id returns `GPUMemory(0, 0, 0)`
###Code
show_doc(gpu_mem_get_all)
###Output
_____no_output_____
###Markdown
[`gpu_mem_get_all`](/utils.mem.htmlgpu_mem_get_all)* for gpu returns `[ GPUMemory(total_0, free_0, used_0), GPUMemory(total_1, free_1, used_1), .... ]`* for cpu returns `[]`
###Code
show_doc(gpu_mem_get_free)
show_doc(gpu_mem_get_free_no_cache)
show_doc(gpu_mem_get_used)
show_doc(gpu_mem_get_used_no_cache)
###Output
_____no_output_____
###Markdown
[`gpu_mem_get_used_no_cache`](/utils.mem.htmlgpu_mem_get_used_no_cache)
###Code
show_doc(gpu_mem_get_used_fast)
###Output
_____no_output_____
###Markdown
[`gpu_mem_get_used_fast`](/utils.mem.htmlgpu_mem_get_used_fast)
###Code
show_doc(gpu_with_max_free_mem)
###Output
_____no_output_____
###Markdown
[`gpu_with_max_free_mem`](/utils.mem.htmlgpu_with_max_free_mem):* for gpu returns: `gpu_with_max_free_ram_id, its_free_ram`* for cpu returns: `None, 0`
###Code
show_doc(preload_pytorch)
###Output
_____no_output_____
###Markdown
[`preload_pytorch`](/utils.mem.htmlpreload_pytorch) is helpful when GPU memory is being measured, since the first time any operation on `cuda` is performed by pytorch, usually about 0.5GB gets used by CUDA context.
###Code
show_doc(GPUMemory, title_level=4)
###Output
_____no_output_____
###Markdown
[`GPUMemory`](/utils.mem.htmlGPUMemory) is a namedtuple that is returned by functions like [`gpu_mem_get`](/utils.mem.htmlgpu_mem_get) and [`gpu_mem_get_all`](/utils.mem.htmlgpu_mem_get_all).
###Code
show_doc(b2mb)
###Output
_____no_output_____
###Markdown
[`b2mb`](/utils.mem.htmlb2mb) is a helper utility that just does `int(bytes/2**20)`

Memory Tracing Utils
###Code
show_doc(GPUMemTrace, title_level=4)
###Output
_____no_output_____
###Markdown
**Arguments**:

* `silent`: a shortcut to make `report` and `report_n_reset` silent w/o needing to remove those calls - this can be done from the constructor, or alternatively you can call the `silent` method anywhere to do the same.
* `ctx`: default context note in reports
* `on_exit_report`: auto-report on ctx manager exit (default `True`)

**Definitions**:

* **Delta Used** is the difference between the current used memory and the used memory at the start of the counter.
* **Delta Peaked** is the memory overhead, if any. It's calculated in two steps:
  1. The base measurement is the difference between the peak memory and the used memory at the start of the counter.
  2. Then, if delta used is positive, it gets subtracted from the base value. It indicates the size of the blip.

**Warning**: currently the peak memory usage tracking is implemented using a python thread, which is very unreliable, since there is no guarantee the thread will get a chance to run at the moment the peak memory is occurring (or it might not get a chance to run at all). Therefore we need pytorch to implement multiple concurrent and resettable [`torch.cuda.max_memory_allocated`](https://pytorch.org/docs/stable/cuda.htmltorch.cuda.max_memory_allocated) counters. Please vote for this [feature request](https://github.com/pytorch/pytorch/issues/16266).

**Usage Examples**:

Setup:
```
from fastai.utils.mem import GPUMemTrace
def some_code(): pass
mtrace = GPUMemTrace()
```
Example 1: basic measurements via `report` (prints) and via [`data`](/tabular.data.htmltabular.data) (returns) accessors
```
some_code()
mtrace.report()
delta_used, delta_peaked = mtrace.data()

some_code()
mtrace.report('2nd run of some_code()')
delta_used, delta_peaked = mtrace.data()
```
`report`'s optional `subctx` argument can be helpful if you have many `report` calls and you want to understand which is which in the outputs.

Example 2: measure in a loop, resetting the counter before each run
```
for i in range(10):
    mtrace.reset()
    some_code()
    mtrace.report(f'i={i}')
```
`reset` resets all the counters.

Example 3: like example 2, but having `report` automatically reset the counters
```
mtrace.reset()
for i in range(10):
    some_code()
    mtrace.report_n_reset(f'i={i}')
```
The tracing starts immediately upon the [`GPUMemTrace`](/utils.mem.htmlGPUMemTrace) object creation, and stops when that object is deleted. But it can also be stopped and started manually:
```
mtrace.start()
mtrace.stop()
```
`stop` is in particular useful if you want to **freeze** the [`GPUMemTrace`](/utils.mem.htmlGPUMemTrace) object and to be able to query its data on `stop` some time down the road.

**Reporting**:

In reports you can print a main context passed via the constructor:
```
mtrace = GPUMemTrace(ctx="foobar")
mtrace.report()
```
prints:
```
△Used Peaked MB: 0 0 (foobar)
```
and then add subcontext notes as needed:
```
mtrace = GPUMemTrace(ctx="foobar")
mtrace.report('1st try')
mtrace.report('2nd try')
```
prints:
```
△Used Peaked MB: 0 0 (foobar: 1st try)
△Used Peaked MB: 0 0 (foobar: 2nd try)
```
Both context and sub-context are optional, and are very useful if you sprinkle [`GPUMemTrace`](/utils.mem.htmlGPUMemTrace) in different places around the code.

You can silence report calls w/o needing to remove them via the constructor or `silent`:
```
mtrace = GPUMemTrace(silent=True)
mtrace.report()              # nothing will be printed
mtrace.silent(silent=False)
mtrace.report()              # printing resumed
mtrace.silent(silent=True)
mtrace.report()              # nothing will be printed
```

**Context Manager**:

[`GPUMemTrace`](/utils.mem.htmlGPUMemTrace) can also be used as a context manager.

Report the used and peaked deltas automatically:
```
with GPUMemTrace():
    some_code()
```
If you wish to add context:
```
with GPUMemTrace(ctx='some context'):
    some_code()
```
The context manager uses subcontext `exit` to indicate that the report comes after the context exited.

The reporting is done automatically, which is especially useful in functions due to the return call:
```
def some_func():
    with GPUMemTrace(ctx='some_func'):
        # some code
        return 1
some_func()
```
prints:
```
△Used Peaked MB: 0 0 (some_func: exit)
```
so you still get a perfect report despite the `return` call here. `ctx` is useful for specifying the *context* in case you have many of those calls through your code and you want to know which is which.

And, of course, instead of doing the above, you can use the [`gpu_mem_trace`](/utils.mem.htmlgpu_mem_trace) decorator to do it automatically, including using the function or method name as the context. Therefore, the example below does the same without modifying the function.
```
@gpu_mem_trace
def some_func():
    # some code
    return 1
some_func()
```
If you don't wish the automatic reporting, just pass `on_exit_report=False` in the constructor:
```
with GPUMemTrace(ctx='some_func', on_exit_report=False) as mtrace:
    some_code()
mtrace.report("measured in ctx")
```
or the same w/o the context note:
```
with GPUMemTrace(on_exit_report=False) as mtrace:
    some_code()
print(mtrace)  # or mtrace.report()
```
And, of course, you can get the numerical data (in rounded MBs):
```
with GPUMemTrace() as mtrace:
    some_code()
delta_used, delta_peaked = mtrace.data()
```
###Code
show_doc(gpu_mem_trace)
###Output
_____no_output_____
###Markdown
This allows you to decorate any function or method with:
```
@gpu_mem_trace
def my_function(): pass

# run:
my_function()
```
and it will automatically print the report, including the function name as a context:
```
△Used Peaked MB: 0 0 (my_function: exit)
```
In the case of methods it'll print a fully qualified method, e.g.:
```
△Used Peaked MB: 0 0 (Class.function: exit)
```

Undocumented Methods - Methods moved below this line will intentionally be hidden
###Code
show_doc(GPUMemTrace.report)
show_doc(GPUMemTrace.silent)
show_doc(GPUMemTrace.start)
show_doc(GPUMemTrace.reset)
show_doc(GPUMemTrace.peak_monitor_stop)
show_doc(GPUMemTrace.stop)
show_doc(GPUMemTrace.report_n_reset)
show_doc(GPUMemTrace.peak_monitor_func)
show_doc(GPUMemTrace.data_set)
show_doc(GPUMemTrace.data)
show_doc(GPUMemTrace.peak_monitor_start)
###Output
_____no_output_____
###Markdown
Memory management utils Utility functions for memory management. Currently primarily for GPU.
###Code
from fastai.gen_doc.nbdoc import *
from fastai.utils.mem import *
show_doc(gpu_mem_get)
###Output
_____no_output_____
###Markdown
[`gpu_mem_get`](/utils.mem.htmlgpu_mem_get)* for gpu returns `GPUMemory(total, free, used)`* for cpu returns `GPUMemory(0, 0, 0)`* for invalid gpu id returns `GPUMemory(0, 0, 0)`
###Code
show_doc(gpu_mem_get_all)
###Output
_____no_output_____
###Markdown
[`gpu_mem_get_all`](/utils.mem.htmlgpu_mem_get_all)* for gpu returns `[ GPUMemory(total_0, free_0, used_0), GPUMemory(total_1, free_1, used_1), .... ]`* for cpu returns `[]`
###Code
show_doc(gpu_mem_get_free)
show_doc(gpu_mem_get_free_no_cache)
show_doc(gpu_mem_get_used)
show_doc(gpu_mem_get_used_no_cache)
###Output
_____no_output_____
###Markdown
[`gpu_mem_get_used_no_cache`](/utils.mem.htmlgpu_mem_get_used_no_cache)
###Code
show_doc(gpu_mem_get_used_fast)
###Output
_____no_output_____
###Markdown
[`gpu_mem_get_used_fast`](/utils.mem.htmlgpu_mem_get_used_fast)
###Code
show_doc(gpu_with_max_free_mem)
###Output
_____no_output_____
###Markdown
[`gpu_with_max_free_mem`](/utils.mem.htmlgpu_with_max_free_mem):* for gpu returns: `gpu_with_max_free_ram_id, its_free_ram`* for cpu returns: `None, 0`
###Code
show_doc(preload_pytorch)
###Output
_____no_output_____
###Markdown
[`preload_pytorch`](/utils.mem.htmlpreload_pytorch) is helpful when GPU memory is being measured, since the first time any operation on `cuda` is performed by pytorch, usually about 0.5GB gets used by CUDA context.
###Code
show_doc(GPUMemory, title_level=4)
###Output
_____no_output_____
###Markdown
[`GPUMemory`](/utils.mem.htmlGPUMemory) is a namedtuple that is returned by functions like [`gpu_mem_get`](/utils.mem.htmlgpu_mem_get) and [`gpu_mem_get_all`](/utils.mem.htmlgpu_mem_get_all).
###Code
show_doc(b2mb)
###Output
_____no_output_____
###Markdown
[`b2mb`](/utils.mem.htmlb2mb) is a helper utility that just does `int(bytes/2**20)` Memory Tracing Utils
###Code
show_doc(GPUMemTrace, title_level=4)
###Output
_____no_output_____
###Markdown
**Arguments**:

* `silent`: a shortcut to make `report` and `report_n_reset` silent w/o needing to remove those calls - this can be done from the constructor, or alternatively you can call the `silent` method anywhere to do the same.
* `ctx`: default context note in reports
* `on_exit_report`: auto-report on ctx manager exit (default `True`)

**Definitions**:

* **Delta Used** is the difference between the current used memory and the used memory at the start of the counter.
* **Delta Peaked** is the memory overhead, if any. It's calculated in two steps:
  1. The base measurement is the difference between the peak memory and the used memory at the start of the counter.
  2. Then, if delta used is positive, it gets subtracted from the base value.

  It indicates the size of the blip (a worked numeric sketch follows the usage examples below).

**Warning**: currently the peak memory usage tracking is implemented using a python thread, which is very unreliable, since there is no guarantee the thread will get a chance to run at the moment the peak memory usage is occurring (or it might not get a chance to run at all). Therefore we need pytorch to implement multiple concurrent and resettable [`torch.cuda.max_memory_allocated`](https://pytorch.org/docs/stable/cuda.html#torch.cuda.max_memory_allocated) counters. Please vote for this [feature request](https://github.com/pytorch/pytorch/issues/16266).

**Usage Examples**:

Setup:
```
from fastai.utils.mem import GPUMemTrace

def some_code(): pass

mtrace = GPUMemTrace()
```

Example 1: basic measurements via `report` (prints) and via [`data`](/tabular.data.html#tabular.data) (returns) accessors
```
some_code()
mtrace.report()
delta_used, delta_peaked = mtrace.data()

some_code()
mtrace.report('2nd run of some_code()')
delta_used, delta_peaked = mtrace.data()
```
`report`'s optional `subctx` argument can be helpful if you have many `report` calls and you want to understand which is which in the outputs.

Example 2: measure in a loop, resetting the counter before each run
```
for i in range(10):
    mtrace.reset()
    some_code()
    mtrace.report(f'i={i}')
```
`reset` resets all the counters.

Example 3: like example 2, but having `report` automatically reset the counters
```
mtrace.reset()
for i in range(10):
    some_code()
    mtrace.report_n_reset(f'i={i}')
```

The tracing starts immediately upon the [`GPUMemTrace`](/utils.mem.html#GPUMemTrace) object creation, and stops when that object is deleted. But it can also be `stop`ped and `start`ed manually:
```
mtrace.start()
mtrace.stop()
```
`stop` is in particular useful if you want to **freeze** the [`GPUMemTrace`](/utils.mem.html#GPUMemTrace) object and to be able to query its data after `stop`, some time down the road.

**Reporting**:

In reports you can print a main context passed via the constructor:
```
mtrace = GPUMemTrace(ctx="foobar")
mtrace.report()
```
prints:
```
△Used Peaked MB: 0 0 (foobar)
```
and then add subcontext notes as needed:
```
mtrace = GPUMemTrace(ctx="foobar")
mtrace.report('1st try')
mtrace.report('2nd try')
```
prints:
```
△Used Peaked MB: 0 0 (foobar: 1st try)
△Used Peaked MB: 0 0 (foobar: 2nd try)
```
Both context and sub-context are optional, and are very useful if you sprinkle [`GPUMemTrace`](/utils.mem.html#GPUMemTrace) calls in different places around the code.

You can silence report calls w/o needing to remove them, via the constructor or `silent`:
```
mtrace = GPUMemTrace(silent=True)
mtrace.report()              # nothing will be printed
mtrace.silent(silent=False)
mtrace.report()              # printing resumed
mtrace.silent(silent=True)
mtrace.report()              # nothing will be printed
```

**Context Manager**:

[`GPUMemTrace`](/utils.mem.html#GPUMemTrace) can also be used as a context manager.

Report the used and peaked deltas automatically:
```
with GPUMemTrace(): some_code()
```
If you wish to add context:
```
with GPUMemTrace(ctx='some context'): some_code()
```
The context manager uses the subcontext `exit` to indicate that the report comes after the context exited. The reporting is done automatically, which is especially useful in functions, because of the `return` call:
```
def some_func():
    with GPUMemTrace(ctx='some_func'):
        # some code
        return 1

some_func()
```
prints:
```
△Used Peaked MB: 0 0 (some_func: exit)
```
so you still get a perfect report despite the `return` call here. `ctx` is useful for specifying the *context* in case you have many of those calls through your code and you want to know which is which.

And, of course, instead of doing the above, you can use the [`gpu_mem_trace`](/utils.mem.html#gpu_mem_trace) decorator to do it automatically, including using the function or method name as the context. The example below does the same without modifying the function:
```
@gpu_mem_trace
def some_func():
    # some code
    return 1

some_func()
```
If you don't wish the automatic reporting, just pass `on_exit_report=False` in the constructor:
```
with GPUMemTrace(ctx='some_func', on_exit_report=False) as mtrace: some_code()
mtrace.report("measured in ctx")
```
or the same w/o the context note:
```
with GPUMemTrace(on_exit_report=False) as mtrace: some_code()
print(mtrace)  # or mtrace.report()
```
And, of course, you can get the numerical data (in rounded MBs):
```
with GPUMemTrace() as mtrace: some_code()
delta_used, delta_peaked = mtrace.data()
```
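To make the two-step **Delta Peaked** computation defined above concrete, here is a small sketch of the arithmetic with made-up numbers (the real class samples these counters internally):
```
# hypothetical values in MB: used at counter start, observed peak, used at the end
start_used, peak, end_used = 1000, 1500, 1200

delta_used = end_used - start_used    # 200 MB stayed allocated
base = peak - start_used              # 500 MB peak over the starting point
# if delta used is positive it gets subtracted from the base value
delta_peaked = base - delta_used if delta_used > 0 else base
print(delta_used, delta_peaked)       # 200 300 -> a 300 MB transient blip
```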
###Code
show_doc(gpu_mem_trace)
###Output
_____no_output_____
###Markdown
This allows you to decorate any function or method with:
```
@gpu_mem_trace
def my_function(): pass

# run:
my_function()
```
and it will automatically print the report, including the function name as a context:
```
△Used Peaked MB: 0 0 (my_function: exit)
```
In the case of methods it'll print a fully qualified method name, e.g.:
```
△Used Peaked MB: 0 0 (Class.function: exit)
```

Workarounds to the leaky ipython traceback on exception

ipython has a feature where it stores the tb with all the `locals()` tied in, which prevents `gc.collect()` from freeing those variables and leads to a leakage. Therefore we cleanse the tb before handing it over to ipython. The two ways of doing it are by either using the [`gpu_mem_restore`](/utils.mem.html#gpu_mem_restore) decorator or the [`gpu_mem_restore_ctx`](/utils.mem.html#gpu_mem_restore_ctx) context manager, which are described next:

###Code
show_doc(gpu_mem_restore)

###Output
_____no_output_____

###Markdown
[`gpu_mem_restore`](/utils.mem.html#gpu_mem_restore) is a decorator to be used with any functions that interact with CUDA (top-level is fine):
* under a non-ipython environment it doesn't do anything.
* under ipython it currently strips the tb by default only for the "CUDA out of memory" exception.

The env var `FASTAI_TB_CLEAR_FRAMES` changes this behavior when run under ipython, depending on its value:
* "0": never strip the tb (makes it possible to always use the `%debug` magic, but with leaks)
* "1": always strip the tb (never need to worry about leaks, but `%debug` won't work)

e.g. `os.environ['FASTAI_TB_CLEAR_FRAMES']="0"` will set it to 0.

###Code
show_doc(gpu_mem_restore_ctx, title_level=4)

###Output
_____no_output_____

###Markdown
If a function decorator is not a good option, you can use a context manager instead. For example:
```
with gpu_mem_restore_ctx():
    learn.fit_one_cycle(1, 1e-2)
```
This particular one will clear the tb on any exception.

Undocumented Methods - Methods moved below this line will intentionally be hidden

###Code
show_doc(GPUMemTrace.report)
show_doc(GPUMemTrace.silent)
show_doc(get_ref_free_exc_info)
show_doc(GPUMemTrace.start)
show_doc(GPUMemTrace.reset)
show_doc(GPUMemTrace.peak_monitor_stop)
show_doc(GPUMemTrace.stop)
show_doc(GPUMemTrace.report_n_reset)
show_doc(GPUMemTrace.peak_monitor_func)
show_doc(GPUMemTrace.data_set)
show_doc(GPUMemTrace.data)
show_doc(GPUMemTrace.peak_monitor_start)
###Output
_____no_output_____
###Markdown
Memory management utils Utility functions for memory management. Currently primarily for GPU.
###Code
from fastai.gen_doc.nbdoc import *
from fastai.utils.mem import *
show_doc(gpu_mem_get)
###Output
_____no_output_____
###Markdown
[`gpu_mem_get`](/utils.mem.htmlgpu_mem_get)* for gpu returns `GPUMemory(total, free, used)`* for cpu returns `GPUMemory(0, 0, 0)`* for invalid gpu id returns `GPUMemory(0, 0, 0)`
###Code
show_doc(gpu_mem_get_all)
###Output
_____no_output_____
###Markdown
[`gpu_mem_get_all`](/utils.mem.htmlgpu_mem_get_all)* for gpu returns `[ GPUMemory(total_0, free_0, used_0), GPUMemory(total_1, free_1, used_1), .... ]`* for cpu returns `[]`
###Code
show_doc(gpu_mem_get_free)
show_doc(gpu_mem_get_free_no_cache)
show_doc(gpu_mem_get_used)
show_doc(gpu_mem_get_used_no_cache)
###Output
_____no_output_____
###Markdown
[`gpu_mem_get_used_no_cache`](/utils.mem.htmlgpu_mem_get_used_no_cache)
###Code
show_doc(gpu_mem_get_used_fast)
###Output
_____no_output_____
###Markdown
[`gpu_mem_get_used_fast`](/utils.mem.htmlgpu_mem_get_used_fast)
###Code
show_doc(gpu_with_max_free_mem)
###Output
_____no_output_____
###Markdown
[`gpu_with_max_free_mem`](/utils.mem.htmlgpu_with_max_free_mem):* for gpu returns: `gpu_with_max_free_ram_id, its_free_ram`* for cpu returns: `None, 0`
###Code
show_doc(preload_pytorch)
###Output
_____no_output_____
###Markdown
[`preload_pytorch`](/utils.mem.htmlpreload_pytorch) is helpful when GPU memory is being measured, since the first time any operation on `cuda` is performed by pytorch, usually about 0.5GB gets used by CUDA context.
###Code
show_doc(GPUMemory, title_level=4)
###Output
_____no_output_____
###Markdown
[`GPUMemory`](/utils.mem.htmlGPUMemory) is a namedtuple that is returned by functions like [`gpu_mem_get`](/utils.mem.htmlgpu_mem_get) and [`gpu_mem_get_all`](/utils.mem.htmlgpu_mem_get_all).
###Code
show_doc(b2mb)
###Output
_____no_output_____
###Markdown
[`b2mb`](/utils.mem.htmlb2mb) is a helper utility that just does `int(bytes/2**20)` Memory Tracing Utils
###Code
show_doc(GPUMemTrace, title_level=4)
###Output
_____no_output_____
###Markdown
**Arguments**:* `silent`: a shortcut to make `report` and `report_n_reset` silent w/o needing to remove those calls - this can be done from the constructor, or alternatively you can call `silent` method anywhere to do the same.* `ctx`: default context note in reports* `on_exit_report`: auto-report on ctx manager exit (default `True`)**Definitions**:* **Delta Used** is the difference between current used memory and used memory at the start of the counter.* **Delta Peaked** is the memory overhead if any. It's calculated in two steps: 1. The base measurement is the difference between the peak memory and the used memory at the start of the counter. 2. Then if delta used is positive it gets subtracted from the base value. It indicates the size of the blip. **Warning**: currently the peak memory usage tracking is implemented using a python thread, which is very unreliable, since there is no guarantee the thread will get a chance at running at the moment the peak memory is occuring (or it might not get a chance to run at all). Therefore we need pytorch to implement multiple concurrent and resettable [`torch.cuda.max_memory_allocated`](https://pytorch.org/docs/stable/cuda.htmltorch.cuda.max_memory_allocated) counters. Please vote for this [feature request](https://github.com/pytorch/pytorch/issues/16266).**Usage Examples**:Setup:```from fastai.utils.mem import GPUMemTracedef some_code(): passmtrace = GPUMemTrace()```Example 1: basic measurements via `report` (prints) and via [`data`](/tabular.data.htmltabular.data) (returns) accessors```some_code()mtrace.report()delta_used, delta_peaked = mtrace.data()some_code()mtrace.report('2nd run of some_code()')delta_used, delta_peaked = mtrace.data()````report`'s optional `subctx` argument can be helpful if you have many `report` calls and you want to understand which is which in the outputs.Example 2: measure in a loop, resetting the counter before each run```for i in range(10): mtrace.reset() some_code() mtrace.report(f'i={i}')````reset` resets all the counters.Example 3: like example 2, but having `report` automatically reset the counters```mtrace.reset()for i in range(10): some_code() mtrace.report_n_reset(f'i={i}')```The tracing starts immediately upon the [`GPUMemTrace`](/utils.mem.htmlGPUMemTrace) object creation, and stops when that object is deleted. 
But it can also be `stop`ed, `start`ed manually as well.```mtrace.start()mtrace.stop()````stop` is in particular useful if you want to **freeze** the [`GPUMemTrace`](/utils.mem.htmlGPUMemTrace) object and to be able to query its data on `stop` some time down the road.**Reporting**:In reports you can print a main context passed via the constructor:```mtrace = GPUMemTrace(ctx="foobar")mtrace.report()```prints:```△Used Peaked MB: 0 0 (foobar)```and then add subcontext notes as needed:```mtrace = GPUMemTrace(ctx="foobar")mtrace.report('1st try')mtrace.report('2nd try')```prints:```△Used Peaked MB: 0 0 (foobar: 1st try)△Used Peaked MB: 0 0 (foobar: 2nd try)```Both context and sub-context are optional, and are very useful if you sprinkle [`GPUMemTrace`](/utils.mem.htmlGPUMemTrace) in different places around the code.You can silence report calls w/o needing to remove them via constructor or `silent`:```mtrace = GPUMemTrace(silent=True)mtrace.report() nothing will be printedmtrace.silent(silent=False)mtrace.report() printing resumedmtrace.silent(silent=True)mtrace.report() nothing will be printed```**Context Manager**:[`GPUMemTrace`](/utils.mem.htmlGPUMemTrace) can also be used as a context manager:Report the used and peaked deltas automatically:```with GPUMemTrace(): some_code()```If you wish to add context:```with GPUMemTrace(ctx='some context'): some_code()```The context manager uses subcontext `exit` to indicate that the report comes after the context exited.The reporting is done automatically, which is especially useful in functions due to return call:```def some_func(): with GPUMemTrace(ctx='some_func'): some code return 1some_func()```prints:```△Used Peaked MB: 0 0 (some_func: exit)```so you still get a perfect report despite the `return` call here. `ctx` is useful for specifying the *context* in case you have many of those calls through your code and you want to know which is which.And, of course, instead of doing the above, you can use [`gpu_mem_trace`](/utils.mem.htmlgpu_mem_trace) decorator to do it automatically, including using the function or method name as the context. Therefore, the example below does the same without modifying the function.```@gpu_mem_tracedef some_func(): some code return 1some_func()```If you don't wish the automatic reporting, just pass `on_exit_report=False` in the constructor:```with GPUMemTrace(ctx='some_func', on_exit_report=False) as mtrace: some_code()mtrace.report("measured in ctx")```or the same w/o the context note:```with GPUMemTrace(on_exit_report=False) as mtrace: some_code()print(mtrace) or mtrace.report()```And, of course, you can get the numerical data (in rounded MBs):```with GPUMemTrace() as mtrace: some_code()delta_used, delta_peaked = mtrace.data()```
###Code
show_doc(gpu_mem_trace)
###Output
_____no_output_____
###Markdown
This allows you to decorate any function or method with:```@gpu_mem_tracedef my_function(): pass run:my_function()```and it will automatically print the report including the function name as a context:```△Used Peaked MB: 0 0 (my_function: exit)```In the case of methods it'll print a fully qualified method, e.g.:```△Used Peaked MB: 0 0 (Class.function: exit)``` Workarounds to the leaky ipython traceback on exceptionipython has a feature where it stores tb with all the `locals()` tied in, whichprevents `gc.collect()` from freeing those variables and leading to a leakage.Therefore we cleanse the tb before handing it over to ipython. The 2 ways of doing it are by either using the [`gpu_mem_restore`](/utils.mem.htmlgpu_mem_restore) decorator or the [`gpu_mem_restore_ctx`](/utils.mem.htmlgpu_mem_restore_ctx) context manager which are described next:
###Code
show_doc(gpu_mem_restore)
###Output
_____no_output_____
###Markdown
[`gpu_mem_restore`](/utils.mem.html#gpu_mem_restore) is a decorator to be used with any functions that interact with CUDA (top-level is fine):

* under a non-ipython environment it doesn't do anything.
* under ipython it currently strips tb by default only for the "CUDA out of memory" exception.

The env var `FASTAI_TB_CLEAR_FRAMES` changes this behavior when run under ipython, depending on its value:

* "0": never strip tb (makes it possible to always use `%debug` magic, but with leaks)
* "1": always strip tb (never need to worry about leaks, but `%debug` won't work)

e.g. `os.environ['FASTAI_TB_CLEAR_FRAMES']="0"` will set it to 0.
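A minimal sketch combining the env var and the decorator (`learn` here is a hypothetical fastai `Learner`, used only for illustration):

```
import os
os.environ['FASTAI_TB_CLEAR_FRAMES'] = "1"   # always strip the traceback

from fastai.utils.mem import gpu_mem_restore

@gpu_mem_trace if False else gpu_mem_restore
def train():
    # any code that may raise "CUDA out of memory"
    learn.fit_one_cycle(1, 1e-2)
```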
###Code
show_doc(gpu_mem_restore_ctx, title_level=4)
###Output
_____no_output_____
###Markdown
If the function decorator is not a good option, you can use a context manager instead. For example:

```
with gpu_mem_restore_ctx():
    learn.fit_one_cycle(1, 1e-2)
```

This particular one will clear tb on any exception.

Undocumented Methods - Methods moved below this line will intentionally be hidden
###Code
show_doc(GPUMemTrace.report)
show_doc(GPUMemTrace.silent)
show_doc(get_ref_free_exc_info)
show_doc(GPUMemTrace.start)
show_doc(GPUMemTrace.reset)
show_doc(GPUMemTrace.peak_monitor_stop)
show_doc(GPUMemTrace.stop)
show_doc(GPUMemTrace.report_n_reset)
show_doc(GPUMemTrace.peak_monitor_func)
show_doc(GPUMemTrace.data_set)
show_doc(GPUMemTrace.data)
show_doc(GPUMemTrace.peak_monitor_start)
###Output
_____no_output_____
###Markdown
Memory management utils

Utility functions for memory management. Currently primarily for GPU.
###Code
from fastai.gen_doc.nbdoc import *
from fastai.utils.mem import *
show_doc(gpu_mem_get)
###Output
_____no_output_____
###Markdown
[`gpu_mem_get`](/utils.mem.html#gpu_mem_get)

* for gpu returns `GPUMemory(total, free, used)`
* for cpu returns `GPUMemory(0, 0, 0)`
* for an invalid gpu id returns `GPUMemory(0, 0, 0)`
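A minimal sketch of reading these values (a CUDA device is assumed; per the definition above, the fields are `total`, `free` and `used`):

```
from fastai.utils.mem import gpu_mem_get

mem = gpu_mem_get()          # current gpu
print(mem.total, mem.free, mem.used)
```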
###Code
show_doc(gpu_mem_get_all)
###Output
_____no_output_____
###Markdown
[`gpu_mem_get_all`](/utils.mem.html#gpu_mem_get_all)

* for gpu returns `[ GPUMemory(total_0, free_0, used_0), GPUMemory(total_1, free_1, used_1), .... ]`
* for cpu returns `[]`
###Code
show_doc(gpu_mem_get_free)
show_doc(gpu_mem_get_free_no_cache)
show_doc(gpu_mem_get_used)
show_doc(gpu_mem_get_used_no_cache)
###Output
_____no_output_____
###Markdown
[`gpu_mem_get_used_no_cache`](/utils.mem.html#gpu_mem_get_used_no_cache)
###Code
show_doc(gpu_mem_get_used_fast)
###Output
_____no_output_____
###Markdown
[`gpu_mem_get_used_fast`](/utils.mem.html#gpu_mem_get_used_fast)
###Code
show_doc(gpu_with_max_free_mem)
###Output
_____no_output_____
###Markdown
[`gpu_with_max_free_mem`](/utils.mem.html#gpu_with_max_free_mem):

* for gpu returns: `gpu_with_max_free_ram_id, its_free_ram`
* for cpu returns: `None, 0`
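A minimal sketch of using it to pick the least-busy card (assumes at least one CUDA device; `torch.cuda.set_device` is pytorch's own API):

```
import torch
from fastai.utils.mem import gpu_with_max_free_mem

gpu_id, free_ram = gpu_with_max_free_mem()
if gpu_id is not None:        # None means no gpu was found
    torch.cuda.set_device(gpu_id)
```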
###Code
show_doc(preload_pytorch)
###Output
_____no_output_____
###Markdown
[`preload_pytorch`](/utils.mem.html#preload_pytorch) is helpful when GPU memory is being measured, since the first time any operation on `cuda` is performed by pytorch, usually about 0.5GB gets used by the CUDA context.
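A minimal sketch of how this overhead shows up in measurements (a CUDA device is assumed):

```
from fastai.utils.mem import preload_pytorch, gpu_mem_get

print(gpu_mem_get().used)    # before any cuda work
preload_pytorch()            # forces the CUDA context to be created
print(gpu_mem_get().used)    # now includes the ~0.5GB context overhead
```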
###Code
show_doc(GPUMemory, title_level=4)
###Output
_____no_output_____
###Markdown
[`GPUMemory`](/utils.mem.html#GPUMemory) is a namedtuple that is returned by functions like [`gpu_mem_get`](/utils.mem.html#gpu_mem_get) and [`gpu_mem_get_all`](/utils.mem.html#gpu_mem_get_all).
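Being a namedtuple, it can be unpacked positionally or read by field name; a minimal sketch (CUDA device assumed):

```
from fastai.utils.mem import gpu_mem_get

total, free, used = gpu_mem_get()   # positional unpacking
mem = gpu_mem_get()
print(mem.free == free)             # True: same value by field name
```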
###Code
show_doc(b2mb)
###Output
_____no_output_____
###Markdown
[`b2mb`](/utils.mem.html#b2mb) is a helper utility that just does `int(bytes/2**20)`.
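For example, converting pytorch's own allocator counter (which reports bytes) into MBs; `torch.cuda.memory_allocated` is pytorch's API, not part of this module, and a CUDA device is assumed:

```
import torch
from fastai.utils.mem import b2mb

print(b2mb(torch.cuda.memory_allocated()))  # bytes -> rounded-down MBs
```

Memory Tracing Utils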
###Code
show_doc(GPUMemTrace, title_level=4)
###Output
_____no_output_____
###Markdown
**Arguments**:

* `silent`: a shortcut to make `report` and `report_n_reset` silent w/o needing to remove those calls - this can be done from the constructor, or alternatively you can call the `silent` method anywhere to do the same.
* `ctx`: default context note in reports
* `on_exit_report`: auto-report on ctx manager exit (default `True`)

**Definitions**:

* **Delta Used** is the difference between current used memory and used memory at the start of the counter.
* **Delta Peaked** is the memory overhead, if any. It's calculated in two steps:
  1. The base measurement is the difference between the peak memory and the used memory at the start of the counter.
  2. Then, if delta used is positive, it gets subtracted from the base value.

  It indicates the size of the blip.

**Warning**: currently the peak memory usage tracking is implemented using a python thread, which is very unreliable, since there is no guarantee the thread will get a chance at running at the moment the peak memory is occurring (or it might not get a chance to run at all). Therefore we need pytorch to implement multiple concurrent and resettable [`torch.cuda.max_memory_allocated`](https://pytorch.org/docs/stable/cuda.html#torch.cuda.max_memory_allocated) counters. Please vote for this [feature request](https://github.com/pytorch/pytorch/issues/16266).

**Usage Examples**:

Setup:

```
from fastai.utils.mem import GPUMemTrace

def some_code(): pass
mtrace = GPUMemTrace()
```

Example 1: basic measurements via `report` (prints) and via [`data`](/tabular.data.html#tabular.data) (returns) accessors:

```
some_code()
mtrace.report()
delta_used, delta_peaked = mtrace.data()

some_code()
mtrace.report('2nd run of some_code()')
delta_used, delta_peaked = mtrace.data()
```

`report`'s optional `subctx` argument can be helpful if you have many `report` calls and you want to understand which is which in the outputs.

Example 2: measure in a loop, resetting the counter before each run:

```
for i in range(10):
    mtrace.reset()
    some_code()
    mtrace.report(f'i={i}')
```

`reset` resets all the counters.

Example 3: like example 2, but having `report` automatically reset the counters:

```
mtrace.reset()
for i in range(10):
    some_code()
    mtrace.report_n_reset(f'i={i}')
```

The tracing starts immediately upon the [`GPUMemTrace`](/utils.mem.html#GPUMemTrace) object creation, and stops when that object is deleted.
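As a worked example of the Delta Peaked arithmetic above (numbers invented purely for illustration): if used memory was 1000 MB when the counter started, the peak reached 1500 MB, and used memory ends at 1200 MB, then the base measurement is 1500 - 1000 = 500 MB, delta used is 1200 - 1000 = 200 MB, and delta peaked is therefore 500 - 200 = 300 MB.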
But it can also be `stop`ed and `start`ed manually as well:

```
mtrace.start()
mtrace.stop()
```

`stop` is in particular useful if you want to **freeze** the [`GPUMemTrace`](/utils.mem.html#GPUMemTrace) object and to be able to query its data on `stop` some time down the road.

**Reporting**:

In reports you can print a main context passed via the constructor:

```
mtrace = GPUMemTrace(ctx="foobar")
mtrace.report()
```

prints:

```
△Used Peaked MB: 0 0 (foobar)
```

and then add subcontext notes as needed:

```
mtrace = GPUMemTrace(ctx="foobar")
mtrace.report('1st try')
mtrace.report('2nd try')
```

prints:

```
△Used Peaked MB: 0 0 (foobar: 1st try)
△Used Peaked MB: 0 0 (foobar: 2nd try)
```

Both context and sub-context are optional, and are very useful if you sprinkle [`GPUMemTrace`](/utils.mem.html#GPUMemTrace) in different places around the code.

You can silence report calls w/o needing to remove them via the constructor or `silent`:

```
mtrace = GPUMemTrace(silent=True)
mtrace.report()              # nothing will be printed
mtrace.silent(silent=False)
mtrace.report()              # printing resumed
mtrace.silent(silent=True)
mtrace.report()              # nothing will be printed
```

**Context Manager**:

[`GPUMemTrace`](/utils.mem.html#GPUMemTrace) can also be used as a context manager.

Report the used and peaked deltas automatically:

```
with GPUMemTrace():
    some_code()
```

If you wish to add context:

```
with GPUMemTrace(ctx='some context'):
    some_code()
```

The context manager uses the subcontext `exit` to indicate that the report comes after the context exited. The reporting is done automatically, which is especially useful in functions due to the return call:

```
def some_func():
    with GPUMemTrace(ctx='some_func'):
        # some code
        return 1

some_func()
```

prints:

```
△Used Peaked MB: 0 0 (some_func: exit)
```

so you still get a perfect report despite the `return` call here. `ctx` is useful for specifying the *context* in case you have many of those calls through your code and you want to know which is which.

And, of course, instead of doing the above, you can use the [`gpu_mem_trace`](/utils.mem.html#gpu_mem_trace) decorator to do it automatically, including using the function or method name as the context. Therefore, the example below does the same without modifying the function:

```
@gpu_mem_trace
def some_func():
    # some code
    return 1

some_func()
```

If you don't wish the automatic reporting, just pass `on_exit_report=False` in the constructor:

```
with GPUMemTrace(ctx='some_func', on_exit_report=False) as mtrace:
    some_code()
mtrace.report("measured in ctx")
```

or the same w/o the context note:

```
with GPUMemTrace(on_exit_report=False) as mtrace:
    some_code()
print(mtrace)  # or mtrace.report()
```

And, of course, you can get the numerical data (in rounded MBs):

```
with GPUMemTrace() as mtrace:
    some_code()
delta_used, delta_peaked = mtrace.data()
```
###Code
show_doc(gpu_mem_trace)
###Output
_____no_output_____
###Markdown
This allows you to decorate any function or method with:

```
@gpu_mem_trace
def my_function():
    pass

# run:
my_function()
```

and it will automatically print the report, including the function name as a context:

```
△Used Peaked MB: 0 0 (my_function: exit)
```

In the case of methods it'll print a fully qualified method name, e.g.:

```
△Used Peaked MB: 0 0 (Class.function: exit)
```

Undocumented Methods - Methods moved below this line will intentionally be hidden
###Code
show_doc(GPUMemTrace.report)
show_doc(GPUMemTrace.silent)
show_doc(GPUMemTrace.start)
show_doc(GPUMemTrace.reset)
show_doc(GPUMemTrace.peak_monitor_stop)
show_doc(GPUMemTrace.stop)
show_doc(GPUMemTrace.report_n_reset)
show_doc(GPUMemTrace.peak_monitor_func)
show_doc(GPUMemTrace.data_set)
show_doc(GPUMemTrace.data)
show_doc(GPUMemTrace.peak_monitor_start)
###Output
_____no_output_____
###Markdown
Memory management utils

Utility functions for memory management. Currently primarily for GPU.
###Code
from fastai.gen_doc.nbdoc import *
from fastai.utils.mem import *
show_doc(gpu_mem_get)
###Output
_____no_output_____
###Markdown
[`gpu_mem_get`](/utils.mem.html#gpu_mem_get)

* for gpu returns `GPUMemory(total, free, used)`
* for cpu returns `GPUMemory(0, 0, 0)`
* for an invalid gpu id returns `GPUMemory(0, 0, 0)`
###Code
show_doc(gpu_mem_get_all)
###Output
_____no_output_____
###Markdown
[`gpu_mem_get_all`](/utils.mem.html#gpu_mem_get_all)

* for gpu returns `[ GPUMemory(total_0, free_0, used_0), GPUMemory(total_1, free_1, used_1), .... ]`
* for cpu returns `[]`
###Code
show_doc(gpu_mem_get_free_no_cache)
###Output
_____no_output_____
###Markdown
[`gpu_mem_get_free_no_cache`](/utils.mem.html#gpu_mem_get_free_no_cache)
###Code
show_doc(gpu_mem_get_used_no_cache)
###Output
_____no_output_____
###Markdown
[`gpu_mem_get_used_no_cache`](/utils.mem.html#gpu_mem_get_used_no_cache)
###Code
show_doc(gpu_mem_get_used_fast)
###Output
_____no_output_____
###Markdown
[`gpu_mem_get_used_fast`](/utils.mem.html#gpu_mem_get_used_fast)
###Code
show_doc(gpu_with_max_free_mem)
###Output
_____no_output_____
###Markdown
[`gpu_with_max_free_mem`](/utils.mem.html#gpu_with_max_free_mem):

* for gpu returns: `gpu_with_max_free_ram_id, its_free_ram`
* for cpu returns: `None, 0`
###Code
show_doc(preload_pytorch)
###Output
_____no_output_____
###Markdown
[`preload_pytorch`](/utils.mem.html#preload_pytorch) is helpful when GPU memory is being measured, since the first time any operation on `cuda` is performed by pytorch, usually about 0.5GB gets used by the CUDA context.
###Code
show_doc(GPUMemory, title_level=4)
###Output
_____no_output_____
###Markdown
[`GPUMemory`](/utils.mem.html#GPUMemory) is a namedtuple that is returned by functions like [`gpu_mem_get`](/utils.mem.html#gpu_mem_get) and [`gpu_mem_get_all`](/utils.mem.html#gpu_mem_get_all).
###Code
show_doc(b2mb)
###Output
_____no_output_____
###Markdown
[`b2mb`](/utils.mem.html#b2mb) is a helper utility that just does `int(bytes/2**20)`.

Memory Tracing Utils
###Code
show_doc(GPUMemTrace, title_level=4)
###Output
_____no_output_____
###Markdown
Usage examples:

```
from fastai.utils.mem import GPUMemTrace

memtrace = GPUMemTrace()
memtrace.start()  # start tracing

def some_code(): pass

some_code()
memtrace.report()  # print intermediary cumulative report
delta_used, delta_peaked = memtrace.data()  # same but as data

some_code()
memtrace.report('2nd run')  # print intermediary cumulative report
delta_used, delta_peaked = memtrace.data()

for i in range(10):
    memtrace.reset()
    some_code()
    memtrace.report(f'i={i}')  # report for just the last code run since reset

# combine report+reset
memtrace.reset()
for i in range(10):
    some_code()
    memtrace.report_n_reset(f'i={i}')  # report for just the last code run since reset

memtrace.stop()  # stop the monitor thread
```

It can also be used as a context manager:

```
with GPUMemTrace() as memtrace:
    some_code()

delta_used, delta_peaked = memtrace.data()
memtrace.report("measured in ctx")
```

Workarounds to the leaky ipython traceback on exception

ipython has a feature where it stores tb with all the `locals()` tied in, which prevents `gc.collect()` from freeing those variables, leading to a leakage.

Therefore we cleanse the tb before handing it over to ipython. The 2 ways of doing it are by either using the [`gpu_mem_restore`](/utils.mem.html#gpu_mem_restore) decorator or the [`gpu_mem_restore_ctx`](/utils.mem.html#gpu_mem_restore_ctx) context manager, which are described next:
###Code
show_doc(gpu_mem_restore)
###Output
_____no_output_____
###Markdown
[`gpu_mem_restore`](/utils.mem.html#gpu_mem_restore) is a decorator to be used with any functions that interact with CUDA (top-level is fine):

* under a non-ipython environment it doesn't do anything.
* under ipython it currently strips tb by default only for the "CUDA out of memory" exception.

The env var `FASTAI_TB_CLEAR_FRAMES` changes this behavior when run under ipython, depending on its value:

* "0": never strip tb (makes it possible to always use `%debug` magic, but with leaks)
* "1": always strip tb (never need to worry about leaks, but `%debug` won't work)

e.g. `os.environ['FASTAI_TB_CLEAR_FRAMES']="0"` will set it to 0.
###Code
show_doc(gpu_mem_restore_ctx, title_level=4)
###Output
_____no_output_____ |
scikit-tutorial/Scikit-Tutorial PCA.ipynb | ###Markdown
Dimensionality Reduction with PCA

PCA stands for Principal Component Analysis.

Dimensionality reduction is motivated by several problems. First, it can be used to mitigate problems caused by the curse of dimensionality. Second, dimensionality reduction can be used to compress data while minimizing the amount of information that is lost. Third, understanding the structure of data with hundreds of dimensions can be difficult; data with only two or three dimensions can be visualized easily.

Curse of Dimensionality

The higher the number of dimensions (features), the sparser the data becomes; in other words, the more features there are, the more data is required! **DATA IS PRECIOUS**

PCA helps us to project high-dimensional data onto a few dimensions, thus helping us to determine:

a) the most important dimension / the most important component of variation

b) a reduced data dimension; note that it helps reduce dimensionality, it does not help as a remedy for underfitting.

How PCA works?

PCA reduces the dimensions of a data set by projecting the data onto a lower-dimensional subspace. For example, a two-dimensional data set could be reduced by projecting the points onto a line; each instance in the data set would then be represented by a single value rather than a pair of values. A three-dimensional data set could be reduced to two dimensions by projecting the variables onto a plane. In general, an n-dimensional data set can be reduced by projecting the data set onto a k-dimensional subspace, where k is less than n.

More formally, PCA can be used to find a set of vectors that span a subspace, which minimizes the sum of the squared errors of the projected data. This projection will retain the greatest proportion of the original data set's variance.

Pre-reqs of understanding PCA

- Variance and Co-variance
- Eigenvectors and eigenvalues

NOTE: Eigenvectors and eigenvalues can only be derived from square matrices, and not all square matrices have eigenvectors or eigenvalues. If a matrix does have eigenvectors and eigenvalues, it will have a pair for each of its dimensions.

The principal components of a matrix are the eigenvectors of its covariance matrix, ordered by their corresponding eigenvalues. The eigenvector with the greatest eigenvalue is the first principal component; the second principal component is the eigenvector with the second greatest eigenvalue, and so on.
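As a minimal sketch of ranking the components by the variance they capture (using scikit-learn's `PCA`; the iris data is only a stand-in here):

```
from sklearn.decomposition import PCA
from sklearn.datasets import load_iris

X = load_iris().data
pca = PCA().fit(X)
print(pca.explained_variance_ratio_)  # variance captured per component, in decreasing order
```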
###Code
import numpy as np
a = np.array([[0.9, 1],
[2.4, 2.6],
[1.2, 1.7],
[0.5, -0.7],
[0.3, -0.7],
[1.8, 1.4],
[0.5, 0.6],
[0.3, 0.6],
[2.5, 2.6],
[1.3, 1.1]])
cov_a = np.cov(a.T)
#WE TRANSPOSE BECAUSE ALGORITHM IS DESIGNED TO DETERMINE COVARIANCE ALONG
#ROW, SO THE NUMBER OF ROWS ARE CONSIDERED AS NUMBER OF VARIABLES AND
#NOT CONSIDERED AS NUMBER OF DATA-POINTS
eigenval, vec = np.linalg.eig(cov_a)
reduced_mat = a.dot(vec[:,0])
print reduced_mat.shape
print a.shape
#one dimension less
###Output
(10,)
(10, 2)
###Markdown
Many implementations of PCA, including the one of scikit-learn, use singular value decomposition to calculate the eigenvectors and eigenvalues. SVD is given by the following equation:

$$X = U \Sigma V^{T}$$

The columns of $U$ are called the left singular vectors of the data matrix, the columns of $V$ are its right singular vectors, and the diagonal entries of $\Sigma$ are its singular values.

While the singular vectors and values of a matrix are useful in some applications of signal processing and statistics, we are only interested in them as they relate to the eigenvectors and eigenvalues of the data matrix. Specifically, the left singular vectors are the eigenvectors of the covariance matrix and the diagonal elements of $\Sigma$ are the square roots of the eigenvalues of the covariance matrix.
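A minimal numpy sketch of this relationship (random data, purely for illustration): the squared singular values of the centred data matrix, divided by n-1, equal the eigenvalues of its covariance matrix.

```
import numpy as np

np.random.seed(0)
X = np.random.randn(10, 2)         # 10 samples, 2 features
Xc = X - X.mean(axis=0)            # centre the data first

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
eigvals, eigvecs = np.linalg.eig(np.cov(Xc.T))

print(s**2 / (Xc.shape[0] - 1))    # matches eigvals (up to ordering)
print(eigvals)
```

Using PCA to visualize high-dimensional data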
###Code
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.datasets import load_iris
data = load_iris()
y = data.target
X = data.data
pca = PCA(n_components = 2)
reduced_X = pca.fit_transform(X)
red_x, red_y = [], []
blue_x, blue_y = [], []
green_x, green_y = [], []
for i in range(len(reduced_X)):
if y[i]== 0:
red_x.append(reduced_X[i][0])
red_y.append(reduced_X[i][1])
elif y[i]==1:
blue_x.append(reduced_X[i][0])
blue_y.append(reduced_X[i][1])
else:
green_x.append(reduced_X[i][0])
green_y.append(reduced_X[i][1])
plt.scatter(red_x, red_y, color='r', marker='o')
plt.scatter(blue_x, blue_y, color='b', marker='x')
plt.scatter(green_x, green_y, color='g', marker ='.')
plt.show()
###Output
_____no_output_____
###Markdown
Face Recognition with PCA

Face recognition is the supervised classification task of identifying a person from an image of his or her face. In this example, we will use a data set called Our Database of Faces from AT&T Laboratories, Cambridge. The data set contains ten images each of forty people. The images were created under different lighting conditions, and the subjects varied their facial expressions. The images are grayscale and 92 x 112 pixels in dimension.

While these images are small, a feature vector that encodes the intensity of every pixel will have 10,304 dimensions. Training from such high-dimensional data could require many samples to avoid over-fitting. Instead, we will use PCA to compactly represent the images in terms of a small number of principal components.

We can reshape the matrix of pixel intensities for an image into a vector, and create a matrix of these vectors for all of the training images. Each image is a linear combination of this data set's principal components. In the context of face recognition, these principal components are called eigenfaces. The eigenfaces can be thought of as standardized components of faces. Each face in the data set can be expressed as some combination of the eigenfaces, and can be approximated as a combination of the most important eigenfaces.
###Code
from os import walk, path
import numpy as np
import mahotas as mh
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import scale
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
X = []
y = []
for dir_path, dir_names, file_names in walk('orl_faces'):
for fn in file_names:
if fn[-3:] == 'pgm':
image_filename = path.join(dir_path, fn)
X.append(scale(mh.imread(image_filename,
as_grey = True).reshape(-1,).astype(np.float32)))
y.append(dir_path)
X = np.array(X)
plt.imshow(X[0].reshape(112,-1), cmap='gray')
X_train, X_test, y_train, y_test = train_test_split(X, y)
pca = PCA(n_components=150)
X_train_reduced = pca.fit_transform(X_train)
X_test_reduced = pca.transform(X_test)
print X_test_reduced.shape, X_train_reduced.shape
classifier = LogisticRegression()
accuracies = cross_val_score(classifier, X_train_reduced, y_train)
print 'Cross validation Accuracy: ', np.mean(accuracies)
classifier.fit(X_train_reduced, y_train)
prediction = classifier.predict(X_test_reduced)
print classification_report(y_test, prediction)
###Output
precision recall f1-score support
orl_faces/s1 1.00 0.67 0.80 3
orl_faces/s10 1.00 1.00 1.00 1
orl_faces/s11 1.00 1.00 1.00 3
orl_faces/s12 0.67 1.00 0.80 2
orl_faces/s13 1.00 1.00 1.00 2
orl_faces/s14 1.00 1.00 1.00 2
orl_faces/s15 1.00 1.00 1.00 3
orl_faces/s16 0.00 0.00 0.00 1
orl_faces/s17 1.00 1.00 1.00 3
orl_faces/s18 1.00 1.00 1.00 2
orl_faces/s19 1.00 1.00 1.00 1
orl_faces/s2 1.00 1.00 1.00 4
orl_faces/s20 1.00 1.00 1.00 3
orl_faces/s21 1.00 1.00 1.00 3
orl_faces/s22 1.00 1.00 1.00 1
orl_faces/s23 0.33 1.00 0.50 1
orl_faces/s25 1.00 1.00 1.00 2
orl_faces/s26 1.00 0.67 0.80 3
orl_faces/s27 0.67 1.00 0.80 2
orl_faces/s28 0.50 0.50 0.50 2
orl_faces/s29 1.00 0.75 0.86 4
orl_faces/s3 1.00 1.00 1.00 3
orl_faces/s30 1.00 0.80 0.89 5
orl_faces/s31 0.33 1.00 0.50 1
orl_faces/s32 1.00 0.67 0.80 3
orl_faces/s34 0.33 1.00 0.50 2
orl_faces/s35 1.00 0.60 0.75 5
orl_faces/s36 0.00 0.00 0.00 3
orl_faces/s37 1.00 1.00 1.00 3
orl_faces/s38 1.00 1.00 1.00 4
orl_faces/s39 1.00 1.00 1.00 4
orl_faces/s4 1.00 1.00 1.00 3
orl_faces/s40 1.00 0.75 0.86 4
orl_faces/s5 1.00 1.00 1.00 2
orl_faces/s6 0.67 1.00 0.80 2
orl_faces/s7 1.00 1.00 1.00 4
orl_faces/s8 1.00 1.00 1.00 2
orl_faces/s9 1.00 1.00 1.00 2
avg / total 0.90 0.87 0.87 100
|
1-2_python_intro.ipynb | ###Markdown
Python Intro

The Python programming language

Here we will learn the basics of the Python programming language. A Python program is just a collection of text files that contain instructions which a computer can read and implement. These text files are referred to as "source code".

When we run a program, the computer reads the text file and converts it into operations it can understand. The computer then performs the actions specified in the text file.

Let's first consider a very simple program. It contains only a single line of Python source code, which uses the ``print()`` function to output a single line of text to the screen. To run the code, select the cell and press *Shift+Enter* or *Ctrl+Enter*:
###Code
print('Hello World!')
###Output
Hello World!
###Markdown
The above code should have printed the phrase *Hello World!*. Note that you can change the text to whatever you want. Try changing the text now. Once you've changed the text, re-run the above cell by clicking on the cell and using *Shift+Enter*. The code should now output your new text below the cell.

Comments

In Python, we can include comments in our code using the `#` character. Lines that begin with `#` are not processed by the computer. Comments are very helpful for describing what our code does, which is useful if someone else needs to read our code, or if we need to read our own code at a later date.
###Code
# Print output to screen (this comment line will not be processed)
print("Hello World!")
x=5
print(x)
###Output
Hello World!
5
###Markdown
Now try adding the `#` to the start of the ``print("Hello World!")`` line above (the shortcut to comment a line is *Ctrl + /*). Then run the cell again. There should not be any output this time, because Python ignored the ``print()`` line.

Errors

When we make mistakes in our code, Python will attempt to provide information about the mistake in an error message:
###Code
'This string has no trailing apostrophe
###Output
_____no_output_____
###Markdown
The above line produces a syntax error, which means the code is not written correctly. In this case, the string is missing an apostrophe. Fix the above line so that the code runs without error.

I have intentionally included a lot of errors throughout this course so that you receive a lot of experience with debugging code, which is an important part of learning programming. Having to fix broken code is also a great way to challenge and improve your understanding.

***Throughout these notebooks, you should fix all errors before moving on to the next cell.***

Variables and data types

A variable stores data in memory, such as numbers or text. Let's create some variables and then output their values:
###Code
x = 1
print(x)
x = 7
y = 38
print(x)
print(y)
val = x + y
print(val)
university_name = 'Swinburne'
print(university_name)
###Output
Swinburne
###Markdown
Each variable has a particular data type, such as string or integer. Here are a few examples of common data types:
###Code
# String variable (a sequence of text)
course_name = 'Data Science Fundamentals'
print(course_name)
# Integer variable (a whole number)
year = 2021
print(year)
# Floating point (A.K.A. float) variable (a decimal number)
pi = 3.14
print(pi)
# Boolean variable (either True or False)
awake = True
print(awake)
###Output
Data Science Fundamentals
2021
3.14
True
###Markdown
Note that we can perform arithmetic operations on numerical variables, such as integer numbers (e.g. 7) and floating point numbers (e.g. 7.923):
###Code
x = 4
y = 9.2
x + y
x - y
x * y
x / y
y % x # Use % (the "modulo" operator) to get the remainder of a division
x ** 2 # Two asterixes between any variables x and i denotes x to the power of i (i.e. this is x^2)
###Output
_____no_output_____
###Markdown
Note that the values assigned to ``x`` and ``y`` are persistent across different cells. This is true of all variables in a given IPython notebook.

We can change the value of variables. Change the value of the ``season`` variable in the code below to something other than Summer (you will also need to fix any errors):
###Code
season = 'Summer'
print(season)
# New value for season
#season =
# Print out the second season:
print(season)
# Now change the value of season to something else:
#Your code here...
season = 'winte'
# Print out the third season
print(season)
###Output
Summer
Summer
winte
###Markdown
We can concatenate different string variables using the *+* operator:
###Code
sentence = "Let's learn " + course_name + '!'
print(sentence)
###Output
Let's learn Data Science Fundamentals!
###Markdown
Note that if we attempt to include the ``year`` variable, we obtain an error:
###Code
sentence = "Let's learn " + course_name + ' in ' + year
print(sentence)
###Output
_____no_output_____
###Markdown
This is because the year variable is an *integer*, not a *string*. We can convert it to a string using the ``str()`` function; i.e. change the word ``year`` in the above code to ``str(year)``, then re-run the above cell to check that it works.

Note that we can check the type of a given variable using the ``type()`` function:
###Code
print(type(sentence))
print(type(course_name))
print(type(year))
print(type(2.8))
print(type(''))
###Output
<class 'str'>
<class 'str'>
<class 'int'>
<class 'float'>
<class 'str'>
###Markdown
Now write code in the cell below that checks the data type of the variables ``a`` and ``b``:
###Code
a = True
b = 'True'
# Your code here...
###Output
_____no_output_____
###Markdown
Are the data types the same? Why?

Printing vs. automatic IPython variable output

The Jupyter Notebook and Jupyter Lab programs will automatically print a variable's value if a line contains only the name of a variable:
###Code
sentence
###Output
_____no_output_____
###Markdown
But if there are multiple lines containing just the name of a variable, only the last one will be output to screen:
###Code
sentence
season
###Output
_____no_output_____
###Markdown
However, this is not usually the case when running Python programs; in many Python interpreters the above line will not produce any output at all. Instead, we usually have to specify when we want to output a value to the screen using the ``print()`` function:
###Code
print(sentence)
print(season)
###Output
_____no_output_____
###Markdown
Combining strings

We can combine variables with strings, and there are multiple methods of accomplishing this and altering string formatting in Python. Below we show a few common methods for concatenating strings and variable values.

1. Using the ``%`` operator to insert variable values into a string:
###Code
# Create variables containing your name and height (just estimate your height in cm)
name =
height =
# Include the variables in a sentence and print the resulting string
text = 'My name is %s, and I am %s cm tall.' % (name, height)
print(text)
###Output
_____no_output_____
###Markdown
2. Using the string method ``.format()`` to insert variables values into a string:
###Code
# Include the variables in a sentence and print the resulting string
text = 'My name is {}, and I am {} cm tall.'.format(name, height)
print(text)
# Note with .format() we can also specify a different order that each variable appears in the string
text = 'My name is {1}, and I am {0} cm tall. {0} {1} {1} {1}'.format(name, height)
print(text)
###Output
_____no_output_____
###Markdown
3. Using ``+`` to concatenate strings:
###Code
# Include the variables in a sentence and print the resulting string
text = 'My name is ' + name + ', and I am ' + height + ' cm tall.'
print(text)
###Output
_____no_output_____
###Markdown
4. Using *f-strings* (works in Python 3, but not in Python 2):
###Code
# Include the variables in a sentence and print the resulting string
text = f'My name is {name}, and I am {height} cm tall.'
print(text)
###Output
_____no_output_____
###Markdown
Strings can also be multi-line strings if we use three quotation marks before and after the string (instead of one):
###Code
# A multi-line string
text = '''
My name is {}.
I am { cm tall.
'''
text = tex.format(name, height)
print(text)
# Another multi-line string
text = '''
V
e
r
t
i
c
a
l
o o
\___/
''
print(text)
###Output
_____no_output_____
###Markdown
The ``+=`` operator

Note that if we want to append one string to the end of another like so:
###Code
a = 'Hello '
b = 'World!'
print(a)
a = a + b
print((a))
###Output
Hello
Hello World!
###Markdown
we can instead use the ``+=`` operator:
###Code
a = 'Hello '
print(a)
a += b # this is equivalent to: a = a + b
print(a)
###Output
Hello
Hello World!
###Markdown
Note that the ``+=`` operator also comes in handy when we want to increment numbers:
###Code
x = 0
print(x)
x += 1 # this is equivalent to: x = x + 1
print(x)
x += 1
print(x)
x += 1
print(x)
###Output
_____no_output_____
###Markdown
And similarly for decrementing numbers:
###Code
x = 10
print(x)
x -= 1 # this is equivalent to: x = x - 1
print(x)
x -= 1
print(v)
x -= 2
print(v)
x -= 3.7
print(v)
x -= 7
print(v)
###Output
_____no_output_____
###Markdown
Note: It may be difficult to find which specific line the above error refers to (line 8). In Jupyter Lab, you can show line numbers in each cell by clicking on the "View" menu and selecting "Show Line Numbers".

Exercise 1

Create a new code cell below this cell (the shortcut to create a new cell is *Esc+b* for below the current cell, and *Esc+a* for above). In the new cell, write code that does the following:

HINT: Before you write any code, write several short comments which describe what each section of code will do. This is a useful way to both help structure your coding and to produce comments that describe what your code does.

1. Create two variables called *x* and *y*, and set them equal to two different numbers (any numbers you want)
2. Add x and y together, and store the summed value in a variable named *z*
3. Print the two numbers in a string that says "The addition of [x] and [y] is [z]"
4. Repeat step 3 above, but use each of the four different methods for combining strings, i.e.:
   - using the ``%`` operator,
   - using the ``.format()`` method,
   - using the ``+`` operator,
   - using *f-strings*.

   Which method has cleaner syntax for this particular task?
5. Use the ``+=`` operator to add the x value to z again, then print the resulting value of z
6. Change the value of x to 26, then re-run the code
7. Calculate $z^2$ and print the value
8. Print the data types of x, y, and z
###Code
x=3 # 1.Create 2 variables
y=2
z=x+y # 2.Add x and y and store in variable named as z
print ('The addition of %s and %s is %s.'%(x,y,z)) #Print using % operator
print ('The addition of {} and {} is {}'.format(x,y,z)) #print using .format operator
print ('The addition of '+ str(x)+' and '+str(y)+ ' is '+str(z)) #print using + operator
print (f'The addition of {x} and {y} is {z}') #print using f-string operator
###Output
The addition of 26 and 2 is 28
###Markdown
The cleanest syntax is the f-string.
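F-strings also accept format specifiers inside the braces, which none of the methods above showed; a quick illustration (the value here is just for demonstration):

```
z = 28
print(f'z to two decimal places is {z:.2f}')   # prints: z to two decimal places is 28.00
```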
###Code
# 5. += operator to add x value to z again and print the resulting value of z
z+=x
print(z)
#change value of x
x=26
#using ** to calculate square of a number
z=z**2
print(z)
#print data type of x, y, and z
print (type(x))
print (type(y))
print (type(z))
###Output
<class 'int'>
<class 'int'>
<class 'int'>
|
MNIST_Digit_recognition.ipynb | ###Markdown
Digit Recognition (MNIST Dataset)
###Code
# Import all the necessary libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from sklearn.model_selection import train_test_split
###Output
_____no_output_____
###Markdown
Since the dataset is imported from Kaggle, an API token should be created, which produces a .json file.
###Code
# The kaggle.json file should be kept in the ~/.kaggle directory to work
! pip install kaggle # install kaggle
! mkdir ~/.kaggle # create a directory named kaggle
! cp kaggle.json ~/.kaggle/ # place .JSON file in created directory
! chmod 600 ~/.kaggle/kaggle.json
# Now download the dataset of a particular competition
! kaggle competitions download digit-recognizer
# The downloaded dataset will be in zip format, so unzip it
! unzip train.csv.zip
! unzip test.csv.zip
# Read csv file and store it in a variable
x=pd.read_csv("train.csv")
# Look into the data
x.shape
x.info()
x.describe()
x.isna().any()
# make x and y split
x_train=x.drop(labels="label",axis=1)
y_train=x.loc[:,["label"]]
# Normalize the data (MNIST pixel intensities range from 0 to 255)
x_train=x_train/255.0
# Build a dense neural network with 784 input
# 5 layer network with softmax classifier at the end
model = tf.keras.Sequential([
tf.keras.layers.Dense(100, input_shape=(784,)),
tf.keras.layers.Dense(300, activation='relu'),
tf.keras.layers.Dense(5000, activation='relu'),
tf.keras.layers.Dense(100, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
# Compile the model with the adam optimizer
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# Split the test and train data
x_train,x_test, y_train,y_test = train_test_split(x_train, y_train, test_size=0.2, stratify=y_train)
x_train.shape,y_train.shape
# Train the model for 50 epochs
data = model.fit(x_train, y_train, epochs=50,batch_size=3360, validation_data=(x_test,y_test))
# Plotting the results: accuracy and loss
pd.DataFrame(data.history).plot()
# Import the test set and run predictions on it
test=pd.read_csv("test.csv")
prediction = model.predict(test)
prediction = np.argmax(prediction, axis=1)
# Creating submission data frame
submission = {"ImageId": [i+1 for i in range(28000)],"Label": final_pred}
submission= pd.DataFrame(submission_dict)
submission
# Creating a csv file which has all predictions and image ids, which can be verified in kaggle
submission.to_csv("submission.csv", index=False)
###Output
_____no_output_____ |
examples/vqe-examples/N-QPUs-VQE-manual-Vertical.ipynb | ###Markdown
$N$-QPU VQE - Vertical Speedup

This notebook implements routines for running a VQE ansatz on several quantum computers of smaller size, which are connected both quantumly and classically, thus achieving a vertical speedup, using Interlin-q. Here, we first use a greedy algorithm to see how many qubits we need from each QPU, and then use this information to build our parallel schedule accordingly. The controller host then takes care of sending the schedule to the different QPUs so that each one knows its respective tasks. Since the networking overhead in this approach is relatively big, we will manually retrieve the statevector from the QPU network and calculate the expectation value ourselves.

In this notebook, we will be applying VQE to the Hamiltonian of the hydrogen molecule, which requires a maximum of 4 qubits for its ansatz, which is displayed below:

Step 1: Import libraries.

First we import all the necessary libraries. Interlin-q is built using the Python framework [QuNetSim](https://arxiv.org/abs/2003.06397), which is a software framework for simulating quantum networks up to the network layer. We also need PennyLane's chemistry library for decomposing the Hamiltonian. In addition, we import the scheduling algorithm, which is based on Algorithm 1 from this [paper](https://arxiv.org/pdf/2101.02504.pdf).
###Code
%load_ext autoreload
%autoreload 2
# Basic Libraries
import sys
import numpy as np
sys.path.append("../../")
# QuNetSim Components
from qunetsim.components import Network
from qunetsim.objects import Logger
from qunetsim.backends.eqsn_backend import EQSNBackend
# Interlin-q Components
from interlinq import (ControllerHost, Constants, Clock,
Circuit, Layer, ComputingHost, Operation)
# Extra needed components
from hamiltonian_decomposition import decompose
from general_scheduler import HardwareConfig, GreedySchedule
Logger.DISABLED = False
import pennylane as qml
qml.version() # should be 0.15.1. Higher versions are not integrated with the notebook yet
###Output
_____no_output_____
###Markdown
Step 2: Decompose the Hamiltonian.
###Code
# These parameters are mentioned in PennyLane's VQE tutorial (https://pennylane.ai/qml/demos/tutorial_vqe.html)
# and are used as is for benchmarking
geometry = 'h2.xyz'
charge = 0
multiplicity = 1
basis_set = 'sto-3g'
name = 'h2'
# The decompose function runs PennyLane's decomposers and strips out the observables from their PennyLane objects
# to be used easily for our purposes
coefficients, observables, qubit_num = decompose(name, geometry, charge, multiplicity, basis_set)
terms = list(zip(coefficients, observables))
terms
###Output
_____no_output_____
###Markdown
Step 3: Schedule our Observables on our QPUs

We are going to assume a very simple quantum network consisting of just 3 QPUs, where two of the QPUs have two qubits each and the third QPU has only one qubit.
###Code
# First, determine the size of the ansatz (highest qubit index across all observables, plus one)
num_qubits = 0
for term in terms:
    _, obs = term
    for ob in obs:
        _, idx = ob
        num_qubits = idx if idx > num_qubits else num_qubits
num_qubits += 1
num_qubits
number_of_observables = len(terms)
hardware_configuration = [2, 2, 1]
config = HardwareConfig(hardware_configuration)
sched = GreedySchedule(number_of_observables, config, num_qubits, True)
sched.make_schedule()
sched.print_schedule()
###Output
### Schedule for parallelization ###
# Round 1 #
[(0, [2, 2, 0])]
# Round 2 #
[(0, [2, 2, 0])]
# Round 3 #
[(0, [2, 2, 0])]
# Round 4 #
[(0, [2, 2, 0])]
# Round 5 #
[(0, [2, 2, 0])]
# Round 6 #
[(0, [2, 2, 0])]
# Round 7 #
[(0, [2, 2, 0])]
# Round 8 #
[(0, [2, 2, 0])]
# Round 9 #
[(0, [2, 2, 0])]
# Round 10 #
[(0, [2, 2, 0])]
# Round 11 #
[(0, [2, 2, 0])]
# Round 12 #
[(0, [2, 2, 0])]
# Round 13 #
[(0, [2, 2, 0])]
# Round 14 #
[(0, [2, 2, 0])]
# Round 15 #
[(0, [2, 2, 0])]
###Markdown
As expected, we only need two qubits each from the bigger QPUs for our ansatz, and so we will build our circuits accordingly.

Step 4: Prepare the Circuit for Given Parameters

The circuit can be prepared in two different ways: either as one circuit, or as several circuits run sequentially. The former approach is simpler and generally better for the optimisation function. The latter is better for debugging and for dynamic components of a quantum circuit (i.e. circuits that have a lot of changing operations). For the rest of this notebook, we will use the former approach, since the networking overhead can be computationally expensive in a threaded environment.

Main Blocks
###Code
# Arbitrary single qubit rotation, as implemented by PennyLane here:
# https://pennylane.readthedocs.io/en/stable/code/api/pennylane.Rot.html#pennylane.Rot
def rotational_gate(params):
phi, theta, omega = params
cos = np.cos(theta / 2)
sin = np.sin(theta / 2)
res = np.array([[np.exp(-1j * (phi + omega) / 2) * cos, -np.exp(1j * (phi - omega) / 2) * sin],
[np.exp(-1j * (phi - omega) / 2) * sin, np.exp(1j * (phi + omega) / 2) * cos]])
return res
# These operations are responsible for preparing the initial state of the qubits, as shown in the figure above.
# We assume the qubits are distributed consecutively
def initialisation_operations(q_map):
computing_host_ids = list(q_map.keys())
ops = []
for host_id in computing_host_ids:
# We first initialize all the qubits in the backend
op = Operation(
name=Constants.PREPARE_QUBITS,
qids=q_map[host_id],
computing_host_ids=[host_id])
ops.append(op)
### Workaround for a bug in EQSN
# The first two in the first host
op = Operation(
name=Constants.TWO_QUBIT,
qids=[q_map['QPU_0'][0], q_map['QPU_0'][1]],
gate=Operation.CNOT,
computing_host_ids=['QPU_0'])
ops.append(op)
# Between the two computers
op = Operation(
name=Constants.TWO_QUBIT,
qids=[q_map['QPU_0'][1], q_map['QPU_1'][0]],
gate=Operation.CNOT,
computing_host_ids=computing_host_ids)
ops.append(op)
# The two in the second computer
op = Operation(
name=Constants.TWO_QUBIT,
qids=[q_map['QPU_1'][0], q_map['QPU_1'][1]],
gate=Operation.CNOT,
computing_host_ids=['QPU_1'])
ops.append(op)
################################
# Prepare the qubits on the computing host
op = Operation(
name=Constants.SINGLE,
qids=[q_map['QPU_0'][0]],
gate=Operation.X,
computing_host_ids=['QPU_0'])
ops.append(op)
op = Operation(
name=Constants.SINGLE,
qids=[q_map['QPU_0'][1]],
gate=Operation.X,
computing_host_ids=['QPU_0'])
ops.append(op)
# Needed to fix bug in EQSN
op = Operation(
name=Constants.SINGLE,
qids=[q_map['QPU_1'][0]],
gate=Operation.X,
computing_host_ids=['QPU_1'])
ops.append(op)
###########################
return [Layer(ops)]
# These operations are responsible for applying the rotation gates on the qubits, as well as the CNOTs.
def ansatz_operations(q_map, parameters):
computing_host_ids = list(q_map.keys())
layers = []
ops = []
j = 0
for host_id in computing_host_ids:
for i in range(len(q_map[host_id])):
op = Operation(
name=Constants.SINGLE,
qids=[q_map[host_id][i]],
gate=Operation.CUSTOM,
gate_param=rotational_gate(parameters[j]),
computing_host_ids=[host_id])
j += 1
ops.append(op)
layers.append(Layer(ops))
ops = []
op = Operation(
name=Constants.TWO_QUBIT,
qids=[q_map['QPU_1'][0], q_map['QPU_1'][1]],
gate=Operation.CNOT,
computing_host_ids=['QPU_1'])
ops.append(op)
op = Operation(
name=Constants.TWO_QUBIT,
qids=[q_map['QPU_1'][0], q_map['QPU_0'][0]],
gate=Operation.CNOT,
computing_host_ids=['QPU_1', 'QPU_0'])
ops.append(op)
op = Operation(
name=Constants.TWO_QUBIT,
qids=[q_map['QPU_1'][1], q_map['QPU_0'][1]],
gate=Operation.CNOT,
computing_host_ids=['QPU_1', 'QPU_0'])
ops.append(op)
layers.append(Layer(ops))
return layers
###Output
_____no_output_____
###Markdown
The Protocols

The Controller Protocols

This function builds the complete circuit needed to carry out VQE.
###Code
def prepare_qubits_and_apply_ansatz(q_map, parameters):
circuit = Circuit(q_map, initialisation_operations(q_map) + ansatz_operations(q_map, parameters))
return circuit
###Output
_____no_output_____
###Markdown
This function details the first communication protocol that will be carried out by the ControllerHost.

As shown by the function names, the host first schedules the observables internally, then sends out the circuit to all the ComputingHosts. Note that the schedules themselves have not been sent just yet.
###Code
def controller_host_protocol_preparation_ansatz(host, q_map, params, terms):
"""
Protocol for the controller host
"""
host.schedule_expectation_terms(terms, q_map)
host.generate_and_send_schedules(prepare_qubits_and_apply_ansatz(q_map, params))
###Output
_____no_output_____
###Markdown
The Computing Protocols

This is the simple communication protocol of the ComputingHosts, where they receive a schedule of operations and then listen to the synchronization clock until it is time to run a specific operation (which can be a gate or some communication with another party).
###Code
def computing_host_protocol(host):
host.receive_schedule()
###Output
_____no_output_____
###Markdown
Step 5: Run the Circuit and Get the Expectation Value

Let's now run all the different pieces from above and see if we get the expected expectation value, i.e. the value from the PennyLane tutorial.
###Code
def init_network():
# Retrieve the QuNetSim network
network = Network.get_instance()
network.delay = 0
network.start()
# Initialize the synchronization clock
clock = Clock.get_instance()
# Initialize the statevector backend
eqsn = EQSNBackend()
# Initialize the controller host with the given ID
controller_host = ControllerHost(
host_id="host_1",
backend=eqsn
)
# Create a network with the given number of QPUs, each with the given number of qubits
computing_hosts, q_map = controller_host.create_distributed_network(
num_computing_hosts=2, # Since we don't need the third QPU, we will just drop it from our simulation
num_qubits_per_host=2)
controller_host.start()
# Add all the nodes to the QuNetSim network
network.add_hosts([controller_host])
network.add_hosts(computing_hosts)
return clock, controller_host, computing_hosts, q_map
np.random.seed(0)
params = np.random.normal(0, np.pi, (4, 3))
params
###Output
_____no_output_____
###Markdown
Running the circuit
###Code
# Make sure that the QuNetSim network has no nodes that can interfere with our operation
list_hosts = list(Network.get_instance().ARP.keys())
for key in list_hosts:
Network.get_instance().remove_host(Network.get_instance().get_host(key))
# Initialize the network and the hosts
clock, controller_host, computing_hosts, q_map = init_network()
# Run the first ControllerHost communication protocol on a thread
t1 = controller_host.run_protocol(
controller_host_protocol_preparation_ansatz,
(q_map, params, terms))
# Run the first ComputingHost communication protocol on a separate thread for each QPU
threads = []
for host in computing_hosts:
threads.append(host.run_protocol(computing_host_protocol))
# Wait for all threads to finish
t1.join()
for thread in threads:
thread.join()
# Let's see how many ticks we needed for applying the ansatz
clock.ticks
###Output
_____no_output_____
###Markdown
In the horizontal speedup notebook, we only needed 4 ticks to run the circuit and the ansatz on one QPU. However, in this simulation, where a lot of communication took place between two different QPUs, the count jumped up to 22! This shows how important it is to divide the needed qubits among the QPUs as efficiently as possible. Let's now retrieve the statevector and compute the expectation value.
###Code
indices = []
# Get the IDs of all the qubits of first QPU. (It doesn't make a difference which QPU we chose)
for qubit_id in computing_hosts[0].qubit_ids:
indices.append(computing_hosts[0].get_qubit_by_id(qubit_id))
# We then call the statevector function from the backend
statevector = computing_hosts[0].backend.statevector(indices[0])[1]
statevector
from interlinq.utils.vqe_subroutines import expectation_value
expectation_value(terms, statevector, 4)
###Output
_____no_output_____
###Markdown
That's not very far from PennyLane's simulation value of -0.88179557 Ha! Such a small difference should be reconciled during the optimization process.

Optimise

Now we can go on and attempt to optimize the parameters until we minimize the expectation value.
###Code
def cost_fn(params):
params = params.reshape(4, 3)
network = Network.get_instance()
network.delay = 0
network.start()
Clock.reset_clock()
eqsn = EQSNBackend()
controller_host = ControllerHost(
host_id="host_1",
backend=eqsn
)
computing_hosts, q_map = controller_host.create_distributed_network(
num_computing_hosts=2,
num_qubits_per_host=2)
controller_host.start()
network.add_hosts(computing_hosts)
network.add_hosts([controller_host])
#############################################################
t1 = controller_host.run_protocol(controller_host_protocol_preparation_ansatz, (q_map, params, terms))
threads = []
for host in computing_hosts:
threads.append(host.run_protocol(computing_host_protocol))
t1.join()
for thread in threads:
thread.join()
#############################################################
for host in computing_hosts:
network.remove_host(host)
network.remove_host(controller_host)
indices = []
for qubit_id in computing_hosts[0].qubit_ids:
indices.append(computing_hosts[0].get_qubit_by_id(qubit_id))
statevector = computing_hosts[0].backend.statevector(indices[0])[1]
total_exp = expectation_value(terms, statevector, 4)
return np.real(total_exp)
np.random.seed(0)
params = np.random.normal(0, np.pi, (4, 3))
params
###Output
_____no_output_____
###Markdown
Let's see if the cost function returns only one final loss value as expected.
###Code
cost_fn(params)
###Output
2021-06-26 00:51:58,567: Host QPU_0 started processing
2021-06-26 00:51:58,567: Host QPU_1 started processing
2021-06-26 00:51:58,567: Host host_1 started processing
2021-06-26 00:51:58,570: host_1 sends BROADCAST message
2021-06-26 00:51:58,727: sending ACK:1 from QPU_0 to host_1
2021-06-26 00:51:58,742: sending ACK:1 from QPU_1 to host_1
2021-06-26 00:51:58,779: QPU_0 received {"QPU_0": [{"name": "PREPARE_QUBITS", "qids": ["q_0_0", "q_0_1"], "cids": null, "gate": null, "gate_param": null, "computing_host_ids": ["QPU_0"], "pre_allocated_qubits": false, "layer_end": 0}, {"name": "TWO_QUBIT", "qids": ["q_0_0", "q_0_1"], "cids": null, "gate": "cnot", "gate_param": null, "computing_host_ids": ["QPU_0"], "pre_allocated_qubits": false, "layer_end": 0}, {"name": "SINGLE", "qids": ["q_0_0"], "cids": null, "gate": "X", "gate_param": null, "computing_host_ids": ["QPU_0"], "pre_allocated_qubits": false, "layer_end": 0}, {"name": "SINGLE", "qids": ["q_0_1"], "cids": null, "gate": "X", "gate_param": null, "computing_host_ids": ["QPU_0"], "pre_allocated_qubits": false, "layer_end": 0}, {"name": "SEND_ENT", "qids": ["f4d4ba40-0f73-46bb-9d1b-ed4b239056ae"], "cids": null, "gate": null, "gate_param": null, "computing_host_ids": ["QPU_0", "QPU_1"], "pre_allocated_qubits": true, "layer_end": 0}, {"name": "TWO_QUBIT", "qids": ["q_0_1", "f4d4ba40-0f73-46bb-9d1b-ed4b239056ae"], "cids": null, "gate": "cnot", "gate_param": null, "computing_host_ids": ["QPU_0"], "pre_allocated_qubits": false, "layer_end": 1}, {"name": "MEASURE", "qids": ["f4d4ba40-0f73-46bb-9d1b-ed4b239056ae"], "cids": ["5057f707-6381-4d80-9b2d-0d927c3105af"], "gate": null, "gate_param": null, "computing_host_ids": ["QPU_0"], "pre_allocated_qubits": false, "layer_end": 2}, {"name": "SEND_CLASSICAL", "qids": null, "cids": ["5057f707-6381-4d80-9b2d-0d927c3105af"], "gate": null, "gate_param": null, "computing_host_ids": ["QPU_0", "QPU_1"], "pre_allocated_qubits": false, "layer_end": 3}, {"name": "REC_CLASSICAL", "qids": null, "cids": ["ee4ab5b3-e165-44a7-915d-76363dc0d218"], "gate": null, "gate_param": null, "computing_host_ids": ["QPU_0", "QPU_1"], "pre_allocated_qubits": false, "layer_end": 8}, {"name": "CLASSICAL_CTRL_GATE", "qids": ["q_0_1"], "cids": ["ee4ab5b3-e165-44a7-915d-76363dc0d218"], "gate": "Z", "gate_param": null, "computing_host_ids": ["QPU_0"], "pre_allocated_qubits": false, "layer_end": 9}, {"name": "SINGLE", "qids": ["q_0_0"], "cids": null, "gate": "custom_gate", "gate_param": [[[-0.31798493034765196, 0.7437467353318117], [-0.19454774447448653, -0.5548671488518317]], [[0.19454774447448653, -0.5548671488518317], [-0.31798493034765196, -0.7437467353318117]]], "computing_host_ids": ["QPU_0"], "pre_allocated_qubits": false, "layer_end": 10}, {"name": "SINGLE", "qids": ["q_0_1"], "cids": null, "gate": "custom_gate", "gate_param": [[[0.39367767478169263, 0.8957445465285611], [-0.06940503059650993, 0.19453158476381135]], [[0.06940503059650993, 0.19453158476381135], [0.39367767478169263, -0.8957445465285611]]], "computing_host_ids": ["QPU_0"], "pre_allocated_qubits": false, "layer_end": 10}, {"name": "REC_ENT", "qids": ["b5274dcc-8e4e-4693-bd3c-2a8f0b831aa7"], "cids": null, "gate": null, "gate_param": null, "computing_host_ids": ["QPU_0", "QPU_1"], "pre_allocated_qubits": true, "layer_end": 11}, {"name": "REC_ENT", "qids": ["d94f2556-c38f-4b3a-ba79-430ad980d088"], "cids": null, "gate": null, "gate_param": null, "computing_host_ids": ["QPU_0", "QPU_1"], "pre_allocated_qubits": true, "layer_end": 11}, {"name": "REC_CLASSICAL", "qids": null, "cids": ["117df696-02ce-4c86-8614-e89a9d090288"], "gate": null, "gate_param": null, "computing_host_ids": ["QPU_0", "QPU_1"], "pre_allocated_qubits": false, "layer_end": 14}, {"name": "REC_CLASSICAL", "qids": null, "cids": ["ab6f9a12-2cfc-4122-97fa-028707e3f0f8"], "gate": null, "gate_param": null, "computing_host_ids": ["QPU_0", "QPU_1"], 
"pre_allocated_qubits": false, "layer_end": 14}, {"name": "CLASSICAL_CTRL_GATE", "qids": ["b5274dcc-8e4e-4693-bd3c-2a8f0b831aa7"], "cids": ["117df696-02ce-4c86-8614-e89a9d090288"], "gate": "X", "gate_param": null, "computing_host_ids": ["QPU_0"], "pre_allocated_qubits": false, "layer_end": 15}, {"name": "CLASSICAL_CTRL_GATE", "qids": ["d94f2556-c38f-4b3a-ba79-430ad980d088"], "cids": ["ab6f9a12-2cfc-4122-97fa-028707e3f0f8"], "gate": "X", "gate_param": null, "computing_host_ids": ["QPU_0"], "pre_allocated_qubits": false, "layer_end": 15}, {"name": "TWO_QUBIT", "qids": ["b5274dcc-8e4e-4693-bd3c-2a8f0b831aa7", "q_0_0"], "cids": null, "gate": "cnot", "gate_param": null, "computing_host_ids": ["QPU_0"], "pre_allocated_qubits": false, "layer_end": 16}, {"name": "TWO_QUBIT", "qids": ["d94f2556-c38f-4b3a-ba79-430ad980d088", "q_0_1"], "cids": null, "gate": "cnot", "gate_param": null, "computing_host_ids": ["QPU_0"], "pre_allocated_qubits": false, "layer_end": 16}, {"name": "SINGLE", "qids": ["b5274dcc-8e4e-4693-bd3c-2a8f0b831aa7"], "cids": null, "gate": "H", "gate_param": null, "computing_host_ids": ["QPU_0"], "pre_allocated_qubits": false, "layer_end": 17}, {"name": "SINGLE", "qids": ["d94f2556-c38f-4b3a-ba79-430ad980d088"], "cids": null, "gate": "H", "gate_param": null, "computing_host_ids": ["QPU_0"], "pre_allocated_qubits": false, "layer_end": 17}, {"name": "MEASURE", "qids": ["b5274dcc-8e4e-4693-bd3c-2a8f0b831aa7"], "cids": ["86139430-e4a2-42d4-96ed-c9f23f69b1e5"], "gate": null, "gate_param": null, "computing_host_ids": ["QPU_0"], "pre_allocated_qubits": false, "layer_end": 18}, {"name": "MEASURE", "qids": ["d94f2556-c38f-4b3a-ba79-430ad980d088"], "cids": ["ea0eb076-ae8d-4dd4-bfdb-b8beb9b82a42"], "gate": null, "gate_param": null, "computing_host_ids": ["QPU_0"], "pre_allocated_qubits": false, "layer_end": 18}, {"name": "SEND_CLASSICAL", "qids": null, "cids": ["86139430-e4a2-42d4-96ed-c9f23f69b1e5"], "gate": null, "gate_param": null, "computing_host_ids": ["QPU_0", "QPU_1"], "pre_allocated_qubits": false, "layer_end": 19}, {"name": "SEND_CLASSICAL", "qids": null, "cids": ["ea0eb076-ae8d-4dd4-bfdb-b8beb9b82a42"], "gate": null, "gate_param": null, "computing_host_ids": ["QPU_0", "QPU_1"], "pre_allocated_qubits": false, "layer_end": 19}], "QPU_1": [{"name": "PREPARE_QUBITS", "qids": ["q_1_0", "q_1_1"], "cids": null, "gate": null, "gate_param": null, "computing_host_ids": ["QPU_1"], "pre_allocated_qubits": false, "layer_end": 0}, {"name": "TWO_QUBIT", "qids": ["q_1_0", "q_1_1"], "cids": null, "gate": "cnot", "gate_param": null, "computing_host_ids": ["QPU_1"], "pre_allocated_qubits": false, "layer_end": 0}, {"name": "SINGLE", "qids": ["q_1_0"], "cids": null, "gate": "X", "gate_param": null, "computing_host_ids": ["QPU_1"], "pre_allocated_qubits": false, "layer_end": 0}, {"name": "REC_ENT", "qids": ["f4d4ba40-0f73-46bb-9d1b-ed4b239056ae"], "cids": null, "gate": null, "gate_param": null, "computing_host_ids": ["QPU_1", "QPU_0"], "pre_allocated_qubits": true, "layer_end": 0}, {"name": "REC_CLASSICAL", "qids": null, "cids": ["5057f707-6381-4d80-9b2d-0d927c3105af"], "gate": null, "gate_param": null, "computing_host_ids": ["QPU_1", "QPU_0"], "pre_allocated_qubits": false, "layer_end": 3}, {"name": "CLASSICAL_CTRL_GATE", "qids": ["f4d4ba40-0f73-46bb-9d1b-ed4b239056ae"], "cids": ["5057f707-6381-4d80-9b2d-0d927c3105af"], "gate": "X", "gate_param": null, "computing_host_ids": ["QPU_1"], "pre_allocated_qubits": false, "layer_end": 4}, {"name": "TWO_QUBIT", "qids": ["f4d4ba40-0f73-46bb-9d1b-ed4b239056ae", 
"q_1_0"], "cids": null, "gate": "cnot", "gate_param": null, "computing_host_ids": ["QPU_1"], "pre_allocated_qubits": false, "layer_end": 5}, {"name": "SINGLE", "qids": ["f4d4ba40-0f73-46bb-9d1b-ed4b239056ae"], "cids": null, "gate": "H", "gate_param": null, "computing_host_ids": ["QPU_1"], "pre_allocated_qubits": false, "layer_end": 6}, {"name": "MEASURE", "qids": ["f4d4ba40-0f73-46bb-9d1b-ed4b239056ae"], "cids": ["ee4ab5b3-e165-44a7-915d-76363dc0d218"], "gate": null, "gate_param": null, "computing_host_ids": ["QPU_1"], "pre_allocated_qubits": false, "layer_end": 7}, {"name": "SEND_CLASSICAL", "qids": null, "cids": ["ee4ab5b3-e165-44a7-915d-76363dc0d218"], "gate": null, "gate_param": null, "computing_host_ids": ["QPU_1", "QPU_0"], "pre_allocated_qubits": false, "layer_end": 8}, {"name": "SINGLE", "qids": ["q_1_0"], "cids": null, "gate": "custom_gate", "gate_param": [[[0.2315227000379894, -0.9438901384346993], [-0.01969801507418892, 0.234692637581541]], [[0.01969801507418892, 0.234692637581541], [0.2315227000379894, 0.9438901384346993]]], "computing_host_ids": ["QPU_1"], "pre_allocated_qubits": false, "layer_end": 10}, {"name": "SINGLE", "qids": ["q_1_1"], "cids": null, "gate": "custom_gate", "gate_param": [[[-0.9526411467473749, -0.20529868550618627], [0.015378497582795967, 0.22380973407198954]], [[-0.015378497582795967, 0.22380973407198954], [-0.9526411467473749, 0.20529868550618627]]], "computing_host_ids": ["QPU_1"], "pre_allocated_qubits": false, "layer_end": 10}, {"name": "TWO_QUBIT", "qids": ["q_1_0", "q_1_1"], "cids": null, "gate": "cnot", "gate_param": null, "computing_host_ids": ["QPU_1"], "pre_allocated_qubits": false, "layer_end": 11}, {"name": "SEND_ENT", "qids": ["b5274dcc-8e4e-4693-bd3c-2a8f0b831aa7"], "cids": null, "gate": null, "gate_param": null, "computing_host_ids": ["QPU_1", "QPU_0"], "pre_allocated_qubits": true, "layer_end": 11}, {"name": "SEND_ENT", "qids": ["d94f2556-c38f-4b3a-ba79-430ad980d088"], "cids": null, "gate": null, "gate_param": null, "computing_host_ids": ["QPU_1", "QPU_0"], "pre_allocated_qubits": true, "layer_end": 11}, {"name": "TWO_QUBIT", "qids": ["q_1_0", "b5274dcc-8e4e-4693-bd3c-2a8f0b831aa7"], "cids": null, "gate": "cnot", "gate_param": null, "computing_host_ids": ["QPU_1"], "pre_allocated_qubits": false, "layer_end": 12}, {"name": "TWO_QUBIT", "qids": ["q_1_1", "d94f2556-c38f-4b3a-ba79-430ad980d088"], "cids": null, "gate": "cnot", "gate_param": null, "computing_host_ids": ["QPU_1"], "pre_allocated_qubits": false, "layer_end": 12}, {"name": "MEASURE", "qids": ["b5274dcc-8e4e-4693-bd3c-2a8f0b831aa7"], "cids": ["117df696-02ce-4c86-8614-e89a9d090288"], "gate": null, "gate_param": null, "computing_host_ids": ["QPU_1"], "pre_allocated_qubits": false, "layer_end": 13}, {"name": "MEASURE", "qids": ["d94f2556-c38f-4b3a-ba79-430ad980d088"], "cids": ["ab6f9a12-2cfc-4122-97fa-028707e3f0f8"], "gate": null, "gate_param": null, "computing_host_ids": ["QPU_1"], "pre_allocated_qubits": false, "layer_end": 13}, {"name": "SEND_CLASSICAL", "qids": null, "cids": ["117df696-02ce-4c86-8614-e89a9d090288"], "gate": null, "gate_param": null, "computing_host_ids": ["QPU_1", "QPU_0"], "pre_allocated_qubits": false, "layer_end": 14}, {"name": "SEND_CLASSICAL", "qids": null, "cids": ["ab6f9a12-2cfc-4122-97fa-028707e3f0f8"], "gate": null, "gate_param": null, "computing_host_ids": ["QPU_1", "QPU_0"], "pre_allocated_qubits": false, "layer_end": 14}, {"name": "REC_CLASSICAL", "qids": null, "cids": ["86139430-e4a2-42d4-96ed-c9f23f69b1e5"], "gate": null, "gate_param": null, 
"computing_host_ids": ["QPU_1", "QPU_0"], "pre_allocated_qubits": false, "layer_end": 19}, {"name": "REC_CLASSICAL", "qids": null, "cids": ["ea0eb076-ae8d-4dd4-bfdb-b8beb9b82a42"], "gate": null, "gate_param": null, "computing_host_ids": ["QPU_1", "QPU_0"], "pre_allocated_qubits": false, "layer_end": 19}, {"name": "CLASSICAL_CTRL_GATE", "qids": ["q_1_0"], "cids": ["86139430-e4a2-42d4-96ed-c9f23f69b1e5"], "gate": "Z", "gate_param": null, "computing_host_ids": ["QPU_1"], "pre_allocated_qubits": false, "layer_end": 20}, {"name": "CLASSICAL_CTRL_GATE", "qids": ["q_1_1"], "cids": ["ea0eb076-ae8d-4dd4-bfdb-b8beb9b82a42"], "gate": "Z", "gate_param": null, "computing_host_ids": ["QPU_1"], "pre_allocated_qubits": false, "layer_end": 20}]} with sequence number 0
###Markdown
Looks good! Let's disable the `Logger` to avoid clutter during the optimization process.
###Code
Logger.DISABLED = True
###Output
_____no_output_____
###Markdown
It should be noted that the networked nature of this simulation makes it tricky to use gradient-based approaches for optimization, as Interlin-q does not support them yet. So we will go for gradient-free optimizers. SciPy Optimisers SciPy optimizers from the `minimize` API are too slow for the purely networked version, so they are not included in this notebook. Scikit-Quant Optimisers This library is specialized in gradient-free optimization of parameterized quantum circuits, and is thus perfect for our needs. The library takes as input a `budget` value, which determines how long it should keep attempting to minimize the expectation value. **Note: the optimization can take up to 15 minutes on an average computer.**
###Code
from skquant.interop.scipy import *
import numpy as np  # needed below for the bounds and the initial parameters
# We need to bound our parameters for the optimization
bounds = np.array([-3*np.pi, 3*np.pi])
bounds = np.tile(bounds, (4*3, 1))
np.random.seed(0)
params = np.random.normal(0, np.pi, (4, 3))
# The library must receive the parameters as a flat 1D array
flattened_parameters = params.flatten()
flattened_parameters
###Output
_____no_output_____
###Markdown
Let's try the first optimizer.
###Code
budget = 100
minimum_bobyqa = minimize(cost_fn, flattened_parameters, method=pybobyqa, bounds=bounds, options={'budget' : budget})
minimum_bobyqa
###Output
_____no_output_____
###Markdown
That's very close to PennyLane's -1.13613394 Ha! Let's try the other two optimizers.
###Code
budget = 100
minimum_imfil = minimize(cost_fn, flattened_parameters, method=imfil, bounds=bounds, options={'budget' : budget})
minimum_imfil
budget = 100
minimum_snobfit = minimize(cost_fn, flattened_parameters, method=snobfit, bounds=bounds, options={'budget' : budget})
minimum_snobfit
###Output
_____no_output_____ |
for-scripters/Python/scNetViz_use_case_1.ipynb | ###Markdown
scNetViz: EMBL-EBI Single Cell Expression Atlas Krishna Choudhary, Yihang Xin and Alex Pico 2021-01-29 In this example, we will browse a single cell expression atlas, explore a particular dataset, perform differential expression analysis based on provided categories, generate networks from the top genes from each category, and functionally characterize and visualize the networks. Installation The following chunk of code installs the `py4cytoscape` module.
###Code
%%capture
!python3 -m pip install python-igraph requests pandas networkx
!python3 -m pip install py4cytoscape
###Output
_____no_output_____
###Markdown
Prerequisites In addition to this package (py4cytoscape, latest version 0.0.7), you will need:* The latest version of Cytoscape, which can be downloaded from https://cytoscape.org/download.html. Simply follow the installation instructions on screen.* Complete the installation wizard* Launch Cytoscape For this vignette, you need to install the following apps:* Install the stringApp app from https://apps.cytoscape.org/apps/stringapp* Install the filetransfer app from https://apps.cytoscape.org/apps/filetransfer* Install the enhancedGraphics app from https://apps.cytoscape.org/apps/enhancedgraphics* Install the cyBrowser app from https://apps.cytoscape.org/apps/cybrowser* Install the cyPlot app from https://apps.cytoscape.org/apps/cyplot You can also install an app from inside a Python notebook by running "py4cytoscape.install_app('Your App')" Import the required package
###Code
import py4cytoscape as p4c
p4c.cytoscape_version_info() # Check cytoscape connection.
###Output
_____no_output_____
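###Markdown
Optionally, the apps listed in the prerequisites can be installed from inside the notebook as well, using the `install_app` call mentioned above (the exact app name strings are assumed from their store pages):
###Code
for app in ['stringApp', 'filetransfer', 'enhancedGraphics', 'cyBrowser', 'cyPlot']:
    p4c.install_app(app)
###Output
_____no_output_____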
###Markdown
Pull data from the EMBL-EBI Single-Cell Expression Atlas Use the accession number of a single-cell experiment to pull data from the [Single-Cell Experiment Atlas](https://www.ebi.ac.uk/gxa/sc/experiments) of EMBL-EBI.
###Code
p4c.commands.commands_run('scnetviz load gxa experiment accession=E-GEOD-81383')
###Output
_____no_output_____
###Markdown
This loads the data and opens an experiment table with three tabs, named _TPM_, _Categories_, and _DiffExp_. Differential expression analysis Run the differential expression analysis for the row whose `sel.K` value is `true` (the default).
###Code
p4c.commands.commands_run('scnetviz calculate diffexp accession=E-GEOD-81383')
###Output
_____no_output_____
###Markdown
Query STRING database for interaction networks Fetch protein-protein interaction networks from the [STRING](https://string-db.org/) database.
###Code
p4c.commands.commands_run('scnetviz create network accession=E-GEOD-81383')
###Output
_____no_output_____
###Markdown
The following command both runs the differential expression analysis and fetches the interaction networks in a single step.
###Code
p4c.commands.commands_run('scnetviz create all experiment=E-GEOD-81383')
###Output
_____no_output_____
###Markdown
Functional enrichment analysis Check the networks available in the current Cytoscape session.
###Code
p4c.commands.commands_run('network list')
###Output
_____no_output_____
###Markdown
Perform functional enrichment analysis for the network selected in the current session. This uses the [_stringApp_](https://www.cgl.ucsf.edu/cytoscape/stringApp/index.shtml). To view the results in the Cytoscape application, you may have to activate the Show enrichment panel option under the STRING Enrichment sub-menu of the Apps menu in the menu bar.
###Code
p4c.commands.commands_run('string retrieve enrichment allNetSpecies=Homo sapiens')
###Output
_____no_output_____ |
evaluacion_carlospantoja.ipynb | ###Markdown
Evaluation Complete what is missing.
###Code
# installation
!pip install pandas
!pip install matplotlib
!pip install pandas-datareader
# 1. Import the libraries
import pandas as pd
import pandas_datareader.data as web
import matplotlib.pyplot as plt
# 2. Set a start date "2020-01-01" and an end date "2021-08-31"
start_date = "2020-01-01"
end_date = "2021-08-31"
# 3. Use the data reader method to store the Facebook ('FB')
# stock price data in a DataFrame called data.
# https://finance.yahoo.com/quote/FB/history?p=FB
data = web.DataReader(name='FB', data_source='yahoo', start=start_date, end=end_date)
data
# The output looks the same as what we would read from any CSV file.
# 4. Explain the result.
###Output
_____no_output_____
###Markdown
* The stock's high prices tend to increase over the period, from 209.78 on 2020-01-02 to 382.76 on 2021-08-31; during the period analyzed there is no large price fluctuation, that is, the gap between the high and low prices was not significant.* The opening price, the price at which a security starts trading in a market session, shows an increase over the period analyzed, from 209.77 to 379.38.* The closing price, the last quote of the day for a given financial instrument on the stock market, shows an increase just like the opening price, except that the close is always slightly higher than the latter.* The adjusted closing price has not varied much over the observed period, only an increase from 2020 to August 2021.* Regarding the number of shares bought and sold, we can say that there was an increase from January 2020, but during August there were declines, especially up to the 25th and 26th; from August 27 onward the share volume rose again. We can also observe that on December 24 the volume fell sharply but then returned to its initial trend.
###Code
# 5. Show a summary of the basic information about this DataFrame and its data
# use the functions dataFrame.info() and dataFrame.describe()
data.info()
data.describe()
# 6. Return the first 5 rows of the DataFrame with dataFrame.head() or dataFrame.iloc[]
data.head(n=5)
# 7. Select only the 'Open', 'Close' and 'Volume' columns of the DataFrame with dataFrame.loc
data.loc[:, ['Open', 'Close', 'Volume']]
# See the range of the data
data.index.min(), data.index.max()
# 8. Now plot the "Close" data using the matplotlib library in Python,
# 9. Add title, marker, linestyle and color to improve the visualization
close = data['Close']
ax = close.plot(title='Facebook', linestyle='-', color='b')
ax.set_xlabel('Years')
ax.set_ylabel('Close')
ax.grid()  # optional
plt.show()
# 10. Explain the simple line chart
###Output
_____no_output_____ |
StanfordAlgorithmSeries/MST.ipynb | ###Markdown
Q1 This 'jobs_greedy_algorithm.txt' file describes a set of jobs with positive and integral weights and lengths. It has the format[number_of_jobs][job_1_weight] [job_1_length][job_2_weight] [job_2_length]...For example, the third line of the file is "74 59", indicating that the second job has weight 74 and length 59.You should NOT assume that job weights or lengths are distinct.Your task in this problem is to run the greedy algorithm that schedules jobs in decreasing order of the difference (weight - length). Recall from lecture that this algorithm is not always optimal. IMPORTANT: if two jobs have equal difference (weight - length), you should schedule the job with higher weight first. Beware: if you break ties in a different way, you are likely to get the wrong answer. You should report the sum of weighted completion times of the resulting schedule --- a positive integer --- in the box below.ADVICE: If you get the wrong answer, try out some small test cases to debug your algorithm (and post your test cases to the discussion forum).
###Code
import heapq
import numpy as np  # used later in the Prim's algorithm cells

with open('jobs_greedy_algorithm.txt','r') as f:
lines = f.readlines()
jobs = list(map(lambda x: list(map(int, x.split())), lines))
weighted_jobs = list(map(lambda x: [x[1]-x[0], x[0], x[1]], jobs[1:]))
heapq.heapify(weighted_jobs)
tie_bucket = []   # jobs sharing the same (length - weight) key, to be ordered by weight
weighted_ct = 0   # sum of weighted completion times
ct = 0            # running completion time
tmp_job = heapq.heappop(weighted_jobs)
tmp_weight = tmp_job[0]
tie_bucket.append(tmp_job)
count = 0         # jobs scheduled
count2 = 1        # jobs popped from the heap (sanity check)
while weighted_jobs != []:
if weighted_jobs != []:
tmp_job = heapq.heappop(weighted_jobs)
while tmp_job[0] == tmp_weight:
tie_bucket.append(tmp_job)
count2 += 1
if weighted_jobs != []:
tmp_job = heapq.heappop(weighted_jobs)
else:
break
tmp_weight = tmp_job[0]
else:
tmp_job = None
if tie_bucket != []:
tie_bucket2 = list(map(lambda x: [-x[1], x[2]], tie_bucket))
tie_bucket = []
heapq.heapify(tie_bucket2)
while tie_bucket2 != []:
tmp_job2 = heapq.heappop(tie_bucket2)
count += 1
#print(tmp_job2)
ct += tmp_job2[1]
weighted_ct += -tmp_job2[0]*ct
if tmp_job is not None:
tie_bucket.append(tmp_job)
count2 += 1
weighted_ct, count, count2
###Output
_____no_output_____
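###Markdown
For reference, the same schedule can be computed more compactly with a single sort (a sketch assuming `jobs` was parsed as above): sort by decreasing difference (weight - length), breaking ties by higher weight. The same pattern with `key=lambda j: j[0] / j[1]` also answers the next question.
###Code
schedule = sorted(jobs[1:], key=lambda j: (j[0] - j[1], j[0]), reverse=True)
ct = 0
weighted_ct = 0
for w, l in schedule:
    ct += l               # completion time of the current job
    weighted_ct += w * ct
weighted_ct
###Output
_____no_output_____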
###Markdown
Q2 For this problem, use the same data set as in the previous problem. Your task now is to run the greedy algorithm that schedules jobs (optimally) in decreasing order of the ratio (weight/length). In this algorithm, it does not matter how you break ties. You should report the sum of weighted completion times of the resulting schedule --- a positive integer --- in the box below.
###Code
weighted_jobs = list(map(lambda x: [-x[0]/x[1], x[0], x[1]], jobs[1:]))
heapq.heapify(weighted_jobs)
ct = 0
weighted_ct = 0
count = 0
while weighted_jobs != []:
tmp_job = heapq.heappop(weighted_jobs)
count += 1
ct += tmp_job[2]
weighted_ct += tmp_job[1]*ct
weighted_ct, count, ct
###Output
_____no_output_____
###Markdown
Q3 Prim's MST Algorithm This 'MST.txt' file describes an undirected graph with integer edge costs. It has the format[number_of_nodes] [number_of_edges][one_node_of_edge_1] [other_node_of_edge_1] [edge_1_cost][one_node_of_edge_2] [other_node_of_edge_2] [edge_2_cost]...For example, the third line of the file is "2 3 -8874", indicating that there is an edge connecting vertex 2 and vertex 3 that has cost -8874.You should NOT assume that edge costs are positive, nor should you assume that they are distinct.Your task is to run Prim's minimum spanning tree algorithm on this graph. You should report the overall cost of a minimum spanning tree --- an integer, which may or may not be negative --- in the box below.IMPLEMENTATION NOTES: This graph is small enough that the straightforward O(mn) time implementation of Prim's algorithm should work fine. OPTIONAL: For those of you seeking an additional challenge, try implementing a heap-based version. The simpler approach, which should already give you a healthy speed-up, is to maintain relevant edges in a heap (with keys = edge costs). The superior approach stores the unprocessed vertices in the heap, as described in lecture. Note this requires a heap that supports deletions, and you'll probably need to maintain some kind of mapping between vertices and their positions in the heap.
###Code
with open('MST.txt', 'r') as f:
lines = f.readlines()
lines = list(map(lambda x: list(map(int, x.split())), lines))
n, m = lines[0]
edges = list(map(lambda x: [x[2], x[0], x[1]], lines[1:]))  # [cost, u, v]
# naive implementation
# initialize
V = set(range(1,n+1))
X = set()
T = []
e_cheapest = min(edges)
X.add(e_cheapest[1])
X.add(e_cheapest[2])
T.append(e_cheapest)
while X != V:
    c = np.inf
    for e in edges:
        # an edge crosses the cut if exactly one endpoint is already in X
        if (e[1] in X) + (e[2] in X) == 1:
            if e[0] <= c:
                e_cheapest = e
                c = e[0]
X.add(e_cheapest[1])
X.add(e_cheapest[2])
T.append(e_cheapest)
len(T)
total_cost = sum(list(map(lambda x: x[0], T)))
total_cost
# heap implementation
# initialize adjacency lists: G[v] = [[cost, neighbor], ...]
G = [[] for _ in range(n + 1)]
for cost, u, v in edges:
    G[u].append([cost, v])
    G[v].append([cost, u])
V = set(range(1,n+1))
X = set()
T = []
e_cheapest = min(edges)
X.add(e_cheapest[1])
X.add(e_cheapest[2])
T.append(e_cheapest)
heap_v = list(map(lambda x: [np.inf, x, x], V-X))
def update_cheapest_edge(X, G, heap_v):
    # refresh each frontier vertex's cheapest edge into X, then re-heapify
    # (a simple substitute for a heap with decrease-key support)
    for i in range(len(heap_v)):
        v1 = heap_v[i][1]
v2 = heap_v[i][2]
if (v1 not in X) + (v2 not in X) == 1:
if v1 not in X:
v3 = v1
else:
v3 = v2
elif v1 == v2:
v3 = v1
c = np.inf
v4 = 0
for e in G[v3]:
if e[1] in X and e[0] < c:
v4 = e[1]
c = e[0]
if c != np.inf:
heap_v[i] = [c, v3, v4]
#print(heap_v[i])
heapq.heapify(heap_v)
return
update_cheapest_edge(X, G, heap_v)
#print(heap_v)
while X != V:
e = heapq.heappop(heap_v)
#print(e)
if e[1] in X:
X.add(e[2])
elif e[2] in X:
X.add(e[1])
else:
        print('warning: popped an entry with no endpoint in the tree')  # should not happen
update_cheapest_edge(X, G, heap_v)
T.append(e)
len(T)
total_cost = sum(list(map(lambda x: x[0], T)))
total_cost
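###Output
_____no_output_____
###Markdown
A more standard heap-based alternative is "lazy deletion": push every edge that crosses into the tree and simply skip stale entries when they are popped (a sketch reusing `G` and `n` from above; assumes the graph is connected):
###Code
import heapq

def prim_lazy(G, n, start=1):
    in_tree = [False] * (n + 1)
    in_tree[start] = True
    heap = [(c, v) for c, v in G[start]]  # (cost, vertex) edges leaving the tree
    heapq.heapify(heap)
    total, added = 0, 1
    while heap and added < n:
        c, v = heapq.heappop(heap)
        if in_tree[v]:
            continue  # stale entry - v was already added via a cheaper edge
        in_tree[v] = True
        total += c
        added += 1
        for c2, w in G[v]:
            if not in_tree[w]:
                heapq.heappush(heap, (c2, w))
    return total

prim_lazy(G, n)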
###Output
_____no_output_____ |
week_4/week_4_unit_3_readdata_notebook.ipynb | ###Markdown
Reading data from files How to read from a file now? Files are organized sequentially as mentioned before, i.e. they consist of consecutive lines. For processing sequences the `for` loop is suitable. Specifically, one can iterate over the lines of a file as follows:
###Code
# open file
file = open("lorem_ipsum.txt", "r")
# read file line by line and output the lines
for line in file:
print(line)
# close file
file.close()
###Output
_____no_output_____
###Markdown
If you compare the output of the program with the content of the file (e.g. in a text editor), you notice that blank lines have been added to the output. What is the reason for this? At the end of each line there is a line break `\n` in the text file. This is only visible indirectly, because the text continues on the next line. On output, the function `print()` adds another line break, hence the blank line. You can correct this behaviour in several ways. One way is to set the `end` parameter in the `print()` function to an empty string `end = ""`. Another way is to *strip* the line first. For strings there is a method `.strip()`. This removes spaces, tabs and line breaks at the beginning and at the end of a string. `.strip()` is often used when reading forms to prevent a leading space from changing the input. With one optional argument, you could also specify which characters should be removed. Alternatively, `.lstrip()` or `.rstrip()` can be used. In this case something is deleted only on the left or right of the string.
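A quick illustration of `.strip()` and its optional argument (using hypothetical strings, not read from the file):
###Code
line = "  Lorem ipsum \n"
print(repr(line.strip()))          # 'Lorem ipsum' - whitespace and the line break removed
print(repr(line.rstrip()))         # '  Lorem ipsum' - stripped on the right side only
print(repr("xxABCxx".strip("x")))  # 'ABC' - the given characters are removed instead
###Output
_____no_output_____
###Markdown
The reading loop with `.strip()` then looks like this: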
###Code
# Open file
file = open("lorem_ipsum.txt", "r")
# read file line by line, strip from and output the lines
for line in file:
line = line.strip()
print(line)
# Close file
file.close()
###Output
_____no_output_____
###Markdown
Output the contents of a file twice In the following program, the `for` loop is run twice. What does the output look like? Why?
###Code
# open file
file = open("lorem_ipsum.txt", "r")
# read file line by line and print the lines
print("First round")
for line in file:
line = line.strip()
print(line)
# read file line by line and print the lines
print("Second round")
for line in file:
line = line.strip()
print(line)
# close file
file.close()
###Output
_____no_output_____
###Markdown
When reading a file, the "read cursor" or "read pointer" is moved character by character over the file. If the *read pointer* arrives at the end of the file and is **not** reset or set to another position, it can not continue reading as the file ends there. To place the *read cursor*, the method `.seek()` can be used. Using `.seek()` to place the read pointer The `.seek()` method can be used to reposition the read cursor (or read pointer). Two arguments are passed to the method. The first argument specifies by how many **bytes (!)** the pointer is moved. The second argument specifies from where to reposition. The second argument can be used as follows:

| Value / Example  | Description                                        |
| ---------------- | -------------------------------------------------- |
| 0                | From start (default value)                         |
| 1                | From current position                              |
| 2                | From end                                           |
| `file.seek(3)`   | Pointer is moved to 3rd byte                       |
| `file.seek(5,1)` | Pointer is moved 5 positions from current position |
| `file.seek(0,0)` | Pointer is moved back to the beginning of the file |

The two files *numbers1.txt* as well as *numbers2.txt* each contain the numbers from 0 to 100. First look at the files in an editor. Experiment with these files. Try to adjust the parameters of `.seek()` in a way that only one number (e.g. the number 50) is output.
###Code
file = open("numbers1.txt", "r")
file.seek(30, 0)
for line in file:
print(line)
file.seek(0)
for line in file:
line = line.strip()
print(line)
file.close()
###Output
_____no_output_____
###Markdown
Read a file into a list in one go It is possible that the line breaks are superfluous and only exist because a paper page has a limited width, for example. In this case, it may make sense to read the entire file "in one go" without iterating over the lines in a loop. The method `.readlines()` is useful for this. The result is a list with **one entry per line** (to get the whole file as a single string instead, use `.read()`).
###Code
# Open file
file = open("lorem_ipsum.txt", "r")
# read file in one go
lines = file.readlines()
print(lines)
# Close file
file.close()
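###Output
_____no_output_____
###Markdown
By the way, a `with` block closes the file automatically, so the explicit `file.close()` can be omitted:
###Code
with open("lorem_ipsum.txt", "r") as file:
    lines = file.readlines()
print(lines)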
###Output
_____no_output_____ |
Colab_notebook.ipynb | ###Markdown
Capstone Project Image classifier for the SVHN dataset Instructions In this notebook, you will create a neural network that classifies real-world images of digits. You will use concepts from throughout this course in building, training, testing, validating and saving your Tensorflow classifier model. This project is peer-assessed. Within this notebook you will find instructions in each section for how to complete the project. Pay close attention to the instructions as the peer review will be carried out according to a grading rubric that checks key parts of the project instructions. Feel free to add extra cells into the notebook as required. How to submit When you have completed the Capstone project notebook, you will submit a pdf of the notebook for peer review. First ensure that the notebook has been fully executed from beginning to end, and all of the cell outputs are visible. This is important, as the grading rubric depends on the reviewer being able to view the outputs of your notebook. Save the notebook as a pdf (you could download the notebook with File -> Download .ipynb, open the notebook locally, and then File -> Download as -> PDF via LaTeX), and then submit this pdf for review. Let's get started! We'll start by running some imports, and loading the dataset. For this project you are free to make further imports throughout the notebook as you wish.
###Code
import tensorflow as tf
from scipy.io import loadmat
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, BatchNormalization
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping, LearningRateScheduler
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import pandas as pd
gpu_options = tf.compat.v1.GPUOptions(allow_growth=True)
sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(gpu_options=gpu_options))
tf.__version__
###Output
_____no_output_____
###Markdown
For the capstone project, you will use the [SVHN dataset](http://ufldl.stanford.edu/housenumbers/). This is an image dataset of over 600,000 digit images in all, and is a harder dataset than MNIST as the numbers appear in the context of natural scene images. SVHN is obtained from house numbers in Google Street View images.* Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu and A. Y. Ng. "Reading Digits in Natural Images with Unsupervised Feature Learning". NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.The train and test datasets required for this project can be downloaded from [here](http://ufldl.stanford.edu/housenumbers/train.tar.gz) and [here](http://ufldl.stanford.edu/housenumbers/test.tar.gz). Once unzipped, you will have two files: `train_32x32.mat` and `test_32x32.mat`. You should store these files in Drive for use in this Colab notebook. Your goal is to develop an end-to-end workflow for building, training, validating, evaluating and saving a neural network that classifies a real-world image into one of ten classes.
###Code
# Load the dataset from your drive folder
train = loadmat('/home/marcin/Pictures/capstone_project/train_32x32.mat')
test = loadmat('/home/marcin/Pictures/capstone_project/test_32x32.mat')
###Output
_____no_output_____
###Markdown
Both `train` and `test` are dictionaries with keys `X` and `y` for the input images and labels respectively. 1. Inspect and preprocess the dataset* Extract the training and testing images and labels separately from the train and test dictionaries loaded for you.* Select a random sample of images and corresponding labels from the dataset (at least 10), and display them in a figure.* Convert the training and test images to grayscale by taking the average across all colour channels for each pixel. _Hint: retain the channel dimension, which will now have size 1._* Select a random sample of the grayscale images and corresponding labels from the dataset (at least 10), and display them in a figure.
###Code
train.keys()
train_data, train_targets = train['X'], train['y']
test_data, test_targets = test['X'], test['y']
print(train_data.shape)
print(train_targets.shape)
print(test_data.shape)
print(test_targets.shape)
for i, target in enumerate(train_targets[:300]):
    if target == 10:
        print('index =', i)
###Output
index = 52
index = 84
index = 93
index = 96
index = 108
index = 144
index = 182
index = 206
index = 215
index = 218
index = 226
index = 236
index = 274
index = 294
###Markdown
**Zeros are represented in targets as "10"**
###Code
num_of_img = 12
first_img_index = 51
fig, ax = plt.subplots(1, num_of_img, figsize=(32, 3))
for i in range(num_of_img):
ax[i].set_axis_off()
ax[i].imshow(train_data[..., i + first_img_index])
ax[i].set_title(train_targets[i + first_img_index][0], size=32)
###Output
_____no_output_____
###Markdown
**Instead, zeros will now be represented as 0 (rather than 10)**
###Code
# relabel the 10s as 0s (equivalently: train_targets[train_targets == 10] = 0)
for i, target in enumerate(train_targets):
    if target == 10:
        train_targets[i] = 0
for k, target in enumerate(test_targets):
    if target == 10:
        test_targets[k] = 0
num_of_img = 12
first_img_index = 51
fig, ax = plt.subplots(1, num_of_img, figsize=(32, 3))
for i in range(num_of_img):
ax[i].set_axis_off()
ax[i].imshow(train_data[..., i + first_img_index])
ax[i].set_title(train_targets[i + first_img_index][0], size=32)
###Output
_____no_output_____
###Markdown
**Converting images to grayscale**
###Code
def get_grayscale_image(data_images, samples=train_data.shape[3]):
    # average over the colour channels, keeping a trailing channel dimension of size 1
    data_images_grayscale = np.zeros((samples, data_images.shape[0],
                                      data_images.shape[1]))
    for img_num in range(samples):
        data_images_grayscale[img_num, ...] = np.average(data_images[:, :, :, img_num],
                                                         axis=2)
    return data_images_grayscale[..., np.newaxis]
test_data.shape[3]
train_data_grayscale = get_grayscale_image(train_data)
test_data_grayscale = get_grayscale_image(test_data, samples=test_data.shape[3])
###Output
_____no_output_____
###Markdown
**Training images**
###Code
num_of_img = 12
first_img_index = 51
fig, ax = plt.subplots(1, num_of_img, figsize=(32, 1))
for i in range(num_of_img):
ax[i].set_axis_off()
ax[i].imshow(train_data_grayscale[i + first_img_index], cmap='gray')
ax[i].set_title(train_targets[i + first_img_index][0], size=32)
###Output
_____no_output_____
###Markdown
**Test images**
###Code
num_of_img = 12
first_img_index = 51
fig, ax = plt.subplots(1, num_of_img, figsize=(32, 1))
for i in range(num_of_img):
ax[i].set_axis_off()
ax[i].imshow(test_data_grayscale[i + first_img_index], cmap='gray')
ax[i].set_title(test_targets[i + first_img_index][0], size=32)
###Output
_____no_output_____
###Markdown
**Normalizing data**
###Code
train_data_grayscale = (train_data_grayscale - train_data_grayscale.mean()) / train_data_grayscale.std()
test_data_grayscale = (test_data_grayscale - test_data_grayscale.mean()) / test_data_grayscale.std()
train_data_grayscale[4, 3, :5]
###Output
_____no_output_____
###Markdown
**One-hot encoding test and train targets**
###Code
train_targets = tf.keras.utils.to_categorical(train_targets)
test_targets = tf.keras.utils.to_categorical(test_targets)
train_targets.shape
train_targets[1:5,]
###Output
_____no_output_____
###Markdown
2. MLP neural network classifier* Build an MLP classifier model using the Sequential API. Your model should use only Flatten and Dense layers, with the final layer having a 10-way softmax output. * You should design and build the model yourself. Feel free to experiment with different MLP architectures. _Hint: to achieve a reasonable accuracy you won't need to use more than 4 or 5 layers._* Print out the model summary (using the summary() method)* Compile and train the model (we recommend a maximum of 30 epochs), making use of both training and validation sets during the training run. * Your model should track at least one appropriate metric, and use at least two callbacks during training, one of which should be a ModelCheckpoint callback.* As a guide, you should aim to achieve a final categorical cross entropy training loss of less than 1.0 (the validation loss might be higher).* Plot the learning curves for loss vs epoch and accuracy vs epoch for both training and validation sets.* Compute and display the loss and accuracy of the trained model on the test set. **Model structure and compilation**
###Code
def get_mlp_model(input_shape=train_data_grayscale[1].shape):
model = tf.keras.models.Sequential([
Flatten(input_shape=input_shape),
Dense(64, activation='relu', name='dense_1'),
Dense(64, activation='relu', name='dense_2'),
Dense(10, activation='softmax', name='dense_3')
])
return model
print('Shape passed to get_mlp_model function:',
train_data_grayscale[1].shape)
model = get_mlp_model(train_data_grayscale[1].shape)
model.summary()
lr = 0.001
def compile_model(model):
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
loss='categorical_crossentropy',
metrics=['accuracy'])
compile_model(model)
###Output
_____no_output_____
###Markdown
**Callbacks**
###Code
def get_best_epoch_callback():
path='/home/marcin/Documents/Capstone_project/mlp_checkpoint_best'
callback = ModelCheckpoint(path,
verbose=1,
save_best_only=True,
save_weights_only=False)
return callback
def get_early_stopping_callback(patience=2):
callback = EarlyStopping(monitor='val_loss',
patience=patience,
verbose=1)
return callback
best_epoch_callback = get_best_epoch_callback()
early_stopping_callback = get_early_stopping_callback()
###Output
_____no_output_____
###Markdown
**Fitting the model**
###Code
history = model.fit(train_data_grayscale,
train_targets,
epochs=30,
batch_size=128,
validation_split=0.15,
callbacks=[best_epoch_callback, early_stopping_callback])
test_loss, test_acc = model.evaluate(test_data_grayscale, test_targets)
print('Test loss = {:.03f}\nTest accuracy = {:.03f}'.format(test_loss, test_acc))
history.history.keys()
df = pd.DataFrame(
{'val_loss': history.history['val_loss'],
'loss': history.history['loss'],
'accuracy': history.history['accuracy'],
'val_accuracy': history.history['val_accuracy']},
    index=range(1, len(history.history['loss']) + 1),  # one row per completed epoch
)
df.head()
df[['loss', 'val_loss']].plot(grid=True)
df[['accuracy', 'val_accuracy']].plot(grid=True)
###Output
_____no_output_____
###Markdown
3. CNN neural network classifier* Build a CNN classifier model using the Sequential API. Your model should use the Conv2D, MaxPool2D, BatchNormalization, Flatten, Dense and Dropout layers. The final layer should again have a 10-way softmax output. * You should design and build the model yourself. Feel free to experiment with different CNN architectures. _Hint: to achieve a reasonable accuracy you won't need to use more than 2 or 3 convolutional layers and 2 fully connected layers.)_* The CNN model should use fewer trainable parameters than your MLP model.* Compile and train the model (we recommend a maximum of 30 epochs), making use of both training and validation sets during the training run.* Your model should track at least one appropriate metric, and use at least two callbacks during training, one of which should be a ModelCheckpoint callback.* You should aim to beat the MLP model performance with fewer parameters!* Plot the learning curves for loss vs epoch and accuracy vs epoch for both training and validation sets.* Compute and display the loss and accuracy of the trained model on the test set. **Model structure and compilation**
###Code
print('input_shape =', train_data_grayscale[1].shape)
def get_cnn_model(input_shape=train_data_grayscale[1].shape):
model = tf.keras.models.Sequential([
Conv2D(16, (3, 3), input_shape=(input_shape),
activation='relu',
bias_initializer='zeros'),
MaxPooling2D((2, 2)),
BatchNormalization(),
Conv2D(32, (3,3), activation='relu'),
MaxPooling2D((2, 2)),
BatchNormalization(),
Conv2D(64, (3,3), activation='relu'),
BatchNormalization(),
Flatten(),
Dense(32, activation='relu', kernel_regularizer='l2'),
Dense(10, activation='softmax'),
])
return model
def compile_model(model, lr=0.0001):
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=lr),
loss='categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
**Callbacks**
###Code
def get_best_epoch_callback_cnn():
path='/home/marcin/Documents/Capstone_project/cnn_checkpoint_best'
callback = ModelCheckpoint(path,
verbose=1,
save_best_only=True,
save_weights_only=False)
return callback
def get_early_stopping_callback_cnn(patience=2, monitor='val_loss'):
callback = EarlyStopping(monitor=monitor,
patience=patience,
verbose=1)
return callback
cnn_best_epoch_callback = get_best_epoch_callback_cnn()
cnn_early_stopping_callback = get_early_stopping_callback_cnn(3)
###Output
_____no_output_____
###Markdown
**Fitting the model**
###Code
model = get_cnn_model(train_data_grayscale[1].shape)
model.summary()
compile_model(model)
epochs=30
history = model.fit(train_data_grayscale,
train_targets,
epochs=epochs,
batch_size=128,
validation_split=0.15,
callbacks=[cnn_best_epoch_callback, cnn_early_stopping_callback])
###Output
Epoch 1/30
480/487 [============================>.] - ETA: 0s - loss: 1.2085 - accuracy: 0.7786
Epoch 00001: val_loss improved from 1.56558 to 1.05020, saving model to /home/marcin/Documents/Capstone_project/cnn_checkpoint_best
INFO:tensorflow:Assets written to: /home/marcin/Documents/Capstone_project/cnn_checkpoint_best/assets
487/487 [==============================] - 3s 6ms/step - loss: 1.2063 - accuracy: 0.7792 - val_loss: 1.0502 - val_accuracy: 0.8182
Epoch 2/30
472/487 [============================>.] - ETA: 0s - loss: 0.9371 - accuracy: 0.8372
Epoch 00002: val_loss improved from 1.05020 to 0.86046, saving model to /home/marcin/Documents/Capstone_project/cnn_checkpoint_best
INFO:tensorflow:Assets written to: /home/marcin/Documents/Capstone_project/cnn_checkpoint_best/assets
487/487 [==============================] - 3s 6ms/step - loss: 0.9342 - accuracy: 0.8376 - val_loss: 0.8605 - val_accuracy: 0.8488
Epoch 3/30
477/487 [============================>.] - ETA: 0s - loss: 0.7820 - accuracy: 0.8615
Epoch 00003: val_loss improved from 0.86046 to 0.74323, saving model to /home/marcin/Documents/Capstone_project/cnn_checkpoint_best
INFO:tensorflow:Assets written to: /home/marcin/Documents/Capstone_project/cnn_checkpoint_best/assets
487/487 [==============================] - 3s 6ms/step - loss: 0.7815 - accuracy: 0.8613 - val_loss: 0.7432 - val_accuracy: 0.8627
Epoch 4/30
480/487 [============================>.] - ETA: 0s - loss: 0.6778 - accuracy: 0.8751
Epoch 00004: val_loss improved from 0.74323 to 0.66292, saving model to /home/marcin/Documents/Capstone_project/cnn_checkpoint_best
INFO:tensorflow:Assets written to: /home/marcin/Documents/Capstone_project/cnn_checkpoint_best/assets
487/487 [==============================] - 3s 6ms/step - loss: 0.6766 - accuracy: 0.8753 - val_loss: 0.6629 - val_accuracy: 0.8726
Epoch 5/30
476/487 [============================>.] - ETA: 0s - loss: 0.5995 - accuracy: 0.8854
Epoch 00005: val_loss improved from 0.66292 to 0.60230, saving model to /home/marcin/Documents/Capstone_project/cnn_checkpoint_best
INFO:tensorflow:Assets written to: /home/marcin/Documents/Capstone_project/cnn_checkpoint_best/assets
487/487 [==============================] - 3s 6ms/step - loss: 0.6000 - accuracy: 0.8852 - val_loss: 0.6023 - val_accuracy: 0.8792
Epoch 6/30
479/487 [============================>.] - ETA: 0s - loss: 0.5420 - accuracy: 0.8926
Epoch 00006: val_loss improved from 0.60230 to 0.55425, saving model to /home/marcin/Documents/Capstone_project/cnn_checkpoint_best
INFO:tensorflow:Assets written to: /home/marcin/Documents/Capstone_project/cnn_checkpoint_best/assets
487/487 [==============================] - 3s 6ms/step - loss: 0.5420 - accuracy: 0.8927 - val_loss: 0.5542 - val_accuracy: 0.8857
Epoch 7/30
479/487 [============================>.] - ETA: 0s - loss: 0.4973 - accuracy: 0.8986
Epoch 00007: val_loss improved from 0.55425 to 0.51523, saving model to /home/marcin/Documents/Capstone_project/cnn_checkpoint_best
INFO:tensorflow:Assets written to: /home/marcin/Documents/Capstone_project/cnn_checkpoint_best/assets
487/487 [==============================] - 3s 6ms/step - loss: 0.4972 - accuracy: 0.8986 - val_loss: 0.5152 - val_accuracy: 0.8891
Epoch 8/30
481/487 [============================>.] - ETA: 0s - loss: 0.4612 - accuracy: 0.9037
Epoch 00008: val_loss improved from 0.51523 to 0.48582, saving model to /home/marcin/Documents/Capstone_project/cnn_checkpoint_best
INFO:tensorflow:Assets written to: /home/marcin/Documents/Capstone_project/cnn_checkpoint_best/assets
487/487 [==============================] - 3s 6ms/step - loss: 0.4612 - accuracy: 0.9036 - val_loss: 0.4858 - val_accuracy: 0.8931
Epoch 9/30
486/487 [============================>.] - ETA: 0s - loss: 0.4321 - accuracy: 0.9082
Epoch 00009: val_loss improved from 0.48582 to 0.46522, saving model to /home/marcin/Documents/Capstone_project/cnn_checkpoint_best
INFO:tensorflow:Assets written to: /home/marcin/Documents/Capstone_project/cnn_checkpoint_best/assets
487/487 [==============================] - 3s 7ms/step - loss: 0.4322 - accuracy: 0.9081 - val_loss: 0.4652 - val_accuracy: 0.8953
Epoch 10/30
472/487 [============================>.] - ETA: 0s - loss: 0.4091 - accuracy: 0.9126
Epoch 00010: val_loss improved from 0.46522 to 0.44529, saving model to /home/marcin/Documents/Capstone_project/cnn_checkpoint_best
INFO:tensorflow:Assets written to: /home/marcin/Documents/Capstone_project/cnn_checkpoint_best/assets
487/487 [==============================] - 3s 6ms/step - loss: 0.4084 - accuracy: 0.9127 - val_loss: 0.4453 - val_accuracy: 0.8980
Epoch 11/30
486/487 [============================>.] - ETA: 0s - loss: 0.3880 - accuracy: 0.9153
Epoch 00011: val_loss improved from 0.44529 to 0.42932, saving model to /home/marcin/Documents/Capstone_project/cnn_checkpoint_best
INFO:tensorflow:Assets written to: /home/marcin/Documents/Capstone_project/cnn_checkpoint_best/assets
487/487 [==============================] - 3s 6ms/step - loss: 0.3879 - accuracy: 0.9153 - val_loss: 0.4293 - val_accuracy: 0.9001
Epoch 12/30
483/487 [============================>.] - ETA: 0s - loss: 0.3707 - accuracy: 0.9184
Epoch 00012: val_loss improved from 0.42932 to 0.41675, saving model to /home/marcin/Documents/Capstone_project/cnn_checkpoint_best
INFO:tensorflow:Assets written to: /home/marcin/Documents/Capstone_project/cnn_checkpoint_best/assets
487/487 [==============================] - 3s 6ms/step - loss: 0.3707 - accuracy: 0.9184 - val_loss: 0.4167 - val_accuracy: 0.9013
Epoch 13/30
477/487 [============================>.] - ETA: 0s - loss: 0.3556 - accuracy: 0.9218
Epoch 00013: val_loss improved from 0.41675 to 0.40844, saving model to /home/marcin/Documents/Capstone_project/cnn_checkpoint_best
INFO:tensorflow:Assets written to: /home/marcin/Documents/Capstone_project/cnn_checkpoint_best/assets
487/487 [==============================] - 3s 6ms/step - loss: 0.3551 - accuracy: 0.9220 - val_loss: 0.4084 - val_accuracy: 0.9028
Epoch 14/30
478/487 [============================>.] - ETA: 0s - loss: 0.3425 - accuracy: 0.9238
Epoch 00014: val_loss improved from 0.40844 to 0.40068, saving model to /home/marcin/Documents/Capstone_project/cnn_checkpoint_best
INFO:tensorflow:Assets written to: /home/marcin/Documents/Capstone_project/cnn_checkpoint_best/assets
487/487 [==============================] - 3s 6ms/step - loss: 0.3421 - accuracy: 0.9239 - val_loss: 0.4007 - val_accuracy: 0.9041
Epoch 15/30
481/487 [============================>.] - ETA: 0s - loss: 0.3309 - accuracy: 0.9271
Epoch 00015: val_loss improved from 0.40068 to 0.39145, saving model to /home/marcin/Documents/Capstone_project/cnn_checkpoint_best
INFO:tensorflow:Assets written to: /home/marcin/Documents/Capstone_project/cnn_checkpoint_best/assets
487/487 [==============================] - 3s 7ms/step - loss: 0.3308 - accuracy: 0.9271 - val_loss: 0.3915 - val_accuracy: 0.9045
Epoch 16/30
485/487 [============================>.] - ETA: 0s - loss: 0.3203 - accuracy: 0.9295
Epoch 00016: val_loss improved from 0.39145 to 0.38659, saving model to /home/marcin/Documents/Capstone_project/cnn_checkpoint_best
INFO:tensorflow:Assets written to: /home/marcin/Documents/Capstone_project/cnn_checkpoint_best/assets
487/487 [==============================] - 3s 6ms/step - loss: 0.3204 - accuracy: 0.9295 - val_loss: 0.3866 - val_accuracy: 0.9054
Epoch 17/30
480/487 [============================>.] - ETA: 0s - loss: 0.3108 - accuracy: 0.9320
Epoch 00017: val_loss improved from 0.38659 to 0.38075, saving model to /home/marcin/Documents/Capstone_project/cnn_checkpoint_best
INFO:tensorflow:Assets written to: /home/marcin/Documents/Capstone_project/cnn_checkpoint_best/assets
487/487 [==============================] - 3s 6ms/step - loss: 0.3108 - accuracy: 0.9319 - val_loss: 0.3808 - val_accuracy: 0.9063
Epoch 18/30
480/487 [============================>.] - ETA: 0s - loss: 0.3026 - accuracy: 0.9345
Epoch 00018: val_loss improved from 0.38075 to 0.37608, saving model to /home/marcin/Documents/Capstone_project/cnn_checkpoint_best
INFO:tensorflow:Assets written to: /home/marcin/Documents/Capstone_project/cnn_checkpoint_best/assets
487/487 [==============================] - 3s 6ms/step - loss: 0.3028 - accuracy: 0.9343 - val_loss: 0.3761 - val_accuracy: 0.9060
Epoch 19/30
479/487 [============================>.] - ETA: 0s - loss: 0.2945 - accuracy: 0.9352
Epoch 00019: val_loss did not improve from 0.37608
487/487 [==============================] - 2s 4ms/step - loss: 0.2942 - accuracy: 0.9353 - val_loss: 0.3788 - val_accuracy: 0.9063
Epoch 20/30
478/487 [============================>.] - ETA: 0s - loss: 0.2875 - accuracy: 0.9377
Epoch 00020: val_loss improved from 0.37608 to 0.36650, saving model to /home/marcin/Documents/Capstone_project/cnn_checkpoint_best
INFO:tensorflow:Assets written to: /home/marcin/Documents/Capstone_project/cnn_checkpoint_best/assets
487/487 [==============================] - 3s 6ms/step - loss: 0.2875 - accuracy: 0.9377 - val_loss: 0.3665 - val_accuracy: 0.9085
Epoch 21/30
481/487 [============================>.] - ETA: 0s - loss: 0.2789 - accuracy: 0.9396
Epoch 00021: val_loss improved from 0.36650 to 0.36359, saving model to /home/marcin/Documents/Capstone_project/cnn_checkpoint_best
INFO:tensorflow:Assets written to: /home/marcin/Documents/Capstone_project/cnn_checkpoint_best/assets
487/487 [==============================] - 3s 6ms/step - loss: 0.2800 - accuracy: 0.9393 - val_loss: 0.3636 - val_accuracy: 0.9103
Epoch 22/30
478/487 [============================>.] - ETA: 0s - loss: 0.2725 - accuracy: 0.9411
Epoch 00022: val_loss improved from 0.36359 to 0.36314, saving model to /home/marcin/Documents/Capstone_project/cnn_checkpoint_best
INFO:tensorflow:Assets written to: /home/marcin/Documents/Capstone_project/cnn_checkpoint_best/assets
487/487 [==============================] - 3s 7ms/step - loss: 0.2731 - accuracy: 0.9409 - val_loss: 0.3631 - val_accuracy: 0.9095
Epoch 23/30
480/487 [============================>.] - ETA: 0s - loss: 0.2670 - accuracy: 0.9432
Epoch 00023: val_loss improved from 0.36314 to 0.36169, saving model to /home/marcin/Documents/Capstone_project/cnn_checkpoint_best
INFO:tensorflow:Assets written to: /home/marcin/Documents/Capstone_project/cnn_checkpoint_best/assets
487/487 [==============================] - 3s 6ms/step - loss: 0.2669 - accuracy: 0.9433 - val_loss: 0.3617 - val_accuracy: 0.9096
Epoch 24/30
479/487 [============================>.] - ETA: 0s - loss: 0.2609 - accuracy: 0.9449
Epoch 00024: val_loss improved from 0.36169 to 0.36018, saving model to /home/marcin/Documents/Capstone_project/cnn_checkpoint_best
INFO:tensorflow:Assets written to: /home/marcin/Documents/Capstone_project/cnn_checkpoint_best/assets
487/487 [==============================] - 3s 6ms/step - loss: 0.2612 - accuracy: 0.9448 - val_loss: 0.3602 - val_accuracy: 0.9109
Epoch 25/30
480/487 [============================>.] - ETA: 0s - loss: 0.2558 - accuracy: 0.9462
Epoch 00025: val_loss improved from 0.36018 to 0.35918, saving model to /home/marcin/Documents/Capstone_project/cnn_checkpoint_best
INFO:tensorflow:Assets written to: /home/marcin/Documents/Capstone_project/cnn_checkpoint_best/assets
487/487 [==============================] - 3s 6ms/step - loss: 0.2559 - accuracy: 0.9462 - val_loss: 0.3592 - val_accuracy: 0.9102
Epoch 26/30
479/487 [============================>.] - ETA: 0s - loss: 0.2507 - accuracy: 0.9474
Epoch 00026: val_loss improved from 0.35918 to 0.35439, saving model to /home/marcin/Documents/Capstone_project/cnn_checkpoint_best
INFO:tensorflow:Assets written to: /home/marcin/Documents/Capstone_project/cnn_checkpoint_best/assets
487/487 [==============================] - 3s 6ms/step - loss: 0.2505 - accuracy: 0.9473 - val_loss: 0.3544 - val_accuracy: 0.9127
Epoch 27/30
479/487 [============================>.] - ETA: 0s - loss: 0.2449 - accuracy: 0.9487
Epoch 00027: val_loss did not improve from 0.35439
487/487 [==============================] - 2s 4ms/step - loss: 0.2452 - accuracy: 0.9487 - val_loss: 0.3572 - val_accuracy: 0.9096
Epoch 28/30
481/487 [============================>.] - ETA: 0s - loss: 0.2399 - accuracy: 0.9509
Epoch 00028: val_loss improved from 0.35439 to 0.35296, saving model to /home/marcin/Documents/Capstone_project/cnn_checkpoint_best
INFO:tensorflow:Assets written to: /home/marcin/Documents/Capstone_project/cnn_checkpoint_best/assets
487/487 [==============================] - 3s 6ms/step - loss: 0.2402 - accuracy: 0.9507 - val_loss: 0.3530 - val_accuracy: 0.9112
Epoch 29/30
480/487 [============================>.] - ETA: 0s - loss: 0.2357 - accuracy: 0.9515
Epoch 00029: val_loss did not improve from 0.35296
487/487 [==============================] - 2s 4ms/step - loss: 0.2358 - accuracy: 0.9516 - val_loss: 0.3532 - val_accuracy: 0.9113
Epoch 30/30
477/487 [============================>.] - ETA: 0s - loss: 0.2314 - accuracy: 0.9521
Epoch 00030: val_loss did not improve from 0.35296
487/487 [==============================] - 2s 4ms/step - loss: 0.2314 - accuracy: 0.9521 - val_loss: 0.3563 - val_accuracy: 0.9110
###Markdown
**Graphs**
###Code
df = pd.DataFrame(
{'val_loss': history.history['val_loss'],
'loss': history.history['loss'],
'accuracy': history.history['accuracy'],
'val_accuracy': history.history['val_accuracy']},
    index=range(1, len(history.history['loss']) + 1),  # one row per completed epoch
)
df.head()
df[['loss', 'val_loss']].plot(grid=True, ylim=(0, 2), title='Losses vs epochs')
df[['accuracy', 'val_accuracy']].plot(grid=True, ylim=(0.7, 1), title='Accuracies vs epochs')
###Output
_____no_output_____
###Markdown
**Model evaluation on test set**
###Code
test_loss, test_accuracy = model.evaluate(test_data_grayscale, test_targets)
print('Test loss = {:.03f}'.format(test_loss))
print('Test accuracy = {:.03f}'.format(test_accuracy))
###Output
814/814 [==============================] - 1s 1ms/step - loss: 0.3831 - accuracy: 0.9029
Test loss = 0.383
Test accuracy = 0.903
###Markdown
4. Get model predictions* Load the best weights for the MLP and CNN models that you saved during the training run.* Randomly select 5 images and corresponding labels from the test set and display the images with their labels.* Alongside the image and label, show each model’s predictive distribution as a bar chart, and the final model prediction given by the label with maximum probability.
###Code
! ls -lh /home/marcin/Documents/Capstone_project
test_data_raw, test_targets_raw = test['X'], test['y']
###Output
_____no_output_____
###Markdown
**Predictions for MLP model**
###Code
from tensorflow.keras.models import load_model
best_mlp_model = load_model('/home/marcin/Documents/Capstone_project/mlp_checkpoint_best')
image_indexes = [32, 435, 4533, 7567, 25543]
df_probabilities = pd.DataFrame({}, index=range(0, 10))
def get_prediction(id):
test_img = test_data_grayscale[id, ...]
preds = best_mlp_model.predict(test_img[np.newaxis, ...])
return preds
for i in range(len(image_indexes)):
df_probabilities.insert(i, str(image_indexes[i]),
get_prediction(image_indexes[i])[0])
df_probabilities
for i in range(5):
fig, (ax, ax2) = plt.subplots(ncols=2)
ax.imshow(test_data_raw[..., image_indexes[i]])
ax.set_title(test_targets_raw[image_indexes[i]][0])
max_probability = 'Predicted number = '\
+ str(df_probabilities[str(image_indexes[i])].argmax()) \
+'\nwith probability = ' + str(df_probabilities[str(image_indexes[i])].max())
ax2 = df_probabilities[str(image_indexes[i])].plot.bar(
xlabel='Digit', title=max_probability , ylim=(0,1),
grid=True, legend=False)
###Output
_____no_output_____
###Markdown
**Predictions for CNN model**
###Code
best_cnn_model = load_model('/home/marcin/Documents/Capstone_project/cnn_checkpoint_best')
image_indexes = [32, 435, 4533, 7567, 25543]
df_probabilities = pd.DataFrame({}, index=range(0, 10))
def get_prediction(id):
test_img = test_data_grayscale[id, ...]
preds = best_cnn_model.predict(test_img[np.newaxis, ...])
return preds
for i in range(len(image_indexes)):
df_probabilities.insert(i, str(image_indexes[i]),
get_prediction(image_indexes[i])[0])
df_probabilities
for i in range(5):
fig, (ax, ax2) = plt.subplots(ncols=2)
ax.imshow(test_data_raw[..., image_indexes[i]])
ax.set_title(test_targets_raw[image_indexes[i]][0])
max_probability = 'Predicted number = '\
+ str(df_probabilities[str(image_indexes[i])].argmax()) \
+'\nIts probability = ' + str(df_probabilities[str(image_indexes[i])].max())
ax2 = df_probabilities[str(image_indexes[i])].plot.bar(
xlabel='Digit', title=max_probability , ylim=(0,1),
grid=True, legend=False)
###Output
_____no_output_____ |
Python-For-Data-Analysis/Chapter 4 Numpy Basics/4.3 Indexing and Slicing techniques with array Tranpose and Swap.ipynb | ###Markdown
Indexing and Slicing techniques with ndarray Transpose and Swap
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Slicing 1D array and using copy() function
###Code
arr=np.arange(15)
arr_slice=arr[4:8] #note the difference between a view and copy
arr_slice[:]=21
print(arr)
arr=np.arange(15)
arr_slice=arr[4:8].copy()  # a copy, so writing to it leaves arr untouched
arr_slice[:]=2
print(arr)
###Output
[ 0 1 2 3 21 21 21 21 8 9 10 11 12 13 14]
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14]
###Markdown
Indexing and Slicing 2D array
###Code
arr=np.arange(15).reshape(3,5)
print(arr)
arr[0,4]  # equivalent to arr[0][4]
arr[0]=21
print(arr)
arr=np.arange(15).reshape(3,5)
print(arr)
new=arr[1:2,1:4]
print(new)
###Output
[[ 0 1 2 3 4]
[ 5 6 7 8 9]
[10 11 12 13 14]]
[[6 7 8]]
###Markdown
Slicing and Indexing 3D array
###Code
arr=np.arange(30).reshape(2,3,5)
print(arr)
new=arr[1:,1:2,1:2]
new.reshape(1,1)  # note: reshape returns a new array; the result here is discarded
print(new)
###Output
[[[ 0 1 2 3 4]
[ 5 6 7 8 9]
[10 11 12 13 14]]
[[15 16 17 18 19]
[20 21 22 23 24]
[25 26 27 28 29]]]
[[[21]]]
###Markdown
Special Indexing 1 - Boolean Indexing
###Code
names=np.array(["A","B","A","A","Z"])
index=np.arange(5).reshape(5,1)
print(index[names=="A"])
print(index[~(names=="A")])
new=np.random.randn(5,5)
new
new[new<0]=0
new
###Output
_____no_output_____
###Markdown
Special Indexing 2 - Fancy Indexing
###Code
arr=np.arange(32).reshape(8,4)
print(arr)
arr=arr[[1,2,3,4],[0,1,2,3]]
print(arr)
#newt= arr[[1, 0, 3, 2]][:, [0, 3, 1, 2]]
#print(newt)
###Output
[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]
[12 13 14 15]
[16 17 18 19]
[20 21 22 23]
[24 25 26 27]
[28 29 30 31]]
[ 4 9 14 19]
###Markdown
ndarray Transpose and Swap
###Code
arr=np.arange(32).reshape(8,4)
x=arr.T
y=np.dot(arr,x)
print(y)
x=np.arange(24).reshape(2,3,4)
print(x)
print("\n")
print(x.transpose(1,0,2))
print("\n")
print(x.transpose(0,2,1))
print("\n")
print(x.swapaxes(1,2))
###Output
[[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]]
[[12 13 14 15]
[16 17 18 19]
[20 21 22 23]]]
[[[ 0 1 2 3]
[12 13 14 15]]
[[ 4 5 6 7]
[16 17 18 19]]
[[ 8 9 10 11]
[20 21 22 23]]]
[[[ 0 4 8]
[ 1 5 9]
[ 2 6 10]
[ 3 7 11]]
[[12 16 20]
[13 17 21]
[14 18 22]
[15 19 23]]]
[[[ 0 4 8]
[ 1 5 9]
[ 2 6 10]
[ 3 7 11]]
[[12 16 20]
[13 17 21]
[14 18 22]
[15 19 23]]]
|
06-cnn-with-keras.ipynb | ###Markdown
Image Classification using Convolutional Neural Network (CNN) with Keras
###Code
!pip install opencv-python imutils scikit-learn keras tensorflow
import os
import random
import cv2
from imutils import paths
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
Loading Images
###Code
%matplotlib inline
image_paths = list(paths.list_images('datasets/animals'))
random.seed(42)
random.shuffle(image_paths)
image = cv2.imread(image_paths[2500])
plt.figure(figsize=(10, 10))
rgb_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(rgb_image);
data = []
labels = []
###Output
_____no_output_____
###Markdown
**Note:** Machine learning models take a *fixed-size input*.
###Code
from keras.preprocessing.image import img_to_array
for image_path in image_paths:
image = cv2.imread(image_path)
label = image_path.split(os.path.sep)[-2]
image = cv2.resize(image, (32, 32), interpolation=cv2.INTER_AREA)
image = img_to_array(image, data_format='channels_first')
data.append(image)
labels.append(label)
data = np.array(data)
labels = np.array(labels)
###Output
_____no_output_____
###Markdown
Normalize images to the range [0, 1].
###Code
data = data.astype('float') / 255.0
from keras.models import Sequential
from keras.layers.convolutional import Conv2D, MaxPooling2D
from keras.layers.core import Dense, Dropout, Flatten
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
lb = LabelBinarizer()
labels = lb.fit_transform(labels)
labels
lb.classes_
X_train, X_test, y_train, y_test = train_test_split(data, labels, test_size=0.25, random_state=30)
X_test.shape
###Output
_____no_output_____
###Markdown
Building a Convolutional NN Model
###Code
# the images were loaded with data_format='channels_first',
# so the input shape is (channels, height, width)
input_shape = (3, 32, 32)
model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same', input_shape=input_shape, activation='relu'))
model.add(Flatten())
model.add(Dense(3, activation='softmax'))
model.summary()
model.compile(
loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy']
)
H = model.fit(
X_train,
y_train,
epochs=20,
validation_split=0.2,
batch_size=128
)
N = np.arange(0, 20)
plt.figure(figsize=(15, 5))
plt.subplot(1, 2, 1)
plt.plot(N, H.history['loss'], label='train_loss')
plt.plot(N, H.history['val_loss'], label='val_loss')
plt.title('Training Loss')
plt.xlabel('Epoch #')
plt.ylabel('Loss')
plt.legend()
plt.subplot(1, 2, 2)
plt.plot(N, H.history['acc'], label="train_acc")
plt.plot(N, H.history['val_acc'], label='val_acc')
plt.title('Training Accuracy')
plt.xlabel('Epoch #')
plt.ylabel('Accuracy')
plt.legend();
y_pred = model.predict(X_test)
print(classification_report(
y_test.argmax(axis=1),
y_pred.argmax(axis=1),
target_names=lb.classes_
))
###Output
precision recall f1-score support
cats 0.56 0.52 0.54 242
dogs 0.53 0.47 0.50 252
panda 0.72 0.86 0.79 256
avg / total 0.61 0.62 0.61 750
###Markdown
Building a (more-complex) Convolutional NN Model (CONV => RELU) * 3 => POOL => FC
###Code
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape, activation='relu', padding='same'))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(128, (3, 3), activation='relu', padding="same"))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(3, activation='softmax'))
model.summary()
model.compile(
loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy']
)
H = model.fit(  # capture the history; the plots below should show *this* model's run
X_train,
y_train,
epochs=20,
validation_split=0.2,
batch_size=128
)
N = np.arange(0, 20)
plt.figure(figsize=(15, 5))
plt.subplot(1, 2, 1)
plt.plot(N, H.history['loss'], label='train_loss')
plt.plot(N, H.history['val_loss'], label='val_loss')
plt.title('Training Loss')
plt.xlabel('Epoch #')
plt.ylabel('Loss')
plt.legend()
plt.subplot(1, 2, 2)
plt.plot(N, H.history['acc'], label="train_acc")
plt.plot(N, H.history['val_acc'], label='val_acc')
plt.title('Training Accuracy')
plt.xlabel('Epoch #')
plt.ylabel('Accuracy')
plt.legend();
y_pred = model.predict(X_test)
print(classification_report(
y_test.argmax(axis=1),
y_pred.argmax(axis=1),
target_names=lb.classes_
))
###Output
precision recall f1-score support
cats 0.59 0.59 0.59 242
dogs 0.57 0.54 0.55 252
panda 0.83 0.87 0.85 256
avg / total 0.66 0.67 0.67 750
###Markdown
Model Persistence
###Code
import pickle
f = open('output/label-bins.pkl', 'wb')
pickle.dump(lb, f)
f.close()
model.save('output/cnn.h5')
from keras.models import load_model
model = load_model('output/cnn.h5')
y_pred = model.predict(X_test)
print(classification_report(
y_test.argmax(axis=1),
y_pred.argmax(axis=1),
target_names=lb.classes_
))
###Output
precision recall f1-score support
cats 0.59 0.59 0.59 242
dogs 0.57 0.54 0.55 252
panda 0.83 0.87 0.85 256
avg / total 0.66 0.67 0.67 750
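###Markdown
A small companion sketch (added, not in the original notebook): the pickled `LabelBinarizer` can be restored the same way, so the class names survive across sessions together with the model.
###Code
with open('output/label-bins.pkl', 'rb') as f:
    lb = pickle.load(f)
print(lb.classes_)
###Output
_____no_output_____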
|
notebooks/multiple_windows/.ipynb_checkpoints/4_zip_results_on_server-checkpoint.ipynb | ###Markdown
This notebook should be used on the server
###Code
from fluctmatch.bigtraj import BigTrajOnServer
bigtraj_folder = '/home/yizaochen/bigtraj_fluctmatch'
cutoff = 4.7
###Output
_____no_output_____
###Markdown
Part 1: Initialize
###Code
host = 'a_tract_21mer'
type_na = 'bdna+bdna'
b_agent = BigTrajOnServer(host, type_na, bigtraj_folder)
###Output
_____no_output_____
###Markdown
Part 2: Zip all results
###Code
b_agent.zip_all_results(cutoff)
###Output
_____no_output_____ |
S04 - Text Classification/BLU07 - Feature Extraction/BLU07 - Learning Notebook - Part 1 of 3 - Preprocessing for NLP.ipynb | ###Markdown
Natural Language Processing Introduction

As you may have noticed, this set of BLUs will revolve around the topic of Natural Language Processing (NLP). As the name implies, this field is all about the processing and handling of language in such a way that a computer may be able to do useful things with it. There are plenty of tasks and problems around it, namely:

- **Speech recognition**: the task of, given a sample of audio, extracting the words that are being spoken or even prosody features, for example.
- **Natural language generation**: the task of putting computational formulations into actual text, for example, automated generation of labels for images, summarisation of texts and data, creation of dialogue systems, etc.
- **Natural language understanding**: the task of getting some meaning out of the data, for instance, recognizing entities in sentences or semantic roles, classifying sentences according to their sentiment, or transforming text into something machines can work on (numbers).

Some of the main tasks and areas of research of NLP are:

- **Part of Speech tagging**: Determine the role of each word in a given sentence, for instance, whether it is an adjective, verb, noun, etc.
- **Word Segmentation**: Break continuous text into words.
- **Parsing**: Define a tree that represents the grammatical structure of a sentence.
- **Machine Translation**: Translate sentences from a source language to a target language automatically.
- **Named entity recognition**: Find parts of the text that correspond to certain entities, like names of places, people, companies, etc.
- **Question answering**: Given a question in human language, find the most appropriate answer.
- **Text to speech**: As the name implies, transform written text into audible, human-like sounds that correspond to the given input.

Many of these tasks are out of the scope of these learning units, but we think that it is important to at least acknowledge that they exist in the realm of NLP. Also, some of these things may seem "easy", but when you think about the diversity that exists in terms of languages you start to understand how daunting all these tasks are. For instance, word segmentation may seem like a really easy task. After all, words are separated by spaces or maybe some punctuation. But if you take a look at Mandarin Chinese, for instance, that's not the case, making that "heuristic" no longer universal. And for many of the tasks there are plenty of corner cases, which make this field one of the most challenging but also most rewarding to work on.

Throughout these learning units we hope to give you some basic understanding of how to transform text into something useful for us and of some of the challenges in this field, solve some interesting problems, and hopefully make you want to learn more about the topic afterwards!

The first part of this BLU goes through some of the fundamental concepts that will be helpful for all the practical tasks that you will need during this month, but also in the future, if you ever need to work with text data. We will start by introducing **regular expressions**, followed by three important concepts in data pre-processing (**tokenization**, **stopwords**, and **stemming**). Finally, we will see what **n-grams** are and what an **n-gram model** is.

Regular Expressions (aka Regex)

Regular expressions are sequences of characters that allow us to define search patterns. They follow a set of rules and are one of the most fundamental concepts in computer science for working with text data.
Cheatsheet [\[1\]](https://regexr.com/3lvai)

`.` - matches any character, except newline.
`\d`, `\s`, `\S` - match digit, match whitespace, not whitespace.
`\b`, `\B` - word boundary, not word boundary.
`[xyz]` - matches x, y or z.
`[^xyz]` - matches anything that is not x, y or z.
`[x-z]` - matches a character between x and z.
`^xyz$` - `^` is the start of the string, `$` is the end of the string.
`\.` - use escaping to match special characters.
`\t`, `\n` - matches tab and newline.
`x*` - matches 0 or more symbols x.
`x+` - matches 1 or more symbols x.
`x?` - matches 0 or 1 symbol x.
`.?`, `*?`, `+?`, etc - represent non-greedy search.
`x{5}` - matches exactly 5 symbols x.
`x{5,}` - matches 5 or more symbols x.
`x{5,8}` - matches between 5 and 8 symbols x.
`xy|yz` - matches `xy` or `yz`.

We use Python's [re](https://docs.python.org/3/library/re.html) library. Using `search()` we can take a certain pattern and look for it in a text. This function returns a `Match` object, from which we can obtain the text portion that was matched by our pattern.
###Code
text = "Lisbon Madrid Lisbon Toulose Oslo Lisbona"
print("Looking for \"Madrid\":")
match = re.search("Madrid", text)
print(match)
print("\nLooking for \"Rome\":")
match = re.search("Rome", text)
print(match)
print("\nLooking for \"Lisbon\":")
match = re.search("Lisbon", text)
print(match)
###Output
Looking for "Madrid":
<re.Match object; span=(7, 13), match='Madrid'>
Looking for "Rome":
None
Looking for "Lisbon":
<re.Match object; span=(0, 6), match='Lisbon'>
###Markdown
So, it is already possible to observe some things about `re.search()`:

- When there is no match, `search()` returns `None`.
- The `Match` object has the indices of the beginning and end of the match, accessible via `match.start()` and `match.end()`.
- If there is more than one instance of the word in the text, only the first will be retrieved.

If we want to return all the matches to our pattern in a given text, we can use the function `findall()`. In this case, the matched portions of the text are returned, instead of `Match` objects.
###Code
pattern = "Lisbon"
for match in re.findall(pattern, text):
print(match)
###Output
Lisbon
Lisbon
Lisbon
###Markdown
Notice that, one of the words was written as _Lisbona_ , but we still match the _Lisbon_ portion of that word. If we add the condition of having a white space after the letter *n* we will only get two matches.
###Code
pattern = "Lisbon\s"
for match in re.findall(pattern, text):
print(match)
###Output
Lisbon
Lisbon
###Markdown
If instead we really want the `Match` objects for some reason, `finditer()` should be used instead.
###Code
pattern = "Lisbon"
for match in re.finditer(pattern, text):
print(match)
###Output
<re.Match object; span=(0, 6), match='Lisbon'>
<re.Match object; span=(14, 20), match='Lisbon'>
<re.Match object; span=(34, 40), match='Lisbon'>
###Markdown
---

Now, using some of the patterns from the cheatsheet above, let's see through a few simple examples how they can help us!
###Code
text = "x xy xyy"
###Output
_____no_output_____
###Markdown
Remembering what we've shown previously, `.` will match any character after x:
###Code
re.findall("x.", text)
###Output
_____no_output_____
###Markdown
`*` will match 0 or more y symbols after xy:
###Code
re.findall("xy*", text)
###Output
_____no_output_____
###Markdown
`+` will match 1 or more y symbols after x:
###Code
re.findall("xy+", text)
###Output
_____no_output_____
###Markdown
`?` will match 0 or 1 y symbols after x:
###Code
re.findall("xy?", text)
###Output
_____no_output_____
###Markdown
`{i}` will match i y symbols after x:
###Code
re.findall("xy{2}", text)
###Output
_____no_output_____
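###Markdown
One cheatsheet entry we haven't exercised yet: appending `?` to a quantifier makes it non-greedy, so it matches as little as possible (a small added illustration).
###Code
print(re.findall("<.+>", "<a> <b>"))   # greedy: matches the longest possible span
print(re.findall("<.+?>", "<a> <b>"))  # non-greedy: stops at the first closing bracket
###Output
_____no_output_____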
###Markdown
---
###Code
text="lotterer Jani Senna conway Kobayashi Lopez buemi Nakajima alonso"
###Output
_____no_output_____
###Markdown
If we want to match only the names that start with capital letters:
###Code
re.findall("[A-Z][a-z]+", text) # find substrings starting with a capital letter
# followed by 1 or more lowercase letters
###Output
_____no_output_____
###Markdown
If we want to match all the names that don't start with the letters "B" or "L":
###Code
re.findall(r"\b[^bBlL\s][A-Za-z]+", text) # find substrings after a word boundary that...
# do not begin with B or L or whitespace
###Output
_____no_output_____
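###Markdown
The cheatsheet's pipe also deserves a quick look: `x|y` matches either alternative (an added example).
###Code
re.findall("Senna|Lopez", text)  # matches whichever alternative occurs
###Output
_____no_output_____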
###Markdown
You may be wondering what that hacky `r` is doing before the actual regex we are using. This has no connection with regex. It is just a way of telling python that it should interpret backslashes `\` literally (Notice how our regex has `\b` and `\s`). For instance:
###Code
print("With r:\n")
print(r"lotterer \n Jani \n Senna conway Kobayashi Lopez buemi Nakajima alonso")
print("\n")
print("Without r:\n")
print("lotterer \n Jani \n Senna conway Kobayashi Lopez buemi Nakajima alonso")
###Output
With r:
lotterer \n Jani \n Senna conway Kobayashi Lopez buemi Nakajima alonso
Without r:
lotterer
Jani
Senna conway Kobayashi Lopez buemi Nakajima alonso
###Markdown
In the first case, since we are using `r`, Python takes `\n` literally, and in the second case it interprets `\n` as the escape sequence for a newline.

---

Imagine now we have some extra information in front of the names, and that we receive a file with many lines. We still want only the names starting with capital letters. So we run the previous regex and...
###Code
text="lotterer Rebellion\nJani Rebellion\nSenna Rebellion\nconway Toyota\nKobayashi Toyota\nLopez Toyota\nbuemi Toyota\nNakajima Toyota\nalonso Toyota"
re.findall("[A-Z][a-z]+", text)
###Output
_____no_output_____
###Markdown
Well, we don't want those extra names in there. So let's try to add the symbol `^` to make sure the expression only captures the beginning part of the sentence.
###Code
re.findall("^[A-Z][a-z]+", text)
###Output
_____no_output_____
###Markdown
Hum.. we got a handful of nothing. Why is this happening? Well, the regex processes all the text as a single line, and the first name doesn't start with a capital letter. To make sure this is the case, let's change `lotterer` to `Lotterer`.
###Code
text="Lotterer Rebellion\nJani rebellion\nSenna Rebellion\nconway toyota\nKobayashi Toyota\nLopez Toyota\nbuemi Toyota\nNakajima toyota\nalonso Toyota"
re.findall("^[A-Z][a-z]+", text)
###Output
_____no_output_____
###Markdown
But we still only capture one line. Luckily, we have [`re.MULTILINE`](https://docs.python.org/3/library/re.html#re.MULTILINE), which allows us to process multiline strings easily.
###Code
re.findall("^[A-Z][a-z]+", text, re.MULTILINE)
###Output
_____no_output_____
###Markdown
And now we were able to get all the information we wanted! And what if we wanted the second part of each line? Well, in this case, that is the last word of the line, so we may use `$`.
###Code
re.findall("[A-Z][a-z]+$", text, re.MULTILINE)
###Output
_____no_output_____
###Markdown
What if we want all full lines ending with `rebellion`?
###Code
re.findall(".*rebellion$", text, flags=(re.MULTILINE|re.IGNORECASE))
###Output
_____no_output_____
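###Markdown
Beyond searching, `re` can also rewrite text: `re.sub` replaces every match with the given string (a short added example).
###Code
print(re.sub("[tT]oyota", "TOYOTA", text))  # normalize the team name on every line
###Output
_____no_output_____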
###Markdown
You may notice that here we are also taking advantage of the flag `re.IGNORECASE`. This is a convenient flag to add if you want case-insensitive matches. Multiple regex flags can be strung together with pipes: `|`.

Regular expressions can get hard to read really fast, but even knowing the basics will certainly be helpful sometime in the future. To better understand how they work, nothing beats practicing, and sites like [this](https://regexr.com/3lvai) and [this](https://regex101.com/) are valuable visual tools to do so. The `re` library also has more powerful methods (such as `re.sub`, shown above), which might be useful for future tasks.

---

Tokenizer

One important step when dealing with text data is to _tokenize_ the data. In practice this means splitting the strings of a corpus into substrings. This is important because it transforms a string into parts that are more suitable for the tools that exist in natural language processing. For instance, the sentence

_"The car went too fast on the second lap. This damaged the tires."_

would be better approached as a list:

_["The", "car", "went", "too", "fast", "on", "the", "second", "lap", ".", "This", "damaged", "the", "tires", "."]_

We will be using [NLTK](https://www.nltk.org/_modules/nltk/tokenize/regexp.html) implementations.
###Code
text = "The car went too fast on the second lap. This damaged the tires..."
tokenizer = RegexpTokenizer('\w+|\$[\d\.]+|\S+')
tokens = tokenizer.tokenize(text)
print(tokens)
###Output
['The', 'car', 'went', 'too', 'fast', 'on', 'the', 'second', 'lap', '.', 'This', 'damaged', 'the', 'tires', '...']
###Markdown
Notice that the tokenizer is created by taking advantage of the regular expressions we learned earlier. This means that we can make different tokenizers according to what we want to split on. For instance, if we had used `[A-Z]\w+`, the tokenizer would only select the words that begin with capital letters.
###Code
tokenizer = RegexpTokenizer('[A-Z]\w+')
tokens = tokenizer.tokenize(text)
print(tokens)
###Output
['The', 'This']
###Markdown
Notice that there are already some pre-defined implementations we can use by taking advantage of `RegexpTokenizer`. These are:

- `BlanklineTokenizer` - Tokenize a string using blank lines as the delimiter.
- `WordPunctTokenizer` - Tokenize a string into alphabetic and non-alphabetic characters.
- `WhitespaceTokenizer` - Tokenize a string using spaces, tabs, and newlines as delimiters.
###Code
from nltk.tokenize import BlanklineTokenizer
from nltk.tokenize import WordPunctTokenizer
from nltk.tokenize import WhitespaceTokenizer
BlanklineTokenizer().tokenize(text)
WordPunctTokenizer().tokenize(text)
WhitespaceTokenizer().tokenize(text)
###Output
_____no_output_____
###Markdown
Notice that the `WordPunctTokenizer()` is similar to the first one we defined. This is the most commonly used tokenizer and the default method of tokenization we will assume whenever we talk about tokenizing.

---

Stemming

Stemming allows us to get the "root" of words. This is important because in certain tasks we are more interested in a broader representation of a given word and not a specific variation of it, like its plural, for instance.

Before using the stemmer it is necessary to download some tools required by `nltk` for the language we want to use. We will be working with the English language, using the NLTK Downloader.
###Code
import nltk

nltk.download('stopwords')
###Output
[nltk_data] Downloading package stopwords to
[nltk_data] /Users/christinemaroti/nltk_data...
[nltk_data] Unzipping corpora/stopwords.zip.
###Markdown
So, let's see what this step gets us for the same example we have been using. To do that, we will be using the NLTK implementation of the [snowball stemmer](https://www.nltk.org/api/nltk.stem.html#nltk.stem.snowball.SnowballStemmer). Notice that there are other stemmers, some of them specific to certain tasks.
###Code
from nltk.stem.snowball import SnowballStemmer

tokenizer = WordPunctTokenizer()
words = tokenizer.tokenize(text)
stemmer = SnowballStemmer("english", ignore_stopwords=True)
stems = list(map(stemmer.stem, words))  # a flat list of stems (no extra list wrapper)
print(stems)
###Output
['the', 'car', 'went', 'too', 'fast', 'on', 'the', 'second', 'lap', '.', 'this', 'damag', 'the', 'tire', '...']
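###Markdown
For contrast with the lemmatization discussion below, here is a minimal sketch using NLTK's WordNet lemmatizer (an added illustration; it requires downloading the `wordnet` resource first).
###Code
nltk.download('wordnet', quiet=True)
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
print(lemmatizer.lemmatize("saw", pos="v"))  # verb reading -> 'see'
print(lemmatizer.lemmatize("saw", pos="n"))  # noun reading -> 'saw'
###Output
_____no_output_____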
###Markdown
We can see that _"damaged"_ and _"tires"_ are transformed into simpler forms of the respective words. Notice as well that all the words have been lowercased. Lowercasing the data is also a common step in text pre-processing.

One thing that you may have noticed was the concept of "stopwords" being used. **Stopwords** are common words in a given corpus or language that, due to being so common, lose interest for most natural language processing applications. For instance, imagine a search engine looking through a whole range of documents. Words such as "*the*", "*a*", "*at*", etc. will be present in so many documents that using them in the search will not reduce the number of possible files that could be relevant to our query. So filtering them out is beneficial to our goal.

In the specific case of the stemmer function that we are using, defining `ignore_stopwords` as `True` will prevent the stemming of stopwords.

In the next part of this BLU you will read about stopwords again, as they are important for the task you will be doing there.

Besides stemming there is also the process of **lemmatization** (illustrated above). Both processes share the goal of getting the root of the word, or more formally, reducing inflectional forms of a word to a common base form [\[7\]](https://nlp.stanford.edu/IR-book/html/htmledition/stemming-and-lemmatization-1.html), but they act differently. Whereas stemming follows a heuristic approach that drops the suffix of words in order to get closer to the common base form, lemmatization uses a dictionary and morphological analysis of words to return the base form of words, known as the _lemma_.

Using the example in the cited reference, if shown the word _saw_, stemming would tend to return only *s*, while lemmatization would take into account whether the word was the verb or the noun and, correspondingly, return _see_ or _saw_ as the base form of the word.

As you may expect, lemmatization is much more expensive in computational terms and, for certain applications, stemming might be more than enough to obtain good results. We will be using only stemming throughout the NLP learning units.

---

N-Grams

_n-grams_ correspond to sequences of n consecutive elements from a given sentence. Commonly each element is a word, or "token", but we may define it as we wish for the task at hand. Usually we refer to unigrams, bigrams, trigrams, four-grams, etc. according to the length of the sequence of elements.

For instance, for the sentence

`"The driver made a mistake"`,

we would have:

- unigrams: `The`, `driver`, `made`, `a`, `mistake`
- bigrams: `The driver`, `driver made`, `made a`, `a mistake`
- trigrams: `The driver made`, `driver made a`, `made a mistake`
- four-grams: `The driver made a`, `driver made a mistake`

We will create _n-grams_ by taking advantage of the [NLTK ngrams](http://www.nltk.org/_modules/nltk/model/ngram.html) implementation, using the tokenized list `words` created previously.
###Code
from nltk.util import ngrams

print(words)
print(list(ngrams(words, 1)))
print(list(ngrams(words, 2)))
print(list(ngrams(words, 3)))
###Output
[('The', 'car', 'went'), ('car', 'went', 'too'), ('went', 'too', 'fast'), ('too', 'fast', 'on'), ('fast', 'on', 'the'), ('on', 'the', 'second'), ('the', 'second', 'lap'), ('second', 'lap', '.'), ('lap', '.', 'This'), ('.', 'This', 'damaged'), ('This', 'damaged', 'the'), ('damaged', 'the', 'tires'), ('the', 'tires', '...')]
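###Markdown
`ngrams` also supports padding, which is handy for language models that need explicit sentence-boundary symbols (an added sketch; the `<s>`/`</s>` markers are just illustrative choices).
###Code
list(ngrams(["The", "driver"], 2,
            pad_left=True, pad_right=True,
            left_pad_symbol="<s>", right_pad_symbol="</s>"))
###Output
_____no_output_____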
|
solutions_do_not_open/Neural_Networks_with_Keras_solution.ipynb | ###Markdown
Learn with us: www.zerotodeeplearning.comCopyright © 2021: Zero to Deep Learning ® Catalit LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Neural Networks with Keras
###Code
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns
###Output
_____no_output_____
###Markdown
Shallow and Deep Networks
###Code
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=1000, noise=0.1, random_state=0)
sns.scatterplot(x=X[:, 0], y=X[:, 1], hue=y);
X.shape
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.3,
random_state=0)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.optimizers import SGD, Adam
from tensorflow.keras.losses import BinaryCrossentropy, SparseCategoricalCrossentropy
###Output
_____no_output_____
###Markdown
Shallow Model
###Code
model = Sequential([
Dense(1, input_shape=(2,))
])
model.compile(optimizer=Adam(learning_rate=0.05),
loss=BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
model.summary()
h = model.fit(X_train, y_train, epochs=50,
verbose=0, validation_split=0.1)
pd.DataFrame(h.history).plot();
def plot_decision_boundary(model, X, y):
amin, bmin = X.min(axis=0) - 0.1
amax, bmax = X.max(axis=0) + 0.1
hticks = np.linspace(amin, amax, 101)
vticks = np.linspace(bmin, bmax, 101)
aa, bb = np.meshgrid(hticks, vticks)
ab = np.c_[aa.ravel(), bb.ravel()]
c = model.predict(ab)
cc = c.reshape(aa.shape)
plt.figure(figsize=(12, 8))
plt.contourf(aa, bb, cc, cmap='bwr', alpha=0.2)
    sns.scatterplot(x=X[:, 0], y=X[:, 1], hue=y);  # keyword arguments are required by newer seaborn versions
plot_decision_boundary(model, X, y)
loss, accuracy = model.evaluate(X_test, y_test, verbose=0)
accuracy
###Output
_____no_output_____
###Markdown
Exercise 1: Deep modelThe model above was not able to perfectly classify the data. Build a deep model with at least 1 or 2 hidden layers and re-train it on the data. You should be able to obtain 100% accuracy. Remember to include the activation function in the definition of each layer.- Define a model- Compile the model- Fit the model- Plot the training history- Plot the decision boundary- Compare the model performance on training and test set- Print the confusion matrix for the test set (bonus points if you make it pretty)
###Code
from sklearn.metrics import accuracy_score, confusion_matrix
model = Sequential([
Dense(4, input_shape=(2,), activation='tanh'),
Dense(2, activation='tanh'),
Dense(1, activation='sigmoid')
])
model.compile(optimizer=Adam(learning_rate=0.01),
loss='binary_crossentropy',
metrics=['accuracy'])
h = model.fit(X_train, y_train, epochs=100,
verbose=0, validation_split=0.1)
pd.DataFrame(h.history).plot();
plot_decision_boundary(model, X, y)
# the final layer is a single sigmoid unit, so threshold its (n, 1) output at 0.5;
# argmax over axis=1 of a (n, 1) array would always return 0
y_train_pred = (model.predict(X_train) > 0.5).astype(int).ravel()
y_test_pred = (model.predict(X_test) > 0.5).astype(int).ravel()
print("The Accuracy score on the Train set is:\t{:0.3f}".format(accuracy_score(y_train, y_train_pred)))
print("The Accuracy score on the Test set is:\t{:0.3f}".format(accuracy_score(y_test, y_test_pred)))
cm = confusion_matrix(y_test, y_test_pred)
pd.DataFrame(cm)
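# Bonus ("make it pretty"): the same confusion matrix as a heatmap, using
# seaborn, which is already imported above as sns (illustrative addition)
sns.heatmap(pd.DataFrame(cm), annot=True, fmt='d', cmap='Blues');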
###Output
_____no_output_____
###Markdown
Multiclass classification with Images
###Code
from tensorflow.keras.datasets import fashion_mnist
(X_train, y_train), (X_test, y_test) = fashion_mnist.load_data()
X_train, X_test = X_train / 255.0, X_test / 255.0
label_description = [
"T-shirt/top",
"Trouser",
"Pullover",
"Dress",
"Coat",
"Sandal",
"Shirt",
"Sneaker",
"Bag",
"Ankle boot"
]
X_train.shape
y_train.shape
plt.figure(figsize=(10, 10))
for i in range(16):
plt.subplot(4, 4, i+1)
plt.imshow(X_train[i], cmap='gray')
plt.title(label_description[y_train[i]])
plt.axis('off')
model = Sequential([
Flatten(input_shape=(28, 28)),
Dense(256, activation='relu'),
Dense(128, activation='relu'),
Dense(10, activation='softmax')
])
model.compile('adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
h = model.fit(X_train, y_train, epochs=5, validation_split=0.1)
pd.DataFrame(h.history).plot();
y_pred = model.predict(X_test)
y_pred[:5]
y_test
y_pred_class = np.argmax(y_pred, axis=-1)
y_pred_class
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred_class, target_names=label_description))
cm = confusion_matrix(y_test, y_pred_class)
df = pd.DataFrame(cm, index=label_description, columns=label_description)
df
###Output
_____no_output_____
###Markdown
Exercise 2: Convolutional networks and GPUUse a convolutional model to improve the performance. Write a model like this one:```pythonmodel = Sequential([ Reshape(target_shape=(28, 28, 1), input_shape=(28, 28)), Conv2D( your code here), Conv2D( your code here), MaxPooling2D(), Flatten(), Dense( your code here), Dense( your code here)])```And train it on the data for 5 epochs. You should be able to bring the accuracy above 90%.Bonus points if you figure out how to change Colab's `Notebook settings` to use GPU acceleration.Remember to display the confusion matrix for the test set.
###Code
from tensorflow.keras.layers import Reshape, Conv2D, MaxPooling2D
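# (Bonus) a quick check that a GPU is visible after switching Colab's
# Notebook settings (Runtime -> Change runtime type -> GPU); illustrative addition
import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))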
model = Sequential([
Reshape(target_shape=(28, 28, 1),
input_shape=(28, 28)),
Conv2D(32, (3, 3), activation='relu'),
Conv2D(32, (3, 3), activation='relu'),
MaxPooling2D(),
Flatten(),
Dense(128, activation='relu'),
Dense(10, activation='softmax')
])
model.compile('adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
h = model.fit(X_train, y_train, epochs=3, validation_split=0.1)
pd.DataFrame(h.history).plot();
y_test_pred = np.argmax(model.predict(X_test),axis=1)
cm = confusion_matrix(y_test, y_test_pred)
df = pd.DataFrame(cm, index=label_description, columns=label_description)
df
###Output
_____no_output_____ |
scripts/Ben/.ipynb_checkpoints/20170812_his_integration_check-checkpoint.ipynb | ###Markdown
[Condensed interactive well-entry log] Option 2 (add wells one by one) was chosen, and a well was entered and confirmed for each [medium, strain, background, technical replicate] combination, producing the YPD well list reproduced in the code cell below. The loop was exited with `exit` when the SD-HIS prompts began. Trailing notes from the original log: 25, 26 (231); 30, 31 (126, 127).
###Code
#The YPD list was generated with the routine above. The SDC lists were then generated from the YPD list
#manually typing in coordinates. This will be customized for each experiment layout.
well_list = {'YPD' : ['F1', 'F7', 'C1', 'C7', 'G1', 'G7', 'D1', 'D7', 'H1', 'H7',
'E1', 'E7', 'F2', 'F8', 'C2', 'C8', 'G2', 'G8', 'D2', 'D8',
'H2', 'H8', 'E2', 'E8', 'B1', 'B7', 'A7', 'B2', 'B8', 'A1',
'A2'],
'SD-HIS' : ['F3', 'F9', 'C3', 'C9', 'G3', 'G9', 'D3', 'D9', 'H3', 'H9', 'E3', 'E9',
'F4', 'F10', 'C4', 'C10', 'G4', 'G10', 'D4', 'D10', 'H4', 'H10', 'E4', 'E10',
'B3', 'B9', 'A9', 'B4', 'B10', 'A3', 'A4'],
'SDC' : ['F5', 'F11', 'C5', 'C11', 'G5', 'G11', 'D5', 'D11', 'H5', 'H11', 'E5', 'E11',
'F6', 'F12', 'C6', 'C12', 'G6', 'G12', 'D6', 'D12', 'H6', 'H12', 'E6', 'E12',
'B5', 'B11', 'A11', 'B6', 'B11', 'A5', 'A6']}
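# note (added): 'B11' appears twice in the SDC list above; one of the two
# entries may be a typo in the original, but it is left as-is here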
blank = {'YPD':np.mean(OD_data['A8']),
'SD-HIS':np.mean(OD_data['A10']),
'SDC':np.mean(OD_data['A12'])}
growth_data = []
for medium in media:
growth_data.append([OD_data[well]-blank[medium] for well in well_list[medium] ])
growth_data = list(chain.from_iterable(growth_data))
growth_data_df = pd.DataFrame(growth_data, index=data_index_adjusted)
#make columns time points
dt = 15.0/60.0
growth_data_df.columns = growth_data_df.columns*dt
#Plot raw growth curves for all conditions
fig, ax = plt.subplots(2, 3,sharex = True, sharey = True)
for jj, medium in enumerate(media):
for kk, background in enumerate(backgrounds):
#Select data with correct medium and background
growth_data_df_med_bg = growth_data_df.xs((medium,background),level = ['Media','Background'])
#average across technical replicates.
growth_data_df_med_bg_avg = growth_data_df_med_bg.mean(level='Strain')
#Plot growth curves
#only show labels on right side of the plot.
if jj ==2:
growth_data_df_med_bg_avg.transpose().plot(ax = ax[kk,jj], title = medium + ' ' + background, legend = True)
ax[kk,jj].legend(loc='center left', bbox_to_anchor=(1, 0.5))
else:
growth_data_df_med_bg_avg.transpose().plot(ax = ax[kk,jj], title = medium + ' ' + background, legend = False)
#Plot log2 growth curves for all conditions
fig, ax = plt.subplots(2, 3,sharex = True, sharey = True)
for jj, medium in enumerate(media):
for kk, background in enumerate(backgrounds):
#Select data with correct medium and background
growth_data_df_med_bg = growth_data_df.xs((medium,background),level = ['Media','Background'])
#average across technical replicates.
growth_data_df_med_bg_avg = growth_data_df_med_bg.mean(level='Strain')
#take Log(base2) of data.
growth_data_df_med_bg_avg_log = np.log(growth_data_df_med_bg_avg)/np.log(2)
#Plot growth curves
#only show labels on right side of the plot.
if jj ==2:
growth_data_df_med_bg_avg_log.transpose().plot(ax = ax[kk,jj], title = medium + ' ' + background, legend = True)
ax[kk,jj].legend(loc='center left', bbox_to_anchor=(1, 0.5))
else:
growth_data_df_med_bg_avg_log.transpose().plot(ax = ax[kk,jj], title = medium + ' ' + background, legend = False)
###Output
_____no_output_____
###Markdown
Looking at the log plot, the linear growth range is between about 2 h and 7 h. I'll just use YPD for the next plot, which will be a bar plot of growth rates.
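(Note: `plate_reader_tools` is a local helper module that is not shown in this notebook. The cell below is an illustrative addition: a hypothetical stand-in for its `get_slope`, assuming it returns the least-squares slope of a log2-OD series against time in hours, i.e. doublings per hour.)
###Code
import numpy as np

def get_slope(series):
    # hypothetical stand-in for plate_reader_tools.get_slope:
    # least-squares slope of log2(OD) vs. time (hours) = doublings/hr
    t = series.index.values.astype(float)
    return np.polyfit(t, series.values, 1)[0]
###Output
_____no_output_____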
###Code
#This actually only seems true for BY4741. For W303, the linear range looks more like 0 to 2
#plot log of YPD BY4741 strains in linear range
#In between time 1 and 5 log plot looked linear.
t = growth_data_df_med_bg_avg_log.columns
t_low = 1.0
t_high = 5.0
t_check = (t>t_low)*(t<t_high)
#finds indices of all time values above t_low
t_inds = [ind for ind,val in enumerate(t_check) if val==True]
t_linear_range = t[t_inds]
#Select YPD media and BY4741 background
media = 'YPD'
background = 'BY4741'
growth_data_df_med_bg = growth_data_df.xs((media,background),level = ['Media','Background'])
#Take average
growth_data_df_med_bg_avg = growth_data_df_med_bg.mean(level='Strain')
#Take Log2
growth_data_df_med_bg_avg_log = np.log(growth_data_df_med_bg_avg)/np.log(2)
#Extract only linear range
growth_data_df_med_bg_avg_log_linrange = growth_data_df_med_bg_avg_log.iloc[:,t_inds]
#Plot
fix, ax = plt.subplots()
growth_data_df_med_bg_avg_log_linrange.transpose().plot(ax = ax, title = media + ' ' + background, legend = True)
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
#Make Barplot of slopes:
#Look for good sns method for plotting a barplot of the different strains.
fig, ax = plt.subplots()
growth_rates_YPD_BY4741 = pd.DataFrame(growth_data_df_med_bg_avg_log_linrange.T.apply(plate_reader_tools.get_slope))
growth_rates_YPD_BY4741["new_labels"] = ["New His 1", "New His 2", "New His 3", "Old His 1", "Old His 2", "Old His 3", "WT+dCas9","WT","Old strain no deg","Old strain deg"]
growth_rates_YPD_BY4741.set_index("new_labels",inplace=True)
growth_rates_YPD_BY4741.plot(kind='bar', ax=ax, rot = 45, legend = False,)
ax.set_ylabel("Growth rate (doublings/hr)")
#plot log of YPD W303 strains in linear range
#In between time 0 and 1.5 log plot looked linear.
t = growth_data_df_med_bg_avg_log.columns
t_low = 0.0
t_high = 1.5
t_check = (t>t_low)*(t<t_high)
#finds indices of all time values above t_low
t_inds = [ind for ind,val in enumerate(t_check) if val==True]
t_linear_range = t[t_inds]
#Select YPD media and w303 background
media = 'YPD'
background = 'w303'
growth_data_df_med_bg = growth_data_df.xs((media,background),level = ['Media','Background'])
#Take average
growth_data_df_med_bg_avg = growth_data_df_med_bg.mean(level='Strain')
#Take Log2
growth_data_df_med_bg_avg_log = np.log(growth_data_df_med_bg_avg)/np.log(2)
#Extract only linear range
growth_data_df_med_bg_avg_log_linrange = growth_data_df_med_bg_avg_log.iloc[:,t_inds]
#Plot
fix, ax = plt.subplots()
growth_data_df_med_bg_avg_log_linrange.transpose().plot(ax = ax, title = media + ' ' + background, legend = True)
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
#Make Barplot of slopes:
#Look for good sns method for plotting a barplot of the different strains.
fig, ax = plt.subplots()
growth_rates_YPD_W303 = pd.DataFrame(growth_data_df_med_bg_avg_log_linrange.T.apply(plate_reader_tools.get_slope))
growth_rates_YPD_W303["new_labels"] = ["New His 1", "New His 2", "New His 3", "Old His 1", "Old His 2", "Old His 3", "WT"]
growth_rates_YPD_W303.set_index("new_labels",inplace=True)
growth_rates_YPD_W303.plot(kind='bar', ax=ax, rot = 45, legend = False,)
ax.set_ylabel("Growth rate (doublings/hr)")
#fig.savefig(dirname + "plots//YPD_w303_growth_rate.png")
###Output
_____no_output_____ |
l4/updates_TM-L4.ipynb | ###Markdown
L4: Word embeddings In this lab you will explore word embeddings. A **word embedding** is a mapping of words to points in a vector space such that nearby words (points) are similar in terms of their distributional properties. You will use word embeddings to find similar words, and evaluate their usefulness in an inference task. You will use the word vectors that come with [spaCy](http://spacy.io). Note that you will need the ‘large’ English language model; the ‘small’ model that you used in previous labs does not include proper word vectors.
###Code
import spacy
#!python3 -m spacy download en_core_web_lg
nlp = spacy.load('en_core_web_lg')
#nlp = en_core_web_lg.load()
###Output
_____no_output_____
###Markdown
Every word in the model’s vocabulary comes with a 300-dimensional vector, represented as a NumPy array. The following code cell shows how to access the vector for the word *cheese*:
###Code
nlp.vocab['cheese'].vector
###Output
_____no_output_____
###Markdown
Problem 1: Finding similar words Your first task is to use the word embeddings to find similar words. More specifically, we ask you to write a function `most_similar` that takes a vector $x$ and returns a list with the 10 most similar entries in spaCy’s vocabulary, with similarity being defined by cosine.**Tip:** spaCy already has a [`most_similar`](https://spacy.io/api/vectorsmost_similar) method that you can wrap.
###Code
# TODO: Enter your implementation of `most_similar` here
#Source : https://spacy.io/api/vectors
# reshape to a (1, 300) batch containing the single query vector, since most_similar expects 2D input
def most_similar(vocab_vector,n=10):
indexes = nlp.vocab.vectors.most_similar(vocab_vector.reshape(1,-1),n=n)
words = []
for i in indexes[0][0]:
words.append(nlp.vocab[i])
return words
###Output
_____no_output_____
###Markdown
Test your implementation by running the following code cell, which will print the 10 most similar words for the word *cheese*:
###Code
print(' '.join(w.text for w in most_similar(nlp.vocab['cheese'].vector)))
###Output
CHEESE cheese Cheese Cheddar cheddar CHEDDAR BACON Bacon bacon cheeses
###Markdown
You should get the following output:
###Code
CHEESE cheese Cheese Cheddar cheddar CHEDDAR BACON Bacon bacon cheeses
###Output
_____no_output_____
###Markdown
Once you have a working implementation of `most_similar`, use it to think about in what sense the returned words really are ‘similar’ to the cue word. Try to find examples where the cue word and at least one of the words returned by `most_similar` are in the following semantic relations:1. synonymy (exchangeable meanings)2. antonymy (opposite meanings)3. hyperonymy/hyponymy (more specific/less specific meanings)Document your examples in the code cell below.
###Code
# TODO: Insert code here to generate your examples
print(' '.join(w.text for w in most_similar(nlp.vocab['pleased'].vector)))
print(' '.join(w.text for w in most_similar(nlp.vocab['awake'].vector)))
print(' '.join(w.text for w in most_similar(nlp.vocab['algorithm'].vector)))
###Output
pleased PLEASED Pleased delighted DELIGHTED Delighted Thrilled THRILLED thrilled Grateful
AWAKE Awake awake WAKING Waking waking Asleep ASLEEP asleep sleep
ALGORITHM algorithm Algorithm Algorithms algorithms ALGORITHMS COMPUTATION Computation computation heuristic
###Markdown
Problem 2: Plotting similar words Your next task is to visualize the word embedding space by a plot. To do so, you will have to reduce the dimensionality of the space from 300 to 2 dimensions. One suitable algorithm for this is [T-distributed Stochastic Neighbor Embedding](https://en.wikipedia.org/wiki/T-distributed_stochastic_neighbor_embedding) (TSNE), which is implemented in scikit-learn’s [TSNE](https://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html) class.Write a function `plot_most_similar` that takes a list of words (lexemes) and does the following:1. For each word in the list, find the most similar words (lexemes) in the spaCy vocabulary.2. Compute the TSNE transformation of the corresponding vectors to 2 dimensions.3. Produce a scatter plot of the transformed vectors, with the vectors as points and the corresponding word forms as labels.
###Code
# TODO: Write code here to plot the most similar words
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
#Source : https://spacy.io/api/vectors
def plot_most_similar(words):
x, y = [], []
similar_words, similar_vec = [], []
for word in words:
similar_vec.extend([w.vector for w in most_similar(word.vector)])
similar_words.extend([w.text for w in most_similar(word.vector)])
X_embedded = TSNE(n_components=2).fit_transform(similar_vec)
x.extend(X_embedded[:,0])
y.extend(X_embedded[:,1])
plt.figure(figsize=(13,13))
plt.scatter(x,y)
#https://stackoverflow.com/questions/14432557/matplotlib-scatter-plot-with-different-text-at-each-data-point
i = 0
for w in similar_words:
plt.annotate(w, (x[i], y[i]))
i += 1
###Output
_____no_output_____
###Markdown
Test your code by running the following cell:
###Code
plot_most_similar(nlp.vocab[w] for w in ['cheese', 'goat', 'sweden', 'university', 'computer'])
###Output
_____no_output_____
###Markdown
Take a few minutes to look at your plot. What does it tell you? What does it *not* tell you? This plot shows distinct clusters made up of words similar to each input word. It gives a nice overview of how similar words are to each other, such as sweden, norway and finland being clustered together. One thing it doesn't really tell you is how far apart the clusters themselves are: is the cluster with goats and cows as close to the sweden/norway/finland cluster as it is to the bacon and cheese cluster? Problem 3: Analogies In a **word analogy task** you are given three words $x$, $y$, $z$ and have to predict a word $w$ that has the same semantic relation to $z$ as $y$ has to $x$. One example is *man*, *woman*, *brother*, the expected answer being *sister* (the semantic relation is *male*/*female*).[Mikolov et al. (2013)](http://www.aclweb.org/anthology/N13-1090) have shown that some types of word analogy tasks can be solved by adding and subtracting word vectors in a word embedding: the vector for *sister* is the closest vector (in terms of cosine distance) to the vector *brother* $-$ *man* $+$ *woman*. Your next task is to write a function `fourth` that takes in three words (say *brother*, *man*, *woman*) and predicts the word that completes the analogy (in this case, *sister*).
###Code
# TODO: Enter code here to solve the analogy problem
def fourth(w1, w2, w3):
calc = w1.vector - w2.vector + w3.vector
similar_word = most_similar(calc,1)
#print(similar_word) #list of 1 most similar word
return similar_word[0]
###Output
_____no_output_____
###Markdown
Test your code by running the following code. You should get *sister*.
###Code
fourth(nlp.vocab['brother'], nlp.vocab['man'], nlp.vocab['woman']).text
print(fourth(nlp.vocab['Stockholm'], nlp.vocab['Sweden'], nlp.vocab['Germany']).text)
print(fourth(nlp.vocab['Swedish'], nlp.vocab['Sweden'], nlp.vocab['France']).text)
print(fourth(nlp.vocab['better'], nlp.vocab['good'], nlp.vocab['bad']).text)
print(fourth(nlp.vocab['walked'], nlp.vocab['walk'], nlp.vocab['take']).text)
###Output
BERLIN
FRENCH
WORSE
TOOK
###Markdown
You should also be able to get the following:* *Stockholm* $-$ *Sweden* $+$ *Germany* $=$ *Berlin** *Swedish* $-$ *Sweden* $+$ *France* $=$ *French** *better* $-$ *good* $+$ *bad* $=$ *worse** *walked* $-$ *walk* $+$ *take* $=$ *took*Experiment with other examples to see whether you get the expected output. Provide three examples of analogies for which the model produces the ‘correct’ answer, and three examples on which the model ‘failed’. Based on your theoretical understanding of word embeddings, do you have a hypothesis as to why the model succeeds/fails in completing the analogy? Discuss this question in a short text.
###Code
# Correct answer
print(fourth(nlp.vocab['dog'], nlp.vocab['puppy'], nlp.vocab['kitten']).text)
print(fourth(nlp.vocab['summer'], nlp.vocab['hot'], nlp.vocab['cold']).text)
print(fourth(nlp.vocab['German'], nlp.vocab['Germany'], nlp.vocab['Spain']).text)
# Wrong answer
print(fourth(nlp.vocab['cyclist'], nlp.vocab['bike'], nlp.vocab['run']).text) #should be runner
print(fourth(nlp.vocab['yell'], nlp.vocab['talk'], nlp.vocab['quiet']).text) #should be whisper
print(fourth(nlp.vocab['king'], nlp.vocab['man'], nlp.vocab['woman']).text) #should be queen
###Output
CAT
WinteR
SPANISH
RUN
QUIET
KIng
###Markdown
Interesting that the classic king $-$ man $+$ woman example did not work in this test. This may be because our data set is not large enough, or because our method of getting word embeddings is not good enough. (Note also that the nearest neighbour returned here is just a casing variant of the input word *king* itself; implementations of this task typically exclude the three input words from the candidate set, which would make *queen* much more likely to surface.) The ones that work make logical sense, and work because they are basic and made up of easily distinguishable words, like subtracting the country from German and adding Spain to get Spanish, for example. The ones that don't work are more subtle and harder to correctly guess. Natural language inference dataset In the second part of this lab, you will be evaluating the usefulness of word embeddings in the context of a natural language inference task. The data for this part is the [SNLI corpus](https://nlp.stanford.edu/projects/snli/), a collection of 570k human-written English image caption pairs manually labeled with the labels *Entailment*, *Contradiction*, and *Neutral*. Consider the following sentence pair as an example:* Sentence 1: A soccer game with multiple males playing.* Sentence 2: Some men are playing a sport.This pair is labeled with *Entailment*, because sentence 2 is logically entailed (implied) by sentence 1 – if sentence 1 is true, then sentence 2 is true, too. The following sentence pair, on the other hand, is labeled with *Contradiction*, because both sentences cannot be true at the same time.* Sentence 1: A black race car starts up in front of a crowd of people.* Sentence 2: A man is driving down a lonely road.For detailed information about the corpus, refer to [Bowman et al. (2015)](https://www.aclweb.org/anthology/D15-1075/). For this lab, we load the training portion and the development portion of the dataset.**Note:** Because the SNLI corpus is rather big, we initially only load a small portion (25,000 samples) of the training data. Once you have working code for Problems 4–6, you should set the flag `final_evaluation` to `True` (as it is named in the code cell below) and re-run all cells with the full dataset.
###Code
import bz2
import pandas as pd
final_evaluation = False # TODO: Set to True for the final evaluation!
with bz2.open('train.jsonl.bz2', 'rt') as source:
if final_evaluation:
df_train = pd.read_json(source, lines=True)
else:
df_train = pd.read_json(source, lines=True, nrows=25000)
print('Number of sentence pairs in the training data:', len(df_train))
with bz2.open('dev.jsonl.bz2', 'rt') as source:
df_dev = pd.read_json(source, lines=True)
print('Number of sentence pairs in the development data:', len(df_dev))
#!pip install --upgrade pandas
###Output
_____no_output_____
###Markdown
When you inspect the data frames, you will see that we have preprocessed the sentences and separated tokens by spaces. In the columns `tags1` and `tags2`, we have added the part-of-speech tags for every token (as predicted by spaCy), also separated by spaces.
###Code
df_train.head()
###Output
_____no_output_____
###Markdown
Problem 4: Two simple baselines Your first task is to establish two simple baselines for the natural language inference task. Random baseline Implement the standard random baseline that generates predictions by sampling from the empirical distribution of the classes in the training data. Write code to evaluate the performance of this classifier on the development data.
###Code
# TODO: Enter code here to implement the random baseline. Print the classification report.
from sklearn.metrics import classification_report
import numpy as np
from sklearn.dummy import DummyClassifier
X = df_train[['sentence1', 'tags1', 'sentence2', 'tags2']]
y = df_train['gold_label']
dummy_clf = DummyClassifier(strategy="stratified")
dummy_clf.fit(X, y)
dev = df_dev[['sentence1', 'tags1', 'sentence2', 'tags2']]
dev_targets = df_dev['gold_label']
pred = dummy_clf.predict(dev)
print(classification_report(dev_targets, pred))
###Output
precision recall f1-score support
contradiction 0.33 0.34 0.33 3278
entailment 0.34 0.33 0.33 3329
neutral 0.33 0.33 0.33 3235
accuracy 0.33 9842
macro avg 0.33 0.33 0.33 9842
weighted avg 0.33 0.33 0.33 9842
###Markdown
One-sided baselineA second obvious baseline for the inference task is to predict the class label of a sentence pair based on the text of only one of the two sentences, just as in a standard document classification task. Put together a simple [CountVectorizer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html) + [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) pipeline that implements this idea, train it, and evaluate it on the development data. Is it better to base predictions on sentence 1 or sentence 2? Why should one sentence be more useful than the other?
###Code
# TODO: Enter code here to implement the one-sentence baselines. Print the classification reports.
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import CountVectorizer
pipe = Pipeline([('count_vect', CountVectorizer()),
('logist_reg', LogisticRegression())])
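# note (added): LogisticRegression(max_iter=1000) would avoid the
# ConvergenceWarning shown in the output below; the default is kept here to
# match the recorded run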
X = df_train['sentence1']
print(X.shape, y.shape)
pipe.fit(X, y)
sent1 = df_dev["sentence1"]
print(classification_report(dev_targets, pipe.predict(sent1)))
#training on sentence 2 and evaluating
X= df_train['sentence2']
pipe.fit(X, y)
sent2 = df_dev["sentence2"]
print(classification_report(dev_targets, pipe.predict(sent2)))
###Output
C:\Users\dnybe\Anaconda3\lib\site-packages\sklearn\linear_model\_logistic.py:764: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
###Markdown
Training on sentence two to predict the true label performs much better than training on sentence one: accuracy was 0.34 when trained on sentence one, and it improved to 0.64 with just sentence two. Sentence two might be better for this task because it tends to be more general and shorter, while the first sentence may contain irrelevant information that makes predicting from it harder. (A likely deeper reason is that sentence 2, the hypothesis, was written by annotators specifically to fit the requested label, so its wording alone carries label information.) Problem 5: A classifier based on manually engineered features [Bowman et al., 2015](https://www.aclweb.org/anthology/D15-1075/) evaluate a classifier that uses (among others) **cross-unigram features**. This term is used to refer to pairs of unigrams $(w_1, w_2)$ such that $w_1$ occurs in sentence 1, $w_2$ occurs in sentence 2, and both have been assigned the same part-of-speech tag.Your next task is to implement the cross-unigram classifier. To this end, the next cell contains skeleton code for a transformer that you can use as the first component in a classification pipeline. This transformer converts each row of the SNLI data frame into a space-separated string consisting of* the standard unigrams (of sentence 1 or sentence 2 – choose whichever performed better in Problem 4)* the cross-unigrams, as described above.The space-separated string forms a new ‘document’ that can be passed to a vectorizer in exactly the same way as a standard sentence in Problem 4.
###Code
from sklearn.base import BaseEstimator, TransformerMixin
class CrossUnigramsTransformer(BaseEstimator, TransformerMixin):
def __init__(self):
pass
def fit(self, X, y=None):
return self
# Transform a single row of the dataframe.
    def _transform(self, sentence_1, pos_1, sentence_2, pos_2):
        w1 = sentence_1.split(" ")
        w2 = sentence_2.split(" ")
        p1 = pos_1.split(" ")
        p2 = pos_2.split(" ")
        # start from the unigrams of sentence 2, the stronger one-sided baseline
        new_sentence = sentence_2
        for i in range(len(p1)):
            for j in range(len(p2)):
                if p1[i] == p2[j]:
                    # join each cross-unigram pair with '_' so that CountVectorizer's
                    # default tokenizer keeps it as a single feature, and separate
                    # features with spaces as the description above requires
                    new_sentence += ' ' + w1[i] + '_' + w2[j]
return new_sentence
def transform(self, X):
return [self._transform(sentence_1,pos_1,sentence_2,pos_2) for sentence_1,pos_1,sentence_2,pos_2 in X]
#transformer = CrossUnigramsTransformer()
#x_train = df_train.drop(['gold_label'], axis=1)
#processed_train = transformer.transform(x_train.values)
#processed_train
###Output
_____no_output_____
###Markdown
Once you have an implementation of the transformer, extend the pipeline that you built for Problem 4, train it, and evaluate it on the development data.
###Code
# TODO: Enter code here to implement the cross-unigrams classifier. Print the classification report.
transformer = CrossUnigramsTransformer()
x_train = df_train.drop(['gold_label'], axis=1)
x_test = df_dev.drop(['gold_label'], axis=1)
processed_train = transformer.transform(x_train.values)
processed_test = transformer.transform(x_test.values)
pipe.fit(processed_train,y)
print(classification_report(dev_targets, pipe.predict(processed_test)))
###Output
C:\Users\dnybe\Anaconda3\lib\site-packages\sklearn\linear_model\_logistic.py:764: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
###Markdown
Problem 6: A classifier based on word embeddings Your last task in this lab is to build a classifier for the natural language inference task that uses word embeddings. More specifically, we ask you to implement a vectorizer that represents each sentence as the sum of its word vectors – a representation known as the **continuous bag-of-words**. Thus, given that spaCy’s word vectors have 300 dimensions, each sentence will be transformed into a 300-dimensional vector. To represent a sentence pair, the vectorizer should concatenate the vectors for the individual sentences; this yields a 600-dimensional vector. This vector can then be passed to a classifier.The next code cell contains skeleton code for the vectorizer. You will have to implement two methods: one that maps a single sentence to a vector (of length 300), and one that maps a sentence pair to a vector (of length 600).
###Code
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
class PairedSentenceVectorizer(BaseEstimator, TransformerMixin):
def __init__(self):
pass
def fit(self, X, y=None):
return self
    # Vectorize a single sentence as a continuous bag-of-words: the sum of the
    # word vectors of its tokens (one possible implementation of the TODO in
    # the original skeleton).
    def _transform1(self, sentence):
        vec = np.zeros(nlp.vocab.vectors.shape[1])
        for token in sentence.split(' '):
            vec += nlp.vocab[token].vector
        return vec

    # Vectorize a single row of the dataframe: the concatenation of the two
    # sentence vectors (600 dimensions in total).
    def _transform2(self, row):
        return np.concatenate(
            [self._transform1(row.sentence1), self._transform1(row.sentence2)]
        )
def transform(self, X):
return np.concatenate(
[self._transform2(row).reshape(1, -1) for row in X.itertuples()]
)
###Output
_____no_output_____
###Markdown
Once you have a working implementation, build a pipeline consisting of the new vectorizer and a [multi-layer perceptron classifier](https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html). This more powerful (compared to logistic regression) classifier is called for here because we do not specify features by hand (as we did in Problem 5), but want to let the model learn a good representation of the data by itself. Use 3 hidden layers, each with size 300. It suffices to train the classifier for 8 iterations (epochs).
###Code
# TODO: Enter code here to implement the word embeddings classifier. Print the classification report.
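# A minimal sketch of one possible solution (an illustrative addition; the
# pipeline and variable names below are ours, and it assumes the vectorizer
# skeleton above has been completed as shown):
from sklearn.neural_network import MLPClassifier

emb_pipe = Pipeline([
    ('vect', PairedSentenceVectorizer()),
    ('mlp', MLPClassifier(hidden_layer_sizes=(300, 300, 300), max_iter=8)),
])
emb_pipe.fit(df_train, df_train['gold_label'])
print(classification_report(dev_targets, emb_pipe.predict(df_dev)))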
###Output
_____no_output_____
###Markdown
Problem 7: Final evaluation Once you have working code for all problems, re-run Problems 4–6 with the full training data. This will take quite a while (expect approximately 1 hour on Colab). **Make sure to not overwrite your previous results.** What are your results on the full data? How do they differ from the results that you obtained for the smaller training data? How do you interpret this? Summarize your findings in a short text.
###Code
# FINAL EVAL DATA #
with bz2.open('train.jsonl.bz2', 'rt') as source:
df_train = pd.read_json(source, lines=True)
print('Number of sentence pairs in the training data:', len(df_train))
with bz2.open('dev.jsonl.bz2', 'rt') as source:
df_dev = pd.read_json(source, lines=True)
print('Number of sentence pairs in the development data:', len(df_dev))
############## PROBLEM 4 #########################
X = df_train[['sentence1', 'tags1', 'sentence2', 'tags2']]
y = df_train['gold_label']
dummy_clf = DummyClassifier(strategy="stratified")
dummy_clf.fit(X, y)
dev = df_dev[['sentence1', 'tags1', 'sentence2', 'tags2']]
dev_targets = df_dev['gold_label']
pred = dummy_clf.predict(dev)
print("Dummy random classifier")
print(classification_report(dev_targets, pred))
#random baseline
pipe = Pipeline([('count_vect', CountVectorizer()),
('logist_reg', LogisticRegression())])
X = df_train['sentence1']
pipe.fit(X, y)
sent1 = df_dev["sentence1"]
print("One Sided Baseline sentence 1")
print(classification_report(dev_targets, pipe.predict(sent1)))
#training on sentence 2 and evaluating
X= df_train['sentence2']
pipe.fit(X, y)
sent2 = df_dev["sentence2"]
print("One Sided Baseline sentence 2")
print(classification_report(dev_targets, pipe.predict(sent2)))
########## PROBLEM 5 ########################
transformer = CrossUnigramsTransformer()
x_train = df_train.drop(['gold_label'], axis=1)
x_test = df_dev.drop(['gold_label'], axis=1)
processed_train = transformer.transform(x_train.values)
processed_test = transformer.transform(x_test.values)
pipe.fit(processed_train,y)
print("CrossUnigramsTransformer")
print(classification_report(dev_targets, pipe.predict(processed_test)))
############# PROBLEM 6 ######################
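# The Problem 6 embedding classifier was not re-run in the original notebook;
# on the full data it would mirror the sketch above, e.g.:
#   emb_pipe.fit(df_train, df_train['gold_label'])
#   print(classification_report(dev_targets, emb_pipe.predict(df_dev)))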
###Output
Dummy random classifier
precision recall f1-score support
contradiction 0.33 0.33 0.33 3278
entailment 0.34 0.33 0.34 3329
neutral 0.33 0.33 0.33 3235
accuracy 0.33 9842
macro avg 0.33 0.33 0.33 9842
weighted avg 0.33 0.33 0.33 9842
|
autoencoders/Denoising_autoencoder.ipynb | ###Markdown
Denoising Autoencoder for MNIST Examples of simple autoencoders (DNN, CNN) built to learn a useful latent representation of MNIST input images, such that the models are able to denoise noisy inputs. The CNN autoencoder does a better job than the simpler DNN autoencoder, but neither model does a remarkable job, due to their relative simplicity.
###Code
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train.shape,y_train.shape,x_test.shape,y_test.shape
y_train # labels are pre-shuffled for testing and training
###Output
_____no_output_____
###Markdown
Data Preparation
###Code
example = x_train[0]
noise = np.random.randn(28,28)*20
example_with_noise = np.minimum(255,np.maximum(0,example+noise)) # ensuring valid pixel values
plt.imshow(example,cmap="gray")
plt.title("MNIST Without Noise")
plt.show()
plt.imshow(example_with_noise,cmap="gray")
plt.title("MNIST With Noise")
plt.show()
x_train = x_train.astype("float32")
x_test = x_test.astype("float32")
# adding noise to the MNIST data
noise_train = np.random.randn(x_train.shape[0],28,28)*20
noise_test = np.random.randn(x_test.shape[0],28,28)*20
x_train_w_noise = np.minimum(255,np.maximum(0,x_train+noise_train))
x_test_w_noise = np.minimum(255,np.maximum(0,x_test+noise_test))
###Output
_____no_output_____
###Markdown
Simple DNN Autoencoder
###Code
class SimpleNet(nn.Module):
""" 784x1 -> 300x1 -> 100x1(h) -> 300x1 -> 784x1
"""
def __init__(self):
super().__init__()
self.h1 = nn.Linear(784,300)
self.h2 = nn.Linear(300,100) # encoding to 100x1
self.h3 = nn.Linear(100,300) # upsampling
self.h4 = nn.Linear(300,784)
def forward(self, x):
x = x.view(x.shape[0],784) # 784x1
x = F.relu(self.h1(x)) # 300x1
x = self.h2(x) # 100x1
x = F.relu(self.h3(x)) # 300x1
x = self.h4(x) # 784x1
return x
net = SimpleNet()
X = torch.from_numpy(x_train_w_noise)
y = torch.from_numpy(x_train.reshape(x_train.shape[0],784))
X = X.float()
y = y.float()
X.shape,y.shape,X.dtype,y.dtype
optimizer = optim.Adam(net.parameters(),lr=0.005)
loss_criterion = nn.MSELoss()
epochs = 30
batch_size=500
# training iteration
for i in range(epochs):
batch_losses = []
for j in range(0,X.shape[0],batch_size):
temp_X = X[j:j+500,:,:]
temp_y = y[j:j+500,:]
optimizer.zero_grad()
output = net(temp_X)
loss = loss_criterion(output,temp_y)
loss.backward()
optimizer.step()
        batch_losses.append(loss.item())  # .item() extracts the float so the computation graph isn't retained
print("Epoch {} loss: {}".format(i+1,sum(batch_losses)/len(batch_losses)))
# training set example, using the simple denoising autoencoder
img_i = 12
example = x_train[img_i]
example_w_noise = x_train_w_noise[img_i]
with torch.no_grad():
torch_example = torch.from_numpy(example_w_noise)
torch_example = torch_example.float()
torch_example = torch_example.unsqueeze(0)
out = net.forward(torch_example) # denoised from the model
out = out.numpy()
out = np.maximum(0,out)
out.shape = (28,28)
plt.subplot(1,3,1)
plt.title("Actual")
plt.imshow(example,cmap="gray")
plt.subplot(1,3,2)
plt.title("Actual + Noise")
plt.imshow(example_w_noise,cmap="gray")
plt.subplot(1,3,3)
plt.title("Denoised")
plt.imshow(out,cmap="gray")
plt.show()
# testing set example, using the simple denoising autoencoder
img_i = 12
example = x_test[img_i]
example_w_noise = x_test_w_noise[img_i]
with torch.no_grad():
torch_example = torch.from_numpy(example_w_noise)
torch_example = torch_example.float()
torch_example = torch_example.unsqueeze(0)
out = net.forward(torch_example) # denoised from the model
out = out.numpy()
out = np.maximum(0,out)
out.shape = (28,28)
plt.subplot(1,3,1)
plt.title("Actual")
plt.imshow(example,cmap="gray")
plt.subplot(1,3,2)
plt.title("Actual + Noise")
plt.imshow(example_w_noise,cmap="gray")
plt.subplot(1,3,3)
plt.title("Denoised")
plt.imshow(out,cmap="gray")
plt.show()
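# Optional illustrative check (uses only variables defined above): average
# reconstruction MSE of the DNN autoencoder over the full noisy test set.
with torch.no_grad():
    X_test_t = torch.from_numpy(x_test_w_noise).float()
    y_test_t = torch.from_numpy(x_test.reshape(x_test.shape[0], 784)).float()
    print("DNN test MSE: {:.4f}".format(loss_criterion(net(X_test_t), y_test_t).item()))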
###Output
_____no_output_____
###Markdown
Simple CNN Autoencoder
###Code
class CnnNet(nn.Module):
""" input shape: (batch_size,num_channels,length,width)
"""
def __init__(self):
super().__init__()
self.up = nn.UpsamplingNearest2d(scale_factor=2)
self.conv1 = nn.Conv2d(1,8,kernel_size=3,padding=1)
self.conv2 = nn.Conv2d(8,8,kernel_size=3,padding=1)
self.conv3 = nn.Conv2d(8,8,kernel_size=3,padding=1)
self.conv4 = nn.Conv2d(8,1,kernel_size=3,padding=1)
def forward(self, x):
x = F.relu(self.conv1(x)) # (1,1,28,28)
x = F.max_pool2d(x,(2,2)) # (1,8,14,14)
x = self.conv2(x) # (1,8,14,14)
x = F.max_pool2d(x,(2,2)) # (1,8,7,7) latent representation
x = self.up(x) # (1,8,14,14)
x = F.relu(self.conv3(x)) # (1,8,14,14)
x = self.up(x) # (1,8,28,28)
x = self.conv4(x) # (1,1,28,28)
x = x.view(x.shape[0],784) # (1,784)
return x
net2 = CnnNet()
optimizer = optim.Adam(net2.parameters(),lr=0.003)
loss_criterion = nn.MSELoss()
epochs = 9
batch_size=500
# training iteration
for i in range(epochs):
batch_losses = []
for j in range(0,X.shape[0],batch_size):
        temp_X = X[j:j+batch_size,:,:]
        temp_y = y[j:j+batch_size,:]
temp_X = temp_X.unsqueeze(1)
optimizer.zero_grad()
output = net2(temp_X)
loss = loss_criterion(output,temp_y)
loss.backward()
optimizer.step()
        batch_losses.append(loss.item())  # .item() avoids retaining the autograd graph
print("Epoch {} loss: {}".format(i+1,sum(batch_losses)/len(batch_losses)))
# training set example, using the cnn denoising autoencoder (same example as with the simpler autoencoder)
img_i = 12
example = x_train[img_i]
example_w_noise = x_train_w_noise[img_i]
with torch.no_grad():
torch_example = torch.from_numpy(example_w_noise)
torch_example = torch_example.float()
torch_example = torch_example.unsqueeze(0)
torch_example = torch_example.unsqueeze(0)
out = net2.forward(torch_example) # denoised from the model
out = out.numpy()
out = np.maximum(0,out)
out.shape = (28,28)
plt.subplot(1,3,1)
plt.title("Actual")
plt.imshow(example,cmap="gray")
plt.subplot(1,3,2)
plt.title("Actual + Noise")
plt.imshow(example_w_noise,cmap="gray")
plt.subplot(1,3,3)
plt.title("Denoised")
plt.imshow(out,cmap="gray")
plt.show()
# testing set example, using the cnn denoising autoencoder (same example as with the simpler autoencoder)
img_i = 12
example = x_test[img_i]
example_w_noise = x_test_w_noise[img_i]
with torch.no_grad():
torch_example = torch.from_numpy(example_w_noise)
torch_example = torch_example.float()
torch_example = torch_example.unsqueeze(0)
torch_example = torch_example.unsqueeze(0)
out = net2.forward(torch_example) # denoised from the model
out = out.numpy()
out = np.maximum(0,out)
out.shape = (28,28)
plt.subplot(1,3,1)
plt.title("Actual")
plt.imshow(example,cmap="gray")
plt.subplot(1,3,2)
plt.title("Actual + Noise")
plt.imshow(example_w_noise,cmap="gray")
plt.subplot(1,3,3)
plt.title("Denoised")
plt.imshow(out,cmap="gray")
plt.show()
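# Same illustrative check for the CNN autoencoder (adds the channel dimension):
with torch.no_grad():
    X_test_t = torch.from_numpy(x_test_w_noise).float().unsqueeze(1)
    y_test_t = torch.from_numpy(x_test.reshape(x_test.shape[0], 784)).float()
    print("CNN test MSE: {:.4f}".format(loss_criterion(net2(X_test_t), y_test_t).item()))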
###Output
_____no_output_____ |
M_accelerate_ALL-Copy1.ipynb | ###Markdown
MobileNet - Pytorch Step 1: Prepare data
###Code
# MobileNet-Pytorch
import argparse
import torch
import numpy as np
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.optim.lr_scheduler import StepLR
from torchvision import datasets, transforms
from torch.autograd import Variable
from torch.utils.data.sampler import SubsetRandomSampler
from sklearn.metrics import accuracy_score
#from mobilenets import mobilenet
use_cuda = torch.cuda.is_available()
dtype = torch.cuda.FloatTensor if use_cuda else torch.FloatTensor
# Train, Validate, Test. Heavily inspired by Kevinzakka https://github.com/kevinzakka/DenseNet/blob/master/data_loader.py
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
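# Note: these mean/std values are the standard ImageNet statistics; CIFAR-10's
# own channel statistics are roughly mean=(0.491, 0.482, 0.447) and
# std=(0.247, 0.243, 0.262), so this is a common approximation rather than exact.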
valid_size=0.1
# define transforms
valid_transform = transforms.Compose([
transforms.ToTensor(),
normalize
])
train_transform = transforms.Compose([
transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
normalize
])
# load the dataset
train_dataset = datasets.CIFAR10(root="data", train=True,
download=True, transform=train_transform)
valid_dataset = datasets.CIFAR10(root="data", train=True,
download=True, transform=valid_transform)
num_train = len(train_dataset)
indices = list(range(num_train))
split = int(np.floor(valid_size * num_train)) # hold out 10% of the 50k training images for validation
np.random.seed(42)  # fixed seed so the split is reproducible
np.random.shuffle(indices) # shuffle [0, 1, ..., 49999] in place
train_idx, valid_idx = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_idx) # samples only from the given subset of indices
valid_sampler = SubsetRandomSampler(valid_idx)
###################################################################################
# ------------------------- experiment with different batch sizes ----------------
###################################################################################
show_step=2 # with larger batches, use a smaller show_step
max_epoch=150 # maximum number of training epochs
train_loader = torch.utils.data.DataLoader(train_dataset,
batch_size=256, sampler=train_sampler)
valid_loader = torch.utils.data.DataLoader(valid_dataset,
batch_size=256, sampler=valid_sampler)
test_transform = transforms.Compose([
transforms.ToTensor(), normalize
])
test_dataset = datasets.CIFAR10(root="data",
train=False,
download=True,transform=test_transform)
test_loader = torch.utils.data.DataLoader(test_dataset,
batch_size=1,
shuffle=True)
###Output
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Step 2: Model Config. The 32x32 input is downsampled five times, to 1x1@1024. From https://github.com/kuangliu/pytorch-cifar. (This markdown cell originally held a flattened, commented-out draft of the baseline MobileNet, i.e. the depthwise `Block`, an attention/fusion experiment, and the `MobileNet` wrapper; the maintained version of that code, including the attention variants, is in the code cell below.)
###Code
# 32 缩放5次到 1x1@1024
# From https://github.com/kuangliu/pytorch-cifar
import torch
import torch.nn as nn
import torch.nn.functional as F
class Block_Attention_HALF(nn.Module):
'''Depthwise conv + Pointwise conv'''
def __init__(self, in_planes, out_planes, stride=1):
super(Block_Attention_HALF, self).__init__()
        # depthwise convolution: number of groups == number of input channels
self.conv1 = nn.Conv2d(in_planes, in_planes, kernel_size=3, stride=stride, padding=1, groups=in_planes, bias=False)
self.bn1 = nn.BatchNorm2d(in_planes)
        #------------------------ first half: plain 1x1 convolution ------------------------
self.conv2 = nn.Conv2d(in_planes, out_planes//2, kernel_size=1, stride=1, padding=0, bias=False)
        #------------------------ second half: weights produced by channel attention ----------
one_conv_kernel_size = 9 # [3,7,9]
        self.conv1D= nn.Conv1d(1, out_planes//2, one_conv_kernel_size, stride=1,padding=4,groups=1,dilation=1,bias=False) # initialized once in __init__
#------------------------------------------------------------
self.bn2 = nn.BatchNorm2d(out_planes)
def forward(self, x):
out = F.relu6(self.bn1(self.conv1(x)))
#out = self.bn1(self.conv1(x))
# -------------------------- Attention -----------------------
        w = F.avg_pool2d(x,x.shape[-1]) # global average pooling; ideally defined once in __init__
#print(w.shape)
# [bs,in_Channel,1,1]
in_channel=w.shape[1]
#w = w.view(w.shape[0],1,w.shape[1])
# [bs,1,in_Channel]
        # average over the batch while keeping dim 0
#w= w.mean(dim=0,keepdim=True)
# MAX=w.shape[0]
# NUM=torch.floor(MAX*torch.rand(1)).long()
# if NUM>=0 and NUM<MAX:
# w=w[NUM]
# else:
# w=w[0]
        w=w[0] # use the first sample in the batch as the shared attention context
w=w.view(1,1,in_channel)
# [bs=1,1,in_Channel]
        # one_conv_filter = nn.Conv1d(1, out_channel, one_conv_kernel_size, stride=1,padding=1,groups=1,dilation=1) # initialized in __init__
# [bs=1,out_channel//2,in_Channel]
w = self.conv1D(w)
# [bs=1,out_channel//2,in_Channel]
#-------------------------------------
        w = torch.tanh(w) # squashes to [-1, +1]
        #w=F.relu6(w) # relu6 here hurt accuracy noticeably
# [bs=1,out_channel//2,in_Channel]
w=w.view(w.shape[1],w.shape[2],1,1)
# [out_channel//2,in_Channel,1,1]
# -------------- softmax ---------------------------
#print(w.shape)
# ------------------------- fusion --------------------------
# conv 1x1
out_1=self.conv2(out)
out_2=F.conv2d(out,w,bias=None,stride=1,groups=1,dilation=1)
out=torch.cat([out_1,out_2],1)
        # ----------------------- try without the final relu -------------------------------
#out = self.bn2(out)
out=F.relu6(self.bn2(out))
return out
class Block_Attention(nn.Module):
'''Depthwise conv + Pointwise conv'''
def __init__(self, in_planes, out_planes, stride=1):
super(Block_Attention, self).__init__()
        # depthwise convolution: number of groups == number of input channels
self.conv1 = nn.Conv2d(in_planes, in_planes, kernel_size=3, stride=stride, padding=1, groups=in_planes, bias=False)
self.bn1 = nn.BatchNorm2d(in_planes)
#self.conv2 = nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=1, padding=0, bias=False)
        one_conv_kernel_size = 17 # candidate sizes tried: 3, 7, 9 (17 used here)
        self.conv1D= nn.Conv1d(1, out_planes, one_conv_kernel_size, stride=1,padding=8,groups=1,dilation=1,bias=False) # initialized once in __init__
self.bn2 = nn.BatchNorm2d(out_planes)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
# -------------------------- Attention -----------------------
        w = F.avg_pool2d(x,x.shape[-1]) # global average pooling; ideally defined once in __init__
#print(w.shape)
# [bs,in_Channel,1,1]
in_channel=w.shape[1]
#w = w.view(w.shape[0],1,w.shape[1])
# [bs,1,in_Channel]
        # average over the batch while keeping dim 0
#w= w.mean(dim=0,keepdim=True)
# MAX=w.shape[0]
# NUM=torch.floor(MAX*torch.rand(1)).long()
# if NUM>=0 and NUM<MAX:
# w=w[NUM]
# else:
# w=w[0]
w=w[0]
w=w.view(1,1,in_channel)
# [bs=1,1,in_Channel]
        # one_conv_filter = nn.Conv1d(1, out_channel, one_conv_kernel_size, stride=1,padding=1,groups=1,dilation=1) # initialized in __init__
# [bs=1,out_channel,in_Channel]
w = self.conv1D(w)
# [bs=1,out_channel,in_Channel]
        w = 0.5*torch.tanh(w) # squashes to [-0.5, +0.5]
# [bs=1,out_channel,in_Channel]
w=w.view(w.shape[1],w.shape[2],1,1)
# [out_channel,in_Channel,1,1]
# -------------- softmax ---------------------------
#print(w.shape)
# ------------------------- fusion --------------------------
# conv 1x1
out=F.conv2d(out,w,bias=None,stride=1,groups=1,dilation=1)
out = F.relu(self.bn2(out))
return out
class Block(nn.Module):
'''Depthwise conv + Pointwise conv'''
def __init__(self, in_planes, out_planes, stride=1):
super(Block, self).__init__()
        # depthwise convolution: number of groups == number of input channels
self.conv1 = nn.Conv2d(in_planes, in_planes, kernel_size=3, stride=stride, padding=1, groups=in_planes, bias=False)
self.bn1 = nn.BatchNorm2d(in_planes)
self.conv2 = nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=1, padding=0, bias=False)
self.bn2 = nn.BatchNorm2d(out_planes)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = F.relu(self.bn2(self.conv2(out)))
return out
class MobileNet(nn.Module):
# (128,2) means conv planes=128, conv stride=2, by default conv stride=1
#cfg = [64, (128,2), 128, (256,2), 256, (512,2), 512, 512, 512, 512, 512, (1024,2), 1024]
#cfg = [64, (128,2), 128, (256,2), 256, (512,2), 512, 512, 512, 512, 512, (1024,2), [1024,1]]
cfg = [64, (128,2), 128, (256,2), (256,1), (512,2), [512,1], [512,1], [512,1],[512,1], [512,1], [1024,2], [1024,1]]
def __init__(self, num_classes=10):
super(MobileNet, self).__init__()
self.conv1 = nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(32)
        self.layers = self._make_layers(in_planes=32) # build the layer stack automatically from cfg
self.linear = nn.Linear(1024, num_classes)
def _make_layers(self, in_planes):
layers = []
for x in self.cfg:
if isinstance(x, int):
out_planes = x
stride = 1
layers.append(Block(in_planes, out_planes, stride))
elif isinstance(x, tuple):
out_planes = x[0]
stride = x[1]
layers.append(Block(in_planes, out_planes, stride))
            # attention (AC) blocks are requested with a list entry in cfg
elif isinstance(x, list):
out_planes= x[0]
stride = x[1] if len(x)==2 else 1
layers.append(Block_Attention_HALF(in_planes, out_planes, stride))
else:
pass
in_planes = out_planes
return nn.Sequential(*layers)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = self.layers(out)
out = F.avg_pool2d(out, 2)
out = out.view(out.size(0), -1)
out = self.linear(out)
return out
# From https://github.com/Z0m6ie/CIFAR-10_PyTorch
#model = mobilenet(num_classes=10, large_img=False)
# From https://github.com/kuangliu/pytorch-cifar
if torch.cuda.is_available():
model=MobileNet(10).cuda()
else:
model=MobileNet(10)
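# Illustrative sanity check (commented out; construct a CPU instance to run it):
# m = MobileNet(10)
# assert m(torch.randn(2, 3, 32, 32)).shape == (2, 10)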
optimizer = optim.Adam(model.parameters(), lr=0.01)
scheduler = StepLR(optimizer, step_size=20, gamma=0.5)
criterion = nn.CrossEntropyLoss()
# Implement validation
def train(epoch):
model.train()
#writer = SummaryWriter()
for batch_idx, (data, target) in enumerate(train_loader):
if use_cuda:
data, target = data.cuda(), target.cuda()
data, target = Variable(data), Variable(target)
optimizer.zero_grad()
output = model(data)
correct = 0
pred = output.data.max(1, keepdim=True)[1] # get the index of the max log-probability
correct += pred.eq(target.data.view_as(pred)).sum()
loss = criterion(output, target)
loss.backward()
accuracy = 100. * (correct.cpu().numpy()/ len(output))
optimizer.step()
        if batch_idx % (5 * show_step) == 0:
# if batch_idx % 2*show_step == 0:
# print(model.layers[1].conv1D.weight.shape)
# print(model.layers[1].conv1D.weight[0:2][0:2])
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}, Accuracy: {:.2f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item(), accuracy))
f1=open("Cifar10_INFO.txt","a+")
f1.write("\n"+'Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}, Accuracy: {:.2f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item(), accuracy))
f1.close()
#writer.add_scalar('Loss/Loss', loss.item(), epoch)
#writer.add_scalar('Accuracy/Accuracy', accuracy, epoch)
scheduler.step()
def validate(epoch):
model.eval()
#writer = SummaryWriter()
valid_loss = 0
correct = 0
for data, target in valid_loader:
if use_cuda:
data, target = data.cuda(), target.cuda()
data, target = Variable(data), Variable(target)
output = model(data)
        valid_loss += F.cross_entropy(output, target, reduction='sum').item() # sum up batch loss
pred = output.data.max(1, keepdim=True)[1] # get the index of the max log-probability
correct += pred.eq(target.data.view_as(pred)).sum()
valid_loss /= len(valid_idx)
accuracy = 100. * correct.cpu().numpy() / len(valid_idx)
print('\nValidation set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
valid_loss, correct, len(valid_idx),
100. * correct / len(valid_idx)))
f1=open("Cifar10_INFO.txt","a+")
f1.write('\nValidation set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
valid_loss, correct, len(valid_idx),
100. * correct / len(valid_idx)))
f1.close()
#writer.add_scalar('Loss/Validation_Loss', valid_loss, epoch)
#writer.add_scalar('Accuracy/Validation_Accuracy', accuracy, epoch)
return valid_loss, accuracy
# Fix best model
def test(epoch):
model.eval()
test_loss = 0
correct = 0
for data, target in test_loader:
if use_cuda:
data, target = data.cuda(), target.cuda()
data, target = Variable(data), Variable(target)
output = model(data)
        test_loss += F.cross_entropy(output, target, reduction='sum').item() # sum up batch loss
pred = output.data.max(1, keepdim=True)[1] # get the index of the max log-probability
correct += pred.eq(target.data.view_as(pred)).cpu().sum()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct.cpu().numpy() / len(test_loader.dataset)))
f1=open("Cifar10_INFO.txt","a+")
f1.write('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct.cpu().numpy() / len(test_loader.dataset)))
f1.close()
def save_best(loss, accuracy, best_loss, best_acc):
if best_loss == None:
best_loss = loss
best_acc = accuracy
file = 'saved_models/best_save_model.p'
torch.save(model.state_dict(), file)
elif loss < best_loss and accuracy > best_acc:
best_loss = loss
best_acc = accuracy
file = 'saved_models/best_save_model.p'
torch.save(model.state_dict(), file)
return best_loss, best_acc
# Fantastic logger for tensorboard and pytorch,
# run tensorboard by opening a new terminal and run "tensorboard --logdir runs"
# open tensorboard at http://localhost:6006/
from tensorboardX import SummaryWriter
best_loss = None
best_acc = None
import time
SINCE=time.time()
for epoch in range(max_epoch):
train(epoch)
loss, accuracy = validate(epoch)
best_loss, best_acc = save_best(loss, accuracy, best_loss, best_acc)
NOW=time.time()
DURINGS=NOW-SINCE
SINCE=NOW
print("the time of this epoch:[{} s]".format(DURINGS))
# writer = SummaryWriter()
# writer.export_scalars_to_json("./all_scalars.json")
# writer.close()
#---------------------------- Test ------------------------------
test(epoch)
###Output
Train Epoch: 0 [0/50000 (0%)] Loss: 2.317187, Accuracy: 10.16
Train Epoch: 0 [1280/50000 (3%)] Loss: 2.739555, Accuracy: 9.38
Train Epoch: 0 [2560/50000 (6%)] Loss: 2.468929, Accuracy: 9.77
Train Epoch: 0 [3840/50000 (9%)] Loss: 2.271804, Accuracy: 11.72
Train Epoch: 0 [5120/50000 (11%)] Loss: 2.214532, Accuracy: 16.80
Train Epoch: 0 [6400/50000 (14%)] Loss: 2.125456, Accuracy: 18.36
Train Epoch: 0 [7680/50000 (17%)] Loss: 2.063283, Accuracy: 18.75
Train Epoch: 0 [8960/50000 (20%)] Loss: 1.957888, Accuracy: 25.39
Train Epoch: 0 [10240/50000 (23%)] Loss: 2.006796, Accuracy: 23.05
Train Epoch: 0 [11520/50000 (26%)] Loss: 1.940412, Accuracy: 26.56
Train Epoch: 0 [12800/50000 (28%)] Loss: 1.925677, Accuracy: 23.83
Train Epoch: 0 [14080/50000 (31%)] Loss: 1.894595, Accuracy: 26.95
Train Epoch: 0 [15360/50000 (34%)] Loss: 1.874364, Accuracy: 28.12
Train Epoch: 0 [16640/50000 (37%)] Loss: 1.920679, Accuracy: 23.83
Train Epoch: 0 [17920/50000 (40%)] Loss: 1.921081, Accuracy: 26.95
Train Epoch: 0 [19200/50000 (43%)] Loss: 1.870906, Accuracy: 23.05
Train Epoch: 0 [20480/50000 (45%)] Loss: 1.905605, Accuracy: 28.52
Train Epoch: 0 [21760/50000 (48%)] Loss: 1.875262, Accuracy: 25.78
Train Epoch: 0 [23040/50000 (51%)] Loss: 1.760782, Accuracy: 31.25
Train Epoch: 0 [24320/50000 (54%)] Loss: 1.815875, Accuracy: 29.69
Train Epoch: 0 [25600/50000 (57%)] Loss: 1.789346, Accuracy: 28.12
Train Epoch: 0 [26880/50000 (60%)] Loss: 1.729920, Accuracy: 32.81
Train Epoch: 0 [28160/50000 (62%)] Loss: 1.692929, Accuracy: 39.06
Train Epoch: 0 [29440/50000 (65%)] Loss: 1.751308, Accuracy: 35.94
Train Epoch: 0 [30720/50000 (68%)] Loss: 1.733815, Accuracy: 29.30
Train Epoch: 0 [32000/50000 (71%)] Loss: 1.748714, Accuracy: 37.50
Train Epoch: 0 [33280/50000 (74%)] Loss: 1.753810, Accuracy: 28.52
Train Epoch: 0 [34560/50000 (77%)] Loss: 1.652142, Accuracy: 37.50
Train Epoch: 0 [35840/50000 (80%)] Loss: 1.726063, Accuracy: 28.91
Train Epoch: 0 [37120/50000 (82%)] Loss: 1.670838, Accuracy: 34.38
Train Epoch: 0 [38400/50000 (85%)] Loss: 1.573907, Accuracy: 43.36
Train Epoch: 0 [39680/50000 (88%)] Loss: 1.650179, Accuracy: 35.16
Train Epoch: 0 [40960/50000 (91%)] Loss: 1.646149, Accuracy: 37.89
Train Epoch: 0 [42240/50000 (94%)] Loss: 1.821646, Accuracy: 35.16
Train Epoch: 0 [43520/50000 (97%)] Loss: 1.581002, Accuracy: 39.06
Train Epoch: 0 [35000/50000 (99%)] Loss: 1.636860, Accuracy: 37.00
Validation set: Average loss: 4.5531, Accuracy: 1197/5000 (23.00%)
the time of this epoch:[21.662639617919922 s]
Train Epoch: 1 [0/50000 (0%)] Loss: 1.654787, Accuracy: 39.06
Train Epoch: 1 [1280/50000 (3%)] Loss: 1.533673, Accuracy: 39.45
Train Epoch: 1 [2560/50000 (6%)] Loss: 1.631841, Accuracy: 32.42
Train Epoch: 1 [3840/50000 (9%)] Loss: 1.631126, Accuracy: 38.67
Train Epoch: 1 [5120/50000 (11%)] Loss: 1.639353, Accuracy: 38.67
Train Epoch: 1 [6400/50000 (14%)] Loss: 1.574432, Accuracy: 44.53
Train Epoch: 1 [7680/50000 (17%)] Loss: 1.517393, Accuracy: 47.27
Train Epoch: 1 [8960/50000 (20%)] Loss: 1.620072, Accuracy: 39.84
Train Epoch: 1 [10240/50000 (23%)] Loss: 1.485670, Accuracy: 45.31
Train Epoch: 1 [11520/50000 (26%)] Loss: 1.407409, Accuracy: 48.44
Train Epoch: 1 [12800/50000 (28%)] Loss: 1.564423, Accuracy: 42.19
Train Epoch: 1 [14080/50000 (31%)] Loss: 1.428197, Accuracy: 44.14
Train Epoch: 1 [15360/50000 (34%)] Loss: 1.486374, Accuracy: 44.53
Train Epoch: 1 [16640/50000 (37%)] Loss: 1.496168, Accuracy: 46.09
Train Epoch: 1 [17920/50000 (40%)] Loss: 1.353985, Accuracy: 48.05
Train Epoch: 1 [19200/50000 (43%)] Loss: 1.440946, Accuracy: 49.61
Train Epoch: 1 [20480/50000 (45%)] Loss: 1.428810, Accuracy: 49.61
Train Epoch: 1 [21760/50000 (48%)] Loss: 1.366629, Accuracy: 51.56
Train Epoch: 1 [23040/50000 (51%)] Loss: 1.520613, Accuracy: 41.41
Train Epoch: 1 [24320/50000 (54%)] Loss: 1.424382, Accuracy: 45.70
Train Epoch: 1 [25600/50000 (57%)] Loss: 1.417356, Accuracy: 49.61
Train Epoch: 1 [26880/50000 (60%)] Loss: 1.472661, Accuracy: 45.31
Train Epoch: 1 [28160/50000 (62%)] Loss: 1.338052, Accuracy: 47.66
Train Epoch: 1 [29440/50000 (65%)] Loss: 1.241622, Accuracy: 57.42
Train Epoch: 1 [30720/50000 (68%)] Loss: 1.254776, Accuracy: 51.56
Train Epoch: 1 [32000/50000 (71%)] Loss: 1.353367, Accuracy: 50.00
Train Epoch: 1 [33280/50000 (74%)] Loss: 1.335058, Accuracy: 50.00
Train Epoch: 1 [34560/50000 (77%)] Loss: 1.464982, Accuracy: 48.83
Train Epoch: 1 [35840/50000 (80%)] Loss: 1.362602, Accuracy: 54.30
Train Epoch: 1 [37120/50000 (82%)] Loss: 1.333230, Accuracy: 52.73
Train Epoch: 1 [38400/50000 (85%)] Loss: 1.346182, Accuracy: 49.22
Train Epoch: 1 [39680/50000 (88%)] Loss: 1.174814, Accuracy: 58.98
Train Epoch: 1 [40960/50000 (91%)] Loss: 1.270859, Accuracy: 51.17
Train Epoch: 1 [42240/50000 (94%)] Loss: 1.222242, Accuracy: 59.38
Train Epoch: 1 [43520/50000 (97%)] Loss: 1.269724, Accuracy: 54.30
Train Epoch: 1 [35000/50000 (99%)] Loss: 1.163435, Accuracy: 57.00
Validation set: Average loss: 4.8672, Accuracy: 1912/5000 (38.00%)
the time of this epoch:[21.558705806732178 s]
Train Epoch: 2 [0/50000 (0%)] Loss: 1.170546, Accuracy: 58.20
Train Epoch: 2 [1280/50000 (3%)] Loss: 1.167825, Accuracy: 55.86
Train Epoch: 2 [2560/50000 (6%)] Loss: 1.267082, Accuracy: 57.03
Train Epoch: 2 [3840/50000 (9%)] Loss: 1.203408, Accuracy: 57.42
Train Epoch: 2 [5120/50000 (11%)] Loss: 1.226529, Accuracy: 49.22
Train Epoch: 2 [6400/50000 (14%)] Loss: 1.311463, Accuracy: 53.91
Train Epoch: 2 [7680/50000 (17%)] Loss: 1.213612, Accuracy: 59.38
Train Epoch: 2 [8960/50000 (20%)] Loss: 1.147260, Accuracy: 56.64
Train Epoch: 2 [10240/50000 (23%)] Loss: 1.254088, Accuracy: 55.08
Train Epoch: 2 [11520/50000 (26%)] Loss: 1.197541, Accuracy: 56.25
Train Epoch: 2 [12800/50000 (28%)] Loss: 1.137027, Accuracy: 55.86
Train Epoch: 2 [14080/50000 (31%)] Loss: 1.194584, Accuracy: 61.33
Train Epoch: 2 [15360/50000 (34%)] Loss: 1.204290, Accuracy: 60.16
Train Epoch: 2 [16640/50000 (37%)] Loss: 1.172325, Accuracy: 55.86
Train Epoch: 2 [17920/50000 (40%)] Loss: 1.149843, Accuracy: 59.38
Train Epoch: 2 [19200/50000 (43%)] Loss: 1.126659, Accuracy: 58.20
Train Epoch: 2 [20480/50000 (45%)] Loss: 1.092484, Accuracy: 58.98
Train Epoch: 2 [21760/50000 (48%)] Loss: 1.099942, Accuracy: 55.47
Train Epoch: 2 [23040/50000 (51%)] Loss: 1.186884, Accuracy: 60.94
Train Epoch: 2 [24320/50000 (54%)] Loss: 1.117447, Accuracy: 60.55
Train Epoch: 2 [25600/50000 (57%)] Loss: 1.173386, Accuracy: 55.08
Train Epoch: 2 [26880/50000 (60%)] Loss: 1.084559, Accuracy: 58.98
Train Epoch: 2 [28160/50000 (62%)] Loss: 1.171377, Accuracy: 59.77
Train Epoch: 2 [29440/50000 (65%)] Loss: 1.049761, Accuracy: 62.89
Train Epoch: 2 [30720/50000 (68%)] Loss: 1.029481, Accuracy: 63.28
Train Epoch: 2 [32000/50000 (71%)] Loss: 1.121746, Accuracy: 58.98
Train Epoch: 2 [33280/50000 (74%)] Loss: 1.131498, Accuracy: 58.98
Train Epoch: 2 [34560/50000 (77%)] Loss: 1.166869, Accuracy: 60.55
Train Epoch: 2 [35840/50000 (80%)] Loss: 1.035683, Accuracy: 60.55
Train Epoch: 2 [37120/50000 (82%)] Loss: 1.028708, Accuracy: 59.38
Train Epoch: 2 [38400/50000 (85%)] Loss: 1.008805, Accuracy: 62.11
Train Epoch: 2 [39680/50000 (88%)] Loss: 1.161276, Accuracy: 58.59
Train Epoch: 2 [40960/50000 (91%)] Loss: 1.036611, Accuracy: 63.28
Train Epoch: 2 [42240/50000 (94%)] Loss: 1.166182, Accuracy: 57.03
Train Epoch: 2 [43520/50000 (97%)] Loss: 1.036265, Accuracy: 62.89
Train Epoch: 2 [35000/50000 (99%)] Loss: 0.937140, Accuracy: 70.50
Validation set: Average loss: 2.1692, Accuracy: 2482/5000 (49.00%)
the time of this epoch:[21.577322244644165 s]
Train Epoch: 3 [0/50000 (0%)] Loss: 1.088407, Accuracy: 61.33
Train Epoch: 3 [1280/50000 (3%)] Loss: 1.053350, Accuracy: 62.89
Train Epoch: 3 [2560/50000 (6%)] Loss: 1.126616, Accuracy: 57.03
Train Epoch: 3 [3840/50000 (9%)] Loss: 0.988742, Accuracy: 65.23
Train Epoch: 3 [5120/50000 (11%)] Loss: 1.202767, Accuracy: 57.42
Train Epoch: 3 [6400/50000 (14%)] Loss: 1.000872, Accuracy: 63.67
Train Epoch: 3 [7680/50000 (17%)] Loss: 0.907074, Accuracy: 67.58
Train Epoch: 3 [8960/50000 (20%)] Loss: 1.005800, Accuracy: 62.11
Train Epoch: 3 [10240/50000 (23%)] Loss: 0.993526, Accuracy: 66.02
Train Epoch: 3 [11520/50000 (26%)] Loss: 0.856707, Accuracy: 71.48
Train Epoch: 3 [12800/50000 (28%)] Loss: 1.010143, Accuracy: 61.33
###Markdown
Step 3: Test
###Code
test(epoch)
###Output
Test set: Average loss: 0.6860, Accuracy: 8937/10000 (89.37%)
###Markdown
First run: the attention scale lies in [0, 1]. (An embedded image attachment appeared here in the original notebook and is not preserved.)
###Code
# inspect information from the training run
import matplotlib.pyplot as plt
def parse(in_file,flag):
num=-1
ys=list()
xs=list()
losses=list()
with open(in_file,"r") as reader:
for aLine in reader:
#print(aLine)
res=[e for e in aLine.strip('\n').split(" ")]
if res[0]=="Train" and flag=="Train":
num=num+1
ys.append(float(res[-1]))
xs.append(int(num))
losses.append(float(res[-3].split(',')[0]))
if res[0]=="Validation" and flag=="Validation":
num=num+1
xs.append(int(num))
tmp=[float(e) for e in res[-2].split('/')]
ys.append(100*float(tmp[0]/tmp[1]))
losses.append(float(res[-4].split(',')[0]))
plt.figure(1)
plt.plot(xs,ys,'ro')
plt.figure(2)
plt.plot(xs, losses, 'ro')
plt.show()
def main():
in_file="D://INFO.txt"
    # plot accuracy and loss for the training phase
    parse(in_file,"Train") # "Validation"
    # plot accuracy and loss for the validation phase
    #parse(in_file,"Validation") # "Validation"
if __name__=="__main__":
main()
# inspect information from the training run
import matplotlib.pyplot as plt
def parse(in_file,flag):
num=-1
ys=list()
xs=list()
losses=list()
with open(in_file,"r") as reader:
for aLine in reader:
#print(aLine)
res=[e for e in aLine.strip('\n').split(" ")]
if res[0]=="Train" and flag=="Train":
num=num+1
ys.append(float(res[-1]))
xs.append(int(num))
losses.append(float(res[-3].split(',')[0]))
if res[0]=="Validation" and flag=="Validation":
num=num+1
xs.append(int(num))
tmp=[float(e) for e in res[-2].split('/')]
ys.append(100*float(tmp[0]/tmp[1]))
losses.append(float(res[-4].split(',')[0]))
plt.figure(1)
plt.plot(xs,ys,'r-')
plt.figure(2)
plt.plot(xs, losses, 'r-')
plt.show()
def main():
in_file="D://INFO.txt"
    # plot accuracy and loss for the training phase
    parse(in_file,"Train") # "Validation"
    # plot accuracy and loss for the validation phase
    parse(in_file,"Validation") # "Validation"
if __name__=="__main__":
main()
###Output
_____no_output_____ |
[01 - Initial]/MLP Pipelines/wat-time-pca-1000.ipynb | ###Markdown
machine learning models
###Code
X_train, X_test, y_train, y_test = train_test_split(principal_df, train_Y, test_size=0.3, random_state=0, shuffle = True)
#logistic regression
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
import statsmodels.api as sm
from sklearn import metrics
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
from sklearn.metrics import accuracy_score
logit_model=sm.Logit(train_Y,train_X)
result=logit_model.fit()
print(result.summary2())
logreg = LogisticRegression(C=1,penalty='l2',random_state=42)
logreg.fit(X_train, y_train)
y_pred = logreg.predict(X_test)
print('Accuracy {:.2f}'.format(accuracy_score(y_test,y_pred)))
logreg_score_train = logreg.score(X_train,y_train)
print("Train Prediction Score",logreg_score_train*100)
logreg_score_test = accuracy_score(y_test,y_pred)
print("Test Prediction ",logreg_score_test*100)
cm = confusion_matrix(y_test, y_pred)
print(cm)
print(classification_report(y_test, y_pred))
logit_roc_auc = roc_auc_score(y_test, logreg.predict(X_test))
fpr, tpr, thresholds = roc_curve(y_test, logreg.predict_proba(X_test)[:,1])
plt.figure()
plt.plot(fpr, tpr, label='Logistic Regression (area = %0.2f)' % logit_roc_auc)
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
plt.savefig('Log_ROC')
plt.show()
#KNeighborsClassifier
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier()
knn.fit(X_train,y_train)
y_pred_knn= knn.predict(X_test)
knn_score_train = knn.score(X_train,y_train)
print("Train Prediction Score",knn_score_train*100)
knn_score_test = accuracy_score(y_test,y_pred_knn)
print("Test Prediction ",knn_score_test*100)
cm = confusion_matrix(y_test, y_pred_knn)
print(cm)
print(classification_report(y_test,y_pred_knn))
logit_roc_auc = roc_auc_score(y_test, y_pred_knn)
fpr, tpr, thresholds = roc_curve(y_test, knn.predict_proba(X_test)[:,1])
plt.figure()
plt.plot(fpr, tpr, label='KNeighbors (area = %0.2f)' % logit_roc_auc)
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
plt.savefig('Log_ROC')
plt.show()
#supportvectormachines
from sklearn.svm import SVC
ksvc = SVC(kernel = 'rbf',random_state = 42,probability=True)
ksvc.fit(X_train,y_train)
y_pred_ksvc= ksvc.predict(X_test)
ksvc_score_train = ksvc.score(X_train,y_train)
print("Train Prediction Score",ksvc_score_train*100)
ksvc_score_test = accuracy_score(y_test,y_pred_ksvc)
print("Test Prediction Score",ksvc_score_test*100)
cm = confusion_matrix(y_test, y_pred_ksvc)
print(cm)
print(classification_report(y_test,y_pred_ksvc))
#naive_bayes
from sklearn.naive_bayes import GaussianNB
nb = GaussianNB()
nb.fit(X_train,y_train)
y_pred_nb= nb.predict(X_test)
nb_score_train = nb.score(X_train,y_train)
print("Train Prediction Score",nb_score_train*100)
nb_score_test = accuracy_score(y_test,y_pred_nb)
print("Test Prediction Score",nb_score_test*100)
cm = confusion_matrix(y_test, y_pred_nb)
print(cm)
print(classification_report(y_test,y_pred_nb))
#neuralnetwork
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical
from keras.callbacks import EarlyStopping
#2layer
model = Sequential()
n_cols = X_train.shape[1]
n_cols
model.add(Dense(2, activation='relu', input_shape=(n_cols,)))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
early_stopping_monitor = EarlyStopping(patience=20)
model.fit(X_train, y_train, epochs=10, validation_split=0.4 )
#3layer
model = Sequential()
n_cols = X_train.shape[1]
n_cols
model.add(Dense(4, activation='relu', input_shape=(n_cols,)))
model.add(Dense(2, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='sgd', loss='mean_squared_error', metrics=['accuracy'])
early_stopping_monitor = EarlyStopping(patience=20)
model.fit(X_train, y_train, epochs=20, validation_split=0.4 )
#4layer
model = Sequential()
n_cols = X_train.shape[1]
n_cols
model.add(Dense(8, activation='relu', input_shape=(n_cols,)))
model.add(Dense(4, activation='relu'))
model.add(Dense(2, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='sgd', loss='mean_squared_error', metrics=['accuracy'])
early_stopping_monitor = EarlyStopping(patience=20)
model.fit(X_train, y_train, epochs=20, validation_split=0.4 )
#4layer
model = Sequential()
n_cols = X_train.shape[1]
n_cols
model.add(Dense(16, activation='relu', input_shape=(n_cols,)))
model.add(Dense(8, activation='relu'))
model.add(Dense(4, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='sgd', loss='mean_squared_error', metrics=['accuracy'])
early_stopping_monitor = EarlyStopping(patience=20)
model.fit(X_train, y_train, epochs=20, validation_split=0.4 )
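# Two caveats for the Keras models above: (1) the EarlyStopping callback is
# defined but never passed to fit(), so it has no effect -- it would need
# callbacks=[early_stopping_monitor]; (2) the sigmoid + mean_squared_error
# pairing works, but binary_crossentropy is the more conventional loss for
# binary classification.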
principal_df[principal_df.duplicated()].shape
###Output
_____no_output_____ |
Oefening_4_Word2Vec_word_embeddings.ipynb | ###Markdown
Using Pretrained Word EmbeddingsIt turns out there are a ton of pretrained neural networks for accomplishing specific tasks. In this notebook, we'll explore what embeddings are and some of the cool things we can do with them once we have them.--- Representing Inputs One-Hot RepresentationTraditionally, inputs to a neural network can be represented using a **One-Hot Representation**, where we assemble a vector of N values where each value represents a given class.For example, if we're classifying animals with our neural network, our one-hot representation may look like `[1, 0, 0, 0]` (one slot each for Bird, Cat, Dog, Fish), which would represent a Bird. There are some problems with this. As the number of classes grows large, we waste a lot of space storing zeros that only tell us what we *don't* care about. Additionally, this gives us no way to represent the fact that Cats are more similar to Dogs than to Fish. In the Char-RNN notebook, we build a vocabulary of characters appearing in the training data and assign an ID to each one, letting Tensorflow handle the internal representation of the inputs. Because of this, our network learns only about what character should appear next based on the previous characters. Embedding RepresentationAn **embedding** is a representation of a value in a complex dataset in relation to the entire range of possible inputs based on some combination of features learned by the network. These can be a bit more abstract, which allows us to represent inputs as the neural network would rather than using firm classifications. Rather than a vector of on/off flags, we'll use a vector of floating point values, where each element represents the strength of a feature present in the input. One way we can generate an embedding vector is to select a layer in our neural network before the output layer and look at the values produced by it.--- Further ReadingIf you're interested in learning more about how word embeddings can be trained, there's a great tutorial about [Word2Vec](https://www.tensorflow.org/tutorials/word2vec) on tensorflow.org or [this series of blog posts by Memo Akten](https://medium.com/artists-and-machine-intelligence/ami-residency-part-1-exploring-word-space-andprojecting-meaning-onto-noise-98af7252f749)Embedding vectors have all sorts of applications across various types of data. * [Here's](https://artsexperiments.withgoogle.com/tsnemap/) one example where works of art have been mapped by their similarity, which lets us [morph from one work to another](https://artsexperiments.withgoogle.com/xdegrees). Using pip to install gensim[Gensim](https://radimrehurek.com/gensim/) is a Python library that makes it very easy to work with word embeddings. This cell may take a few moments to run as it is installed.
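A toy numeric illustration of the contrast (the vectors here are made up for exposition, not taken from any trained model):

```python
import numpy as np

# One-hot: one slot per class; every pair of classes is equally dissimilar.
onehot_cat = np.array([0, 1, 0, 0])   # slots: [bird, cat, dog, fish]
onehot_dog = np.array([0, 0, 1, 0])
print(onehot_cat @ onehot_dog)        # 0 -- one-hot vectors are orthogonal

# Embedding: dense feature vectors, so similarity becomes meaningful.
emb_cat = np.array([0.9, 0.1, 0.8])
emb_dog = np.array([0.8, 0.2, 0.7])
cos = emb_cat @ emb_dog / (np.linalg.norm(emb_cat) * np.linalg.norm(emb_dog))
print(round(cos, 3))                  # ~0.995 -- cats and dogs share features
```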
###Code
!pip install -q gensim==3.2.0
###Output
|████████████████████████████████| 15.3 MB 166 kB/s
Building wheel for gensim (setup.py) ... done
###Markdown
Downloading our pre-trained word embeddingsWe're going to download a set of three million pretrained English word embeddings from a Word2Vec model that was trained on Google News. Some information from the project can be found on [this page](https://code.google.com/archive/p/word2vec/) along with the link to the Google Drive link. The next few cells will take care of this process for you.
###Code
# Install the PyDrive wrapper & import libraries.
# This only needs to be done once per notebook.
!pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# Authenticate and create the PyDrive client.
# This only needs to be done once per notebook.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
file_id = '0B7XkCwpI5KDYNlNUTTlSS21pQmM'
downloaded = drive.CreateFile({'id':file_id})
downloaded.FetchMetadata(fetch_all=True)
downloaded.GetContentFile(downloaded.metadata['title'])
!gunzip GoogleNews-vectors-negative300.bin.gz
###Output
_____no_output_____
###Markdown
Loading the embeddings into GensimThe next cell will load word embeddings for the two hundred thousand most common words in English. The dataset will not contain excessively common "stop words" such as 'and', and others. These sorts of words don't really add to the meaning of the other words in a text and so are often omitted while training word embeddings.While the dataset contains nearly three million words, we're going to cut it down to size so you don't have to wait as long.
###Code
from gensim.models.keyedvectors import KeyedVectors
gensim_model = KeyedVectors.load_word2vec_format(
'GoogleNews-vectors-negative300.bin', binary=True, limit=300000)
gensim_model
print('apple =', gensim_model['apple'])
###Output
apple = [-0.06445312 -0.16015625 -0.01208496 0.13476562 -0.22949219 0.16210938
0.3046875 -0.1796875 -0.12109375 0.25390625 -0.01428223 -0.06396484
-0.08056641 -0.05688477 -0.19628906 0.2890625 -0.05151367 0.14257812
-0.10498047 -0.04736328 -0.34765625 0.35742188 0.265625 0.00188446
-0.01586914 0.00195312 -0.35546875 0.22167969 0.05761719 0.15917969
0.08691406 -0.0267334 -0.04785156 0.23925781 -0.05981445 0.0378418
0.17382812 -0.41796875 0.2890625 0.32617188 0.02429199 -0.01647949
-0.06494141 -0.08886719 0.07666016 -0.15136719 0.05249023 -0.04199219
-0.05419922 0.00108337 -0.20117188 0.12304688 0.09228516 0.10449219
-0.00408936 -0.04199219 0.01409912 -0.02111816 -0.13476562 -0.24316406
0.16015625 -0.06689453 -0.08984375 -0.07177734 -0.00595093 -0.00482178
-0.00089264 -0.30664062 -0.0625 0.07958984 -0.00909424 -0.04492188
0.09960938 -0.33398438 -0.3984375 0.05541992 -0.06689453 -0.04467773
0.11767578 -0.13964844 -0.26367188 0.17480469 -0.17382812 -0.40625
-0.06738281 -0.07617188 0.09423828 0.20996094 -0.16308594 -0.08691406
-0.0534668 -0.10351562 -0.07617188 -0.11083984 -0.03515625 -0.14941406
0.0378418 0.38671875 0.14160156 -0.2890625 -0.16894531 -0.140625
-0.04174805 0.22753906 0.24023438 -0.01599121 -0.06787109 0.21875
-0.42382812 -0.5625 -0.49414062 -0.3359375 0.13378906 0.01141357
0.13671875 0.0324707 0.06835938 -0.27539062 -0.15917969 0.00121307
0.01208496 -0.0039978 0.00442505 -0.04541016 0.08642578 0.09960938
-0.04296875 -0.11328125 0.13867188 0.41796875 -0.28320312 -0.07373047
-0.11425781 0.08691406 -0.02148438 0.328125 -0.07373047 -0.01348877
0.17773438 -0.02624512 0.13378906 -0.11132812 -0.12792969 -0.12792969
0.18945312 -0.13867188 0.29882812 -0.07714844 -0.37695312 -0.10351562
0.16992188 -0.10742188 -0.29882812 0.00866699 -0.27734375 -0.20996094
-0.1796875 -0.19628906 -0.22167969 0.08886719 -0.27734375 -0.13964844
0.15917969 0.03637695 0.03320312 -0.08105469 0.25390625 -0.08691406
-0.21289062 -0.18945312 -0.22363281 0.06542969 -0.16601562 0.08837891
-0.359375 -0.09863281 0.35546875 -0.00741577 0.19042969 0.16992188
-0.06005859 -0.20605469 0.08105469 0.12988281 -0.01135254 0.33203125
-0.08691406 0.27539062 -0.03271484 0.12011719 -0.0625 0.1953125
-0.10986328 -0.11767578 0.20996094 0.19921875 0.02954102 -0.16015625
0.00276184 -0.01367188 0.03442383 -0.19335938 0.00352478 -0.06542969
-0.05566406 0.09423828 0.29296875 0.04052734 -0.09326172 -0.10107422
-0.27539062 0.04394531 -0.07275391 0.13867188 0.02380371 0.13085938
0.00236511 -0.2265625 0.34765625 0.13574219 0.05224609 0.18164062
0.0402832 0.23730469 -0.16992188 0.10058594 0.03833008 0.10839844
-0.05615234 -0.00946045 0.14550781 -0.30078125 -0.32226562 0.18847656
-0.40234375 -0.3125 -0.08007812 -0.26757812 0.16699219 0.07324219
0.06347656 0.06591797 0.17285156 -0.17773438 0.00276184 -0.05761719
-0.2265625 -0.19628906 0.09667969 0.13769531 -0.49414062 -0.27929688
0.12304688 -0.30078125 0.01293945 -0.1875 -0.20898438 -0.1796875
-0.16015625 -0.03295898 0.00976562 0.25390625 -0.25195312 0.00210571
0.04296875 0.01184082 -0.20605469 0.24804688 -0.203125 -0.17773438
0.07275391 0.04541016 0.21679688 -0.2109375 0.14550781 -0.16210938
0.20410156 -0.19628906 -0.35742188 0.35742188 -0.11962891 0.35742188
0.10351562 0.07080078 -0.24707031 -0.10449219 -0.19238281 0.1484375
0.00057983 0.296875 -0.12695312 -0.03979492 0.13183594 -0.16601562
0.125 0.05126953 -0.14941406 0.13671875 -0.02075195 0.34375 ]
###Markdown
Whoops, that's really dense. Visualizing our embeddings can help us draw conclusions about our dataset and gain some insight into what our neural network is learning.T-SNE is an algorithm that reduces the dimensionality of our embedding vectors. The Google News Word2Vec embeddings are originally 300-dimensional, but T-Sne will let us view them collapsed into a 2D space based on their similarities.
###Code
from sklearn.manifold import TSNE
from matplotlib import pylab
words = [word for word in gensim_model.index2word[100:400]]
embeddings = [gensim_model[word] for word in words]
words_embedded = TSNE(n_components=2).fit_transform(embeddings)
pylab.figure(figsize=(20, 20))
for i, label in enumerate(words):
x, y = words_embedded[i, :]
pylab.scatter(x, y)
pylab.annotate(label, xy=(x, y), xytext=(5, 2), textcoords='offset points',
ha='right', va='bottom')
pylab.show()
###Output
_____no_output_____
###Markdown
Note that from looking at the graph above we can see that words with similar usages appear near each other. For example,* Words dealing with time are next to each other (minutes, hours, months, weeks, days)* Countries appear near each other (India, China, United_States)* US Presidents (Bush, Obama)* Words associated with games or sports (playing, played, player, coach, gamers, players, teams)* Numbers (2, 3, 4, 5, 6, 7, 8, 9)* Directions (North, South)* Words related to probability (whether, might, expected, never, sure, possible)Can you find any other interesting ones? Try changing the [100:400] in the cell above to try another range of values. Interesting applications of word embeddingsNow that we have some idea of what knowledge our neural network has, let's explore some of the more interesting implications. Finding similar words:
###Code
gensim_model.most_similar(positive=['Obama'])
###Output
_____no_output_____
###Markdown
Combining Words:Since our words are represented as an array of floats, we can add some together and look up what their combination is.```nature + science = biology, ecology```
###Code
gensim_model.most_similar(positive=['Paris', 'Germany'], negative=['France'])
###Output
_____no_output_____
###Markdown
Finding analogous words:We can do math with the embedding vectors for words to find analogies between words.king - man + woman = queenParis - France + Germany = BerlinTea - England + United_States = CoffeeNorth - South + West = East
###Code
gensim_model.most_similar(positive=['king', 'woman'], negative=['man'])
gensim_model.most_similar(
positive=['Paris', 'Spain'],
negative=['France'])
gensim_model.most_similar(positive=['Tea', 'United_States'], negative=['England'])
gensim_model.most_similar(positive=['North', 'West'], negative=['South'])
###Output
_____no_output_____
###Markdown
We can put these together to programmatically modify sentences and phrases word by word, with varying results:
###Code
def shift_context(sentence, from_context, to_context):
new_sentence = []
for word in sentence.split():
if word in gensim_model:
word = gensim_model.most_similar(
positive=[word, to_context], negative=[from_context])[0][0]
new_sentence.append(word)
return ' '.join(new_sentence)
sentence = 'restaurant serving coffee with cream and bread'
print(shift_context(sentence, 'regular', 'fancy'))
###Output
steakhouse served cappuccino snazzy frou_frou and baguettes
###Markdown
Unfortunately, the results are not always perfect with this approach, for example:excellent - positive + negative = ```[(u'terrific', 0.5454081296920776), (u'superb', 0.5449916124343872), (u'exceptional', 0.5175209641456604), (u'Excellent', 0.4948967695236206), (u'impeccable', 0.49398699402809143), (u'superlative', 0.4694099426269531), (u'ideal', 0.4649601876735687), (u'fantastic', 0.46219557523727417), (u'abysmal', 0.4582980275154114), (u'atrocious', 0.45645347237586975)]``` The first eight results have roughly the same meaning as excellent. An unfortunate fact is that words can have many meanings across different contexts. Using these word embeddings, ```plus - positive + negative = minus```. So the words positive and negative have become associated with mathematical sign rather than good and bad. Other Problems and Strategies We run into an issue when attempting the following:```Austin - Texas + Oregon```Which yields "Corvallis" when we should expect "Salem". One way around this is to determine what relationship we want to target, in this case the full list of US States and their Capitals, and compute the average vector between our embeddings for this relationship. Gensim will let us do this by adding more parameters to the positive and negative lists.
###Code
gensim_model.most_similar(
positive=['Austin', 'Oregon'],
negative=['Texas']
)
gensim_model.most_similar(
positive=[
'Austin', 'Atlanta', 'Nashville', 'Sacramento', 'Boston', 'Oregon'
],
negative=[
'Texas', 'Georgia', 'Tennessee', 'California', 'Massachusetts',
])
###Output
_____no_output_____ |
notebooks/Pearce Gamma t test.ipynb | ###Markdown
I'm looking into building a delta_sigma emulator. This notebook tests that the catalog ('cat') side works; then I'll make an emulator for it.
###Code
from pearce.mocks import cat_dict
import numpy as np
from os import path
from astropy.io import fits
import matplotlib
#matplotlib.use('Agg')
from matplotlib import pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set()
z_bins = np.array([0.15, 0.3, 0.45, 0.6, 0.75, 0.9])
zbin=1
zbc = (z_bins[1:]+z_bins[:-1])/2
print 1/(1+zbc)
a = 0.81120
z = 1.0/a - 1.0
###Output
_____no_output_____
###Markdown
Load up a snapshot at a redshift near the center of this bin.
###Code
print z
cosmo_params = {'simname':'chinchilla', 'Lbox':400.0, 'scale_factors':[a]}
cat = cat_dict[cosmo_params['simname']](**cosmo_params)#construct the specified catalog!
cat.load_catalog(a, particles=True, tol = 0.01, downsample_factor=1e-3)
cat.load_model(a, 'redMagic')
from astropy import cosmology
###Output
_____no_output_____
###Markdown
cat.h=1.0cosmo = cat.cosmologycat.cosmology = cosmology.FlatLambdaCDM(H0 = 100, Om0 = cosmo.Om(0))print cat.cosmology
###Code
params = cat.model.param_dict.copy()
#params['mean_occupation_centrals_assembias_param1'] = 0.0
#params['mean_occupation_satellites_assembias_param1'] = 0.0
#my clustering redmagic best fit
params['logMmin'] = 12.386
params['sigma_logM'] = 0.4111
params['f_c'] = 0.292
params['alpha'] = 1.110
params['logM1'] = 13.777
params['logM0'] = 11.43433
print params
print cat
help(cat.calc_gt)
cat.populate(params)
nd_cat = cat.calc_analytic_nd(params)
print nd_cat
fname = '/u/ki/jderose/public_html/bcc/measurement/y3/3x2pt/buzzard/flock/buzzard-2/tpt_Y3_v0.fits'
hdulist = fits.open(fname)
nz_sources = hdulist[6]
sources_zbin = 1
N_z = np.array([row[2+sources_zbin] for row in nz_sources.data])
N_total = np.sum(N_z)#*0.01
N_z/=N_total # normalize
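# N_z now sums to 1 over the source-redshift bins; the normalized N(z) is
# presumably what calc_sigma_crit_inv below averages the inverse critical
# surface density over.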
zbins = [row[0] for row in nz_sources.data]
zbins.append(nz_sources.data[-1][2])
sc_inv = cat.calc_sigma_crit_inv(zbins, N_z)
print sc_inv
zs = np.sum(zbins[:-1]*N_z)
zs
###Output
_____no_output_____
###Markdown
r_bins = np.logspace(-1.1, 1.5, 9)print r_binsprint (r_bins[1:] + r_bins[:-1])/2.0ds = cat.calc_ds_analytic(r_bins, n_cores = 2) ds plt.plot(sigma_rbins, sigma)plt.loglog() rpbc = (rp_bins[1:] + rp_bins[:-1])/2.0n_cores = 2ds_kwargs = {}ds = np.zeros_like(rpbc)small_scales = rp_bins < 1.5 smaller then an MPC, compute with ht compute the small scales using halotools, but integrate xi_mm to larger scales.ds = cat.calc_ds_analytic(rp_bins, n_cores=n_cores, **ds_kwargs)print dsif np.sum(small_scales) >0: ds_ss = cat.calc_ds(rp_bins,n_cores =n_cores, **ds_kwargs) ds[:np.sum(small_scales)-1] = ds_ss[:np.sum(small_scales)-1] print dsif np.sum(~small_scales) > 0: start_idx = np.sum(small_scales) ds_ls = cat.calc_ds_analytic(rp_bins, n_cores=n_cores, **ds_kwargs) ds[start_idx-1:] = ds_ls[start_idx-1:] print ds
###Code
sc_inv
rp_bins = np.logspace(-1.1, 1.8, 20) #binning used in buzzard mocks
#rpoints = (rp_bins[1:]+rp_bins[:-1])/2
theta_bins = np.logspace(np.log10(2.5), np.log10(250), 21)/60
#theta_bins = cat._ang_from_rp(rp_bins)
#rp_bins = cat._rp_from_ang(theta_bins)
rpoints = np.sqrt(rp_bins[1:]*rp_bins[:-1])
tpoints = np.sqrt(theta_bins[1:]*theta_bins[:-1])#(theta_bins[1:]+theta_bins[:-1])/2
###Output
_____no_output_____
###Markdown
ds = cat.calc_ds(theta_bins, angular = True, n_cores = 2)
###Code
gt = cat.calc_gt(theta_bins, 1.0, n_cores = 2)
###Output
[ 0.55742891 0.70176142 0.88346528 1.11221689 1.40019811
1.76274498 2.21916445 2.79376251 3.51713862 4.42781519
5.57428906 7.01761415 8.83465278 11.12216889 14.00198106
17.62744977 22.19164446 27.93762513 35.17138623 44.27815189
55.7428906 ]
###Markdown
sigma = cat.calc_ds_analytic(theta_bins, angular = True, n_cores =2)print sigma
###Code
from scipy.interpolate import interp1d
from scipy.integrate import quad
import pyccl as ccl
###Output
_____no_output_____
###Markdown
def calc_ds_analytic(theta_bins, angular = True, n_cores = 2, xi_kwargs = {}): calculate xi_gg first rbins = np.logspace(-1.3, 1.6, 16) xi = cat.calc_xi_gm(rbins,n_cores=n_cores, **xi_kwargs) if np.any(xi<=0): warnings.warn("Some values of xi are less than 0. Setting to a small nonzero value. This may have unexpected behavior, check your HOD") xi[xi<=0] = 1e-3 rpoints = (rbins[:-1]+rbins[1:])/2.0 xi_rmin, xi_rmax = rpoints[0], rpoints[-1] make an interpolator for integrating xi_interp = interp1d(np.log10(rpoints), np.log10(xi)) get the theotertical matter xi, for large scale estimates names, vals = cat._get_cosmo_param_names_vals() param_dict = { n:v for n,v in zip(names, vals)} if 'Omega_c' not in param_dict: param_dict['Omega_c'] = param_dict['Omega_m'] - param_dict['Omega_b'] del param_dict['Omega_m'] cosmo = ccl.Cosmology(**param_dict) big_rbins = np.logspace(1, 2.3, 21) big_rpoints = (big_rbins[1:] + big_rbins[:-1])/2.0 big_xi_rmax = big_rpoints[-1] xi_mm = ccl.correlation_3d(cosmo, cat.a, big_rpoints) xi_mm[xi_mm<0] = 1e-6 may wanna change this? xi_mm_interp = interp1d(np.log10(big_rpoints), np.log10(xi_mm)) correction factor bias = np.power(10, xi_interp(1.2)-xi_mm_interp(1.2)) rhocrit = cat.cosmology.critical_density(0).to('Msun/(Mpc^3)').value rhom = cat.cosmology.Om(0) * rhocrit * 1e-12 SM h^2/pc^2/Mpc; integral is over Mpc/h def sigma_integrand(Rz, Rp, bias, xi_interp, xi_mm_interp): Rz = np.exp(lRz) r2 = Rz**2 + Rp**2 if r2 < xi_rmin**2: return 0.0 elif xi_rmin**2 < r2 < xi_rmax**2: return 10**xi_interp(np.log10(r2)*0.5) elif r2 < big_xi_rmax**2: return bias*10 ** xi_mm_interp(np.log10(r2) * 0.5) else: return 0.0 calculate sigma first sigma_rpoints = np.logspace(-1.1, 2.2, 15) sigma = np.zeros_like(sigma_rpoints) for i, rp in enumerate(sigma_rpoints): sigma[i] = rhom*2*quad(sigma_integrand, 1e-6, 1e3, args=(rp, bias, xi_interp, xi_mm_interp))[0] sigma_interp = interp1d(np.log10(sigma_rpoints), sigma) calculate delta sigma def DS_integrand_medium_scales(R, sigma_interp): R = np.exp(lR) return R*sigma_interp(np.log10(R)) rp_bins = theta_bins if not angular else cat._rp_from_ang(theta_bins) rp_bins = np.logspace(-1.1, 2.0, 9) binning used in buzzard mocks print rp_bins rp_points = np.sqrt(rp_bins[1:]*rp_bins[:-1])(rp_bins[1:] + rp_bins[:-1])/2.0 ds = np.zeros_like(rp_points) for i, rp in enumerate(rp_points): result = quad(DS_integrand_medium_scales, sigma_rpoints[0], rp, args=(sigma_interp,))[0] ds[i] = result * 2 / (rp ** 2) - sigma_interp(np.log10(rp)) return ds (rpoints, xi), (big_rpoints, xi_mm) = calc_ds_analytic(theta_bins)sigma = calc_ds_analytic(theta_bins)ds_ls = calc_ds_analytic(theta_bins, angular = True) sigma_rpoints = np.logspace(-1.1, 2.2, 15)sigma_rp_bins = np.logspace(-1.1, 1.5, 9) binning used in buzzard mockssigma_rpoints = (sigma_rp_bins[1:]+sigma_rp_bins[:-1])/2plt.plot(sigma_rpoints, sigma)plt.loglog()plt.xscale('log') rp_bins = np.logspace(-1.1, 2.0, 9) binning used in buzzard mocksrpoints = (rp_bins[1:]+rp_bins[:-1])/2 rp_bins = cat._rp_from_ang(theta_bins)print rp_binsds_ss = cat.calc_ds(theta_bins, angular = True,n_cores =2)/cat.h rp_bins = cat._rp_from_ang(theta_bins)print rp_bins TODO my own rp_binsrpbc = (rp_bins[1:]+rp_bins[:-1])/2.0ds = np.zeros_like(rpbc)small_scales = rp_bins < 10 smaller then an MPC, compute with htprint small_scales compute the small scales using halotools, but integrate xi_mm to larger scales.start_idx = np.sum(small_scales) sigma_prime = np.zeros_like(rpbc)sigma_prime[:start_idx-1] = 
ds_ss[:start_idx-1]sigma_prime[start_idx-1:] = ds_ls[start_idx-1:] plt.plot(tpoints, ds_ls)plt.plot(tpoints, ds_ss)/cat.hplt.plot(tpoints, gt)plt.plot(tpoints, sigma_prime)plt.loglog();
###Code
tbins = (theta_bins[1:]+theta_bins[:-1])/2.0
plt.plot(tbins*60, gt)
plt.ylabel(r'$\gamma_t(\theta)$')
plt.xlabel(r'$\theta$ [Arcmin]')
plt.loglog()
gt_data = hdulist[3].data
gt_rm, gt_bc = [],[]
for i, row in enumerate(gt_data):
if i == 20:
break
gt_rm.append(row[3])#gt_data[3,:20]
gt_bc.append(row[4])
print gt_bc
print tbins*60
gt_rm, gt
plt.plot(gt_bc, gt_rm)
plt.plot(tbins*60, sc_inv*gt)#/cat.h)
plt.ylabel(r'$\gamma_t(\theta)$')
plt.xlabel(r'$\theta$ [Arcmin]')
plt.loglog()
###Output
_____no_output_____ |
code/pix2pixHD.ipynb | ###Markdown
pix2pixHD
###Code
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.utils
from torch.nn.utils import spectral_norm
import torch.nn.functional as F
from torch.autograd import Variable
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import os
from torch.utils.data import DataLoader
from ds import MIBIDataset
import torch.nn.utils.spectral_norm as spectral_norm
from utilities import weights_init, seg_show
normalize = matplotlib.colors.Normalize(vmin=0, vmax=1)
torch.cuda.set_device(0)
gpu_available = True
channel_names = ["Pan-Keratin", "EGFR", "Beta catenin", "dsDNA",
"Ki67", "CD3", "CD8", "CD4", "FoxP3", "MPO", "HLA-DR",
"HLA_Class_1", "CD209", "CD11b", "CD11c", "CD68", "CD63",
"Lag3", "PD1", "PD-L1", "IDO", "Vimentin", "SMA", "CD31"]
###Output
_____no_output_____
###Markdown
Parameters
###Code
# Learning rate for optimizers
batch_size = 32
nz = hidden_size = 128
kernel = 3
# Number of input channels (later to be number of classes)
num_chan = 18
# Size of feature maps in discriminator
ndf = 32
# Output dimension
nc = 24
# Beta1 hyperparam for Adam optimizers
beta1 = 0.5
c24_idx = np.array(range(25))
c24_idx = np.delete(c24_idx, 4)
###Output
_____no_output_____
###Markdown
Read In Data
###Code
# train data
cells_seg = []
cells_real = []
keys = []
path = './data'
filelist = os.listdir(path + '/train')
for i in range(len(filelist)): # each file has both counts and cells
patch = path + '/train/cell_' + str(i) + '.npy'
cells_seg.append(np.load(patch)[0])
cells_real.append(np.load(patch)[1][c24_idx])
print('number of total cells: %d' % (len(cells_seg)))
cells_seg = np.array(cells_seg)
empty = np.less(np.sum(cells_seg, axis=1, keepdims=True), 0.5).astype(np.float32)
cells_seg = np.concatenate([cells_seg, empty], axis=1)
cells_real = np.array(cells_real)
cells = np.array([[cells_seg[i], cells_real[i]] for i in range(len(cells_seg))])
train_set_loader = DataLoader(MIBIDataset(cells), batch_size=batch_size,
shuffle=True, num_workers=4, pin_memory=gpu_available)
# test data
cells_seg = []
cells_real = []
keys = []
path = './data'
filelist = os.listdir(path + '/test')
for i in range(len(filelist)): # each file has both counts and cells
patch = path + '/test/cell_' + str(i) + '.npy'
cells_seg.append(np.load(patch)[0])
cells_real.append(np.load(patch)[1][c24_idx])
print('number of total cells: %d' % (len(cells_seg)))
cells_seg = np.array(cells_seg)
empty = np.less(np.sum(cells_seg, axis=1, keepdims=True), 0.5).astype(np.float32)
cells_seg = np.concatenate([cells_seg, empty], axis=1)
cells_real = np.array(cells_real)
cells = np.array([[cells_seg[i], cells_real[i]] for i in range(len(cells_seg))])
test_set_loader = DataLoader(MIBIDataset(cells), batch_size=1,
shuffle=True, num_workers=4, pin_memory=gpu_available)
###Output
_____no_output_____
###Markdown
Coarse-to-Fine Generator
###Code
# Define a resnet block
class ResnetBlock(nn.Module):
def __init__(self, dim, padding_type, norm_layer, activation=nn.ReLU(True), use_dropout=False):
super(ResnetBlock, self).__init__()
self.conv_block = self.build_conv_block(dim, padding_type, norm_layer, activation, use_dropout)
def build_conv_block(self, dim, padding_type, norm_layer, activation, use_dropout):
conv_block = []
p = 0
if padding_type == 'reflect':
conv_block += [nn.ReflectionPad2d(1)]
elif padding_type == 'replicate':
conv_block += [nn.ReplicationPad2d(1)]
elif padding_type == 'zero':
p = 1
else:
raise NotImplementedError('padding [%s] is not implemented' % padding_type)
conv_block += [nn.Conv2d(dim, dim, kernel_size=3, padding=p),
norm_layer(dim),
activation]
if use_dropout:
conv_block += [nn.Dropout(0.5)]
p = 0
if padding_type == 'reflect':
conv_block += [nn.ReflectionPad2d(1)]
elif padding_type == 'replicate':
conv_block += [nn.ReplicationPad2d(1)]
elif padding_type == 'zero':
p = 1
else:
raise NotImplementedError('padding [%s] is not implemented' % padding_type)
conv_block += [nn.Conv2d(dim, dim, kernel_size=3, padding=p),
norm_layer(dim)]
return nn.Sequential(*conv_block)
def forward(self, x):
out = x + self.conv_block(x)
return out
class GlobalGenerator(nn.Module):
def __init__(self, input_nc, output_nc, ngf=64, n_downsampling=2, n_blocks=4, norm_layer=nn.BatchNorm2d,
padding_type='reflect'):
assert(n_blocks >= 0)
super(GlobalGenerator, self).__init__()
activation = nn.ReLU(True)
model = [nn.ReflectionPad2d(3), nn.Conv2d(input_nc, ngf, kernel_size=7, padding=0), norm_layer(ngf), activation]
### downsample
for i in range(n_downsampling):
mult = 2**i
model += [nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1),
norm_layer(ngf * mult * 2), activation]
### resnet blocks
mult = 2**n_downsampling
for i in range(n_blocks):
model += [ResnetBlock(ngf * mult, padding_type=padding_type, activation=activation, norm_layer=norm_layer)]
### upsample
for i in range(n_downsampling):
mult = 2**(n_downsampling - i)
model += [nn.ConvTranspose2d(ngf * mult, int(ngf * mult / 2), kernel_size=3, stride=2, padding=1, output_padding=1),
norm_layer(int(ngf * mult / 2)), activation]
model += [nn.ReflectionPad2d(3), nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0), nn.Sigmoid()]
self.model = nn.Sequential(*model)
def forward(self, input):
return self.model(input)
class LocalEnhancer(nn.Module):
def __init__(self, input_nc, output_nc, ngf=32, n_downsample_global=2, n_blocks_global=4,
n_local_enhancers=1, n_blocks_local=3, norm_layer=nn.BatchNorm2d, padding_type='reflect'):
super(LocalEnhancer, self).__init__()
self.n_local_enhancers = n_local_enhancers
###### global generator model #####
ngf_global = ngf * (2**n_local_enhancers)
model_global = GlobalGenerator(input_nc, output_nc, ngf_global, n_downsample_global, n_blocks_global, norm_layer).model
model_global = [model_global[i] for i in range(len(model_global)-3)] # get rid of final convolution layers
self.model = nn.Sequential(*model_global)
###### local enhancer layers #####
for n in range(1, n_local_enhancers+1):
### downsample
ngf_global = ngf * (2**(n_local_enhancers-n))
model_downsample = [nn.ReflectionPad2d(3), nn.Conv2d(input_nc, ngf_global, kernel_size=7, padding=0),
norm_layer(ngf_global), nn.ReLU(True),
nn.Conv2d(ngf_global, ngf_global * 2, kernel_size=3, stride=2, padding=1),
norm_layer(ngf_global * 2), nn.ReLU(True)]
### residual blocks
model_upsample = []
for i in range(n_blocks_local):
model_upsample += [ResnetBlock(ngf_global * 2, padding_type=padding_type, norm_layer=norm_layer)]
### upsample
model_upsample += [nn.ConvTranspose2d(ngf_global * 2, ngf_global, kernel_size=3, stride=2, padding=1, output_padding=1),
norm_layer(ngf_global), nn.ReLU(True)]
### final convolution
if n == n_local_enhancers:
model_upsample += [nn.ReflectionPad2d(3), nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0), nn.Sigmoid()]
setattr(self, 'model'+str(n)+'_1', nn.Sequential(*model_downsample))
setattr(self, 'model'+str(n)+'_2', nn.Sequential(*model_upsample))
self.downsample = nn.AvgPool2d(3, stride=2, padding=[1, 1], count_include_pad=False)
def forward(self, input):
### create input pyramid
input_downsampled = [input]
for i in range(self.n_local_enhancers):
input_downsampled.append(self.downsample(input_downsampled[-1]))
### output at coarsest level
output_prev = self.model(input_downsampled[-1])
### build up one layer at a time
for n_local_enhancers in range(1, self.n_local_enhancers+1):
model_downsample = getattr(self, 'model'+str(n_local_enhancers)+'_1')
model_upsample = getattr(self, 'model'+str(n_local_enhancers)+'_2')
input_i = input_downsampled[self.n_local_enhancers-n_local_enhancers]
output_prev = model_upsample(model_downsample(input_i) + output_prev)
return output_prev
###Output
_____no_output_____
###Markdown
Discriminator
###Code
class DiscriminatorBase(nn.Module):
def __init__(self,
ndf=ndf,
num_chan=num_chan,
batch_size=batch_size,
nz=nz,
nc=nc):
super(DiscriminatorBase, self).__init__()
self.layer1 = nn.Sequential(
# input is (nc) x 64 x 64
nn.Conv2d(num_chan + nc, ndf, kernel_size=4, stride=2, padding=1),
nn.InstanceNorm2d(ndf),
nn.LeakyReLU(0.2, inplace=False))
self.layer2 = nn.Sequential(
# state size. (ndf) x 32 x 32
nn.Conv2d(ndf, ndf * 2, kernel_size=4, stride=2, padding=1),
nn.InstanceNorm2d(ndf * 2),
nn.LeakyReLU(0.2, inplace=False))
self.layer3 = nn.Sequential(
# state size. (ndf*2) x 16 x 16
nn.Conv2d(ndf * 2, ndf * 4, kernel_size=4, stride=2, padding=1),
nn.InstanceNorm2d(ndf * 4),
nn.LeakyReLU(0.2, inplace=False))
self.layer4 = nn.Sequential(
# state size. (ndf*4) x 8 x 8
nn.Conv2d(ndf * 4, ndf * 8, kernel_size=4, stride=2, padding=1),
nn.InstanceNorm2d(ndf * 8),
nn.LeakyReLU(0.2, inplace=False))
self.layer5 = nn.Sequential(
nn.Conv2d(ndf * 8, ndf * 16, kernel_size=4, stride=2, padding=1),
nn.InstanceNorm2d(ndf * 16),
nn.LeakyReLU(0.2, inplace=False))
self.layer6 = nn.Sequential(
# state size. (ndf*8) x 4 x 4
nn.Conv2d(ndf * 16, 1, kernel_size=3, stride=2, padding=1),
nn.Sigmoid()
)
self.feature_maps = []
def forward(self, input, X_seg):
x = torch.cat([input, X_seg], dim=1)
x = self.layer1(x)
self.feature_maps.append(x)
x = self.layer2(x)
self.feature_maps.append(x)
x = self.layer3(x)
self.feature_maps.append(x)
x = self.layer4(x)
self.feature_maps.append(x)
x = self.layer5(x)
self.feature_maps.append(x)
x = self.layer6(x)
# self.feature_maps.append(x)
return x
def reset(self):
self.feature_maps = []
###Output
_____no_output_____
###Markdown
Training Loop
###Code
netG = LocalEnhancer(num_chan,nc).float().cuda()
netD = DiscriminatorBase().float().cuda()
optimizerG = optim.Adam(netG.parameters(), lr=0.0002)
optimizerD = optim.Adam(netD.parameters(), lr=0.0002)
netG.apply(weights_init)
netD.apply(weights_init)
# Initialize loss functions
criterionG = nn.MSELoss()
criterionD = nn.BCELoss()
# Establish convention for real and fake labels during training
real_label = 1
fake_label = 0
print("Initialized")
# Training Loop
num_epochs = 120
# Lists to keep track of progress
img_list = []
G_losses = []
D_losses = []
R_losses = []
iters = 0
d_iters = 1
g_iters = 1
print("Starting Training Loop...")
# For each epoch
for epoch in range(1, num_epochs):
if epoch % 100 == 0:
for param_group in optimizerG.param_groups:
param_group['lr'] /= 2
for param_group in optimizerD.param_groups:
param_group['lr'] /= 2
for idx, data in enumerate(train_set_loader):
X_seg, X_real = data
X_seg = torch.clamp(X_seg.transpose(2,1), 0, 1).float().cuda()
X_real = X_real.transpose(2,1).float().cuda()
## Train with all-real batch
for _ in range(d_iters):
netD.zero_grad()
output = netD(X_real, X_seg)
label = torch.full(output.size(), real_label).cuda()
errD_real = criterionD(output, label)
errD_real.backward()
D_x = output.mean().item()
fake = netG(X_seg.detach())
label.fill_(fake_label)
output = netD(fake.detach(), X_seg.detach())
errD_fake = criterionD(output, label)
errD_fake.backward()
D_G_z1 = output.mean().item()
errD = errD_real + errD_fake
# if idx % 2 == 0:
optimizerD.step()
netD.zero_grad()
netD.reset()
for _ in range(g_iters):
netG.zero_grad()
fake = netG(X_seg)
label.fill_(real_label)
output = netD(fake, X_seg.detach())
errG = criterionD(output, label)
D_fm_fake = netD.feature_maps # fake feature map
netD.reset()
output_real = netD(X_real.detach(), X_seg.detach()).view(-1)
D_fm_real = netD.feature_maps # real feature map
netD.reset()
D_fm_loss = 0
for i in range(len(D_fm_fake)):
D_fm_loss += nn.L1Loss()(D_fm_fake[i], D_fm_real[i])
r_loss = D_fm_loss
Lambda = 10
errG += Lambda*D_fm_loss
errG.backward()
D_G_z2 = output.mean().item()
optimizerG.step()
if idx % 10 == 0:
print('[%d/%d][%d/%d]\tLoss_D: %.4f\tLoss_G: %.4f\tLoss_R: %.4f\tD(x): %.4f\tD(G(z)): %.4f / %.4f'
% (epoch, num_epochs, idx, len(train_set_loader),
errD.item(), errG.item(), r_loss.item(), D_x, D_G_z1, D_G_z2))
# Save Losses for plotting later
G_losses.append(errG.item())
D_losses.append(errD.item())
if epoch % 5 == 1:
fig=plt.figure(figsize=(2.5, 2.5))
print("Segmentation: ")
plt.imshow(seg_show(X_seg.detach().cpu().numpy()[0]))
plt.show()
fig=plt.figure(figsize=(16, 10))
columns = 7
rows = 4
print("Fake: ")
for i in range(24):
fig.add_subplot(rows, columns, i+1)
plt.title(channel_names[i])
plt.imshow(fake.detach().cpu().numpy()[0][i],cmap='hot', interpolation='nearest')
plt.show()
fig=plt.figure(figsize=(16, 10))
columns = 7
rows = 4
print("Real: ")
for i in range(24):
fig.add_subplot(rows, columns, i+1)
plt.title(channel_names[i])
plt.imshow( X_real[0,i,:,:].detach().cpu().numpy(),cmap='hot', interpolation='nearest')
plt.show()
fig=plt.figure(figsize=(16, 10))
columns = 7
rows = 4
print("Scaled Fake: ")
for i in range(24):
fig.add_subplot(rows, columns, i+1)
plt.title(channel_names[i])
plt.imshow(fake.detach().cpu().numpy()[0][i],cmap='hot', interpolation='nearest', norm=normalize)
plt.show()
fig=plt.figure(figsize=(16, 10))
columns = 7
rows = 4
print("Scaled Real: ")
for i in range(24):
fig.add_subplot(rows, columns, i+1)
plt.title(channel_names[i])
plt.imshow(X_real[0,i,:,:].detach().cpu().numpy(),cmap='hot', interpolation='nearest', norm=normalize)
plt.show()
if epoch % 20 == 1:
print("============================")
print("test cell")
print("============================")
for idx, data in enumerate(test_set_loader):
X_seg, X_real = data
X_seg = torch.clamp(X_seg.transpose(2,1), 0, 1).float().cuda()
X_real = X_real.transpose(2,1).float().cuda()
noise = 0.5 * torch.randn(X_seg.size()[0], 128).cuda()
break
fake = netG(X_seg.detach())
fig=plt.figure(figsize=(2.5, 2.5))
print("Segmentation: ")
plt.imshow(seg_show(X_seg.detach().cpu().numpy()[0]))
plt.show()
fig=plt.figure(figsize=(16, 10))
columns = 7
rows = 4
print("Fake: ")
for i in range(24):
fig.add_subplot(rows, columns, i+1)
plt.title(channel_names[i])
plt.imshow(fake.detach().cpu().numpy()[0][i],cmap='hot', interpolation='nearest')
plt.show()
fig=plt.figure(figsize=(16, 10))
columns = 7
rows = 4
print("Real: ")
for i in range(24):
fig.add_subplot(rows, columns, i+1)
plt.title(channel_names[i])
plt.imshow( X_real[0,i,:,:].detach().cpu().numpy(),cmap='hot', interpolation='nearest')
plt.show()
fig=plt.figure(figsize=(16, 10))
columns = 7
rows = 4
print("Scaled Fake: ")
for i in range(24):
fig.add_subplot(rows, columns, i+1)
plt.title(channel_names[i])
plt.imshow(fake.detach().cpu().numpy()[0][i],cmap='hot', interpolation='nearest', norm=normalize)
plt.show()
fig=plt.figure(figsize=(16, 10))
columns = 7
rows = 4
print("Scaled Real: ")
for i in range(24):
fig.add_subplot(rows, columns, i+1)
plt.title(channel_names[i])
plt.imshow(X_real[0,i,:,:].detach().cpu().numpy(),cmap='hot', interpolation='nearest', norm=normalize)
plt.show()
###Output
_____no_output_____
###Markdown
Save & Load Model
###Code
state = {
'epoch': epoch,
'G': netG.state_dict(),
'optimizerG': optimizerG.state_dict(),
'D' : netD.state_dict(),
'optimizerD' : optimizerD.state_dict()
}
torch.save(state, './model/baseline_pix2pixHD')
netG = LocalEnhancer(num_chan,nc).float().cuda()
state = torch.load('./model/baseline_pix2pixHD')
netG.load_state_dict(state['G'])
###Output
_____no_output_____
###Markdown
Reconstruction Metrics Adjusted L1
###Code
# Adjust L1
Loss = 0
AdjLoss = 0
for idx, data in enumerate(test_set_loader):
X_seg, X_real = data
X_seg = torch.clamp(X_seg.transpose(2,1), 0, 1).float().cuda()
X_real = X_real.transpose(2,1).float().cuda()
X_mask = (1 - X_seg[:,-1]).unsqueeze(1)
noise = 0.5 * torch.randn(X_seg.size()[0], 128).cuda()
fake = netG(X_seg).detach()
outside = (1 - X_mask) * fake
B,C = X_real.size()[:2]
real_data = (X_mask*X_real).view(B, C, -1)
fake_data = (X_mask*fake).view(B, C, -1)
# print(real_data.shape)
real_rank, _ = torch.sort(real_data, dim=2)
fake_rank, _ = torch.sort(fake_data, dim=2)
Loss += nn.L1Loss()(real_rank[:], fake_rank[:])
AdjLoss += nn.L1Loss()(real_rank[:], fake_rank[:])
AdjLoss += nn.L1Loss()(outside[:], torch.zeros_like(outside[:]))
print('Adjust L1 Metric:', AdjLoss.item())
print('Pure L1 Metric:', Loss.item())
###Output
Adjust L1 Metric: 0.8746554255485535
Pure L1 Metric: 0.7446504831314087
###Markdown
Adjusted MSE
###Code
# Adjust MSE
Loss = 0
AdjLoss = 0
for idx, data in enumerate(test_set_loader):
X_seg, X_real = data
X_seg = torch.clamp(X_seg.transpose(2,1), 0, 1).float().cuda()
X_real = X_real.transpose(2,1).float().cuda()
X_mask = (1 - X_seg[:,-1]).unsqueeze(1)
noise = 0.5 * torch.randn(X_seg.size()[0], 128).cuda()
fake = netG(X_seg).detach()
outside = (1 - X_mask) * fake
B,C = X_real.size()[:2]
real_data = (X_mask*X_real).view(B, C, -1)
fake_data = (X_mask*fake).view(B, C, -1)
# print(real_data.shape)
real_rank, _ = torch.sort(real_data, dim=2)
fake_rank, _ = torch.sort(fake_data, dim=2)
Loss += nn.MSELoss()(real_rank[:], fake_rank[:])
AdjLoss += nn.MSELoss()(real_rank[:], fake_rank[:])
AdjLoss += nn.MSELoss()(outside[:], torch.zeros_like(outside[:]))
print('Adjust MSE Metric:', AdjLoss.item())
print('Pure MSE Metric:', Loss.item())
###Output
Adjust MSE Metric: 0.06048102304339409
Pure MSE Metric: 0.05491497740149498
###Markdown
SSIM
###Code
cells_seg_list = []
cells_real_list = []
download_path = './data/benchmark_p14'
filelist = os.listdir(download_path)
for i in range(len(filelist)):
patch = download_path + '/cell_' + str(i) + '.npy'
cells_seg_list.append(np.load(patch)[0])
cells_real_list.append(np.load(patch)[1])
from skimage.measure import compare_ssim
ssim_score = 0
ssim_channels = np.zeros(nc)
for c in range(len(cells_seg_list)):
seg_list = cells_seg_list[c]
real_list = cells_real_list[c]
n_cell = len(seg_list)
noise = 0.5 * torch.randn(1, 128).cuda()
# seg_test = 0
seg_test = np.sum(seg_list, axis=0)
empty = np.less(np.sum(seg_test, axis=0, keepdims=True), 0.5).astype(np.float32)
seg_test_18 = np.concatenate([seg_test, empty], axis=0)
X_seg_0 = torch.Tensor(seg_test_18).unsqueeze(0).cuda()
# fake = netG(noise, X_seg_0).detach().cpu().numpy()[0]
fake = netG(X_seg_0).detach().cpu().numpy()[0]
# fake = netF(fake, X_seg_0)
for j in range(nc):
fake_i = fake[j].astype(float)
real_i = np.sum(real_list, axis=0)[j].astype(float) # i-th cell, j-th channel
# plt.imshow(fake_i)
# plt.show()
# plt.imshow(real_i)
# plt.show()
ssim_score += compare_ssim(fake_i, real_i)/nc
ssim_channels[j] += compare_ssim(fake_i, real_i)
print('ssim score:', ssim_score/len(cells_seg_list))
for j in range(nc):
print(channel_names[j], ssim_channels[j]/len(cells_seg_list))
###Output
/home/ubuntu/anaconda3/lib/python3.6/site-packages/skimage/util/arraycrop.py:177: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
cropped = ar[slices]
###Markdown
Cell Based MI
###Code
def mutual_information(hgram):
""" Mutual information for joint histogram
"""
# Convert bins counts to probability values
pxy = hgram / float(np.sum(hgram))
px = np.sum(pxy, axis=1) # marginal for x over y
py = np.sum(pxy, axis=0) # marginal for y over x
px_py = px[:, None] * py[None, :] # Broadcast to multiply marginals
# Now we can do the calculation using the pxy, px_py 2D arrays
nzs = pxy > 0 # Only non-zero pxy values contribute to the sum
return np.sum(pxy[nzs] * np.log(pxy[nzs] / px_py[nzs]))
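# Added sanity check (not part of the original notebook): the MI of an image
# with itself should be much larger than its MI with unrelated noise.
_rng = np.random.RandomState(0)
_img, _noise = _rng.rand(64 * 64), _rng.rand(64 * 64)
_h_self, _, _ = np.histogram2d(_img, _img, bins=20)
_h_rand, _, _ = np.histogram2d(_img, _noise, bins=20)
assert mutual_information(_h_self) > mutual_information(_h_rand)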
total_mi = 0
n_bin = 50
mi_channels = np.zeros(nc)
for c in range(len(cells_seg_list)):
seg_list = cells_seg_list[c]
real_list = cells_real_list[c]
n_cell = len(seg_list)
noise = 0.5 * torch.randn(1, 128).cuda()
# seg_test = 0
seg_test = np.sum(seg_list, axis=0)
empty = np.less(np.sum(seg_test, axis=0, keepdims=True), 0.5).astype(np.float32)
seg_test_18 = np.concatenate([seg_test, empty], axis=0)
X_seg_0 = torch.Tensor(seg_test_18).unsqueeze(0).cuda()
fake = netG(X_seg_0).detach().cpu().numpy()[0]
# fake = netG(noise, X_seg_0)
# fake = netF(fake, X_seg_0).detach().cpu().numpy()[0]
for i in range(n_cell):
mask = seg_list[i].sum(axis=0)
for j in range(nc):
fake_i = (fake*mask)[j].flatten()
real_i = real_list[i][j].flatten() # i-th cell, j-th channel
hist_2d, _, _ = np.histogram2d(fake_i, real_i, bins=n_bin)
mi = mutual_information(hist_2d)
mi_channels[j] += mi/n_cell/nc
total_mi += mi/n_cell/nc
print('mutual information:', total_mi)
for j in range(nc):
print(channel_names[j], mi_channels[j])
###Output
mutual information: 9.262140659229612
Pan-Keratin 0.2520676637337301
EGFR 0.0009671456468982036
Beta catenin 0.36517683007489893
dsDNA 5.034599212339156
Ki67 0.006010974279703281
CD3 0.008652149683722004
CD8 0.056735734236278076
CD4 0.00033748009569806696
FoxP3 0.0
MPO 0.0002038725369066366
HLA-DR 0.13848432858646556
HLA_Class_1 3.2655416909327064
CD209 0.0
CD11b 0.0
CD11c 0.0022429745759158974
CD68 0.009715378147149971
CD63 0.0058574042665546985
Lag3 0.00020710560893043997
PD1 0.005851109218116845
PD-L1 0.015067493357541848
IDO 0.0003109657826317863
Vimentin 0.08265909396139536
SMA 0.01130393563923195
CD31 0.00014811652598078196
###Markdown
Center of Mass
###Code
cells_seg_list = []
cells_real_list = []
download_path = './data/cd8_test_c24'
filelist = os.listdir(download_path)
for i in range(len(filelist)):
patch = download_path + '/cell_' + str(i) + '.npy'
cells_seg_list.append(np.load(patch)[0])
cells_real_list.append(np.load(patch)[1])
cm_score = 0
for c in range(len(cells_seg_list)):
seg_list = cells_seg_list[c]
real_list = cells_real_list[c]
n_cell = len(seg_list)
xy = np.mgrid[0:64,0:64]
noise = 0.5 * torch.randn(1, 128).cuda()
# seg_test = 0
seg_test = np.sum(seg_list, axis=0)
empty = np.less(np.sum(seg_test, axis=0, keepdims=True), 0.5).astype(np.float32)
seg_test_18 = np.concatenate([seg_test, empty], axis=0)
X_seg_0 = torch.Tensor(seg_test_18).unsqueeze(0).cuda()
fake = netG(X_seg_0).detach().cpu().numpy()[0]
fake_tcell = fake * seg_list[0].sum(axis=0) # only consider the expression in T cells
fake_tumor = fake * (seg_test-seg_list[0]).sum(axis=0) # only consider the expression in tumor cells
cm_tumor_y = (xy[0]*fake_tumor).sum(axis=(1,2)) / (fake_tumor.sum(axis=(1,2))+1e-15)
cm_tumor_x = (xy[1]*fake_tumor).sum(axis=(1,2)) / (fake_tumor.sum(axis=(1,2))+1e-15)
cm_tumor = np.array([cm_tumor_x, cm_tumor_y])
tcell_mask = seg_list[0].sum(axis=0)
tcell_seg = np.array(np.where(tcell_mask==1))
dist = np.linalg.norm(tcell_seg - cm_tumor[:, 19:20], axis=0)
cm_idx = np.argmin(dist)
cm_tumor_incell = tcell_seg[:, cm_idx]
cm_tcell_y = (xy[0]*fake_tcell).sum(axis=(1,2)) / (fake_tcell.sum(axis=(1,2))+1e-15)
cm_tcell_x = (xy[1]*fake_tcell).sum(axis=(1,2)) / (fake_tcell.sum(axis=(1,2))+1e-15)
cm_tcell = np.array([cm_tcell_x, cm_tcell_y])
cm_score += np.linalg.norm(cm_tcell[:,18] - cm_tumor_incell[:], ord=2)
# cm_score += np.linalg.norm(cm_tcell[:,19] - cm_tumor[:,20], ord=2)
print('center of mass score:', cm_score / len(cells_seg_list))
###Output
center of mass score: 12.81043567516133
###Markdown
EM Distance
###Code
from scipy.stats import wasserstein_distance
def cart2pol(x, y):
rho = np.sqrt(x**2 + y**2)
phi = np.arctan2(y, x)
if phi < 0:
phi = 2*np.pi + phi
return (rho, phi)
def compute_histogram(img, divider=30, size = 64, offset=32):
histogram = np.zeros([divider])
for i in range(size):
for j in range(size):
x = j - offset
y = offset - i
rho, phi = cart2pol(x,y)
# normalize to [0,divider]
degree = divider * phi/(2*np.pi)
index = int(np.floor(degree))
histogram[index] += img[i,j]
return histogram
###Output
_____no_output_____
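###Markdown
Before scoring, here is a small illustration (added for clarity; the blobs below are synthetic, not MIBI data) of how `compute_histogram` and `wasserstein_distance` are combined in the cells that follow: moving expression around the cell boundary shifts mass between angular bins, which the EMD then picks up.
###Code
img_right = np.zeros((64, 64)); img_right[30:34, 50:54] = 1.0  # bright spot to the right of center
img_left = np.zeros((64, 64)); img_left[30:34, 10:14] = 1.0    # bright spot to the left of center
h_right, h_left = compute_histogram(img_right), compute_histogram(img_left)
print('EMD between opposite spots:', wasserstein_distance(h_right, h_left))
print('EMD of a histogram with itself:', wasserstein_distance(h_right, h_right))
###Output
_____no_output_____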
###Markdown
EMD score
###Code
# EM score threshold
em_score = 0
direct_right = 0
direct_wrong = 0
direct_all = 0
for c in range(len(cells_seg_list)):
seg_list = cells_seg_list[c]
real_list = cells_real_list[c]
n_cell = len(seg_list)
noise = 0.5 * torch.randn(1, 128).cuda()
seg_test = 0
xy = np.mgrid[0:64,0:64]
centroids_seg = []
centroids_tcell = []
centroids_tumor = []
express_tcell = []
for i in range(n_cell):
# non-weighted centroid of segmentation
cy = np.where(seg_list[i]==1)[1].mean()
cx = np.where(seg_list[i]==1)[2].mean()
centroids_seg.append(np.array([cx, cy]))
# weighted centroid of T cells
seg_test += seg_list[i]
empty = np.less(np.sum(seg_test, axis=0, keepdims=True), 0.5).astype(np.float32)
seg_test_18 = np.concatenate([seg_test, empty], axis=0)
X_seg_0 = torch.Tensor(seg_test_18).unsqueeze(0).cuda()
fake = netG(X_seg_0).detach().cpu().numpy()[0]
fake_tcell = fake * seg_list[0].sum(axis=0) # only consider the expression in T cells
fake_tumor = fake * seg_list[i].sum(axis=0) # only consider the expression in tumor cells
# fake_tumor = fake * (seg_test-seg_list[0]).sum(axis=0)
histo_cur = compute_histogram(fake_tcell[18])
# weighted centroid of T cells
cy_all = (xy[0]*fake_tcell).sum(axis=(1,2)) / (fake_tcell.sum(axis=(1,2))+1e-15)
cx_all = (xy[1]*fake_tcell).sum(axis=(1,2)) / (fake_tcell.sum(axis=(1,2))+1e-15)
centroids_tcell.append(np.array([cx_all, cy_all]))
# weighted centroid of tumor cells
cy_all = (xy[0]*fake_tumor).sum(axis=(1,2)) / (fake_tumor.sum(axis=(1,2))+1e-15)
cx_all = (xy[1]*fake_tumor).sum(axis=(1,2)) / (fake_tumor.sum(axis=(1,2))+1e-15)
centroids_tumor.append(np.array([cx_all, cy_all]))
# print(fake_tumor[19].sum())
if i > 0 and fake_tumor[19].sum() > 1e-4:
direct_all += 1
# angle between the direction to the tumor and the T-cell displacement
v1 = (centroids_tumor[i][:,19] - centroids_seg[0]) # vector from the T-cell segmentation centroid to the PD-L1-weighted tumor centroid
v2 = (centroids_tcell[i][:,18] - centroids_tcell[i-1][:,18]) # displacement of the PD1-weighted T-cell centroid
if i == 1: # no previous weighted centroid exists yet, so measure from the segmentation centroid
v2 = (centroids_tcell[i][:,18] - centroids_seg[0])
cos_theta = np.dot(v1,v2)/np.linalg.norm(v1)/np.linalg.norm(v2)
em_dist = wasserstein_distance(histo_cur, histo_pre)
if cos_theta > 0:
direct_right += 1
elif cos_theta < 0:
direct_wrong += 1
if cos_theta > 0 and histo_cur.sum() > histo_pre.sum(): # threshold the theta / check the expression level
em_score += np.linalg.norm(v2) * em_dist
elif cos_theta < 0:
em_score -= np.linalg.norm(v2) * em_dist
histo_pre = histo_cur.copy()
print("Direction: right:{}, wrong:{}, total:{}".format(direct_right, direct_wrong, direct_all))
print("em_score:{}".format(em_score))
###Output
Direction: right:670, wrong:595, total:1265
em_score:-12.243304561071165
###Markdown
Positive EMD score
###Code
# EM score threshold
em_score = 0
direct_right = 0
direct_wrong = 0
direct_all = 0
for c in range(len(cells_seg_list)):
seg_list = cells_seg_list[c]
real_list = cells_real_list[c]
n_cell = len(seg_list)
noise = 0.5 * torch.randn(1, 128).cuda()
seg_test = 0
xy = np.mgrid[0:64,0:64]
centroids_seg = []
centroids_tcell = []
centroids_tumor = []
express_tcell = []
for i in range(n_cell):
# non-weighted centroid of segmentation
cy = np.where(seg_list[i]==1)[1].mean()
cx = np.where(seg_list[i]==1)[2].mean()
centroids_seg.append(np.array([cx, cy]))
# weighted centroid of T cells
seg_test += seg_list[i]
empty = np.less(np.sum(seg_test, axis=0, keepdims=True), 0.5).astype(np.float32)
seg_test_18 = np.concatenate([seg_test, empty], axis=0)
X_seg_0 = torch.Tensor(seg_test_18).unsqueeze(0).cuda()
fake = netG(X_seg_0).detach().cpu().numpy()[0]
fake_tcell = fake * seg_list[0].sum(axis=0) # only consider the expression in T cells
fake_tumor = fake * seg_list[i].sum(axis=0) # only consider the expression in tumor cells
# fake_tumor = fake * (seg_test-seg_list[0]).sum(axis=0)
histo_cur = compute_histogram(fake_tcell[18])
# weighted centroid of T cells
cy_all = (xy[0]*fake_tcell).sum(axis=(1,2)) / (fake_tcell.sum(axis=(1,2))+1e-15)
cx_all = (xy[1]*fake_tcell).sum(axis=(1,2)) / (fake_tcell.sum(axis=(1,2))+1e-15)
centroids_tcell.append(np.array([cx_all, cy_all]))
# weighted centroid of tumor cells
cy_all = (xy[0]*fake_tumor).sum(axis=(1,2)) / (fake_tumor.sum(axis=(1,2))+1e-15)
cx_all = (xy[1]*fake_tumor).sum(axis=(1,2)) / (fake_tumor.sum(axis=(1,2))+1e-15)
centroids_tumor.append(np.array([cx_all, cy_all]))
# print(fake_tumor[19].sum())
if i > 0 and fake_tumor[19].sum() > 1e-4:
direct_all += 1
# angle between the direction to the tumor and the T-cell displacement
v1 = (centroids_tumor[i][:,19] - centroids_seg[0]) # vector from the T-cell segmentation centroid to the PD-L1-weighted tumor centroid
v2 = (centroids_tcell[i][:,18] - centroids_tcell[i-1][:,18]) # displacement of the PD1-weighted T-cell centroid
if i == 1: # no previous weighted centroid exists yet, so measure from the segmentation centroid
v2 = (centroids_tcell[i][:,18] - centroids_seg[0])
cos_theta = np.dot(v1,v2)/np.linalg.norm(v1)/np.linalg.norm(v2)
em_dist = wasserstein_distance(histo_cur, histo_pre)
if cos_theta > 0:
direct_right += 1
elif cos_theta < 0:
direct_wrong += 1
if cos_theta > 0 and histo_cur.sum() > histo_pre.sum(): # threshold the theta / check the expression level
em_score += np.linalg.norm(v2) * em_dist
# elif cos_theta < 0:
# em_score -= np.linalg.norm(v2) * em_dist
histo_pre = histo_cur.copy()
print("Direction: right:{}, wrong:{}, total:{}".format(direct_right, direct_wrong, direct_all))
print("em_score:{}".format(em_score))
###Output
Direction: right:670, wrong:595, total:1265
em_score:162.9662995074888
###Markdown
Projected EMD score
###Code
# EM score threshold
em_score = 0
direct_right = 0
direct_wrong = 0
direct_all = 0
for c in range(len(cells_seg_list)):
seg_list = cells_seg_list[c]
real_list = cells_real_list[c]
n_cell = len(seg_list)
noise = 0.5 * torch.randn(1, 128).cuda()
seg_test = 0
xy = np.mgrid[0:64,0:64]
centroids_seg = []
centroids_tcell = []
centroids_tumor = []
express_tcell = []
for i in range(n_cell):
# non-weighted centroid of segmentation
cy = np.where(seg_list[i]==1)[1].mean()
cx = np.where(seg_list[i]==1)[2].mean()
centroids_seg.append(np.array([cx, cy]))
# weighted centroid of T cells
seg_test += seg_list[i]
empty = np.less(np.sum(seg_test, axis=0, keepdims=True), 0.5).astype(np.float32)
seg_test_18 = np.concatenate([seg_test, empty], axis=0)
X_seg_0 = torch.Tensor(seg_test_18).unsqueeze(0).cuda()
fake = netG(X_seg_0).detach().cpu().numpy()[0]
fake_tcell = fake * seg_list[0].sum(axis=0) # only consider the expression in T cells
fake_tumor = fake * seg_list[i].sum(axis=0) # only consider the expression in tumor cells
# fake_tumor = fake * (seg_test-seg_list[0]).sum(axis=0)
histo_cur = compute_histogram(fake_tcell[18])
# weighted centroid of T cells
cy_all = (xy[0]*fake_tcell).sum(axis=(1,2)) / (fake_tcell.sum(axis=(1,2))+1e-15)
cx_all = (xy[1]*fake_tcell).sum(axis=(1,2)) / (fake_tcell.sum(axis=(1,2))+1e-15)
centroids_tcell.append(np.array([cx_all, cy_all]))
# weighted centroid of tumor cells
cy_all = (xy[0]*fake_tumor).sum(axis=(1,2)) / (fake_tumor.sum(axis=(1,2))+1e-15)
cx_all = (xy[1]*fake_tumor).sum(axis=(1,2)) / (fake_tumor.sum(axis=(1,2))+1e-15)
centroids_tumor.append(np.array([cx_all, cy_all]))
# print(fake_tumor[19].sum())
if i > 0 and fake_tumor[19].sum() > 1e-4:
direct_all += 1
# angle between the direction to the tumor and the T-cell displacement
v1 = (centroids_tumor[i][:,19] - centroids_seg[0]) # vector from the T-cell segmentation centroid to the PD-L1-weighted tumor centroid
v2 = (centroids_tcell[i][:,18] - centroids_tcell[i-1][:,18]) # displacement of the PD1-weighted T-cell centroid
if i == 1: # no previous weighted centroid exists yet, so measure from the segmentation centroid
v2 = (centroids_tcell[i][:,18] - centroids_seg[0])
cos_theta = np.dot(v1,v2)/np.linalg.norm(v1)/np.linalg.norm(v2)
em_dist = wasserstein_distance(histo_cur, histo_pre)
if cos_theta > 0:
direct_right += 1
elif cos_theta < 0:
direct_wrong += 1
if cos_theta > 0 and histo_cur.sum() > histo_pre.sum(): # threshold the theta / check the expression level
em_score += cos_theta * np.linalg.norm(v2) * em_dist
elif cos_theta < 0:
em_score += cos_theta * np.linalg.norm(v2) * em_dist
histo_pre = histo_cur.copy()
print("Direction: right:{}, wrong:{}, total:{}".format(direct_right, direct_wrong, direct_all))
print("em_score:{}".format(em_score))
###Output
Direction: right:670, wrong:595, total:1265
em_score:11.88863283471139
###Markdown
Random EMD score
###Code
# EM score threshold
em_score = 0
direct_right = 0
direct_wrong = 0
direct_all = 0
for c in range(len(cells_seg_list)):
seg_list = cells_seg_list[c]
real_list = cells_real_list[c]
n_cell = len(seg_list)
noise = 0.5 * torch.randn(1, 128).cuda()
seg_test = 0
xy = np.mgrid[0:64,0:64]
centroids_seg = []
centroids_tcell = []
centroids_tumor = []
express_tcell = []
for i in range(n_cell):
# non-weighted centroid of segmentation
cy = np.where(seg_list[i]==1)[1].mean()
cx = np.where(seg_list[i]==1)[2].mean()
centroids_seg.append(np.array([cx, cy]))
# weighted centroid of T cells
seg_test += seg_list[i]
empty = np.less(np.sum(seg_test, axis=0, keepdims=True), 0.5).astype(np.float32)
seg_test_18 = np.concatenate([seg_test, empty], axis=0)
X_seg_0 = torch.Tensor(seg_test_18).unsqueeze(0).cuda()
X_seg_0[0,1] = X_seg_0[0,4].clone()
X_seg_0[0,4] = 0
fake = netG(X_seg_0).detach().cpu().numpy()[0]
fake_tcell = fake * seg_list[0].sum(axis=0) # only consider the expression in T cells
fake_tumor = fake * seg_list[i].sum(axis=0) # only consider the expression in tumor cells
# fake_tumor = fake * (seg_test-seg_list[0]).sum(axis=0)
tcell_mask = seg_list[0].sum(axis=0)
histo_cur = compute_histogram(fake_tcell[18])
# weighted centroid of T cells
indices = np.where(tcell_mask > 0)
upper = len(indices[0])
idx_rand = np.random.randint(upper)
cm_tcell_x = indices[0][idx_rand]
cm_tcell_y = indices[1][idx_rand]
centroids_tcell.append(np.array([cm_tcell_x, cm_tcell_y]))
histo_cur = compute_histogram(fake_tcell[18])
# weighted centroid of T cells
# cy_all = (xy[0]*fake_tcell).sum(axis=(1,2)) / (fake_tcell.sum(axis=(1,2))+1e-15)
# cx_all = (xy[1]*fake_tcell).sum(axis=(1,2)) / (fake_tcell.sum(axis=(1,2))+1e-15)
# centroids_tcell.append(np.array([cx_all, cy_all]))
# weighted centroid of tumor cells
cy_all = (xy[0]*fake_tumor).sum(axis=(1,2)) / (fake_tumor.sum(axis=(1,2))+1e-15)
cx_all = (xy[1]*fake_tumor).sum(axis=(1,2)) / (fake_tumor.sum(axis=(1,2))+1e-15)
centroids_tumor.append(np.array([cx_all, cy_all]))
# print(fake_tumor[19].sum())
if i > 0 and fake_tumor[19].sum() > 1e-4:
direct_all += 1
# angle between the direction to the tumor and the T-cell displacement
v1 = (centroids_tumor[i][:,19] - centroids_seg[0]) # vector from the T-cell segmentation centroid to the PD-L1-weighted tumor centroid
v2 = (centroids_tcell[i][:] - centroids_tcell[i-1][:]) # displacement of the randomly sampled T-cell point
if i == 1: # no previous sample exists yet, so measure from the segmentation centroid
v2 = (centroids_tcell[i][:] - centroids_seg[0])
cos_theta = np.dot(v1,v2)/np.linalg.norm(v1)/np.linalg.norm(v2)
em_dist = wasserstein_distance(histo_cur, histo_pre)
if cos_theta > 0:
direct_right += 1
elif cos_theta < 0:
direct_wrong += 1
if cos_theta > 0 and histo_cur.sum() > histo_pre.sum(): # threshold the theta / check the expression level
em_score += np.linalg.norm(v2) * em_dist
elif cos_theta < 0:
em_score -= np.linalg.norm(v2) * em_dist
histo_pre = histo_cur.copy()
print("Direction: right:{}, wrong:{}, total:{}".format(direct_right, direct_wrong, direct_all))
print("em_score:{}".format(em_score))
###Output
/home/ubuntu/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:81: RuntimeWarning: invalid value encountered in double_scalars
###Markdown
Control EMD score
###Code
# EM score threshold
em_score = 0
direct_right = 0
direct_wrong = 0
direct_all = 0
for c in range(len(cells_seg_list)):
seg_list = cells_seg_list[c]
real_list = cells_real_list[c]
n_cell = len(seg_list)
noise = 0.5 * torch.randn(1, 128).cuda()
seg_test = 0
xy = np.mgrid[0:64,0:64]
centroids_seg = []
centroids_tcell = []
centroids_tumor = []
express_tcell = []
for i in range(n_cell):
# non-weighted centroid of segmentation
cy = np.where(seg_list[i]==1)[1].mean()
cx = np.where(seg_list[i]==1)[2].mean()
centroids_seg.append(np.array([cx, cy]))
# weighted centroid of T cells
seg_test += seg_list[i]
empty = np.less(np.sum(seg_test, axis=0, keepdims=True), 0.5).astype(np.float32)
seg_test_18 = np.concatenate([seg_test, empty], axis=0)
X_seg_0 = torch.Tensor(seg_test_18).unsqueeze(0).cuda()
X_seg_0[0,1] = X_seg_0[0,4].clone()
X_seg_0[0,4] = 0
fake = netG(X_seg_0).detach().cpu().numpy()[0]
fake_tcell = fake * seg_list[0].sum(axis=0) # only consider the expression in T cells
fake_tumor = fake * seg_list[i].sum(axis=0) # only consider the expression in tumor cells
# fake_tumor = fake * (seg_test-seg_list[0]).sum(axis=0)
histo_cur = compute_histogram(fake_tcell[18])
# weighted centroid of T cells
cy_all = (xy[0]*fake_tcell).sum(axis=(1,2)) / (fake_tcell.sum(axis=(1,2))+1e-15)
cx_all = (xy[1]*fake_tcell).sum(axis=(1,2)) / (fake_tcell.sum(axis=(1,2))+1e-15)
centroids_tcell.append(np.array([cx_all, cy_all]))
# weighted centroid of tumor cells
cy_all = (xy[0]*fake_tumor).sum(axis=(1,2)) / (fake_tumor.sum(axis=(1,2))+1e-15)
cx_all = (xy[1]*fake_tumor).sum(axis=(1,2)) / (fake_tumor.sum(axis=(1,2))+1e-15)
centroids_tumor.append(np.array([cx_all, cy_all]))
# print(fake_tumor[19].sum())
if i > 0 and fake_tumor[19].sum() > 1e-4:
direct_all += 1
# angle between the direction to the tumor and the T-cell displacement
v1 = (centroids_tumor[i][:,19] - centroids_seg[0]) # vector from the T-cell segmentation centroid to the PD-L1-weighted tumor centroid
v2 = (centroids_tcell[i][:,18] - centroids_tcell[i-1][:,18]) # displacement of the PD1-weighted T-cell centroid
if i == 1: # no previous weighted centroid exists yet, so measure from the segmentation centroid
v2 = (centroids_tcell[i][:,18] - centroids_seg[0])
cos_theta = np.dot(v1,v2)/np.linalg.norm(v1)/np.linalg.norm(v2)
em_dist = wasserstein_distance(histo_cur, histo_pre)
if cos_theta > 0:
direct_right += 1
elif cos_theta < 0:
direct_wrong += 1
if cos_theta > 0 and histo_cur.sum() > histo_pre.sum(): # threshold the theta / check the expression level
em_score += np.linalg.norm(v2) * em_dist
elif cos_theta < 0:
em_score -= np.linalg.norm(v2) * em_dist
histo_pre = histo_cur.copy()
print("Direction: right:{}, wrong:{}, total:{}".format(direct_right, direct_wrong, direct_all))
print("em_score:{}".format(em_score))
###Output
Direction: right:527, wrong:789, total:1316
em_score:-92.08468389814144
###Markdown
Pan-Keratin / CD8 Experiment
###Code
# CD8 Test
noise = 0.5 * torch.randn(1, 128).cuda()
total = 0
decrease = 0
surface_area = []
tumor_expression = []
for c in range(len(cells_seg_list)):
seg_list = cells_seg_list[c]
real_list = cells_real_list[c]
n_cell = len(seg_list)
for k in range(1, n_cell):
seg_tumor = np.sum(seg_list[0], axis=0)
seg_test = np.zeros([17, 64, 64])
seg_test[4] = seg_tumor # tumor
seg_cd8 = 0
if k == 0:
seg_test[7] = 0 # cd8
else:
seg_cd8 = np.sum(seg_list[1:k+1], axis=0).sum(0)
seg_test[7] = seg_cd8 # cd8
empty = np.less(np.sum(seg_test, axis=0, keepdims=True), 0.5).astype(np.float32)
seg_test_18 = np.concatenate([seg_test, empty], axis=0)
X_seg_0 = torch.Tensor(seg_test_18).unsqueeze(0).cuda()
fake = netG(X_seg_0).detach().cpu().numpy()[0]
pk_cur = (fake[0]*seg_tumor).sum()/ seg_tumor.sum()
surface_area.append(seg_cd8.sum())
tumor_expression.append(pk_cur)
# print(surface_area)#
x = np.array(surface_area)
y = np.array(tumor_expression)
plt.xlabel('Area of Cells')
plt.ylabel('Pan-Keratin Expression')
plt.scatter(x,y,s=1)
from scipy.stats import linregress
slope, intercept, r_value, p_value, std_err = linregress(x,y)
t = np.arange(0,2500)
y_t = slope*t + intercept
t_test = r_value*np.sqrt(x.shape[0]-2)/np.sqrt(1-r_value**2)
plt.plot(t,y_t,'r')
print('slope:', slope)
print('r-square', r_value**2)
print('t-test', t_test)
print('p-value', p_value)
plt.xlim(0)
plt.ylim(0)
plt.show()
# CD8 Test: Tumor Control
noise = 0.5 * torch.randn(1, 128).cuda()
total = 0
decrease = 0
surface_area = []
tumor_expression = []
for c in range(len(cells_seg_list)):
seg_list = cells_seg_list[c]
real_list = cells_real_list[c]
n_cell = len(seg_list)
for k in range(1, n_cell):
seg_tumor = np.sum(seg_list[0], axis=0)
seg_test = np.zeros([17, 64, 64])
seg_test[4] = seg_tumor # tumor
seg_cd3 = 0
if k>0:
seg_cd3 = np.sum(seg_list[1:k+1], axis=0).sum(0)
seg_test[4] += seg_cd3 # control: the extra cells go into the tumor channel instead of CD8
empty = np.less(np.sum(seg_test, axis=0, keepdims=True), 0.5).astype(np.float32)
seg_test_18 = np.concatenate([seg_test, empty], axis=0)
X_seg_0 = torch.Tensor(seg_test_18).unsqueeze(0).cuda()
fake = netG(X_seg_0).detach().cpu().numpy()[0]
pk_cur = (fake[0]*seg_tumor).sum() / seg_tumor.sum()
surface_area.append(seg_cd3.sum())
tumor_expression.append(pk_cur)
x = np.array(surface_area)
y = np.array(tumor_expression)
plt.xlabel('Area of Cells')
plt.ylabel('Pan-Keratin Expression')
plt.scatter(x,y,s=1)
from scipy.stats import linregress
slope, intercept, r_value, p_value, std_err = linregress(x,y)
t = np.arange(0,2500)
y_t = slope*t + intercept
t_test = r_value*np.sqrt(x.shape[0]-2)/np.sqrt(1-r_value**2)
plt.plot(t,y_t,'r')
print('slope:', slope)
print('r-square', r_value**2)
print('t-test', t_test)
print('p-value', p_value)
plt.xlim(0)
plt.ylim(0)
plt.show()
# CD8 Test
noise = 0.5 * torch.randn(1, 128).cuda()
total = 0
decrease = 0
cell_num = []
tumor_expression = []
for c in range(len(cells_seg_list)):
seg_list = cells_seg_list[c]
real_list = cells_real_list[c]
n_cell = len(seg_list)
for k in range(1, n_cell):
seg_tumor = np.sum(seg_list[0], axis=0)
seg_test = np.zeros([17, 64, 64])
seg_test[4] = seg_tumor # tumor
if k == 0:
seg_test[7] = 0 # cd8
else:
seg_cd3 = np.sum(seg_list[1:k+1], axis=0).sum(0)
seg_test[7] = seg_cd3 # cd8
empty = np.less(np.sum(seg_test, axis=0, keepdims=True), 0.5).astype(np.float32)
seg_test_18 = np.concatenate([seg_test, empty], axis=0)
X_seg_0 = torch.Tensor(seg_test_18).unsqueeze(0).cuda()
fake = netG(X_seg_0).detach().cpu().numpy()[0]
pk_cur = (fake[0]*seg_tumor).sum() / seg_tumor.sum()
cell_num.append(k)
tumor_expression.append(pk_cur)
n_max = np.max(cell_num)
x_mean = np.zeros(n_max)
x_std = np.zeros(n_max)
for k in range(1, n_max+1):
exp_k = []
for i in range(len(cell_num)):
if cell_num[i]==k:
# print(k)
exp_k.append(tumor_expression[i])
x_mean[k-1] = np.mean(exp_k)
x_std[k-1] = np.std(exp_k)
t = np.arange(n_max)
plt.xlabel('Number of Cells')
plt.ylabel('Pan-Keratin Expression')
plt.bar(t,x_mean, yerr=x_std, align='center', alpha=0.5, ecolor='black', capsize=5)
plt.xlim(-0.5, 7.5)
plt.ylim(0)
plt.show()
# CD8 Test
noise = 0.5 * torch.randn(1, 128).cuda()
total = 0
decrease = 0
cell_num = []
tumor_expression = []
for c in range(len(cells_seg_list)):
seg_list = cells_seg_list[c]
real_list = cells_real_list[c]
n_cell = len(seg_list)
for k in range(1, n_cell):
seg_tumor = np.sum(seg_list[0], axis=0)
seg_test = np.zeros([17, 64, 64])
seg_test[4] = seg_tumor # tumor
seg_cd3 = 0
if k>0:
seg_cd3 = np.sum(seg_list[1:k+1], axis=0).sum(0)
seg_test[4] += seg_cd3 # control: the extra cells go into the tumor channel instead of CD8
empty = np.less(np.sum(seg_test, axis=0, keepdims=True), 0.5).astype(np.float32)
seg_test_18 = np.concatenate([seg_test, empty], axis=0)
X_seg_0 = torch.Tensor(seg_test_18).unsqueeze(0).cuda()
fake = netG(X_seg_0).detach().cpu().numpy()[0]
pk_cur = (fake[0]*seg_tumor).sum()/seg_tumor.sum()
cell_num.append(k)
tumor_expression.append(pk_cur)
n_max = np.max(cell_num)
x_mean = np.zeros(n_max)
x_std = np.zeros(n_max)
for k in range(1, n_max+1):
exp_k = []
for i in range(len(cell_num)):
if cell_num[i]==k:
# print(k)
exp_k.append(tumor_expression[i])
x_mean[k-1] = np.mean(exp_k)
x_std[k-1] = np.std(exp_k)
t = np.arange(n_max)
plt.xlabel('Number of Cells')
plt.ylabel('Pan-Keratin Expression')
plt.bar(t,x_mean, yerr=x_std, align='center', alpha=0.5, ecolor='black', capsize=5)
plt.xlim(-0.5, 7.5)
plt.show()
###Output
_____no_output_____ |
docs/echodata_html_repr.ipynb | ###Markdown
Stub notebook used only to generate an Echodata/xarray HTML repr (2021-12-7)
###Code
from pathlib import Path
import echopype as ep
bucket = "ncei-wcsd-archive"
rawdirpath = "data/raw/Bell_M._Shimada/SH1707/EK60/Summer2017-D20170728-T181619.raw"
s3raw_fpath = f"s3://{bucket}/{rawdirpath}"
ed = ep.open_raw(s3raw_fpath, sonar_model='EK60', storage_options={'anon': True})
# Manually populate additional metadata about the dataset and the platform
# -- SONAR-netCDF4 Top-level Group attributes
ed.top.attrs['title'] = "2017 Pacific Hake Acoustic Trawl Survey"
ed.top.attrs['summary'] = (
f"EK60 raw file {s3raw_fpath} from the {ed.top.attrs['title']}, "
"converted to a SONAR-netCDF4 file using echopype."
)
# -- SONAR-netCDF4 Platform Group attributes
ed.platform.attrs['platform_type'] = "Research vessel"
ed.platform.attrs['platform_name'] = "Bell M. Shimada"
ed.platform.attrs['platform_code_ICES'] = "315"
ed
with open('source/_static/echodata_sample.html', 'w') as of:
of.write(ed._repr_html_())
###Output
_____no_output_____ |
03_creating_a_u_net.ipynb | ###Markdown
In this example we will be creating a [U-net](https://towardsdatascience.com/understanding-semantic-segmentation-with-unet-6be4f42d4b47) model for predicting our wall shear stress. A U-net is an example of a [convolutional neural network](https://machinelearningmastery.com/convolutional-layers-for-deep-learning-neural-networks/). First we will create the base building block of our neural network: a simple block containing a [convolution](https://machinelearningmastery.com/convolutional-layers-for-deep-learning-neural-networks/), [batch normalization](https://towardsdatascience.com/batch-normalization-in-neural-networks-1ac91516821c) and a ReLU [activation function](https://medium.com/the-theory-of-everything/understanding-activation-functions-in-neural-networks-9491262884e0)
###Code
import torch
from functools import partial
from torch.utils.tensorboard import SummaryWriter

class ConvNormAct(torch.nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3, padding="same", **kwargs):
        super().__init__()
        if padding == "same":
            # "same" padding via kernel_size//2 only works for odd kernels
            assert kernel_size % 2 == 1
            padding = kernel_size // 2
        self.conv = torch.nn.Conv2d(in_channels, out_channels, kernel_size, padding=padding, **kwargs)
        self.bnorm = torch.nn.BatchNorm2d(out_channels)
        self.activation = torch.nn.ReLU()

    def forward(self, x):
        return self.activation(self.bnorm(self.conv(x)))
###Output
_____no_output_____
###Markdown
Below we show a simple example of the layer we created, taking in an input with 3 features and creating an output with 6 features. Finally, we can pass the output through a [max pooling](https://computersciencewiki.org/index.php/Max-pooling_/_Pooling) layer to reduce the size.
###Code
x = torch.randn(1, 3, 256, 256)
layer = ConvNormAct(3, 6, 3)
pool = torch.nn.MaxPool2d(2)
output = pool(layer(x))
print(x.shape, output.shape)
###Output
torch.Size([1, 3, 256, 256]) torch.Size([1, 6, 128, 128])
###Markdown
Now we need to create an upsampling layer for our data. We will use upsample convolutions, as they generally converge faster than simple transposed convolutions
###Code
class UpsampleConv(torch.nn.Module):
def __init__(self, in_channels, out_channels, kernel_size=3, **kwargs):
super().__init__()
self.upsample = torch.nn.UpsamplingNearest2d(scale_factor=2)
self.conv = ConvNormAct(in_channels, out_channels, kernel_size, **kwargs)
def forward(self,x):
return self.conv(self.upsample(x))
upsample_layer = UpsampleConv(6,3)
print(upsample_layer(output).shape)
###Output
torch.Size([1, 3, 256, 256])
###Markdown
Now we have the tools to create our simple u-net model. We will make a relatively shallow network and visualize it using [tensorboard](https://www.tensorflow.org/tensorboard) for pytorch
###Code
class UNet(torch.nn.Module):
def __init__(self, in_channels, out_channels, base_channels=64, kernel_size=3):
super().__init__()
ConvWrapped = partial(ConvNormAct, kernel_size=3)
# encoding layers
self.conv1a = ConvWrapped(in_channels, base_channels)
self.conv1b = ConvWrapped(base_channels, base_channels)
self.pool_1 = torch.nn.MaxPool2d(2)
self.conv2a = ConvWrapped(base_channels, 2*base_channels)
self.conv2b = ConvWrapped(2*base_channels, 2*base_channels)
self.pool_2 = torch.nn.MaxPool2d(2)
self.conv3a = ConvWrapped(2*base_channels, 4*base_channels)
self.conv3b = ConvWrapped(4*base_channels, 4*base_channels)
# decoding layers
self.upsample_1 = UpsampleConv(4*base_channels, 2*base_channels)
self.conv4a = ConvWrapped(4*base_channels, 2*base_channels)
self.conv4b = ConvWrapped(2*base_channels, 2*base_channels)
self.upsample_2 = UpsampleConv(2*base_channels, base_channels)
self.conv5a = ConvWrapped(2*base_channels, base_channels)
self.conv5b = ConvWrapped(base_channels, base_channels)
self.output_conv = torch.nn.Conv2d(base_channels, out_channels, kernel_size=1)
def forward(self, x):
x = self.conv1a(x)
x = self.conv1b(x)
c1 = x
x = self.pool_1(x)
x = self.conv2a(x)
x = self.conv2b(x)
c2 = x
x = self.pool_2(x)
x = self.conv3a(x)
x = self.conv3b(x)
x = self.upsample_1(x)
x = torch.cat([x, c2], dim=1)
x = self.conv4a(x)
x = self.conv4b(x)
x = self.upsample_2(x)
x = torch.cat([x, c1], dim=1)
x = self.conv5a(x)
x = self.conv5b(x)
return self.output_conv(x)
###Output
_____no_output_____
###Markdown
Now to visualize the created network with tensorboard
###Code
# create a summary writer for tensorboard
writer = SummaryWriter('runs/view_model')
# create a dummy input
x = torch.randn(1, 3, 256, 256)
# construct the model and pass the input through it
model = UNet(3, 1)
# add the graph to tensorboard and close the writer
writer.add_graph(model, x)
writer.close()
# load the tensorboard extension
%load_ext tensorboard
# run tensorboard; if it does not work, try running the command in a terminal from this directory
%tensorboard --logdir=runs
###Output
_____no_output_____ |
semana_10/dia_1/RESU_4-Errores y contornos.ipynb | ###Markdown
Visualizing Errors In quantifying any scientific experiment, accurate measurement of the errors is nearly as important, if not more important, than accurate measurement of the value itself. For example, imagine we are using some astrophysical observations to estimate the Hubble Constant, the local measure of the expansion rate of the Universe. The current literature suggests a value of around 71 (km/s)/Mpc, and we obtain a value of 74 (km/s)/Mpc with our method. Are the values consistent? The only correct answer, given this information, is: there is no way to know. Suppose we augment this information with the reported uncertainties: the current literature suggests a value of around 71 $\pm$ 2.5 (km/s)/Mpc, while our method gives a value of 74 $\pm$ 5 (km/s)/Mpc. Now, are the values consistent? That is a question that can be answered quantitatively. In visualizing data and results, showing these errors effectively can make a plot convey much more complete information. As in the previous notebooks, we start with the basics. Basic Error Bars A basic error bar can be created with a single call to a Matplotlib function:
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
import numpy as np
x = np.linspace(0, 10, 50)
dy = 0.8
y = np.sin(x) + dy * np.random.randn(50)
plt.errorbar(x, y, yerr=dy, fmt='.k');
###Output
_____no_output_____
###Markdown
Here, the ``fmt`` parameter is a format code controlling the appearance of lines and points, and it has the same syntax as the shorthand used in ``plt.plot`` (which we saw in the previous notebook). Beyond these basic options, the ``errorbar`` function has many other options to fine-tune the output. With these additional options we can easily customize the aesthetics of the error-bar plot. Often we will want to give more visual weight to the points, so we might make the error bars lighter than the points:
###Code
plt.errorbar(x, y, yerr=dy, fmt='o', color='black',
ecolor='lightgray', elinewidth=3, capsize=3);
###Output
_____no_output_____
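###Markdown
Beyond vertical error bars, ``errorbar`` can also draw horizontal and asymmetric errors. A quick sketch (added for illustration; the data here are synthetic):
###Code
x_demo = np.linspace(0, 10, 10)
y_demo = np.sin(x_demo)
# scalar horizontal error, plus asymmetric vertical errors given as (lower, upper)
yerr_asym = [0.2 * np.ones_like(y_demo), 0.5 * np.ones_like(y_demo)]
plt.errorbar(x_demo, y_demo, xerr=0.3, yerr=yerr_asym, fmt='o', color='black',
ecolor='lightgray', capsize=2);
###Output
_____no_output_____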
###Markdown
As shown above, horizontal error bars are specified with ``xerr``, and one-sided or asymmetric error bars are also supported, along with many other variants. For more information on the available options, check the docstring of ``plt.errorbar``. Density and Contours Sometimes it is useful to display three-dimensional data in two dimensions using contours or color-coded regions. There are three Matplotlib functions that can help with this: ``plt.contour``, which lets us create contour plots; ``plt.contourf``, to create filled contour plots; and ``plt.imshow``, which lets us display images. Below we will walk through several examples of their use, starting by setting up the notebook for plotting and importing the functions we will use:
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-white')
import numpy as np
###Output
_____no_output_____
###Markdown
Visualizing three-dimensional functions We will start with contour plots of the function $z = f(x, y)$, with $f$ defined as shown below:
###Code
def f(x, y):
return np.sin(x) ** 10 + np.cos(10 + y * x) * np.cos(x)
###Output
_____no_output_____
###Markdown
A contour plot can be created with the ``plt.contour`` function. It takes three arguments: a grid of `x` values, a grid of `y` values, and a grid of `z` values. The `x` and `y` values represent positions on the plot, and the `z` values will be represented by the contour levels. Perhaps the most straightforward way to prepare such data is to use the ``np.meshgrid`` function, which builds two-dimensional grids from one-dimensional arrays:
###Code
x = np.linspace(0, 5, 50)
y = np.linspace(0, 5, 50)
X, Y = np.meshgrid(x, y)
Z = f(X, Y)
###Output
_____no_output_____
###Markdown
Now let's look at this with a standard line-only contour plot:
###Code
plt.contour(x, y, Z, colors='black');
###Output
_____no_output_____
###Markdown
Notice that by default, when a single color is used, negative values are drawn with dashed lines and positive values with solid lines. Alternatively, the lines can be color-coded by specifying a colormap with the ``cmap`` argument. Below, in addition to changing the colors of the plot, we will increase the number of lines drawn: 20 equally spaced intervals within the data range:
###Code
plt.contour(X, Y, Z, levels = 20, cmap='RdGy');
###Output
_____no_output_____
###Markdown
We chose the ``RdGy`` (*Red-Gray*) colormap, which is a good choice for centered data. Matplotlib has a wide range of colormaps available, which we can easily browse in the notebook via tab completion (pressing TAB after typing the module name): ```plt.cm.<TAB>``` Our plot is looking better and better, but there is still room for improvement; for instance, the gaps between the lines can be a bit distracting. We can change this by switching to a filled contour plot, using the ``plt.contourf()`` function (note the ``f`` at the end), which follows essentially the same syntax as ``plt.contour()``. Additionally, we will add the ``plt.colorbar()`` command, which automatically creates an extra axis with the color information for the plot.
###Code
plt.contourf(X, Y, Z, 20, cmap='RdGy')
plt.colorbar();
###Output
_____no_output_____
###Markdown
The colorbar makes it clear that the black regions are "peaks" and the red regions are "valleys". One potential issue with this plot is that it looks "splotchy": the color steps are clearly discrete rather than continuous, which is not always what we want. This could be remedied by setting the number of contours to a very high number, but that results in an inefficient plot: Matplotlib must render a new polygon for each level step. A better way to handle this is to use the ``plt.imshow()`` function, which interprets a two-dimensional grid of data as an image. Let's see it with an example; the following code shows this:
###Code
plt.imshow(Z, extent=[0, 5, 0, 5], origin='lower',
cmap='RdGy', interpolation='gaussian')
plt.colorbar()
###Output
_____no_output_____
###Markdown
There are, however, a few potential gotchas with ``imshow()``:
- ``plt.imshow()`` does not accept *x* and *y* grids, so we must manually specify the ``extent`` ([*xmin*, *xmax*, *ymin*, *ymax*]) of the image on the plot.
- ``plt.imshow()`` by default follows the standard image array convention, where the origin is in the upper left, not the lower left as in most contour plots. This must be changed when displaying gridded data.
Finally, it can sometimes be useful to combine contour plots and image plots. For example, here we will use a partially transparent background image (with transparency set via the ``alpha`` parameter) and overplot contours labeled on the contours themselves (using the ``plt.clabel()`` function):
###Code
contours = plt.contour(X, Y, Z, 3, colors='black')
plt.clabel(contours, inline=True, fontsize=8)
plt.imshow(Z, extent=[0, 5, 0, 5], origin='lower',
cmap='RdGy', alpha=0.5, interpolation='gaussian')
plt.colorbar();
###Output
_____no_output_____
###Markdown
Finally, the combination of these three functions (``plt.contour``, ``plt.contourf``, and ``plt.imshow``) gives us nearly unlimited possibilities for displaying this kind of three-dimensional data within a two-dimensional plot. For more information on the options available in these functions, check their docstrings. Exercise In this exercise we will practice the basics of plotting errors:
1. Set the notebook's ``plt`` style so that it shows the grid. Try creating a blank figure (or anything simple) to check that it works.
2. Read the measurements from the "desinteg_part.csv" DataFrame, which holds data from several experiments on particle decay as a function of radius (in nm), where you are told the error on the Y axis is 0.245.
3. Plot the data with ``errorbar``, using * markers and green for the plot.
4. Create another plot that gives more weight to the data, so that the points are shown with dot markers in blue, the errors are drawn in a lighter blue, and a capsize of 2 is used.
NOTE: the csv may use a separator different from the default one.
###Code
plt.style.use('seaborn-whitegrid')
fig = plt.figure()
ax = plt.axes()

import pandas as pd

df = pd.read_csv("desinteg_part.csv", sep=';')
plt.errorbar(df['Radio [nm]'], df['Tiempo [s]'], yerr=0.245, fmt='*g');
plt.errorbar(df['Radio [nm]'], df['Tiempo [s]'], yerr=0.245, fmt='o', color='blue',
             ecolor='lightblue', capsize=2);
###Output
_____no_output_____
###Markdown
Exercise Let's practice these concepts a bit on the footballers DataFrame, "FIFA20.csv":
1. Read the DataFrame and drop every record containing a null.
2. Create a couple of new columns derived from 'dob' that return the year and the month. Call them "year" and "month".
3. Create a simple contour plot (with black lines) showing each player's mean wage ("wage_eur") as a function of birth year and month. To do this, we first need a DataFrame returning these values, for which we have seen more than one approach in past notebooks. Once we have that DataFrame, we can use its index as the Y axis and its columns as the X axis to create our plot.
4. Building on what you did in step 3, now create a filled contour plot. Use the "viridis" cmap.
5. Explore different cmap values (the notebook tells you how to find more possible values) and try a couple of colormaps we have not seen in class.
6. Finally, create a plot like the previous one (with the colormap you liked best) but with Gaussian interpolation so the level changes are less abrupt, with a transparency factor of 0.8, and show the contours with a font size of 10.
###Code
import pandas as pd
import numpy as np
df = pd.read_csv("FIFA20.csv")
df = df.dropna()
df['year'] = df['dob'].apply(lambda x: int(x[:4]))
df['month'] = df['dob'].apply(lambda x: int(x[5:7]))
pivot = df.pivot_table("wage_eur", index="year", columns="month").fillna(0)
x = pivot.columns            # months
y = pivot.index              # years
X, Y = np.meshgrid(x, y)
Z = pivot.values             # already NaN-free thanks to fillna above
plt.contour(X, Y, Z, colors='black');
plt.contourf(X, Y, Z, 20, cmap='viridis')
plt.colorbar();
plt.contourf(X, Y, Z, 20, cmap='autumn_r')
plt.colorbar();
plt.contourf(X, Y, Z, 20, cmap='cool')
plt.colorbar();
contours = plt.contour(X, Y, Z, 3, colors='black')
plt.clabel(contours, inline=True, fontsize=10)
plt.imshow(Z, extent=[np.min(X), np.max(X), np.min(Y), np.max(Y)], origin='lower',
cmap='RdGy', alpha=0.8, interpolation='gaussian')
plt.colorbar();
###Output
_____no_output_____ |
C50-DistilBert.ipynb | ###Markdown
"Authorship Identification: Part-2 (DistilBERT Transformer)"> "Using transfer-learning to fine-tune pretrained DistilBERT transformer for authorship identification. In a nutshell, DistilBERT is a small version of BERT which is "smaller, faster, cheaper, and lighter". It has 40% less parameters original BERT, runs 60% faster and preserve over 95% of BERT’s performances (measured on the GLUE language understanding benchmark)."- toc: true- sticky_rank: 2- branch: master- badges: true- comments: true- categories: [project, machine-learning, notebook, python]- image: images/vignette/base.jpg- hide: false- search_exclude: false Abstract**This is a follow-up post on the authorship identification project.**I regard the past few years as the inception of the era of Transformers which started with the popular Research Paper "Attention is all you need" by "somebody" in 2020. Several transformer architectures have shown up since then. Some of the famous ones are -BERT, DistillBERT, GPT, GPT2, and the latest GPT3 which has outperformed many previous state-of-the-art models at several tasks in NLP, BERT (by Google) is also one of the most popular transformers out there.Transformers are very large models with multi-billions of parameters. Pretrained transformers have shown tremendous capability when used fine-tuned for a downstream task in Transfer Learning similar to the CNNs in Computer Vision.In this part, I'll use fine-tuned DistilBERT transformer which is a smaller version of the original BERT for the downstream classification task.I'll use the `transformers` library from Huggingface which consists of numerous state-of-the-art transformers and supports several downstream tasks out of the box. In short, I consider Huggingface a great starting point for a person engrossed in NLP and it offers tons of great functionalities.I'll provide links to resources for you to learn more about these technologies.
###Code
# Imports
import keras
import tensorflow as tf
import numpy as np
from pathlib import Path
from utils import plot_history
from keras.preprocessing import text_dataset_from_directory
ds_dir = Path('data/C50/')
train_dir = ds_dir / 'train'
test_dir = ds_dir / 'test'
seed = 1000
batch_size = 16
train_ds = text_dataset_from_directory(train_dir,
label_mode='int',
seed=seed,
shuffle=True,
validation_split=0.2,
subset='training')
val_ds = text_dataset_from_directory(train_dir,
label_mode='int',
seed=seed,
shuffle=True,
validation_split=0.2,
subset='validation')
test_ds = text_dataset_from_directory(test_dir,
label_mode='int',
seed=seed,
shuffle=True,
batch_size=batch_size)
class_names = train_ds.class_names
# Prepare and Configure the datasets
from utils import get_text, prepare_batched
from transformers import DistilBertTokenizerFast
AUTOTUNE = tf.data.AUTOTUNE
tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')
batch_size = 2  # presumably reduced to fit memory: transformer fine-tuning on long sequences is expensive
train_ds = prepare_batched(train_ds, tokenizer, batch_size=batch_size)
val_ds = prepare_batched(val_ds, tokenizer, batch_size=batch_size)
test_ds = prepare_batched(test_ds, tokenizer, batch_size=batch_size)
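# `prepare_batched` lives in the author's local `utils` module and is not shown in
# the post. Purely as a guess at its behaviour (every name and detail below is an
# assumption, not the author's code), it likely tokenizes each raw-text batch and
# re-batches the encoded tensors so they can feed the model directly:
def prepare_batched_sketch(ds, tokenizer, batch_size, max_length=512):
    def encode(texts, labels):
        enc = tokenizer([t.decode('utf-8') for t in texts.numpy()],
                        truncation=True, padding='max_length',
                        max_length=max_length, return_tensors='np')
        return (enc['input_ids'].astype('int32'),
                enc['attention_mask'].astype('int32'), labels)
    def tf_encode(texts, labels):
        ids, mask, y = tf.py_function(encode, [texts, labels],
                                      [tf.int32, tf.int32, tf.int32])
        return {'input_ids': ids, 'attention_mask': mask}, y
    return (ds.unbatch().batch(batch_size)
              .map(tf_encode, num_parallel_calls=AUTOTUNE)
              .prefetch(AUTOTUNE))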
# Fine-tuning the model
keras.backend.clear_session()
from transformers import TFAutoModelForSequenceClassification
model = TFAutoModelForSequenceClassification.from_pretrained('distilbert-base-uncased', num_labels=50)
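# The pretrained encoder weights are reused, but the 50-way classification head on
# top is freshly initialized -- transformers warns about newly created weights here,
# which is expected before fine-tuning. The model outputs raw logits, hence
# from_logits=True in the loss below.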
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=tf.metrics.SparseCategoricalAccuracy()
)
history = model.fit(train_ds, validation_data=val_ds, epochs=1)
plot_history(history, metric='sparse_categorical_accuracy', save_path=Path('plots/distilbert.jpg'))
model.save_pretrained("DistilBERT_finetuned.h5")  # despite the '.h5'-style name, save_pretrained writes a directory (config + weights)
print("Evaluate model on test dataset")
model.evaluate(test_ds)
###Output
_____no_output_____ |