# Source: geophysics/mtpy, mtpy/modeling/occam2d_rewrite.py
# -*- coding: utf-8 -*-
"""
Spin-off from 'occamtools'
(Created August 2011, re-written August 2013)
Tools for Occam2D
authors: JP/LK
Classes:
- Data
- Model
- Setup
- Run
- Plot
- Mask
Functions:
- getdatetime
- makestartfiles
- writemeshfile
- writemodelfile
- writestartupfile
- read_datafile
- get_model_setup
- blocks_elements_setup
"""
#==============================================================================
import numpy as np
import scipy as sp
from scipy.stats import mode
import sys
import os
import os.path as op
import subprocess
import shutil
import fnmatch
import datetime
from operator import itemgetter
import time
import matplotlib.colorbar as mcb
from matplotlib.colors import Normalize
from matplotlib.ticker import MultipleLocator
import matplotlib.gridspec as gridspec
import matplotlib.pyplot as plt
import scipy.interpolate as spi
import mtpy.core.edi as MTedi
import mtpy.core.mt as mt
import mtpy.modeling.winglinktools as MTwl
import mtpy.utils.conversions as MTcv
import mtpy.utils.filehandling as MTfh
import mtpy.utils.configfile as MTcf
import mtpy.analysis.geometry as MTgy
import mtpy.utils.exceptions as MTex
import scipy.interpolate as si
from mtpy.imaging.mtplottools import plot_errorbar
reload(MTcv)
reload(MTcf)
reload(MTedi)
reload(MTex)
#==============================================================================
class Mesh():
"""
Deals only with the finite element mesh. Builds a finite element mesh
based on the given parameters defined below. The mesh reads in the station
locations, finds the center and makes the relative location of the
furthest left hand station 0. The mesh increases in depth logarithmically
as required by the physics of MT. Also, the model extends horizontally
and vertically with padding cells in order to fulfill the assumption of
the forward operator that at the edges the structure is 1D. Stations are
placed on the horizontal nodes as required by Wannamaker's forward
operator.
Mesh has the ability to create a mesh that incorporates topography given
an elevation profile. It adds more cells to the mesh with thickness
z1_layer. It then sets the values of the triangular elements according to
the elevation value at that location. If the elevation covers less than
50% of the triangular cell, then the cell value is set to that of air.
.. note:: Mesh is inherited by Regularization, so the mesh can also
be built from there, the same as the example below.
Arguments:
-----------
======================= ===================================================
Key Words/Attributes Description
======================= ===================================================
air_key letter associated with the value of air
*default* is 0
air_value value given to an air cell, *default* is 1E13
cell_width width of cells within the station area in meters
*default* is 100
elevation_profile elevation profile along the profile line.
given as np.ndarray(nx, 2), where the elements
are x_location, elevation. If elevation profile
is given add_elevation is called automatically.
*default* is None
mesh_fn full path to mesh file.
mesh_values letter values of each triangular mesh element;
if the cell is free the value is '?'
n_layers number of vertical layers in mesh
*default* is 90
num_x_pad_cells number of horizontal padding cells outside the
the station area that will increase in size
by x_pad_multiplier. *default* is 7
num_x_pad_small_cells number of horizontal padding cells just outside
the station area with width cell_width. This is
to extend the station area if needed.
*default* is 2
num_z_pad_cells number of vertical padding cells below
z_target_depth down to z_bottom. *default* is 5
rel_station_locations relative station locations within the mesh. The
locations are relative to the center of the station
area. *default* is None, filled later
save_path full path to save mesh file to.
*default* is current working directory.
station_locations location of stations in meters, can be on a
relative grid or in UTM.
x_grid location of horizontal grid nodes in meters
x_nodes relative spacing between grid nodes
x_pad_multiplier horizontal padding cells will increase by this
multiple out to the edge of the grid.
*default* is 1.5
z1_layer thickness of the first layer in the model.
Should be at least 1/4 of the first skin depth
*default* is 10
z_bottom bottom depth of the model (m). Needs to be large
enough to be 1D at the edge.
*default* is 200000.0
z_grid location of vertical nodes in meters
z_nodes relative distance between vertical nodes in meters
z_target_depth depth to deepest target of interest. Below this
depth cells will be padded to z_bottom
======================= ===================================================
======================= ===================================================
Methods Description
======================= ===================================================
add_elevation adds elevation to the mesh given elevation
profile.
build_mesh builds the mesh given the attributes of Mesh. If
elevation_profile is not None, add_elevation is
called inside build_mesh
plot_mesh plots the built mesh with station location.
read_mesh_file reads in an existing mesh file and populates the
appropriate attributes.
write_mesh_file writes a mesh file to save_path
======================= ===================================================
:Example: ::
>>> import mtpy.modeling.occam2d as occam2d
>>> edipath = r"/home/mt/edi_files"
>>> slist = ['mt{0:03}'.format(ss) for ss in range(20)]
>>> ocd = occam2d.Data(edi_path=edipath, station_list=slist)
>>> ocd.save_path = r"/home/occam/Line1/Inv1"
>>> ocd.write_data_file()
>>> ocm = occam2d.Mesh(ocd.station_locations)
>>> # add in elevation
>>> ocm.elevation_profile = ocd.elevation_profile
>>> # change number of layers
>>> ocm.n_layers = 110
>>> # change cell width in station area
>>> ocm.cell_width = 200
>>> ocm.build_mesh()
>>> ocm.plot_mesh()
>>> ocm.save_path = ocd.save_path
>>> ocm.write_mesh_file()
"""
def __init__(self, station_locations=None, **kwargs):
self.station_locations = station_locations
self.rel_station_locations = None
self.n_layers = kwargs.pop('n_layers', 90)
self.cell_width = kwargs.pop('cell_width', 100)
self.num_x_pad_cells = kwargs.pop('num_x_pad_cells', 7)
self.num_z_pad_cells = kwargs.pop('num_z_pad_cells', 5)
self.x_pad_multiplier = kwargs.pop('x_pad_multiplier', 1.5)
self.z1_layer = kwargs.pop('z1_layer', 10.0)
self.z_bottom = kwargs.pop('z_bottom', 200000.0)
self.z_target_depth = kwargs.pop('z_target_depth', 50000.0)
self.num_x_pad_small_cells = kwargs.pop('num_x_pad_small_cells', 2)
self.save_path = kwargs.pop('save_path', None)
self.mesh_fn = kwargs.pop('mesh_fn', None)
self.elevation_profile = kwargs.pop('elevation_profile', None)
self.x_nodes = None
self.z_nodes = None
self.x_grid = None
self.z_grid = None
self.mesh_values = None
self.air_value = 1e13
self.air_key = '0'
def build_mesh(self):
"""
Build the finite element mesh given the parameters defined by the
attributes of Mesh. Computes relative station locations by finding
the center of the station area and setting the middle to 0. Mesh
blocks are built by calculating the distance between stations and
placing evenly spaced blocks between the stations with widths close to
cell_width. This places a horizontal node at each station location.
If the spacing between stations is smaller than
cell_width, a horizontal node is placed between the stations to be
sure the model has room to change between the stations.
If elevation_profile is given, add_elevation is called to add
topography into the mesh.
Populates attributes:
* mesh_values
* rel_station_locations
* x_grid
* x_nodes
* z_grid
* z_nodes
:Example: ::
>>> import mtpy.modeling.occam2d as occam2d
>>> edipath = r"/home/mt/edi_files"
>>> slist = ['mt{0:03}'.format(ss) for ss in range(20)]
>>> ocd = occam2d.Data(edi_path=edipath, station_list=slist)
>>> ocd.save_path = r"/home/occam/Line1/Inv1"
>>> ocd.write_data_file()
>>> ocm = occam2d.Mesh(ocd.station_locations)
>>> # add in elevation
>>> ocm.elevation_profile = ocd.elevation_profile
>>> # change number of layers
>>> ocm.n_layers = 110
>>> # change cell width in station area
>>> ocm.cell_width = 200
>>> ocm.build_mesh()
"""
if self.station_locations is None:
raise OccamInputError('Need to input station locations to define '
'a finite element mesh')
#be sure the station locations are sorted from left to right
self.station_locations.sort()
self.rel_station_locations = np.copy(self.station_locations)
#center the stations around 0 like the mesh will be
self.rel_station_locations -= self.rel_station_locations.mean()
#1) make horizontal nodes at station locations and fill in the cells
# around that area with cell width. This will put the station
# in the center of the regularization block as prescribed for occam
# the first cell of the station area will be outside of the furthest
# left hand station to reduce the effect of a large neighboring cell.
self.x_grid = np.array([self.rel_station_locations[0]-self.cell_width*\
self.x_pad_multiplier])
for ii, offset in enumerate(self.rel_station_locations[:-1]):
dx = self.rel_station_locations[ii+1]-offset
num_cells = int(np.floor(dx/self.cell_width))
#if the spacing between stations is smaller than mesh set cell
#size to mid point between stations
if num_cells == 0:
cell_width = dx/2.
num_cells = 1
#calculate cell spacing so that they are equal between neighboring
#stations
else:
cell_width = dx/num_cells
if self.x_grid[-1] != offset:
self.x_grid = np.append(self.x_grid, offset)
for dd in range(num_cells):
new_cell = offset+(dd+1)*cell_width
#make sure cells aren't too close together
try:
if abs(self.rel_station_locations[ii+1]-new_cell) >= cell_width*.9:
self.x_grid = np.append(self.x_grid, new_cell)
else:
pass
except IndexError:
pass
self.x_grid = np.append(self.x_grid, self.rel_station_locations[-1])
# add a cell on the right hand side of the station area to reduce
# effect of a large cell next to it
self.x_grid = np.append(self.x_grid,
self.rel_station_locations[-1]+self.cell_width*\
self.x_pad_multiplier)
#--> pad the mesh with exponentially increasing horizontal cells
# such that the edge of the mesh can be estimated with a 1D model
x_left = float(abs(self.x_grid[0]-self.x_grid[1]))
x_right = float(abs(self.x_grid[-1]-self.x_grid[-2]))
x_pad_cell = np.max([x_left, x_right])
for ii in range(self.num_x_pad_cells):
left_cell = self.x_grid[0]
right_cell = self.x_grid[-1]
pad_cell = x_pad_cell*self.x_pad_multiplier**(ii+1)
self.x_grid = np.insert(self.x_grid, 0, left_cell-pad_cell)
self.x_grid = np.append(self.x_grid, right_cell+pad_cell)
#--> compute relative positions for the grid
self.x_nodes = self.x_grid.copy()
for ii, xx in enumerate(self.x_grid[:-1]):
self.x_nodes[ii] = abs(self.x_grid[ii+1]-xx)
self.x_nodes = self.x_nodes[:-1]
#2) make vertical nodes so that they increase with depth
#--> make depth grid
log_z = np.logspace(np.log10(self.z1_layer),
np.log10(self.z_target_depth-\
np.logspace(np.log10(self.z1_layer),
np.log10(self.z_target_depth),
num=self.n_layers)[-2]),
num=self.n_layers-self.num_z_pad_cells)
#round the layers to be whole numbers
ztarget = np.array([zz-zz%10**np.floor(np.log10(zz)) for zz in
log_z])
#--> create padding cells past target depth
log_zpad = np.logspace(np.log10(self.z_target_depth),
np.log10(self.z_bottom-\
np.logspace(np.log10(self.z_target_depth),
np.log10(self.z_bottom),
num=self.num_z_pad_cells)[-2]),
num=self.num_z_pad_cells)
#round the layers to be whole numbers
zpadding = np.array([zz-zz%10**np.floor(np.log10(zz)) for zz in
log_zpad])
#create the vertical nodes
self.z_nodes = np.append(ztarget, zpadding)
#calculate actual distances of depth layers
self.z_grid = np.array([self.z_nodes[:ii+1].sum()
for ii in range(self.z_nodes.shape[0])])
self.mesh_values = np.zeros((self.x_nodes.shape[0],
self.z_nodes.shape[0], 4), dtype=str)
self.mesh_values[:,:,:] = '?'
#get elevation if elevation_profile is given
if self.elevation_profile is not None:
self.add_elevation(self.elevation_profile)
print '='*55
print '{0:^55}'.format('mesh parameters'.upper())
print '='*55
print ' number of horizontal nodes = {0}'.format(self.x_nodes.shape[0])
print ' number of vertical nodes = {0}'.format(self.z_nodes.shape[0])
print ' Total Horizontal Distance = {0:.2f}'.format(self.x_nodes.sum())
print ' Total Vertical Distance = {0:.2f}'.format(self.z_nodes.sum())
print '='*55
def add_elevation(self, elevation_profile=None):
"""
The elevation model needs to be in relative coordinates and be a
numpy.ndarray(2, num_elevation_points) where the first row is
the horizontal location and the second row is the elevation at
that location.
If you have an elevation model, use Profile to project the elevation
information onto the profile line.
To build the elevation I'm going to add the elevation to the top
of the model, which will add cells to the mesh. There might be a better
way to do this, but this is the first attempt. So I'm going to assume
that the first layer of the mesh without elevation is the minimum
elevation and blocks will be added up to the maximum elevation at an
increment according to z1_layer.
.. note:: the elevation model should be symmetrical, i.e. starting
at the first station and ending on the last station, so for
now any elevation outside the station area will be ignored
and set to the elevation of the station at the extremities.
This is not ideal but works for now.
Arguments:
-----------
**elevation_profile** : np.ndarray(2, num_elev_points)
- 1st row is for profile location
- 2nd row is for elevation values
Computes:
---------
**mesh_values** : mesh values, setting anything above topography
to the key for air, which for Occam is '0'
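Below is a minimal usage sketch; the station locations and the elevation
array are placeholder values, not data from a real survey.
:Example: ::
>>> import numpy as np
>>> import mtpy.modeling.occam2d as occam2d
>>> ocm = occam2d.Mesh(station_locations=np.array([0., 500., 1000.]))
>>> ocm.build_mesh()
>>> # row 0 is location along the profile, row 1 is elevation (m)
>>> elev = np.array([[0., 500., 1000.], [120., 150., 130.]])
>>> ocm.add_elevation(elev)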
"""
if elevation_profile is not None:
self.elevation_profile = elevation_profile
if self.elevation_profile is None:
raise OccamInputError('Need to input an elevation profile to '
'add elevation into the mesh.')
elev_diff = abs(elevation_profile[1].max()-elevation_profile[1].min())
num_elev_layers = int(elev_diff/self.z1_layer)
#add vertical nodes and values to mesh_values
self.z_nodes = np.append([self.z1_layer]*num_elev_layers, self.z_nodes)
self.z_grid = np.array([self.z_nodes[:ii+1].sum()
for ii in range(self.z_nodes.shape[0])])
#this assumes that mesh_values have not been changed yet and are all ?
self.mesh_values = np.zeros((self.x_grid.shape[0],
self.z_grid.shape[0], 4), dtype=str)
self.mesh_values[:,:,:] = '?'
#--> need to interpolate the elevation values onto the mesh nodes
# first center the locations about 0, this needs to be the same
# center as the station locations.
offset = elevation_profile[0]-elevation_profile[0].mean()
elev = elevation_profile[1]-elevation_profile[1].min()
func_elev = spi.interp1d(offset, elev, kind='linear')
# need to figure out which cells and triangular cells need to be air
xpad = self.num_x_pad_cells+1
for ii, xg in enumerate(self.x_grid[xpad:-xpad], xpad):
#get the index value for z_grid by calculating the elevation
#difference relative to the top of the model
dz = elev.max()-func_elev(xg)
#index of ground in the model for that x location
zz = int(np.ceil(dz/self.z1_layer))
if zz == 0:
pass
else:
#--> need to figure out the triangular elements
#top triangle
zlayer = elev.max()-self.z_grid[zz]
try:
xtop = xg+(self.x_grid[ii+1]-xg)/2
ytop = zlayer+3*(self.z_grid[zz]-self.z_grid[zz-1])/4
elev_top = func_elev(xtop)
#print xg, xtop, ytop, elev_top, zz
if elev_top > ytop:
self.mesh_values[ii, 0:zz, 0] = self.air_key
else:
self.mesh_values[ii, 0:zz-1, 0] = self.air_key
except ValueError:
pass
#left triangle
try:
xleft = xg+(self.x_grid[ii+1]-xg)/4.
yleft = zlayer+(self.z_grid[zz]-self.z_grid[zz-1])/2.
elev_left = func_elev(xleft)
#print xg, xleft, yleft, elev_left, zz
if elev_left > yleft:
self.mesh_values[ii, 0:zz, 1] = self.air_key
except ValueError:
pass
#bottom triangle
try:
xbottom = xg+(self.x_grid[ii+1]-xg)/2
ybottom = zlayer+(self.z_grid[zz]-self.z_grid[zz-1])/4
elev_bottom = func_elev(xbottom)
#print xg, xbottom, ybottom, elev_bottom, zz
if elev_bottom > ybottom:
self.mesh_values[ii, 0:zz, 2] = self.air_key
except ValueError:
pass
#right triangle
try:
xright = xg+3*(self.x_grid[ii+1]-xg)/4
yright = zlayer+(self.z_grid[zz]-self.z_grid[zz-1])/2
elev_right = func_elev(xright)
if elev_right > yright*.95:
self.mesh_values[ii, 0:zz, 3] = self.air_key
except ValueError:
pass
#--> need to fill out the padding cells so they have the same elevation
# as the extremity stations.
for ii in range(xpad):
self.mesh_values[ii, :, :] = self.mesh_values[xpad+1, :, :]
for ii in range(xpad+1):
self.mesh_values[-(ii+1), :, :] = self.mesh_values[-xpad-2, :, :]
print '{0:^55}'.format('--- Added Elevation to Mesh ---')
def plot_mesh(self, **kwargs):
"""
Plot built mesh with station locations.
=================== ===================================================
Key Words Description
=================== ===================================================
depth_scale [ 'km' | 'm' ] scale of mesh plot.
*default* is 'km'
fig_dpi dots-per-inch resolution of the figure
*default* is 300
fig_num number of the figure instance
*default* is 'Mesh'
fig_size size of figure in inches (width, height)
*default* is [5, 5]
fs size of font of axis tick labels, axis labels are
fs+2. *default* is 6
ls [ '-' | '.' | ':' ] line style of mesh lines
*default* is '-'
marker marker of stations
*default* is r"$\blacktriangledown$"
ms size of marker in points. *default* is 5
plot_triangles [ 'y' | 'n' ] to plot mesh triangles.
*default* is 'n'
=================== ===================================================
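Below is a minimal usage sketch; ocd is an occam2d.Data instance as in
the class example above.
:Example: ::
>>> ocm = occam2d.Mesh(ocd.station_locations)
>>> ocm.build_mesh()
>>> # plot in meters and draw the triangular elements
>>> ocm.plot_mesh(depth_scale='m', plot_triangles='y')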
"""
fig_num = kwargs.pop('fig_num', 'Mesh')
fig_size = kwargs.pop('fig_size', [5, 5])
fig_dpi = kwargs.pop('fig_dpi', 300)
marker = kwargs.pop('marker', r"$\blacktriangledown$")
ms = kwargs.pop('ms', 5)
mc = kwargs.pop('mc', 'k')
lw = kwargs.pop('lw', .35)
fs = kwargs.pop('fs', 6)
plot_triangles = kwargs.pop('plot_triangles', 'n')
depth_scale = kwargs.pop('depth_scale', 'km')
#set the scale of the plot
if depth_scale == 'km':
df = 1000.
elif depth_scale == 'm':
df = 1.
else:
df = 1000.
plt.rcParams['figure.subplot.left'] = .12
plt.rcParams['figure.subplot.right'] = .98
plt.rcParams['font.size'] = fs
if self.x_grid is None:
self.build_mesh()
fig = plt.figure(fig_num, figsize=fig_size, dpi=fig_dpi)
ax = fig.add_subplot(1, 1, 1, aspect='equal')
#plot the station marker
#plots a V for the station because when you use scatter the spacing
#is variable if you change the limits of the y axis; this way it
#always plots at the surface.
for offset in self.rel_station_locations:
ax.text((offset)/df,
0,
marker,
horizontalalignment='center',
verticalalignment='baseline',
fontdict={'size':ms,'color':mc})
#--> make list of column lines
row_line_xlist = []
row_line_ylist = []
for xx in self.x_grid/df:
row_line_xlist.extend([xx,xx])
row_line_xlist.append(None)
row_line_ylist.extend([0, self.z_grid[-1]/df])
row_line_ylist.append(None)
#plot column lines (variables are a little bit of a misnomer)
ax.plot(row_line_xlist,
row_line_ylist,
color='k',
lw=lw)
#--> make list of row lines
col_line_xlist = [self.x_grid[0]/df, self.x_grid[-1]/df]
col_line_ylist = [0, 0]
for yy in self.z_grid/df:
col_line_xlist.extend([self.x_grid[0]/df,
self.x_grid[-1]/df])
col_line_xlist.append(None)
col_line_ylist.extend([yy, yy])
col_line_ylist.append(None)
#plot row lines (variables are a little bit of a misnomer)
ax.plot(col_line_xlist,
col_line_ylist,
color='k',
lw=lw)
if plot_triangles == 'y':
row_line_xlist = []
row_line_ylist = []
for xx in self.x_grid/df:
row_line_xlist.extend([xx,xx])
row_line_xlist.append(None)
row_line_ylist.extend([0, self.z_grid[-1]/df])
row_line_ylist.append(None)
#plot columns
ax.plot(row_line_xlist,
row_line_ylist,
color='k',
lw=lw)
col_line_xlist = []
col_line_ylist = []
for yy in self.z_grid/df:
col_line_xlist.extend([self.x_grid[0]/df,
self.x_grid[-1]/df])
col_line_xlist.append(None)
col_line_ylist.extend([yy, yy])
col_line_ylist.append(None)
#plot rows
ax.plot(col_line_xlist,
col_line_ylist,
color='k',
lw=lw)
diag_line_xlist = []
diag_line_ylist = []
for xi, xx in enumerate(self.x_grid[:-1]/df):
for yi, yy in enumerate(self.z_grid[:-1]/df):
diag_line_xlist.extend([xx, self.x_grid[xi+1]/df])
diag_line_xlist.append(None)
diag_line_xlist.extend([xx, self.x_grid[xi+1]/df])
diag_line_xlist.append(None)
diag_line_ylist.extend([yy, self.z_grid[yi+1]/df])
diag_line_ylist.append(None)
diag_line_ylist.extend([self.z_grid[yi+1]/df, yy])
diag_line_ylist.append(None)
#plot diagonal lines.
ax.plot(diag_line_xlist,
diag_line_ylist,
color='k',
lw=lw)
#--> set axes properties
ax.set_ylim(self.z_target_depth/df, -2000/df)
xpad = self.num_x_pad_cells-1
ax.set_xlim(self.x_grid[xpad]/df, -self.x_grid[xpad]/df)
ax.set_xlabel('Easting ({0})'.format(depth_scale),
fontdict={'size':fs+2, 'weight':'bold'})
ax.set_ylabel('Depth ({0})'.format(depth_scale),
fontdict={'size':fs+2, 'weight':'bold'})
plt.show()
def write_mesh_file(self, save_path=None, basename='Occam2DMesh'):
"""
Write a finite element mesh file.
Calls build_mesh if it has not already been called.
Arguments:
-----------
**save_path** : string
directory path or full path to save file
**basename** : string
basename of mesh file. *default* is 'Occam2DMesh'
Returns:
----------
**mesh_fn** : string
full path to mesh file
:example: ::
>>> import mtpy.modeling.occam2d as occam2d
>>> edi_path = r"/home/mt/edi_files"
>>> profile = occam2d.Profile(edi_path)
>>> profile.plot_profile()
>>> mesh = occam2d.Mesh(profile.station_locations)
>>> mesh.build_mesh()
>>> mesh.write_mesh_file(save_path=r"/home/occam2d/Inv1")
"""
if save_path is not None:
self.save_path = save_path
if self.save_path is None:
self.save_path = os.getcwd()
self.mesh_fn = os.path.join(self.save_path, basename)
if self.x_nodes is None:
self.build_mesh()
mesh_lines = []
nx = self.x_nodes.shape[0]
nz = self.z_nodes.shape[0]
mesh_lines.append('MESH FILE Created by mtpy.modeling.occam2d\n')
mesh_lines.append(" {0} {1} {2} {0} {0} {3}\n".format(0, nx,
nz, 2))
#--> write horizontal nodes
node_str = ''
for ii, xnode in enumerate(self.x_nodes):
node_str += '{0:>9.1f} '.format(xnode)
if np.remainder(ii+1, 8) == 0:
node_str += '\n'
mesh_lines.append(node_str)
node_str = ''
node_str += '\n'
mesh_lines.append(node_str)
#--> write vertical nodes
node_str = ''
for ii, znode in enumerate(self.z_nodes):
node_str += '{0:>9.1f} '.format(znode)
if np.remainder(ii+1, 8) == 0:
node_str += '\n'
mesh_lines.append(node_str)
node_str = ''
node_str += '\n'
mesh_lines.append(node_str)
#--> need a 0 after the nodes
mesh_lines.append(' 0\n')
#--> write triangular mesh block values as ?
for zz in range(self.z_nodes.shape[0]):
for tt in range(4):
mesh_lines.append(''.join(self.mesh_values[:, zz, tt])+'\n')
mfid = open(self.mesh_fn, 'w')
mfid.writelines(mesh_lines)
mfid.close()
print 'Wrote Mesh file to {0}'.format(self.mesh_fn)
def read_mesh_file(self, mesh_fn):
"""
Reads an Occam2D mesh file.
Arguments:
----------
**mesh_fn** : string
full path to mesh file
Populates:
-----------
**x_grid** : array of horizontal locations of nodes (m)
**x_nodes**: array of horizontal node relative distances
(column locations (m))
**z_grid** : array of vertical node locations (m)
**z_nodes** : array of vertical nodes
(row locations(m))
**mesh_values** : np.array of free parameters
To do:
------
incorporate fixed values
:Example: ::
>>> import mtpy.modeling.occam2d as occam2d
>>> mg = occam2d.Mesh()
>>> mg.mesh_fn = r"/home/mt/occam/line1/Occam2Dmesh"
>>> mg.read_mesh_file()
"""
self.mesh_fn = mesh_fn
mfid = open(self.mesh_fn, 'r')
mlines = mfid.readlines()
nh = int(mlines[1].strip().split()[1])
nv = int(mlines[1].strip().split()[2])
self.x_nodes = np.zeros(nh)
self.z_nodes=np.zeros(nv)
self.mesh_values = np.zeros((nh, nv, 4), dtype=str)
#get horizontal nodes
h_index = 0
v_index = 0
m_index = 0
line_count = 2
#--> fill horizontal nodes
for mline in mlines[line_count:]:
mline = mline.strip().split()
for m_value in mline:
self.x_nodes[h_index] = float(m_value)
h_index += 1
line_count += 1
if h_index == nh:
break
#--> fill vertical nodes
for mline in mlines[line_count:]:
mline = mline.strip().split()
for m_value in mline:
self.z_nodes[v_index] = float(m_value)
v_index += 1
line_count += 1
if v_index == nv:
break
#--> fill model values
for ll, mline in enumerate(mlines[line_count+1:], line_count):
mline = mline.strip()
if m_index == nv or mline.lower().find('exception')>0:
break
else:
mlist = list(mline)
if len(mlist) != nh:
print '--- Line {0} in {1}'.format(ll, self.mesh_fn)
print 'Check mesh file too many columns'
print 'Should be {0}, has {1}'.format(nh,len(mlist))
mlist = mlist[0:nh]
for kk in range(4):
for jj, mvalue in enumerate(list(mlist)):
self.mesh_values[jj,m_index,kk] = mline[jj]
m_index += 1
#sometimes it seems that the number of nodes is not the same as the
#header would suggest so need to remove the zeros
self.x_nodes = self.x_nodes[np.nonzero(self.x_nodes)]
if self.x_nodes.shape[0] != nh:
new_nh = self.x_nodes.shape[0]
print 'The header number {0} should read {1}'.format(nh, new_nh)
self.mesh_values.resize(new_nh, nv, 4)
else:
new_nh = nh
self.z_nodes = self.z_nodes[np.nonzero(self.z_nodes)]
if self.z_nodes.shape[0] != nv:
new_nv = self.z_nodes.shape[0]
print 'The header number {0} should read {1}'.format(nv, new_nv)
self.mesh_values.resize(new_nh, nv, 4)
#make x_grid and z_grid
self.x_grid = self.x_nodes.copy()
self.x_grid = np.append(self.x_grid, self.x_grid[-1])
self.x_grid = np.array([self.x_grid[:ii].sum()
for ii in range(self.x_grid.shape[0])])
self.x_grid -= self.x_grid.mean()
self.z_grid = np.array([self.z_nodes[:ii].sum()
for ii in range(self.z_nodes.shape[0])])
class Profile():
"""
Takes data from .edi files to create a profile line for 2D modeling.
Can project the stations onto a profile that is perpendicular to strike
or a given profile direction.
If _rotate_to_strike is True, the impedance tensor and tipper are rotated
to align with the geoelectric strike angle.
If _rotate_to_strike is True and geoelectric_strike is not given,
then it is calculated using the phase tensor. First, 2D sections are
estimated from the impedance tensor, then the strike is estimated from the
phase tensor azimuth + skew. This angle is then used to project the
stations perpendicular to the strike angle.
If you want to project onto an angle not perpendicular to strike, give
profile_angle and set _rotate_to_strike to False. This will project
the impedance tensor and tipper to be perpendicular with the
profile_angle.
Arguments:
-----------
**edi_path** : string
full path to edi files
**station_list** : list of stations to create profile for if None is
given all .edi files in edi_path will be used.
.. note:: the algorithm assumes .edi files are
named by station and it only looks for
the station within the .edi file name;
it does not match exactly, so if you have
.edi files with similar names there
might be some problems.
**geoelectric_strike** : float
geoelectric strike direction in degrees
assuming 0 is North and East is 90
**profile_angle** : float
angle to project the stations onto a profile line
.. note:: the geoelectric strike angle and
profile angle should be orthogonal for
best results from 2D modeling.
======================= ===================================================
**Attributes** Description
======================= ===================================================
edi_list list of mtpy.core.mt.MT instances for each .edi
file found in edi_path
elevation_model numpy.ndarray(3, num_elevation_points) elevation
values for the profile line (east, north, elev)
geoelectric_strike geoelectric strike direction assuming N == 0
profile_angle angle of profile line assuming N == 0
profile_line (slope, N-intercept) of profile line
_profile_generated [ True | False ] True if profile has already been
generated
edi_path path to find .edi files
station_list list of stations to extract from edi_path
num_edi number of edi files to create a profile for
_rotate_to_strike [ True | False ] True to project the stations onto
a line that is perpendicular to the geoelectric strike;
Z and Tipper are also rotated to the strike direction.
======================= ===================================================
.. note:: change _rotate_to_strike to False if you want to project the
stations onto a given profile direction. This will rotate
Z and Tipper to be orthogonal to this direction
======================= ===================================================
Methods Description
======================= ===================================================
generate_profile generates a profile for the given stations
plot_profile plots the profile line along with original station
locations to compare.
======================= ===================================================
:Example: ::
>>> import mtpy.modeling.occam2d as occam
>>> edi_path = r"/home/mt/edi_files"
>>> station_list = ['mt{0:03}'.format(ss) for ss in range(0, 15)]
>>> prof_line = occam.Profile(edi_path, station_list=station_list)
>>> prof_line.plot_profile()
>>> #if you want to project to a given strike
>>> prof_line.geoelectric_strike = 36.7
>>> prof_line.generate_profile()
>>> prof_line.plot_profile()
"""
def __init__(self, edi_path=None, **kwargs):
self.edi_path = edi_path
self.station_list = kwargs.pop('station_list', None)
self.geoelectric_strike = kwargs.pop('geoelectric_strike', None)
self.profile_angle = kwargs.pop('profile_angle', None)
self.edi_list = []
self._rotate_to_strike = True
self.num_edi = 0
self._profile_generated = False
self.profile_line = None
self.station_locations = None
self.elevation_model = kwargs.pop('elevation_model', None)
self.elevation_profile = None
self.estimate_elevation = True
def _get_edi_list(self):
"""
get a list of edi files that correspond to the station list
each element of the list is a mtpy.core.mt.MT object
"""
if self.station_list is not None:
for station in self.station_list:
for edi in os.listdir(self.edi_path):
if edi.find(station) == 0 and edi[-3:] == 'edi':
self.edi_list.append(mt.MT(os.path.join(self.edi_path,
edi)))
break
else:
self.edi_list = [mt.MT(os.path.join(self.edi_path, edi)) for
edi in os.listdir(self.edi_path)
if edi[-3:]=='edi']
self.num_edi = len(self.edi_list)
for edi in self.edi_list:
if type(edi.Tipper.rotation_angle) is list:
edi.Tipper.rotation_angle = np.array(edi.Tipper.rotation_angle)
def generate_profile(self):
"""
Generate linear profile by regression of station locations.
If profile_angle is not None, then stations are projected onto that
line. Otherwise, a geoelectric strike is calculated from the data
and the stations are projected onto an angle perpendicular to the
estimated strike direction. If _rotate_to_strike is True, the
impedance tensor and Tipper data are rotated to align with strike.
Else, data is not rotated to strike.
To project stations onto a given line, set profile_angle to the desired
angle and set _rotate_to_strike to False. This will project the stations onto
profile_angle and rotate the impedance tensor and tipper to be
perpendicular to the profile_angle.
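Below is a minimal usage sketch; the edi path is a placeholder.
:Example: ::
>>> import mtpy.modeling.occam2d as occam2d
>>> pr = occam2d.Profile(edi_path=r"/home/mt/edi_files")
>>> # project onto a fixed profile angle instead of the strike estimate
>>> pr.profile_angle = 45.0
>>> pr._rotate_to_strike = False
>>> pr.generate_profile()
>>> pr.plot_profile()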
"""
self._get_edi_list()
strike_angles = np.zeros(self.num_edi)
easts = np.zeros(self.num_edi)
norths = np.zeros(self.num_edi)
utm_zones = np.zeros(self.num_edi)
for ii, edi in enumerate(self.edi_list):
#find strike angles for each station if a strike angle is not given
if self.geoelectric_strike is None:
try:
#check dimensionality to be sure strike is estimated for 2D
dim = MTgy.dimensionality(z_object=edi.Z)
#get strike for only those periods
gstrike = MTgy.strike_angle(edi.Z.z[np.where(dim==2)])[:,0]
strike_angles[ii] = np.median(gstrike)
except:
pass
easts[ii] = edi.east
norths[ii] = edi.north
utm_zones[ii] = int(edi.utm_zone[:-1])
if len(self.edi_list) == 0:
raise IOError('Could not find any .edi file in {0}'.format(self.edi_path))
if self.geoelectric_strike is None:
try:
#might try mode here instead of mean
self.geoelectric_strike = np.median(strike_angles[np.nonzero(strike_angles)])
except:
#empty list or so....
#can happen if everything is just 1D
self.geoelectric_strike = 0.
#need to check the zones of the stations
main_utmzone = mode(utm_zones)[0][0]
for ii, zone in enumerate(utm_zones):
if zone == main_utmzone:
continue
else:
print ('station {0} is out of main utm zone'.format(self.edi_list[ii].station)+\
' and will not be included in profile')
# check regression for 2 profile orientations:
# horizontal (N=N(E)) or vertical(E=E(N))
# use the one with the lower standard deviation
profile1 = sp.stats.linregress(easts, norths)
profile2 = sp.stats.linregress(norths, easts)
profile_line = profile1[:2]
#if the profile is rather E=E(N), the parameters have to be converted
# into N=N(E) form:
if profile2[4] < profile1[4]:
profile_line = (1./profile2[0], -profile2[1]/profile2[0])
self.profile_line = profile_line
#profile_line = sp.polyfit(lo_easts, lo_norths, 1)
if self.profile_angle is None:
self.profile_angle = (90-(np.arctan(profile_line[0])*180/np.pi))%180
#rotate Z according to strike angle,
#if strike was explicitly given, use that value!
#otherwise:
#have 90 degree ambiguity in strike determination
#choose strike which offers larger angle with profile
#if profile azimuth is in [0,90].
if self._rotate_to_strike is False:
if 0 <= self.profile_angle < 90:
if np.abs(self.profile_angle-self.geoelectric_strike) < 45:
self.geoelectric_strike += 90
elif 90 <= self.profile_angle < 135:
if self.profile_angle-self.geoelectric_strike < 45:
self.geoelectric_strike -= 90
else:
if self.profile_angle-self.geoelectric_strike >= 135:
self.geoelectric_strike += 90
self.geoelectric_strike = self.geoelectric_strike%180
#rotate components of Z and Tipper to align with geoelectric strike
#which should now be perpendicular to the profile strike
if self._rotate_to_strike == True:
self.profile_angle = self.geoelectric_strike+90
p1 = np.tan(np.deg2rad(90-self.profile_angle))
#need to project the y-intercept to the new angle
p2 = (self.profile_line[0]-p1)*easts[0]+self.profile_line[1]
self.profile_line = (p1, p2)
for edi in self.edi_list:
edi.Z.rotate(self.geoelectric_strike-edi.Z.rotation_angle)
# rotate tipper to profile azimuth, not strike.
try:
edi.Tipper.rotate((self.profile_angle-90)%180-
edi.Tipper.rotation_angle.mean())
except AttributeError:
edi.Tipper.rotate((self.profile_angle-90)%180-
edi.Tipper.rotation_angle)
print '='*72
print ('Rotated Z and Tipper to align with '
'{0:+.2f} degrees E of N'.format(self.geoelectric_strike))
print ('Profile angle is '
'{0:+.2f} degrees E of N'.format(self.profile_angle))
print '='*72
else:
for edi in self.edi_list:
edi.Z.rotate((self.profile_angle-90)%180-edi.Z.rotation_angle)
# rotate tipper to profile azimuth, not strike.
try:
edi.Tipper.rotate((self.profile_angle-90)%180-
edi.Tipper.rotation_angle.mean())
except AttributeError:
edi.Tipper.rotate((self.profile_angle-90)%180-
edi.Tipper.rotation_angle)
print '='*72
print ('Rotated Z and Tipper to be perpendicular with '
'{0:+.2f} profile angle'.format((self.profile_angle-90)%180))
print ('Profile angle is '
'{0:+.2f} degrees E of N'.format(self.profile_angle))
print '='*72
#--> project stations onto profile line
projected_stations = np.zeros((self.num_edi, 2))
self.station_locations = np.zeros(self.num_edi)
#create profile vector
profile_vector = np.array([1, self.profile_line[0]])
#be sure the amplitude is 1 for a unit vector
profile_vector /= np.linalg.norm(profile_vector)
for ii, edi in enumerate(self.edi_list):
station_vector = np.array([easts[ii], norths[ii]-self.profile_line[1]])
position = np.dot(profile_vector, station_vector)*profile_vector
self.station_locations[ii] = np.linalg.norm(position)
edi.offset = np.linalg.norm(position)
edi.projected_east = position[0]
edi.projected_north = position[1]+self.profile_line[1]
projected_stations[ii] = [position[0], position[1]+self.profile_line[1]]
#set the first station to 0
for edi in self.edi_list:
edi.offset -= self.station_locations.min()
self.station_locations -= self.station_locations.min()
#Sort from West to East:
index_sort = np.argsort(self.station_locations)
if self.profile_angle == 0:
#Exception: sort from North to South
index_sort = np.argsort(norths)
#sorting along the profile
self.edi_list = [self.edi_list[ii] for ii in index_sort]
self.station_locations = np.array([self.station_locations[ii]
for ii in index_sort])
if self.estimate_elevation == True:
self.project_elevation()
self._profile_generated = True
def project_elevation(self, elevation_model=None):
"""
projects elevation data into the profile
Arguments:
-------------
**elevation_model** : np.ndarray(3, num_elevation_points)
(east, north, elevation)
for now needs to be in utm coordinates
if None then elevation is taken from edi_list
Returns:
----------
**elevation_profile** : np.ndarray(2, num_points) of (location along profile, elevation)
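Below is a minimal usage sketch; dem_array stands in for a user-supplied
np.ndarray(3, num_points) of (east, north, elevation) in UTM meters.
:Example: ::
>>> pr = occam2d.Profile(edi_path=r"/home/mt/edi_files")
>>> pr.generate_profile()
>>> pr.project_elevation(elevation_model=dem_array)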
"""
self.elevation_model = elevation_model
#--> get an elevation model for the mesh
if self.elevation_model is None:
self.elevation_profile = np.zeros((2, len(self.edi_list)))
self.elevation_profile[0,:] = np.array([ss
for ss in self.station_locations])
self.elevation_profile[1,:] = np.array([edi.elev
for edi in self.edi_list])
#--> project known elevations onto the profile line
else:
self.elevation_profile = np.zeros((2, self.elevation_model.shape[1]))
#create profile vector
profile_vector = np.array([1, self.profile_line[0]])
#be sure the amplitude is 1 for a unit vector
profile_vector /= np.linalg.norm(profile_vector)
for ii in range(self.elevation_model.shape[1]):
east = self.elevation_model[0, ii]
north = self.elevation_model[1, ii]
elev = self.elevation_model[2, ii]
elev_vector = np.array([east, north-self.profile_line[1]])
position = np.dot(profile_vector, elev_vector)*profile_vector
self.elevation_profile[0, ii] = np.linalg.norm(position)
self.elevation_profile[1, ii] = elev
def plot_profile(self, **kwargs):
"""
Plot the projected profile line along with original station locations
to make sure the line projected is correct.
===================== =================================================
Key Words Description
===================== =================================================
fig_dpi dots-per-inch resolution of figure
*default* is 300
fig_num number of figure instance
*default* is 'Projected Profile'
fig_size size of figure in inches (width, height)
*default* is [5, 5]
fs [ float ] font size in points of axes tick labels
axes labels are fs+2
*default* is 6
lc [ string | (r, g, b) ] color of profile line
(see matplotlib.line for options)
*default* is 'b' -- blue
lw float, width of profile line in points
*default* is 1
marker [ string ] marker for stations
(see matplotlib.pyplot.plot) for options
mc [ string | (r, g, b) ] color of projected
stations. *default* is 'k' -- black
ms [ float ] size of station marker
*default* is 5
station_id [min, max] index values for station labels
*default* is None
===================== =================================================
:Example: ::
>>> edipath = r"/home/mt/edi_files"
>>> pr = occam2d.Profile(edi_path=edipath)
>>> pr.generate_profile()
>>> # set station labels to only be from 1st to 4th index
>>> # of station name
>>> pr.plot_profile(station_id=[0,4])
"""
fig_num = kwargs.pop('fig_num', 'Projected Profile')
fig_size = kwargs.pop('fig_size', [5, 5])
fig_dpi = kwargs.pop('fig_dpi', 300)
marker = kwargs.pop('marker', 'v')
ms = kwargs.pop('ms', 5)
mc = kwargs.pop('mc', 'k')
lc = kwargs.pop('lc', 'b')
lw = kwargs.pop('lw', 1)
fs = kwargs.pop('fs', 6)
station_id = kwargs.pop('station_id', None)
plt.rcParams['figure.subplot.left'] = .12
plt.rcParams['figure.subplot.right'] = .98
plt.rcParams['font.size'] = fs
if self._profile_generated is False:
self.generate_profile()
fig = plt.figure(fig_num, figsize=fig_size, dpi=fig_dpi)
ax = fig.add_subplot(1, 1, 1, aspect='equal')
for edi in self.edi_list:
m1, = ax.plot(edi.projected_east, edi.projected_north,
marker=marker, ms=ms, mfc=mc, mec=mc, color=lc)
m2, = ax.plot(edi.east, edi.north, marker=marker,
ms=.5*ms, mfc=(.6, .6, .6), mec=(.6, .6, .6),
color=lc)
if station_id is None:
ax.text(edi.projected_east, edi.projected_north*1.00025,
edi.station,
ha='center', va='baseline',
fontdict={'size':fs, 'weight':'bold'})
else:
ax.text(edi.projected_east, edi.projected_north*1.00025,
edi.station[station_id[0]:station_id[1]],
ha='center', va='baseline',
fontdict={'size':fs, 'weight':'bold'})
peasts = np.array([edi.projected_east for edi in self.edi_list])
pnorths = np.array([edi.projected_north for edi in self.edi_list])
easts = np.array([edi.east for edi in self.edi_list])
norths = np.array([edi.north for edi in self.edi_list])
ploty = sp.polyval(self.profile_line, easts)
ax.plot(easts, ploty, lw=lw, color=lc)
ax.set_title('Original/Projected Stations')
ax.set_ylim((min([norths.min(), pnorths.min()])*.999,
max([norths.max(), pnorths.max()])*1.001))
ax.set_xlim((min([easts.min(), peasts.min()])*.98,
max([easts.max(), peasts.max()])*1.02))
ax.set_xlabel('Easting (m)',
fontdict={'size':fs+2, 'weight':'bold'})
ax.set_ylabel('Northing (m)',
fontdict={'size':fs+2, 'weight':'bold'})
ax.grid(alpha=.5)
ax.legend([m1, m2], ['Projected', 'Original'], loc='upper left',
prop={'size':fs})
plt.show()
class Regularization(Mesh):
"""
Creates a regularization grid based on Mesh. Note that Mesh is inherited
by Regularization, therefore the intended use is to build a mesh with
the Regularization class.
The regularization grid is what Occam calculates the inverse model on.
Setup is tricky and can be painful; as you can see it is not quite fully
functional yet, since it cannot incorporate topography. It seems like
you'd like to have the regularization setup so that your target depth is
covered well, in that the regularization blocks to this depth are
sufficiently small to resolve resistivity structure at that depth.
Finally, you want the regularization to go to a half space at the bottom,
basically one giant block.
Arguments:
-----------
**station_locations** : np.ndarray(n_stations)
array of station locations along a profile
line in meters.
======================= ===================================================
Key Words/Attributes Description
======================= ===================================================
air_key letter associated with the value of air
*default* is 0
air_value value given to an air cell, *default* is 1E13
binding_offset offset from the right side of the furthest left
hand model block in meters. The regularization
grid is setup such that this should be 0.
cell_width width of cells with in station area in meters
*default* is 100
description description of the model for the model file.
*default* is 'simple inversion'
elevation_profile elevation profile along the profile line.
given as np.ndarray(nx, 2), where the elements
are x_location, elevation. If elevation profile
is given add_elevation is called automatically.
*default* is None
mesh_fn full path to mesh file.
mesh_values letter values of each triangular mesh element
if the cell is free value is ?
model_columns
model_name
model_rows
min_block_width [ float ] minimum model block width in meters,
*default* is 2*cell_width
n_layers number of vertical layers in mesh
*default* is 90
num_free_param [ int ] number of free parameters in the model.
this is a tricky number to estimate apparently.
num_layers [ int ] number of regularization layers.
num_x_pad_cells number of horizontal padding cells outside the
the station area that will increase in size
by x_pad_multiplier. *default* is 7
num_x_pad_small_cells number of horizontal padding cells just outside
the station area with width cell_width. This is
to extend the station area if needed.
*default* is 2
num_z_pad_cells number of vertical padding cells below
z_target_depth down to z_bottom. *default* is 5
prejudice_fn full path to prejudice file
*default* is 'none'
reg_basename basename of regularization file (model file)
*default* is 'Occam2DModel'
reg_fn full path to regularization file (model file)
*default* is save_path/reg_basename
rel_station_locations relative station locations within the mesh. The
locations are relative to the center of the station
area. *default* is None, filled later
save_path full path to save mesh and model file to.
*default* is current working directory.
statics_fn full path to static shift file
Static shifts in occam may not work.
*default* is 'none'
station_locations location of stations in meters, can be on a
relative grid or in UTM.
trigger [ float ] multiplier to merge model blocks at
depth. A higher number increases the number of
model blocks at depth. *default* is .75
x_grid location of horizontal grid nodes in meters
x_nodes relative spacing between grid nodes
x_pad_multiplier horizontal padding cells will increase by this
multiple out to the edge of the grid.
*default* is 1.5
z1_layer thickness of the first layer in the model.
Should be at least 1/4 of the first skin depth
*default* is 10
z_bottom bottom depth of the model (m). Needs to be large
enough to be 1D at the edge.
*default* is 200000.0
z_grid location of vertical nodes in meters
z_nodes relative distance between vertical nodes in meters
z_target_depth depth to deepest target of interest. Below this
depth cells will be padded to z_bottom
======================= ===================================================
.. note:: regularization does not work with topography yet. Having
problems calculating the number of free parameters.
========================= =================================================
Methods Description
========================= =================================================
add_elevation adds elevation to the mesh given elevation
profile.
build_mesh builds the mesh given the attributes of Mesh. If
elevation_profile is not None, add_elevation is
called inside build_mesh
build_regularization builds the regularization grid from the built mesh
be sure to plot the grids before starting the
inversion to make sure coverage is appropriate.
get_num_free_param estimate the number of free parameters.
**This is a work in progress**
plot_mesh plots the built mesh with station location.
read_mesh_file reads in an existing mesh file and populates the
appropriate attributes.
read_regularization_file read in existing regularization file, populates
appropriate attributes
write_mesh_file writes a mesh file to save_path
write_regularization_file writes a regularization file
======================= ===================================================
:Example: ::
>>> edipath = r"/home/mt/edi_files"
>>> profile = occam2d.Profile(edi_path=edi_path)
>>> profile.generate_profile()
>>> reg = occam2d.Regularization(profile.station_locations)
>>> reg.build_mesh()
>>> reg.build_regularization()
>>> reg.save_path = r"/home/occam2d/Line1/Inv1"
>>> reg.write_regularization_file()
"""
def __init__(self, station_locations=None, **kwargs):
# Be sure to initialize Mesh
Mesh.__init__(self, station_locations, **kwargs)
self.min_block_width = kwargs.pop('min_block_width',
2*np.median(self.cell_width))
self.trigger = kwargs.pop('trigger', .75)
self.model_columns = None
self.model_rows = None
self.binding_offset = None
self.reg_fn = None
self.reg_basename = 'Occam2DModel'
self.model_name = 'model made by mtpy.modeling.occam2d'
self.description = 'simple Inversion'
self.num_param = None
self.num_free_param = None
self.statics_fn = kwargs.pop('statics_fn', 'none')
self.prejudice_fn = kwargs.pop('prejudice_fn', 'none')
self.num_layers = kwargs.pop('num_layers', None)
#--> build mesh
if self.station_locations is not None:
self.build_mesh()
self.build_regularization()
def build_regularization(self):
"""
Builds larger boxes around existing mesh blocks for the regularization.
As the model deepens the regularization boxes get larger.
The regularization boxes are merged mesh cells as prescribed by the
Occam method.
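Below is a minimal usage sketch following the class example above; note
that build_mesh and build_regularization run automatically when station
locations are given, but can be re-run after changing attributes.
:Example: ::
>>> reg = occam2d.Regularization(profile.station_locations)
>>> reg.trigger = 1.0
>>> reg.build_mesh()
>>> reg.build_regularization()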
"""
# list of the mesh columns to combine
self.model_columns = []
# list of mesh rows to combine
self.model_rows = []
#At the top of the mesh model blocks will be 2 combined mesh blocks
#Note that the padding cells are combined into one model block
station_col = [2]*((self.x_nodes.shape[0]-2*self.num_x_pad_cells)/2)
model_cols = [self.num_x_pad_cells]+station_col+[self.num_x_pad_cells]
station_widths = [self.x_nodes[ii]+self.x_nodes[ii+1] for ii in
range(self.num_x_pad_cells,
self.x_nodes.shape[0]-self.num_x_pad_cells, 2)]
pad_width = self.x_nodes[0:self.num_x_pad_cells].sum()
model_widths = [pad_width]+station_widths+[pad_width]
num_cols = len(model_cols)
model_thickness = np.append(self.z_nodes[0:self.z_nodes.shape[0]-
self.num_z_pad_cells],
self.z_nodes[-self.num_z_pad_cells:].sum())
self.num_param = 0
#--> now need to calculate model blocks to the bottom of the model
columns = list(model_cols)
widths = list(model_widths)
for zz, thickness in enumerate(model_thickness):
#index for model column blocks from first_row, start at 1 because
# 0 is for padding cells
block_index = 1
num_rows = 1
if zz == 0:
num_rows += 1
if zz == len(model_thickness)-1:
num_rows = self.num_z_pad_cells
while block_index+1 < num_cols-1:
#check to see if horizontally merged mesh cells are not larger
#than the thickness times trigger
if thickness < self.trigger*(widths[block_index]+\
widths[block_index+1]):
block_index += 1
continue
#merge 2 neighboring cells to avoid vertical exaggerations
else:
widths[block_index] += widths[block_index+1]
columns[block_index] += columns[block_index+1]
#remove one of the merged cells
widths.pop(block_index+1)
columns.pop(block_index+1)
num_cols -= 1
self.num_param += num_cols
self.model_columns.append(list(columns))
self.model_rows.append([num_rows, num_cols])
#calculate the distance from the right side of the furthest left
#model block to the furthest left station which is half the distance
# from the center of the mesh grid.
self.binding_offset = self.x_grid[self.num_x_pad_cells+1]+\
self.station_locations.mean()
self.get_num_free_params()
print '='*55
print '{0:^55}'.format('regularization parameters'.upper())
print '='*55
print ' binding offset = {0:.1f}'.format(self.binding_offset)
print ' number layers = {0}'.format(len(self.model_columns))
print ' number of parameters = {0}'.format(self.num_param)
print ' number of free param = {0}'.format(self.num_free_param)
print '='*55
def get_num_free_params(self):
"""
estimate the number of free parameters in model mesh.
I'm assuming that if there are any fixed parameters in the block, then
that model block is assumed to be fixed. Not sure if this is right
because there is no documentation.
**DOES NOT WORK YET**
"""
self.num_free_param = 0
row_count = 0
#loop over columns and rows of regularization grid
for col, row in zip(self.model_columns, self.model_rows):
rr = row[0]
col_count = 0
for ii, cc in enumerate(col):
#make a model block from the index values of the regularization
#grid
model_block = self.mesh_values[row_count:row_count+rr,
col_count:col_count+cc, :]
#find all the free triangular blocks within that model block
find_free = np.where(model_block=='?')
try:
#test to see if the number of free parameters is equal
#to the number of triangular elements within the model
#block; if so, the model block is assumed to be free.
if find_free[0].size == model_block.size:
self.num_free_param += 1
except IndexError:
pass
col_count += cc
row_count += rr
def write_regularization_file(self, reg_fn=None, reg_basename=None,
statics_fn='none', prejudice_fn='none',
save_path=None):
"""
Write a regularization file for input into occam.
Calls build_regularization if build_regularization has not already
been called.
if reg_fn is None, then file is written to save_path/reg_basename
Arguments:
----------
**reg_fn** : string
full path to regularization file. *default* is None
and file will be written to save_path/reg_basename
**reg_basename** : string
basename of regularization file
**statics_fn** : string
full path to static shift file
.. note:: static shift does not always work in
occam2d.exe
**prejudice_fn** : string
full path to prejudice file
**save_path** : string
path to save regularization file.
*default* is current working directory
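Below is a minimal usage sketch; the save path is a placeholder. The mesh
file is written first so that mesh_fn is set before the regularization
file references it.
:Example: ::
>>> reg = occam2d.Regularization(profile.station_locations)
>>> reg.save_path = r"/home/occam2d/Line1/Inv1"
>>> reg.write_mesh_file()
>>> reg.write_regularization_file(reg_basename='Line1Model')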
"""
if save_path is not None:
self.save_path = save_path
if reg_basename is not None:
self.reg_basename = reg_basename
if reg_fn is None:
if self.save_path is None:
self.save_path = os.getcwd()
self.reg_fn = os.path.join(self.save_path, self.reg_basename)
self.statics_fn = statics_fn
self.prejudice_fn = prejudice_fn
if self.model_columns is None:
if self.binding_offset is None:
self.build_mesh()
self.build_regularization()
reg_lines = []
#--> write out header information
reg_lines.append('{0:<18}{1}\n'.format('Format:',
'occam2mtmod_1.0'.upper()))
reg_lines.append('{0:<18}{1}\n'.format('Model Name:',
self.model_name.upper()))
reg_lines.append('{0:<18}{1}\n'.format('Description:',
self.description.upper()))
if os.path.dirname(self.mesh_fn) == self.save_path:
reg_lines.append('{0:<18}{1}\n'.format('Mesh File:',
os.path.basename(self.mesh_fn)))
else:
reg_lines.append('{0:<18}{1}\n'.format('Mesh File:',self.mesh_fn))
reg_lines.append('{0:<18}{1}\n'.format('Mesh Type:',
'pw2d'.upper()))
if os.path.dirname(self.statics_fn) == self.save_path:
reg_lines.append('{0:<18}{1}\n'.format('Statics File:',
os.path.basename(self.statics_fn)))
else:
reg_lines.append('{0:<18}{1}\n'.format('Statics File:',
self.statics_fn))
if os.path.dirname(self.prejudice_fn) == self.save_path:
reg_lines.append('{0:<18}{1}\n'.format('Prejudice File:',
os.path.basename(self.prejudice_fn)))
else:
reg_lines.append('{0:<18}{1}\n'.format('Prejudice File:',
self.prejudice_fn))
reg_lines.append('{0:<20}{1: .1f}\n'.format('Binding Offset:',
self.binding_offset))
reg_lines.append('{0:<20}{1}\n'.format('Num Layers:',
len(self.model_columns)))
#--> write rows and columns of regularization grid
for row, col in zip(self.model_rows, self.model_columns):
reg_lines.append(''.join([' {0:>5}'.format(rr) for rr in row])+'\n')
reg_lines.append(''.join(['{0:>5}'.format(cc) for cc in col])+'\n')
reg_lines.append('{0:<18}{1}\n'.format('NO. EXCEPTIONS:', '0'))
rfid = open(self.reg_fn, 'w')
rfid.writelines(reg_lines)
rfid.close()
print 'Wrote Regularization file to {0}'.format(self.reg_fn)
def read_regularization_file(self, reg_fn):
"""
Read in a regularization file and populate attributes:
* binding_offset
* mesh_fn
* model_columns
* model_rows
* prejudice_fn
* statics_fn
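Below is a minimal usage sketch; the file path is a placeholder.
:Example: ::
>>> import mtpy.modeling.occam2d as occam2d
>>> reg = occam2d.Regularization()
>>> reg.read_regularization_file(r"/home/occam2d/Line1/Inv1/Occam2DModel")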
"""
self.reg_fn = reg_fn
self.save_path = os.path.dirname(reg_fn)
rfid = open(self.reg_fn, 'r')
self.model_rows = []
self.model_columns = []
ncols = []
rlines = rfid.readlines()
for ii, iline in enumerate(rlines):
#read header information
if iline.find(':') > 0:
iline = iline.strip().split(':')
key = iline[0].strip().lower()
key = key.replace(' ', '_').replace('file', 'fn')
value = iline[1].strip()
try:
setattr(self, key, float(value))
except ValueError:
setattr(self, key, value)
#append the last line
if key.find('exception') > 0:
self.model_columns.append(ncols)
#get mesh values
else:
iline = iline.strip().split()
iline = [int(jj) for jj in iline]
if len(iline) == 2:
if len(ncols) > 0:
self.model_columns.append(ncols)
self.model_rows.append(iline)
ncols = []
elif len(iline) > 2:
ncols = ncols+iline
#set mesh file name
if not os.path.isfile(self.mesh_fn):
self.mesh_fn = os.path.join(self.save_path, self.mesh_fn)
#set statics file name
if not os.path.isfile(self.statics_fn):
self.statics_fn = os.path.join(self.save_path, self.statics_fn)
#set prejudice file name
if not os.path.isfile(self.prejudice_fn):
self.prejudice_fn = os.path.join(self.save_path, self.prejudice_fn)
class Startup(object):
"""
Reads and writes the startup file for Occam2D.
.. note:: Be sure to look at the Occam 2D documentation for description
of all parameters
========================= =================================================
Key Words/Attributes Description
========================= =================================================
data_fn full path to data file
date_time date and time the startup file was written
debug_level [ 0 | 1 | 2 ] see occam documentation
*default* is 1
description brief description of inversion run
*default* is 'startup created by mtpy'
diagonal_penalties penalties on diagonal terms
*default* is 0
format Occam file format
*default* is 'OCCAMITER_FLEX'
iteration current iteration number
*default* is 0
iterations_to_run maximum number of iterations to run
*default* is 20
lagrange_value starting lagrange value
*default* is 5
misfit_reached [ 0 | 1 ] 0 if misfit has been reached, 1 if it
has. *default* is 0
misfit_value current misfit value. *default* is 1000
model_fn full path to model file
model_limits limits on model resistivity values
*default* is None
model_value_steps limits on the step size of model values
*default* is None
model_values np.ndarray(num_free_params) of model values
param_count number of free parameters in model
resistivity_start starting resistivity value. If model_values is
                              not given, then all values within the model_values
array will be set to resistivity_start
roughness_type [ 0 | 1 | 2 ] type of roughness
*default* is 1
roughness_value current roughness value.
*default* is 1E10
save_path directory path to save startup file to
*default* is current working directory
startup_basename basename of startup file name.
*default* is Occam2DStartup
startup_fn full path to startup file.
*default* is save_path/startup_basename
stepsize_count max number of iterations per step
*default* is 8
target_misfit target misfit value.
*default* is 1.
========================= =================================================
:Example: ::
>>> startup = occam2d.Startup()
>>> startup.data_fn = ocd.data_fn
>>> startup.model_fn = profile.reg_fn
>>> startup.param_count = profile.num_free_params
>>> startup.save_path = r"/home/occam2d/Line1/Inv1"
"""
def __init__(self, **kwargs):
self.save_path = kwargs.pop('save_path', None)
self.startup_basename = kwargs.pop('startup_basename', 'Occam2DStartup')
self.startup_fn = kwargs.pop('startup_fn', None)
self.model_fn = kwargs.pop('model_fn', None)
self.data_fn = kwargs.pop('data_fn', None)
self.format = kwargs.pop('format', 'OCCAMITER_FLEX')
self.date_time = kwargs.pop('date_time', time.ctime())
self.description = kwargs.pop('description', 'startup created by mtpy')
self.iterations_to_run = kwargs.pop('iterations_to_run', 20)
self.roughness_type = kwargs.pop('roughness_type', 1)
self.target_misfit = kwargs.pop('target_misfit', 1.0)
self.diagonal_penalties = kwargs.pop('diagonal_penalties', 0)
self.stepsize_count = kwargs.pop('stepsize_count', 8)
self.model_limits = kwargs.pop('model_limits', None)
self.model_value_steps = kwargs.pop('model_value_steps', None)
self.debug_level = kwargs.pop('debug_level', 1)
self.iteration = kwargs.pop('iteration', 0)
self.lagrange_value = kwargs.pop('lagrange_value', 5.0)
self.roughness_value = kwargs.pop('roughness_value', 1e10)
self.misfit_value = kwargs.pop('misfit_value', 1000)
self.misfit_reached = kwargs.pop('misfit_reached', 0)
self.param_count = kwargs.pop('param_count', None)
self.resistivity_start = kwargs.pop('resistivity_start', 2)
self.model_values = kwargs.pop('model_values', None)
def write_startup_file(self, startup_fn=None, save_path=None,
startup_basename=None):
"""
Write a startup file based on the parameters of startup class.
Default file name is save_path/startup_basename
Arguments:
-----------
**startup_fn** : string
full path to startup file. *default* is None
**save_path** : string
directory to save startup file. *default* is None
**startup_basename** : string
                             basename of startup file. *default* is None
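        :Example: (a sketch; the path and parameter count are hypothetical) ::
            >>> startup.param_count = 884
            >>> startup.write_startup_file(save_path=r"/home/occam2d/Line1/Inv1")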
"""
if save_path is not None:
self.save_path = save_path
if self.save_path is None:
self.save_path = os.path.dirname(self.data_fn)
if startup_basename is not None:
self.startup_basename = startup_basename
        if startup_fn is None:
            self.startup_fn = os.path.join(self.save_path,
                                           self.startup_basename)
        else:
            self.startup_fn = startup_fn
#--> check to make sure all the important input are given
if self.data_fn is None:
raise OccamInputError('Need to input data file name')
if self.model_fn is None:
raise OccamInputError('Need to input model/regularization file name')
if self.param_count is None:
raise OccamInputError('Need to input number of model parameters')
slines = []
slines.append('{0:<20}{1}\n'.format('Format:',self.format))
slines.append('{0:<20}{1}\n'.format('Description:', self.description))
if os.path.dirname(self.model_fn) == self.save_path:
slines.append('{0:<20}{1}\n'.format('Model File:',
os.path.basename(self.model_fn)))
else:
slines.append('{0:<20}{1}\n'.format('Model File:', self.model_fn))
if os.path.dirname(self.data_fn) == self.save_path:
slines.append('{0:<20}{1}\n'.format('Data File:',
os.path.basename(self.data_fn)))
else:
slines.append('{0:<20}{1}\n'.format('Data File:', self.data_fn))
slines.append('{0:<20}{1}\n'.format('Date/Time:', self.date_time))
slines.append('{0:<20}{1}\n'.format('Iterations to run:',
self.iterations_to_run))
slines.append('{0:<20}{1}\n'.format('Target Misfit:',
self.target_misfit))
slines.append('{0:<20}{1}\n'.format('Roughness Type:',
self.roughness_type))
slines.append('{0:<20}{1}\n'.format('Diagonal Penalties:',
self.diagonal_penalties))
slines.append('{0:<20}{1}\n'.format('Stepsize Cut Count:',
self.stepsize_count))
if self.model_limits is None:
slines.append('{0:<20}{1}\n'.format('!Model Limits:', 'none'))
else:
slines.append('{0:<20}{1},{2}\n'.format('Model Limits:',
self.model_limits[0],
self.model_limits[1]))
if self.model_value_steps is None:
slines.append('{0:<20}{1}\n'.format('!Model Value Steps:', 'none'))
else:
slines.append('{0:<20}{1}\n'.format('Model Value Steps:',
self.model_value_steps))
slines.append('{0:<20}{1}\n'.format('Debug Level:', self.debug_level))
slines.append('{0:<20}{1}\n'.format('Iteration:', self.iteration))
slines.append('{0:<20}{1}\n'.format('Lagrange Value:',
self.lagrange_value))
slines.append('{0:<20}{1}\n'.format('Roughness Value:',
self.roughness_value))
slines.append('{0:<20}{1}\n'.format('Misfit Value:', self.misfit_value))
slines.append('{0:<20}{1}\n'.format('Misfit Reached:',
self.misfit_reached))
slines.append('{0:<20}{1}\n'.format('Param Count:', self.param_count))
#make an array of starting values if not are given
if self.model_values is None:
self.model_values = np.zeros(self.param_count)
self.model_values[:] = self.resistivity_start
if self.model_values.shape[0] != self.param_count:
            raise OccamInputError('length of model values array is not equal '
'to param count {0} != {1}'.format(
self.model_values.shape[0], self.param_count))
#write out starting resistivity values
sline = []
for ii, mv in enumerate(self.model_values):
sline.append('{0:^10.4f}'.format(mv))
if np.remainder(ii+1, 4) == 0:
sline.append('\n')
slines.append(''.join(list(sline)))
sline = []
slines.append(''.join(list(sline+['\n'])))
#--> write file
sfid = file(self.startup_fn, 'w')
sfid.writelines(slines)
sfid.close()
print 'Wrote Occam2D startup file to {0}'.format(self.startup_fn)
#------------------------------------------------------------------------------
class Data(Profile):
"""
Reads and writes data files and more.
    Inherits Profile, so the intended use is to use Data to project stations
onto a profile, then write the data file.
===================== =====================================================
Model Modes Description
===================== =====================================================
1 or log_all Log resistivity of TE and TM plus Tipper
2 or log_te_tip Log resistivity of TE plus Tipper
3 or log_tm_tip Log resistivity of TM plus Tipper
4 or log_te_tm Log resistivity of TE and TM
5 or log_te Log resistivity of TE
6 or log_tm Log resistivity of TM
7 or all TE, TM and Tipper
8 or te_tip TE plus Tipper
9 or tm_tip TM plus Tipper
10 or te_tm TE and TM mode
11 or te TE mode
12 or tm TM mode
13 or tip Only Tipper
===================== =====================================================
    **data** : is a list of dictionaries containing the data for each station.
keys include:
* 'station' -- name of station
* 'offset' -- profile line offset
                * 'te_res' -- TE resistivity in linear scale
* 'tm_res' -- TM resistivity in linear scale
* 'te_phase' -- TE phase in degrees
* 'tm_phase' -- TM phase in degrees in first quadrant
* 're_tip' -- real part of tipper along profile
* 'im_tip' -- imaginary part of tipper along profile
each key is a np.ndarray(2, num_freq)
index 0 is for data
index 1 is for error
===================== =====================================================
    Key Words/Attributes  Description
===================== =====================================================
_data_header header line in data file
_data_string full data string
_profile_generated [ True | False ] True if profile has already been
generated.
_rotate_to_strike [ True | False ] True to rotate data to strike
angle. *default* is True
data list of dictionaries of data for each station.
see above
data_fn full path to data file
data_list list of lines to write to data file
edi_list list of mtpy.core.mt instances for each .edi file
read
edi_path directory path where .edi files are
edi_type [ 'z' | 'spectra' ] for .edi format
elevation_model model elevation np.ndarray(east, north, elevation)
in meters
elevation_profile elevation along profile np.ndarray (x, elev) (m)
fn_basename data file basename *default* is OccamDataFile.dat
freq list of frequencies to use for the inversion
freq_max max frequency to use in inversion. *default* is None
freq_min min frequency to use in inversion. *default* is None
freq_num number of frequencies to use in inversion
geoelectric_strike geoelectric strike angle assuming N = 0, E = 90
masked_data similar to data, but any masked points are now 0
mode_dict dictionary of model modes to chose from
model_mode model mode to use for inversion, see above
num_edi number of stations to invert for
occam_dict dictionary of occam parameters to use internally
occam_format occam format of data file.
*default* is OCCAM2MTDATA_1.0
phase_te_err percent error in phase for TE mode. *default* is 5
phase_tm_err percent error in phase for TM mode. *default* is 5
    profile_angle         angle of profile line relative to N = 0, E = 90
profile_line m, b coefficients for mx+b definition of profile line
res_te_err percent error in resistivity for TE mode.
*default* is 10
res_tm_err percent error in resistivity for TM mode.
*default* is 10
save_path directory to save files to
station_list list of station for inversion
station_locations station locations along profile line
    tipper_err            percent error in tipper. *default* is 10
title title in data file.
===================== =====================================================
=========================== ===============================================
Methods Description
=========================== ===============================================
_fill_data fills the data array that is described above
_get_data_list gets the lines to write to data file
_get_frequencies gets frequency list to invert for
get_profile_origin get profile origin in UTM coordinates
mask_points masks points in data picked from
plot_mask_points
plot_mask_points plots data responses to interactively mask
data points.
    plot_response               plots data/model responses, returns
PlotResponse data type.
read_data_file read in existing data file and fill appropriate
attributes.
write_data_file write a data file according to Data attributes
=========================== ===============================================
:Example Write Data File: ::
>>> import mtpy.modeling.occam2d as occam2d
>>> edipath = r"/home/mt/edi_files"
>>> slst = ['mt{0:03}'.format(ss) for ss in range(1, 20)]
>>> ocd = occam2d.Data(edi_path=edipath, station_list=slst)
>>> # model just the tm mode and tipper
>>> ocd.model_mode = 3
>>> ocd.save_path = r"/home/occam/Line1/Inv1"
>>> ocd.write_data_file()
>>> # mask points
>>> ocd.plot_mask_points()
>>> ocd.mask_points()
"""
def __init__(self, edi_path=None, **kwargs):
Profile.__init__(self, edi_path, **kwargs)
self.data_fn = kwargs.pop('data_fn', None)
self.fn_basename = kwargs.pop('fn_basename', 'OccamDataFile.dat')
self.save_path = kwargs.pop('save_path', None)
self.freq = kwargs.pop('freq', None)
self.model_mode = kwargs.pop('model_mode', '1')
self.data = kwargs.pop('data', None)
self.data_list = None
self.res_te_err = kwargs.pop('res_te_err', 10)
self.res_tm_err = kwargs.pop('res_tm_err', 10)
self.phase_te_err = kwargs.pop('phase_te_err', 5)
self.phase_tm_err = kwargs.pop('phase_tm_err', 5)
self.tipper_err = kwargs.pop('tipper_err', 10)
self.freq_min = kwargs.pop('freq_min', None)
self.freq_max = kwargs.pop('freq_max', None)
self.freq_num = kwargs.pop('freq_num', None)
self.occam_format = 'OCCAM2MTDATA_1.0'
self.title = 'MTpy-OccamDatafile'
self.edi_type = 'z'
self.masked_data = None
self.occam_dict = {'1':'log_te_res',
'2':'te_phase',
'3':'re_tip',
'4':'im_tip',
'5':'log_tm_res',
'6':'tm_phase',
'9':'te_res',
'10':'tm_res'}
self.mode_dict = {'log_all':[1, 2, 3, 4, 5, 6],
'log_te_tip':[1, 2, 3, 4],
'log_tm_tip':[5, 6, 3, 4],
'log_te_tm':[1, 2, 5, 6],
'log_te':[1, 2],
'log_tm':[5, 6],
'all':[9, 2, 3, 4, 10, 6],
'te_tip':[9, 2, 3, 4],
'tm_tip':[10, 6, 3, 4],
'te_tm':[9, 2, 10, 6],
'te':[9, 2],
'tm':[10, 6],
'tip':[3, 4],
'1':[1, 2, 3, 4, 5, 6],
'2':[1, 2, 3, 4],
'3':[5, 6, 3, 4],
'4':[1, 2, 5, 6],
'5':[1, 2],
'6':[5, 6],
'7':[9, 2, 3, 4, 10, 6],
'8':[9, 2, 3, 4],
'9':[10, 6, 3, 4],
'10':[9, 2, 10, 6],
'11':[9, 2],
'12':[10, 6],
'13':[3, 4]}
self._data_string = '{0:^6}{1:^6}{2:^6} {3: >8} {4: >8}\n'
self._data_header = '{0:<6}{1:<6}{2:<6} {3:<8} {4:<8}\n'.format(
'SITE', 'FREQ', 'TYPE', 'DATUM', 'ERROR')
def read_data_file(self, data_fn=None):
"""
Read in an existing data file and populate appropriate attributes
* data
* data_list
* freq
* station_list
* station_locations
Arguments:
-----------
**data_fn** : string
full path to data file
*default* is None and set to save_path/fn_basename
:Example: ::
>>> import mtpy.modeling.occam2d as occam2d
>>> ocd = occam2d.Data()
>>> ocd.read_data_file(r"/home/Occam2D/Line1/Inv1/Data.dat")
"""
if data_fn is not None:
self.data_fn = data_fn
        if self.data_fn is None:
            raise OccamInputError('data_fn is None, input filename')
        if os.path.isfile(self.data_fn) == False:
            raise OccamInputError('Could not find {0}'.format(self.data_fn))
self.save_path = op.dirname(self.data_fn)
print 'Reading from {0}'.format(self.data_fn)
dfid = open(self.data_fn,'r')
dlines = dfid.readlines()
#get format of input data
self.occam_format = dlines[0].strip().split(':')[1].strip()
#get title
title_str = dlines[1].strip().split(':')[1].strip()
title_list = title_str.split(',')
self.title = title_list[0]
#get strike angle and profile angle
if len(title_list) > 1:
for t_str in title_list[1:]:
t_list = t_str.split('=')
if len(t_list) > 1:
key = t_list[0].strip().lower().replace(' ', '_')
if key == 'profile':
key = 'profile_angle'
elif key == 'strike':
key = 'geoelectric_strike'
value = t_list[1].split('deg')[0].strip()
print ' {0} = {1}'.format(key, value)
try:
setattr(self, key, float(value))
except ValueError:
setattr(self, key, value)
#get number of sites
nsites = int(dlines[2].strip().split(':')[1].strip())
print ' {0} = {1}'.format('number of sites', nsites)
#get station names
self.station_list = np.array([dlines[ii].strip()
for ii in range(3, nsites+3)])
#get offsets in meters
self.station_locations = np.array([float(dlines[ii].strip())
for ii in range(4+nsites, 4+2*nsites)])
#get number of frequencies
nfreq = int(dlines[4+2*nsites].strip().split(':')[1].strip())
print ' {0} = {1}'.format('number of frequencies', nfreq)
#get frequencies
self.freq = np.array([float(dlines[ii].strip())
for ii in range(5+2*nsites,5+2*nsites+nfreq)])
#get periods
self.period = 1./self.freq
#-----------get data-------------------
#set zero array size the first row will be the data and second the error
asize = (2, self.freq.shape[0])
#make a list of dictionaries for each station.
self.data = [{'station':station,
'offset':offset,
'te_phase':np.zeros(asize),
'tm_phase':np.zeros(asize),
're_tip':np.zeros(asize),
'im_tip':np.zeros(asize),
'te_res':np.zeros(asize),
'tm_res':np.zeros(asize)}
for station, offset in zip(self.station_list,
self.station_locations)]
self.data_list = dlines[7+2*nsites+nfreq:]
for line in self.data_list:
try:
station, freq, comp, odata, oerr = line.split()
#station index -1 cause python starts at 0
ss = int(station)-1
#frequency index -1 cause python starts at 0
ff = int(freq)-1
#data key
key = self.occam_dict[comp]
#put into array
if int(comp) == 1 or int(comp) == 5:
self.data[ss][key[4:]][0, ff] = 10**float(odata)
#error
self.data[ss][key[4:]][1, ff] = float(oerr)*np.log(10)
else:
self.data[ss][key][0, ff] = float(odata)
#error
self.data[ss][key][1, ff] = float(oerr)
except ValueError:
print 'Could not read line {0}'.format(line)
def _get_frequencies(self):
"""
from the list of edi's get a frequency list to invert for.
Uses Attributes:
------------
**freq_min** : float (Hz)
minimum frequency to invert for.
*default* is None and will use the data to find min
**freq_max** : float (Hz)
maximum frequency to invert for
*default* is None and will use the data to find max
**freq_num** : int
number of frequencies to invert for
*default* is None and will use the data to find num
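        :Example: (a sketch of limiting the inversion band; the values are
        arbitrary and the .edi files are assumed to be read in already) ::
            >>> ocd.freq_min = 0.01
            >>> ocd.freq_max = 1000.
            >>> ocd.freq_num = 20
            >>> ocd._get_frequencies()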
"""
#get all frequencies from all edi files
lo_all_freqs = []
for edi in self.edi_list:
lo_all_freqs.extend(list(edi.Z.freq))
#sort all frequencies so that they are in descending order,
#use set to remove repeats and make an array
all_freqs = np.array(sorted(list(set(lo_all_freqs)), reverse=True))
#--> get min and max values if none are given
if (self.freq_min is None) or (self.freq_min < all_freqs.min()) or\
(self.freq_min > all_freqs.max()):
self.freq_min = all_freqs.min()
        if (self.freq_max is None) or (self.freq_max > all_freqs.max()) or\
           (self.freq_max < all_freqs.min()):
self.freq_max = all_freqs.max()
#--> get all frequencies within the given range
self.freq = all_freqs[np.where((all_freqs >= self.freq_min) &
(all_freqs <= self.freq_max))]
if len(self.freq) == 0:
raise OccamInputError('No frequencies in user-defined interval '
'[{0}, {1}]'.format(self.freq_min, self.freq_max))
#check, if frequency list is longer than given max value
if self.freq_num is not None:
if int(self.freq_num) < self.freq.shape[0]:
print ('Number of frequencies exceeds freq_num '
'{0} > {1} '.format(self.freq.shape[0], self.freq_num)+
'Trimming frequencies to {0}'.format(self.freq_num))
excess = self.freq.shape[0]/float(self.freq_num)
if excess < 2:
offset = 0
else:
stepsize = (self.freq.shape[0]-1)/self.freq_num
offset = stepsize/2.
indices = np.array(np.around(np.linspace(offset,
self.freq.shape[0]-1-offset,
self.freq_num),0), dtype='int')
if indices[0] > (self.freq.shape[0]-1-indices[-1]):
indices -= 1
self.freq = self.freq[indices]
def _fill_data(self):
"""
Read all Edi files.
Create a profile
rotate impedance and tipper
Extract frequencies.
Collect all information sorted according to occam specifications.
Data of Z given in muV/m/nT = km/s
Error is assumed to be 1 stddev.
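        Normally called internally by write_data_file when the data attribute
        is still None, e.g.::
            >>> ocd.data = None
            >>> ocd.write_data_file()  # _fill_data is called internally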
"""
#create a profile line, this sorts the stations by offset and rotates
#data.
self.generate_profile()
self.plot_profile()
#--> get frequencies to invert for
self._get_frequencies()
#set zero array size the first row will be the data and second the error
asize = (2, self.freq.shape[0])
#make a list of dictionaries for each station.
self.data=[{'station':station,
'offset':offset,
'te_phase':np.zeros(asize),
'tm_phase':np.zeros(asize),
're_tip':np.zeros(asize),
'im_tip':np.zeros(asize),
'te_res':np.zeros(asize),
'tm_res':np.zeros(asize)}
for station, offset in zip(self.station_list,
self.station_locations)]
#loop over mt object in edi_list and use a counter starting at 1
#because that is what occam starts at.
for s_index, edi in enumerate(self.edi_list):
rho = edi.Z.resistivity
phi = edi.Z.phase
rho_err = edi.Z.resistivity_err
station_freqs = edi.Z.freq
tipper = edi.Tipper.tipper
tipper_err = edi.Tipper.tippererr
self.data[s_index]['station'] = edi.station
self.data[s_index]['offset'] = edi.offset
for freq_num, frequency in enumerate(self.freq):
#skip, if the listed frequency is not available for the station
if not (frequency in station_freqs):
continue
#find the respective frequency index for the station
f_index = np.abs(station_freqs-frequency).argmin()
#--> get te resistivity
self.data[s_index]['te_res'][0, freq_num] = rho[f_index, 0, 1]
#compute error
if rho[f_index, 0, 1] != 0.0:
#--> get error from data
if self.res_te_err is None:
self.data[s_index]['te_res'][1, freq_num] = \
np.abs(rho_err[f_index, 0, 1]/rho[f_index, 0, 1])
#--> set generic error floor
else:
self.data[s_index]['te_res'][1, freq_num] = \
self.res_te_err/100.
#--> get tm resistivity
self.data[s_index]['tm_res'][0, freq_num] = rho[f_index, 1, 0]
#compute error
if rho[f_index, 1, 0] != 0.0:
#--> get error from data
if self.res_tm_err is None:
self.data[s_index]['tm_res'][1, freq_num] = \
np.abs(rho_err[f_index, 1, 0]/rho[f_index, 1, 0])
#--> set generic error floor
else:
self.data[s_index]['tm_res'][1, freq_num] = \
self.res_tm_err/100.
#--> get te phase
phase_te = phi[f_index, 0, 1]
#be sure the phase is in the first quadrant
if phase_te > 180:
phase_te -= 180
self.data[s_index]['te_phase'][0, freq_num] = phase_te
#compute error
#if phi[f_index, 0, 1] != 0.0:
#--> get error from data
if self.phase_te_err is None:
self.data[s_index]['te_phase'][1, freq_num] = \
np.degrees(np.arcsin(.5*
                        self.data[s_index]['te_res'][1, freq_num]))
#--> set generic error floor
else:
self.data[s_index]['te_phase'][1, freq_num] = \
(self.phase_te_err/100.)*57./2.
#--> get tm phase and be sure its in the first quadrant
phase_tm = phi[f_index, 1, 0]%180
self.data[s_index]['tm_phase'][0, freq_num] = phase_tm
#compute error
#if phi[f_index, 1, 0] != 0.0:
#--> get error from data
if self.phase_tm_err is None:
self.data[s_index]['tm_phase'][1, freq_num] = \
np.degrees(np.arcsin(.5*
                        self.data[s_index]['tm_res'][1, freq_num]))
#--> set generic error floor
else:
self.data[s_index]['tm_phase'][1, freq_num] = \
(self.phase_tm_err/100.)*57./2.
#--> get Tipper
if tipper is not None:
self.data[s_index]['re_tip'][0, freq_num] = \
tipper[f_index, 0, 1].real
self.data[s_index]['im_tip'][0, freq_num] = \
tipper[f_index, 0, 1].imag
#get error
if self.tipper_err is not None:
self.data[s_index]['re_tip'][1, freq_num] = \
self.tipper_err/100.
self.data[s_index]['im_tip'][1, freq_num] = \
self.tipper_err/100.
else:
self.data[s_index]['re_tip'][1, freq_num] = \
tipper[f_index, 0, 1].real/tipper_err[f_index, 0, 1]
self.data[s_index]['im_tip'][1, freq_num] = \
tipper[f_index, 0, 1].imag/tipper_err[f_index, 0, 1]
def _get_data_list(self):
"""
Get all the data needed to write a data file.
"""
self.data_list = []
for ss, sdict in enumerate(self.data, 1):
for ff in range(self.freq.shape[0]):
for mmode in self.mode_dict[self.model_mode]:
#log(te_res)
if mmode == 1:
if sdict['te_res'][0, ff] != 0.0:
dvalue = np.log10(sdict['te_res'][0, ff])
derror = sdict['te_res'][1, ff]/np.log(10)
dstr = '{0:.4f}'.format(dvalue)
derrstr = '{0:.4f}'.format(derror)
line = self._data_string.format(ss, ff+1, mmode,
dstr, derrstr)
self.data_list.append(line)
#te_res
if mmode == 9:
if sdict['te_res'][0, ff] != 0.0:
dvalue = sdict['te_res'][0, ff]
derror = sdict['te_res'][1, ff]
dstr = '{0:.4f}'.format(dvalue)
derrstr = '{0:.4f}'.format(derror)
line = self._data_string.format(ss, ff+1, mmode,
dstr, derrstr)
self.data_list.append(line)
#te_phase
if mmode == 2:
if sdict['te_phase'][0, ff] != 0.0:
dvalue = sdict['te_phase'][0, ff]
derror = sdict['te_phase'][1, ff]
dstr = '{0:.4f}'.format(dvalue)
derrstr = '{0:.4f}'.format(derror)
line = self._data_string.format(ss, ff+1, mmode,
dstr, derrstr)
self.data_list.append(line)
#log(tm_res)
if mmode == 5:
if sdict['tm_res'][0, ff] != 0.0:
dvalue = np.log10(sdict['tm_res'][0, ff])
derror = sdict['tm_res'][1, ff]/np.log(10)
dstr = '{0:.4f}'.format(dvalue)
derrstr = '{0:.4f}'.format(derror)
line = self._data_string.format(ss, ff+1, mmode,
dstr, derrstr)
self.data_list.append(line)
#tm_res
if mmode == 10:
if sdict['tm_res'][0, ff] != 0.0:
dvalue = sdict['tm_res'][0, ff]
derror = sdict['tm_res'][1, ff]
dstr = '{0:.4f}'.format(dvalue)
derrstr = '{0:.4f}'.format(derror)
line = self._data_string.format(ss, ff+1, mmode,
dstr, derrstr)
self.data_list.append(line)
#tm_phase
if mmode == 6:
if sdict['tm_phase'][0, ff] != 0.0:
dvalue = sdict['tm_phase'][0, ff]
derror = sdict['tm_phase'][1, ff]
dstr = '{0:.4f}'.format(dvalue)
derrstr = '{0:.4f}'.format(derror)
line = self._data_string.format(ss, ff+1, mmode,
dstr, derrstr)
self.data_list.append(line)
#Re_tip
if mmode == 3:
if sdict['re_tip'][0, ff] != 0.0:
dvalue = sdict['re_tip'][0, ff]
derror = sdict['re_tip'][1, ff]
dstr = '{0:.4f}'.format(dvalue)
derrstr = '{0:.4f}'.format(derror)
line = self._data_string.format(ss, ff+1, mmode,
dstr, derrstr)
self.data_list.append(line)
#Im_tip
if mmode == 4:
if sdict['im_tip'][0, ff] != 0.0:
dvalue = sdict['im_tip'][0, ff]
derror = sdict['im_tip'][1, ff]
dstr = '{0:.4f}'.format(dvalue)
derrstr = '{0:.4f}'.format(derror)
line = self._data_string.format(ss, ff+1, mmode,
dstr, derrstr)
self.data_list.append(line)
def write_data_file(self, data_fn=None):
"""
Write a data file.
Arguments:
-----------
**data_fn** : string
full path to data file.
*default* is save_path/fn_basename
        If the data attribute is None, then _fill_data is called to create a profile,
rotate data and get all the necessary data. This way you can use
write_data_file directly without going through the steps of projecting
the stations, etc.
:Example: ::
>>> edipath = r"/home/mt/edi_files"
>>> slst = ['mt{0:03}'.format(ss) for ss in range(1, 20)]
>>> ocd = occam2d.Data(edi_path=edipath, station_list=slst)
>>> ocd.save_path = r"/home/occam/line1/inv1"
>>> ocd.write_data_file()
"""
if self.data is None:
self._fill_data()
#get the appropriate data to write to file
self._get_data_list()
if data_fn is not None:
self.data_fn = data_fn
else:
if self.save_path is None:
self.save_path = os.getcwd()
if not os.path.exists(self.save_path):
os.mkdir(self.save_path)
self.data_fn = os.path.join(self.save_path, self.fn_basename)
data_lines = []
#--> header line
data_lines.append('{0:<18}{1}\n'.format('FORMAT:', self.occam_format))
#--> title line
if self.profile_angle is None:
self.profile_angle = 0
if self.geoelectric_strike is None:
self.geoelectric_strike = 0.0
t_str = '{0}, Profile={1:.1f} deg, Strike={2:.1f} deg'.format(
self.title, self.profile_angle, self.geoelectric_strike)
data_lines.append('{0:<18}{1}\n'.format('TITLE:', t_str))
#--> sites
data_lines.append('{0:<18}{1}\n'.format('SITES:', len(self.data)))
for sdict in self.data:
data_lines.append(' {0}\n'.format(sdict['station']))
#--> offsets
data_lines.append('{0:<18}\n'.format('OFFSETS (M):'))
for sdict in self.data:
data_lines.append(' {0:.1f}\n'.format(sdict['offset']))
#--> frequencies
data_lines.append('{0:<18}{1}\n'.format('FREQUENCIES:',
self.freq.shape[0]))
for ff in self.freq:
data_lines.append(' {0:.6f}\n'.format(ff))
#--> data
data_lines.append('{0:<18}{1}\n'.format('DATA BLOCKS:',
len(self.data_list)))
data_lines.append(self._data_header)
data_lines += self.data_list
dfid = file(self.data_fn, 'w')
dfid.writelines(data_lines)
dfid.close()
print 'Wrote Occam2D data file to {0}'.format(self.data_fn)
def get_profile_origin(self):
"""
get the origin of the profile in real world coordinates
Author: Alison Kirkby (2013)
NEED TO ADAPT THIS TO THE CURRENT SETUP.
"""
x,y = self.easts,self.norths
x1,y1 = x[0],y[0]
[m,c1] = self.profile
x0 = (y1+(1.0/m)*x1-c1)/(m+(1.0/m))
y0 = m*x0+c1
self.profile_origin = [x0,y0]
def plot_response(self, **kwargs):
"""
plot data and model responses as apparent resistivity, phase and
tipper. See PlotResponse for key words.
Returns:
---------
**pr_obj** : PlotResponse object
:Example: ::
>>> pr_obj = ocd.plot_response()
"""
pr_obj = PlotResponse(self.data_fn, **kwargs)
return pr_obj
    def plot_mask_points(self, data_fn=None, marker='h', res_err_inc=.25,
                         phase_err_inc=.05, **kwargs):
"""
        An interactive plotting tool to mask points and add error bars
Arguments:
----------
**res_err_inc** : float
amount to increase the error bars. Input as a
decimal percentage. 0.3 for 30 percent
                            *Default* is 0.25 (25 percent)
**phase_err_inc** : float
amount to increase the error bars. Input as a
decimal percentage. 0.3 for 30 percent
*Default* is 0.05 (5 percent)
**marker** : string
marker that the masked points will be
*Default* is 'h' for hexagon
:Example: ::
>>> import mtpy.modeling.occam2d as occam2d
>>> ocd = occam2d.Data()
>>> ocd.data_fn = r"/home/Occam2D/Line1/Inv1/Data.dat"
>>> ocd.plot_mask_points()
"""
if data_fn is not None:
self.data_fn = data_fn
pr_obj = self.plot_response(**kwargs)
#make points an attribute of self which is a data type OccamPointPicker
self.masked_data = OccamPointPicker(pr_obj.ax_list,
pr_obj.line_list,
pr_obj.err_list,
phase_err_inc=phase_err_inc,
res_err_inc=res_err_inc)
plt.show()
def mask_points(self, maskpoints_obj):
"""
mask points and rewrite the data file
NEED TO REDO THIS TO FIT THE CURRENT SETUP
"""
mp_obj = maskpoints_obj
m_data = list(self.data)
#rewrite the data file
#make a reverse dictionary for locating the masked points in the data
#file
rploc = dict([('{0}'.format(mp_obj.fndict[key]),int(key)-1)
for key in mp_obj.fndict.keys()])
#make a period dictionary to locate points changed
frpdict = dict([('{0:.5g}'.format(fr),ff)
for ff,fr in enumerate(1./self.freq)])
#loop over the data list
for dd, dat in enumerate(mp_obj.data):
            derror = mp_obj.error[dd]
            #loop over the 4 main entries
for ss, skey in enumerate(['resxy', 'resyx', 'phasexy','phaseyx']):
#rewrite any coinciding points
for frpkey in frpdict.keys():
try:
ff = frpdict[frpkey]
                        floc = mp_obj.fdict[dd][ss][frpkey]
#CHANGE APPARENT RESISTIVITY
if ss == 0 or ss == 1:
#change the apparent resistivity value
if m_data[rploc[str(dd)]][skey][0][ff] != \
np.log10(dat[ss][floc]):
if dat[ss][floc] == 0:
m_data[rploc[str(dd)]][skey][0][ff] = 0.0
else:
m_data[rploc[str(dd)]][skey][0][ff] = \
np.log10(dat[ss][floc])
#change the apparent resistivity error value
if dat[ss][floc] == 0.0:
rerr = 0.0
else:
rerr = derror[ss][floc]/dat[ss][floc]/np.log(10)
if m_data[rploc[str(dd)]][skey][1][ff] != rerr:
m_data[rploc[str(dd)]][skey][1][ff] = rerr
                        #CHANGE PHASE
elif ss == 2 or ss == 3:
#change the phase value
if m_data[rploc[str(dd)]][skey][0][ff] != \
dat[ss][floc]:
if dat[ss][floc] == 0:
m_data[rploc[str(dd)]][skey][0][ff] = 0.0
else:
m_data[rploc[str(dd)]][skey][0][ff] = \
dat[ss][floc]
                            #change the phase error value
if dat[ss][floc] == 0.0:
rerr = 0.0
else:
rerr = derror[ss][floc]
if m_data[rploc[str(dd)]][skey][1][ff] != rerr:
m_data[rploc[str(dd)]][skey][1][ff] = rerr
except KeyError:
pass
class Response(object):
"""
Reads .resp file output by Occam. Similar structure to Data.data.
If resp_fn is given in the initialization of Response, read_response_file
is called.
Arguments:
------------
**resp_fn** : string
full path to .resp file
Attributes:
-------------
    **resp** : is a list of dictionaries containing the data for each
station. keys include:
                * 'te_res' -- TE resistivity in linear scale
* 'tm_res' -- TM resistivity in linear scale
* 'te_phase' -- TE phase in degrees
* 'tm_phase' -- TM phase in degrees in first quadrant
* 're_tip' -- real part of tipper along profile
* 'im_tip' -- imaginary part of tipper along profile
each key is a np.ndarray(2, num_freq)
index 0 is for model response
index 1 is for normalized misfit
:Example: ::
>>> resp_obj = occam2d.Response(r"/home/occam/line1/inv1/test_01.resp")
"""
def __init__(self, resp_fn=None, **kwargs):
self.resp_fn = resp_fn
self.resp = None
self.occam_dict = {'1':'log_te_res',
'2':'te_phase',
'3':'re_tip',
'4':'im_tip',
'5':'log_tm_res',
'6':'tm_phase',
'9':'te_res',
'10':'tm_res'}
if resp_fn is not None:
self.read_response_file()
def read_response_file(self, resp_fn=None):
"""
read in response file and put into a list of dictionaries similar
to Data
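        :Example: (a sketch; the path is hypothetical) ::
            >>> resp_obj = occam2d.Response()
            >>> resp_obj.read_response_file(r"/home/occam/line1/inv1/test_01.resp")
            >>> te_resp = resp_obj.resp[0]['te_res'][0]  # response at station 1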
"""
if resp_fn is not None:
self.resp_fn = resp_fn
if self.resp_fn is None:
raise OccamInputError('resp_fn is None, please input response file')
if os.path.isfile(self.resp_fn) == False:
raise OccamInputError('Could not find {0}'.format(self.resp_fn))
r_arr = np.loadtxt(self.resp_fn, dtype=[('station', np.int),
('freq', np.int),
('comp', np.int),
('z', np.int),
('data', np.float),
('resp', np.float),
('err', np.float)])
num_stat = r_arr['station'].max()
num_freq = r_arr['freq'].max()
#set zero array size the first row will be the data and second the error
asize = (2, num_freq)
#make a list of dictionaries for each station.
self.resp = [{'te_phase':np.zeros(asize),
'tm_phase':np.zeros(asize),
're_tip':np.zeros(asize),
'im_tip':np.zeros(asize),
'te_res':np.zeros(asize),
'tm_res':np.zeros(asize)}
for ss in range(num_stat)]
for line in r_arr:
#station index -1 cause python starts at 0
ss = line['station']-1
#frequency index -1 cause python starts at 0
ff = line['freq']-1
#data key
key = self.occam_dict[str(line['comp'])]
#put into array
if line['comp'] == 1 or line['comp'] == 5:
self.resp[ss][key[4:]][0, ff] = 10**line['resp']
#error
self.resp[ss][key[4:]][1, ff] = line['err']*np.log(10)
else:
self.resp[ss][key][0, ff] = line['resp']
#error
self.resp[ss][key][1, ff] = line['err']
class Model(Startup):
"""
Read .iter file output by Occam2d. Builds the resistivity model from
mesh and regularization files found from the .iter file. The resistivity
model is an array(x_nodes, z_nodes) set on a regular grid, and the values
of the model response are filled in according to the regularization grid.
This allows for faster plotting.
    Inherits Startup because they are basically the same object.
Argument:
----------
**iter_fn** : string
full path to .iter file to read. *default* is None.
**model_fn** : string
full path to regularization file. *default* is None
and found directly from the .iter file. Only input
if the regularization is different from the file that
is in the .iter file.
**mesh_fn** : string
full path to mesh file. *default* is None
Found directly from the model_fn file. Only input
if the mesh is different from the file that
is in the model file.
===================== =====================================================
Key Words/Attributes Description
===================== =====================================================
data_fn full path to data file
iter_fn full path to .iter file
mesh_fn full path to mesh file
mesh_x np.ndarray(x_nodes, z_nodes) mesh grid for plotting
mesh_z np.ndarray(x_nodes, z_nodes) mesh grid for plotting
model_values model values from startup file
plot_x nodes of mesh in horizontal direction
plot_z nodes of mesh in vertical direction
res_model np.ndarray(x_nodes, z_nodes) resistivity model
values in linear scale
===================== =====================================================
===================== =====================================================
Methods Description
===================== =====================================================
build_model get the resistivity model from the .iter file
in a regular grid according to the mesh file
with resistivity values according to the model file
read_iter_file read .iter file and fill appropriate attributes
    write_iter_file       write an .iter file in case you want to set it as the
starting model or a priori model
===================== =====================================================
:Example: ::
>>> model = occam2D.Model(r"/home/occam/line1/inv1/test_01.iter")
>>> model.build_model()
"""
def __init__(self, iter_fn=None, model_fn=None, mesh_fn=None, **kwargs):
Startup.__init__(self, **kwargs)
self.iter_fn = iter_fn
self.model_fn = model_fn
self.mesh_fn = mesh_fn
self.data_fn = kwargs.pop('data_fn', None)
self.model_values = kwargs.pop('model_values', None)
self.res_model = None
self.plot_x = None
self.plot_z = None
self.mesh_x = None
self.mesh_z = None
def read_iter_file(self, iter_fn=None):
"""
Read an iteration file.
Arguments:
----------
**iter_fn** : string
                          full path to the iteration file to read.
                          *default* is None, in which case the iter_fn
                          attribute is used.
Returns:
--------
:Example: ::
>>> import mtpy.modeling.occam2d as occam2d
>>> itfn = r"/home/Occam2D/Line1/Inv1/Test_15.iter"
>>> ocm = occam2d.Model(itfn)
>>> ocm.read_iter_file()
"""
if iter_fn is not None:
            self.iter_fn = iter_fn
if self.iter_fn is None:
raise OccamInputError('iter_fn is None, input iteration file')
#check to see if the file exists
if os.path.exists(self.iter_fn) == False:
raise OccamInputError('Can not find {0}'.format(self.iter_fn))
self.save_path = os.path.dirname(self.iter_fn)
#open file, read lines, close file
ifid = file(self.iter_fn, 'r')
ilines = ifid.readlines()
ifid.close()
ii = 0
#put header info into dictionary with similar keys
while ilines[ii].lower().find('param') != 0:
iline = ilines[ii].strip().split(':')
key = iline[0].strip().lower()
if key.find('!') != 0:
key = key.replace(' ', '_').replace('file', 'fn').replace('/','_')
value = iline[1].strip()
try:
setattr(self, key, float(value))
except ValueError:
setattr(self, key, value)
ii += 1
#get number of parameters
iline = ilines[ii].strip().split(':')
key = iline[0].strip().lower().replace(' ', '_')
value = int(iline[1].strip())
setattr(self, key, value)
self.model_values = np.zeros(self.param_count)
kk= int(ii+1)
jj = 0
mv_index = 0
while jj < len(ilines)-kk:
iline = np.array(ilines[jj+kk].strip().split(), dtype='float')
self.model_values[mv_index:mv_index+iline.shape[0]] = iline
jj += 1
mv_index += iline.shape[0]
#make sure data file is full path
if os.path.isfile(self.data_fn) == False:
self.data_fn = os.path.join(self.save_path, self.data_fn)
#make sure model file is full path
if os.path.isfile(self.model_fn) == False:
self.model_fn = os.path.join(self.save_path, self.model_fn)
def write_iter_file(self, iter_fn=None):
"""
write an iteration file if you need to for some reason, same as
startup file
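        :Example: (a sketch; the path is hypothetical) ::
            >>> ocm.write_iter_file(r"/home/Occam2D/Line1/Inv1/ITER_a_priori.iter")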
"""
if iter_fn is not None:
self.iter_fn = iter_fn
self.write_startup_file(iter_fn)
def build_model(self):
"""
build the model from the mesh, regularization grid and model file
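        :Example: (a sketch of plotting the recovered model by hand) ::
            >>> model.build_model()
            >>> import matplotlib.pyplot as plt
            >>> plt.pcolormesh(model.mesh_x, model.mesh_z, model.res_model)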
"""
#first read in the iteration file
self.read_iter_file()
        #read in the regularization file
r1 = Regularization()
r1.read_regularization_file(self.model_fn)
r1.model_rows = np.array(r1.model_rows)
self.model_rows = r1.model_rows
self.model_columns = r1.model_columns
#read in mesh file
r1.read_mesh_file(r1.mesh_fn)
#get the binding offset which is the right side of the furthest left
#block, this helps locate the model in relative space
bndgoff = r1.binding_offset
#make sure that the number of rows and number of columns are the same
assert len(r1.model_rows) == len(r1.model_columns)
#initiate the resistivity model to the shape of the FE mesh
self.res_model = np.zeros((r1.z_nodes.shape[0], r1.x_nodes.shape[0]))
#read in the model and set the regularization block values to map onto
#the FE mesh so that the model can be plotted as an image or regular
#mesh.
mm = 0
for ii in range(len(r1.model_rows)):
#get the number of layers to combine
#this index will be the first index in the vertical direction
ny1 = r1.model_rows[:ii, 0].sum()
#the second index in the vertical direction
ny2 = ny1+r1.model_rows[ii][0]
#make the list of amalgamated columns an array for ease
lc = np.array(r1.model_columns[ii])
#loop over the number of amalgamated blocks
for jj in range(len(r1.model_columns[ii])):
#get first in index in the horizontal direction
nx1 = lc[:jj].sum()
#get second index in horizontal direction
nx2 = nx1+lc[jj]
                #put the appropriate resistivity value into all the amalgamated
#model blocks of the regularization grid into the forward model
#grid
self.res_model[ny1:ny2, nx1:nx2] = self.model_values[mm]
mm += 1
#make some arrays for plotting the model
self.plot_x = np.array([r1.x_nodes[:ii+1].sum()
for ii in range(len(r1.x_nodes))])
self.plot_z = np.array([r1.z_nodes[:ii+1].sum()
for ii in range(len(r1.z_nodes))])
#center the grid onto the station coordinates
x0 = bndgoff-self.plot_x[r1.model_columns[0][0]]
self.plot_x += x0
#flip the arrays around for plotting purposes
#plotx = plotx[::-1] and make the first layer start at zero
self.plot_z = self.plot_z[::-1]-self.plot_z[0]
#make a mesh grid to plot in the model coordinates
self.mesh_x, self.mesh_z = np.meshgrid(self.plot_x, self.plot_z)
#flip the resmodel upside down so that the top is the stations
self.res_model = np.flipud(self.res_model)
#==============================================================================
# plot the MT and model responses
#==============================================================================
class PlotResponse():
"""
Helper class to deal with plotting the MT response and occam2d model.
Arguments:
-------------
**data_fn** : string
full path to data file
**resp_fn** : string or list
full path(s) to response file(s)
==================== ======================================================
Attributes/key words description
==================== ======================================================
ax_list list of matplotlib.axes instances for use with
OccamPointPicker
color_mode [ 'color' | 'bw' ] plot figures in color or
black and white ('bw')
cted color of Data TE marker and line
ctem color of Model TE marker and line
ctewl color of Winglink Model TE marker and line
ctmd color of Data TM marker and line
ctmm color of Model TM marker and line
ctmwl color of Winglink Model TM marker and line
e_capsize size of error bar caps in points
e_capthick line thickness of error bar caps in points
err_list list of line properties of error bars for use with
OccamPointPicker
fig_dpi figure resolution in dots-per-inch
fig_list list of dictionaries with key words
station --> station name
fig --> matplotlib.figure instance
axrte --> matplotlib.axes instance for TE app.res
axrtm --> matplotlib.axes instance for TM app.res
axpte --> matplotlib.axes instance for TE phase
axptm --> matplotlib.axes instance for TM phase
fig_num starting number of figure
fig_size size of figure in inches (width, height)
font_size size of axes ticklabel font in points
line_list list of matplotlib.Line instances for use with
OccamPointPicker
lw line width of lines in points
ms marker size in points
mted marker for Data TE mode
mtem marker for Model TE mode
mtewl marker for Winglink Model TE
mtmd marker for Data TM mode
mtmm marker for Model TM mode
mtmwl marker for Winglink TM mode
period np.ndarray of periods to plot
phase_limits limits on phase plots in degrees (min, max)
plot_num [ 1 | 2 ]
1 to plot both modes in a single plot
2 to plot modes in separate plots (default)
plot_tipper [ 'y' | 'n' ] plot tipper data if desired
plot_type [ '1' | station_list]
'1' --> to plot all stations in different figures
station_list --> to plot a few stations, give names
of stations ex. ['mt01', 'mt07']
plot_yn [ 'y' | 'n']
'y' --> to plot on instantiation
'n' --> to not plot on instantiation
res_limits limits on resistivity plot in log scale (min, max)
rp_list list of dictionaries from read2Ddata
station_list station_list list of stations in rp_list
subplot_bottom subplot spacing from bottom (relative coordinates)
subplot_hspace vertical spacing between subplots
subplot_left subplot spacing from left
subplot_right subplot spacing from right
subplot_top subplot spacing from top
subplot_wspace horizontal spacing between subplots
wl_fn Winglink file name (full path)
==================== ======================================================
=================== =======================================================
Methods Description
=================== =======================================================
plot plots the apparent resistiviy and phase of data and
model if given. called on instantiation if plot_yn
is 'y'.
redraw_plot call redraw_plot to redraw the figures,
if one of the attributes has been changed
save_figures save all the matplotlib.figure instances in fig_list
=================== =======================================================
:Example: ::
>>> data_fn = r"/home/occam/line1/inv1/OccamDataFile.dat"
>>> resp_list = [r"/home/occam/line1/inv1/test_{0:02}".format(ii)
for ii in range(2, 8, 2)]
>>> pr_obj = occam2d.PlotResponse(data_fn, resp_list, plot_tipper='y')
"""
def __init__(self, data_fn, resp_fn=None, **kwargs):
self.data_fn = data_fn
self.resp_fn = resp_fn
if self.resp_fn is not None:
if type(self.resp_fn) != list:
self.resp_fn = [self.resp_fn]
self.wl_fn = kwargs.pop('wl_fn', None)
self.color_mode = kwargs.pop('color_mode', 'color')
self.ms = kwargs.pop('ms', 1.5)
self.lw = kwargs.pop('lw', .5)
self.e_capthick = kwargs.pop('e_capthick', .5)
self.e_capsize = kwargs.pop('e_capsize', 2)
self.ax_list = []
self.line_list = []
self.err_list = []
#color mode
if self.color_mode == 'color':
#color for data
self.cted = kwargs.pop('cted', (0, 0, 1))
self.ctmd = kwargs.pop('ctmd', (1, 0, 0))
self.mted = kwargs.pop('mted', 's')
self.mtmd = kwargs.pop('mtmd', 'o')
#color for occam2d model
self.ctem = kwargs.pop('ctem', (0, .6, .3))
self.ctmm = kwargs.pop('ctmm', (.9, 0, .8))
self.mtem = kwargs.pop('mtem', '+')
self.mtmm = kwargs.pop('mtmm', '+')
#color for Winglink model
self.ctewl = kwargs.pop('ctewl', (0, .6, .8))
self.ctmwl = kwargs.pop('ctmwl', (.8, .7, 0))
self.mtewl = kwargs.pop('mtewl', 'x')
self.mtmwl = kwargs.pop('mtmwl', 'x')
#color of tipper
self.ctipr = kwargs.pop('ctipr', self.cted)
self.ctipi = kwargs.pop('ctipi', self.ctmd)
#black and white mode
elif self.color_mode == 'bw':
#color for data
self.cted = kwargs.pop('cted', (0, 0, 0))
self.ctmd = kwargs.pop('ctmd', (0, 0, 0))
self.mted = kwargs.pop('mted', '*')
self.mtmd = kwargs.pop('mtmd', 'v')
#color for occam2d model
self.ctem = kwargs.pop('ctem', (0.6, 0.6, 0.6))
self.ctmm = kwargs.pop('ctmm', (0.6, 0.6, 0.6))
self.mtem = kwargs.pop('mtem', '+')
self.mtmm = kwargs.pop('mtmm', 'x')
#color for Winglink model
self.ctewl = kwargs.pop('ctewl', (0.3, 0.3, 0.3))
self.ctmwl = kwargs.pop('ctmwl', (0.3, 0.3, 0.3))
self.mtewl = kwargs.pop('mtewl', '|')
self.mtmwl = kwargs.pop('mtmwl', '_')
self.ctipr = kwargs.pop('ctipr', self.cted)
self.ctipi = kwargs.pop('ctipi', self.ctmd)
self.phase_limits = kwargs.pop('phase_limits', (-5, 95))
self.res_limits = kwargs.pop('res_limits', None)
self.tip_limits = kwargs.pop('tip_limits', (-.5, .5))
self.fig_num = kwargs.pop('fig_num', 1)
self.fig_size = kwargs.pop('fig_size', [6, 6])
self.fig_dpi = kwargs.pop('dpi', 300)
self.subplot_wspace = .1
self.subplot_hspace = .15
self.subplot_right = .98
self.subplot_left = .085
self.subplot_top = .93
self.subplot_bottom = .1
self.font_size = kwargs.pop('font_size', 6)
self.plot_type = kwargs.pop('plot_type', '1')
self.plot_num = kwargs.pop('plot_num', 2)
self.plot_tipper = kwargs.pop('plot_tipper', 'n')
self.plot_model_error = kwargs.pop('plot_model_err', 'y')
self.plot_yn = kwargs.pop('plot_yn', 'y')
if self.plot_num == 1:
self.ylabel_coord = kwargs.pop('ylabel_coords', (-.055, .5))
elif self.plot_num == 2:
self.ylabel_coord = kwargs.pop('ylabel_coords', (-.12, .5))
self.fig_list = []
if self.plot_yn == 'y':
self.plot()
def plot(self):
"""
plot the data and model response, if given, in individual plots.
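        :Example: (a sketch; station names are hypothetical) ::
            >>> pr_obj = occam2d.PlotResponse(data_fn, resp_list, plot_yn='n')
            >>> pr_obj.plot_type = ['mt01', 'mt07']
            >>> pr_obj.plot()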
"""
data_obj = Data()
data_obj.read_data_file(self.data_fn)
rp_list = data_obj.data
nr = len(rp_list)
#create station list
self.station_list = [rp['station'] for rp in rp_list]
#boolean for adding winglink output to the plots 0 for no, 1 for yes
addwl = 0
#read in winglink data file
if self.wl_fn != None:
addwl = 1
            self.subplot_hspace += .1
wld, wlrp_list, wlplist, wlslist, wltlist = MTwl.readOutputFile(
self.wl_fn)
sdict = dict([(ostation, wlistation) for wlistation in wlslist
for ostation in self.station_list
if wlistation.find(ostation)>=0])
#set a local parameter period for less typing
period = data_obj.period
#---------------plot each respones in a different figure---------------
if self.plot_type == '1':
pstation_list = range(len(self.station_list))
else:
if type(self.plot_type) is not list:
self.plot_type = [self.plot_type]
pstation_list = []
for ii, station in enumerate(self.station_list):
for pstation in self.plot_type:
if station.find(pstation) >= 0:
pstation_list.append(ii)
#set the grid of subplots
if self.plot_tipper == 'y':
gs = gridspec.GridSpec(3, 2,
wspace=self.subplot_wspace,
left=self.subplot_left,
top=self.subplot_top,
bottom=self.subplot_bottom,
right=self.subplot_right,
hspace=self.subplot_hspace,
height_ratios=[2, 1.5, 1])
else:
gs = gridspec.GridSpec(2, 2,
wspace=self.subplot_wspace,
left=self.subplot_left,
top=self.subplot_top,
bottom=self.subplot_bottom,
right=self.subplot_right,
hspace=self.subplot_hspace,
height_ratios=[2, 1.5])
#--> set default font size
plt.rcParams['font.size'] = self.font_size
#loop over each station to plot
for ii, jj in enumerate(pstation_list):
fig = plt.figure(self.station_list[jj],
self.fig_size, dpi=self.fig_dpi)
plt.clf()
#--> set subplot instances
#---plot both TE and TM in same subplot---
if self.plot_num == 1:
axrte = fig.add_subplot(gs[0,:])
axrtm = axrte
axpte = fig.add_subplot(gs[1,:],sharex=axrte)
axptm = axpte
if self.plot_tipper == 'y':
axtipre = fig.add_subplot(gs[2, :], sharex=axrte)
axtipim = axtipre
#---plot TE and TM in separate subplots---
elif self.plot_num == 2:
axrte = fig.add_subplot(gs[0,0])
axrtm = fig.add_subplot(gs[0,1])
axpte = fig.add_subplot(gs[1,0], sharex=axrte)
axptm = fig.add_subplot(gs[1,1], sharex=axrtm)
if self.plot_tipper == 'y':
axtipre = fig.add_subplot(gs[2, 0], sharex=axrte)
axtipim = fig.add_subplot(gs[2, 1], sharex=axrtm)
#plot the data, it should be the same for all response files
#empty lists for legend marker and label
rlistte = []
llistte = []
rlisttm = []
llisttm = []
#------------Plot Resistivity----------------------------------
#cut out missing data points first
#--> data
rxy = np.where(rp_list[jj]['te_res'][0]!=0)[0]
ryx = np.where(rp_list[jj]['tm_res'][0]!=0)[0]
#--> TE mode Data
if len(rxy) > 0:
rte_err = rp_list[jj]['te_res'][1, rxy]*\
rp_list[jj]['te_res'][0, rxy]
rte = plot_errorbar(axrte,
period[rxy],
rp_list[jj]['te_res'][0, rxy],
ls=':',
marker=self.mted,
ms=self.ms,
color=self.cted,
y_error=rte_err,
lw=self.lw,
e_capsize=self.e_capsize,
e_capthick=self.e_capthick)
rlistte.append(rte[0])
llistte.append('$Obs_{TE}$')
else:
rte = [None, [None, None, None], [None, None, None]]
#--> TM mode data
if len(ryx) > 0:
rtm_err = rp_list[jj]['tm_res'][1, ryx]*\
rp_list[jj]['tm_res'][0, ryx]
rtm = plot_errorbar(axrtm,
period[ryx],
rp_list[jj]['tm_res'][0, ryx],
ls=':',
marker=self.mtmd,
ms=self.ms,
color=self.ctmd,
y_error=rtm_err,
lw=self.lw,
e_capsize=self.e_capsize,
e_capthick=self.e_capthick)
rlisttm.append(rtm[0])
llisttm.append('$Obs_{TM}$')
else:
rtm = [None, [None, None, None], [None, None, None]]
#--------------------plot phase--------------------------------
#cut out missing data points first
#--> data
pxy = np.where(rp_list[jj]['te_phase'][0]!=0)[0]
pyx = np.where(rp_list[jj]['tm_phase'][0]!=0)[0]
#--> TE mode data
if len(pxy) > 0:
pte = plot_errorbar(axpte,
period[pxy],
rp_list[jj]['te_phase'][0, pxy],
ls=':',
marker=self.mted,
ms=self.ms,
color=self.cted,
y_error=rp_list[jj]['te_phase'][1, pxy],
lw=self.lw,
e_capsize=self.e_capsize,
e_capthick=self.e_capthick)
else:
pte = [None, [None, None, None], [None, None, None]]
#--> TM mode data
if len(pyx)>0:
ptm = plot_errorbar(axptm,
period[pyx],
rp_list[jj]['tm_phase'][0, pyx],
ls=':',
marker=self.mtmd,
ms=self.ms,
color=self.ctmd,
y_error=rp_list[jj]['tm_phase'][1, pyx],
lw=self.lw,
e_capsize=self.e_capsize,
e_capthick=self.e_capthick)
else:
ptm = [None, [None, None, None], [None, None, None]]
#append axis properties to lists that can be used by
#OccamPointPicker
self.ax_list.append([axrte, axrtm, axpte, axptm])
self.line_list.append([rte[0], rtm[0], pte[0], ptm[0]])
self.err_list.append([[rte[1][0],rte[1][1],rte[2][0]],
[rtm[1][0],rtm[1][1],rtm[2][0]],
[pte[1][0],pte[1][1],pte[2][0]],
[ptm[1][0],ptm[1][1],ptm[2][0]]])
#---------------------plot tipper----------------------------------
if self.plot_tipper == 'y':
t_list = []
t_label = []
txy = np.where(rp_list[jj]['re_tip'][0]!=0)[0]
tyx = np.where(rp_list[jj]['im_tip'][0]!=0)[0]
#--> real tipper data
if len(txy)>0:
per_list_p =[]
tpr_list_p = []
per_list_n =[]
tpr_list_n = []
for per, tpr in zip(period[txy],
rp_list[jj]['re_tip'][0, txy]):
if tpr >= 0:
per_list_p.append(per)
tpr_list_p.append(tpr)
else:
per_list_n.append(per)
tpr_list_n.append(tpr)
if len(per_list_p) > 0:
m_line, s_line, b_line = axtipre.stem(per_list_p,
tpr_list_p,
markerfmt='^',
basefmt='k')
plt.setp(m_line, 'markerfacecolor', self.ctipr)
plt.setp(m_line, 'markeredgecolor', self.ctipr)
plt.setp(m_line, 'markersize', self.ms)
plt.setp(s_line, 'linewidth', self.lw)
plt.setp(s_line, 'color', self.ctipr)
plt.setp(b_line, 'linewidth', .01)
t_list.append(m_line)
t_label.append('Real')
if len(per_list_n) > 0:
m_line, s_line, b_line = axtipre.stem(per_list_n,
tpr_list_n,
markerfmt='v',
basefmt='k')
plt.setp(m_line, 'markerfacecolor', self.ctipr)
plt.setp(m_line, 'markeredgecolor', self.ctipr)
plt.setp(m_line, 'markersize', self.ms)
plt.setp(s_line, 'linewidth', self.lw)
plt.setp(s_line, 'color', self.ctipr)
plt.setp(b_line, 'linewidth', .01)
if len(t_list) == 0:
t_list.append(m_line)
t_label.append('Real')
else:
pass
if len(tyx)>0:
per_list_p =[]
tpi_list_p = []
per_list_n =[]
tpi_list_n = []
for per, tpi in zip(period[tyx],
rp_list[jj]['im_tip'][0, tyx]):
if tpi >= 0:
per_list_p.append(per)
tpi_list_p.append(tpi)
else:
per_list_n.append(per)
tpi_list_n.append(tpi)
if len(per_list_p) > 0:
m_line, s_line, b_line = axtipim.stem(per_list_p,
tpi_list_p,
markerfmt='^',
basefmt='k')
plt.setp(m_line, 'markerfacecolor', self.ctipi)
plt.setp(m_line, 'markeredgecolor', self.ctipi)
plt.setp(m_line, 'markersize', self.ms)
plt.setp(s_line, 'linewidth', self.lw)
plt.setp(s_line, 'color', self.ctipi)
plt.setp(b_line, 'linewidth', .01)
t_list.append(m_line)
t_label.append('Imag')
if len(per_list_n) > 0:
m_line, s_line, b_line = axtipim.stem(per_list_n,
tpi_list_n,
markerfmt='v',
basefmt='k')
plt.setp(m_line, 'markerfacecolor', self.ctipi)
plt.setp(m_line, 'markeredgecolor', self.ctipi)
plt.setp(m_line, 'markersize', self.ms)
plt.setp(s_line, 'linewidth', self.lw)
plt.setp(s_line, 'color', self.ctipi)
plt.setp(b_line, 'linewidth', .01)
if len(t_list) <= 1:
t_list.append(m_line)
t_label.append('Imag')
else:
pass
#------------------- plot model response --------------------------
if self.resp_fn is not None:
num_resp = len(self.resp_fn)
for rr, rfn in enumerate(self.resp_fn):
resp_obj = Response()
resp_obj.read_response_file(rfn)
rp = resp_obj.resp
# create colors for different responses
if self.color_mode == 'color':
cxy = (0,
.4+float(rr)/(3*num_resp),
0)
cyx = (.7+float(rr)/(4*num_resp),
.13,
.63-float(rr)/(4*num_resp))
elif self.color_mode == 'bw':
cxy = (1-1.25/(rr+2.), 1-1.25/(rr+2.), 1-1.25/(rr+2.))
cyx = (1-1.25/(rr+2.), 1-1.25/(rr+2.), 1-1.25/(rr+2.))
#calculate rms's
rmslistte = np.hstack((rp[jj]['te_res'][1],
rp[jj]['te_phase'][1]))
rmslisttm = np.hstack((rp[jj]['tm_res'][1],
rp[jj]['tm_phase'][1]))
rmste = np.sqrt(np.sum([rms**2 for rms in rmslistte])/
len(rmslistte))
rmstm = np.sqrt(np.sum([rms**2 for rms in rmslisttm])/
len(rmslisttm))
#------------Plot Resistivity------------------------------
#cut out missing data points first
#--> response
mrxy = np.where(rp[jj]['te_res'][0]!=0)[0]
mryx = np.where(rp[jj]['tm_res'][0]!=0)[0]
#--> TE mode Model Response
if len(mrxy) > 0:
r3 = plot_errorbar(axrte,
period[mrxy],
rp[jj]['te_res'][0, mrxy],
ls='--',
marker=self.mtem,
ms=self.ms,
color=cxy,
y_error=None,
lw=self.lw,
e_capsize=self.e_capsize,
e_capthick=self.e_capthick)
rlistte.append(r3[0])
llistte.append('$Mod_{TE}$ '+'{0:.2f}'.format(rmste))
else:
pass
#--> TM mode model response
if len(mryx)>0:
r4 = plot_errorbar(axrtm,
period[mryx],
rp[jj]['tm_res'][0, mryx],
ls='--',
marker=self.mtmm,
ms=self.ms,
color=cyx,
y_error=None,
lw=self.lw,
e_capsize=self.e_capsize,
e_capthick=self.e_capthick)
rlisttm.append(r4[0])
llisttm.append('$Mod_{TM}$ '+'{0:.2f}'.format(rmstm))
else:
pass
#--------------------plot phase--------------------------------
#cut out missing data points first
                    #--> response
mpxy = np.where(rp[jj]['te_phase'][0]!=0)[0]
mpyx = np.where(rp[jj]['tm_phase'][0]!=0)[0]
#--> TE mode response
if len(mpxy) > 0:
p3 = plot_errorbar(axpte,
period[mpxy],
rp[jj]['te_phase'][0, mpxy],
ls='--',
ms=self.ms,
color=cxy,
y_error=None,
lw=self.lw,
e_capsize=self.e_capsize,
e_capthick=self.e_capthick)
else:
pass
#--> TM mode response
if len(mpyx) > 0:
p4 = plot_errorbar(axptm,
period[mpyx],
rp[jj]['tm_phase'][0, mpyx],
ls='--',
marker=self.mtmm,
ms=self.ms,
color=cyx,
y_error=None,
lw=self.lw,
e_capsize=self.e_capsize,
e_capthick=self.e_capthick)
else:
pass
#---------------------plot tipper--------------------------
if self.plot_tipper == 'y':
txy = np.where(rp[jj]['re_tip'][0]!=0)[0]
tyx = np.where(rp[jj]['im_tip'][0]!=0)[0]
#--> real tipper data
if len(txy)>0:
per_list_p =[]
tpr_list_p = []
per_list_n =[]
tpr_list_n = []
for per, tpr in zip(period[txy],
rp[jj]['re_tip'][0, txy]):
if tpr >= 0:
per_list_p.append(per)
tpr_list_p.append(tpr)
else:
per_list_n.append(per)
tpr_list_n.append(tpr)
if len(per_list_p) > 0:
m_line, s_line, b_line = axtipre.stem(per_list_p,
tpr_list_p,
markerfmt='^',
basefmt='k')
plt.setp(m_line, 'markerfacecolor', cxy)
plt.setp(m_line, 'markeredgecolor', cxy)
plt.setp(m_line, 'markersize', self.ms)
plt.setp(s_line, 'linewidth', self.lw)
plt.setp(s_line, 'color', cxy)
plt.setp(b_line, 'linewidth', .01)
if len(per_list_n) > 0:
m_line, s_line, b_line = axtipre.stem(per_list_n,
tpr_list_n,
markerfmt='v',
basefmt='k')
plt.setp(m_line, 'markerfacecolor', cxy)
plt.setp(m_line, 'markeredgecolor', cxy)
plt.setp(m_line, 'markersize', self.ms)
plt.setp(s_line, 'linewidth', self.lw)
plt.setp(s_line, 'color', cxy)
plt.setp(b_line, 'linewidth', .01)
else:
pass
if len(tyx)>0:
per_list_p =[]
tpi_list_p = []
per_list_n =[]
tpi_list_n = []
for per, tpi in zip(period[tyx],
rp[jj]['im_tip'][0, tyx]):
if tpi >= 0:
per_list_p.append(per)
tpi_list_p.append(tpi)
else:
per_list_n.append(per)
tpi_list_n.append(tpi)
if len(per_list_p) > 0:
m_line, s_line, b_line = axtipim.stem(per_list_p,
tpi_list_p,
markerfmt='^',
basefmt='k')
plt.setp(m_line, 'markerfacecolor', cyx)
plt.setp(m_line, 'markeredgecolor', cyx)
plt.setp(m_line, 'markersize', self.ms)
plt.setp(s_line, 'linewidth', self.lw)
plt.setp(s_line, 'color', cyx)
plt.setp(b_line, 'linewidth', .01)
if len(per_list_n) > 0:
m_line, s_line, b_line = axtipim.stem(per_list_n,
tpi_list_n,
markerfmt='v',
basefmt='k')
plt.setp(m_line, 'markerfacecolor', cyx)
plt.setp(m_line, 'markeredgecolor', cyx)
plt.setp(m_line, 'markersize', self.ms)
plt.setp(s_line, 'linewidth', self.lw)
plt.setp(s_line, 'color', cyx)
plt.setp(b_line, 'linewidth', .01)
else:
pass
#--------------add in winglink responses------------------------
if addwl == 1:
try:
wlrms = wld[sdict[self.station_list[jj]]]['rms']
axrte.set_title(self.station_list[jj]+
'\n rms_occ_TE={0:.2f}'.format(rmste)+
' rms_occ_TM={0:.2f}'.format(rmstm)+
' rms_wl={0:.2f}'.format(wlrms),
fontdict={'size':self.font_size,
'weight':'bold'})
for ww, wlistation in enumerate(wlslist):
if wlistation.find(self.station_list[jj])==0:
print '{0} was found as {1} in winglink file'.format(
self.station_list[jj], wlistation)
wlrpdict = wlrp_list[ww]
zrxy = [np.where(wlrpdict['te_res'][0]!=0)[0]]
zryx = [np.where(wlrpdict['tm_res'][0]!=0)[0]]
#plot winglink resistivity
r5 = axrte.loglog(wlplist[zrxy],
wlrpdict['te_res'][1][zrxy],
ls='-.',
marker=self.mtewl,
ms=self.ms,
color=self.ctewl,
mfc=self.ctewl,
lw=self.lw)
r6 = axrtm.loglog(wlplist[zryx],
wlrpdict['tm_res'][1][zryx],
ls='-.',
marker=self.mtmwl,
ms=self.ms,
color=self.ctmwl,
mfc=self.ctmwl,
lw=self.lw)
#plot winglink phase
axpte.semilogx(wlplist[zrxy],
wlrpdict['te_phase'][1][zrxy],
ls='-.',
marker=self.mtewl,
ms=self.ms,
color=self.ctewl,
mfc=self.ctewl,
lw=self.lw)
axptm.semilogx(wlplist[zryx],
wlrpdict['tm_phase'][1][zryx],
ls='-.',
marker=self.mtmwl,
ms=self.ms,
color=self.ctmwl,
mfc=self.ctmwl,
lw=self.lw)
rlistte.append(r5[0])
rlisttm.append(r6[0])
llistte.append('$WLMod_{TE}$ '+'{0:.2f}'.format(wlrms))
llisttm.append('$WLMod_{TM}$ '+'{0:.2f}'.format(wlrms))
except (IndexError, KeyError):
print 'Station not present'
else:
if self.plot_num == 1:
axrte.set_title(self.station_list[jj],
fontdict={'size':self.font_size+2,
'weight':'bold'})
elif self.plot_num == 2:
fig.suptitle(self.station_list[jj],
fontdict={'size':self.font_size+2,
'weight':'bold'})
#set the axis properties
ax_list = [axrte, axrtm]
for aa, axr in enumerate(ax_list):
#set both axes to logarithmic scale
axr.set_xscale('log')
try:
axr.set_yscale('log')
except ValueError:
pass
#put on a grid
axr.grid(True, alpha=.3, which='both', lw=.5*self.lw)
axr.yaxis.set_label_coords(self.ylabel_coord[0],
self.ylabel_coord[1])
#set resistivity limits if desired
if self.res_limits != None:
axr.set_ylim(10**self.res_limits[0],
10**self.res_limits[1])
#set the tick labels to invisible
plt.setp(axr.xaxis.get_ticklabels(), visible=False)
if aa == 0:
axr.set_ylabel('App. Res. ($\Omega \cdot m$)',
fontdict={'size':self.font_size+2,
'weight':'bold'})
#set legend based on the plot type
if self.plot_num == 1:
if aa == 0:
axr.legend(rlistte+rlisttm,llistte+llisttm,
loc=2,markerscale=1,
borderaxespad=.05,
labelspacing=.08,
handletextpad=.15,
borderpad=.05,
prop={'size':self.font_size+1})
elif self.plot_num == 2:
if aa == 0:
axr.legend(rlistte,
llistte,
loc=2,markerscale=1,
borderaxespad=.05,
labelspacing=.08,
handletextpad=.15,
borderpad=.05,
prop={'size':self.font_size+1})
if aa==1:
axr.legend(rlisttm,
llisttm,
loc=2,markerscale=1,
borderaxespad=.05,
labelspacing=.08,
handletextpad=.15,
borderpad=.05,
prop={'size':self.font_size+1})
#set Properties for the phase axes
for aa, axp in enumerate([axpte, axptm]):
#set the x-axis to log scale
axp.set_xscale('log')
#set the phase limits
axp.set_ylim(self.phase_limits)
#put a grid on the subplot
axp.grid(True, alpha=.3, which='both', lw=.5*self.lw)
#set the tick locations
axp.yaxis.set_major_locator(MultipleLocator(10))
axp.yaxis.set_minor_locator(MultipleLocator(2))
#set the x axis label
if self.plot_tipper == 'y':
plt.setp(axp.get_xticklabels(), visible=False)
else:
axp.set_xlabel('Period (s)',
fontdict={'size':self.font_size+2,
'weight':'bold'})
#put the y label on the far left plot
axp.yaxis.set_label_coords(self.ylabel_coord[0],
self.ylabel_coord[1])
if aa==0:
axp.set_ylabel('Phase (deg)',
fontdict={'size':self.font_size+2,
'weight':'bold'})
#set axes properties of tipper axis
if self.plot_tipper == 'y':
for aa, axt in enumerate([axtipre, axtipim]):
axt.set_xscale('log')
#set tipper limits
axt.set_ylim(self.tip_limits)
#put a grid on the subplot
axt.grid(True, alpha=.3, which='both', lw=.5*self.lw)
#set the tick locations
axt.yaxis.set_major_locator(MultipleLocator(.2))
axt.yaxis.set_minor_locator(MultipleLocator(.1))
#set the x axis label
axt.set_xlabel('Period (s)',
fontdict={'size':self.font_size+2,
'weight':'bold'})
axt.set_xlim(10**np.floor(np.log10(data_obj.period.min())),
10**np.ceil(np.log10(data_obj.period.max())))
#put the y label on the far left plot
axt.yaxis.set_label_coords(self.ylabel_coord[0],
self.ylabel_coord[1])
if aa==0:
axt.set_ylabel('Tipper',
fontdict={'size':self.font_size+2,
'weight':'bold'})
if self.plot_num == 2:
axt.text(axt.get_xlim()[0]*1.25,
self.tip_limits[1]*.9,
'Real', horizontalalignment='left',
verticalalignment='top',
bbox={'facecolor':'white'},
fontdict={'size':self.font_size+1})
else:
axt.legend(t_list, t_label,
loc=2,markerscale=1,
borderaxespad=.05,
labelspacing=.08,
handletextpad=.15,
borderpad=.05,
prop={'size':self.font_size+1})
if aa == 1:
if self.plot_num == 2:
axt.text(axt.get_xlim()[0]*1.25,
self.tip_limits[1]*.9,
'Imag', horizontalalignment='left',
verticalalignment='top',
bbox={'facecolor':'white'},
fontdict={'size':self.font_size+1})
#make sure the axis and figure are accessible to the user
self.fig_list.append({'station':self.station_list[jj],
'fig':fig, 'axrte':axrte, 'axrtm':axrtm,
'axpte':axpte, 'axptm':axptm})
#display the figure
plt.show()
def redraw_plot(self):
"""
redraw plot if parameters were changed
use this function if you updated some attributes and want to re-plot.
:Example: ::
>>> # change the color and marker of the xy components
>>> import mtpy.modeling.occam2d as occam2d
>>> ocd = occam2d.Occam2DData(r"/home/occam2d/Data.dat")
>>> p1 = ocd.plot2DResponses()
>>> #change color of te markers to a gray-blue
>>> p1.cted = (.5, .5, .7)
>>> p1.redraw_plot()
"""
plt.close('all')
self.plot()
def save_figures(self, save_path, fig_fmt='pdf', fig_dpi=None,
close_fig='y'):
"""
save all the figure that are in self.fig_list
:Example: ::
>>> # change the color and marker of the xy components
>>> import mtpy.modeling.occam2d as occam2d
>>> ocd = occam2d.Occam2DData(r"/home/occam2d/Data.dat")
>>> p1 = ocd.plot2DResponses()
>>> p1.save_figures(r"/home/occam2d/Figures", fig_fmt='jpg')
"""
if not os.path.exists(save_path):
os.mkdir(save_path)
for fdict in self.fig_list:
svfn = '{0}_resp.{1}'.format(fdict['station'], fig_fmt)
fdict['fig'].savefig(os.path.join(save_path, svfn),
dpi=fig_dpi if fig_dpi is not None else self.fig_dpi)
if close_fig == 'y':
plt.close(fdict['fig'])
print "saved figure to {0}".format(os.path.join(save_path, svfn))
#==============================================================================
# plot model
#==============================================================================
class PlotModel(Model):
"""
plot the 2D model found by Occam2D. The model is displayed as a meshgrid
instead of model bricks. This speeds things up considerably.
Inherits the Model class to take advantage of the attributes and methods
already coded.
Arguments:
-----------
**iter_fn** : string
full path to iteration file. From here all the
necessary files can be found assuming they are in the
same directory. If they are not then need to input
manually.
======================= ===============================================
keywords description
======================= ===============================================
block_font_size         font size of block number if blocknum == 'on'
blocknum                [ 'on' | 'off' ] to plot regularization block
numbers.
cb_pad padding between axes edge and color bar
cb_shrink percentage to shrink the color bar
climits limits of the color scale for resistivity
in log scale (min, max)
cmap name of color map for resistivity values
femesh plot the finite element mesh
femesh_triangles plot the finite element mesh with each block
divided into four triangles
fig_aspect aspect ratio between width and height of
resistivity image. 1 for equal axes
fig_dpi resolution of figure in dots-per-inch
fig_num number of figure instance
fig_size size of figure in inches (width, height)
font_size size of axes tick labels, axes labels is +2
grid [ 'both' | 'major' |'minor' | None ] string
to tell the program to make a grid on the
specified axes.
meshnum [ 'on' | 'off' ] 'on' will plot finite element
mesh numbers
meshnum_font_size font size of mesh numbers if meshnum == 'on'
ms size of station marker
plot_yn [ 'y' | 'n']
'y' --> to plot on instantiation
'n' --> to not plot on instantiation
regmesh [ 'on' | 'off' ] plot the regularization mesh
plots as blue lines
station_color color of station marker
station_font_color color station label
station_font_pad padding between station label and marker
station_font_rotation angle of station label in degrees 0 is
horizontal
station_font_size font size of station label
station_font_weight font weight of station label
station_id index to take station label from station name
station_marker          station marker. if inputting a LaTeX marker
be sure to input as r"LaTexMarker" otherwise
it might not plot properly
subplot_bottom subplot spacing from bottom
subplot_left subplot spacing from left
subplot_right subplot spacing from right
subplot_top subplot spacing from top
title title of plot. If None then the name of the
iteration file and containing folder will be
the title with RMS and Roughness.
xlimits limits of plot in x-direction in (km)
xminorticks increment of minor ticks in x direction
xpad padding in x-direction in km
ylimits depth limits of plot positive down (km)
yminorticks increment of minor ticks in y-direction
ypad padding in negative y-direction (km)
yscale [ 'km' | 'm' ] scale of plot, if 'm' everything
will be scaled accordingly.
======================= ===============================================
=================== =======================================================
Methods Description
=================== =======================================================
plot plots resistivity model.
redraw_plot call redraw_plot to redraw the figures,
if one of the attributes has been changed
save_figure saves the matplotlib.figure instance to desired
location and format
=================== ======================================================
:Example:
---------------
>>> import mtpy.modeling.occam2d as occam2d
>>> model_plot = occam2d.PlotModel(r"/home/occam/Inv1/mt_01.iter")
>>> # change the color limits
>>> model_plot.climits = (1, 4)
>>> model_plot.redraw_plot()
>>> #change len of station name
>>> model_plot.station_id = [2, 5]
>>> model_plot.redraw_plot()
"""
def __init__(self, iter_fn=None, data_fn=None, **kwargs):
Model.__init__(self, iter_fn, **kwargs)
self.yscale = kwargs.pop('yscale', 'km')
self.fig_num = kwargs.pop('fig_num', 1)
self.fig_size = kwargs.pop('fig_size', [6, 6])
self.fig_dpi = kwargs.pop('dpi', 300)
self.fig_aspect = kwargs.pop('fig_aspect', 1)
self.title = kwargs.pop('title', 'on')
self.xpad = kwargs.pop('xpad', 1.0)
self.ypad = kwargs.pop('ypad', 1.0)
self.ms = kwargs.pop('ms', 10)
self.station_locations = None
self.station_list = None
self.station_id = kwargs.pop('station_id', None)
self.station_font_size = kwargs.pop('station_font_size', 8)
self.station_font_pad = kwargs.pop('station_font_pad', 1.0)
self.station_font_weight = kwargs.pop('station_font_weight', 'bold')
self.station_font_rotation = kwargs.pop('station_font_rotation', 60)
self.station_font_color = kwargs.pop('station_font_color', 'k')
self.station_marker = kwargs.pop('station_marker',
r"$\blacktriangledown$")
self.station_color = kwargs.pop('station_color', 'k')
self.ylimits = kwargs.pop('ylimits', None)
self.xlimits = kwargs.pop('xlimits', None)
self.xminorticks = kwargs.pop('xminorticks', 5)
self.yminorticks = kwargs.pop('yminorticks', 1)
self.climits = kwargs.pop('climits', (0,4))
self.cmap = kwargs.pop('cmap', 'jet_r')
self.font_size = kwargs.pop('font_size', 8)
self.femesh = kwargs.pop('femesh', 'off')
self.femesh_triangles = kwargs.pop('femesh_triangles', 'off')
self.femesh_lw = kwargs.pop('femesh_lw', .4)
self.femesh_color = kwargs.pop('femesh_color', 'k')
self.meshnum = kwargs.pop('meshnum', 'off')
self.meshnum_font_size = kwargs.pop('meshnum_font_size', 3)
self.regmesh = kwargs.pop('regmesh', 'off')
self.regmesh_lw = kwargs.pop('regmesh_lw', .4)
self.regmesh_color = kwargs.pop('regmesh_color', 'b')
self.blocknum = kwargs.pop('blocknum', 'off')
self.block_font_size = kwargs.pop('block_font_size', 3)
self.grid = kwargs.pop('grid', None)
self.cb_shrink = kwargs.pop('cb_shrink', .8)
self.cb_pad = kwargs.pop('cb_pad', .01)
self.subplot_right = .99
self.subplot_left = .085
self.subplot_top = .92
self.subplot_bottom = .1
self.plot_yn = kwargs.pop('plot_yn', 'y')
if self.plot_yn == 'y':
self.plot()
def plot(self):
"""
plotModel will plot the model output by occam2d in the iteration file.
:Example: ::
>>> import mtpy.modeling.occam2d as occam2d
>>> itfn = r"/home/Occam2D/Line1/Inv1/Test_15.iter"
>>> model_plot = occam2d.PlotModel(itfn)
>>> model_plot.ms = 20
>>> model_plot.ylimits = (0,.350)
>>> model_plot.yscale = 'm'
>>> model_plot.spad = .10
>>> model_plot.ypad = .125
>>> model_plot.xpad = .025
>>> model_plot.climits = (0,2.5)
>>> model_plot.fig_aspect = 'equal'
>>> model_plot.redraw_plot()
"""
#--> read in iteration file and build the model
self.read_iter_file()
self.build_model()
#--> get station locations and names from data file
d_object = Data()
d_object.read_data_file(self.data_fn)
self.station_locations = d_object.station_locations.copy()
self.station_list = d_object.station_list.copy()
#set the scale of the plot
if self.yscale == 'km':
df = 1000.
pf = 1.0
elif self.yscale == 'm':
df = 1.
pf = 1000.
else:
df = 1000.
pf = 1.0
#set some figure properties to use the maximum space
plt.rcParams['font.size'] = self.font_size
plt.rcParams['figure.subplot.left'] = self.subplot_left
plt.rcParams['figure.subplot.right'] = self.subplot_right
plt.rcParams['figure.subplot.bottom'] = self.subplot_bottom
plt.rcParams['figure.subplot.top'] = self.subplot_top
#station font dictionary
fdict = {'size':self.station_font_size,
'weight':self.station_font_weight,
'rotation':self.station_font_rotation,
'color':self.station_font_color}
#plot the model as a mesh
self.fig = plt.figure(self.fig_num, self.fig_size, dpi=self.fig_dpi)
plt.clf()
#add a subplot to the figure with the specified aspect ratio
ax = self.fig.add_subplot(1, 1, 1, aspect=self.fig_aspect)
#plot the model as a pcolormesh so the extents are constrained to
#the model coordinates
ax.pcolormesh(self.mesh_x/df,
self.mesh_z/df,
self.res_model,
cmap=self.cmap,
vmin=self.climits[0],
vmax=self.climits[1])
#make a colorbar for the resistivity
cbx = mcb.make_axes(ax, shrink=self.cb_shrink, pad=self.cb_pad)
cb = mcb.ColorbarBase(cbx[0],
cmap=self.cmap,
norm=Normalize(vmin=self.climits[0],
vmax=self.climits[1]))
cb.set_label('Resistivity ($\Omega \cdot$m)',
fontdict={'size':self.font_size+1,'weight':'bold'})
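#write the tick labels as powers of ten since the color limits are
#given in log10(resistivity)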
cb.set_ticks(np.arange(int(self.climits[0]),int(self.climits[1])+1))
cb.set_ticklabels(['10$^{{{0}}}$'.format(nn) for nn in
np.arange(int(self.climits[0]),
int(self.climits[1])+1)])
#set the offsets of the stations and plot the stations
#need to figure out a way to set the marker at the surface in all
#views.
for offset, name in zip(self.station_locations, self.station_list):
#plot the station marker
#plots a V for the station because when you use scatter the spacing
#is variable if you change the limits of the y axis, this way it
#always plots at the surface.
ax.text(offset/df,
self.plot_z.min(),
self.station_marker,
horizontalalignment='center',
verticalalignment='baseline',
fontdict={'size':self.ms,'color':self.station_color})
#put station id onto station marker
#if there is a station id index
if self.station_id != None:
ax.text(offset/df,
-self.station_font_pad*pf,
name[self.station_id[0]:self.station_id[1]],
horizontalalignment='center',
verticalalignment='baseline',
fontdict=fdict)
#otherwise put on the full station name found from the data file
else:
ax.text(offset/df,
-self.station_font_pad*pf,
name,
horizontalalignment='center',
verticalalignment='baseline',
fontdict=fdict)
#set the initial limits of the plot to be square about the profile line
if self.ylimits == None:
ax.set_ylim(abs(self.station_locations.max()-
self.station_locations.min())/df,
-self.ypad*pf)
else:
ax.set_ylim(self.ylimits[1]*pf,
(self.ylimits[0]-self.ypad)*pf)
if self.xlimits == None:
ax.set_xlim(self.station_locations.min()/df-(self.xpad*pf),
self.station_locations.max()/df+(self.xpad*pf))
else:
ax.set_xlim(self.xlimits[0]*pf, self.xlimits[1]*pf)
#set the axis properties
ax.xaxis.set_minor_locator(MultipleLocator(self.xminorticks*pf))
ax.yaxis.set_minor_locator(MultipleLocator(self.yminorticks*pf))
#set axes labels
ax.set_xlabel('Horizontal Distance ({0})'.format(self.yscale),
fontdict={'size':self.font_size+2,'weight':'bold'})
ax.set_ylabel('Depth ({0})'.format(self.yscale),
fontdict={'size':self.font_size+2,'weight':'bold'})
#put a grid on if one is desired
if self.grid is not None:
ax.grid(alpha=.3, which=self.grid, lw=.35)
#set title as rms and roughness
if type(self.title) is str:
if self.title == 'on':
titlestr = os.path.join(os.path.basename(
os.path.dirname(self.iter_fn)),
os.path.basename(self.iter_fn))
ax.set_title('{0}: RMS={1:.2f}, Roughness={2:.0f}'.format(
titlestr,self.misfit_value, self.roughness_value),
fontdict={'size':self.font_size+1,
'weight':'bold'})
else:
ax.set_title('{0}; RMS={1:.2f}, Roughness={2:.0f}'.format(
self.title, self.misfit_value,
self.roughness_value),
fontdict={'size':self.font_size+1,
'weight':'bold'})
else:
print 'RMS {0:.2f}, Roughness={1:.0f}'.format(self.misfit_value,
self.roughness_value)
#plot forward model mesh
#making an extended list separated by None's speeds up the plotting
#by as much as 99 percent, handy
if self.femesh == 'on':
row_line_xlist = []
row_line_ylist = []
for xx in self.plot_x/df:
row_line_xlist.extend([xx,xx])
row_line_xlist.append(None)
row_line_ylist.extend([0, self.plot_z[0]/df])
row_line_ylist.append(None)
#plot column lines (variables are a little bit of a misnomer)
ax.plot(row_line_xlist,
row_line_ylist,
color='k',
lw=.5)
col_line_xlist = []
col_line_ylist = []
for yy in self.plot_z/df:
col_line_xlist.extend([self.plot_x[0]/df,
self.plot_x[-1]/df])
col_line_xlist.append(None)
col_line_ylist.extend([yy, yy])
col_line_ylist.append(None)
#plot row lines (variables are a little bit of a misnomer)
ax.plot(col_line_xlist,
col_line_ylist,
color='k',
lw=.5)
if self.femesh_triangles == 'on':
row_line_xlist = []
row_line_ylist = []
for xx in self.plot_x/df:
row_line_xlist.extend([xx,xx])
row_line_xlist.append(None)
row_line_ylist.extend([0, self.plot_z[0]/df])
row_line_ylist.append(None)
#plot columns
ax.plot(row_line_xlist,
row_line_ylist,
color='k',
lw=.5)
col_line_xlist = []
col_line_ylist = []
for yy in self.plot_z/df:
col_line_xlist.extend([self.plot_x[0]/df,
self.plot_x[-1]/df])
col_line_xlist.append(None)
col_line_ylist.extend([yy, yy])
col_line_ylist.append(None)
#plot rows
ax.plot(col_line_xlist,
col_line_ylist,
color='k',
lw=.5)
diag_line_xlist = []
diag_line_ylist = []
for xi, xx in enumerate(self.plot_x[:-1]/df):
for yi, yy in enumerate(self.plot_z[:-1]/df):
diag_line_xlist.extend([xx, self.plot_x[xi+1]/df])
diag_line_xlist.append(None)
diag_line_xlist.extend([xx, self.plot_x[xi+1]/df])
diag_line_xlist.append(None)
diag_line_ylist.extend([yy, self.plot_z[yi+1]/df])
diag_line_ylist.append(None)
diag_line_ylist.extend([self.plot_z[yi+1]/df, yy])
diag_line_ylist.append(None)
#plot diagonal lines.
ax.plot(diag_line_xlist,
diag_line_ylist,
color='k',
lw=.5)
#plot the regularization mesh
if self.regmesh == 'on':
line_list = []
for ii in range(len(self.model_rows)):
#get the number of layers to combine
#this index will be the first index in the vertical direction
ny1 = self.model_rows[:ii,0].sum()
#the second index in the vertical direction
ny2 = ny1+self.model_rows[ii][0]
#make the list of amalgamated columns an array for ease
lc = np.array(self.model_cols[ii])
yline = ax.plot([self.plot_x[0]/df,self.plot_x[-1]/df],
[self.plot_z[-ny1]/df,
self.plot_z[-ny1]/df],
color='b',
lw=.5)
line_list.append(yline)
#loop over the number of amalgamated blocks
for jj in range(len(self.model_cols[ii])):
#get first index in the horizontal direction
nx1 = lc[:jj].sum()
#get second index in horizontal direction
nx2 = nx1+lc[jj]
try:
if ny1 == 0:
ny1 = 1
xline = ax.plot([self.plot_x[nx1]/df,
self.plot_x[nx1]/df],
[self.plot_z[-ny1]/df,
self.plot_z[-ny2]/df],
color='b',
lw=.5)
line_list.append(xline)
except IndexError:
pass
##plot the mesh block numbers
if self.meshnum == 'on':
kk = 1
for yy in self.plot_z[::-1]/df:
for xx in self.plot_x/df:
ax.text(xx, yy, '{0}'.format(kk),
fontdict={'size':self.meshnum_font_size})
kk+=1
##plot regularization block numbers
if self.blocknum == 'on':
kk=1
for ii in range(len(self.model_rows)):
#get the number of layers to combine
#this index will be the first index in the vertical direction
ny1 = self.model_rows[:ii,0].sum()
#the second index in the vertical direction
ny2 = ny1+self.model_rows[ii][0]
#make the list of amalgamated columns an array for ease
lc = np.array(self.model_cols[ii])
#loop over the number of amalgamated blocks
for jj in range(len(self.model_cols[ii])):
#get first index in the horizontal direction
nx1 = lc[:jj].sum()
#get second index in horizontal direction
nx2 = nx1+lc[jj]
try:
if ny1 == 0:
ny1 = 1
#get center points of the blocks
yy = self.plot_z[-ny1]-(self.plot_z[-ny1]-
self.plot_z[-ny2])/2
xx = self.plot_x[nx1]-\
(self.plot_x[nx1]-self.plot_x[nx2])/2
#put the number
ax.text(xx/df, yy/df, '{0}'.format(kk),
fontdict={'size':self.block_font_size},
horizontalalignment='center',
verticalalignment='center')
kk+=1
except IndexError:
pass
plt.show()
#make attributes that can be manipulated
self.ax = ax
self.cbax = cb
def redraw_plot(self):
"""
redraw plot if parameters were changed
use this function if you updated some attributes and want to re-plot.
:Example: ::
>>> # change the color and marker of the xy components
>>> import mtpy.modeling.occam2d as occam2d
>>> ocd = occam2d.Occam2DData(r"/home/occam2d/Data.dat")
>>> p1 = ocd.plotAllResponses()
>>> #change line width
>>> p1.lw = 2
>>> p1.redraw_plot()
"""
plt.close(self.fig)
self.plot()
def save_figure(self, save_fn, file_format='pdf', orientation='portrait',
fig_dpi=None, close_fig='y'):
"""
save_plot will save the figure to save_fn.
Arguments:
-----------
**save_fn** : string
full path to save figure to, can be input as
* directory path -> the directory path to save to
in which the file will be saved as
save_fn/OccamModel.file_format
* full path -> file will be save to the given
path. If you use this option then the format
will be assumed to be provided by the path
**file_format** : [ pdf | eps | jpg | png | svg ]
file type of saved figure pdf,svg,eps...
**orientation** : [ landscape | portrait ]
orientation in which the file will be saved
*default* is portrait
**fig_dpi** : int
The resolution in dots-per-inch the file will be
saved. If None then the dpi will be that at
which the figure was made. I don't think that
it can be larger than dpi of the figure.
**close_plot** : [ y | n ]
* 'y' will close the plot after saving.
* 'n' will leave plot open
:Example: ::
>>> # to save plot as jpg
>>> model_plot.save_figure(r"/home/occam/figures",
file_format='jpg')
"""
if fig_dpi == None:
fig_dpi = self.fig_dpi
if os.path.isdir(save_fn) == False:
file_format = save_fn[-3:]
self.fig.savefig(save_fn, dpi=fig_dpi, format=file_format,
orientation=orientation, bbox_inches='tight')
else:
save_fn = os.path.join(save_fn, 'OccamModel.'+
file_format)
self.fig.savefig(save_fn, dpi=fig_dpi, format=file_format,
orientation=orientation, bbox_inches='tight')
if close_fig == 'y':
plt.clf()
plt.close(self.fig)
else:
pass
self.fig_fn = save_fn
print 'Saved figure to: '+self.fig_fn
def update_plot(self):
"""
update any parameters that were changed using the built-in draw from
canvas.
Use this if you change any of the .fig or axes properties
:Example: ::
>>> # to change the grid lines to only be on the major ticks
>>> import mtpy.modeling.occam2d as occam2d
>>> dfn = r"/home/occam2d/Inv1/data.dat"
>>> ocd = occam2d.Occam2DData(dfn)
>>> ps1 = ocd.plotAllResponses()
>>> [ax.grid(True, which='major') for ax in [ps1.axrte,ps1.axtep]]
>>> ps1.update_plot()
"""
self.fig.canvas.draw()
def __str__(self):
"""
rewrite the string builtin to give a useful message
"""
return ("Plots the resistivity model found by Occam2D.")
#==============================================================================
# plot L2 curve of iteration vs rms
#==============================================================================
class PlotL2():
"""
Plot L2 curve of iteration vs rms and rms vs roughness.
Only an .iter file needs to be input; all similar .iter files will be
read to get the rms, iteration number and roughness of each iteration.
Arguments:
----------
**iter_fn** : string
full path to an iteration file output by Occam2D.
======================= ===================================================
Keywords/attributes Description
======================= ===================================================
ax1 matplotlib.axes instance for rms vs iteration
ax2 matplotlib.axes instance for roughness vs rms
fig matplotlib.figure instance
fig_dpi resolution of figure in dots-per-inch
fig_num number of figure instance
fig_size size of figure in inches (width, height)
font_size size of axes tick labels, axes labels is +2
plot_yn [ 'y' | 'n']
'y' --> to plot on instantiation
'n' --> to not plot on instantiation
rms_arr                 structured np.array of (iteration, rms) values
rms_color color of rms marker and line
rms_lw line width of rms line
rms_marker marker for rms values
rms_marker_size size of marker for rms values
rms_mean_color color of mean line
rms_median_color color of median line
rough_color color of roughness line and marker
rough_font_size font size for iteration number inside roughness
marker
rough_lw line width for roughness line
rough_marker marker for roughness
rough_marker_size size of marker for roughness
subplot_bottom subplot spacing from bottom
subplot_left subplot spacing from left
subplot_right subplot spacing from right
subplot_top subplot spacing from top
======================= ===================================================
=================== =======================================================
Methods Description
=================== =======================================================
plot plots L2 curve.
redraw_plot call redraw_plot to redraw the figures,
if one of the attributes has been changed
save_figure saves the matplotlib.figure instance to desired
location and format
=================== ======================================================
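:Example: ::
>>> # a minimal usage sketch -- the .iter path below is hypothetical
>>> import mtpy.modeling.occam2d as occam2d
>>> l2_plot = occam2d.PlotL2(r"/home/occam2d/Inv1/Test_15.iter")
>>> # thicken the roughness line and re-plot
>>> l2_plot.rough_lw = 1.5
>>> l2_plot.redraw_plot()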
"""
def __init__(self, iter_fn, **kwargs):
self.iter_path = os.path.dirname(iter_fn)
self.iter_basename = os.path.basename(iter_fn)[:-7]
self.iter_fn_list = []
self.rms_arr = None
self.rough_arr = None
self.subplot_right = .98
self.subplot_left = .085
self.subplot_top = .91
self.subplot_bottom = .1
self.fig_num = kwargs.pop('fig_num', 1)
self.fig_size = kwargs.pop('fig_size', [6, 6])
self.fig_dpi = kwargs.pop('dpi', 300)
self.font_size = kwargs.pop('font_size', 8)
self.rms_lw = kwargs.pop('rms_lw', 1)
self.rms_marker = kwargs.pop('rms_marker', 'd')
self.rms_color = kwargs.pop('rms_color', 'k')
self.rms_marker_size = kwargs.pop('rms_marker_size', 5)
self.rms_median_color = kwargs.pop('rms_median_color', 'red')
self.rms_mean_color = kwargs.pop('rms_mean_color', 'orange')
self.rough_lw = kwargs.pop('rough_lw', .75)
self.rough_marker = kwargs.pop('rough_marker', 'o')
self.rough_color = kwargs.pop('rough_color', 'b')
self.rough_marker_size = kwargs.pop('rough_marker_size', 7)
self.rough_font_size = kwargs.pop('rough_font_size', 6)
self.plot_yn = kwargs.pop('plot_yn', 'y')
if self.plot_yn == 'y':
self.plot()
def _get_iterfn_list(self):
"""
get all iteration files for a given inversion
"""
self.iter_fn_list = [os.path.join(self.iter_path, fn)
for fn in os.listdir(self.iter_path)
if fn.find(self.iter_basename) == 0 and
fn.find('.iter') > 0]
def _get_values(self):
"""
get rms and roughness values from iteration files
"""
self._get_iterfn_list()
self.rms_arr = np.zeros((len(self.iter_fn_list), 2))
self.rough_arr = np.zeros((len(self.iter_fn_list), 2))
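#column 0 holds the iteration number, column 1 holds the value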
for ii, itfn in enumerate(self.iter_fn_list):
m_object = Model(itfn)
m_object.read_iter_file()
m_index = int(m_object.iteration)
self.rms_arr[ii, 1] = float(m_object.misfit_value)
self.rms_arr[ii, 0] = m_index
self.rough_arr[ii, 1] = float(m_object.roughness_value)
self.rough_arr[ii, 0] = m_index
#sort by iteration number
# self.rms_arr = np.sort(self.rms_arr, axis=1)
# self.rough_arr = np.sort(self.rough_arr, axis=1)
def plot(self):
"""
plot L2 curve
"""
self._get_values()
nr = self.rms_arr.shape[0]
med_rms = np.median(self.rms_arr[1:, 1])
mean_rms = np.mean(self.rms_arr[1:, 1])
#set the dimesions of the figure
plt.rcParams['font.size'] = self.font_size
plt.rcParams['figure.subplot.left'] = self.subplot_left
plt.rcParams['figure.subplot.right'] = self.subplot_right
plt.rcParams['figure.subplot.bottom'] = self.subplot_bottom
plt.rcParams['figure.subplot.top'] = self.subplot_top
#make figure instance
self.fig = plt.figure(self.fig_num,self.fig_size, dpi=self.fig_dpi)
plt.clf()
#make a subplot for RMS vs Iteration
self.ax1 = self.fig.add_subplot(1, 1, 1)
#plot the rms vs iteration
l1, = self.ax1.plot(self.rms_arr[:, 0],
self.rms_arr[:, 1],
ls='-',
color=self.rms_color,
lw=self.rms_lw,
marker=self.rms_marker,
ms=self.rms_marker_size)
#plot the median of the RMS
m1, = self.ax1.plot(self.rms_arr[:, 0],
np.repeat(med_rms, nr),
ls='--',
color=self.rms_median_color,
lw=self.rms_lw*.75)
#plot the mean of the RMS
m2, = self.ax1.plot(self.rms_arr[:, 0],
np.repeat(mean_rms, nr),
ls='--',
color=self.rms_mean_color,
lw=self.rms_lw*.75)
#make subplot for RMS vs Roughness Plot
self.ax2 = self.ax1.twiny()
self.ax2.set_xlim(self.rough_arr[1:, 1].min(),
self.rough_arr[1:, 1].max())
self.ax1.set_ylim(np.floor(self.rms_arr[1:,1].min()),
self.rms_arr[1:, 1].max())
#plot the rms vs roughness
l2, = self.ax2.plot(self.rough_arr[:, 1],
self.rms_arr[:, 1],
ls='--',
color=self.rough_color,
lw=self.rough_lw,
marker=self.rough_marker,
ms=self.rough_marker_size,
mfc='white')
#plot the iteration number inside the roughness marker
for rms, ii, rough in zip(self.rms_arr[:, 1], self.rms_arr[:, 0],
self.rough_arr[:, 1]):
#need this because if the roughness is larger than this number
#matplotlib puts the text out of bounds and a draw_text_image
#error is raised and file cannot be saved, also the other
#numbers are not put in.
if rough > 1e8:
pass
else:
self.ax2.text(rough,
rms,
'{0:.0f}'.format(ii),
horizontalalignment='center',
verticalalignment='center',
fontdict={'size':self.rough_font_size,
'weight':'bold',
'color':self.rough_color})
#make a legend
self.ax1.legend([l1, l2, m1, m2],
['RMS', 'Roughness',
'Median_RMS={0:.2f}'.format(med_rms),
'Mean_RMS={0:.2f}'.format(mean_rms)],
ncol=1,
loc='upper right',
columnspacing=.25,
markerscale=.75,
handletextpad=.15)
#set the axis properties for RMS vs iteration
self.ax1.yaxis.set_minor_locator(MultipleLocator(.1))
self.ax1.xaxis.set_minor_locator(MultipleLocator(1))
self.ax1.xaxis.set_major_locator(MultipleLocator(1))
self.ax1.set_ylabel('RMS',
fontdict={'size':self.font_size+2,
'weight':'bold'})
self.ax1.set_xlabel('Iteration',
fontdict={'size':self.font_size+2,
'weight':'bold'})
self.ax1.grid(alpha=.25, which='both', lw=self.rough_lw)
self.ax2.set_xlabel('Roughness',
fontdict={'size':self.font_size+2,
'weight':'bold',
'color':self.rough_color})
for t2 in self.ax2.get_xticklabels():
t2.set_color(self.rough_color)
plt.show()
def redraw_plot(self):
"""
redraw plot if parameters were changed
use this function if you updated some attributes and want to re-plot.
:Example: ::
>>> # change the color and marker of the xy components
>>> import mtpy.modeling.occam2d as occam2d
>>> ocd = occam2d.Occam2DData(r"/home/occam2d/Data.dat")
>>> p1 = ocd.plotAllResponses()
>>> #change line width
>>> p1.lw = 2
>>> p1.redraw_plot()
"""
plt.close(self.fig)
self.plot()
def save_figure(self, save_fn, file_format='pdf', orientation='portrait',
fig_dpi=None, close_fig='y'):
"""
save_plot will save the figure to save_fn.
Arguments:
-----------
**save_fn** : string
full path to save figure to, can be input as
* directory path -> the directory path to save to
in which the file will be saved as
save_fn/_L2.file_format
* full path -> file will be save to the given
path. If you use this option then the format
will be assumed to be provided by the path
**file_format** : [ pdf | eps | jpg | png | svg ]
file type of saved figure pdf,svg,eps...
**orientation** : [ landscape | portrait ]
orientation in which the file will be saved
*default* is portrait
**fig_dpi** : int
The resolution in dots-per-inch the file will be
saved. If None then the dpi will be that at
which the figure was made. I don't think that
it can be larger than dpi of the figure.
**close_plot** : [ y | n ]
* 'y' will close the plot after saving.
* 'n' will leave plot open
:Example: ::
>>> # to save plot as jpg
>>> import mtpy.modeling.occam2d as occam2d
>>> dfn = r"/home/occam2d/Inv1/data.dat"
>>> ocd = occam2d.Occam2DData(dfn)
>>> ps1 = ocd.plotPseudoSection()
>>> ps1.save_plot(r'/home/MT/figures', file_format='jpg')
"""
if fig_dpi == None:
fig_dpi = self.fig_dpi
if os.path.isdir(save_fn) == False:
file_format = save_fn[-3:]
self.fig.savefig(save_fn, dpi=fig_dpi, format=file_format,
orientation=orientation, bbox_inches='tight')
else:
save_fn = os.path.join(save_fn, '_L2.'+
file_format)
self.fig.savefig(save_fn, dpi=fig_dpi, format=file_format,
orientation=orientation, bbox_inches='tight')
if close_fig == 'y':
plt.clf()
plt.close(self.fig)
else:
pass
self.fig_fn = save_fn
print 'Saved figure to: '+self.fig_fn
def update_plot(self):
"""
update any parameters that were changed using the built-in draw from
canvas.
Use this if you change any of the .fig or axes properties
:Example: ::
>>> # to change the grid lines to only be on the major ticks
>>> import mtpy.modeling.occam2d as occam2d
>>> dfn = r"/home/occam2d/Inv1/data.dat"
>>> ocd = occam2d.Occam2DData(dfn)
>>> ps1 = ocd.plotAllResponses()
>>> [ax.grid(True, which='major') for ax in [ps1.axrte,ps1.axtep]]
>>> ps1.update_plot()
"""
self.fig.canvas.draw()
def __str__(self):
"""
rewrite the string builtin to give a useful message
"""
return ("Plots RMS vs Iteration computed by Occam2D")
#==============================================================================
# plot pseudo section of data and model response
#==============================================================================
class PlotPseudoSection(object):
"""
plot a pseudo section of the data and response if given
Arguments:
-------------
**data_fn** : string
full path to data file.
**resp_fn** : string
full path to response file.
==================== ======================================================
key words description
==================== ======================================================
axmpte matplotlib.axes instance for TE model phase
axmptm matplotlib.axes instance for TM model phase
axmrte matplotlib.axes instance for TE model app. res
axmrtm matplotlib.axes instance for TM model app. res
axpte matplotlib.axes instance for TE data phase
axptm matplotlib.axes instance for TM data phase
axrte matplotlib.axes instance for TE data app. res.
axrtm matplotlib.axes instance for TM data app. res.
cb_pad padding between colorbar and axes
cb_shrink percentage to shrink the colorbar to
fig matplotlib.figure instance
fig_dpi resolution of figure in dots per inch
fig_num number of figure instance
fig_size size of figure in inches (width, height)
font_size size of font in points
label_list list to label plots
ml factor to label stations if 2 every other station
is labeled on the x-axis
period np.array of periods to plot
phase_cmap color map name of phase
phase_limits_te limits for te phase in degrees (min, max)
phase_limits_tm limits for tm phase in degrees (min, max)
plot_resp [ 'y' | 'n' ] to plot response
plot_tipper [ 'y' | 'n' ] to plot tipper
plot_yn [ 'y' | 'n' ] 'y' to plot on instantiation
res_cmap color map name for resistivity
res_limits_te limits for te resistivity in log scale (min, max)
res_limits_tm limits for tm resistivity in log scale (min, max)
rp_list list of dictionaries as made from read2Dresp
station_id index to get station name (min, max)
station_list station list got from rp_list
subplot_bottom subplot spacing from bottom (relative coordinates)
subplot_hspace vertical spacing between subplots
subplot_left subplot spacing from left
subplot_right subplot spacing from right
subplot_top subplot spacing from top
subplot_wspace horizontal spacing between subplots
==================== ======================================================
=================== =======================================================
Methods Description
=================== =======================================================
plot                plots a pseudo-section of apparent resistivity and phase
of data and model if given. called on instantiation
if plot_yn is 'y'.
redraw_plot call redraw_plot to redraw the figures,
if one of the attributes has been changed
save_figure saves the matplotlib.figure instance to desired
location and format
=================== =======================================================
:Example: ::
>>> import mtpy.modeling.occam2d as occam2d
>>> r_fn = r"/home/Occam2D/Line1/Inv1/Test_15.resp"
>>> d_fn = r"/home/Occam2D/Line1/Inv1/DataRW.dat"
>>> ps_plot = occam2d.PlotPseudoSection(d_fn, r_fn)
"""
def __init__(self, data_fn, resp_fn=None, **kwargs):
self.data_fn = data_fn
self.resp_fn = resp_fn
self.plot_resp = kwargs.pop('plot_resp', 'y')
if self.resp_fn is None:
self.plot_resp = 'n'
self.label_list = [r'$\rho_{TE-Data}$',r'$\rho_{TE-Model}$',
r'$\rho_{TM-Data}$',r'$\rho_{TM-Model}$',
'$\phi_{TE-Data}$','$\phi_{TE-Model}$',
'$\phi_{TM-Data}$','$\phi_{TM-Model}$',
'$\Re e\{T_{Data}\}$','$\Re e\{T_{Model}\}$',
'$\Im m\{T_{Data}\}$','$\Im m\{T_{Model}\}$']
self.phase_limits_te = kwargs.pop('phase_limits_te', (-5, 95))
self.phase_limits_tm = kwargs.pop('phase_limits_tm', (-5, 95))
self.res_limits_te = kwargs.pop('res_limits_te', (0,3))
self.res_limits_tm = kwargs.pop('res_limits_tm', (0,3))
self.tip_limits_re = kwargs.pop('tip_limits_re', (-1, 1))
self.tip_limits_im = kwargs.pop('tip_limits_im', (-1, 1))
self.phase_cmap = kwargs.pop('phase_cmap', 'jet')
self.res_cmap = kwargs.pop('res_cmap', 'jet_r')
self.tip_cmap = kwargs.pop('tip_cmap', 'Spectral')
self.ml = kwargs.pop('ml', 2)
self.station_id = kwargs.pop('station_id', [0,4])
self.fig_num = kwargs.pop('fig_num', 1)
self.fig_size = kwargs.pop('fig_size', [6, 6])
self.fig_dpi = kwargs.pop('dpi', 300)
self.subplot_wspace = .025
self.subplot_hspace = .0
self.subplot_right = .95
self.subplot_left = .085
self.subplot_top = .97
self.subplot_bottom = .1
self.font_size = kwargs.pop('font_size', 6)
self.plot_type = kwargs.pop('plot_type', '1')
self.plot_num = kwargs.pop('plot_num', 2)
self.plot_tipper = kwargs.pop('plot_tipper', 'n')
self.plot_yn = kwargs.pop('plot_yn', 'y')
self.cb_shrink = .7
self.cb_pad = .015
self.axrte = None
self.axrtm = None
self.axpte = None
self.axptm = None
self.axmrte = None
self.axmrtm = None
self.axmpte = None
self.axmptm = None
self.axtpr = None
self.axtpi = None
self.axmtpr = None
self.axmtpi = None
self.te_res_arr = None
self.tm_res_arr = None
self.te_phase_arr = None
self.tm_phase_arr = None
self.tip_real_arr = None
self.tip_imag_arr = None
self.fig = None
if self.plot_yn == 'y':
self.plot()
def plot(self):
"""
plot pseudo section of data and response if given
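:Example: ::
>>> # hypothetical paths; plot() runs on instantiation when plot_yn == 'y'
>>> import mtpy.modeling.occam2d as occam2d
>>> d_fn = r"/home/Occam2D/Line1/Inv1/DataRW.dat"
>>> r_fn = r"/home/Occam2D/Line1/Inv1/Test_15.resp"
>>> ps_plot = occam2d.PlotPseudoSection(d_fn, r_fn, plot_tipper='y')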
"""
if self.plot_resp == 'y':
nr = 2
else:
nr = 1
data_obj = Data()
data_obj.read_data_file(self.data_fn)
if self.resp_fn is not None:
resp_obj = Response()
resp_obj.read_response_file(self.resp_fn)
ns = len(data_obj.station_list)
nf = len(data_obj.period)
ylimits = (data_obj.period.max(), data_obj.period.min())
#make a grid for pcolormesh so you can have a log scale
#get things into arrays for plotting
offset_list = np.zeros(ns+1)
te_res_arr = np.ones((nf, ns, nr))
tm_res_arr = np.ones((nf, ns, nr))
te_phase_arr = np.zeros((nf, ns, nr))
tm_phase_arr = np.zeros((nf, ns, nr))
tip_real_arr = np.zeros((nf, ns, nr))
tip_imag_arr = np.zeros((nf, ns, nr))
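#arrays are (n_periods, n_stations, nr): index 0 on the last axis is the
#data; index 1 (present when a response is plotted) is the model response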
for ii, d_dict in enumerate(data_obj.data):
offset_list[ii] = d_dict['offset']
te_res_arr[:, ii, 0] = d_dict['te_res'][0]
tm_res_arr[:, ii, 0] = d_dict['tm_res'][0]
te_phase_arr[:, ii, 0] = d_dict['te_phase'][0]
tm_phase_arr[:, ii, 0] = d_dict['tm_phase'][0]
tip_real_arr[:, ii, 0] = d_dict['re_tip'][0]
tip_imag_arr[:, ii, 0] = d_dict['im_tip'][0]
#read in response data
if self.plot_resp == 'y':
for ii, r_dict in enumerate(resp_obj.resp):
te_res_arr[:, ii, 1] = r_dict['te_res'][0]
tm_res_arr[:, ii, 1] = r_dict['tm_res'][0]
te_phase_arr[:, ii, 1] = r_dict['te_phase'][0]
tm_phase_arr[:, ii, 1] = r_dict['tm_phase'][0]
tip_real_arr[:, ii, 1] = r_dict['re_tip'][0]
tip_imag_arr[:, ii, 1] = r_dict['im_tip'][0]
#need to make any zeros 1 for taking log10
te_res_arr[np.where(te_res_arr == 0)] = 1.0
tm_res_arr[np.where(tm_res_arr == 0)] = 1.0
self.te_res_arr = te_res_arr
self.tm_res_arr = tm_res_arr
self.te_phase_arr = te_phase_arr
self.tm_phase_arr = tm_phase_arr
self.tip_real_arr = tip_real_arr
self.tip_imag_arr = tip_imag_arr
#need to extend the last grid cell because meshgrid expects n+1 cells
offset_list[-1] = offset_list[-2]*1.15
#make a meshgrid for plotting
#flip frequency so bottom corner is long period
dgrid, fgrid = np.meshgrid(offset_list, data_obj.period[::-1])
#make list for station labels
sindex_1 = self.station_id[0]
sindex_2 = self.station_id[1]
slabel = [data_obj.station_list[ss][sindex_1:sindex_2]
for ss in range(0, ns, self.ml)]
xloc = offset_list[0]+abs(offset_list[0]-offset_list[1])/5
yloc = 1.10*data_obj.period[1]
plt.rcParams['font.size'] = self.font_size
plt.rcParams['figure.subplot.bottom'] = self.subplot_bottom
plt.rcParams['figure.subplot.top'] = self.subplot_top
plt.rcParams['figure.subplot.right'] = self.subplot_right
plt.rcParams['figure.subplot.left'] = self.subplot_left
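#build colorbar tick labels of the form 10^n for the log10 resistivity
#color scales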
log_labels_te = ['10$^{{{0}}}$'.format(nn)
for nn in np.arange(int(self.res_limits_te[0]),
int(self.res_limits_te[1])+1)]
log_labels_tm = ['10$^{{{0}}}$'.format(nn)
for nn in np.arange(int(self.res_limits_tm[0]),
int(self.res_limits_tm[1])+1)]
self.fig = plt.figure(self.fig_num, self.fig_size, dpi=self.fig_dpi)
plt.clf()
if self.plot_resp == 'y':
if self.plot_tipper == 'y':
gs1 = gridspec.GridSpec(1, 3,
left=self.subplot_left,
right=self.subplot_right,
wspace=self.subplot_wspace)
gs4 = gridspec.GridSpecFromSubplotSpec(2, 2,
hspace=self.subplot_hspace,
wspace=0,
subplot_spec=gs1[2])
else:
gs1 = gridspec.GridSpec(1, 2,
left=self.subplot_left,
right=self.subplot_right,
wspace=self.subplot_wspace)
gs2 = gridspec.GridSpecFromSubplotSpec(2, 2,
hspace=self.subplot_hspace,
wspace=0,
subplot_spec=gs1[0])
gs3 = gridspec.GridSpecFromSubplotSpec(2, 2,
hspace=self.subplot_hspace,
wspace=0,
subplot_spec=gs1[1])
#plot TE resistivity data
self.axrte = plt.Subplot(self.fig, gs2[0, 0])
self.fig.add_subplot(self.axrte)
self.axrte.pcolormesh(dgrid,
fgrid,
np.flipud(np.log10(te_res_arr[:, :, 0])),
cmap=self.res_cmap,
vmin=self.res_limits_te[0],
vmax=self.res_limits_te[1])
#plot TE resistivity model
self.axmrte = plt.Subplot(self.fig, gs2[0, 1])
self.fig.add_subplot(self.axmrte)
self.axmrte.pcolormesh(dgrid,
fgrid,
np.flipud(np.log10(te_res_arr[:, :, 1])),
cmap=self.res_cmap,
vmin=self.res_limits_te[0],
vmax=self.res_limits_te[1])
#plot TM resistivity data
self.axrtm = plt.Subplot(self.fig, gs3[0, 0])
self.fig.add_subplot(self.axrtm)
self.axrtm.pcolormesh(dgrid,
fgrid,
np.flipud(np.log10(tm_res_arr[:,:,0])),
cmap=self.res_cmap,
vmin=self.res_limits_tm[0],
vmax=self.res_limits_tm[1])
#plot TM resistivity model
self.axmrtm = plt.Subplot(self.fig, gs3[0, 1])
self.fig.add_subplot(self.axmrtm)
self.axmrtm.pcolormesh(dgrid,
fgrid,
np.flipud(np.log10(tm_res_arr[:,:,1])),
cmap=self.res_cmap,
vmin=self.res_limits_tm[0],
vmax=self.res_limits_tm[1])
#plot TE phase data
self.axpte = plt.Subplot(self.fig, gs2[1, 0])
self.fig.add_subplot(self.axpte)
self.axpte.pcolormesh(dgrid,
fgrid,
np.flipud(te_phase_arr[:,:,0]),
cmap=self.phase_cmap,
vmin=self.phase_limits_te[0],
vmax=self.phase_limits_te[1])
#plot TE phase model
self.axmpte = plt.Subplot(self.fig, gs2[1, 1])
self.fig.add_subplot(self.axmpte)
self.axmpte.pcolormesh(dgrid,
fgrid,
np.flipud(te_phase_arr[:,:,1]),
cmap=self.phase_cmap,
vmin=self.phase_limits_te[0],
vmax=self.phase_limits_te[1])
#plot TM phase data
self.axptm = plt.Subplot(self.fig, gs3[1, 0])
self.fig.add_subplot(self.axptm)
self.axptm.pcolormesh(dgrid,
fgrid,
np.flipud(tm_phase_arr[:,:,0]),
cmap=self.phase_cmap,
vmin=self.phase_limits_tm[0],
vmax=self.phase_limits_tm[1])
#plot TM phase model
self.axmptm = plt.Subplot(self.fig, gs3[1, 1])
self.fig.add_subplot(self.axmptm)
self.axmptm.pcolormesh(dgrid,
fgrid,
np.flipud(tm_phase_arr[:,:,1]),
cmap=self.phase_cmap,
vmin=self.phase_limits_tm[0],
vmax=self.phase_limits_tm[1])
ax_list=[self.axrte, self.axmrte, self.axrtm, self.axmrtm,
self.axpte, self.axmpte, self.axptm, self.axmptm]
if self.plot_tipper == 'y':
#plot real tipper data
self.axtpr = plt.Subplot(self.fig, gs4[0, 0])
self.fig.add_subplot(self.axtpr)
self.axtpr.pcolormesh(dgrid,
fgrid,
np.flipud(tip_real_arr[:,:,0]),
cmap=self.tip_cmap,
vmin=self.tip_limits_re[0],
vmax=self.tip_limits_re[1])
#plot real tipper model
self.axmtpr = plt.Subplot(self.fig, gs4[0, 1])
self.fig.add_subplot(self.axmtpr)
self.axmtpr.pcolormesh(dgrid,
fgrid,
np.flipud(tip_real_arr[:,:,1]),
cmap=self.tip_cmap,
vmin=self.tip_limits_re[0],
vmax=self.tip_limits_re[1])
#plot imag tipper data
self.axtpi = plt.Subplot(self.fig, gs4[1, 0])
self.fig.add_subplot(self.axtpi)
self.axtpi.pcolormesh(dgrid,
fgrid,
np.flipud(tip_imag_arr[:,:,0]),
cmap=self.tip_cmap,
vmin=self.tip_limits_im[0],
vmax=self.tip_limits_im[1])
#plot imag tipper model
self.axmtpi = plt.Subplot(self.fig, gs4[1, 1])
self.fig.add_subplot(self.axmtpi)
self.axmtpi.pcolormesh(dgrid,
fgrid,
np.flipud(tip_imag_arr[:,:,1]),
cmap=self.tip_cmap,
vmin=self.tip_limits_im[0],
vmax=self.tip_limits_im[1])
ax_list.append(self.axtpr)
ax_list.append(self.axmtpr)
ax_list.append(self.axtpi)
ax_list.append(self.axmtpi)
#make everything look tidy
for xx, ax in enumerate(ax_list):
ax.semilogy()
ax.set_ylim(ylimits)
ax.xaxis.set_ticks(offset_list[np.arange(0, ns, self.ml)])
ax.xaxis.set_ticks(offset_list, minor=True)
ax.xaxis.set_ticklabels(slabel)
ax.set_xlim(offset_list.min(),offset_list.max())
if np.remainder(xx, 2.0) == 1:
plt.setp(ax.yaxis.get_ticklabels(), visible=False)
cbx = mcb.make_axes(ax,
shrink=self.cb_shrink,
pad=self.cb_pad)
if xx == 2 or xx == 6 or xx == 8 or xx == 10:
plt.setp(ax.yaxis.get_ticklabels(), visible=False)
if xx < 4:
plt.setp(ax.xaxis.get_ticklabels(), visible=False)
if xx == 1:
cb = mcb.ColorbarBase(cbx[0],cmap=self.res_cmap,
norm=Normalize(vmin=self.res_limits_te[0],
vmax=self.res_limits_te[1]))
cb.set_ticks(np.arange(int(self.res_limits_te[0]),
int(self.res_limits_te[1])+1))
cb.set_ticklabels(log_labels_te)
if xx == 3:
cb = mcb.ColorbarBase(cbx[0],cmap=self.res_cmap,
norm=Normalize(vmin=self.res_limits_tm[0],
vmax=self.res_limits_tm[1]))
cb.set_label('App. Res. ($\Omega \cdot$m)',
fontdict={'size':self.font_size+1,
'weight':'bold'})
cb.set_ticks(np.arange(int(self.res_limits_tm[0]),
int(self.res_limits_tm[1])+1))
cb.set_ticklabels(log_labels_tm)
else:
#color bar TE phase
if xx == 5:
cb = mcb.ColorbarBase(cbx[0],cmap=self.phase_cmap,
norm=Normalize(vmin=self.phase_limits_te[0],
vmax=self.phase_limits_te[1]))
#color bar TM phase
if xx == 7:
cb = mcb.ColorbarBase(cbx[0],cmap=self.phase_cmap,
norm=Normalize(vmin=self.phase_limits_tm[0],
vmax=self.phase_limits_tm[1]))
cb.set_label('Phase (deg)',
fontdict={'size':self.font_size+1,
'weight':'bold'})
#color bar tipper Real
if xx == 9:
cb = mcb.ColorbarBase(cbx[0],cmap=self.tip_cmap,
norm=Normalize(vmin=self.tip_limits_re[0],
vmax=self.tip_limits_re[1]))
cb.set_label('Re{T}',
fontdict={'size':self.font_size+1,
'weight':'bold'})
if xx == 11:
cb = mcb.ColorbarBase(cbx[0],cmap=self.tip_cmap,
norm=Normalize(vmin=self.tip_limits_im[0],
vmax=self.tip_limits_im[1]))
cb.set_label('Im{T}',
fontdict={'size':self.font_size+1,
'weight':'bold'})
ax.text(xloc, yloc, self.label_list[xx],
fontdict={'size':self.font_size+1},
bbox={'facecolor':'white'},
horizontalalignment='left',
verticalalignment='top')
if xx == 0 or xx == 4:
ax.set_ylabel('Period (s)',
fontdict={'size':self.font_size+2,
'weight':'bold'})
if xx>3:
ax.set_xlabel('Station',fontdict={'size':self.font_size+2,
'weight':'bold'})
plt.show()
else:
if self.plot_tipper == 'y':
gs1 = gridspec.GridSpec(2, 3,
left=self.subplot_left,
right=self.subplot_right,
hspace=self.subplot_hspace,
wspace=self.subplot_wspace)
else:
gs1 = gridspec.GridSpec(2, 2,
left=self.subplot_left,
right=self.subplot_right,
hspace=self.subplot_hspace,
wspace=self.subplot_wspace)
#plot TE resistivity data
self.axrte = self.fig.add_subplot(gs1[0, 0])
self.axrte.pcolormesh(dgrid,
fgrid,
np.flipud(np.log10(te_res_arr[:, :, 0])),
cmap=self.res_cmap,
vmin=self.res_limits_te[0],
vmax=self.res_limits_te[1])
#plot TM resistivity data
self.axrtm = self.fig.add_subplot(gs1[0, 1])
self.axrtm.pcolormesh(dgrid,
fgrid,
np.flipud(np.log10(tm_res_arr[:, :, 0])),
cmap=self.res_cmap,
vmin=self.res_limits_tm[0],
vmax=self.res_limits_tm[1])
#plot TE phase data
self.axpte = self.fig.add_subplot(gs1[1, 0])
self.axpte.pcolormesh(dgrid,
fgrid,
np.flipud(te_phase_arr[:, :, 0]),
cmap=self.phase_cmap,
vmin=self.phase_limits_te[0],
vmax=self.phase_limits_te[1])
#plot TM phase data
self.axptm = self.fig.add_subplot(gs1[1, 1])
self.axptm.pcolormesh(dgrid,
fgrid,
np.flipud(tm_phase_arr[:,:, 0]),
cmap=self.phase_cmap,
vmin=self.phase_limits_tm[0],
vmax=self.phase_limits_tm[1])
ax_list = [self.axrte, self.axrtm, self.axpte, self.axptm]
if self.plot_tipper == 'y':
#plot real tipper data
self.axtpr = plt.Subplot(self.fig, gs1[0, 2])
self.fig.add_subplot(self.axtpr)
self.axtpr.pcolormesh(dgrid,
fgrid,
np.flipud(tip_real_arr[:,:,0]),
cmap=self.tip_cmap,
vmin=self.tip_limits_re[0],
vmax=self.tip_limits_re[1])
#plot imaginary tipper data
self.axtpi = plt.Subplot(self.fig, gs1[1, 2])
self.fig.add_subplot(self.axtpi)
self.axtpi.pcolormesh(dgrid,
fgrid,
np.flipud(tip_imag_arr[:,:,0]),
cmap=self.tip_cmap,
vmin=self.tip_limits_im[0],
vmax=self.tip_limits_im[1])
ax_list.append(self.axtpr)
ax_list.append(self.axtpi)
#make everything look tidy
for xx,ax in enumerate(ax_list):
ax.semilogy()
ax.set_ylim(ylimits)
ax.xaxis.set_ticks(offset_list[np.arange(0, ns, self.ml)])
ax.xaxis.set_ticks(offset_list, minor=True)
ax.xaxis.set_ticklabels(slabel)
ax.grid(True, alpha=.25)
ax.set_xlim(offset_list.min(),offset_list.max())
cbx = mcb.make_axes(ax,
shrink=self.cb_shrink,
pad=self.cb_pad)
if xx == 0:
plt.setp(ax.xaxis.get_ticklabels(), visible=False)
cb = mcb.ColorbarBase(cbx[0],cmap=self.res_cmap,
norm=Normalize(vmin=self.res_limits_te[0],
vmax=self.res_limits_te[1]))
cb.set_ticks(np.arange(self.res_limits_te[0],
self.res_limits_te[1]+1))
cb.set_ticklabels(log_labels_te)
elif xx == 1:
plt.setp(ax.xaxis.get_ticklabels(), visible=False)
plt.setp(ax.yaxis.get_ticklabels(), visible=False)
cb = mcb.ColorbarBase(cbx[0],cmap=self.res_cmap,
norm=Normalize(vmin=self.res_limits_tm[0],
vmax=self.res_limits_tm[1]))
cb.set_label('App. Res. ($\Omega \cdot$m)',
fontdict={'size':self.font_size+1,
'weight':'bold'})
cb.set_ticks(np.arange(self.res_limits_tm[0],
self.res_limits_tm[1]+1))
cb.set_ticklabels(log_labels_tm)
elif xx == 2:
cb = mcb.ColorbarBase(cbx[0],cmap=self.phase_cmap,
norm=Normalize(vmin=self.phase_limits_te[0],
vmax=self.phase_limits_te[1]))
cb.set_ticks(np.arange(self.phase_limits_te[0],
self.phase_limits_te[1]+1, 15))
elif xx == 3:
plt.setp(ax.yaxis.get_ticklabels(), visible=False)
cb = mcb.ColorbarBase(cbx[0],cmap=self.phase_cmap,
norm=Normalize(vmin=self.phase_limits_tm[0],
vmax=self.phase_limits_tm[1]))
cb.set_label('Phase (deg)',
fontdict={'size':self.font_size+1,
'weight':'bold'})
cb.set_ticks(np.arange(self.phase_limits_tm[0],
self.phase_limits_tm[1]+1, 15))
#real tipper
elif xx == 4:
plt.setp(ax.xaxis.get_ticklabels(), visible=False)
plt.setp(ax.yaxis.get_ticklabels(), visible=False)
cb = mcb.ColorbarBase(cbx[0], cmap=self.tip_cmap,
norm=Normalize(vmin=self.tip_limits_re[0],
vmax=self.tip_limits_re[1]))
cb.set_label('Re{T}',
fontdict={'size':self.font_size+1,
'weight':'bold'})
#imag tipper
elif xx == 5:
plt.setp(ax.yaxis.get_ticklabels(), visible=False)
cb = mcb.ColorbarBase(cbx[0], cmap=self.tip_cmap,
norm=Normalize(vmin=self.tip_limits_im[0],
vmax=self.tip_limits_im[1]))
cb.set_label('Im{T}',
fontdict={'size':self.font_size+1,
'weight':'bold'})
ax.text(xloc, yloc, self.label_list[2*xx],
fontdict={'size':self.font_size+1},
bbox={'facecolor':'white'},
horizontalalignment='left',
verticalalignment='top')
if xx == 0 or xx == 2:
ax.set_ylabel('Period (s)',
fontdict={'size':self.font_size+2,
'weight':'bold'})
if xx>1:
ax.set_xlabel('Station',fontdict={'size':self.font_size+2,
'weight':'bold'})
plt.show()
def redraw_plot(self):
"""
redraw plot if parameters were changed
use this function if you updated some attributes and want to re-plot.
:Example: ::
>>> # change the color and marker of the xy components
>>> import mtpy.modeling.occam2d as occam2d
>>> ocd = occam2d.Occam2DData(r"/home/occam2d/Data.dat")
>>> p1 = ocd.plotPseudoSection()
>>> #change color of te markers to a gray-blue
>>> p1.res_cmap = 'seismic_r'
>>> p1.redraw_plot()
"""
plt.close(self.fig)
self.plot()
def save_figure(self, save_fn, file_format='pdf', orientation='portrait',
fig_dpi=None, close_plot='y'):
"""
save_plot will save the figure to save_fn.
Arguments:
-----------
**save_fn** : string
full path to save figure to, can be input as
* directory path -> the directory path to save to
in which the file will be saved as
save_fn/OccamPseudoSection.file_format
* full path -> file will be save to the given
path. If you use this option then the format
will be assumed to be provided by the path
**file_format** : [ pdf | eps | jpg | png | svg ]
file type of saved figure pdf,svg,eps...
**orientation** : [ landscape | portrait ]
orientation in which the file will be saved
*default* is portrait
**fig_dpi** : int
The resolution in dots-per-inch the file will be
saved. If None then the dpi will be that at
which the figure was made. I don't think that
it can be larger than dpi of the figure.
**close_plot** : [ y | n ]
* 'y' will close the plot after saving.
* 'n' will leave plot open
:Example: ::
>>> # to save plot as jpg
>>> import mtpy.modeling.occam2d as occam2d
>>> dfn = r"/home/occam2d/Inv1/data.dat"
>>> ocd = occam2d.Occam2DData(dfn)
>>> ps1 = ocd.plotPseudoSection()
>>> ps1.save_plot(r'/home/MT/figures', file_format='jpg')
"""
if fig_dpi == None:
fig_dpi = self.fig_dpi
if os.path.isdir(save_fn) == False:
file_format = save_fn[-3:]
self.fig.savefig(save_fn, dpi=fig_dpi, format=file_format,
orientation=orientation, bbox_inches='tight')
else:
save_fn = os.path.join(save_fn, 'OccamPseudoSection.'+
file_format)
self.fig.savefig(save_fn, dpi=fig_dpi, format=file_format,
orientation=orientation, bbox_inches='tight')
if close_plot == 'y':
plt.clf()
plt.close(self.fig)
else:
pass
self.fig_fn = save_fn
print 'Saved figure to: '+self.fig_fn
def update_plot(self):
"""
update any parameters that were changed using the built-in draw from
canvas.
Use this if you change any of the .fig or axes properties
:Example: ::
>>> # to change the grid lines to only be on the major ticks
>>> import mtpy.modeling.occam2d as occam2d
>>> dfn = r"/home/occam2d/Inv1/data.dat"
>>> ocd = occam2d.Occam2DData(dfn)
>>> ps1 = ocd.plotPseudoSection()
        >>> [ax.grid(True, which='major') for ax in [ps1.axrte, ps1.axpte]]
>>> ps1.update_plot()
"""
self.fig.canvas.draw()
def __str__(self):
"""
rewrite the string builtin to give a useful message
"""
return ("Plots a pseudo section of TE and TM modes for data and "
"response if given.")
#==============================================================================
# plot misfits as a pseudo-section
#==============================================================================
class PlotMisfitPseudoSection(object):
"""
plot a pseudo section of the data and response if given
Arguments:
-------------
**rp_list** : list of dictionaries for each station with keywords:
* *station* : string
station name
* *offset* : float
relative offset
* *resxy* : np.array(nf,4)
TE resistivity and error as row 0 and 1 respectively
                * *resyx* : np.array(nf,4)
TM resistivity and error as row 0 and 1 respectively
* *phasexy* : np.array(nf,4)
TE phase and error as row 0 and 1 respectively
* *phaseyx* : np.array(nf,4)
                            TM phase and error as row 0 and 1 respectively
* *realtip* : np.array(nf,4)
Real Tipper and error as row 0 and 1 respectively
* *imagtip* : np.array(nf,4)
Imaginary Tipper and error as row 0 and 1
respectively
        Note that the resistivity will be in log10 space. Also, there
        are 2 extra rows in the data arrays; these hold the response
        from the inversion.
**period** : np.array of periods to plot that correspond to the index
values of each rp_list entry ie. resxy.
==================== ==================================================
key words description
==================== ==================================================
axmpte matplotlib.axes instance for TE model phase
axmptm matplotlib.axes instance for TM model phase
axmrte matplotlib.axes instance for TE model app. res
axmrtm matplotlib.axes instance for TM model app. res
axpte matplotlib.axes instance for TE data phase
axptm matplotlib.axes instance for TM data phase
axrte matplotlib.axes instance for TE data app. res.
axrtm matplotlib.axes instance for TM data app. res.
cb_pad padding between colorbar and axes
cb_shrink percentage to shrink the colorbar to
fig matplotlib.figure instance
fig_dpi resolution of figure in dots per inch
fig_num number of figure instance
fig_size size of figure in inches (width, height)
font_size size of font in points
label_list list to label plots
    ml                   factor for labeling stations; if 2, every other
                         station is labeled on the x-axis
period np.array of periods to plot
phase_cmap color map name of phase
phase_limits_te limits for te phase in degrees (min, max)
phase_limits_tm limits for tm phase in degrees (min, max)
plot_resp [ 'y' | 'n' ] to plot response
plot_yn [ 'y' | 'n' ] 'y' to plot on instantiation
res_cmap color map name for resistivity
res_limits_te limits for te resistivity in log scale (min, max)
res_limits_tm limits for tm resistivity in log scale (min, max)
rp_list list of dictionaries as made from read2Dresp
    station_id           (min, max) indices used to slice station names
station_list station list got from rp_list
subplot_bottom subplot spacing from bottom (relative coordinates)
subplot_hspace vertical spacing between subplots
subplot_left subplot spacing from left
subplot_right subplot spacing from right
subplot_top subplot spacing from top
subplot_wspace horizontal spacing between subplots
==================== ==================================================
=================== =======================================================
Methods Description
=================== =======================================================
    plot                plots a pseudo-section of apparent resistivity and
                        phase of the data and model response if given.
                        Called on instantiation if plot_yn is 'y'.
redraw_plot call redraw_plot to redraw the figures,
if one of the attributes has been changed
save_figure saves the matplotlib.figure instance to desired
location and format
=================== =======================================================
:Example: ::
        >>> import mtpy.modeling.occam2d as occam2d
        >>> dfn = r"/home/Occam2D/Line1/Inv1/DataRW.dat"
        >>> rfn = r"/home/Occam2D/Line1/Inv1/Test_15.resp"
        >>> ps1 = occam2d.PlotMisfitPseudoSection(dfn, rfn)
"""
def __init__(self, data_fn, resp_fn, **kwargs):
self.data_fn = data_fn
self.resp_fn = resp_fn
self.label_list = [r'$\rho_{TE}$', r'$\rho_{TM}$',
'$\phi_{TE}$', '$\phi_{TM}$',
'$\Re e\{T\}$', '$\Im m\{T\}$']
self.phase_limits_te = kwargs.pop('phase_limits_te', (-10, 10))
self.phase_limits_tm = kwargs.pop('phase_limits_tm', (-10, 10))
self.res_limits_te = kwargs.pop('res_limits_te', (-2, 2))
self.res_limits_tm = kwargs.pop('res_limits_tm', (-2, 2))
self.tip_limits_re = kwargs.pop('tip_limits_re', (-.2, .2))
self.tip_limits_im = kwargs.pop('tip_limits_im', (-.2, .2))
self.phase_cmap = kwargs.pop('phase_cmap', 'BrBG')
self.res_cmap = kwargs.pop('res_cmap', 'BrBG_r')
self.tip_cmap = kwargs.pop('tip_cmap', 'PuOr')
self.plot_tipper = kwargs.pop('plot_tipper', 'n')
self.ml = kwargs.pop('ml', 2)
self.station_id = kwargs.pop('station_id', [0,4])
self.fig_num = kwargs.pop('fig_num', 1)
self.fig_size = kwargs.pop('fig_size', [6, 6])
self.fig_dpi = kwargs.pop('dpi', 300)
self.subplot_wspace = .0025
self.subplot_hspace = .0
self.subplot_right = .95
self.subplot_left = .085
self.subplot_top = .97
self.subplot_bottom = .1
self.font_size = kwargs.pop('font_size', 6)
self.plot_yn = kwargs.pop('plot_yn', 'y')
self.cb_shrink = .7
self.cb_pad = .015
self.axrte = None
self.axrtm = None
self.axpte = None
self.axptm = None
self.axtpr = None
self.axtpi = None
self.misfit_te_res = None
self.misfit_te_phase = None
self.misfit_tm_res = None
self.misfit_tm_phase = None
self.misfit_tip_real = None
self.misfit_tip_imag = None
self.fig = None
self._data_obj = None
if self.plot_yn == 'y':
self.plot()
def get_misfit(self):
"""
        compute the misfit between the data and the MT response predicted by
        the model. The misfit needs to be normalized correctly.
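        The misfit values below are taken directly from the response arrays;
        if computing it by hand, a typical normalization is
        misfit = (data - model_response) / data_error, so that values near
        +/- 1 indicate a fit at roughly the one-sigma level.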
"""
data_obj = Data()
data_obj.read_data_file(self.data_fn)
self._data_obj = data_obj
resp_obj = Response()
resp_obj.read_response_file(self.resp_fn)
n_stations = len(data_obj.station_list)
n_periods = len(data_obj.freq)
self.misfit_te_res = np.zeros((n_periods, n_stations))
self.misfit_te_phase = np.zeros((n_periods, n_stations))
self.misfit_tm_res = np.zeros((n_periods, n_stations))
self.misfit_tm_phase = np.zeros((n_periods, n_stations))
self.misfit_tip_real = np.zeros((n_periods, n_stations))
self.misfit_tip_imag = np.zeros((n_periods, n_stations))
for rr, r_dict in zip(range(n_stations), resp_obj.resp):
self.misfit_te_res[:, rr] = r_dict['te_res'][1]
self.misfit_tm_res[:, rr] = r_dict['tm_res'][1]
self.misfit_te_phase[:, rr] = r_dict['te_phase'][1]
self.misfit_tm_phase[:, rr] = r_dict['tm_phase'][1]
self.misfit_tip_real[:, rr] = r_dict['re_tip'][1]
self.misfit_tip_imag[:, rr] = r_dict['im_tip'][1]
self.misfit_te_res = np.nan_to_num(self.misfit_te_res)
self.misfit_te_phase = np.nan_to_num(self.misfit_te_phase)
self.misfit_tm_res = np.nan_to_num(self.misfit_tm_res)
self.misfit_tm_phase = np.nan_to_num(self.misfit_tm_phase)
self.misfit_tip_real = np.nan_to_num(self.misfit_tip_real)
self.misfit_tip_imag = np.nan_to_num(self.misfit_tip_imag)
def plot(self):
"""
plot pseudo section of data and response if given
"""
self.get_misfit()
ylimits = (self._data_obj.period.max(), self._data_obj.period.min())
offset_list = np.append(self._data_obj.station_locations,
self._data_obj.station_locations[-1]*1.15)
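        #note: the point appended past the last station above gives
        #pcolormesh the extra grid edge it needs to draw the last column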
#make a meshgrid for plotting
#flip frequency so bottom corner is long period
dgrid, fgrid = np.meshgrid(offset_list, self._data_obj.period[::-1])
#make list for station labels
ns = len(self._data_obj.station_list)
sindex_1 = self.station_id[0]
sindex_2 = self.station_id[1]
slabel = [self._data_obj.station_list[ss][sindex_1:sindex_2]
for ss in range(0, ns, self.ml)]
xloc = offset_list[0]+abs(offset_list[0]-offset_list[1])/5
yloc = 1.10*self._data_obj.period[1]
plt.rcParams['font.size'] = self.font_size
plt.rcParams['figure.subplot.bottom'] = self.subplot_bottom
plt.rcParams['figure.subplot.top'] = self.subplot_top
plt.rcParams['figure.subplot.right'] = self.subplot_right
plt.rcParams['figure.subplot.left'] = self.subplot_left
plt.rcParams['figure.subplot.hspace'] = self.subplot_hspace
plt.rcParams['figure.subplot.wspace'] = self.subplot_wspace
self.fig = plt.figure(self.fig_num, self.fig_size, dpi=self.fig_dpi)
plt.clf()
if self.plot_tipper != 'y':
self.axrte = self.fig.add_subplot(2, 2, 1)
self.axrtm = self.fig.add_subplot(2, 2, 2, sharex=self.axrte)
self.axpte = self.fig.add_subplot(2, 2, 3, sharex=self.axrte)
self.axptm = self.fig.add_subplot(2, 2, 4, sharex=self.axrte)
else:
self.axrte = self.fig.add_subplot(2, 3, 1)
self.axrtm = self.fig.add_subplot(2, 3, 2, sharex=self.axrte)
self.axpte = self.fig.add_subplot(2, 3, 4, sharex=self.axrte)
self.axptm = self.fig.add_subplot(2, 3, 5, sharex=self.axrte)
self.axtpr = self.fig.add_subplot(2, 3, 3, sharex=self.axrte)
self.axtpi = self.fig.add_subplot(2, 3, 6, sharex=self.axrte)
#--> TE Resistivity
self.axrte.pcolormesh(dgrid,
fgrid,
np.flipud(self.misfit_te_res),
cmap=self.res_cmap,
vmin=self.res_limits_te[0],
vmax=self.res_limits_te[1])
#--> TM Resistivity
self.axrtm.pcolormesh(dgrid,
fgrid,
np.flipud(self.misfit_tm_res),
cmap=self.res_cmap,
vmin=self.res_limits_tm[0],
vmax=self.res_limits_tm[1])
#--> TE Phase
self.axpte.pcolormesh(dgrid,
fgrid,
np.flipud(self.misfit_te_phase),
cmap=self.phase_cmap,
vmin=self.phase_limits_te[0],
vmax=self.phase_limits_te[1])
#--> TM Phase
self.axptm.pcolormesh(dgrid,
fgrid,
np.flipud(self.misfit_tm_phase),
cmap=self.phase_cmap,
vmin=self.phase_limits_tm[0],
vmax=self.phase_limits_tm[1])
ax_list = [self.axrte, self.axrtm, self.axpte, self.axptm]
if self.plot_tipper == 'y':
self.axtpr.pcolormesh(dgrid,
fgrid,
np.flipud(self.misfit_tip_real),
cmap=self.tip_cmap,
vmin=self.tip_limits_re[0],
vmax=self.tip_limits_re[1])
self.axtpi.pcolormesh(dgrid,
fgrid,
np.flipud(self.misfit_tip_imag),
cmap=self.tip_cmap,
vmin=self.tip_limits_im[0],
vmax=self.tip_limits_im[1])
ax_list.append(self.axtpr)
ax_list.append(self.axtpi)
        #make everything look tidy
for xx, ax in enumerate(ax_list):
ax.semilogy()
ax.set_ylim(ylimits)
ax.xaxis.set_ticks(offset_list[np.arange(0, ns, self.ml)])
ax.xaxis.set_ticks(offset_list, minor=True)
ax.xaxis.set_ticklabels(slabel)
ax.set_xlim(offset_list.min(),offset_list.max())
cbx = mcb.make_axes(ax,
shrink=self.cb_shrink,
pad=self.cb_pad)
#te res
if xx == 0:
plt.setp(ax.xaxis.get_ticklabels(), visible=False)
cb = mcb.ColorbarBase(cbx[0],cmap=self.res_cmap,
norm=Normalize(vmin=self.res_limits_te[0],
vmax=self.res_limits_te[1]))
#tm res
elif xx == 1:
plt.setp(ax.xaxis.get_ticklabels(), visible=False)
plt.setp(ax.yaxis.get_ticklabels(), visible=False)
cb = mcb.ColorbarBase(cbx[0],cmap=self.res_cmap,
norm=Normalize(vmin=self.res_limits_tm[0],
vmax=self.res_limits_tm[1]))
cb.set_label('Log$_{10}$ App. Res. ($\Omega \cdot$m)',
fontdict={'size':self.font_size+1,
'weight':'bold'})
#te phase
elif xx == 2:
cb = mcb.ColorbarBase(cbx[0],cmap=self.phase_cmap,
norm=Normalize(vmin=self.phase_limits_te[0],
vmax=self.phase_limits_te[1]))
#tm phase
elif xx == 3:
plt.setp(ax.yaxis.get_ticklabels(), visible=False)
cb = mcb.ColorbarBase(cbx[0],cmap=self.phase_cmap,
norm=Normalize(vmin=self.phase_limits_tm[0],
vmax=self.phase_limits_tm[1]))
cb.set_label('Phase (deg)',
fontdict={'size':self.font_size+1,
'weight':'bold'})
#real tipper
elif xx == 4:
plt.setp(ax.xaxis.get_ticklabels(), visible=False)
plt.setp(ax.yaxis.get_ticklabels(), visible=False)
cb = mcb.ColorbarBase(cbx[0], cmap=self.tip_cmap,
norm=Normalize(vmin=self.tip_limits_re[0],
vmax=self.tip_limits_re[1]))
cb.set_label('Re{Tip}',
fontdict={'size':self.font_size+1,
'weight':'bold'})
#imag tipper
elif xx == 5:
plt.setp(ax.yaxis.get_ticklabels(), visible=False)
cb = mcb.ColorbarBase(cbx[0], cmap=self.tip_cmap,
norm=Normalize(vmin=self.tip_limits_im[0],
vmax=self.tip_limits_im[1]))
cb.set_label('Im{Tip}',
fontdict={'size':self.font_size+1,
'weight':'bold'})
#make label for plot
ax.text(xloc, yloc, self.label_list[xx],
fontdict={'size':self.font_size+2},
bbox={'facecolor':'white'},
horizontalalignment='left',
verticalalignment='top')
if xx == 0 or xx == 2:
ax.set_ylabel('Period (s)',
fontdict={'size':self.font_size+2,
'weight':'bold'})
if xx > 1:
ax.set_xlabel('Station',fontdict={'size':self.font_size+2,
'weight':'bold'})
plt.show()
def redraw_plot(self):
"""
redraw plot if parameters were changed
use this function if you updated some attributes and want to re-plot.
:Example: ::
        >>> # change an attribute and re-plot
>>> import mtpy.modeling.occam2d as occam2d
>>> ocd = occam2d.Occam2DData(r"/home/occam2d/Data.dat")
>>> p1 = ocd.plotPseudoSection()
        >>> #use a different resistivity color map
>>> p1.res_cmap = 'seismic_r'
>>> p1.redraw_plot()
"""
plt.close(self.fig)
self.plot()
def save_figure(self, save_fn, file_format='pdf', orientation='portrait',
fig_dpi=None, close_plot='y'):
"""
save_plot will save the figure to save_fn.
Arguments:
-----------
**save_fn** : string
full path to save figure to, can be input as
* directory path -> the directory path to save to
in which the file will be saved as
                            save_fn/OccamMisfitPseudoSection.file_format
                          * full path -> file will be saved to the given
path. If you use this option then the format
will be assumed to be provided by the path
**file_format** : [ pdf | eps | jpg | png | svg ]
file type of saved figure pdf,svg,eps...
**orientation** : [ landscape | portrait ]
orientation in which the file will be saved
*default* is portrait
**fig_dpi** : int
                          The resolution in dots-per-inch at which the file
                          will be saved. If None then the dpi will be that at
                          which the figure was made. It probably cannot be
                          larger than the dpi of the figure.
**close_plot** : [ y | n ]
* 'y' will close the plot after saving.
* 'n' will leave plot open
:Example: ::
>>> # to save plot as jpg
>>> import mtpy.modeling.occam2d as occam2d
>>> dfn = r"/home/occam2d/Inv1/data.dat"
>>> ocd = occam2d.Occam2DData(dfn)
>>> ps1 = ocd.plotPseudoSection()
>>> ps1.save_plot(r'/home/MT/figures', file_format='jpg')
"""
if fig_dpi == None:
fig_dpi = self.fig_dpi
if os.path.isdir(save_fn) == False:
file_format = save_fn[-3:]
self.fig.savefig(save_fn, dpi=fig_dpi, format=file_format,
orientation=orientation, bbox_inches='tight')
else:
save_fn = os.path.join(save_fn, 'OccamMisfitPseudoSection.'+
file_format)
self.fig.savefig(save_fn, dpi=fig_dpi, format=file_format,
orientation=orientation, bbox_inches='tight')
if close_plot == 'y':
plt.clf()
plt.close(self.fig)
else:
pass
self.fig_fn = save_fn
print 'Saved figure to: '+self.fig_fn
def update_plot(self):
"""
        update any parameters that were changed using the built-in draw from
        canvas.
        Use this if you change any of the .fig or axes properties
:Example: ::
>>> # to change the grid lines to only be on the major ticks
>>> import mtpy.modeling.occam2d as occam2d
>>> dfn = r"/home/occam2d/Inv1/data.dat"
>>> ocd = occam2d.Occam2DData(dfn)
>>> ps1 = ocd.plotPseudoSection()
        >>> [ax.grid(True, which='major') for ax in [ps1.axrte, ps1.axpte]]
>>> ps1.update_plot()
"""
self.fig.canvas.draw()
def __str__(self):
"""
rewrite the string builtin to give a useful message
"""
return ("Plots a pseudo section of TE and TM modes for data and "
"response if given.")
class OccamPointPicker(object):
"""
This class helps the user interactively pick points to mask and add
error bars.
    Usage:
    ------
To mask just a single point right click over the point and a gray point
will appear indicating it has been masked
To mask both the apparent resistivity and phase left click over the point.
Gray points will appear over both the apparent resistivity and phase.
    Sometimes the points don't exactly match up; that bug hasn't been fully
    worked out yet, but not to worry, the correct points are still picked.
To add error bars to a point click the middle or scroll bar button. This
only adds error bars to the point and does not reduce them so start out
with reasonable errorbars. You can change the increment that the error
bars are increased with res_err_inc and phase_err_inc.
    .. note:: There is a bug when plotting only TE or TM in that you cannot
              mask points in the phase. I'm not sure where it comes from, but
              it works when all modes are plotted. So my suggestion is to
              make a data file with all modes, mask data points, and then
              rewrite that data file if you want to use just one of the
              modes. That's the workaround for the moment.
Arguments:
----------
**ax_list** : list of the resistivity and phase axis that have been
plotted as [axr_te,axr_tm,axp_te,axp_tm]
**line_list** : list of lines used to plot the responses, not the error
bars as [res_te,res_tm,phase_te,phase_tm]
**err_list** : list of the errorcaps and errorbar lines as
[[cap1,cap2,bar],...]
**res_err_inc** : increment to increase the errorbars for resistivity.
put .20 for 20 percent change. *Default* is .05
**phase_err_inc** : increment to increase the errorbars for the phase
                      put .10 for 10 percent change. *Default* is .02
**marker** : marker type for masked points. See matplotlib.pyplot.plot
for options of markers. *Default* is h for hexagon.
Attributes:
-----------
**ax_list** : axes list used to plot the data
**line_list** : line list used to plot the data
**err_list** : error list used to plot the data
**data** : list of data points that were not masked for each plot.
**fdict** : dictionary of frequency arrays for each plot and data set.
    **fndict** : dictionary of figure numbers to correspond with data.
**cid_list** : list of event ids.
**res_err_inc** : increment to increase resistivity error bars
    **phase_err_inc** : increment to increase phase error bars
**marker** : marker of masked points
**fig_num** : figure numbers
**data_list** : list of lines to write into the occam2d data file.
:Example: ::
>>> ocd = occam2d.Occam2DData()
>>> ocd.data_fn = r"/home/Occam2D/Line1/Inv1/Data.dat"
>>> ocd.plotMaskPoints()
"""
def __init__(self, ax_list, line_list, err_list,
res_err_inc=.05, phase_err_inc=.02, marker='h'):
#give the class some attributes
self.ax_list = ax_list
self.line_list = line_list
self.err_list = err_list
self.data = []
self.error = []
self.fdict = []
self.fndict = {}
#see if just one figure is plotted or multiple figures are plotted
self.ax = ax_list[0][0]
self.line = line_list[0][0]
self.cidlist = []
self.ax_num = None
self.res_index = None
self.phase_index = None
self.fig_num = 0
for nn in range(len(ax_list)):
self.data.append([])
self.error.append([])
self.fdict.append([])
#get data from lines and make a dictionary of frequency points for
#easy indexing
#line_find = False
for ii, line in enumerate(line_list[nn]):
try:
self.data[nn].append(line.get_data()[1])
self.fdict[nn].append(dict([('{0:.5g}'.format(kk), ff)
for ff,kk in
enumerate(line.get_data()[0])]))
self.fndict['{0}'.format(line.figure.number)] = nn
#set some events
#if line_find == False:
cid1 = line.figure.canvas.mpl_connect('pick_event',self)
cid2 = line.figure.canvas.mpl_connect('axes_enter_event',
self.inAxes)
cid3 = line.figure.canvas.mpl_connect('key_press_event',
self.on_close)
cid4 = line.figure.canvas.mpl_connect('figure_enter_event',
self.inFigure)
self.cidlist.append([cid1, cid2, cid3, cid4])
#set the figure number
self.fig_num = self.line.figure.number
#line_find = True
except AttributeError:
self.data[nn].append([])
self.fdict[nn].append([])
#read in the error in a useful way so that it can be translated to
#the data file. Make the error into an array
for ee, err in enumerate(err_list[nn]):
try:
errpath = err[2].get_paths()
errarr = np.zeros(len(self.fdict[nn][ee].keys()))
for ff, epath in enumerate(errpath):
errv = epath.vertices
errarr[ff] = abs(errv[0,1]-self.data[nn][ee][ff])
self.error[nn].append(errarr)
except AttributeError:
self.error[nn].append([])
#set the error bar increment values
self.res_err_inc = res_err_inc
self.phase_err_inc = phase_err_inc
#set the marker
self.marker = marker
#make a list of occam2d lines to write later
self.data_list = []
def __call__(self, event):
"""
        When the function is called the mouse events will be recorded for
picking points to mask or change error bars. The axes is redrawn with
a gray marker to indicate a masked point and/or increased size in
errorbars.
Arguments:
----------
**event** : type mouse_click_event
        Usage:
        ------
**Left mouse button** will mask both resistivity and phase point
**Right mouse button** will mask just the point selected
**Middle mouse button** will increase the error bars
**q** will close the figure.
"""
self.event = event
#make a new point that is an PickEvent type
npoint = event.artist
#if the right button is clicked mask the point
if event.mouseevent.button == 3:
#get the point that was clicked on
ii = event.ind
xd = npoint.get_xdata()[ii]
yd = npoint.get_ydata()[ii]
#set the x index from the frequency dictionary
ll = self.fdict[self.fig_num][self.ax_num]['{0:.5g}'.format(xd[0])]
#change the data to be a zero
self.data[self.fig_num][self.ax_num][ll] = 0
#reset the point to be a gray x
self.ax.plot(xd, yd,
ls = 'None',
color=(.7, .7, .7),
marker=self.marker,
ms=4)
#redraw the canvas
self.ax.figure.canvas.draw()
        #if the left button is clicked mask both resistivity and phase points
elif event.mouseevent.button == 1:
#get the point that was clicked on
ii = event.ind
xd = npoint.get_xdata()[ii]
yd = npoint.get_ydata()[ii]
#set the x index from the frequency dictionary
ll = self.fdict[self.fig_num][self.ax_num]['{0:.5g}'.format(xd[0])]
#set the data point to zero
print self.data[self.fig_num][self.ax_num][ll]
self.data[self.fig_num][self.ax_num][ll] = 0
#reset the point to be a gray x
self.ax.plot(xd, yd,
ls='None',
color=(.7, .7, .7),
marker=self.marker,
ms=4)
self.ax.figure.canvas.draw()
#check to make sure there is a corresponding res/phase point
try:
kk = (self.ax_num+2)%4
print kk
#get the corresponding y-value
yd2 = self.data[self.fig_num][kk][ll]
#set that data point to 0 as well
self.data[self.fig_num][kk][ll] = 0
#make that data point a gray x
self.ax_list[self.fig_num][kk].plot(xd, yd2,
ls='None',
color=(.7, .7, .7),
marker=self.marker,
ms=4)
#redraw the canvas
self.ax.figure.canvas.draw()
except KeyError:
print 'Axis does not contain res/phase point'
        #if the scroll or middle button is clicked, increase the errorbars
        #by the given increment
elif event.mouseevent.button == 2:
ii = event.ind
xd = npoint.get_xdata()[ii]
yd = npoint.get_ydata()[ii]
jj = self.ax_num
#get x index
ll = self.fdict[self.fig_num][jj]['{0:.5g}'.format(xd[0])]
#make error bar array
eb = self.err_list[self.fig_num][jj][2].get_paths()[ll].vertices
#make ecap array
ecapl = self.err_list[self.fig_num][jj][0].get_data()[1][ll]
ecapu = self.err_list[self.fig_num][jj][1].get_data()[1][ll]
#change apparent resistivity error
if jj == 0 or jj == 1:
nebu = eb[0,1]-self.res_err_inc*eb[0,1]
nebl = eb[1,1]+self.res_err_inc*eb[1,1]
ecapl = ecapl-self.res_err_inc*ecapl
ecapu = ecapu+self.res_err_inc*ecapu
#change phase error
elif jj == 2 or jj == 3:
nebu = eb[0,1]-eb[0,1]*self.phase_err_inc
nebl = eb[1,1]+eb[1,1]*self.phase_err_inc
ecapl = ecapl-ecapl*self.phase_err_inc
ecapu = ecapu+ecapu*self.phase_err_inc
#put the new error into the error array
self.error[self.fig_num][jj][ll] = abs(nebu-\
self.data[self.fig_num][jj][ll])
#set the new error bar values
eb[0, 1] = nebu
eb[1, 1] = nebl
#reset the error bars and caps
ncapl = self.err_list[self.fig_num][jj][0].get_data()
ncapu = self.err_list[self.fig_num][jj][1].get_data()
ncapl[1][ll] = ecapl
ncapu[1][ll] = ecapu
#set the values
self.err_list[self.fig_num][jj][0].set_data(ncapl)
self.err_list[self.fig_num][jj][1].set_data(ncapu)
self.err_list[self.fig_num][jj][2].get_paths()[ll].vertices = eb
#redraw the canvas
self.ax.figure.canvas.draw()
#get the axis number that the mouse is in and change to that axis
def inAxes(self, event):
"""
gets the axes that the mouse is currently in.
Arguments:
---------
**event**: is a type axes_enter_event
Returns:
--------
            **OccamPointPicker.ax_num** : index within ax_list of the axes
                                          that the mouse is currently in
"""
self.event2 = event
self.ax = event.inaxes
for jj, axj in enumerate(self.ax_list):
for ll, axl in enumerate(axj):
if self.ax == axl:
self.ax_num = ll
self.line = self.line_list[self.fig_num][self.ax_num]
#get the figure number that the mouse is in
def inFigure(self, event):
"""
gets the figure number that the mouse is in
Arguments:
----------
**event** : figure_enter_event
Returns:
--------
**OccamPointPicker.fig_num** : figure number that corresponds to the
index in the ax_list, datalist, errorlist
and line_list.
"""
self.event3 = event
self.fig_num = self.fndict['{0}'.format(event.canvas.figure.number)]
#type the q key to quit the figure and disconnect event handling
def on_close(self, event):
"""
close the figure with a 'q' key event and disconnect the event ids
Arguments:
----------
**event** : key_press_event
Returns:
--------
print statement saying the figure is closed
"""
self.event3 = event
if self.event3.key == 'q':
for cid in self.cidlist[self.fig_num]:
event.canvas.mpl_disconnect(cid)
plt.close(event.canvas.figure)
print 'Closed figure ', self.fig_num
class Run():
"""
Run Occam2D by system call.
Future plan: implement Occam in Python and call it from here directly.
"""
class Mask(Data):
"""
    Allow masking of points from the data file (effectively commenting them
    out, so the process is reversible). Inherits from the Data class.
"""
class OccamInputError(Exception):
pass
|
gpl-3.0
|
daxadal/Computational-Geometry
|
Practica_2/test_classify_plane_points_v2.py
|
1
|
3255
|
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
Checking the implementation of MLP
Data are points along two logarithmic spirals
@author: avaldes
"""
from __future__ import division, print_function
import numpy as np
import matplotlib.pyplot as plt
import time
from MLP import MLP
# create data
nb_black = 100
nb_red = 100
nb_data = nb_black + nb_red
s = np.linspace(0, 4*np.pi, nb_black)
x_black = np.vstack([np.log(1 + s) * np.cos(s),
np.log(1 + s) * np.sin(s)]).T
x_red = np.vstack([-np.log(1 + s) * np.cos(s),
-np.log(1 + s) * np.sin(s)]).T
x_data = np.vstack((x_black, x_red))
t_data = np.asarray([0]*nb_black + [1]*nb_red).reshape(nb_data, 1)
# Net structure
D = x_data.shape[1] # initial dimension
K = 1 # final dimension
# You must find the best MLP structure in order to
# misclassify as few points as possible. Training time will be
# measured too.
# You can use at most 3000 weights and 2000 epochs.
# For example:
K_list = [D, 50, 50, K] # list of dimensions of layers
# The larger the number of neurons, the "smoother" the boundary appears,
# although there is no significant change in the amount of correctly
# classified points, which seem to be inevitably all but one of the
# centermost points (even with as few as 50 epochs, with something like
# 50 neurons in each hidden layer, or with as few as 25 neurons (or fewer)
# per layer given sufficient epochs).
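# A rough parameter count for K_list = [2, 50, 50, 1], assuming fully
# connected layers with biases: 2*50 + 50 + 50*50 + 50 + 50*1 + 1 = 2751
# weights, which stays within the 3000-weight budget mentioned above.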
activation_functions = [lambda x: np.sin(x),
MLP.sigmoid,
MLP.sigmoid]
diff_activation_functions = [lambda x: np.cos(x),
MLP.dsigmoid,
MLP.dsigmoid]
# network training
time_begin = time.time()
mlp = MLP(K_list,
activation_functions, diff_activation_functions)
mlp.train(x_data, t_data,
epochs=2000, batch_size=10,
epsilon=0.1,
print_cost=True)
time_end = time.time()
print('Time used in training %f' % (time_end - time_begin))
# check how many of the black and red points are
# correctly classified
mlp.get_activations_and_units(x_black)
wrong_black = (mlp.y > 1/2).squeeze()
print('Black points misclassified: {}'.format(np.sum(wrong_black)))
mlp.get_activations_and_units(x_red)
wrong_red = (mlp.y < 1/2).squeeze()
print('Red points misclassified: {}'.format(np.sum(wrong_red)))
# plot the probability mapping and the data
delta = 0.01
x = np.arange(-3, 3, delta)
y = np.arange(-3, 3, delta)
X, Y = np.meshgrid(x, y)
x_pts = np.vstack((X.flatten(), Y.flatten())).T
mlp.get_activations_and_units(x_pts)
grid_size = X.shape[0]
Z = mlp.y.reshape(grid_size, grid_size)
plt.axis('equal')
plt.contourf(X, Y, Z, 50)
plt.scatter(x_black[:, 0], x_black[:, 1],
marker=',',
s=1,
color='black')
plt.scatter(x_red[:, 0], x_red[:, 1],
marker=',',
s=1,
color='red')
plt.scatter(x_black[wrong_black, 0],
x_black[wrong_black, 1],
facecolors='None',
s=50,
marker='o', color='red')
plt.scatter(x_red[wrong_red, 0],
x_red[wrong_red, 1],
facecolors='None',
s=50,
marker='o', color='black')
plt.show()
|
apache-2.0
|
lin-credible/scikit-learn
|
examples/model_selection/plot_validation_curve.py
|
229
|
1823
|
"""
==========================
Plotting Validation Curves
==========================
In this plot you can see the training scores and validation scores of an SVM
for different values of the kernel parameter gamma. For very low values of
gamma, you can see that both the training score and the validation score are
low. This is called underfitting. Medium values of gamma will result in high
values for both scores, i.e. the classifier is performing fairly well. If gamma
is too high, the classifier will overfit, which means that the training score
is good but the validation score is poor.
"""
print(__doc__)
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import load_digits
from sklearn.svm import SVC
from sklearn.learning_curve import validation_curve
digits = load_digits()
X, y = digits.data, digits.target
param_range = np.logspace(-6, -1, 5)
train_scores, test_scores = validation_curve(
SVC(), X, y, param_name="gamma", param_range=param_range,
cv=10, scoring="accuracy", n_jobs=1)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.title("Validation Curve with SVM")
plt.xlabel("$\gamma$")
plt.ylabel("Score")
plt.ylim(0.0, 1.1)
plt.semilogx(param_range, train_scores_mean, label="Training score", color="r")
plt.fill_between(param_range, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.2, color="r")
plt.semilogx(param_range, test_scores_mean, label="Cross-validation score",
color="g")
plt.fill_between(param_range, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.2, color="g")
plt.legend(loc="best")
plt.show()
|
bsd-3-clause
|
wholmgren/pvlib-python
|
pvlib/test/test_midc.py
|
2
|
2266
|
import inspect
import os
import pandas as pd
from pandas.util.testing import network
import pytest
import pytz
from pvlib.iotools import midc
@pytest.fixture
def test_mapping():
return {
'Direct Normal [W/m^2]': 'dni',
'Global PSP [W/m^2]': 'ghi',
'Rel Humidity [%]': 'relative_humidity',
'Temperature @ 2m [deg C]': 'temp_air',
'Non Existant': 'variable',
}
test_dir = os.path.dirname(
os.path.abspath(inspect.getfile(inspect.currentframe())))
midc_testfile = os.path.join(test_dir, '../data/midc_20181014.txt')
midc_raw_testfile = os.path.join(test_dir, '../data/midc_raw_20181018.txt')
midc_network_testfile = ('https://midcdmz.nrel.gov/apps/data_api.pl'
'?site=UAT&begin=20181018&end=20181019')
def test_midc_format_index():
data = pd.read_csv(midc_testfile)
data = midc.format_index(data)
start = pd.Timestamp("20181014 00:00")
start = start.tz_localize("MST")
end = pd.Timestamp("20181014 23:59")
end = end.tz_localize("MST")
assert type(data.index) == pd.DatetimeIndex
assert data.index[0] == start
assert data.index[-1] == end
def test_midc_format_index_tz_conversion():
data = pd.read_csv(midc_testfile)
data = data.rename(columns={'MST': 'PST'})
data = midc.format_index(data)
assert data.index[0].tz == pytz.timezone('Etc/GMT+8')
def test_midc_format_index_raw():
data = pd.read_csv(midc_raw_testfile)
data = midc.format_index_raw(data)
start = pd.Timestamp('20181018 00:00')
start = start.tz_localize('MST')
end = pd.Timestamp('20181018 23:59')
end = end.tz_localize('MST')
assert data.index[0] == start
assert data.index[-1] == end
def test_read_midc_var_mapping_as_arg(test_mapping):
data = midc.read_midc(midc_testfile, variable_map=test_mapping)
assert 'ghi' in data.columns
assert 'temp_air' in data.columns
@network
def test_read_midc_raw_data_from_nrel():
start_ts = pd.Timestamp('20181018')
end_ts = pd.Timestamp('20181019')
var_map = midc.MIDC_VARIABLE_MAP['UAT']
data = midc.read_midc_raw_data_from_nrel('UAT', start_ts, end_ts, var_map)
for k, v in var_map.items():
assert v in data.columns
assert data.index.size == 2880
|
bsd-3-clause
|
tiagofrepereira2012/tensorflow
|
tensorflow/examples/learn/text_classification.py
|
12
|
6651
|
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Example of Estimator for DNN-based text classification with DBpedia data."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import argparse
import sys
import numpy as np
import pandas
from sklearn import metrics
import tensorflow as tf
FLAGS = None
MAX_DOCUMENT_LENGTH = 10
EMBEDDING_SIZE = 50
n_words = 0
MAX_LABEL = 15
WORDS_FEATURE = 'words' # Name of the input words feature.
def estimator_spec_for_softmax_classification(
logits, labels, mode):
"""Returns EstimatorSpec instance for softmax classification."""
predicted_classes = tf.argmax(logits, 1)
if mode == tf.estimator.ModeKeys.PREDICT:
return tf.estimator.EstimatorSpec(
mode=mode,
predictions={
'class': predicted_classes,
'prob': tf.nn.softmax(logits)
})
onehot_labels = tf.one_hot(labels, MAX_LABEL, 1, 0)
loss = tf.losses.softmax_cross_entropy(
onehot_labels=onehot_labels, logits=logits)
if mode == tf.estimator.ModeKeys.TRAIN:
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
eval_metric_ops = {
'accuracy': tf.metrics.accuracy(
labels=labels, predictions=predicted_classes)
}
return tf.estimator.EstimatorSpec(
mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)
def bag_of_words_model(features, labels, mode):
"""A bag-of-words model. Note it disregards the word order in the text."""
bow_column = tf.feature_column.categorical_column_with_identity(
WORDS_FEATURE, num_buckets=n_words)
bow_embedding_column = tf.feature_column.embedding_column(
bow_column, dimension=EMBEDDING_SIZE)
bow = tf.feature_column.input_layer(
features,
feature_columns=[bow_embedding_column])
logits = tf.layers.dense(bow, MAX_LABEL, activation=None)
return estimator_spec_for_softmax_classification(
logits=logits, labels=labels, mode=mode)
def rnn_model(features, labels, mode):
"""RNN model to predict from sequence of words to a class."""
# Convert indexes of words into embeddings.
# This creates embeddings matrix of [n_words, EMBEDDING_SIZE] and then
# maps word indexes of the sequence into [batch_size, sequence_length,
# EMBEDDING_SIZE].
word_vectors = tf.contrib.layers.embed_sequence(
features[WORDS_FEATURE], vocab_size=n_words, embed_dim=EMBEDDING_SIZE)
# Split into list of embedding per word, while removing doc length dim.
  # word_list is then a list of tensors of shape [batch_size, EMBEDDING_SIZE].
word_list = tf.unstack(word_vectors, axis=1)
# Create a Gated Recurrent Unit cell with hidden size of EMBEDDING_SIZE.
cell = tf.contrib.rnn.GRUCell(EMBEDDING_SIZE)
  # Create an unrolled Recurrent Neural Network of length MAX_DOCUMENT_LENGTH
  # and pass word_list as the inputs for each unit.
_, encoding = tf.contrib.rnn.static_rnn(cell, word_list, dtype=tf.float32)
  # Given the encoding of the RNN, take the encoding of the last step (i.e.
  # the hidden state of the last unit) and pass it as features for softmax
  # classification over the output classes.
logits = tf.layers.dense(encoding, MAX_LABEL, activation=None)
return estimator_spec_for_softmax_classification(
logits=logits, labels=labels, mode=mode)
def main(unused_argv):
global n_words
# Prepare training and testing data
dbpedia = tf.contrib.learn.datasets.load_dataset(
'dbpedia', test_with_fake_data=FLAGS.test_with_fake_data)
x_train = pandas.DataFrame(dbpedia.train.data)[1]
y_train = pandas.Series(dbpedia.train.target)
x_test = pandas.DataFrame(dbpedia.test.data)[1]
y_test = pandas.Series(dbpedia.test.target)
# Process vocabulary
vocab_processor = tf.contrib.learn.preprocessing.VocabularyProcessor(
MAX_DOCUMENT_LENGTH)
x_transform_train = vocab_processor.fit_transform(x_train)
x_transform_test = vocab_processor.transform(x_test)
x_train = np.array(list(x_transform_train))
x_test = np.array(list(x_transform_test))
n_words = len(vocab_processor.vocabulary_)
print('Total words: %d' % n_words)
# Build model
# Switch between rnn_model and bag_of_words_model to test different models.
model_fn = rnn_model
if FLAGS.bow_model:
# Subtract 1 because VocabularyProcessor outputs a word-id matrix where word
# ids start from 1 and 0 means 'no word'. But
# categorical_column_with_identity assumes 0-based count and uses -1 for
# missing word.
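    # e.g. a word id of 1 (the first real word) becomes 0 after the shift,
    # and the padding id 0 becomes -1, which is treated as a missing word.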
x_train -= 1
x_test -= 1
model_fn = bag_of_words_model
classifier = tf.estimator.Estimator(model_fn=model_fn)
# Train.
train_input_fn = tf.estimator.inputs.numpy_input_fn(
x={WORDS_FEATURE: x_train},
y=y_train,
batch_size=len(x_train),
num_epochs=None,
shuffle=True)
classifier.train(input_fn=train_input_fn, steps=100)
# Predict.
test_input_fn = tf.estimator.inputs.numpy_input_fn(
x={WORDS_FEATURE: x_test},
y=y_test,
num_epochs=1,
shuffle=False)
predictions = classifier.predict(input_fn=test_input_fn)
y_predicted = np.array(list(p['class'] for p in predictions))
y_predicted = y_predicted.reshape(np.array(y_test).shape)
# Score with sklearn.
score = metrics.accuracy_score(y_test, y_predicted)
print('Accuracy (sklearn): {0:f}'.format(score))
# Score with tensorflow.
scores = classifier.evaluate(input_fn=test_input_fn)
print('Accuracy (tensorflow): {0:f}'.format(scores['accuracy']))
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
'--test_with_fake_data',
default=False,
help='Test the example code with fake data.',
action='store_true')
parser.add_argument(
'--bow_model',
default=False,
help='Run with BOW model instead of RNN.',
action='store_true')
FLAGS, unparsed = parser.parse_known_args()
tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
|
apache-2.0
|
emmanuelle/scikits.image
|
skimage/transform/tests/test_warps.py
|
2
|
3592
|
from numpy.testing import assert_array_almost_equal, run_module_suite
import numpy as np
from scipy.ndimage import map_coordinates
from skimage.transform import (warp, warp_coords, fast_homography,
AffineTransform,
ProjectiveTransform,
SimilarityTransform)
from skimage import transform as tf, data, img_as_float
from skimage.color import rgb2gray
def test_warp():
x = np.zeros((5, 5), dtype=np.uint8)
x[2, 2] = 255
x = img_as_float(x)
theta = - np.pi / 2
tform = SimilarityTransform(scale=1, rotation=theta, translation=(0, 4))
x90 = warp(x, tform, order=1)
assert_array_almost_equal(x90, np.rot90(x))
x90 = warp(x, tform.inverse, order=1)
assert_array_almost_equal(x90, np.rot90(x))
def test_homography():
x = np.zeros((5, 5), dtype=np.uint8)
x[1, 1] = 255
x = img_as_float(x)
theta = -np.pi / 2
M = np.array([[np.cos(theta), - np.sin(theta), 0],
[np.sin(theta), np.cos(theta), 4],
[0, 0, 1]])
x90 = warp(x,
inverse_map=ProjectiveTransform(M).inverse,
order=1)
assert_array_almost_equal(x90, np.rot90(x))
def test_fast_homography():
img = rgb2gray(data.lena()).astype(np.uint8)
img = img[:, :100]
theta = np.deg2rad(30)
scale = 0.5
tx, ty = 50, 50
H = np.eye(3)
S = scale * np.sin(theta)
C = scale * np.cos(theta)
H[:2, :2] = [[C, -S], [S, C]]
H[:2, 2] = [tx, ty]
for mode in ('constant', 'mirror', 'wrap'):
p0 = warp(img, ProjectiveTransform(H).inverse, mode=mode, order=1)
p1 = fast_homography(img, H, mode=mode)
# import matplotlib.pyplot as plt
# f, (ax0, ax1, ax2, ax3) = plt.subplots(1, 4)
# ax0.imshow(img)
# ax1.imshow(p0, cmap=plt.cm.gray)
# ax2.imshow(p1, cmap=plt.cm.gray)
# ax3.imshow(np.abs(p0 - p1), cmap=plt.cm.gray)
# plt.show()
d = np.mean(np.abs(p0 - p1))
assert d < 0.001
def test_swirl():
image = img_as_float(data.checkerboard())
swirl_params = {'radius': 80, 'rotation': 0, 'order': 2, 'mode': 'reflect'}
swirled = tf.swirl(image, strength=10, **swirl_params)
unswirled = tf.swirl(swirled, strength=-10, **swirl_params)
assert np.mean(np.abs(image - unswirled)) < 0.01
def test_const_cval_out_of_range():
img = np.random.randn(100, 100)
warped = warp(img, AffineTransform(translation=(10, 10)), cval=-10)
assert np.sum(warped < 0) == (2 * 100 * 10 - 10 * 10)
def test_warp_identity():
lena = img_as_float(rgb2gray(data.lena()))
assert len(lena.shape) == 2
assert np.allclose(lena, warp(lena, AffineTransform(rotation=0)))
assert not np.allclose(lena, warp(lena, AffineTransform(rotation=0.1)))
rgb_lena = np.transpose(np.asarray([lena, np.zeros_like(lena), lena]),
(1, 2, 0))
warped_rgb_lena = warp(rgb_lena, AffineTransform(rotation=0.1))
assert np.allclose(rgb_lena, warp(rgb_lena, AffineTransform(rotation=0)))
assert not np.allclose(rgb_lena, warped_rgb_lena)
# assert no cross-talk between bands
assert np.all(0 == warped_rgb_lena[:, :, 1])
def test_warp_coords_example():
image = data.lena().astype(np.float32)
assert 3 == image.shape[2]
tform = SimilarityTransform(translation=(0, -10))
coords = warp_coords(30, 30, 3, tform)
warped_image1 = map_coordinates(image[:, :, 0], coords[:2])
if __name__ == "__main__":
run_module_suite()
|
bsd-3-clause
|
ajros/openplotter
|
graph.py
|
1
|
2117
|
#!/usr/bin/env python
# This file is part of Openplotter.
# Copyright (C) 2015 by sailoog <https://github.com/sailoog/openplotter>
#
# Openplotter is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 2 of the License, or
# any later version.
# Openplotter is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Openplotter. If not, see <http://www.gnu.org/licenses/>.
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import os, sys, datetime, csv
from matplotlib.widgets import Cursor
pathname = os.path.dirname(sys.argv[0])
currentpath = os.path.abspath(pathname)
ifile = open(currentpath+'/weather_log.csv', "r")
reader = csv.reader(ifile)
log_list = []
for row in reader:
log_list.append(row)
ifile.close()
dates=[]
pressure=[]
temperature=[]
for i in range(0,len(log_list)):
dates.append(datetime.datetime.fromtimestamp(float(log_list[i][0])))
pressure.append(round(float(log_list[i][1]),1))
temperature.append(round(float(log_list[i][2]),1))
if len(dates)==0:
dates.append(datetime.datetime.now())
pressure.append(0)
temperature.append(0)
fig=plt.figure()
plt.rc("font", size=10)
fig.canvas.set_window_title('Thermograph / Barograph')
ax1 = fig.add_subplot(211)
ax2 = fig.add_subplot(212, sharex=ax1)
ax1.plot(dates,temperature,'ro-')
ax1.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%y %H:%M'))
ax1.set_title('Temperature (Cel)')
ax1.grid(True)
ax2.plot(dates,pressure,'go-')
ax2.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%y %H:%M'))
ax2.set_title('Pressure (hPa)')
ax2.grid(True)
plt.tight_layout()
cursor = Cursor(ax1, useblit=True, color='gray', linewidth=1 )
cursor2 = Cursor(ax2, useblit=True, color='gray', linewidth=1 )
fig.autofmt_xdate()
plt.show()
|
gpl-2.0
|
JohanComparat/pySU
|
spm/bin_SMF/pdf_repeat.py
|
1
|
3569
|
import sys
import os
import numpy as n
import astropy.io.fits as fits
import matplotlib
matplotlib.rcParams['agg.path.chunksize'] = 2000000
matplotlib.rcParams.update({'font.size': 13})
#matplotlib.use('Agg')
import matplotlib.pyplot as p
from cycler import cycler
from scipy.spatial import KDTree
from scipy.stats import norm
# ADD FILTER ON CHI2/NDOF
"""
Plots stellar mass error and stellar mass vs. mass and redshift
"""
z_bins = n.arange(0, 4.1, 0.1)
m_bins = n.arange(8.5,12.6,1.)
sn_bins = n.array([0, 0.5, 1, 2, 10, 100]) # n.logspace(-1.,2,6)
err_bins = n.arange(-2., 2.2, 0.1)
x_err_bins = (err_bins[1:] + err_bins[:-1])/2.
#n.hstack(( n.array([-10000., -10., -5.]), n.arange(-2.5, 2.5, 0.1), n.array([5., 10., 10000.]) ))
#prefix = 'SDSS'
#hdus = fits.open(os.path.join(os.environ['DATA_DIR'], 'spm', 'firefly', 'FireflyGalaxySdss26.fits'))
#redshift_reliable = (hdus[1].data['Z'] >= 0) & ( hdus[1].data['Z_ERR'] >= 0) & (hdus[1].data['ZWARNING'] == 0) & (hdus[1].data['Z'] > hdus[1].data['Z_ERR'] )
prefix = 'BOSS'
out_dir = os.path.join(os.environ['DATA_DIR'], 'spm', 'results', 'catalogs')
hdus = fits.open(os.path.join(os.environ['DATA_DIR'], 'spm', 'firefly', 'FireflyGalaxyEbossDR14.fits'))
NN, bb = n.histogram(hdus[1].data['PLATE'], bins = n.arange(0, 11000,1))
repeat_plates = bb[:-1][(NN>4000)]
rp = repeat_plates[4]
sel_rp = (hdus[1].data['PLATE'] == rp)
data = hdus[1].data[sel_rp]
tree = KDTree(n.transpose([hdus[1].data['PLUG_RA'][sel_rp], hdus[1].data['PLUG_DEC'][sel_rp]]))
ids = n.array(tree.query_ball_tree(tree, 0.0001))
mjds = n.array(list(set(hdus[1].data['MJD'][sel_rp])))
mjd_sel = (hdus[1].data['MJD'][sel_rp] == mjds[-1])
id_sort = ids[mjd_sel]
imf = 'Chabrier'
diff = []
for jj in range(len(id_sort)):
ids = list(n.copy(id_sort[jj]))
id_highest_snr = n.argmax(data[ids]['SN_MEDIAN_ALL'])
del ids[id_highest_snr]
M_1 = 10**data[imf+'_stellar_mass'][id_highest_snr]
M_1_e = M_1 * (10**abs((data[imf+'_stellar_mass' + '_err_plus'][id_highest_snr] - data[imf+'_stellar_mass' + '_err_minus'][id_highest_snr])/2.)-1.)
M_2s = 10**data[imf+'_stellar_mass'][ids]
M_2_es = M_1 * (10**abs((data[imf+'_stellar_mass' + '_err_plus'][ids] - data[imf+'_stellar_mass' + '_err_minus'][ids])/2.)-1.)
z_agree = (data['ZWARNING_NOQSO'][id_highest_snr]==0)&(abs(data['Z_NOQSO'][id_highest_snr] - data['Z_NOQSO'][ids]) < 0.05)
if M_1 > M_1_e and M_1 > 0. and data['Z_ERR_NOQSO'][id_highest_snr]>0 and data['Z_NOQSO'][id_highest_snr]> data['Z_ERR_NOQSO'][id_highest_snr] :
arr = (M_1 - M_2s)/(M_1_e**2. + M_2_es**2.)**0.5
#print arr[z_agree]
diff.append( arr[z_agree] )
ps = [-0.15, -0.2, 0.05, 0.25, 0.65, 0.35]
x_norm = n.arange(-2,2,0.01)
out, xxx = n.histogram(n.hstack((diff)), bins = err_bins, normed=True)
outN = n.histogram(n.hstack((diff)), bins = err_bins)[0]
N100 = (outN>10)
eb = p.errorbar(x_err_bins[N100], out[N100], xerr=(xxx[1:][N100]-xxx[:-1][N100])/2., yerr = out[N100]*outN[N100]**(-0.5), fmt='+', color='r')
p.plot(x_norm, ps[4]*norm.pdf(x_norm, loc=ps[0], scale=ps[2]), label='N('+str(ps[0])+','+str(ps[2])+')', ls='dashed', lw=0.5)
p.plot(x_norm, ps[5]*norm.pdf(x_norm, loc=ps[1], scale=ps[3]), label='N('+str(ps[1])+','+str(ps[3])+')', ls='dashed', lw=0.5)
p.plot(x_norm, ps[5]*norm.pdf(x_norm, loc=ps[1], scale=ps[3]) + ps[4]*norm.pdf(x_norm, loc=ps[0], scale=ps[2]), lw=0.5)
p.yscale('log')
p.ylabel('pdf')
p.xlim((-1.1, 1.1))
p.ylim((1e-2, 10.))
p.grid()
p.xlabel(r'$(M_1-M_2)/\sqrt{\sigma^2_{M1}+\sigma^2_{M2}}$')
p.savefig(os.path.join(out_dir, "pdf_diff_mass_repeat.jpg" ))
p.show()
|
cc0-1.0
|
yaricom/SpaceNetBuildingDetector
|
src/python/spaceNet/geoTools.py
|
1
|
15559
|
from osgeo import gdal, osr, ogr
from pandas import pandas as pd
import numpy as np
import os
import csv
import rtree
import subprocess
def importgeojson(geojsonfilename, removeNoBuildings=False):
# driver = ogr.GetDriverByName('geojson')
datasource = ogr.Open(geojsonfilename, 0)
layer = datasource.GetLayer()
print(layer.GetFeatureCount())
polys = []
for idx, feature in enumerate(layer):
poly = feature.GetGeometryRef()
if poly:
polys.append({'ImageId': feature.GetField('ImageId'), 'BuildingId': feature.GetField('BuildingId'),
'poly': feature.GetGeometryRef().Clone()})
return polys
def readwktcsv(csv_path):
#
# csv Format Expected = ['ImageId', 'BuildingId', 'PolygonWKT_Pix', 'PolygonWKT_Geo']
# returns list of Dictionaries {'ImageId': image_id, 'BuildingId': building_id, 'poly': poly}
# image_id is a string,
# BuildingId is an integer,
# poly is a ogr.Geometry Polygon
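    # e.g. a (hypothetical) data row:
    # AOI_1_RIO_img1,1,"POLYGON ((0 0, 10 0, 10 10, 0 0))","POLYGON ((...))"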
# buildinglist = []
# polys_df = pd.read_csv(csv_path)
# image_ids = set(polys_df['ImageId'].tolist())
# for image_id in image_ids:
# img_df = polys_df.loc[polys_df['ImageId'] == image_id]
# building_ids = set(img_df['BuildingId'].tolist())
# for building_id in building_ids:
#
# building_df = img_df.loc[img_df['BuildingId'] == building_id]
# poly = ogr.CreateGeometryFromWkt(building_df.iloc[0, 2])
# buildinglist.append({'ImageId': image_id, 'BuildingId': building_id, 'poly': poly})
buildinglist = []
with open(csv_path, 'rb') as csvfile:
building_reader = csv.reader(csvfile, delimiter=',', quotechar='"')
next(building_reader, None) # skip the headers
for row in building_reader:
poly = ogr.CreateGeometryFromWkt(row[2])
buildinglist.append({'ImageId': row[0], 'BuildingId': int(row[1]), 'poly': poly})
return buildinglist
def exporttogeojson(geojsonfilename, buildinglist):
#
# geojsonname should end with .geojson
# building list should be list of dictionaries
# list of Dictionaries {'ImageId': image_id, 'BuildingId': building_id, 'poly': poly}
# image_id is a string,
# BuildingId is an integer,
# poly is a ogr.Geometry Polygon
#
# returns geojsonfilename
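    # a minimal usage sketch (hypothetical paths):
    #   exporttogeojson('/data/AOI_1_buildings.geojson',
    #                   readwktcsv('/data/AOI_1_summary.csv'))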
driver = ogr.GetDriverByName('geojson')
if os.path.exists(geojsonfilename):
driver.DeleteDataSource(geojsonfilename)
datasource = driver.CreateDataSource(geojsonfilename)
layer = datasource.CreateLayer('buildings', geom_type=ogr.wkbPolygon)
field_name = ogr.FieldDefn("ImageId", ogr.OFTString)
field_name.SetWidth(75)
layer.CreateField(field_name)
layer.CreateField(ogr.FieldDefn("BuildingId", ogr.OFTInteger))
# loop through buildings
for building in buildinglist:
# create feature
feature = ogr.Feature(layer.GetLayerDefn())
feature.SetField("ImageId", building['ImageId'])
feature.SetField("BuildingId", building['BuildingId'])
feature.SetGeometry(building['poly'])
# Create the feature in the layer (geojson)
layer.CreateFeature(feature)
# Destroy the feature to free resources
feature.Destroy()
datasource.Destroy()
return geojsonfilename
def createmaskfrompolygons(polygons):
pass
def latLonToPixel(lat, lon, input_raster='', targetsr='', geomTransform=''):
sourcesr = osr.SpatialReference()
sourcesr.ImportFromEPSG(4326)
geom = ogr.Geometry(ogr.wkbPoint)
geom.AddPoint(lon, lat)
if targetsr == '':
src_raster = gdal.Open(input_raster)
targetsr = osr.SpatialReference()
targetsr.ImportFromWkt(src_raster.GetProjectionRef())
coordTrans = osr.CoordinateTransformation(sourcesr, targetsr)
if geomTransform == '':
src_raster = gdal.Open(input_raster)
transform = src_raster.GetGeoTransform()
else:
transform = geomTransform
xOrigin = transform[0]
# print xOrigin
yOrigin = transform[3]
# print yOrigin
pixelWidth = transform[1]
# print pixelWidth
pixelHeight = transform[5]
# print pixelHeight
geom.Transform(coordTrans)
# print geom.GetPoint()
xPix = (geom.GetPoint()[0] - xOrigin) / pixelWidth
yPix = (geom.GetPoint()[1] - yOrigin) / pixelHeight
return (xPix, yPix)
def pixelToLatLon(xPix, yPix, inputRaster, targetSR=''):
if targetSR == '':
targetSR = osr.SpatialReference()
targetSR.ImportFromEPSG(4326)
geom = ogr.Geometry(ogr.wkbPoint)
srcRaster = gdal.Open(inputRaster)
sourceSR = osr.SpatialReference()
sourceSR.ImportFromWkt(srcRaster.GetProjectionRef())
coordTrans = osr.CoordinateTransformation(sourceSR, targetSR)
transform = srcRaster.GetGeoTransform()
xOrigin = transform[0]
yOrigin = transform[3]
pixelWidth = transform[1]
pixelHeight = transform[5]
xCoord = (xPix * pixelWidth) + xOrigin
yCoord = (yPix * pixelHeight) + yOrigin
geom.AddPoint(xCoord, yCoord)
geom.Transform(coordTrans)
return (geom.GetX(), geom.GetY())
def geoPolygonToPixelPolygonWKT(geom, inputRaster, targetSR, geomTransform):
# Returns Pixel Coordinate List and GeoCoordinateList
polygonPixBufferList = []
polygonPixBufferWKTList = []
if geom.GetGeometryName() == 'POLYGON':
polygonPix = ogr.Geometry(ogr.wkbPolygon)
for ring in geom:
# GetPoint returns a tuple not a Geometry
ringPix = ogr.Geometry(ogr.wkbLinearRing)
for pIdx in xrange(ring.GetPointCount()):
lon, lat, z = ring.GetPoint(pIdx)
xPix, yPix = latLonToPixel(lat, lon, inputRaster, targetSR, geomTransform)
ringPix.AddPoint(xPix, yPix)
polygonPix.AddGeometry(ringPix)
polygonPixBuffer = polygonPix.Buffer(0.0)
polygonPixBufferList.append([polygonPixBuffer, geom])
elif geom.GetGeometryName() == 'MULTIPOLYGON':
for poly in geom:
polygonPix = ogr.Geometry(ogr.wkbPolygon)
            for ring in poly:
# GetPoint returns a tuple not a Geometry
ringPix = ogr.Geometry(ogr.wkbLinearRing)
for pIdx in xrange(ring.GetPointCount()):
lon, lat, z = ring.GetPoint(pIdx)
xPix, yPix = latLonToPixel(lat, lon, inputRaster, targetSR, geomTransform)
ringPix.AddPoint(xPix, yPix)
polygonPix.AddGeometry(ringPix)
polygonPixBuffer = polygonPix.Buffer(0.0)
polygonPixBufferList.append([polygonPixBuffer, geom])
for polygonTest in polygonPixBufferList:
if polygonTest[0].GetGeometryName() == 'POLYGON':
polygonPixBufferWKTList.append([polygonTest[0].ExportToWkt(), polygonTest[1].ExportToWkt()])
elif polygonTest[0].GetGeometryName() == 'MULTIPOLYGON':
for polygonTest2 in polygonTest[0]:
polygonPixBufferWKTList.append([polygonTest2.ExportToWkt(), polygonTest[1].ExportToWkt()])
return polygonPixBufferWKTList
def convert_wgs84geojson_to_pixgeojson(wgs84geojson, inputraster, image_id=[], pixelgeojson=[]):
dataSource = ogr.Open(wgs84geojson, 0)
layer = dataSource.GetLayer()
print(layer.GetFeatureCount())
building_id = 0
# check if geoJsonisEmpty
buildinglist = []
if not image_id:
image_id = inputraster.replace(".tif", "")
if layer.GetFeatureCount() > 0:
srcRaster = gdal.Open(inputraster)
targetSR = osr.SpatialReference()
targetSR.ImportFromWkt(srcRaster.GetProjectionRef())
geomTransform = srcRaster.GetGeoTransform()
for feature in layer:
geom = feature.GetGeometryRef()
## Calculate 3 band
polygonWKTList = geoPolygonToPixelPolygonWKT(geom, inputraster, targetSR, geomTransform)
for polygonWKT in polygonWKTList:
building_id += 1
buildinglist.append({'ImageId': image_id,
'BuildingId': building_id,
'poly': ogr.CreateGeometryFromWkt(polygonWKT)})
if pixelgeojson:
        exporttogeojson(pixelgeojson, buildinglist=buildinglist)
return buildinglist
def create_rtreefromdict(buildinglist):
# create index
index = rtree.index.Index(interleaved=False)
for idx, building in enumerate(buildinglist):
index.insert(idx, building['poly'].GetEnvelope())
return index
def create_rtree_from_poly(poly_list):
# create index
index = rtree.index.Index(interleaved=False)
for idx, building in enumerate(poly_list):
index.insert(idx, building.GetEnvelope())
return index
def search_rtree(test_building, index):
# input test poly ogr.Geometry and rtree index
if test_building.GetGeometryName() == 'POLYGON' or \
test_building.GetGeometryName() == 'MULTIPOLYGON':
fidlist = index.intersection(test_building.GetEnvelope())
else:
fidlist = []
return fidlist
def get_envelope(poly):
env = poly.GetEnvelope()
# Get Envelope returns a tuple (minX, maxX, minY, maxY)
# Create ring
ring = ogr.Geometry(ogr.wkbLinearRing)
ring.AddPoint(env[0], env[2])
ring.AddPoint(env[0], env[3])
ring.AddPoint(env[1], env[3])
ring.AddPoint(env[1], env[2])
ring.AddPoint(env[0], env[2])
# Create polygon
poly1 = ogr.Geometry(ogr.wkbPolygon)
poly1.AddGeometry(ring)
return poly1
def utm_getZone(longitude):
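    # a worked example: longitude -75.0 gives (-75.0 + 180.0)/6.0 = 17.5,
    # so int(1 + 17.5) = 18, i.e. UTM zone 18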
return (int(1+(longitude+180.0)/6.0))
def utm_isNorthern(latitude):
if (latitude < 0.0):
return 0
else:
return 1
def createUTMTransform(polyGeom):
# pt = polyGeom.Boundary().GetPoint()
utm_zone = utm_getZone(polyGeom.GetEnvelope()[0])
is_northern = utm_isNorthern(polyGeom.GetEnvelope()[2])
utm_cs = osr.SpatialReference()
utm_cs.SetWellKnownGeogCS('WGS84')
utm_cs.SetUTM(utm_zone, is_northern);
wgs84_cs = osr.SpatialReference()
wgs84_cs.ImportFromEPSG(4326)
transform_WGS84_To_UTM = osr.CoordinateTransformation(wgs84_cs, utm_cs)
transform_UTM_To_WGS84 = osr.CoordinateTransformation(utm_cs, wgs84_cs)
return transform_WGS84_To_UTM, transform_UTM_To_WGS84, utm_cs
def getRasterExtent(srcImage):
geoTrans = srcImage.GetGeoTransform()
ulX = geoTrans[0]
ulY = geoTrans[3]
xDist = geoTrans[1]
yDist = geoTrans[5]
rtnX = geoTrans[2]
rtnY = geoTrans[4]
cols = srcImage.RasterXSize
rows = srcImage.RasterYSize
lrX = ulX + xDist * cols
lrY = ulY + yDist * rows
# Create ring
ring = ogr.Geometry(ogr.wkbLinearRing)
ring.AddPoint(lrX, lrY)
ring.AddPoint(lrX, ulY)
ring.AddPoint(ulX, ulY)
ring.AddPoint(ulX, lrY)
ring.AddPoint(lrX, lrY)
# Create polygon
poly = ogr.Geometry(ogr.wkbPolygon)
poly.AddGeometry(ring)
return geoTrans, poly, ulX, ulY, lrX, lrY
def createPolygonFromCorners(lrX,lrY,ulX, ulY):
# Create ring
ring = ogr.Geometry(ogr.wkbLinearRing)
ring.AddPoint(lrX, lrY)
ring.AddPoint(lrX, ulY)
ring.AddPoint(ulX, ulY)
ring.AddPoint(ulX, lrY)
ring.AddPoint(lrX, lrY)
# Create polygon
poly = ogr.Geometry(ogr.wkbPolygon)
poly.AddGeometry(ring)
return poly
def clipShapeFile(shapeSrc, outputFileName, polyToCut):
source_layer = shapeSrc.GetLayer()
source_srs = source_layer.GetSpatialRef()
# Create the output Layer
outGeoJSon = outputFileName.replace('.tif', '.geojson')
outDriver = ogr.GetDriverByName("geojson")
if os.path.exists(outGeoJSon):
outDriver.DeleteDataSource(outGeoJSon)
outDataSource = outDriver.CreateDataSource(outGeoJSon)
outLayer = outDataSource.CreateLayer("groundTruth", source_srs, geom_type=ogr.wkbPolygon)
# Add input Layer Fields to the output Layer
inLayerDefn = source_layer.GetLayerDefn()
for i in range(0, inLayerDefn.GetFieldCount()):
fieldDefn = inLayerDefn.GetFieldDefn(i)
outLayer.CreateField(fieldDefn)
outLayer.CreateField(ogr.FieldDefn("partialBuilding", ogr.OFTInteger))
outLayerDefn = outLayer.GetLayerDefn()
source_layer.SetSpatialFilter(polyToCut)
for inFeature in source_layer:
outFeature = ogr.Feature(outLayerDefn)
for i in range (0, inLayerDefn.GetFieldCount()):
outFeature.SetField(inLayerDefn.GetFieldDefn(i).GetNameRef(), inFeature.GetField(i))
geom = inFeature.GetGeometryRef()
geomNew = geom.Intersection(polyToCut)
if geomNew:
if geom.GetArea() == geomNew.GetArea():
outFeature.SetField("partialBuilding", 0)
else:
outFeature.SetField("partialBuilding", 1)
else:
outFeature.SetField("partialBuilding", 1)
outFeature.SetGeometry(geomNew)
outLayer.CreateFeature(outFeature)
def cutChipFromMosaic(rasterFile, shapeFileSrc, outlineSrc,outputDirectory='', outputPrefix='clip_',
clipSizeMX=100, clipSizeMY=100):
#rasterFile = '/Users/dlindenbaum/dataStorage/spacenet/mosaic_8band/013022223103.tif'
srcImage = gdal.Open(rasterFile)
geoTrans, poly, ulX, ulY, lrX, lrY = getRasterExtent(srcImage)
rasterFileBase = os.path.basename(rasterFile)
if outputDirectory=="":
outputDirectory=os.path.dirname(rasterFile)
transform_WGS84_To_UTM, transform_UTM_To_WGS84, utm_cs = createUTMTransform(poly)
poly.Transform(transform_WGS84_To_UTM)
env = poly.GetEnvelope()
minX = env[0]
minY = env[2]
maxX = env[1]
maxY = env[3]
#return poly to WGS84
poly.Transform(transform_UTM_To_WGS84)
shapeSrc = ogr.Open(shapeFileSrc)
outline = ogr.Open(outlineSrc)
layer = outline.GetLayer()
for feature in layer:
geom = feature.GetGeometryRef()
for llX in np.arange(minX, maxX, clipSizeMX):
for llY in np.arange(minY, maxY, clipSizeMY):
uRX = llX+clipSizeMX
uRY = llY+clipSizeMY
polyCut = createPolygonFromCorners(llX, llY, uRX, uRY)
polyCut.Transform(transform_UTM_To_WGS84)
if (polyCut).Intersection(geom):
print "Do it."
envCut = polyCut.GetEnvelope()
minXCut = envCut[0]
minYCut = envCut[2]
maxXCut = envCut[1]
maxYCut = envCut[3]
outputFileName = os.path.join(outputDirectory, outputPrefix+rasterFileBase.replace('.tif', "_{}_{}.tif".format(minXCut,minYCut)))
## Clip Image
subprocess.call(["gdalwarp", "-te", "{}".format(minXCut), "{}".format(minYCut),
"{}".format(maxXCut), "{}".format(maxYCut), rasterFile, outputFileName])
outGeoJSon = outputFileName.replace('.tif', '.geojson')
                    ### Clip poly to cut to Raster Extent
polyVectorCut=polyCut.Intersection(poly)
clipShapeFile(shapeSrc, outputFileName, polyVectorCut)
#subprocess.call(["ogr2ogr", "-f", "ESRI Shapefile",
# "-spat", "{}".format(minXCut), "{}".format(minYCut),
# "{}".format(maxXCut), "{}".format(maxYCut), "-clipsrc", outGeoJSon, shapeFileSrc])
## ClipShapeFile
else:
print "Ain't nobody got time for that!"
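# Example invocation of cutChipFromMosaic (hypothetical file paths, shown only to
# illustrate the expected arguments; gdalwarp must be available on PATH because the
# function shells out to it):
#   srcRaster = '/data/spacenet/mosaic_8band/013022223103.tif'
#   cutChipFromMosaic(srcRaster, 'buildings_wgs84.geojson', 'aoi_outline.geojson',
#                     outputDirectory='/data/chips', outputPrefix='clip_',
#                     clipSizeMX=200, clipSizeMY=200)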
|
apache-2.0
|
gfyoung/pandas
|
pandas/core/arrays/timedeltas.py
|
1
|
35624
|
from __future__ import annotations
from datetime import timedelta
from typing import List, Optional, Union
import numpy as np
from pandas._libs import lib, tslibs
from pandas._libs.tslibs import (
BaseOffset,
NaT,
NaTType,
Period,
Tick,
Timedelta,
Timestamp,
iNaT,
to_offset,
)
from pandas._libs.tslibs.conversion import precision_from_unit
from pandas._libs.tslibs.fields import get_timedelta_field
from pandas._libs.tslibs.timedeltas import (
array_to_timedelta64,
ints_to_pytimedelta,
parse_timedelta_unit,
)
from pandas._typing import NpDtype
from pandas.compat.numpy import function as nv
from pandas.core.dtypes.cast import astype_td64_unit_conversion
from pandas.core.dtypes.common import (
DT64NS_DTYPE,
TD64NS_DTYPE,
is_categorical_dtype,
is_dtype_equal,
is_float_dtype,
is_integer_dtype,
is_object_dtype,
is_scalar,
is_string_dtype,
is_timedelta64_dtype,
pandas_dtype,
)
from pandas.core.dtypes.dtypes import DatetimeTZDtype
from pandas.core.dtypes.generic import ABCSeries, ABCTimedeltaIndex
from pandas.core.dtypes.missing import isna
from pandas.core import nanops
from pandas.core.algorithms import checked_add_with_arr
from pandas.core.arrays import IntegerArray, datetimelike as dtl
from pandas.core.arrays._ranges import generate_regular_range
import pandas.core.common as com
from pandas.core.construction import extract_array
from pandas.core.ops.common import unpack_zerodim_and_defer
def _field_accessor(name: str, alias: str, docstring: str):
def f(self) -> np.ndarray:
values = self.asi8
result = get_timedelta_field(values, alias)
if self._hasnans:
result = self._maybe_mask_results(
result, fill_value=None, convert="float64"
)
return result
f.__name__ = name
f.__doc__ = f"\n{docstring}\n"
return property(f)
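# Illustration of what the factory above produces (hedged; the outputs below follow from
# the documented field semantics and are not executed here). Each name in _field_ops
# becomes a read-only property backed by get_timedelta_field, e.g.:
#   tdi = pd.to_timedelta(["1 days 2 hours", "3 days"])
#   tdi.days    -> [1, 3]
#   tdi.seconds -> [7200, 0]   # seconds within the day, not total seconds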
class TimedeltaArray(dtl.TimelikeOps):
"""
Pandas ExtensionArray for timedelta data.
.. versionadded:: 0.24.0
.. warning::
TimedeltaArray is currently experimental, and its API may change
without warning. In particular, :attr:`TimedeltaArray.dtype` is
expected to change to be an instance of an ``ExtensionDtype``
subclass.
Parameters
----------
values : array-like
The timedelta data.
dtype : numpy.dtype
Currently, only ``numpy.dtype("timedelta64[ns]")`` is accepted.
freq : Offset, optional
copy : bool, default False
Whether to copy the underlying array of data.
Attributes
----------
None
Methods
-------
None
"""
_typ = "timedeltaarray"
_scalar_type = Timedelta
_recognized_scalars = (timedelta, np.timedelta64, Tick)
_is_recognized_dtype = is_timedelta64_dtype
_infer_matches = ("timedelta", "timedelta64")
__array_priority__ = 1000
# define my properties & methods for delegation
_other_ops: List[str] = []
_bool_ops: List[str] = []
_object_ops = ["freq"]
_field_ops = ["days", "seconds", "microseconds", "nanoseconds"]
_datetimelike_ops = _field_ops + _object_ops + _bool_ops
_datetimelike_methods = [
"to_pytimedelta",
"total_seconds",
"round",
"floor",
"ceil",
]
# Note: ndim must be defined to ensure NaT.__richcmp__(TimedeltaArray)
# operates pointwise.
def _box_func(self, x) -> Union[Timedelta, NaTType]:
return Timedelta(x, unit="ns")
@property
def dtype(self) -> np.dtype:
"""
The dtype for the TimedeltaArray.
.. warning::
A future version of pandas will change dtype to be an instance
of a :class:`pandas.api.extensions.ExtensionDtype` subclass,
not a ``numpy.dtype``.
Returns
-------
numpy.dtype
"""
return TD64NS_DTYPE
# ----------------------------------------------------------------
# Constructors
def __init__(self, values, dtype=TD64NS_DTYPE, freq=lib.no_default, copy=False):
values = extract_array(values)
inferred_freq = getattr(values, "_freq", None)
explicit_none = freq is None
freq = freq if freq is not lib.no_default else None
if isinstance(values, type(self)):
if explicit_none:
# dont inherit from values
pass
elif freq is None:
freq = values.freq
elif freq and values.freq:
freq = to_offset(freq)
freq, _ = dtl.validate_inferred_freq(freq, values.freq, False)
values = values._data
if not isinstance(values, np.ndarray):
msg = (
f"Unexpected type '{type(values).__name__}'. 'values' must be a "
"TimedeltaArray ndarray, or Series or Index containing one of those."
)
raise ValueError(msg)
if values.ndim not in [1, 2]:
raise ValueError("Only 1-dimensional input arrays are supported.")
if values.dtype == "i8":
# for compat with datetime/timedelta/period shared methods,
# we can sometimes get here with int64 values. These represent
# nanosecond UTC (or tz-naive) unix timestamps
values = values.view(TD64NS_DTYPE)
_validate_td64_dtype(values.dtype)
dtype = _validate_td64_dtype(dtype)
if freq == "infer":
msg = (
"Frequency inference not allowed in TimedeltaArray.__init__. "
"Use 'pd.array()' instead."
)
raise ValueError(msg)
if copy:
values = values.copy()
if freq:
freq = to_offset(freq)
self._data = values
self._dtype = dtype
self._freq = freq
if inferred_freq is None and freq is not None:
type(self)._validate_frequency(self, freq)
@classmethod
def _simple_new(
cls, values, freq: Optional[BaseOffset] = None, dtype=TD64NS_DTYPE
) -> TimedeltaArray:
assert dtype == TD64NS_DTYPE, dtype
assert isinstance(values, np.ndarray), type(values)
if values.dtype != TD64NS_DTYPE:
assert values.dtype == "i8"
values = values.view(TD64NS_DTYPE)
result = object.__new__(cls)
result._data = values
result._freq = to_offset(freq)
result._dtype = TD64NS_DTYPE
return result
@classmethod
def _from_sequence(
cls, data, *, dtype=TD64NS_DTYPE, copy: bool = False
) -> TimedeltaArray:
if dtype:
_validate_td64_dtype(dtype)
data, inferred_freq = sequence_to_td64ns(data, copy=copy, unit=None)
freq, _ = dtl.validate_inferred_freq(None, inferred_freq, False)
return cls._simple_new(data, freq=freq)
@classmethod
def _from_sequence_not_strict(
cls,
data,
dtype=TD64NS_DTYPE,
copy: bool = False,
freq=lib.no_default,
unit=None,
) -> TimedeltaArray:
if dtype:
_validate_td64_dtype(dtype)
explicit_none = freq is None
freq = freq if freq is not lib.no_default else None
freq, freq_infer = dtl.maybe_infer_freq(freq)
data, inferred_freq = sequence_to_td64ns(data, copy=copy, unit=unit)
freq, freq_infer = dtl.validate_inferred_freq(freq, inferred_freq, freq_infer)
if explicit_none:
freq = None
result = cls._simple_new(data, freq=freq)
if inferred_freq is None and freq is not None:
# this condition precludes `freq_infer`
cls._validate_frequency(result, freq)
elif freq_infer:
# Set _freq directly to bypass duplicative _validate_frequency
# check.
result._freq = to_offset(result.inferred_freq)
return result
@classmethod
def _generate_range(cls, start, end, periods, freq, closed=None):
periods = dtl.validate_periods(periods)
if freq is None and any(x is None for x in [periods, start, end]):
raise ValueError("Must provide freq argument if no data is supplied")
if com.count_not_none(start, end, periods, freq) != 3:
raise ValueError(
"Of the four parameters: start, end, periods, "
"and freq, exactly three must be specified"
)
if start is not None:
start = Timedelta(start)
if end is not None:
end = Timedelta(end)
left_closed, right_closed = dtl.validate_endpoints(closed)
if freq is not None:
index = generate_regular_range(start, end, periods, freq)
else:
index = np.linspace(start.value, end.value, periods).astype("i8")
if not left_closed:
index = index[1:]
if not right_closed:
index = index[:-1]
return cls._simple_new(index, freq=freq)
# ----------------------------------------------------------------
# DatetimeLike Interface
def _unbox_scalar(self, value, setitem: bool = False) -> np.timedelta64:
if not isinstance(value, self._scalar_type) and value is not NaT:
raise ValueError("'value' should be a Timedelta.")
self._check_compatible_with(value, setitem=setitem)
return np.timedelta64(value.value, "ns")
def _scalar_from_string(self, value):
return Timedelta(value)
def _check_compatible_with(self, other, setitem: bool = False):
# we don't have anything to validate.
pass
# ----------------------------------------------------------------
# Array-Like / EA-Interface Methods
def astype(self, dtype, copy: bool = True):
# We handle
# --> timedelta64[ns]
# --> timedelta64
# DatetimeLikeArrayMixin super call handles other cases
dtype = pandas_dtype(dtype)
if dtype.kind == "m":
return astype_td64_unit_conversion(self._data, dtype, copy=copy)
return dtl.DatetimeLikeArrayMixin.astype(self, dtype, copy=copy)
def __iter__(self):
if self.ndim > 1:
for i in range(len(self)):
yield self[i]
else:
# convert in chunks of 10k for efficiency
data = self.asi8
length = len(self)
chunksize = 10000
chunks = (length // chunksize) + 1
for i in range(chunks):
start_i = i * chunksize
end_i = min((i + 1) * chunksize, length)
converted = ints_to_pytimedelta(data[start_i:end_i], box=True)
yield from converted
# ----------------------------------------------------------------
# Reductions
def sum(
self,
*,
axis=None,
dtype: Optional[NpDtype] = None,
out=None,
keepdims: bool = False,
initial=None,
skipna: bool = True,
min_count: int = 0,
):
nv.validate_sum(
(), {"dtype": dtype, "out": out, "keepdims": keepdims, "initial": initial}
)
result = nanops.nansum(
self._ndarray, axis=axis, skipna=skipna, min_count=min_count
)
return self._wrap_reduction_result(axis, result)
def std(
self,
*,
axis=None,
dtype: Optional[NpDtype] = None,
out=None,
ddof: int = 1,
keepdims: bool = False,
skipna: bool = True,
):
nv.validate_stat_ddof_func(
(), {"dtype": dtype, "out": out, "keepdims": keepdims}, fname="std"
)
result = nanops.nanstd(self._ndarray, axis=axis, skipna=skipna, ddof=ddof)
if axis is None or self.ndim == 1:
return self._box_func(result)
return self._from_backing_data(result)
# ----------------------------------------------------------------
# Rendering Methods
def _formatter(self, boxed=False):
from pandas.io.formats.format import get_format_timedelta64
return get_format_timedelta64(self, box=True)
@dtl.ravel_compat
def _format_native_types(self, na_rep="NaT", date_format=None, **kwargs):
from pandas.io.formats.format import get_format_timedelta64
formatter = get_format_timedelta64(self._data, na_rep)
return np.array([formatter(x) for x in self._data])
# ----------------------------------------------------------------
# Arithmetic Methods
def _add_offset(self, other):
assert not isinstance(other, Tick)
raise TypeError(
f"cannot add the type {type(other).__name__} to a {type(self).__name__}"
)
def _add_period(self, other: Period):
"""
Add a Period object.
"""
# We will wrap in a PeriodArray and defer to the reversed operation
from .period import PeriodArray
i8vals = np.broadcast_to(other.ordinal, self.shape)
oth = PeriodArray(i8vals, freq=other.freq)
return oth + self
def _add_datetime_arraylike(self, other):
"""
Add DatetimeArray/Index or ndarray[datetime64] to TimedeltaArray.
"""
if isinstance(other, np.ndarray):
# At this point we have already checked that dtype is datetime64
from pandas.core.arrays import DatetimeArray
other = DatetimeArray(other)
# defer to implementation in DatetimeArray
return other + self
def _add_datetimelike_scalar(self, other):
# adding a timedeltaindex to a datetimelike
from pandas.core.arrays import DatetimeArray
assert other is not NaT
other = Timestamp(other)
if other is NaT:
# In this case we specifically interpret NaT as a datetime, not
# the timedelta interpretation we would get by returning self + NaT
result = self.asi8.view("m8[ms]") + NaT.to_datetime64()
return DatetimeArray(result)
i8 = self.asi8
result = checked_add_with_arr(i8, other.value, arr_mask=self._isnan)
result = self._maybe_mask_results(result)
dtype = DatetimeTZDtype(tz=other.tz) if other.tz else DT64NS_DTYPE
return DatetimeArray(result, dtype=dtype, freq=self.freq)
def _addsub_object_array(self, other, op):
# Add or subtract Array-like of objects
try:
# TimedeltaIndex can only operate with a subset of DateOffset
# subclasses. Incompatible classes will raise AttributeError,
# which we re-raise as TypeError
return super()._addsub_object_array(other, op)
except AttributeError as err:
raise TypeError(
f"Cannot add/subtract non-tick DateOffset to {type(self).__name__}"
) from err
@unpack_zerodim_and_defer("__mul__")
def __mul__(self, other) -> TimedeltaArray:
if is_scalar(other):
# numpy will accept float and int, raise TypeError for others
result = self._data * other
freq = None
if self.freq is not None and not isna(other):
freq = self.freq * other
return type(self)(result, freq=freq)
if not hasattr(other, "dtype"):
# list, tuple
other = np.array(other)
if len(other) != len(self) and not is_timedelta64_dtype(other.dtype):
# Exclude timedelta64 here so we correctly raise TypeError
# for that instead of ValueError
raise ValueError("Cannot multiply with unequal lengths")
if is_object_dtype(other.dtype):
# this multiplication will succeed only if all elements of other
# are int or float scalars, so we will end up with
# timedelta64[ns]-dtyped result
result = [self[n] * other[n] for n in range(len(self))]
result = np.array(result)
return type(self)(result)
# numpy will accept float or int dtype, raise TypeError for others
result = self._data * other
return type(self)(result)
__rmul__ = __mul__
@unpack_zerodim_and_defer("__truediv__")
def __truediv__(self, other):
# timedelta / X is well-defined for timedelta-like or numeric X
if isinstance(other, self._recognized_scalars):
other = Timedelta(other)
if other is NaT:
# specifically timedelta64-NaT
result = np.empty(self.shape, dtype=np.float64)
result.fill(np.nan)
return result
# otherwise, dispatch to Timedelta implementation
return self._data / other
elif lib.is_scalar(other):
# assume it is numeric
result = self._data / other
freq = None
if self.freq is not None:
# Tick division is not implemented, so operate on Timedelta
freq = self.freq.delta / other
return type(self)(result, freq=freq)
if not hasattr(other, "dtype"):
# e.g. list, tuple
other = np.array(other)
if len(other) != len(self):
raise ValueError("Cannot divide vectors with unequal lengths")
elif is_timedelta64_dtype(other.dtype):
# let numpy handle it
return self._data / other
elif is_object_dtype(other.dtype):
# We operate on raveled arrays to avoid problems in inference
# on NaT
srav = self.ravel()
orav = other.ravel()
result = [srav[n] / orav[n] for n in range(len(srav))]
result = np.array(result).reshape(self.shape)
# We need to do dtype inference in order to keep DataFrame ops
# behavior consistent with Series behavior
inferred = lib.infer_dtype(result)
if inferred == "timedelta":
flat = result.ravel()
result = type(self)._from_sequence(flat).reshape(result.shape)
elif inferred == "floating":
result = result.astype(float)
return result
else:
result = self._data / other
return type(self)(result)
@unpack_zerodim_and_defer("__rtruediv__")
def __rtruediv__(self, other):
# X / timedelta is defined only for timedelta-like X
if isinstance(other, self._recognized_scalars):
other = Timedelta(other)
if other is NaT:
# specifically timedelta64-NaT
result = np.empty(self.shape, dtype=np.float64)
result.fill(np.nan)
return result
# otherwise, dispatch to Timedelta implementation
return other / self._data
elif lib.is_scalar(other):
raise TypeError(
f"Cannot divide {type(other).__name__} by {type(self).__name__}"
)
if not hasattr(other, "dtype"):
# e.g. list, tuple
other = np.array(other)
if len(other) != len(self):
raise ValueError("Cannot divide vectors with unequal lengths")
elif is_timedelta64_dtype(other.dtype):
# let numpy handle it
return other / self._data
elif is_object_dtype(other.dtype):
# Note: unlike in __truediv__, we do not _need_ to do type
# inference on the result. It does not raise, a numeric array
# is returned. GH#23829
result = [other[n] / self[n] for n in range(len(self))]
return np.array(result)
else:
raise TypeError(
f"Cannot divide {other.dtype} data by {type(self).__name__}"
)
@unpack_zerodim_and_defer("__floordiv__")
def __floordiv__(self, other):
if is_scalar(other):
if isinstance(other, self._recognized_scalars):
other = Timedelta(other)
if other is NaT:
# treat this specifically as timedelta-NaT
result = np.empty(self.shape, dtype=np.float64)
result.fill(np.nan)
return result
# dispatch to Timedelta implementation
result = other.__rfloordiv__(self._data)
return result
# at this point we should only have numeric scalars; anything
# else will raise
result = self.asi8 // other
np.putmask(result, self._isnan, iNaT)
freq = None
if self.freq is not None:
# Note: freq gets division, not floor-division
freq = self.freq / other
if freq.nanos == 0 and self.freq.nanos != 0:
# e.g. if self.freq is Nano(1) then dividing by 2
# rounds down to zero
freq = None
return type(self)(result.view("m8[ns]"), freq=freq)
if not hasattr(other, "dtype"):
# list, tuple
other = np.array(other)
if len(other) != len(self):
raise ValueError("Cannot divide with unequal lengths")
elif is_timedelta64_dtype(other.dtype):
other = type(self)(other)
# numpy timedelta64 does not natively support floordiv, so operate
# on the i8 values
result = self.asi8 // other.asi8
mask = self._isnan | other._isnan
if mask.any():
result = result.astype(np.float64)
np.putmask(result, mask, np.nan)
return result
elif is_object_dtype(other.dtype):
result = [self[n] // other[n] for n in range(len(self))]
result = np.array(result)
if lib.infer_dtype(result, skipna=False) == "timedelta":
result, _ = sequence_to_td64ns(result)
return type(self)(result)
return result
elif is_integer_dtype(other.dtype) or is_float_dtype(other.dtype):
result = self._data // other
return type(self)(result)
else:
dtype = getattr(other, "dtype", type(other).__name__)
raise TypeError(f"Cannot divide {dtype} by {type(self).__name__}")
@unpack_zerodim_and_defer("__rfloordiv__")
def __rfloordiv__(self, other):
if is_scalar(other):
if isinstance(other, self._recognized_scalars):
other = Timedelta(other)
if other is NaT:
# treat this specifically as timedelta-NaT
result = np.empty(self.shape, dtype=np.float64)
result.fill(np.nan)
return result
# dispatch to Timedelta implementation
result = other.__floordiv__(self._data)
return result
raise TypeError(
f"Cannot divide {type(other).__name__} by {type(self).__name__}"
)
if not hasattr(other, "dtype"):
# list, tuple
other = np.array(other)
if len(other) != len(self):
raise ValueError("Cannot divide with unequal lengths")
elif is_timedelta64_dtype(other.dtype):
other = type(self)(other)
# numpy timedelta64 does not natively support floordiv, so operate
# on the i8 values
result = other.asi8 // self.asi8
mask = self._isnan | other._isnan
if mask.any():
result = result.astype(np.float64)
np.putmask(result, mask, np.nan)
return result
elif is_object_dtype(other.dtype):
result = [other[n] // self[n] for n in range(len(self))]
result = np.array(result)
return result
else:
dtype = getattr(other, "dtype", type(other).__name__)
raise TypeError(f"Cannot divide {dtype} by {type(self).__name__}")
@unpack_zerodim_and_defer("__mod__")
def __mod__(self, other):
# Note: This is a naive implementation, can likely be optimized
if isinstance(other, self._recognized_scalars):
other = Timedelta(other)
return self - (self // other) * other
@unpack_zerodim_and_defer("__rmod__")
def __rmod__(self, other):
# Note: This is a naive implementation, can likely be optimized
if isinstance(other, self._recognized_scalars):
other = Timedelta(other)
return other - (other // self) * self
@unpack_zerodim_and_defer("__divmod__")
def __divmod__(self, other):
# Note: This is a naive implementation, can likely be optimized
if isinstance(other, self._recognized_scalars):
other = Timedelta(other)
res1 = self // other
res2 = self - res1 * other
return res1, res2
@unpack_zerodim_and_defer("__rdivmod__")
def __rdivmod__(self, other):
# Note: This is a naive implementation, can likely be optimized
if isinstance(other, self._recognized_scalars):
other = Timedelta(other)
res1 = other // self
res2 = other - res1 * self
return res1, res2
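    # Worked example of the identity used by __mod__/__divmod__ above (values follow from
    # ordinary integer arithmetic on the underlying nanoseconds, not from running this file):
    #   a = Timedelta("5 minutes"), b = Timedelta("2 minutes")
    #   a // b == 2, so a % b == a - (a // b) * b == Timedelta("1 minutes")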
def __neg__(self) -> TimedeltaArray:
if self.freq is not None:
return type(self)(-self._data, freq=-self.freq)
return type(self)(-self._data)
def __pos__(self) -> TimedeltaArray:
return type(self)(self._data, freq=self.freq)
def __abs__(self) -> TimedeltaArray:
# Note: freq is not preserved
return type(self)(np.abs(self._data))
# ----------------------------------------------------------------
# Conversion Methods - Vectorized analogues of Timedelta methods
def total_seconds(self) -> np.ndarray:
"""
Return total duration of each element expressed in seconds.
This method is available directly on TimedeltaArray, TimedeltaIndex
and on Series containing timedelta values under the ``.dt`` namespace.
Returns
-------
seconds : [ndarray, Float64Index, Series]
When the calling object is a TimedeltaArray, the return type
is ndarray. When the calling object is a TimedeltaIndex,
the return type is a Float64Index. When the calling object
is a Series, the return type is Series of type `float64` whose
index is the same as the original.
See Also
--------
datetime.timedelta.total_seconds : Standard library version
of this method.
TimedeltaIndex.components : Return a DataFrame with components of
each Timedelta.
Examples
--------
**Series**
>>> s = pd.Series(pd.to_timedelta(np.arange(5), unit='d'))
>>> s
0 0 days
1 1 days
2 2 days
3 3 days
4 4 days
dtype: timedelta64[ns]
>>> s.dt.total_seconds()
0 0.0
1 86400.0
2 172800.0
3 259200.0
4 345600.0
dtype: float64
**TimedeltaIndex**
>>> idx = pd.to_timedelta(np.arange(5), unit='d')
>>> idx
TimedeltaIndex(['0 days', '1 days', '2 days', '3 days', '4 days'],
dtype='timedelta64[ns]', freq=None)
>>> idx.total_seconds()
Float64Index([0.0, 86400.0, 172800.0, 259200.00000000003, 345600.0],
dtype='float64')
"""
return self._maybe_mask_results(1e-9 * self.asi8, fill_value=None)
def to_pytimedelta(self) -> np.ndarray:
"""
Return Timedelta Array/Index as object ndarray of datetime.timedelta
objects.
Returns
-------
        timedeltas : ndarray
"""
return tslibs.ints_to_pytimedelta(self.asi8)
days = _field_accessor("days", "days", "Number of days for each element.")
seconds = _field_accessor(
"seconds",
"seconds",
"Number of seconds (>= 0 and less than 1 day) for each element.",
)
microseconds = _field_accessor(
"microseconds",
"microseconds",
"Number of microseconds (>= 0 and less than 1 second) for each element.",
)
nanoseconds = _field_accessor(
"nanoseconds",
"nanoseconds",
"Number of nanoseconds (>= 0 and less than 1 microsecond) for each element.",
)
@property
def components(self):
"""
Return a dataframe of the components (days, hours, minutes,
seconds, milliseconds, microseconds, nanoseconds) of the Timedeltas.
Returns
-------
a DataFrame
"""
from pandas import DataFrame
columns = [
"days",
"hours",
"minutes",
"seconds",
"milliseconds",
"microseconds",
"nanoseconds",
]
hasnans = self._hasnans
if hasnans:
def f(x):
if isna(x):
return [np.nan] * len(columns)
return x.components
else:
def f(x):
return x.components
result = DataFrame([f(x) for x in self], columns=columns)
if not hasnans:
result = result.astype("int64")
return result
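    # Sketch of the result shape (hedged, not an executed doctest): for
    # pd.to_timedelta(["1 days 10:20:30"]), `components` yields a single row with
    # days=1, hours=10, minutes=20, seconds=30 and zero milli/micro/nanoseconds,
    # returned as an int64 DataFrame because there are no NaT values.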
# ---------------------------------------------------------------------
# Constructor Helpers
def sequence_to_td64ns(data, copy=False, unit=None, errors="raise"):
"""
Parameters
----------
data : list-like
copy : bool, default False
unit : str, optional
The timedelta unit to treat integers as multiples of. For numeric
data this defaults to ``'ns'``.
Must be un-specified if the data contains a str and ``errors=="raise"``.
errors : {"raise", "coerce", "ignore"}, default "raise"
How to handle elements that cannot be converted to timedelta64[ns].
See ``pandas.to_timedelta`` for details.
Returns
-------
converted : numpy.ndarray
The sequence converted to a numpy array with dtype ``timedelta64[ns]``.
inferred_freq : Tick or None
The inferred frequency of the sequence.
Raises
------
ValueError : Data cannot be converted to timedelta64[ns].
Notes
-----
    Unlike `pandas.to_timedelta`, setting ``errors=ignore`` here does not cause
    errors to be ignored; they are caught and subsequently ignored at a
    higher level.
"""
inferred_freq = None
if unit is not None:
unit = parse_timedelta_unit(unit)
# Unwrap whatever we have into a np.ndarray
if not hasattr(data, "dtype"):
# e.g. list, tuple
if np.ndim(data) == 0:
# i.e. generator
data = list(data)
data = np.array(data, copy=False)
elif isinstance(data, ABCSeries):
data = data._values
elif isinstance(data, (ABCTimedeltaIndex, TimedeltaArray)):
inferred_freq = data.freq
data = data._data
elif isinstance(data, IntegerArray):
data = data.to_numpy("int64", na_value=tslibs.iNaT)
elif is_categorical_dtype(data.dtype):
data = data.categories.take(data.codes, fill_value=NaT)._values
copy = False
# Convert whatever we have into timedelta64[ns] dtype
if is_object_dtype(data.dtype) or is_string_dtype(data.dtype):
# no need to make a copy, need to convert if string-dtyped
data = objects_to_td64ns(data, unit=unit, errors=errors)
copy = False
elif is_integer_dtype(data.dtype):
# treat as multiples of the given unit
data, copy_made = ints_to_td64ns(data, unit=unit)
copy = copy and not copy_made
elif is_float_dtype(data.dtype):
# cast the unit, multiply base/frac separately
# to avoid precision issues from float -> int
mask = np.isnan(data)
m, p = precision_from_unit(unit or "ns")
base = data.astype(np.int64)
frac = data - base
if p:
frac = np.round(frac, p)
data = (base * m + (frac * m).astype(np.int64)).view("timedelta64[ns]")
data[mask] = iNaT
copy = False
elif is_timedelta64_dtype(data.dtype):
if data.dtype != TD64NS_DTYPE:
# non-nano unit
# TODO: watch out for overflows
data = data.astype(TD64NS_DTYPE)
copy = False
else:
# This includes datetime64-dtype, see GH#23539, GH#29794
raise TypeError(f"dtype {data.dtype} cannot be converted to timedelta64[ns]")
data = np.array(data, copy=copy)
assert data.dtype == "m8[ns]", data
return data, inferred_freq
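# Hedged usage sketch for the converter above (illustrative, not part of the original module):
#   sequence_to_td64ns(["1 days", "2 days"])       -> (timedelta64[ns] array of 1 and 2 days, None)
#   sequence_to_td64ns(np.array([1, 2]), unit="s") -> (timedelta64[ns] array of 1 s and 2 s, None)
# The second return value is only a Tick when the input was a TimedeltaIndex/TimedeltaArray
# that already carried a freq.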
def ints_to_td64ns(data, unit="ns"):
"""
Convert an ndarray with integer-dtype to timedelta64[ns] dtype, treating
the integers as multiples of the given timedelta unit.
Parameters
----------
data : numpy.ndarray with integer-dtype
unit : str, default "ns"
The timedelta unit to treat integers as multiples of.
Returns
-------
numpy.ndarray : timedelta64[ns] array converted from data
bool : whether a copy was made
"""
copy_made = False
unit = unit if unit is not None else "ns"
if data.dtype != np.int64:
# converting to int64 makes a copy, so we can avoid
# re-copying later
data = data.astype(np.int64)
copy_made = True
if unit != "ns":
dtype_str = f"timedelta64[{unit}]"
data = data.view(dtype_str)
# TODO: watch out for overflows when converting from lower-resolution
data = data.astype("timedelta64[ns]")
# the astype conversion makes a copy, so we can avoid re-copying later
copy_made = True
else:
data = data.view("timedelta64[ns]")
return data, copy_made
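# Worked example for the unit handling above (values follow from the view/astype path):
#   ints_to_td64ns(np.array([1, 2], dtype=np.int64), unit="s")
#   -> (array([1000000000, 2000000000], dtype='timedelta64[ns]'), True)
# The integers are first viewed as timedelta64[s] and then upcast to nanoseconds; the
# astype makes a copy, so copy_made is True.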
def objects_to_td64ns(data, unit=None, errors="raise"):
"""
Convert a object-dtyped or string-dtyped array into an
timedelta64[ns]-dtyped array.
Parameters
----------
data : ndarray or Index
unit : str, default "ns"
The timedelta unit to treat integers as multiples of.
Must not be specified if the data contains a str.
errors : {"raise", "coerce", "ignore"}, default "raise"
How to handle elements that cannot be converted to timedelta64[ns].
See ``pandas.to_timedelta`` for details.
Returns
-------
numpy.ndarray : timedelta64[ns] array converted from data
Raises
------
ValueError : Data cannot be converted to timedelta64[ns].
Notes
-----
    Unlike `pandas.to_timedelta`, setting `errors=ignore` here does not cause
    errors to be ignored; they are caught and subsequently ignored at a
    higher level.
"""
# coerce Index to np.ndarray, converting string-dtype if necessary
values = np.array(data, dtype=np.object_, copy=False)
result = array_to_timedelta64(values, unit=unit, errors=errors)
return result.view("timedelta64[ns]")
def _validate_td64_dtype(dtype):
dtype = pandas_dtype(dtype)
if is_dtype_equal(dtype, np.dtype("timedelta64")):
# no precision disallowed GH#24806
msg = (
"Passing in 'timedelta' dtype with no precision is not allowed. "
"Please pass in 'timedelta64[ns]' instead."
)
raise ValueError(msg)
if not is_dtype_equal(dtype, TD64NS_DTYPE):
raise ValueError(f"dtype {dtype} cannot be converted to timedelta64[ns]")
return dtype
|
bsd-3-clause
|
qiudebo/13learn
|
code/matplotlib/aqy/aqy_lines_bars3.py
|
1
|
1499
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
__author__ = 'qiudebo'
import numpy as np
import matplotlib.pyplot as plt
if __name__ == '__main__':
plt.rcdefaults()
fig, ax = plt.subplots()
labels = (u"硕士以上", u"本科", u"大专", u"高中-中专", u"初中", u"小学")
x = (0.13, 0.22, 0.25, 0.18, 0.13, 0.10)
x1 = (-0.11, -0.19, -0.23, -0.20, -0.17, -0.10)
    width = 0.35  # bar width
y = np.arange(len(labels))
ax.barh(y, x, width, align='center', color='g', label=u'我的前半生')
ax.barh(y, x1, width, align='center', color='b', label=u'三生三世十里桃花')
ax.set_yticks(y + width/2)
ax.set_yticklabels(labels)
ax.invert_yaxis()
ax.set_xlabel('')
ax.set_title('')
    # ax.yaxis.grid(True)  # horizontal grid lines
    ax.xaxis.grid(True)
    plt.xlim(-0.3, 0.3)
    plt.rcParams['font.sans-serif'] = ['SimHei']  # render the Chinese tick labels correctly
    plt.rcParams['axes.unicode_minus'] = False  # render the minus sign correctly
    # ax.set_xticks(())  # hide the x-axis ticks
    # configure the legend
# plt.legend(loc="lower right", bbox_to_anchor=[1, 0.95],shadow=True,
# ncol=1, title="Legend", fancybox=True)
ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05),
fancybox=True, shadow=True, ncol=6)
    # remove the frame around the legend
# plt.legend(frameon=False)
ax.get_legend().get_title().set_color("red")
plt.show()
|
mit
|
LEX2016WoKaGru/pyClamster
|
scripts/cam_kite/cam_kite_doppel_all.py
|
1
|
9888
|
#!/usr/bin/env python3
import pyclamster
from pyclamster.coordinates import Coordinates3d
import numpy as np
import matplotlib.pyplot as plt
import logging
import pickle
import datetime
logging.basicConfig(level=logging.DEBUG)
def read_theo_data(file_list):
m = []
for measure_files in file_list:
time1 = []
star1 = []
azim1 = []
elev1 = []
for k in range(len(measure_files)):
time1.append([])
star1.append([])
azim1.append([])
elev1.append([])
with open(measure_files[k]) as f:
f.readline()
eof = False
while not eof:
line = f.readline()
if line[0]=='E':
eof = True
time1[-1].append(int(line[3:6]))
star1[-1].append(1 if line[6:7]=='*' else 0)
azim1[-1].append(pyclamster.utils.deg2rad(float(line[7:13])))
elev1[-1].append(pyclamster.utils.deg2rad(float(line[15:20])))
time = []
star = []
azim = []
elev = []
for i in range(len(measure_files)):
time.append(np.array(time1[i]))
star.append(np.array(star1[i]))
azim.append(np.array(azim1[i]))
elev.append(np.array(elev1[i]))
maxlen = min([len(time[i]) for i in [0,1]])
m1 = np.empty([6,maxlen])
m1[0,:] = time[0][:maxlen]
m1[1,:] = star[0][:maxlen]+star[1][:maxlen]
m1[2,:] = azim[0][:maxlen]
m1[3,:] = azim[1][:maxlen]
m1[4,:] = elev[0][:maxlen]
m1[5,:] = elev[1][:maxlen]
m.append(m1)
return m
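# Record layout assumed by the fixed-width parser above (inferred from the slicing, not
# from separate documentation): chars 3-6 = time in seconds, char 6 = optional '*' quality
# flag, chars 7-13 = azimuth in degrees, chars 15-20 = elevation in degrees; a line
# starting with 'E' marks the end of the file. Angles are converted to radians on read.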
#t3m1,t4m1,t3m2,t4m2
datafiles = [["scripts/theo_kite/Theo_daten/rot/000R_20160901_110235.td4",
"scripts/theo_kite/Theo_daten/gelb/000G_20160901_110235.td4"],
["scripts/theo_kite/Theo_daten/rot/000R_20160901_112343.td4",
"scripts/theo_kite/Theo_daten/gelb/000G_20160901_112344.td4"]]
data = read_theo_data(datafiles)
theo3_gk = Coordinates3d(x=4450909.840, y=6040800.456, z=6
,azimuth_offset=3/2*np.pi,azimuth_clockwise=True)
theo4_gk = Coordinates3d(x=4450713.646, y=6040934.273, z=1
,azimuth_offset=3/2*np.pi,azimuth_clockwise=True)
hotel_gk = Coordinates3d(x=4449525.439, y=6041088.713
,azimuth_offset=3/2*np.pi,azimuth_clockwise=True)
nordcorr3 = -np.arctan2(hotel_gk.y-theo3_gk.y,hotel_gk.x-theo3_gk.x)+np.pi*0.5
nordcorr4 = -np.arctan2(hotel_gk.y-theo4_gk.y,hotel_gk.x-theo4_gk.x)+np.pi*0.5
# first measurements
theo3_1 = Coordinates3d(
azimuth = pyclamster.pos_rad(data[0][2,:]+nordcorr3),
elevation = data[0][4,:],
azimuth_clockwise = True,
azimuth_offset = 3/2*np.pi,
elevation_type = "ground"
)
theo4_1 = Coordinates3d(
azimuth = pyclamster.pos_rad(data[0][3,:]+nordcorr4),
elevation = data[0][5,:],
azimuth_clockwise = True,
azimuth_offset = 3/2*np.pi,
elevation_type = "ground"
)
# second measurements
theo3_2 = Coordinates3d(
azimuth = pyclamster.pos_rad(data[1][2,:]+nordcorr3),
elevation = data[1][4,:],
azimuth_clockwise = True,
azimuth_offset = 3/2*np.pi,
elevation_type = "ground"
)
theo4_2 = Coordinates3d(
azimuth = pyclamster.pos_rad(data[1][3,:]+nordcorr4),
elevation = data[1][5,:],
azimuth_clockwise = True,
azimuth_offset = 3/2*np.pi,
elevation_type = "ground"
)
# calculate 3d positions via doppelanschnitt
doppel1,var_list1 = pyclamster.doppelanschnitt_Coordinates3d(
aziele1 = theo3_1, aziele2 = theo4_1,
pos1 = theo3_gk, pos2 = theo4_gk,plot_info=True
)
doppel2,var_list2 = pyclamster.doppelanschnitt_Coordinates3d(
aziele1 = theo3_2, aziele2 = theo4_2,
pos1 = theo3_gk, pos2 = theo4_gk,plot_info=True
)
session3_old = pickle.load(open('data/sessions/FE3_session.pk','rb'))
session4_old = pickle.load(open('data/sessions/FE4_session.pk','rb'))
session3_new = pickle.load(open('data/sessions/FE3_session_new.pk','rb'))
session4_new = pickle.load(open('data/sessions/FE4_session_new.pk','rb'))
cam3_old,time3_old=pickle.load(open("data/cam_kite/FE3_cam_kite.pk","rb"))
cam4_old,time4_old=pickle.load(open("data/cam_kite/FE4_cam_kite.pk","rb"))
time4_old = time3_old
cam3_new,time3_new=pickle.load(open("data/cam_kite/FE3_cam_kite_new.pk","rb"))
cam3_new.elevation = np.pi/2 - cam3_new.elevation
cam4_new,time4_new=pickle.load(open("data/cam_kite/FE4_cam_kite_new.pk","rb"))
cam4_new.elevation = np.pi/2 - cam4_new.elevation
# calculate 3d positions via doppelanschnitt
doppel_cam_old,var_list_old = pyclamster.doppelanschnitt_Coordinates3d(
aziele1 = cam3_old, aziele2 = cam4_old,
pos1 = session3_old.position, pos2 = session4_old.position,plot_info=True
)
# calculate 3d positions via doppelanschnitt new
doppel_cam_new,var_list_new = pyclamster.doppelanschnitt_Coordinates3d(
aziele1 = cam3_new, aziele2 = cam4_new,
pos1 = session3_new.position, pos2 = session4_new.position,plot_info=True
)
if 0:# plot results doppelanschnitt_plot function
pyclamster.doppelanschnitt_plot('theo1',doppel1,var_list1,theo3_gk,theo4_gk,
plot_view=1,plot_position=1,plot_n=1)
pyclamster.doppelanschnitt_plot('theo2',doppel2,var_list2,theo3_gk,theo4_gk,
plot_view=1,plot_position=1,plot_n=1)
pyclamster.doppelanschnitt_plot('cam_old',doppel_cam_old,var_list_old,
session3_old.position,session4_old.position,
plot_view=1,plot_position=1,plot_n=1)
pyclamster.doppelanschnitt_plot('cam_new',doppel_cam_new,var_list_new,
session3_new.position,session4_new.position,
plot_view=1,plot_position=1,plot_n=1)
if 0:# plot results theo
plt.style.use("fivethirtyeight")
ax = doppel1.plot3d(method="line")
ax.set_title("THEO first measurement")
ax.scatter3D(theo3_gk.x,theo3_gk.y,theo3_gk.z,label="Theo 3",c='r')
ax.scatter3D(theo4_gk.x,theo4_gk.y,theo4_gk.z,label="Theo 4",c='g')
ax.set_zlim(0,200)
plt.legend()
ax = doppel2.plot3d(method="line")
ax.set_title("THEO second measurement")
ax.scatter3D(theo3_gk.x,theo3_gk.y,theo3_gk.z,label="Theo 3",c='r')
ax.scatter3D(theo4_gk.x,theo4_gk.y,theo4_gk.z,label="Theo 4",c='g')
ax.set_zlim(0,200)
plt.legend()
if 0:# plot results cam
ax = doppel_cam_old.plot3d(method="line")
ax.scatter3D(session3_old.position.x,session3_old.position.y,
session3_old.position.z,label="Cam 3",c='r')
ax.scatter3D(session4_old.position.x,session4_old.position.y,
session4_old.position.z,label="Cam 4",c='g')
ax.set_title("CAM all with bad calibration [BUG, DON'T INTERPRET THIS!]")
ax.set_zlim(0,300)
plt.legend()
ax = doppel_cam_new.plot3d(method="line")
ax.scatter3D(session3_new.position.x,session3_new.position.y,
session3_old.position.z,label="Cam 3",c='r')
ax.scatter3D(session4_new.position.x,session4_new.position.y,
session4_old.position.z,label="Cam 4",c='g')
ax.set_title("CAM all with new calibration")
ax.set_zlim(0,300)
plt.legend()
if 1:# plot time series
#fig, ax = plt.subplots()
#ax.plot(time3_old,cam3_old.elevation,label="cam3 old elevation")
#ax.plot(time3_old,cam3_old.azimuth,label="cam3 old azimuth")
#ax.plot(time4_old,cam4_old.elevation,label="cam4 old elevation")
#ax.plot(time4_old,cam4_old.azimuth,label="cam4 old azimuth")
#plt.legend(loc="best")
fig, ax = plt.subplots()
ax.plot(time3_new,cam3_new.azimuth/np.pi*180,label="cam3 new azimuth")
ax.plot(time4_new,cam4_new.azimuth/np.pi*180,label="cam4 new azimuth")
plt.legend(loc="best")
ax.set_title("Drachen Doppelanschnitt Azimuth")
ax.set_xlabel("Zeit [Uhr]")
ax.set_ylabel("Winkel [°]")
starttime1 = datetime.datetime(2016,9,1,11+1,2,35) #utc +1
timeseries1 = np.array([starttime1+datetime.timedelta(seconds = data[0][0,i]) for i in range(len(data[0][0,:]))])
ax.plot(timeseries1,theo3_1.azimuth/np.pi*180,'x',label="theo3 (first) azimuth")
ax.plot(timeseries1,theo4_1.azimuth/np.pi*180,'x',label="theo4 (first) azimuth")
plt.legend(loc="best")
starttime2 = datetime.datetime(2016,9,1,11+1,23,44) #utc +1
timeseries2 = np.array([starttime2+datetime.timedelta(seconds = data[1][0,i]) for i in range(len(data[1][0,:]))])
ax.plot(timeseries2,theo3_2.azimuth/np.pi*180,'x',label="theo3 (second) azimuth")
ax.plot(timeseries2,theo4_2.azimuth/np.pi*180,'x',label="theo4 (second) azimuth")
plt.legend(loc="best")
fig, ax = plt.subplots()
ax.plot(time3_new,cam3_new.elevation/np.pi*180,label="cam3 new elevation")
ax.plot(time4_new,cam4_new.elevation/np.pi*180,label="cam4 new elevation")
plt.legend(loc="best")
ax.set_title("Drachen Doppelanschnitt Elevation")
ax.set_xlabel("Zeit [Uhr]")
ax.set_ylabel("Winkel [°]")
starttime1 = datetime.datetime(2016,9,1,11+1,2,35) #utc +1
timeseries1 = np.array([starttime1+datetime.timedelta(seconds = data[0][0,i]) for i in range(len(data[0][0,:]))])
ax.plot(timeseries1,theo3_1.elevation/np.pi*180,'x',label="theo3 (first) elevation")
ax.plot(timeseries1,theo4_1.elevation/np.pi*180,'x',label="theo4 (first) elevation")
plt.legend(loc="best")
starttime2 = datetime.datetime(2016,9,1,11+1,23,44) #utc +1
timeseries2 = np.array([starttime2+datetime.timedelta(seconds = data[1][0,i]) for i in range(len(data[1][0,:]))])
ax.plot(timeseries2,theo3_2.elevation/np.pi*180,'x',label="theo3 (second) elevation")
ax.plot(timeseries2,theo4_2.elevation/np.pi*180,'x',label="theo4 (second) elevation")
plt.legend(loc="best")
plt.show()
|
gpl-3.0
|
BennettLandman/pyPheWAS
|
Novelty_PheDAS/novelty_pubmed_reg_preproc.py
|
1
|
4578
|
import pandas as pd
import argparse
import os
import numpy as np
import os.path as osp
from ast import literal_eval
from tqdm import tqdm
def parse_args():
parser = argparse.ArgumentParser(description="Add PubMED results to pyPheWAS stat file")
parser.add_argument('--statfile', required=True, type=str, help='Name of the stat file (e.g. regressions.csv)')
parser.add_argument('--dx_pm', required=True, type=str, help='Name of the Dx PubMED file (e.g. dx_PubMED_results.csv)')
parser.add_argument('--pm_dir', required=True, type=str, help ='Path to PheCode PubMED directory')
parser.add_argument('--path', required=False, default='.', type=str, help='Path to all input files and destination of output files')
parser.add_argument('--outfile', required=False, default=None, type=str, help='Name of the updated regression output file (default: same as statfile)')
args = parser.parse_args()
return args
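# Example invocation (hypothetical file names chosen to match the argument help strings above):
#   python novelty_pubmed_reg_preproc.py --statfile regressions.csv \
#       --dx_pm dx_PubMED_results.csv --pm_dir ./phecode_pubmed_counts \
#       --path ./study --outfile regressions_with_pubmed.csv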
def main():
### Get Args ###
args = parse_args()
if args.outfile is None:
outfile = osp.join(args.path, args.statfile)
else:
outfile = osp.join(args.path, args.outfile)
reg_f = open(osp.join(args.path, args.statfile))
reg_hdr = reg_f.readline()
# TODO: Add str back in - fix saving of reg file
reg = pd.read_csv(reg_f) # , dtype={"PheWAS Code":str})
reg_f.close()
reg.rename(columns={'Conf-interval beta':'beta.ci','std_error':'beta.se'},inplace=True)
dx_pubmed = pd.read_csv(osp.join(args.path, args.dx_pm))
pubmed_dir = args.pm_dir
### Set-up Dx list of UIDs ###
dx_set = set(literal_eval(dx_pubmed.loc[0, "IdsList"]))
dx_count = len(dx_set)
reg["AD.PM.count"] = dx_count
### Get PheCode PubMED counts & joint PubMED counts ###
reg["phe.PM.count"] = np.nan
reg["joint.PM.count"] = np.nan
reg["P.PM.phe.given.AD"] = np.nan
reg["P.PM.AD.given.phe"] = np.nan
tmp = open("tmp.txt","w+")
pubmed_file_list = os.listdir(pubmed_dir)
for j in tqdm(range(0,1856)):
fname = "phewas_counts_%d.csv" % j
if fname in pubmed_file_list:
try:
tmp.write("%s\n" % fname)
phecode_pubmed = pd.read_csv(osp.join(pubmed_dir, fname)) # , dtype={"phewas_code": str})
except Exception as e:
ef = open("novelty_pubmed_errors.csv", 'a+')
ef.write("error opening%s,%s" % (fname, e.args[0]))
ef.close()
for ix, data in phecode_pubmed.iterrows():
phe = data["phewas_code"]
tmp.write("%s\n" % phe)
if phe in reg["PheWAS Code"].values:
phe_ix = reg["PheWAS Code"] == phe
phe_set = set(literal_eval(data["IdsList"]))
reg.loc[phe_ix, "phe.PM.count"] = len(phe_set)
joint_set = dx_set.intersection(phe_set)
reg.loc[phe_ix, "joint.PM.count"] = len(joint_set)
if len(dx_set) != 0:
reg.loc[phe_ix, "P.PM.phe.given.AD"] = len(joint_set) / len(dx_set)
if len(phe_set) != 0:
reg.loc[phe_ix, "P.PM.AD.given.phe"] = len(joint_set) / len(phe_set)
for j in tqdm(['','2','3']):
fname = "phewas_counts_missed%s.csv" % j
if fname in pubmed_file_list:
try:
tmp.write("%s\n" % fname)
phecode_pubmed = pd.read_csv(osp.join(pubmed_dir, fname)) # , dtype={"phewas_code": str})
except Exception as e:
ef = open("novelty_pubmed_errors.csv", 'a+')
ef.write("error opening%s,%s" % (fname, e.args[0]))
ef.close()
for ix, data in phecode_pubmed.iterrows():
phe = data["phewas_code"]
tmp.write("%s\n" % phe)
if phe in reg["PheWAS Code"].values:
phe_ix = reg["PheWAS Code"] == phe
phe_set = set(literal_eval(data["IdsList"]))
reg.loc[phe_ix, "phe.PM.count"] = len(phe_set)
joint_set = dx_set.intersection(phe_set)
reg.loc[phe_ix, "joint.PM.count"] = len(joint_set)
if len(dx_set) != 0:
reg.loc[phe_ix, "P.PM.phe.given.AD"] = len(joint_set) / len(dx_set)
if len(phe_set) != 0:
reg.loc[phe_ix, "P.PM.AD.given.phe"] = len(joint_set) / len(phe_set)
tmp.close()
reg.to_csv(outfile,index=False)
if __name__ == '__main__':
main()
|
mit
|
alexcritschristoph/VICA
|
identify_host.py
|
4
|
11633
|
'''
Alex Crits-Christoph
License: GPL3
Identifies closest GenBank prokaryote genomes by euclidean distance between tetranucleotide frequencies of query sequences.
'''
import json
from scipy.spatial import distance
import sys
from Bio import SeqIO
from marker_genes import meta_marker
import operator
import argparse
import os.path
from sklearn import manifold
import matplotlib.pyplot as plt
import numpy as np
from collections import Counter
##Calculates the tetramer counts for an input sequence
def calc_tetra(seqs):
#Create tetramers dict
tetramers = {}
for a in ['A', 'C', 'G', 'T']:
for b in ['A', 'C', 'G', 'T']:
for c in ['A', 'C', 'G', 'T']:
for d in ['A', 'C', 'G', 'T']:
tetramers[a+b+c+d] = 0
#Count tetramers across sequence in a 4 bp sliding window
start = 0
end = 4
for i in range(0,len(str(seqs.seq))):
if len(str(seqs.seq[start:end])) == 4:
try:
tetramers[str(seqs.seq[start:end])] += 1
except:
pass
start += 1
end += 1
#Return tetramers dictionary
return tetramers
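# Worked example for the sliding window above (hand-computed, not executed): a record whose
# sequence is "ACGTAC" yields the 4-mers ACGT, CGTA and GTAC, so those three keys get count 1
# and the remaining 253 tetramers stay 0; windows shorter than 4 bp at the end of the
# sequence are skipped by the length check.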
##
##
def read_data(tetramers, data):
#Normalize
total = sum(tetramers.values())
for k in tetramers.keys():
tetramers[k] = float(tetramers[k]) / float(total)
for species in data.keys():
total = sum(data[species].values())
for k in data[species].keys():
data[species][k] = float(data[species][k]) / float(total)
#compare
query_dat = []
for d in sorted(tetramers.keys()):
query_dat.append(tetramers[d])
distances = {}
for species in data.keys():
subject_data = []
for d in sorted(data[species].keys()):
subject_data.append(data[species][d])
distances[species] = round(distance.euclidean(query_dat, subject_data),5)
count = 0
result_string = ''
for w in sorted(distances, key=distances.get):
if count <= 3:
result_string += "(" + w + ", " + str(distances[w]) + "), "
count += 1
else:
break
return result_string
def tetrat_compare(tetramers1, results):
#Normalize
distances = {}
real_names = {}
for tets in results[0]:
tetramers2 = results[0][tets]
name = results[1][tets] + " (" + tets + ")"
real_names[results[1][tets] + " (" + tets + ")"] = tets
total = sum(tetramers1.values())
for k in tetramers1.keys():
tetramers1[k] = float(tetramers1[k]) / float(total)
total = sum(tetramers2.values())
for k in tetramers2.keys():
tetramers2[k] = float(tetramers2[k]) / float(total)
query_dat = []
for d in sorted(tetramers1.keys()):
query_dat.append(tetramers1[d])
subject_dat = []
for d in sorted(tetramers2.keys()):
subject_dat.append(tetramers2[d])
distances[name] = round(distance.euclidean(query_dat, subject_dat),5)
return [real_names, sorted(distances.items(), key=operator.itemgetter(1))]
def visualize(data, file_name):
tetramer_array = []
names = data[0][1]
sizes = data[0][2]
contig_count = 0
name_pos = []
for t in sorted(data[0][0].keys()):
name_pos.append(t)
contig_count += 1
temp = []
for tet in sorted(data[0][0][t].keys()):
temp.append(data[0][0][t][tet])
tetramer_array.append(temp)
for t in sorted(data[1].keys()):
temp = []
for tet in sorted(data[1][t].keys()):
temp.append(data[1][t][tet])
tetramer_array.append(temp)
tetramers_np = np.array(tetramer_array)
seed = np.random.RandomState(seed=3)
mds = manifold.MDS(n_components=2, max_iter=3000, eps=1e-9, random_state=seed,
dissimilarity="euclidean", n_jobs=1)
fit = mds.fit_transform(tetramers_np)
#Contig sizing
for contig in sizes:
sizes[contig] = float(sizes[contig]) / float(max(sizes.values())) * 250 + 35
sizes_list = []
for contig in sorted(sizes.keys()):
sizes_list.append(sizes[contig])
#Contig colors
color_assigned = {}
color_list = []
color_brewer = ['#1f78b4', '#33a02c', '#e31a1c', '#ff7f00', '#6a3d9a', '#b15928', '#a6cee3', '#b2df8a', '#fb9a99', '#fdbf6f', '#cab2d6', '#ffff99']
i = 0
#assign colors
most_common_names = Counter(names.values()).most_common()
for name in most_common_names:
if len(name) > 1:
n = name[0]
if n not in color_assigned:
if i < len(color_brewer)-1:
color_assigned[n] = color_brewer[i]
i += 1
else:
color_assigned[n] = color_brewer[i]
else:
color_assigned[n] = '#ffff99'
#create color list
for name in sorted(names.keys()):
color_list.append(color_assigned[names[name]])
#plot it
plt.scatter(fit[0:contig_count,0], fit[0:contig_count:,1], s=sizes_list, c=color_list, marker= 'o', alpha=0.9)
plt.scatter(fit[contig_count:,0], fit[contig_count:,1], marker= 'x', s=50, c="#ef1a1a", alpha=1)
#Output graphical data to file
f = open('./pca_data.txt', 'w')
f.write("Host contigs:\n")
f.write("Contig name, BLAST hit, x coord, y coord\n")
i = 0
for t in sorted(data[0][0].keys()):
f.write(str(t) + "," + str(names[t]) + "," + str(fit[i,0]) + "," + str(fit[i,1]) + "\n")
i += 1
f.write("Viral contigs:\n")
f.write("Contig name, closest host contig, closest host name, x coord, y coord\n")
for t in sorted(data[1].keys()):
name = data[2][t]
f.write(str(t) + "," + str(name) + "," + str(names[name]) + "," + str(fit[i,0]) + "," + str(fit[i,1]) + "\n")
i += 1
f.close()
#Add labels
positions = {}
i = 0
avg_size = sum(sizes.values()) / len(sizes.values())
for name in sorted(names.keys()):
if len(names.keys()) > 5 and sizes[name] > avg_size:
positions[names[name]] = [fit[i,0], fit[i,1]]
i += 1
#Plot virus - closest host lines
i = 0
for t in sorted(data[1].keys()):
#get pos of name in fit
name = name_pos.index(data[2][t])
#plot line
plt.plot([fit[contig_count+i,0], fit[name,0]], [fit[contig_count+i,1], fit[name,1]], alpha=0.5)
# plt.annotate(
# t,
# xy = (fit[contig_count+i,0], fit[contig_count+i,1]), xytext = (0, -20),
# textcoords = 'offset points', ha = 'right', va = 'bottom',
# fontsize = 8,
# bbox = dict(boxstyle = 'round,pad=0.2', fc = 'white', alpha = 0.5),
# arrowprops = dict(arrowstyle = '->', connectionstyle = 'arc3,rad=0'))
i += 1
for name in positions.keys():
x = positions[name][0]
y = positions[name][1]
plt.annotate(
name,
xy = (x, y), xytext = (-30, 30),
textcoords = 'offset points', ha = 'right', va = 'bottom',
fontsize = 10,
bbox = dict(boxstyle = 'round,pad=0.2', fc = 'white', alpha = 0.5),
arrowprops = dict(arrowstyle = '->', connectionstyle = 'arc3,rad=0'))
plt.savefig(file_name)
plt.show()
## Main Function
if __name__ == "__main__":
## Handle arguments.
__author__ = "Alex Crits-Christoph"
parser = argparse.ArgumentParser(description='Predicts host(s) for a contig through tetranucleotide similarity.')
parser.add_argument('-i','--input', help='Viral contig(s) fasta file',required=True)
parser.add_argument('-a','--assembly',help='Cellular metagenome assembly FASTA file (to search for host marker genes).', required=False)
parser.add_argument('-v', '--visualize', help="Visualizes NMDS of tetranucleotide frequencies for host contigs and viral contigs.", required=False, action='store_true')
parser.add_argument('-b', '--blast_path', help="Path to the BLASTP executable (default: blastp).", required=False)
parser.add_argument('-hm', '--hmmsearch_path', help="Path to the hmmsearch executable (default: hmmsearch).", required=False)
parser.add_argument('-o', '--output', help="Path to the directory to store output (will create if does not exist).", required=False)
args = parser.parse_args()
#Check to see if output directory exists
if args.output:
if os.path.isdir(args.output):
output_dir = args.output
else:
os.system("mkdir " + args.output)
if os.path.isdir(args.output):
output_dir = args.output
else:
print "[ERROR] Could not create output directory."
sys.exit(1)
else:
output_dir = './'
blast_path = 'blastp'
if args.blast_path:
blast_path = args.blast_path
hmmsearch_path = 'hmmsearch'
if args.hmmsearch_path:
hmmsearch_path = args.hmmsearch_path
if args.input:
if os.path.isfile(args.input):
contigs = args.input
else:
print "[ERROR] The provided viral contig FASTA file was not found"
sys.exit()
else:
print "[ERROR] No viral contig FASTA file was provided"
sys.exit()
database = './data/tetramer_database.dat'
if os.path.isfile('./data/tetramer_database.dat'):
print "[SETUP] Found tetramer database file, using " + database + "."
else:
print "[ERROR] could not find tetramer database. Check for ./data/tetramer_database.dat in the local directory"
sys.exit()
if args.assembly:
if os.path.isfile(args.input):
print "[SETUP] Assembly included. Will search for host marker genes in the assembly and compare viral contigs to genomes of identified hosts"
metagenome = args.assembly
else:
print "[ERROR] The provided metagenome assembly contig FASTA file was not found"
sys.exit()
else:
print "[SETUP] No assembly file included. Will compare viral contigs to all host genomes in GenBank"
#Read in viral queries
handle = open(contigs, "rU")
records = list(SeqIO.parse(handle, "fasta"))
handle.close()
print "[EXEC] Loading database"
with open(database) as data_file:
data = json.load(data_file)
if args.assembly:
print "[EXEC] Searching for marker contigs in metagenome assembly"
results = meta_marker.find_markers(metagenome, blast_path, hmmsearch_path, output_dir)
# #Compare with database
# print "[EXEC] Comparing query tetranucleotide frequencies to entire database"
# f = open(output_dir.rstrip("/") + "/" + contigs.split("/")[-1] + "_genbank.txt", 'w+')
# f.write("Viral contig name: (Closest GenBank match, Euclidean Distance)\n")
# for seqs in records:
# tetramers = calc_tetra(seqs)
# result_string = read_data(tetramers, data)
# f.write(seqs.id + ": " + result_string + "\n")
# f.close()
if args.assembly:
#Compare with contigs
print "[EXEC] Comparing query tetranucleotide frequencies to contigs with marker proteins"
f = open(output_dir.rstrip("/") + "/" + contigs.split("/")[-1].split(".")[0] + "_markercontigs.txt", 'w+')
f.write("Viral contig name: (Closest contig identity (contig name), Euclidean Distance to marker contig)\n")
combined_results = [results, {}, {}]
for seqs in records:
tetramers = calc_tetra(seqs)
tetrats = tetrat_compare(tetramers, results)
top_three = tetrats[1][:3]
f.write(seqs.id + ": " + str(top_three).replace("]","").replace("[","") + "\n")
combined_results[1][seqs.id] = tetramers
combined_results[2][seqs.id] = tetrats[0][top_three[0][0]]
f.close()
#Compare with closest BLAST matches in the database
print "[EXEC] Comparing query tetranucleotide frequencies to genomes of species identified in metagenome"
f = open(output_dir.rstrip("/") + "/" + contigs.split("/")[-1].split(".")[0] + "_markergenbank.txt", 'w+')
f.write("Viral contig name: (Identified species (marker contig name), Euclidean Distance to GenBank genome)\n")
#Create subset of genomes which were found in this assembly
genomes_in_metagenome = {}
species_to_contigs = {}
for contig in results[1]:
genus = results[1][contig]
for species in data.keys():
if genus in species:
genomes_in_metagenome[species + " (" + contig + ")"] = data[species]
species_to_contigs[species] = contig
for seqs in records:
tetramers = calc_tetra(seqs)
result_string = read_data(tetramers, genomes_in_metagenome)
f.write(seqs.id + ": " + result_string + "\n")
f.close()
#Visualization
print "[EXEC] Visualizing results"
if args.visualize:
visualize(combined_results, output_dir.rstrip("/") + "/" + contigs.split("/")[-1].split(".")[0] + ".png")
|
gpl-2.0
|
radk0s/pathfinding
|
algorithm/adaptation.py
|
1
|
4143
|
import numpy as np
import matplotlib.pyplot as plt
import scipy.interpolate
import random
import copy
import yaml
from datetime import datetime
from cost import cost as costNorm
import annealing
def prepareElevationFunction(file, x, y, z):
    with open(file, 'r') as data_file:
        for line in data_file.readlines():
            val = line.split('\t')
            x.append(float(val[0]))
            y.append(float(val[1]))
            z.append(float(val[2]))
x = np.array(x)
y = np.array(y)
z = np.array(z)
xi, yi = np.linspace(x.min(), x.max(), 100), np.linspace(y.min(), y.max(), 100)
xi, yi = np.meshgrid(xi, yi)
rbf = scipy.interpolate.Rbf(x, y, z, function='linear')
return (rbf(xi, yi), rbf)
def calculateSegmentCost(start, end, elevationFn):
    # cost of a single leg, computed with the same normalised cost function
    # that calculateTotalCost uses below
    return costNorm(start[0], start[1], elevationFn(start[0], start[1]),
                    end[0], end[1], elevationFn(end[0], end[1]))
def calculateTotalCost(path, elevationFn):
total_cost= 0
for i in xrange(len(path) - 1):
total_cost += costNorm(path[i][0], path[i][1], elevationFn(path[i][0], path[i][1]),
path[i + 1][0], path[i + 1][1], elevationFn(path[i + 1][0], path[i + 1][1]))
return total_cost
# generate initial solution
# TODO: with the points defined this way (i.e. reversed, (lon, lat) instead of (lat, lon)), 'from geopy.distance import vincenty' would compute the distances incorrectly!
def generateRandomPoints(start, end, x, y, z, randomPoints):
points = [start, end]
return points[:1] + zip(random.sample(x, randomPoints), random.sample(y, randomPoints)) + points[1:]
def generatePoints(start, end, parts):
randomPoints = 0
points = [start, end]
diff_x = end[0] - start[0]
diff_y = end[1] - start[1]
step_x = diff_x/parts
step_y = diff_y/parts
return points[:1] + [(start[0] + step_x * i, start[1] + step_y * i) for i in range(parts)] + points[1:]
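# Worked example (reading generatePoints above): generatePoints((0, 0), (10, 10), 5)
# yields [(0, 0), (0, 0), (2, 2), (4, 4), (6, 6), (8, 8), (10, 10)] -- note that the
# i == 0 term repeats the start point, so the seed path contains that vertex twice.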
def drawPlot(filename, points, x, y, z, mesh, totalCost, steps, elapsed =0):
xx, yy = zip(*points)
plt.imshow(mesh, vmin=np.array(z).min(), vmax=np.array(z).max(), origin='lower',
extent=[np.array(x).min(), np.array(x).max(), np.array(y).min(), np.array(y).max()], cmap='terrain')
plt.plot(xx, yy, color='red', lw=2)
plt.colorbar()
text = 'path cost: ' + str(totalCost) + '\n' + \
'steps: ' + str(steps) + '\n' +\
'time: ' + str(elapsed)
plt.suptitle(text, fontsize=14, fontweight='bold')
plt.savefig(filename + '.png')
plt.close()
# plt.show()
def adaptation_path(configfile, pointsD = None):
start_time = datetime.now()
points = copy.deepcopy(pointsD);
config = None
with open(configfile, 'r') as stream:
config = yaml.load(stream)
steps = None
print(points)
if config['steps'] != None:
steps = int(config['steps'])
x, y, z = [], [], []
mesh, getElevation = prepareElevationFunction(config['data_file'], x, y, z)
if points == None:
if config['random_points'] == 1:
points = generateRandomPoints(
(config['start_lon'], config['start_lat']),
(config['end_lon'], config['end_lat']),
x, y, z, int(config['initial_points_count']))
else:
points = generatePoints(
(config['start_lon'], config['start_lat']),
(config['end_lon'], config['end_lat']),
int(config['initial_points_count']))
pathCost = calculateTotalCost(points, getElevation)
elapsed = datetime.now() - start_time
drawPlot('initial_path', points, x, y, z, mesh, calculateTotalCost(points, getElevation), 0, elapsed)
tsp = annealing.TSP(points, getElevation, calculateTotalCost, 200,
np.array(x).min(), np.array(x).max(),
np.array(y).min(), np.array(y).max()
)
if steps != None:
tsp.steps = steps;
tsp.copy_strategy = "slice"
state, e = tsp.anneal()
elapsed = datetime.now() - start_time
    drawPlot('optimized_path_' + (str(steps) if steps != None else 'convergence'), state, x, y, z, mesh, calculateTotalCost(state, getElevation), tsp.currentStep, elapsed)
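# --- Usage sketch -------------------------------------------------------------
# A minimal way to drive the optimisation above. The config keys listed are the
# ones adaptation_path() reads; 'config.yml' and the seed coordinates are
# illustrative assumptions, not files shipped with this module.
#   data_file              tab-separated x, y, elevation triples
#   start_lon/start_lat    path start point
#   end_lon/end_lat        path end point
#   random_points          1 -> random intermediate seed points, otherwise a straight-line seed
#   initial_points_count   number of intermediate points in the seed path
#   steps                  annealing step count (leave empty to use the annealer's default)
if __name__ == '__main__':
    adaptation_path('config.yml')
    # or pass an explicit seed path of (lon, lat) pairs:
    # adaptation_path('config.yml', [(19.90, 49.25), (19.95, 49.27), (20.00, 49.30)])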
|
mit
|
RobertABT/heightmap
|
build/matplotlib/examples/widgets/span_selector.py
|
9
|
1091
|
#!/usr/bin/env python
"""
The SpanSelector is a mouse widget to select a xmin/xmax range and plot the
detail view of the selected region in the lower axes
"""
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import SpanSelector
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(211, axisbg='#FFFFCC')
x = np.arange(0.0, 5.0, 0.01)
y = np.sin(2*np.pi*x) + 0.5*np.random.randn(len(x))
ax.plot(x, y, '-')
ax.set_ylim(-2,2)
ax.set_title('Press left mouse button and drag to test')
ax2 = fig.add_subplot(212, axisbg='#FFFFCC')
line2, = ax2.plot(x, y, '-')
def onselect(xmin, xmax):
indmin, indmax = np.searchsorted(x, (xmin, xmax))
indmax = min(len(x)-1, indmax)
    thisx = x[indmin:indmax]
    thisy = y[indmin:indmax]
    if len(thisx) == 0:
        # zero-width drag: nothing to show, keep the previous detail view
        return
    line2.set_data(thisx, thisy)
    ax2.set_xlim(thisx[0], thisx[-1])
    ax2.set_ylim(thisy.min(), thisy.max())
    fig.canvas.draw()
# set useblit True on gtkagg for enhanced performance
span = SpanSelector(ax, onselect, 'horizontal', useblit=True,
rectprops=dict(alpha=0.5, facecolor='red') )
plt.show()
|
mit
|
Chemcy/vnpy
|
vn.trader/ctaStrategy/ctaBacktesting.py
|
3
|
40071
|
# encoding: UTF-8
'''
This file contains the backtesting engine of the CTA module. The backtesting
engine exposes the same API as the live CTA engine, so the same strategy code
can be used for backtesting and for live trading.
'''
from __future__ import division
from datetime import datetime, timedelta
from collections import OrderedDict
from itertools import product
import multiprocessing
import pymongo
from ctaBase import *
from vtConstant import *
from vtGateway import VtOrderData, VtTradeData
from vtFunction import loadMongoSetting
########################################################################
class BacktestingEngine(object):
"""
CTA回测引擎
函数接口和策略引擎保持一样,
从而实现同一套代码从回测到实盘。
"""
TICK_MODE = 'tick'
BAR_MODE = 'bar'
#----------------------------------------------------------------------
def __init__(self):
"""Constructor"""
        # counter for local stop order IDs
        self.stopOrderCount = 0
        # stopOrderID = STOPORDERPREFIX + str(stopOrderCount)
        # local stop order dictionaries
        # key is stopOrderID, value is the StopOrder object
        self.stopOrderDict = {}             # cancelled stop orders are kept in this dict
        self.workingStopOrderDict = {}      # cancelled stop orders are removed from this dict
        # engine type: backtesting
        self.engineType = ENGINETYPE_BACKTESTING
        # backtesting-related settings
        self.strategy = None        # strategy being backtested
        self.mode = self.BAR_MODE   # backtesting mode, defaults to bar data
        self.startDate = ''
        self.initDays = 0
        self.endDate = ''
        self.slippage = 0           # slippage assumed during backtesting
        self.rate = 0               # commission rate assumed during backtesting (percentage-based commission)
        self.size = 1               # contract multiplier, defaults to 1
        self.priceTick = 0          # minimum price increment
        self.dbClient = None        # database client
        self.dbCursor = None        # database cursor
        #self.historyData = []      # list of historical data for backtesting
        self.initData = []          # data used for strategy initialisation
        #self.backtestingData = []  # data used for backtesting
        self.dbName = ''            # backtesting database name
        self.symbol = ''            # backtesting collection name
        self.dataStartDate = None   # start date of the backtesting data, datetime object
        self.dataEndDate = None     # end date of the backtesting data, datetime object
        self.strategyStartDate = None # strategy start date (data before it is used for initialisation), datetime object
        self.limitOrderDict = OrderedDict()     # all limit orders
        self.workingLimitOrderDict = OrderedDict()  # active limit orders, used for matching
        self.limitOrderCount = 0    # limit order ID counter
        self.tradeCount = 0         # trade ID counter
        self.tradeDict = OrderedDict()  # all trades
        self.logList = []           # log records
        # latest market data, used for simulated matching
        self.tick = None
        self.bar = None
        self.dt = None      # latest timestamp
#----------------------------------------------------------------------
def setStartDate(self, startDate='20100416', initDays=10):
"""设置回测的启动日期"""
self.startDate = startDate
self.initDays = initDays
self.dataStartDate = datetime.strptime(startDate, '%Y%m%d')
initTimeDelta = timedelta(initDays)
self.strategyStartDate = self.dataStartDate + initTimeDelta
#----------------------------------------------------------------------
    def setEndDate(self, endDate=''):
        """Set the end date of the backtest"""
        self.endDate = endDate
        if endDate:
            self.dataEndDate = datetime.strptime(endDate, '%Y%m%d')
            # without pushing the time to 23:59 the data of the end date itself would be excluded
            # (datetime.replace returns a new object, so the result has to be assigned back)
            self.dataEndDate = self.dataEndDate.replace(hour=23, minute=59)
#----------------------------------------------------------------------
def setBacktestingMode(self, mode):
"""设置回测模式"""
self.mode = mode
#----------------------------------------------------------------------
def setDatabase(self, dbName, symbol):
"""设置历史数据所用的数据库"""
self.dbName = dbName
self.symbol = symbol
#----------------------------------------------------------------------
def loadHistoryData(self):
"""载入历史数据"""
host, port, logging = loadMongoSetting()
self.dbClient = pymongo.MongoClient(host, port)
collection = self.dbClient[self.dbName][self.symbol]
self.output(u'开始载入数据')
# 首先根据回测模式,确认要使用的数据类
if self.mode == self.BAR_MODE:
dataClass = CtaBarData
func = self.newBar
else:
dataClass = CtaTickData
func = self.newTick
# 载入初始化需要用的数据
flt = {'datetime':{'$gte':self.dataStartDate,
'$lt':self.strategyStartDate}}
initCursor = collection.find(flt)
# 将数据从查询指针中读取出,并生成列表
self.initData = [] # 清空initData列表
for d in initCursor:
data = dataClass()
data.__dict__ = d
self.initData.append(data)
# 载入回测数据
if not self.dataEndDate:
flt = {'datetime':{'$gte':self.strategyStartDate}} # 数据过滤条件
else:
flt = {'datetime':{'$gte':self.strategyStartDate,
'$lte':self.dataEndDate}}
self.dbCursor = collection.find(flt)
self.output(u'载入完成,数据量:%s' %(initCursor.count() + self.dbCursor.count()))
#----------------------------------------------------------------------
def runBacktesting(self):
"""运行回测"""
# 载入历史数据
self.loadHistoryData()
# 首先根据回测模式,确认要使用的数据类
if self.mode == self.BAR_MODE:
dataClass = CtaBarData
func = self.newBar
else:
dataClass = CtaTickData
func = self.newTick
self.output(u'开始回测')
self.strategy.inited = True
self.strategy.onInit()
self.output(u'策略初始化完成')
self.strategy.trading = True
self.strategy.onStart()
self.output(u'策略启动完成')
self.output(u'开始回放数据')
for d in self.dbCursor:
data = dataClass()
data.__dict__ = d
func(data)
self.output(u'数据回放结束')
#----------------------------------------------------------------------
def newBar(self, bar):
"""新的K线"""
self.bar = bar
self.dt = bar.datetime
self.crossLimitOrder() # 先撮合限价单
self.crossStopOrder() # 再撮合停止单
self.strategy.onBar(bar) # 推送K线到策略中
#----------------------------------------------------------------------
def newTick(self, tick):
"""新的Tick"""
self.tick = tick
self.dt = tick.datetime
self.crossLimitOrder()
self.crossStopOrder()
self.strategy.onTick(tick)
#----------------------------------------------------------------------
def initStrategy(self, strategyClass, setting=None):
"""
初始化策略
setting是策略的参数设置,如果使用类中写好的默认设置则可以不传该参数
"""
self.strategy = strategyClass(self, setting)
self.strategy.name = self.strategy.className
#----------------------------------------------------------------------
def sendOrder(self, vtSymbol, orderType, price, volume, strategy):
"""发单"""
self.limitOrderCount += 1
orderID = str(self.limitOrderCount)
order = VtOrderData()
order.vtSymbol = vtSymbol
order.price = self.roundToPriceTick(price)
order.totalVolume = volume
order.status = STATUS_NOTTRADED # 刚提交尚未成交
order.orderID = orderID
order.vtOrderID = orderID
order.orderTime = str(self.dt)
# CTA委托类型映射
if orderType == CTAORDER_BUY:
order.direction = DIRECTION_LONG
order.offset = OFFSET_OPEN
elif orderType == CTAORDER_SELL:
order.direction = DIRECTION_SHORT
order.offset = OFFSET_CLOSE
elif orderType == CTAORDER_SHORT:
order.direction = DIRECTION_SHORT
order.offset = OFFSET_OPEN
elif orderType == CTAORDER_COVER:
order.direction = DIRECTION_LONG
order.offset = OFFSET_CLOSE
# 保存到限价单字典中
self.workingLimitOrderDict[orderID] = order
self.limitOrderDict[orderID] = order
return orderID
#----------------------------------------------------------------------
def cancelOrder(self, vtOrderID):
"""撤单"""
if vtOrderID in self.workingLimitOrderDict:
order = self.workingLimitOrderDict[vtOrderID]
order.status = STATUS_CANCELLED
order.cancelTime = str(self.dt)
del self.workingLimitOrderDict[vtOrderID]
#----------------------------------------------------------------------
def sendStopOrder(self, vtSymbol, orderType, price, volume, strategy):
"""发停止单(本地实现)"""
self.stopOrderCount += 1
stopOrderID = STOPORDERPREFIX + str(self.stopOrderCount)
so = StopOrder()
so.vtSymbol = vtSymbol
so.price = self.roundToPriceTick(price)
so.volume = volume
so.strategy = strategy
so.stopOrderID = stopOrderID
so.status = STOPORDER_WAITING
if orderType == CTAORDER_BUY:
so.direction = DIRECTION_LONG
so.offset = OFFSET_OPEN
elif orderType == CTAORDER_SELL:
so.direction = DIRECTION_SHORT
so.offset = OFFSET_CLOSE
elif orderType == CTAORDER_SHORT:
so.direction = DIRECTION_SHORT
so.offset = OFFSET_OPEN
elif orderType == CTAORDER_COVER:
so.direction = DIRECTION_LONG
so.offset = OFFSET_CLOSE
# 保存stopOrder对象到字典中
self.stopOrderDict[stopOrderID] = so
self.workingStopOrderDict[stopOrderID] = so
return stopOrderID
#----------------------------------------------------------------------
def cancelStopOrder(self, stopOrderID):
"""撤销停止单"""
# 检查停止单是否存在
if stopOrderID in self.workingStopOrderDict:
so = self.workingStopOrderDict[stopOrderID]
so.status = STOPORDER_CANCELLED
del self.workingStopOrderDict[stopOrderID]
#----------------------------------------------------------------------
def crossLimitOrder(self):
"""基于最新数据撮合限价单"""
# 先确定会撮合成交的价格
if self.mode == self.BAR_MODE:
buyCrossPrice = self.bar.low # 若买入方向限价单价格高于该价格,则会成交
sellCrossPrice = self.bar.high # 若卖出方向限价单价格低于该价格,则会成交
buyBestCrossPrice = self.bar.open # 在当前时间点前发出的买入委托可能的最优成交价
sellBestCrossPrice = self.bar.open # 在当前时间点前发出的卖出委托可能的最优成交价
else:
buyCrossPrice = self.tick.askPrice1
sellCrossPrice = self.tick.bidPrice1
buyBestCrossPrice = self.tick.askPrice1
sellBestCrossPrice = self.tick.bidPrice1
# 遍历限价单字典中的所有限价单
for orderID, order in self.workingLimitOrderDict.items():
# 判断是否会成交
buyCross = (order.direction==DIRECTION_LONG and
order.price>=buyCrossPrice and
buyCrossPrice > 0) # 国内的tick行情在涨停时askPrice1为0,此时买无法成交
sellCross = (order.direction==DIRECTION_SHORT and
order.price<=sellCrossPrice and
sellCrossPrice > 0) # 国内的tick行情在跌停时bidPrice1为0,此时卖无法成交
# 如果发生了成交
if buyCross or sellCross:
# 推送成交数据
self.tradeCount += 1 # 成交编号自增1
tradeID = str(self.tradeCount)
trade = VtTradeData()
trade.vtSymbol = order.vtSymbol
trade.tradeID = tradeID
trade.vtTradeID = tradeID
trade.orderID = order.orderID
trade.vtOrderID = order.orderID
trade.direction = order.direction
trade.offset = order.offset
# 以买入为例:
# 1. 假设当根K线的OHLC分别为:100, 125, 90, 110
# 2. 假设在上一根K线结束(也是当前K线开始)的时刻,策略发出的委托为限价105
# 3. 则在实际中的成交价会是100而不是105,因为委托发出时市场的最优价格是100
if buyCross:
trade.price = min(order.price, buyBestCrossPrice)
self.strategy.pos += order.totalVolume
else:
trade.price = max(order.price, sellBestCrossPrice)
self.strategy.pos -= order.totalVolume
trade.volume = order.totalVolume
trade.tradeTime = str(self.dt)
trade.dt = self.dt
self.strategy.onTrade(trade)
self.tradeDict[tradeID] = trade
# 推送委托数据
order.tradedVolume = order.totalVolume
order.status = STATUS_ALLTRADED
self.strategy.onOrder(order)
# 从字典中删除该限价单
del self.workingLimitOrderDict[orderID]
#----------------------------------------------------------------------
def crossStopOrder(self):
"""基于最新数据撮合停止单"""
# 先确定会撮合成交的价格,这里和限价单规则相反
if self.mode == self.BAR_MODE:
buyCrossPrice = self.bar.high # 若买入方向停止单价格低于该价格,则会成交
sellCrossPrice = self.bar.low # 若卖出方向限价单价格高于该价格,则会成交
bestCrossPrice = self.bar.open # 最优成交价,买入停止单不能低于,卖出停止单不能高于
else:
buyCrossPrice = self.tick.lastPrice
sellCrossPrice = self.tick.lastPrice
bestCrossPrice = self.tick.lastPrice
# 遍历停止单字典中的所有停止单
for stopOrderID, so in self.workingStopOrderDict.items():
# 判断是否会成交
buyCross = so.direction==DIRECTION_LONG and so.price<=buyCrossPrice
sellCross = so.direction==DIRECTION_SHORT and so.price>=sellCrossPrice
# 如果发生了成交
if buyCross or sellCross:
# 推送成交数据
self.tradeCount += 1 # 成交编号自增1
tradeID = str(self.tradeCount)
trade = VtTradeData()
trade.vtSymbol = so.vtSymbol
trade.tradeID = tradeID
trade.vtTradeID = tradeID
if buyCross:
self.strategy.pos += so.volume
trade.price = max(bestCrossPrice, so.price)
else:
self.strategy.pos -= so.volume
trade.price = min(bestCrossPrice, so.price)
self.limitOrderCount += 1
orderID = str(self.limitOrderCount)
trade.orderID = orderID
trade.vtOrderID = orderID
trade.direction = so.direction
trade.offset = so.offset
trade.volume = so.volume
trade.tradeTime = str(self.dt)
trade.dt = self.dt
self.strategy.onTrade(trade)
self.tradeDict[tradeID] = trade
# 推送委托数据
so.status = STOPORDER_TRIGGERED
order = VtOrderData()
order.vtSymbol = so.vtSymbol
order.symbol = so.vtSymbol
order.orderID = orderID
order.vtOrderID = orderID
order.direction = so.direction
order.offset = so.offset
order.price = so.price
order.totalVolume = so.volume
order.tradedVolume = so.volume
order.status = STATUS_ALLTRADED
order.orderTime = trade.tradeTime
self.strategy.onOrder(order)
self.limitOrderDict[orderID] = order
# 从字典中删除该限价单
if stopOrderID in self.workingStopOrderDict:
del self.workingStopOrderDict[stopOrderID]
#----------------------------------------------------------------------
def insertData(self, dbName, collectionName, data):
"""考虑到回测中不允许向数据库插入数据,防止实盘交易中的一些代码出错"""
pass
#----------------------------------------------------------------------
def loadBar(self, dbName, collectionName, startDate):
"""直接返回初始化数据列表中的Bar"""
return self.initData
#----------------------------------------------------------------------
def loadTick(self, dbName, collectionName, startDate):
"""直接返回初始化数据列表中的Tick"""
return self.initData
#----------------------------------------------------------------------
def writeCtaLog(self, content):
"""记录日志"""
log = str(self.dt) + ' ' + content
self.logList.append(log)
#----------------------------------------------------------------------
def output(self, content):
"""输出内容"""
print str(datetime.now()) + "\t" + content
#----------------------------------------------------------------------
def calculateBacktestingResult(self):
"""
计算回测结果
"""
self.output(u'计算回测结果')
# 首先基于回测后的成交记录,计算每笔交易的盈亏
resultList = [] # 交易结果列表
longTrade = [] # 未平仓的多头交易
shortTrade = [] # 未平仓的空头交易
tradeTimeList = [] # 每笔成交时间戳
posList = [0] # 每笔成交后的持仓情况
for trade in self.tradeDict.values():
# 多头交易
if trade.direction == DIRECTION_LONG:
# 如果尚无空头交易
if not shortTrade:
longTrade.append(trade)
# 当前多头交易为平空
else:
while True:
entryTrade = shortTrade[0]
exitTrade = trade
# 清算开平仓交易
closedVolume = min(exitTrade.volume, entryTrade.volume)
result = TradingResult(entryTrade.price, entryTrade.dt,
exitTrade.price, exitTrade.dt,
-closedVolume, self.rate, self.slippage, self.size)
resultList.append(result)
posList.extend([-1,0])
tradeTimeList.extend([result.entryDt, result.exitDt])
# 计算未清算部分
entryTrade.volume -= closedVolume
exitTrade.volume -= closedVolume
# 如果开仓交易已经全部清算,则从列表中移除
if not entryTrade.volume:
shortTrade.pop(0)
# 如果平仓交易已经全部清算,则退出循环
if not exitTrade.volume:
break
# 如果平仓交易未全部清算,
if exitTrade.volume:
# 且开仓交易已经全部清算完,则平仓交易剩余的部分
# 等于新的反向开仓交易,添加到队列中
if not shortTrade:
longTrade.append(exitTrade)
break
# 如果开仓交易还有剩余,则进入下一轮循环
else:
pass
# 空头交易
else:
# 如果尚无多头交易
if not longTrade:
shortTrade.append(trade)
# 当前空头交易为平多
else:
while True:
entryTrade = longTrade[0]
exitTrade = trade
# 清算开平仓交易
closedVolume = min(exitTrade.volume, entryTrade.volume)
result = TradingResult(entryTrade.price, entryTrade.dt,
exitTrade.price, exitTrade.dt,
closedVolume, self.rate, self.slippage, self.size)
resultList.append(result)
posList.extend([1,0])
tradeTimeList.extend([result.entryDt, result.exitDt])
# 计算未清算部分
entryTrade.volume -= closedVolume
exitTrade.volume -= closedVolume
# 如果开仓交易已经全部清算,则从列表中移除
if not entryTrade.volume:
longTrade.pop(0)
# 如果平仓交易已经全部清算,则退出循环
if not exitTrade.volume:
break
# 如果平仓交易未全部清算,
if exitTrade.volume:
# 且开仓交易已经全部清算完,则平仓交易剩余的部分
# 等于新的反向开仓交易,添加到队列中
if not longTrade:
shortTrade.append(exitTrade)
break
# 如果开仓交易还有剩余,则进入下一轮循环
else:
pass
# 检查是否有交易
if not resultList:
self.output(u'无交易结果')
return {}
# 然后基于每笔交易的结果,我们可以计算具体的盈亏曲线和最大回撤等
capital = 0 # 资金
maxCapital = 0 # 资金最高净值
drawdown = 0 # 回撤
totalResult = 0 # 总成交数量
totalTurnover = 0 # 总成交金额(合约面值)
totalCommission = 0 # 总手续费
totalSlippage = 0 # 总滑点
timeList = [] # 时间序列
pnlList = [] # 每笔盈亏序列
capitalList = [] # 盈亏汇总的时间序列
drawdownList = [] # 回撤的时间序列
winningResult = 0 # 盈利次数
losingResult = 0 # 亏损次数
totalWinning = 0 # 总盈利金额
totalLosing = 0 # 总亏损金额
for result in resultList:
capital += result.pnl
maxCapital = max(capital, maxCapital)
drawdown = capital - maxCapital
pnlList.append(result.pnl)
timeList.append(result.exitDt) # 交易的时间戳使用平仓时间
capitalList.append(capital)
drawdownList.append(drawdown)
totalResult += 1
totalTurnover += result.turnover
totalCommission += result.commission
totalSlippage += result.slippage
if result.pnl >= 0:
winningResult += 1
totalWinning += result.pnl
else:
losingResult += 1
totalLosing += result.pnl
# 计算盈亏相关数据
winningRate = winningResult/totalResult*100 # 胜率
averageWinning = 0 # 这里把数据都初始化为0
averageLosing = 0
profitLossRatio = 0
if winningResult:
averageWinning = totalWinning/winningResult # 平均每笔盈利
if losingResult:
averageLosing = totalLosing/losingResult # 平均每笔亏损
if averageLosing:
profitLossRatio = -averageWinning/averageLosing # 盈亏比
# 返回回测结果
d = {}
d['capital'] = capital
d['maxCapital'] = maxCapital
d['drawdown'] = drawdown
d['totalResult'] = totalResult
d['totalTurnover'] = totalTurnover
d['totalCommission'] = totalCommission
d['totalSlippage'] = totalSlippage
d['timeList'] = timeList
d['pnlList'] = pnlList
d['capitalList'] = capitalList
d['drawdownList'] = drawdownList
d['winningRate'] = winningRate
d['averageWinning'] = averageWinning
d['averageLosing'] = averageLosing
d['profitLossRatio'] = profitLossRatio
d['posList'] = posList
d['tradeTimeList'] = tradeTimeList
return d
#----------------------------------------------------------------------
def showBacktestingResult(self):
"""显示回测结果"""
d = self.calculateBacktestingResult()
# 输出
self.output('-' * 30)
self.output(u'第一笔交易:\t%s' % d['timeList'][0])
self.output(u'最后一笔交易:\t%s' % d['timeList'][-1])
self.output(u'总交易次数:\t%s' % formatNumber(d['totalResult']))
self.output(u'总盈亏:\t%s' % formatNumber(d['capital']))
self.output(u'最大回撤: \t%s' % formatNumber(min(d['drawdownList'])))
self.output(u'平均每笔盈利:\t%s' %formatNumber(d['capital']/d['totalResult']))
self.output(u'平均每笔滑点:\t%s' %formatNumber(d['totalSlippage']/d['totalResult']))
self.output(u'平均每笔佣金:\t%s' %formatNumber(d['totalCommission']/d['totalResult']))
self.output(u'胜率\t\t%s%%' %formatNumber(d['winningRate']))
self.output(u'盈利交易平均值\t%s' %formatNumber(d['averageWinning']))
self.output(u'亏损交易平均值\t%s' %formatNumber(d['averageLosing']))
self.output(u'盈亏比:\t%s' %formatNumber(d['profitLossRatio']))
# 绘图
import matplotlib.pyplot as plt
import numpy as np
try:
import seaborn as sns # 如果安装了seaborn则设置为白色风格
sns.set_style('whitegrid')
except ImportError:
pass
pCapital = plt.subplot(4, 1, 1)
pCapital.set_ylabel("capital")
pCapital.plot(d['capitalList'], color='r', lw=0.8)
pDD = plt.subplot(4, 1, 2)
pDD.set_ylabel("DD")
pDD.bar(range(len(d['drawdownList'])), d['drawdownList'], color='g')
pPnl = plt.subplot(4, 1, 3)
pPnl.set_ylabel("pnl")
pPnl.hist(d['pnlList'], bins=50, color='c')
pPos = plt.subplot(4, 1, 4)
pPos.set_ylabel("Position")
if d['posList'][-1] == 0:
del d['posList'][-1]
tradeTimeIndex = [item.strftime("%m/%d %H:%M:%S") for item in d['tradeTimeList']]
xindex = np.arange(0, len(tradeTimeIndex), np.int(len(tradeTimeIndex)/10))
tradeTimeIndex = map(lambda i: tradeTimeIndex[i], xindex)
pPos.plot(d['posList'], color='k', drawstyle='steps-pre')
pPos.set_ylim(-1.2, 1.2)
plt.sca(pPos)
plt.tight_layout()
        plt.xticks(xindex, tradeTimeIndex, rotation=30)  # rotate the x tick labels by 30 degrees
plt.show()
#----------------------------------------------------------------------
def putStrategyEvent(self, name):
"""发送策略更新事件,回测中忽略"""
pass
#----------------------------------------------------------------------
def setSlippage(self, slippage):
"""设置滑点点数"""
self.slippage = slippage
#----------------------------------------------------------------------
def setSize(self, size):
"""设置合约大小"""
self.size = size
#----------------------------------------------------------------------
def setRate(self, rate):
"""设置佣金比例"""
self.rate = rate
#----------------------------------------------------------------------
def setPriceTick(self, priceTick):
"""设置价格最小变动"""
self.priceTick = priceTick
#----------------------------------------------------------------------
def runOptimization(self, strategyClass, optimizationSetting):
"""优化参数"""
# 获取优化设置
settingList = optimizationSetting.generateSetting()
targetName = optimizationSetting.optimizeTarget
# 检查参数设置问题
if not settingList or not targetName:
self.output(u'优化设置有问题,请检查')
# 遍历优化
resultList = []
for setting in settingList:
self.clearBacktestingResult()
self.output('-' * 30)
self.output('setting: %s' %str(setting))
self.initStrategy(strategyClass, setting)
self.runBacktesting()
d = self.calculateBacktestingResult()
try:
targetValue = d[targetName]
except KeyError:
targetValue = 0
resultList.append(([str(setting)], targetValue))
        # display the results, best target value first
        resultList.sort(reverse=True, key=lambda result:result[1])
        self.output('-' * 30)
        self.output(u'Optimization results:')
        for result in resultList:
            self.output(u'%s: %s' %(result[0], result[1]))
        return resultList
#----------------------------------------------------------------------
def clearBacktestingResult(self):
"""清空之前回测的结果"""
# 清空限价单相关
self.limitOrderCount = 0
self.limitOrderDict.clear()
self.workingLimitOrderDict.clear()
# 清空停止单相关
self.stopOrderCount = 0
self.stopOrderDict.clear()
self.workingStopOrderDict.clear()
# 清空成交相关
self.tradeCount = 0
self.tradeDict.clear()
#----------------------------------------------------------------------
def runParallelOptimization(self, strategyClass, optimizationSetting):
"""并行优化参数"""
# 获取优化设置
settingList = optimizationSetting.generateSetting()
targetName = optimizationSetting.optimizeTarget
# 检查参数设置问题
if not settingList or not targetName:
self.output(u'优化设置有问题,请检查')
# 多进程优化,启动一个对应CPU核心数量的进程池
pool = multiprocessing.Pool(multiprocessing.cpu_count())
l = []
for setting in settingList:
l.append(pool.apply_async(optimize, (strategyClass, setting,
targetName, self.mode,
self.startDate, self.initDays, self.endDate,
self.slippage, self.rate, self.size,
self.dbName, self.symbol)))
pool.close()
pool.join()
# 显示结果
resultList = [res.get() for res in l]
resultList.sort(reverse=True, key=lambda result:result[1])
self.output('-' * 30)
self.output(u'优化结果:')
for result in resultList:
self.output(u'%s: %s' %(result[0], result[1]))
#----------------------------------------------------------------------
def roundToPriceTick(self, price):
"""取整价格到合约最小价格变动"""
if not self.priceTick:
return price
newPrice = round(price/self.priceTick, 0) * self.priceTick
return newPrice
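    # Worked example (illustrative numbers): with priceTick = 0.2,
    # roundToPriceTick(10.07) gives round(10.07 / 0.2) * 0.2 = 50.0 * 0.2 = 10.0,
    # and roundToPriceTick(10.13) gives 51.0 * 0.2 ~= 10.2; with priceTick = 0
    # the price is returned unchanged.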
########################################################################
class TradingResult(object):
"""每笔交易的结果"""
#----------------------------------------------------------------------
def __init__(self, entryPrice, entryDt, exitPrice,
exitDt, volume, rate, slippage, size):
"""Constructor"""
self.entryPrice = entryPrice # 开仓价格
self.exitPrice = exitPrice # 平仓价格
self.entryDt = entryDt # 开仓时间datetime
self.exitDt = exitDt # 平仓时间
self.volume = volume # 交易数量(+/-代表方向)
self.turnover = (self.entryPrice+self.exitPrice)*size*abs(volume) # 成交金额
self.commission = self.turnover*rate # 手续费成本
self.slippage = slippage*2*size*abs(volume) # 滑点成本
self.pnl = ((self.exitPrice - self.entryPrice) * volume * size
- self.commission - self.slippage) # 净盈亏
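    # Worked example (illustrative numbers, using the demo settings further below:
    # size=300, rate=0.3/10000, slippage=0.2): a long trade entered at 3000 and
    # exited at 3010 with volume +1 gives
    #   turnover   = (3000 + 3010) * 300 * 1 = 1803000
    #   commission = 1803000 * 0.00003       = 54.09
    #   slippage   = 0.2 * 2 * 300 * 1       = 120
    #   pnl        = (3010 - 3000) * 1 * 300 - 54.09 - 120 = 2825.91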
########################################################################
class OptimizationSetting(object):
"""优化设置"""
#----------------------------------------------------------------------
def __init__(self):
"""Constructor"""
self.paramDict = OrderedDict()
self.optimizeTarget = '' # 优化目标字段
#----------------------------------------------------------------------
def addParameter(self, name, start, end=None, step=None):
"""增加优化参数"""
if end is None and step is None:
self.paramDict[name] = [start]
return
        if end < start:
            print u'Parameter start must not be greater than the end'
            return
        if step <= 0:
            print u'Parameter step must be greater than 0'
            return
l = []
param = start
while param <= end:
l.append(param)
param += step
self.paramDict[name] = l
#----------------------------------------------------------------------
def generateSetting(self):
"""生成优化参数组合"""
# 参数名的列表
nameList = self.paramDict.keys()
paramList = self.paramDict.values()
# 使用迭代工具生产参数对组合
productList = list(product(*paramList))
# 把参数对组合打包到一个个字典组成的列表中
settingList = []
for p in productList:
d = dict(zip(nameList, p))
settingList.append(d)
return settingList
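    # Example (hypothetical parameter names): after
    #   setting.addParameter('kkLength', 11, 12, 1)   -> [11, 12]
    #   setting.addParameter('atrLength', 10)         -> [10]
    # generateSetting() returns
    #   [{'kkLength': 11, 'atrLength': 10}, {'kkLength': 12, 'atrLength': 10}]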
#----------------------------------------------------------------------
def setOptimizeTarget(self, target):
"""设置优化目标字段"""
self.optimizeTarget = target
#----------------------------------------------------------------------
def formatNumber(n):
"""格式化数字到字符串"""
rn = round(n, 2) # 保留两位小数
return format(rn, ',') # 加上千分符
#----------------------------------------------------------------------
def optimize(strategyClass, setting, targetName,
mode, startDate, initDays, endDate,
slippage, rate, size,
dbName, symbol):
"""多进程优化时跑在每个进程中运行的函数"""
engine = BacktestingEngine()
engine.setBacktestingMode(mode)
engine.setStartDate(startDate, initDays)
engine.setEndDate(endDate)
engine.setSlippage(slippage)
engine.setRate(rate)
engine.setSize(size)
engine.setDatabase(dbName, symbol)
engine.initStrategy(strategyClass, setting)
engine.runBacktesting()
d = engine.calculateBacktestingResult()
try:
targetValue = d[targetName]
except KeyError:
targetValue = 0
return (str(setting), targetValue)
if __name__ == '__main__':
    # The following is a demo backtesting script; adapt it to your own needs.
    # Running it inside ipython notebook or spyder is recommended,
    # but it can also be run line by line from the command prompt.
    from strategy.strategyEmaDemo import *
    # create the backtesting engine
    engine = BacktestingEngine()
    # set the engine to bar-based backtesting
    engine.setBacktestingMode(engine.BAR_MODE)
    # set the start date of the backtesting data
    engine.setStartDate('20110101')
    # point the engine at the historical data
    engine.setDatabase(MINUTE_DB_NAME, 'IF0000')
    # set the product-related parameters
    engine.setSlippage(0.2)     # one tick of the index future
    engine.setRate(0.3/10000)   # 0.003% commission
    engine.setSize(300)         # contract multiplier of the index future
    # create the strategy object inside the engine
    engine.initStrategy(EmaDemoStrategy, {})
    # run the backtest
    engine.runBacktesting()
    # show the backtesting result
    # when run in spyder or ipython notebook a PnL chart pops up;
    # run directly from cmd, only the summary numbers are printed
    engine.showBacktestingResult()
engine.showBacktestingResult()
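    # Optional follow-up (sketch): parameter optimisation with the same engine.
    # The parameter names below belong to the demo strategy and are assumptions.
    # setting = OptimizationSetting()
    # setting.setOptimizeTarget('capital')          # optimise total PnL
    # setting.addParameter('fastK', 0.9, 0.9, 0.1)  # fixed parameter
    # setting.addParameter('d', 1, 3, 1)            # swept parameter: 1, 2, 3
    # engine.runOptimization(EmaDemoStrategy, setting)
    # # or use every CPU core:
    # # engine.runParallelOptimization(EmaDemoStrategy, setting)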
|
mit
|
dingocuster/scikit-learn
|
sklearn/ensemble/tests/test_weight_boosting.py
|
83
|
17276
|
"""Testing for the boost module (sklearn.ensemble.boost)."""
import numpy as np
from sklearn.utils.testing import assert_array_equal, assert_array_less
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils.testing import assert_equal, assert_true
from sklearn.utils.testing import assert_raises, assert_raises_regexp
from sklearn.base import BaseEstimator
from sklearn.cross_validation import train_test_split
from sklearn.grid_search import GridSearchCV
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import AdaBoostRegressor
from sklearn.ensemble import weight_boosting
from scipy.sparse import csc_matrix
from scipy.sparse import csr_matrix
from scipy.sparse import coo_matrix
from scipy.sparse import dok_matrix
from scipy.sparse import lil_matrix
from sklearn.svm import SVC, SVR
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
from sklearn.utils import shuffle
from sklearn import datasets
# Common random state
rng = np.random.RandomState(0)
# Toy sample
X = [[-2, -1], [-1, -1], [-1, -2], [1, 1], [1, 2], [2, 1]]
y_class = ["foo", "foo", "foo", 1, 1, 1] # test string class labels
y_regr = [-1, -1, -1, 1, 1, 1]
T = [[-1, -1], [2, 2], [3, 2]]
y_t_class = ["foo", 1, 1]
y_t_regr = [-1, 1, 1]
# Load the iris dataset and randomly permute it
iris = datasets.load_iris()
perm = rng.permutation(iris.target.size)
iris.data, iris.target = shuffle(iris.data, iris.target, random_state=rng)
# Load the boston dataset and randomly permute it
boston = datasets.load_boston()
boston.data, boston.target = shuffle(boston.data, boston.target,
random_state=rng)
def test_samme_proba():
# Test the `_samme_proba` helper function.
# Define some example (bad) `predict_proba` output.
probs = np.array([[1, 1e-6, 0],
[0.19, 0.6, 0.2],
[-999, 0.51, 0.5],
[1e-6, 1, 1e-9]])
probs /= np.abs(probs.sum(axis=1))[:, np.newaxis]
# _samme_proba calls estimator.predict_proba.
# Make a mock object so I can control what gets returned.
class MockEstimator(object):
def predict_proba(self, X):
assert_array_equal(X.shape, probs.shape)
return probs
mock = MockEstimator()
samme_proba = weight_boosting._samme_proba(mock, 3, np.ones_like(probs))
assert_array_equal(samme_proba.shape, probs.shape)
assert_true(np.isfinite(samme_proba).all())
# Make sure that the correct elements come out as smallest --
# `_samme_proba` should preserve the ordering in each example.
assert_array_equal(np.argmin(samme_proba, axis=1), [2, 0, 0, 2])
assert_array_equal(np.argmax(samme_proba, axis=1), [0, 1, 1, 1])
def test_classification_toy():
# Check classification on a toy dataset.
for alg in ['SAMME', 'SAMME.R']:
clf = AdaBoostClassifier(algorithm=alg, random_state=0)
clf.fit(X, y_class)
assert_array_equal(clf.predict(T), y_t_class)
assert_array_equal(np.unique(np.asarray(y_t_class)), clf.classes_)
assert_equal(clf.predict_proba(T).shape, (len(T), 2))
assert_equal(clf.decision_function(T).shape, (len(T),))
def test_regression_toy():
# Check classification on a toy dataset.
clf = AdaBoostRegressor(random_state=0)
clf.fit(X, y_regr)
assert_array_equal(clf.predict(T), y_t_regr)
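# Illustrative (not a test): the minimal AdaBoost workflow the toy tests above
# exercise -- fit on the toy data, then predict. SAMME and SAMME.R share the
# same interface; they differ only in how the estimator votes are weighted.
#
#   clf = AdaBoostClassifier(algorithm='SAMME.R', random_state=0)
#   clf.fit(X, y_class)
#   clf.predict(T)            # -> ['foo', 1, 1], i.e. y_t_class
#   clf.predict_proba(T)      # shape (len(T), 2)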
def test_iris():
# Check consistency on dataset iris.
classes = np.unique(iris.target)
clf_samme = prob_samme = None
for alg in ['SAMME', 'SAMME.R']:
clf = AdaBoostClassifier(algorithm=alg)
clf.fit(iris.data, iris.target)
assert_array_equal(classes, clf.classes_)
proba = clf.predict_proba(iris.data)
if alg == "SAMME":
clf_samme = clf
prob_samme = proba
assert_equal(proba.shape[1], len(classes))
assert_equal(clf.decision_function(iris.data).shape[1], len(classes))
score = clf.score(iris.data, iris.target)
assert score > 0.9, "Failed with algorithm %s and score = %f" % \
(alg, score)
# Somewhat hacky regression test: prior to
# ae7adc880d624615a34bafdb1d75ef67051b8200,
# predict_proba returned SAMME.R values for SAMME.
clf_samme.algorithm = "SAMME.R"
assert_array_less(0,
np.abs(clf_samme.predict_proba(iris.data) - prob_samme))
def test_boston():
# Check consistency on dataset boston house prices.
clf = AdaBoostRegressor(random_state=0)
clf.fit(boston.data, boston.target)
score = clf.score(boston.data, boston.target)
assert score > 0.85
def test_staged_predict():
# Check staged predictions.
rng = np.random.RandomState(0)
iris_weights = rng.randint(10, size=iris.target.shape)
boston_weights = rng.randint(10, size=boston.target.shape)
# AdaBoost classification
for alg in ['SAMME', 'SAMME.R']:
clf = AdaBoostClassifier(algorithm=alg, n_estimators=10)
clf.fit(iris.data, iris.target, sample_weight=iris_weights)
predictions = clf.predict(iris.data)
staged_predictions = [p for p in clf.staged_predict(iris.data)]
proba = clf.predict_proba(iris.data)
staged_probas = [p for p in clf.staged_predict_proba(iris.data)]
score = clf.score(iris.data, iris.target, sample_weight=iris_weights)
staged_scores = [
s for s in clf.staged_score(
iris.data, iris.target, sample_weight=iris_weights)]
assert_equal(len(staged_predictions), 10)
assert_array_almost_equal(predictions, staged_predictions[-1])
assert_equal(len(staged_probas), 10)
assert_array_almost_equal(proba, staged_probas[-1])
assert_equal(len(staged_scores), 10)
assert_array_almost_equal(score, staged_scores[-1])
# AdaBoost regression
clf = AdaBoostRegressor(n_estimators=10, random_state=0)
clf.fit(boston.data, boston.target, sample_weight=boston_weights)
predictions = clf.predict(boston.data)
staged_predictions = [p for p in clf.staged_predict(boston.data)]
score = clf.score(boston.data, boston.target, sample_weight=boston_weights)
staged_scores = [
s for s in clf.staged_score(
boston.data, boston.target, sample_weight=boston_weights)]
assert_equal(len(staged_predictions), 10)
assert_array_almost_equal(predictions, staged_predictions[-1])
assert_equal(len(staged_scores), 10)
assert_array_almost_equal(score, staged_scores[-1])
def test_gridsearch():
# Check that base trees can be grid-searched.
# AdaBoost classification
boost = AdaBoostClassifier(base_estimator=DecisionTreeClassifier())
parameters = {'n_estimators': (1, 2),
'base_estimator__max_depth': (1, 2),
'algorithm': ('SAMME', 'SAMME.R')}
clf = GridSearchCV(boost, parameters)
clf.fit(iris.data, iris.target)
# AdaBoost regression
boost = AdaBoostRegressor(base_estimator=DecisionTreeRegressor(),
random_state=0)
parameters = {'n_estimators': (1, 2),
'base_estimator__max_depth': (1, 2)}
clf = GridSearchCV(boost, parameters)
clf.fit(boston.data, boston.target)
def test_pickle():
# Check pickability.
import pickle
# Adaboost classifier
for alg in ['SAMME', 'SAMME.R']:
obj = AdaBoostClassifier(algorithm=alg)
obj.fit(iris.data, iris.target)
score = obj.score(iris.data, iris.target)
s = pickle.dumps(obj)
obj2 = pickle.loads(s)
assert_equal(type(obj2), obj.__class__)
score2 = obj2.score(iris.data, iris.target)
assert_equal(score, score2)
# Adaboost regressor
obj = AdaBoostRegressor(random_state=0)
obj.fit(boston.data, boston.target)
score = obj.score(boston.data, boston.target)
s = pickle.dumps(obj)
obj2 = pickle.loads(s)
assert_equal(type(obj2), obj.__class__)
score2 = obj2.score(boston.data, boston.target)
assert_equal(score, score2)
def test_importances():
# Check variable importances.
X, y = datasets.make_classification(n_samples=2000,
n_features=10,
n_informative=3,
n_redundant=0,
n_repeated=0,
shuffle=False,
random_state=1)
for alg in ['SAMME', 'SAMME.R']:
clf = AdaBoostClassifier(algorithm=alg)
clf.fit(X, y)
importances = clf.feature_importances_
assert_equal(importances.shape[0], 10)
assert_equal((importances[:3, np.newaxis] >= importances[3:]).all(),
True)
def test_error():
# Test that it gives proper exception on deficient input.
assert_raises(ValueError,
AdaBoostClassifier(learning_rate=-1).fit,
X, y_class)
assert_raises(ValueError,
AdaBoostClassifier(algorithm="foo").fit,
X, y_class)
assert_raises(ValueError,
AdaBoostClassifier().fit,
X, y_class, sample_weight=np.asarray([-1]))
def test_base_estimator():
# Test different base estimators.
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
# XXX doesn't work with y_class because RF doesn't support classes_
# Shouldn't AdaBoost run a LabelBinarizer?
clf = AdaBoostClassifier(RandomForestClassifier())
clf.fit(X, y_regr)
clf = AdaBoostClassifier(SVC(), algorithm="SAMME")
clf.fit(X, y_class)
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
clf = AdaBoostRegressor(RandomForestRegressor(), random_state=0)
clf.fit(X, y_regr)
clf = AdaBoostRegressor(SVR(), random_state=0)
clf.fit(X, y_regr)
# Check that an empty discrete ensemble fails in fit, not predict.
X_fail = [[1, 1], [1, 1], [1, 1], [1, 1]]
y_fail = ["foo", "bar", 1, 2]
clf = AdaBoostClassifier(SVC(), algorithm="SAMME")
assert_raises_regexp(ValueError, "worse than random",
clf.fit, X_fail, y_fail)
def test_sample_weight_missing():
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
clf = AdaBoostClassifier(LogisticRegression(), algorithm="SAMME")
assert_raises(ValueError, clf.fit, X, y_regr)
clf = AdaBoostClassifier(KMeans(), algorithm="SAMME")
assert_raises(ValueError, clf.fit, X, y_regr)
clf = AdaBoostRegressor(KMeans())
assert_raises(ValueError, clf.fit, X, y_regr)
def test_sparse_classification():
# Check classification with sparse input.
class CustomSVC(SVC):
"""SVC variant that records the nature of the training set."""
def fit(self, X, y, sample_weight=None):
"""Modification on fit caries data type for later verification."""
super(CustomSVC, self).fit(X, y, sample_weight=sample_weight)
self.data_type_ = type(X)
return self
X, y = datasets.make_multilabel_classification(n_classes=1, n_samples=15,
n_features=5,
random_state=42)
# Flatten y to a 1d array
y = np.ravel(y)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
for sparse_format in [csc_matrix, csr_matrix, lil_matrix, coo_matrix,
dok_matrix]:
X_train_sparse = sparse_format(X_train)
X_test_sparse = sparse_format(X_test)
# Trained on sparse format
sparse_classifier = AdaBoostClassifier(
base_estimator=CustomSVC(probability=True),
random_state=1,
algorithm="SAMME"
).fit(X_train_sparse, y_train)
# Trained on dense format
dense_classifier = AdaBoostClassifier(
base_estimator=CustomSVC(probability=True),
random_state=1,
algorithm="SAMME"
).fit(X_train, y_train)
# predict
sparse_results = sparse_classifier.predict(X_test_sparse)
dense_results = dense_classifier.predict(X_test)
assert_array_equal(sparse_results, dense_results)
# decision_function
sparse_results = sparse_classifier.decision_function(X_test_sparse)
dense_results = dense_classifier.decision_function(X_test)
assert_array_equal(sparse_results, dense_results)
# predict_log_proba
sparse_results = sparse_classifier.predict_log_proba(X_test_sparse)
dense_results = dense_classifier.predict_log_proba(X_test)
assert_array_equal(sparse_results, dense_results)
# predict_proba
sparse_results = sparse_classifier.predict_proba(X_test_sparse)
dense_results = dense_classifier.predict_proba(X_test)
assert_array_equal(sparse_results, dense_results)
# score
sparse_results = sparse_classifier.score(X_test_sparse, y_test)
dense_results = dense_classifier.score(X_test, y_test)
assert_array_equal(sparse_results, dense_results)
# staged_decision_function
sparse_results = sparse_classifier.staged_decision_function(
X_test_sparse)
dense_results = dense_classifier.staged_decision_function(X_test)
for sprase_res, dense_res in zip(sparse_results, dense_results):
assert_array_equal(sprase_res, dense_res)
# staged_predict
sparse_results = sparse_classifier.staged_predict(X_test_sparse)
dense_results = dense_classifier.staged_predict(X_test)
for sprase_res, dense_res in zip(sparse_results, dense_results):
assert_array_equal(sprase_res, dense_res)
# staged_predict_proba
sparse_results = sparse_classifier.staged_predict_proba(X_test_sparse)
dense_results = dense_classifier.staged_predict_proba(X_test)
for sprase_res, dense_res in zip(sparse_results, dense_results):
assert_array_equal(sprase_res, dense_res)
# staged_score
sparse_results = sparse_classifier.staged_score(X_test_sparse,
y_test)
dense_results = dense_classifier.staged_score(X_test, y_test)
for sprase_res, dense_res in zip(sparse_results, dense_results):
assert_array_equal(sprase_res, dense_res)
# Verify sparsity of data is maintained during training
types = [i.data_type_ for i in sparse_classifier.estimators_]
assert all([(t == csc_matrix or t == csr_matrix)
for t in types])
def test_sparse_regression():
# Check regression with sparse input.
class CustomSVR(SVR):
"""SVR variant that records the nature of the training set."""
def fit(self, X, y, sample_weight=None):
"""Modification on fit caries data type for later verification."""
super(CustomSVR, self).fit(X, y, sample_weight=sample_weight)
self.data_type_ = type(X)
return self
X, y = datasets.make_regression(n_samples=15, n_features=50, n_targets=1,
random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
for sparse_format in [csc_matrix, csr_matrix, lil_matrix, coo_matrix,
dok_matrix]:
X_train_sparse = sparse_format(X_train)
X_test_sparse = sparse_format(X_test)
# Trained on sparse format
sparse_classifier = AdaBoostRegressor(
base_estimator=CustomSVR(),
random_state=1
).fit(X_train_sparse, y_train)
# Trained on dense format
dense_classifier = dense_results = AdaBoostRegressor(
base_estimator=CustomSVR(),
random_state=1
).fit(X_train, y_train)
# predict
sparse_results = sparse_classifier.predict(X_test_sparse)
dense_results = dense_classifier.predict(X_test)
assert_array_equal(sparse_results, dense_results)
# staged_predict
sparse_results = sparse_classifier.staged_predict(X_test_sparse)
dense_results = dense_classifier.staged_predict(X_test)
for sprase_res, dense_res in zip(sparse_results, dense_results):
assert_array_equal(sprase_res, dense_res)
types = [i.data_type_ for i in sparse_classifier.estimators_]
assert all([(t == csc_matrix or t == csr_matrix)
for t in types])
def test_sample_weight_adaboost_regressor():
"""
AdaBoostRegressor should work without sample_weights in the base estimator
The random weighted sampling is done internally in the _boost method in
AdaBoostRegressor.
"""
class DummyEstimator(BaseEstimator):
def fit(self, X, y):
pass
def predict(self, X):
return np.zeros(X.shape[0])
boost = AdaBoostRegressor(DummyEstimator(), n_estimators=3)
boost.fit(X, y_regr)
assert_equal(len(boost.estimator_weights_), len(boost.estimator_errors_))
|
bsd-3-clause
|
chayapan/pyfolio
|
pyfolio/tests/test_timeseries.py
|
2
|
12272
|
from __future__ import division
import os
from unittest import TestCase
from nose_parameterized import parameterized
from numpy.testing import assert_allclose, assert_almost_equal
from pandas.util.testing import assert_series_equal
import numpy as np
import pandas as pd
from .. import timeseries
from pyfolio.utils import to_utc, to_series
import gzip
DECIMAL_PLACES = 8
class TestDrawdown(TestCase):
drawdown_list = np.array(
[100, 90, 75]
) / 10.
dt = pd.date_range('2000-1-3', periods=3, freq='D')
drawdown_serie = pd.Series(drawdown_list, index=dt)
@parameterized.expand([
(drawdown_serie,)
])
def test_get_max_drawdown_begins_first_day(self, px):
rets = px.pct_change()
drawdowns = timeseries.gen_drawdown_table(rets, top=1)
self.assertEqual(drawdowns.loc[0, 'Net drawdown in %'], 25)
drawdown_list = np.array(
[100, 110, 120, 150, 180, 200, 100, 120,
160, 180, 200, 300, 400, 500, 600, 800,
900, 1000, 650, 600]
) / 10.
dt = pd.date_range('2000-1-3', periods=20, freq='D')
drawdown_serie = pd.Series(drawdown_list, index=dt)
@parameterized.expand([
(drawdown_serie,
pd.Timestamp('2000-01-08'),
pd.Timestamp('2000-01-09'),
pd.Timestamp('2000-01-13'),
50,
pd.Timestamp('2000-01-20'),
pd.Timestamp('2000-01-22'),
None,
40
)
])
def test_gen_drawdown_table_relative(
self, px,
first_expected_peak, first_expected_valley,
first_expected_recovery, first_net_drawdown,
second_expected_peak, second_expected_valley,
second_expected_recovery, second_net_drawdown
):
rets = px.pct_change()
drawdowns = timeseries.gen_drawdown_table(rets, top=2)
self.assertEqual(np.round(drawdowns.loc[0, 'Net drawdown in %']),
first_net_drawdown)
self.assertEqual(drawdowns.loc[0, 'Peak date'],
first_expected_peak)
self.assertEqual(drawdowns.loc[0, 'Valley date'],
first_expected_valley)
self.assertEqual(drawdowns.loc[0, 'Recovery date'],
first_expected_recovery)
self.assertEqual(np.round(drawdowns.loc[1, 'Net drawdown in %']),
second_net_drawdown)
self.assertEqual(drawdowns.loc[1, 'Peak date'],
second_expected_peak)
self.assertEqual(drawdowns.loc[1, 'Valley date'],
second_expected_valley)
self.assertTrue(pd.isnull(drawdowns.loc[1, 'Recovery date']))
px_list_1 = np.array(
[100, 120, 100, 80, 70, 110, 180, 150]) / 100. # Simple
px_list_2 = np.array(
[100, 120, 100, 80, 70, 80, 90, 90]) / 100. # Ends in drawdown
dt = pd.date_range('2000-1-3', periods=8, freq='D')
@parameterized.expand([
(pd.Series(px_list_1,
index=dt),
pd.Timestamp('2000-1-4'),
pd.Timestamp('2000-1-7'),
pd.Timestamp('2000-1-9')),
(pd.Series(px_list_2,
index=dt),
pd.Timestamp('2000-1-4'),
pd.Timestamp('2000-1-7'),
None)
])
def test_get_max_drawdown(
self, px, expected_peak, expected_valley, expected_recovery):
rets = px.pct_change().iloc[1:]
peak, valley, recovery = timeseries.get_max_drawdown(rets)
# Need to use isnull because the result can be NaN, NaT, etc.
self.assertTrue(
pd.isnull(peak)) if expected_peak is None else self.assertEqual(
peak,
expected_peak)
self.assertTrue(
pd.isnull(valley)) if expected_valley is None else \
self.assertEqual(
valley,
expected_valley)
self.assertTrue(
pd.isnull(recovery)) if expected_recovery is None else \
self.assertEqual(
recovery,
expected_recovery)
@parameterized.expand([
(pd.Series(px_list_2,
index=dt),
pd.Timestamp('2000-1-4'),
pd.Timestamp('2000-1-7'),
None,
None),
(pd.Series(px_list_1,
index=dt),
pd.Timestamp('2000-1-4'),
pd.Timestamp('2000-1-7'),
pd.Timestamp('2000-1-9'),
4)
])
def test_gen_drawdown_table(self, px, expected_peak,
expected_valley, expected_recovery,
expected_duration):
rets = px.pct_change().iloc[1:]
drawdowns = timeseries.gen_drawdown_table(rets, top=1)
self.assertTrue(
pd.isnull(
drawdowns.loc[
0,
'Peak date'])) if expected_peak is None \
else self.assertEqual(drawdowns.loc[0, 'Peak date'],
expected_peak)
self.assertTrue(
pd.isnull(
drawdowns.loc[0, 'Valley date'])) \
if expected_valley is None else self.assertEqual(
drawdowns.loc[0, 'Valley date'],
expected_valley)
self.assertTrue(
pd.isnull(
drawdowns.loc[0, 'Recovery date'])) \
if expected_recovery is None else self.assertEqual(
drawdowns.loc[0, 'Recovery date'],
expected_recovery)
self.assertTrue(
pd.isnull(drawdowns.loc[0, 'Duration'])) \
if expected_duration is None else self.assertEqual(
drawdowns.loc[0, 'Duration'], expected_duration)
def test_drawdown_overlaps(self):
rand = np.random.RandomState(1337)
n_samples = 252 * 5
spy_returns = pd.Series(
rand.standard_t(3.1, n_samples),
pd.date_range('2005-01-02', periods=n_samples),
)
spy_drawdowns = timeseries.gen_drawdown_table(
spy_returns,
top=20).sort_values(by='Peak date')
# Compare the recovery date of each drawdown with the peak of the next
# Last pair might contain a NaT if drawdown didn't finish, so ignore it
pairs = list(zip(spy_drawdowns['Recovery date'],
spy_drawdowns['Peak date'].shift(-1)))[:-1]
self.assertGreater(len(pairs), 0)
for recovery, peak in pairs:
if not pd.isnull(recovery):
self.assertLessEqual(recovery, peak)
@parameterized.expand([
(pd.Series(px_list_1,
index=dt),
1,
[(pd.Timestamp('2000-01-03 00:00:00'),
pd.Timestamp('2000-01-03 00:00:00'),
pd.Timestamp('2000-01-03 00:00:00'))])
])
def test_top_drawdowns(self, returns, top, expected):
self.assertEqual(
timeseries.get_top_drawdowns(
returns,
top=top),
expected)
class TestVariance(TestCase):
@parameterized.expand([
(1e7, 0.5, 1, 1, -10000000.0)
])
def test_var_cov_var_normal(self, P, c, mu, sigma, expected):
self.assertEqual(
timeseries.var_cov_var_normal(
P,
c,
mu,
sigma),
expected)
class TestNormalize(TestCase):
dt = pd.date_range('2000-1-3', periods=8, freq='D')
px_list = [1.0, 1.2, 1.0, 0.8, 0.7, 0.8, 0.8, 0.8]
@parameterized.expand([
(pd.Series(np.array(px_list) * 100, index=dt),
pd.Series(px_list, index=dt))
])
def test_normalize(self, returns, expected):
self.assertTrue(timeseries.normalize(returns).equals(expected))
class TestStats(TestCase):
simple_rets = pd.Series(
[0.1] * 3 + [0] * 497,
pd.date_range(
'2000-1-3',
periods=500,
freq='D'))
simple_week_rets = pd.Series(
[0.1] * 3 + [0] * 497,
pd.date_range(
'2000-1-31',
periods=500,
freq='W'))
simple_month_rets = pd.Series(
[0.1] * 3 + [0] * 497,
pd.date_range(
'2000-1-31',
periods=500,
freq='M'))
simple_benchmark = pd.Series(
[0.03] * 4 + [0] * 496,
pd.date_range(
'2000-1-1',
periods=500,
freq='D'))
px_list = np.array(
[10, -10, 10]) / 100. # Ends in drawdown
dt = pd.date_range('2000-1-3', periods=3, freq='D')
px_list_2 = [1.0, 1.2, 1.0, 0.8, 0.7, 0.8, 0.8, 0.8]
dt_2 = pd.date_range('2000-1-3', periods=8, freq='D')
@parameterized.expand([
(simple_rets[:5], 2, [np.nan, np.inf, np.inf, 11.224972160321, np.inf])
])
def test_sharpe_2(self, returns, rolling_sharpe_window, expected):
np.testing.assert_array_almost_equal(
timeseries.rolling_sharpe(returns,
rolling_sharpe_window).values,
np.asarray(expected))
@parameterized.expand([
(simple_rets[:5], simple_benchmark, 2, 0)
])
def test_beta(self, returns, benchmark_rets, rolling_window, expected):
actual = timeseries.rolling_beta(
returns,
benchmark_rets,
rolling_window=rolling_window,
).values.tolist()[2]
np.testing.assert_almost_equal(actual, expected)
class TestCone(TestCase):
def test_bootstrap_cone_against_linear_cone_normal_returns(self):
random_seed = 100
np.random.seed(random_seed)
days_forward = 200
cone_stdevs = (1., 1.5, 2.)
mu = .005
sigma = .002
rets = pd.Series(np.random.normal(mu, sigma, 10000))
midline = np.cumprod(1 + (rets.mean() * np.ones(days_forward)))
stdev = rets.std() * midline * np.sqrt(np.arange(days_forward)+1)
normal_cone = pd.DataFrame(columns=pd.Float64Index([]))
for s in cone_stdevs:
normal_cone[s] = midline + s * stdev
normal_cone[-s] = midline - s * stdev
bootstrap_cone = timeseries.forecast_cone_bootstrap(
rets, days_forward, cone_stdevs, starting_value=1,
random_seed=random_seed, num_samples=10000)
for col, vals in bootstrap_cone.iteritems():
expected = normal_cone[col].values
assert_allclose(vals.values, expected, rtol=.005)
class TestBootstrap(TestCase):
@parameterized.expand([
(0., 1., 1000),
(1., 2., 500),
(-1., 0.1, 10),
])
def test_calc_bootstrap(self, true_mean, true_sd, n):
"""Compare bootstrap distribution of the mean to sampling distribution
of the mean.
"""
np.random.seed(123)
func = np.mean
returns = pd.Series((np.random.randn(n) * true_sd) +
true_mean)
samples = timeseries.calc_bootstrap(func, returns,
n_samples=10000)
# Calculate statistics of sampling distribution of the mean
mean_of_mean = np.mean(returns)
sd_of_mean = np.std(returns) / np.sqrt(n)
assert_almost_equal(
np.mean(samples),
mean_of_mean,
3,
'Mean of bootstrap does not match theoretical mean of'
'sampling distribution')
assert_almost_equal(
np.std(samples),
sd_of_mean,
3,
'SD of bootstrap does not match theoretical SD of'
'sampling distribution')
class TestGrossLev(TestCase):
__location__ = os.path.realpath(
os.path.join(os.getcwd(), os.path.dirname(__file__)))
test_pos = to_utc(pd.read_csv(
gzip.open(__location__ + '/test_data/test_pos.csv.gz'),
index_col=0, parse_dates=True))
test_gross_lev = pd.read_csv(
gzip.open(
__location__ + '/test_data/test_gross_lev.csv.gz'),
index_col=0, parse_dates=True)
test_gross_lev = to_series(to_utc(test_gross_lev))
def test_gross_lev_calculation(self):
assert_series_equal(
timeseries.gross_lev(self.test_pos)['2004-02-01':],
self.test_gross_lev['2004-02-01':], check_names=False)
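# Illustrative (not a test): the drawdown helpers exercised above can be applied
# directly to any daily return series. A minimal sketch using the same toy
# numbers as TestDrawdown:
#
#   prices = pd.Series([10.0, 9.0, 7.5], index=pd.date_range('2000-1-3', periods=3))
#   rets = prices.pct_change()
#   timeseries.gen_drawdown_table(rets, top=1)   # net drawdown of 25%
#   timeseries.get_max_drawdown(rets)            # (peak, valley, recovery) dates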
|
apache-2.0
|
c11/yatsm
|
yatsm/algorithms/yatsm.py
|
1
|
11153
|
""" Yet Another TimeSeries Model baseclass
"""
import numpy as np
import patsy
import sklearn
import sklearn.linear_model
from .._cyprep import get_valid_mask
from ..regression.diagnostics import rmse
from ..regression.transforms import harm # noqa
class YATSM(object):
""" Yet Another TimeSeries Model baseclass
.. note::
When ``YATSM`` objects are fit, the intended order of method calls is:
1. Setup the model with :func:`~setup`
2. Preprocess a time series for one unit area with
:func:`~preprocess`
3. Fit the time series with the YATSM model using :func:`~fit`
4. A fitted model can be used to
* Predict on additional design matrixes with :func:`~predict`
* Plot diagnostic information with :func:`~plot`
* Return goodness of fit diagnostic metrics with :func:`~score`
.. note::
Record structured arrays must contain the following:
* ``start`` (`int`): starting dates of timeseries segments
* ``end`` (`int`): ending dates of timeseries segments
* ``break`` (`int`): break dates of timeseries segments
* ``coef`` (`double (n x p shape)`): number of bands x number of
features coefficients matrix for predictions
* ``rmse`` (`double (n length)`): Root Mean Squared Error for each
band
* ``px`` (`int`): pixel X coordinate
* ``py`` (`int`): pixel Y coordinate
Args:
test_indices (numpy.ndarray): Test for changes with these
indices of ``Y``. If not provided, all series in ``Y`` will be used
as test indices
estimator (dict): dictionary containing estimation model from
``scikit-learn`` used to fit and predict timeseries and,
optionally, a dict of options for the estimation model ``fit``
method (default: ``{'object': Lasso(alpha=20), 'fit': {}}``)
kwargs (dict): dictionary of addition keyword arguments
(for sub-classes)
Attributes:
record_template (numpy.ndarray): An empty NumPy structured array that
is a template for the model's ``record``
models (numpy.ndarray): prediction model objects
record (numpy.ndarray): NumPy structured array containing timeseries
model attribute information
n_record (int): number of recorded segments in time series model
n_series (int): number of bands in ``Y``
px (int): pixel X location or index
n_features (int): number of coefficients in ``X`` design matrix
py (int): pixel Y location or index
"""
def __init__(self,
test_indices=None,
estimator={'object': sklearn.linear_model.Lasso(alpha=20),
'fit': {}},
**kwargs):
self.test_indices = np.asarray(test_indices)
self.estimator = sklearn.clone(estimator['object'])
self.estimator_fit = estimator.get('fit', {})
self.models = [] # leave empty, fill in during `fit`
self.n_record = 0
self.record = []
self.n_series, self.n_features = 0, 0
self.px = kwargs.get('px', 0)
self.py = kwargs.get('py', 0)
@property
def record_template(self):
""" YATSM record template for features in X and series in Y
Record template will set ``px`` and ``py`` if defined as class
attributes. Otherwise ``px`` and ``py`` coordinates will default to 0.
Returns:
numpy.ndarray: NumPy structured array containing a template of a
YATSM record
"""
record_template = np.zeros(1, dtype=[
('start', 'i4'),
('end', 'i4'),
('break', 'i4'),
('coef', 'float32', (self.n_coef, self.n_series)),
('rmse', 'float32', (self.n_series)),
('px', 'u2'),
('py', 'u2')
])
record_template['px'] = self.px
record_template['py'] = self.py
return record_template
# SETUP & PREPROCESSING
def setup(self, df, **config):
""" Setup model for input dataset and (optionally) return design matrix
Args:
df (pandas.DataFrame): Pandas dataframe containing dataset
attributes (e.g., dates, image ID, path/row, metadata, etc.)
config (dict): YATSM configuration dictionary from user, including
'dataset' and 'YATSM' sub-configurations
Returns:
numpy.ndarray or None: return design matrix if used by algorithm
"""
X = patsy.dmatrix(config['YATSM']['design_matrix'], data=df)
return X
def preprocess(self, X, Y, dates,
min_values=None, max_values=None,
mask_band=None, mask_values=None, **kwargs):
""" Preprocess a unit area of data (e.g., pixel, segment, etc.)
This preprocessing step will remove all observations that either
fall outside of the minimum/maximum range of the data or are flagged
for masking in the ``mask_band`` variable in ``Y``. If ``min_values``
or ``max_values`` are not specified, this masking step is skipped.
Similarly, masking based on a QA/QC or cloud mask will not be performed
if ``mask_band`` or ``mask_values`` are not provided.
Args:
X (numpy.ndarray): design matrix (number of observations x number
of features)
Y (numpy.ndarray): independent variable matrix (number of series x
number of observations)
dates (numpy.ndarray): ordinal dates for each observation in X/Y
min_values (np.ndarray): Minimum possible range of values for each
variable in Y (optional)
max_values (np.ndarray): Maximum possible range of values for each
variable in Y (optional)
mask_band (int): The mask band in Y (optional)
mask_values (sequence): A list or np.ndarray of values in the
``mask_band`` to mask (optional)
Returns:
tuple (np.ndarray, np.ndarray, np.ndarray): X, Y, and dates after
being preprocessed and masked
"""
if min_values is None or max_values is None:
valid = np.ones(dates.shape[0], dtype=np.bool)
else:
# Mask range of data
valid = get_valid_mask(Y, min_values, max_values).astype(bool)
# Apply mask band
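        # Note: ``mask_band`` is interpreted as 1-indexed, so band 8 in ``Y``
        # corresponds to row index 7; that row is compared against
        # ``mask_values`` and then dropped from the returned ``Y``.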
if mask_band is not None and mask_values is not None:
idx_mask = mask_band - 1
valid *= np.in1d(Y.take(idx_mask, axis=0), mask_values,
invert=True).astype(np.bool)
Y = np.delete(Y, idx_mask, axis=0)[:, valid]
X = X[valid, :]
dates = dates[valid]
return X, Y, dates
# TIMESERIES ENSEMBLE FIT/PREDICT
def fit(self, X, Y, dates):
""" Fit timeseries model
Args:
X (numpy.ndarray): design matrix (number of observations x number
of features)
Y (numpy.ndarray): independent variable matrix (number of series x
number of observations)
dates (numpy.ndarray): ordinal dates for each observation in X/Y
Returns:
cls: Return ``self``
"""
raise NotImplementedError('Subclasses should implement fit method')
def fit_models(self, X, Y, bands=None):
""" Fit timeseries models for `bands` within `Y` for a given `X`
Updates or initializes fit for ``self.models``
Args:
X (numpy.ndarray): design matrix (number of observations x number
of features)
Y (numpy.ndarray): independent variable matrix (number of series x
                number of observations)
bands (iterable): Subset of bands of `Y` to fit. If None are
provided, fit all bands in Y
"""
if bands is None:
bands = np.arange(self.n_series)
for b in bands:
y = Y[b, :]
model = self.models[b]
model.fit(X, y, **self.estimator_fit)
# Add in RMSE calculation
model.rmse = rmse(y, model.predict(X))
# Add intercept to intercept term of design matrix
model.coef = model.coef_.copy()
model.coef[0] += model.intercept_
def predict(self, X, dates, series=None):
""" Return a 2D NumPy array of y-hat predictions for a given X
Predictions are made from ensemble of timeseries models such that
predicted values are generated for each date using the model from the
timeseries segment that intersects each date.
Args:
X (numpy.ndarray): Design matrix (number of observations x number
of features)
dates (int or numpy.ndarray): A single ordinal date or a np.ndarray
of length X.shape[0] specifying the ordinal dates for each
prediction
series (iterable, optional): Return prediction for subset of series
within timeseries model. If None is provided, returns
predictions from all series
Returns:
numpy.ndarray: Prediction for given X (number of series x number of
observations)
"""
raise NotImplementedError('Subclasses should implement "predict" '
'method')
# DIAGNOSTICS
def score(self, X, Y, dates):
""" Return timeseries model performance scores
Args:
X (numpy.ndarray): design matrix (number of observations x number
of features)
Y (numpy.ndarray): independent variable matrix (number of series x
number of observations)
dates (numpy.ndarray): ordinal dates for each observation in X/Y
Returns:
namedtuple: performance summary statistics
"""
raise NotImplementedError('Subclasses should implement "score" method')
def plot(self, X, Y, dates, **config):
""" Plot the timeseries model results
Args:
X (numpy.ndarray): design matrix (number of observations x number
of features)
Y (numpy.ndarray): independent variable matrix (number of series x
number of observations)
dates (numpy.ndarray): ordinal dates for each observation in X/Y
config (dict): YATSM configuration dictionary from user, including
'dataset' and 'YATSM' sub-configurations
"""
raise NotImplementedError('Subclasses should implement "plot" method')
# MAKE ITERABLE
def __iter__(self):
""" Iterate over the timeseries segment records
"""
for record in self.record:
yield record
def __len__(self):
""" Return the number of segments in this timeseries model
"""
return len(self.record)
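# A minimal usage sketch (illustrative, not part of the original module),
# following the method order described in the class docstring. ``MyModel``
# and the input arrays are hypothetical stand-ins for a concrete subclass
# and real data:
#
#   class MyModel(YATSM):
#       def fit(self, X, Y, dates):
#           self.n_series, self.n_features = Y.shape[0], X.shape[1]
#           self.models = [sklearn.clone(self.estimator)
#                          for _ in range(self.n_series)]
#           self.fit_models(X, Y)
#           return self
#
#   model = MyModel(test_indices=[0, 1])
#   X, Y, dates = model.preprocess(X, Y, dates, mask_band=8, mask_values=[2, 4])
#   model.fit(X, Y, dates)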
|
mit
|
lizardsystem/threedilib
|
threedilib/modeling/addheight.py
|
1
|
22410
|
# -*- coding: utf-8 -*-
# (c) Nelen & Schuurmans. GPL licensed, see LICENSE.rst.
from __future__ import print_function
from __future__ import unicode_literals
from __future__ import absolute_import
from __future__ import division
import argparse
import os
import re
from osgeo import gdal
from osgeo import ogr
from scipy import ndimage
import numpy as np
from threedilib.modeling import progress
from threedilib.modeling import vector
from threedilib import config
DESCRIPTION = """
Convert a shapefile containing 2D linestrings to a shapefile with
embedded elevation from an elevation map.
Target shapefile can have two layouts: A 'point' layout where the
elevation is stored in the third coordinate of a 3D linestring, and
a 'line' layout where a separate feature is created in the target
shapefile for each segment of each feature in the source shapefile,
with two extra attributes compared to the original shapefile, one
to store the elevation, and another to store an arbitrary feature
id referring to the source feature in the source shapefile.
For the script to work, a configuration variable AHN_PATH must be
set in threedilib/localconfig.py pointing to the location of the
elevation map, and a variable INDEX_PATH pointing to the .shp file
that contains the index to the elevation map.
"""
LAYOUT_POINT = 'point'
LAYOUT_LINE = 'line'
PIXELSIZE = 0.5 # AHN2
STEPSIZE = 0.5 # For looking perpendicular to line.
LINESTRINGS = ogr.wkbLineString, ogr.wkbLineString25D
MULTILINESTRINGS = ogr.wkbMultiLineString, ogr.wkbMultiLineString25D
SHEET = re.compile('^(?P<unit>[0-9]{2}[a-z])[a-z][0-9]_[0-9]{2}$')
def get_parser():
""" Return arguments dictionary. """
parser = argparse.ArgumentParser(
description=DESCRIPTION,
formatter_class=argparse.RawDescriptionHelpFormatter,
)
parser.add_argument('source_path',
metavar='SOURCE',
help='Path to shapefile with 2D linestrings.')
parser.add_argument('target_path',
metavar='TARGET',
help='Path to target shapefile.')
parser.add_argument('-o', '--overwrite',
action='store_true',
help='Overwrite TARGET if it exists.')
parser.add_argument('-d', '--distance',
metavar='DISTANCE',
type=float,
default=0,
                        help=('Distance (half-width) to look '
                              'perpendicular to the segments to '
                              'find the highest (or lowest, with '
                              '--inverse) points on the elevation '
                              'map. Defaults to 0.0.'))
parser.add_argument('-w', '--width',
metavar='WIDTH',
type=float,
default=0,
help=('Guaranteed width of maximum. '
'Defaults to 0.0.'))
parser.add_argument('-m', '--modify',
action='store_true',
help='Change horizontal geometry.')
parser.add_argument('-a', '--average',
metavar='AMOUNT',
type=int,
default=0,
                        help=('Per group of AMOUNT segments, average the '
                              'centers and take the minimum of the values.'))
parser.add_argument('-l', '--layout',
metavar='LAYOUT',
choices=[LAYOUT_POINT, LAYOUT_LINE],
default=LAYOUT_POINT,
help="Target shapefile layout.")
parser.add_argument('-f', '--feature-id-attribute',
metavar='FEATURE_ID_ATTRIBUTE',
default='_feat_id',
help='Attribute name for the feature id.')
parser.add_argument('-e', '--elevation-attribute',
metavar='ELEVATION_ATTRIBUTE',
default='_elevation',
help='Attribute name for the elevation.')
parser.add_argument('-i', '--inverse',
#metavar='INVERSE',
action='store_true',
help='Look for lowest points instead of highest.')
return parser
def get_index():
""" Return index from container or open from config location. """
key = 'index'
if key not in cache:
if os.path.exists(config.INDEX_PATH):
dataset = ogr.Open(config.INDEX_PATH)
else:
            raise OSError('File not found: {}'.format(config.INDEX_PATH))
cache[key] = dataset
return cache[key][0]
def get_dataset(leafno):
""" Return gdal_dataset from cache or file. """
if leafno in cache:
return cache[leafno]
if len(cache) > 10:
for key in cache.keys():
if SHEET.match(key):
del cache[key] # Maybe unnecessary, see top and lsof.
# Add to cache and return.
unit = SHEET.match(leafno).group('unit')
try:
prefix = config.AHN_PREFIX
except AttributeError:
prefix = 'i'
path = os.path.join(config.AHN_PATH, unit, prefix + leafno + '.tif')
dataset = gdal.Open(path)
cache[leafno] = Dataset(dataset)
dataset = None
return cache[leafno]
def get_carpet(mline, distance, step=None):
"""
Return MxNx2 numpy array.
It contains the first point of the first line, the centers, and the
last point of the last line of the MagicLine, but perpendicularly
repeated along the normals to the segments of the MagicLine, up to
distance, with step.
"""
# Determine the offsets from the points on the line
if step is None or step == 0:
length = 2
else:
# Length must be uneven, and no less than 2 * distance / step + 1
# Take a look at np.around() (round to even values)!
length = 2 * np.round(0.5 + distance / step) + 1
offsets_1d = np.mgrid[-distance:distance:length * 1j]
# Normalize and rotate the vectors of the linesegments
vectors = vector.normalize(vector.rotate(mline.vectors, 270))
# Extend vectors and centers.
evectors = np.concatenate([[vectors[0]], vectors[:], [vectors[-1]]])
ecenters = np.concatenate([[mline.points[0]],
mline.centers[:],
[mline.points[-1]]])
offsets_2d = evectors.reshape(-1, 1, 2) * offsets_1d.reshape(1, -1, 1)
points = offsets_2d + ecenters.reshape(-1, 1, 2)
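    # Resulting shape: (S + 2, N, 2) for a MagicLine with S segments -- one
    # row for the first point, one per segment center and one for the last
    # point -- with N perpendicular offsets derived from distance and step.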
return points
def get_leafnos(mline, distance):
""" Return the leafnos for the outermost lines of the carpet. """
# Convert magic line to carpet to linestring around carpet
pline = mline.pixelize(size=PIXELSIZE, endsonly=True)
points = get_carpet(mline=pline,
distance=distance)
# Create polygon containing outermost lines
linering = np.vstack([
points[:, 0], points[::-1, -1], points[:1, 0]
])
linestring = vector.polygon2geometry(linering)
# Query the index with it
index = get_index()
index.SetSpatialFilter(linestring)
return [feature[b'BLADNR'] for feature in index]
def paste_values(points, values, leafno):
""" Paste values of evelation pixels at points. """
dataset = get_dataset(leafno)
xmin, ymin, xmax, ymax = dataset.get_extent()
cellsize = dataset.get_cellsize()
origin = dataset.get_origin()
# Determine which points are outside leaf's extent.
# '=' added for the corner where the index origin is.
index = np.logical_and(np.logical_and(points[..., 0] >= xmin,
points[..., 0] < xmax),
np.logical_and(points[..., 1] > ymin,
points[..., 1] <= ymax))
# Determine indices for these points
indices = tuple(np.uint64(
(points[index] - origin) / cellsize,
).transpose())[::-1]
# Assign data for these points to corresponding values.
values[index] = dataset.data[indices]
def average_result(amount, lines, centers, values):
"""
Return dictionary of numpy arrays.
Points and values are averaged in groups of amount, but lines are
converted per group to a line from the start point of the first line
in the group to the end point of the last line in the group.
"""
# Determine the size needed to fit an integer multiple of amount
oldsize = values.size
newsize = int(np.ceil(values.size / amount) * amount)
# Determine lines
ma_lines = np.ma.array(np.empty((newsize, 2, 2)), mask=True)
ma_lines[:oldsize] = lines
ma_lines[oldsize:] = lines[-1] # Repeat last line
result_lines = np.array([
ma_lines.reshape(-1, amount, 2, 2)[:, 0, 0],
ma_lines.reshape(-1, amount, 2, 2)[:, -1, 1],
]).transpose(1, 0, 2)
# Calculate points and values by averaging
ma_centers = np.ma.array(np.empty((newsize, 2)), mask=True)
ma_centers[:oldsize] = centers
ma_values = np.ma.array(np.empty(newsize), mask=True)
ma_values[:oldsize] = values
return dict(lines=result_lines,
values=ma_values.reshape(-1, amount).min(1),
centers=ma_centers.reshape(-1, amount, 2).mean(1))
class Dataset(object):
def __init__(self, dataset):
""" Initialize from gdal dataset. """
# There exist datasets with slightly offset origins.
# Here origin is rounded to whole meters.
a, b, c, d, e, f = dataset.GetGeoTransform()
self.geotransform = round(a), b, c, round(d), e, f
self.size = dataset.RasterXSize, dataset.RasterYSize
self.data = dataset.ReadAsArray()
# Check for holes in the data
nodatavalue = dataset.GetRasterBand(1).GetNoDataValue()
if nodatavalue in self.data:
raise ValueError('Dataset {} contains nodatavalues!'.format(
dataset.GetFileList()[0]
))
def get_extent(self):
""" Return tuple of xmin, ymin, xmax, ymax. """
return (self.geotransform[0],
self.geotransform[3] + self.size[1] * self.geotransform[5],
self.geotransform[0] + self.size[0] * self.geotransform[1],
self.geotransform[3])
def get_cellsize(self):
""" Return numpy array. """
return np.array([[self.geotransform[1], self.geotransform[5]]])
def get_origin(self):
""" Return numpy array. """
return np.array([[self.geotransform[0], self.geotransform[3]]])
class BaseWriter(object):
""" Base class for common writer methods. """
def __init__(self, path, **kwargs):
self.path = path
for key, value in kwargs.items():
setattr(self, key, value)
def __enter__(self):
""" Creates or replaces the target shapefile. """
driver = ogr.GetDriverByName(b'ESRI Shapefile')
if os.path.exists(self.path):
driver.DeleteDataSource(str(self.path))
self.dataset = driver.CreateDataSource(str(self.path))
return self
def __exit__(self, type, value, traceback):
""" Close dataset. """
self.layer = None
self.dataset = None
def _modify(self, points, values, mline, step):
""" Return dictionary of numpy arrays. """
# First a minimum or maximum filter with requested width
filtersize = np.round(self.width / step)
if filtersize > 0:
# Choices based on inverse or not
cval = values.max() if self.inverse else values.min()
if self.inverse:
extremum_filter = ndimage.maximum_filter
else:
extremum_filter = ndimage.minimum_filter
# Filtering
fpoints = ndimage.convolve(
points, np.ones((1, filtersize, 1)) / filtersize,
) # Moving average for the points
fvalues = extremum_filter(
values, size=(1, filtersize), mode='constant', cval=cval,
) # Moving extremum for the values
else:
fpoints = points
fvalues = values
if self.inverse:
# Find the minimum per filtered line
index = (np.arange(len(fvalues)), fvalues.argmin(axis=1))
else:
# Find the maximum per filtered line
index = (np.arange(len(fvalues)), fvalues.argmax(axis=1))
mpoints = fpoints[index]
mvalues = fvalues[index]
# Sorting points and values according to projection on mline
parameters = mline.project(mpoints)
ordering = parameters.argsort()
spoints = mpoints[ordering]
svalues = mvalues[ordering]
# Quick 'n dirty way of getting to result dict
rlines = np.array([spoints[:-1], spoints[1:]]).transpose(1, 0, 2)
rcenters = spoints[1:]
rvalues = svalues[1:]
return dict(lines=rlines,
centers=rcenters,
values=rvalues)
def _calculate(self, wkb_line_string):
""" Return lines, points, values tuple of numpy arrays. """
# Determine the leafnos
mline = vector.MagicLine(np.array(wkb_line_string.GetPoints())[:, :2])
leafnos = get_leafnos(mline=mline, distance=self.distance)
# Determine the point and values carpets
pline = mline.pixelize(size=PIXELSIZE)
points = get_carpet(
mline=pline,
distance=self.distance,
step=STEPSIZE,
)
values = np.ma.array(
np.empty(points.shape[:2]),
mask=True,
)
# Get the values into the carpet per leafno
for leafno in leafnos:
paste_values(points, values, leafno)
# Debugging plotcode
#from matplotlib.backends import backend_agg
#from matplotlib import figure
#from PIL import Image
#fig = figure.Figure()
#backend_agg.FigureCanvasAgg(fig)
#axes = fig.add_axes([0.1, 0.1, 0.8, 0.8])
#axes.axis('equal')
#axes.plot(
#points[~values.mask][..., 0], points[~values.mask][..., 1],
#'.g',
#)
#axes.plot(
#points[values.mask][..., 0], points[values.mask][..., 1],
#'.r',
#)
#buf, size = axes.figure.canvas.print_to_buffer()
#Image.fromstring('RGBA', size, buf).show()
# End debugging plotcode
if values.mask.any():
raise ValueError('Masked values remaining after filling!')
# Return lines, centers, values
if self.modify:
result = self._modify(points=points,
values=values,
mline=mline,
step=STEPSIZE)
else:
result = dict(lines=pline.lines,
centers=pline.centers,
values=values.data[1:-1].max(1))
if self.average:
return average_result(amount=self.average, **result)
else:
return result
def _add_layer(self, layer):
""" Add empty copy of layer. """
# Create layer
self.layer = self.dataset.CreateLayer(layer.GetName())
# Copy field definitions
layer_definition = layer.GetLayerDefn()
for i in range(layer_definition.GetFieldCount()):
self.layer.CreateField(layer_definition.GetFieldDefn(i))
class CoordinateWriter(BaseWriter):
""" Writes a shapefile with height in z coordinate. """
def _convert_wkb_line_string(self, source_wkb_line_string):
""" Return a wkb line string. """
result = self._calculate(wkb_line_string=source_wkb_line_string)
target_wkb_line_string = ogr.Geometry(ogr.wkbLineString)
# Add the first point of the first line
(x, y), z = result['lines'][0, 0], result['values'][0]
target_wkb_line_string.AddPoint(float(x), float(y), float(z))
# Add the centers (x, y) and values (z)
for (x, y), z in zip(result['centers'], result['values']):
target_wkb_line_string.AddPoint(float(x), float(y), float(z))
# Add the last point of the last line
(x, y), z = result['lines'][-1, 1], result['values'][-1]
target_wkb_line_string.AddPoint(float(x), float(y), float(z))
return target_wkb_line_string
def _convert(self, source_geometry):
"""
Return converted linestring or multiline.
"""
geometry_type = source_geometry.GetGeometryType()
if geometry_type in LINESTRINGS:
return self._convert_wkb_line_string(source_geometry)
if geometry_type in MULTILINESTRINGS:
target_geometry = ogr.Geometry(source_geometry.GetGeometryType())
for source_wkb_line_string in source_geometry:
target_geometry.AddGeometry(
self._convert_wkb_line_string(source_wkb_line_string),
)
return target_geometry
raise ValueError('Unexpected geometry type: {}'.format(
source_geometry.GetGeometryName(),
))
def _add_feature(self, feature):
""" Add converted feature. """
# Create feature
layer_definition = self.layer.GetLayerDefn()
new_feature = ogr.Feature(layer_definition)
# Copy attributes
for key, value in feature.items().items():
new_feature[key] = value
# Set geometry and add to layer
geometry = self._convert(source_geometry=feature.geometry())
new_feature.SetGeometry(geometry)
self.layer.CreateFeature(new_feature)
self.indicator.update()
def add(self, path, **kwargs):
""" Convert dataset at path. """
dataset = ogr.Open(path)
count = sum(layer.GetFeatureCount() for layer in dataset)
self.indicator = progress.Indicator(count)
for layer in dataset:
self._add_layer(layer)
for feature in layer:
try:
self._add_feature(feature)
except Exception as e:
with open('errors.txt', 'a') as errorfile:
errorfile.write(unicode(e) + '\n')
errorfile.write(unicode(feature.items()) + '\n')
dataset = None
class AttributeWriter(BaseWriter):
""" Writes a shapefile with height in z attribute. """
def _convert(self, source_geometry):
"""
Return generator of (geometry, height) tuples.
"""
geometry_type = source_geometry.GetGeometryType()
if geometry_type in LINESTRINGS:
source_wkb_line_strings = [source_geometry]
elif geometry_type in MULTILINESTRINGS:
source_wkb_line_strings = [line for line in source_geometry]
else:
raise ValueError('Unexpected geometry type: {}'.format(
source_geometry.GetGeometryName(),
))
for source_wkb_line_string in source_wkb_line_strings:
result = self._calculate(wkb_line_string=source_wkb_line_string)
for line, value in zip(result['lines'], result['values']):
yield vector.line2geometry(line), str(value)
def _add_fields(self):
""" Create extra fields. """
for name, kind in ((str(self.elevation_attribute), ogr.OFTReal),
(str(self.feature_id_attribute), ogr.OFTInteger)):
definition = ogr.FieldDefn(name, kind)
self.layer.CreateField(definition)
def _add_feature(self, feature_id, feature):
""" Add converted features. """
layer_definition = self.layer.GetLayerDefn()
generator = self._convert(source_geometry=feature.geometry())
for geometry, elevation in generator:
# Create feature
new_feature = ogr.Feature(layer_definition)
# Copy attributes
for key, value in feature.items().items():
new_feature[key] = value
# Add special attributes
new_feature[str(self.elevation_attribute)] = elevation
new_feature[str(self.feature_id_attribute)] = feature_id
# Set geometry and add to layer
new_feature.SetGeometry(geometry)
self.layer.CreateFeature(new_feature)
self.indicator.update()
def add(self, path):
""" Convert dataset at path. """
dataset = ogr.Open(path)
count = sum(layer.GetFeatureCount() for layer in dataset)
self.indicator = progress.Indicator(count)
for layer in dataset:
self._add_layer(layer)
self._add_fields()
for feature_id, feature in enumerate(layer):
try:
self._add_feature(feature_id=feature_id, feature=feature)
except Exception as e:
with open('errors.txt', 'a') as errorfile:
errorfile.write(unicode(e) + '\n')
errorfile.write(unicode(feature.items()) + '\n')
dataset = None
def addheight(source_path, target_path, overwrite,
distance, width, modify, average, inverse,
layout, elevation_attribute, feature_id_attribute):
"""
Take linestrings from source and create target with height added.
Source and target are both shapefiles.
"""
if os.path.exists(target_path) and not overwrite:
print("'{}' already exists. Use --overwrite.".format(target_path))
return 1
if modify and not distance:
print('Warning: --modify used with zero distance.')
Writer = CoordinateWriter if layout == LAYOUT_POINT else AttributeWriter
with Writer(target_path,
distance=distance,
width=width,
modify=modify,
average=average,
inverse=inverse,
elevation_attribute=elevation_attribute,
feature_id_attribute=feature_id_attribute) as writer:
writer.add(source_path)
return 0
def main():
""" Call addheight() with commandline args. """
addheight(**vars(get_parser().parse_args()))
cache = {} # Contains leafno's and the index
if __name__ == '__main__':
exit(main())
|
gpl-3.0
|
cwu2011/seaborn
|
doc/conf.py
|
6
|
9112
|
# -*- coding: utf-8 -*-
#
# seaborn documentation build configuration file, created by
# sphinx-quickstart on Mon Jul 29 23:25:46 2013.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys, os
import sphinx_bootstrap_theme
import matplotlib as mpl
mpl.use("Agg")
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
sys.path.insert(0, os.path.abspath('sphinxext'))
extensions = ['sphinx.ext.autodoc',
'sphinx.ext.doctest',
'sphinx.ext.coverage',
'sphinx.ext.mathjax',
'sphinx.ext.autosummary',
'plot_generator',
'plot_directive',
'numpydoc',
'ipython_directive',
'ipython_console_highlighting',
]
# Generate the API documentation when building
autosummary_generate = True
numpydoc_show_class_members = False
# Include the example source for plots in API docs
plot_include_source = True
plot_formats = [("png", 90)]
plot_html_show_formats = False
plot_html_show_source_link = False
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'seaborn'
copyright = u'2012-2015, Michael Waskom'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
sys.path.insert(0, os.path.abspath(os.path.pardir))
import seaborn
version = seaborn.__version__
# The full version, including alpha/beta/rc tags.
release = seaborn.__version__
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'bootstrap'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
html_theme_options = {
'source_link_position': "footer",
'bootswatch_theme': "flatly",
'navbar_sidebarrel': False,
'bootstrap_version': "3",
'navbar_links': [("Tutorial", "tutorial"),
("Gallery", "examples/index")],
}
# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = sphinx_bootstrap_theme.get_html_theme_path()
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static', 'example_thumbs']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'seaborndoc'
# -- Options for LaTeX output --------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
('index', 'seaborn.tex', u'seaborn Documentation',
u'Michael Waskom', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output --------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'seaborn', u'seaborn Documentation',
[u'Michael Waskom'], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output ------------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'seaborn', u'seaborn Documentation',
u'Michael Waskom', 'seaborn', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# Add the 'copybutton' javascript, to hide/show the prompt in code
# examples, originally taken from scikit-learn's doc/conf.py
def setup(app):
app.add_javascript('copybutton.js')
app.add_stylesheet('style.css')
|
bsd-3-clause
|
MatthieuBizien/scikit-learn
|
sklearn/metrics/setup.py
|
24
|
1059
|
import os
import os.path
import numpy
from numpy.distutils.misc_util import Configuration
from sklearn._build_utils import get_blas_info
def configuration(parent_package="", top_path=None):
config = Configuration("metrics", parent_package, top_path)
cblas_libs, blas_info = get_blas_info()
if os.name == 'posix':
cblas_libs.append('m')
config.add_extension("pairwise_fast",
sources=["pairwise_fast.c"],
include_dirs=[os.path.join('..', 'src', 'cblas'),
numpy.get_include(),
blas_info.pop('include_dirs', [])],
libraries=cblas_libs,
extra_compile_args=blas_info.pop('extra_compile_args',
[]),
**blas_info)
config.add_subpackage('tests')
return config
if __name__ == "__main__":
from numpy.distutils.core import setup
setup(**configuration().todict())
|
bsd-3-clause
|
shoyer/numpy
|
numpy/lib/twodim_base.py
|
4
|
27413
|
""" Basic functions for manipulating 2d arrays
"""
from __future__ import division, absolute_import, print_function
import functools
from numpy.core.numeric import (
absolute, asanyarray, arange, zeros, greater_equal, multiply, ones,
asarray, where, int8, int16, int32, int64, empty, promote_types, diagonal,
nonzero
)
from numpy.core.overrides import set_module
from numpy.core import overrides
from numpy.core import iinfo, transpose
__all__ = [
'diag', 'diagflat', 'eye', 'fliplr', 'flipud', 'tri', 'triu',
'tril', 'vander', 'histogram2d', 'mask_indices', 'tril_indices',
'tril_indices_from', 'triu_indices', 'triu_indices_from', ]
array_function_dispatch = functools.partial(
overrides.array_function_dispatch, module='numpy')
i1 = iinfo(int8)
i2 = iinfo(int16)
i4 = iinfo(int32)
def _min_int(low, high):
""" get small int that fits the range """
if high <= i1.max and low >= i1.min:
return int8
if high <= i2.max and low >= i2.min:
return int16
if high <= i4.max and low >= i4.min:
return int32
return int64
def _flip_dispatcher(m):
return (m,)
@array_function_dispatch(_flip_dispatcher)
def fliplr(m):
"""
Flip array in the left/right direction.
Flip the entries in each row in the left/right direction.
Columns are preserved, but appear in a different order than before.
Parameters
----------
m : array_like
Input array, must be at least 2-D.
Returns
-------
f : ndarray
A view of `m` with the columns reversed. Since a view
is returned, this operation is :math:`\\mathcal O(1)`.
See Also
--------
flipud : Flip array in the up/down direction.
rot90 : Rotate array counterclockwise.
Notes
-----
Equivalent to m[:,::-1]. Requires the array to be at least 2-D.
Examples
--------
>>> A = np.diag([1.,2.,3.])
>>> A
array([[1., 0., 0.],
[0., 2., 0.],
[0., 0., 3.]])
>>> np.fliplr(A)
array([[0., 0., 1.],
[0., 2., 0.],
[3., 0., 0.]])
>>> A = np.random.randn(2,3,5)
>>> np.all(np.fliplr(A) == A[:,::-1,...])
True
"""
m = asanyarray(m)
if m.ndim < 2:
raise ValueError("Input must be >= 2-d.")
return m[:, ::-1]
@array_function_dispatch(_flip_dispatcher)
def flipud(m):
"""
Flip array in the up/down direction.
Flip the entries in each column in the up/down direction.
Rows are preserved, but appear in a different order than before.
Parameters
----------
m : array_like
Input array.
Returns
-------
out : array_like
A view of `m` with the rows reversed. Since a view is
returned, this operation is :math:`\\mathcal O(1)`.
See Also
--------
fliplr : Flip array in the left/right direction.
rot90 : Rotate array counterclockwise.
Notes
-----
Equivalent to ``m[::-1,...]``.
Does not require the array to be two-dimensional.
Examples
--------
>>> A = np.diag([1.0, 2, 3])
>>> A
array([[1., 0., 0.],
[0., 2., 0.],
[0., 0., 3.]])
>>> np.flipud(A)
array([[0., 0., 3.],
[0., 2., 0.],
[1., 0., 0.]])
>>> A = np.random.randn(2,3,5)
>>> np.all(np.flipud(A) == A[::-1,...])
True
>>> np.flipud([1,2])
array([2, 1])
"""
m = asanyarray(m)
if m.ndim < 1:
raise ValueError("Input must be >= 1-d.")
return m[::-1, ...]
@set_module('numpy')
def eye(N, M=None, k=0, dtype=float, order='C'):
"""
Return a 2-D array with ones on the diagonal and zeros elsewhere.
Parameters
----------
N : int
Number of rows in the output.
M : int, optional
Number of columns in the output. If None, defaults to `N`.
k : int, optional
Index of the diagonal: 0 (the default) refers to the main diagonal,
a positive value refers to an upper diagonal, and a negative value
to a lower diagonal.
dtype : data-type, optional
Data-type of the returned array.
order : {'C', 'F'}, optional
Whether the output should be stored in row-major (C-style) or
column-major (Fortran-style) order in memory.
.. versionadded:: 1.14.0
Returns
-------
I : ndarray of shape (N,M)
An array where all elements are equal to zero, except for the `k`-th
diagonal, whose values are equal to one.
See Also
--------
identity : (almost) equivalent function
diag : diagonal 2-D array from a 1-D array specified by the user.
Examples
--------
>>> np.eye(2, dtype=int)
array([[1, 0],
[0, 1]])
>>> np.eye(3, k=1)
array([[0., 1., 0.],
[0., 0., 1.],
[0., 0., 0.]])
"""
if M is None:
M = N
m = zeros((N, M), dtype=dtype, order=order)
if k >= M:
return m
if k >= 0:
i = k
else:
i = (-k) * M
m[:M-k].flat[i::M+1] = 1
return m
def _diag_dispatcher(v, k=None):
return (v,)
@array_function_dispatch(_diag_dispatcher)
def diag(v, k=0):
"""
Extract a diagonal or construct a diagonal array.
See the more detailed documentation for ``numpy.diagonal`` if you use this
function to extract a diagonal and wish to write to the resulting array;
whether it returns a copy or a view depends on what version of numpy you
are using.
Parameters
----------
v : array_like
If `v` is a 2-D array, return a copy of its `k`-th diagonal.
If `v` is a 1-D array, return a 2-D array with `v` on the `k`-th
diagonal.
k : int, optional
Diagonal in question. The default is 0. Use `k>0` for diagonals
above the main diagonal, and `k<0` for diagonals below the main
diagonal.
Returns
-------
out : ndarray
The extracted diagonal or constructed diagonal array.
See Also
--------
diagonal : Return specified diagonals.
diagflat : Create a 2-D array with the flattened input as a diagonal.
trace : Sum along diagonals.
triu : Upper triangle of an array.
tril : Lower triangle of an array.
Examples
--------
>>> x = np.arange(9).reshape((3,3))
>>> x
array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
>>> np.diag(x)
array([0, 4, 8])
>>> np.diag(x, k=1)
array([1, 5])
>>> np.diag(x, k=-1)
array([3, 7])
>>> np.diag(np.diag(x))
array([[0, 0, 0],
[0, 4, 0],
[0, 0, 8]])
"""
v = asanyarray(v)
s = v.shape
if len(s) == 1:
n = s[0]+abs(k)
res = zeros((n, n), v.dtype)
if k >= 0:
i = k
else:
i = (-k) * n
res[:n-k].flat[i::n+1] = v
return res
elif len(s) == 2:
return diagonal(v, k)
else:
raise ValueError("Input must be 1- or 2-d.")
@array_function_dispatch(_diag_dispatcher)
def diagflat(v, k=0):
"""
Create a two-dimensional array with the flattened input as a diagonal.
Parameters
----------
v : array_like
Input data, which is flattened and set as the `k`-th
diagonal of the output.
k : int, optional
Diagonal to set; 0, the default, corresponds to the "main" diagonal,
a positive (negative) `k` giving the number of the diagonal above
(below) the main.
Returns
-------
out : ndarray
The 2-D output array.
See Also
--------
diag : MATLAB work-alike for 1-D and 2-D arrays.
diagonal : Return specified diagonals.
trace : Sum along diagonals.
Examples
--------
>>> np.diagflat([[1,2], [3,4]])
array([[1, 0, 0, 0],
[0, 2, 0, 0],
[0, 0, 3, 0],
[0, 0, 0, 4]])
>>> np.diagflat([1,2], 1)
array([[0, 1, 0],
[0, 0, 2],
[0, 0, 0]])
"""
try:
wrap = v.__array_wrap__
except AttributeError:
wrap = None
v = asarray(v).ravel()
s = len(v)
n = s + abs(k)
res = zeros((n, n), v.dtype)
if (k >= 0):
i = arange(0, n-k)
fi = i+k+i*n
else:
i = arange(0, n+k)
fi = i+(i-k)*n
res.flat[fi] = v
if not wrap:
return res
return wrap(res)
@set_module('numpy')
def tri(N, M=None, k=0, dtype=float):
"""
An array with ones at and below the given diagonal and zeros elsewhere.
Parameters
----------
N : int
Number of rows in the array.
M : int, optional
Number of columns in the array.
By default, `M` is taken equal to `N`.
k : int, optional
The sub-diagonal at and below which the array is filled.
`k` = 0 is the main diagonal, while `k` < 0 is below it,
and `k` > 0 is above. The default is 0.
dtype : dtype, optional
Data type of the returned array. The default is float.
Returns
-------
tri : ndarray of shape (N, M)
Array with its lower triangle filled with ones and zero elsewhere;
in other words ``T[i,j] == 1`` for ``i <= j + k``, 0 otherwise.
Examples
--------
>>> np.tri(3, 5, 2, dtype=int)
array([[1, 1, 1, 0, 0],
[1, 1, 1, 1, 0],
[1, 1, 1, 1, 1]])
>>> np.tri(3, 5, -1)
array([[0., 0., 0., 0., 0.],
[1., 0., 0., 0., 0.],
[1., 1., 0., 0., 0.]])
"""
if M is None:
M = N
m = greater_equal.outer(arange(N, dtype=_min_int(0, N)),
arange(-k, M-k, dtype=_min_int(-k, M - k)))
# Avoid making a copy if the requested type is already bool
m = m.astype(dtype, copy=False)
return m
def _trilu_dispatcher(m, k=None):
return (m,)
@array_function_dispatch(_trilu_dispatcher)
def tril(m, k=0):
"""
Lower triangle of an array.
Return a copy of an array with elements above the `k`-th diagonal zeroed.
Parameters
----------
m : array_like, shape (M, N)
Input array.
k : int, optional
Diagonal above which to zero elements. `k = 0` (the default) is the
main diagonal, `k < 0` is below it and `k > 0` is above.
Returns
-------
tril : ndarray, shape (M, N)
Lower triangle of `m`, of same shape and data-type as `m`.
See Also
--------
triu : same thing, only for the upper triangle
Examples
--------
>>> np.tril([[1,2,3],[4,5,6],[7,8,9],[10,11,12]], -1)
array([[ 0, 0, 0],
[ 4, 0, 0],
[ 7, 8, 0],
[10, 11, 12]])
"""
m = asanyarray(m)
mask = tri(*m.shape[-2:], k=k, dtype=bool)
return where(mask, m, zeros(1, m.dtype))
@array_function_dispatch(_trilu_dispatcher)
def triu(m, k=0):
"""
Upper triangle of an array.
Return a copy of a matrix with the elements below the `k`-th diagonal
zeroed.
Please refer to the documentation for `tril` for further details.
See Also
--------
tril : lower triangle of an array
Examples
--------
>>> np.triu([[1,2,3],[4,5,6],[7,8,9],[10,11,12]], -1)
array([[ 1, 2, 3],
[ 4, 5, 6],
[ 0, 8, 9],
[ 0, 0, 12]])
"""
m = asanyarray(m)
mask = tri(*m.shape[-2:], k=k-1, dtype=bool)
return where(mask, zeros(1, m.dtype), m)
def _vander_dispatcher(x, N=None, increasing=None):
return (x,)
# Originally borrowed from John Hunter and matplotlib
@array_function_dispatch(_vander_dispatcher)
def vander(x, N=None, increasing=False):
"""
Generate a Vandermonde matrix.
The columns of the output matrix are powers of the input vector. The
order of the powers is determined by the `increasing` boolean argument.
Specifically, when `increasing` is False, the `i`-th output column is
the input vector raised element-wise to the power of ``N - i - 1``. Such
a matrix with a geometric progression in each row is named for Alexandre-
Theophile Vandermonde.
Parameters
----------
x : array_like
1-D input array.
N : int, optional
Number of columns in the output. If `N` is not specified, a square
array is returned (``N = len(x)``).
increasing : bool, optional
Order of the powers of the columns. If True, the powers increase
from left to right, if False (the default) they are reversed.
.. versionadded:: 1.9.0
Returns
-------
out : ndarray
Vandermonde matrix. If `increasing` is False, the first column is
``x^(N-1)``, the second ``x^(N-2)`` and so forth. If `increasing` is
True, the columns are ``x^0, x^1, ..., x^(N-1)``.
See Also
--------
polynomial.polynomial.polyvander
Examples
--------
>>> x = np.array([1, 2, 3, 5])
>>> N = 3
>>> np.vander(x, N)
array([[ 1, 1, 1],
[ 4, 2, 1],
[ 9, 3, 1],
[25, 5, 1]])
>>> np.column_stack([x**(N-1-i) for i in range(N)])
array([[ 1, 1, 1],
[ 4, 2, 1],
[ 9, 3, 1],
[25, 5, 1]])
>>> x = np.array([1, 2, 3, 5])
>>> np.vander(x)
array([[ 1, 1, 1, 1],
[ 8, 4, 2, 1],
[ 27, 9, 3, 1],
[125, 25, 5, 1]])
>>> np.vander(x, increasing=True)
array([[ 1, 1, 1, 1],
[ 1, 2, 4, 8],
[ 1, 3, 9, 27],
[ 1, 5, 25, 125]])
The determinant of a square Vandermonde matrix is the product
of the differences between the values of the input vector:
>>> np.linalg.det(np.vander(x))
48.000000000000043 # may vary
>>> (5-3)*(5-2)*(5-1)*(3-2)*(3-1)*(2-1)
48
"""
x = asarray(x)
if x.ndim != 1:
raise ValueError("x must be a one-dimensional array or sequence.")
if N is None:
N = len(x)
v = empty((len(x), N), dtype=promote_types(x.dtype, int))
tmp = v[:, ::-1] if not increasing else v
if N > 0:
tmp[:, 0] = 1
if N > 1:
tmp[:, 1:] = x[:, None]
multiply.accumulate(tmp[:, 1:], out=tmp[:, 1:], axis=1)
return v
def _histogram2d_dispatcher(x, y, bins=None, range=None, normed=None,
weights=None, density=None):
return (x, y, bins, weights)
@array_function_dispatch(_histogram2d_dispatcher)
def histogram2d(x, y, bins=10, range=None, normed=None, weights=None,
density=None):
"""
Compute the bi-dimensional histogram of two data samples.
Parameters
----------
x : array_like, shape (N,)
An array containing the x coordinates of the points to be
histogrammed.
y : array_like, shape (N,)
An array containing the y coordinates of the points to be
histogrammed.
bins : int or array_like or [int, int] or [array, array], optional
The bin specification:
* If int, the number of bins for the two dimensions (nx=ny=bins).
* If array_like, the bin edges for the two dimensions
(x_edges=y_edges=bins).
* If [int, int], the number of bins in each dimension
(nx, ny = bins).
* If [array, array], the bin edges in each dimension
(x_edges, y_edges = bins).
* A combination [int, array] or [array, int], where int
is the number of bins and array is the bin edges.
range : array_like, shape(2,2), optional
The leftmost and rightmost edges of the bins along each dimension
(if not specified explicitly in the `bins` parameters):
``[[xmin, xmax], [ymin, ymax]]``. All values outside of this range
will be considered outliers and not tallied in the histogram.
density : bool, optional
If False, the default, returns the number of samples in each bin.
If True, returns the probability *density* function at the bin,
``bin_count / sample_count / bin_area``.
normed : bool, optional
An alias for the density argument that behaves identically. To avoid
confusion with the broken normed argument to `histogram`, `density`
should be preferred.
weights : array_like, shape(N,), optional
An array of values ``w_i`` weighing each sample ``(x_i, y_i)``.
Weights are normalized to 1 if `normed` is True. If `normed` is
False, the values of the returned histogram are equal to the sum of
the weights belonging to the samples falling into each bin.
Returns
-------
H : ndarray, shape(nx, ny)
The bi-dimensional histogram of samples `x` and `y`. Values in `x`
are histogrammed along the first dimension and values in `y` are
histogrammed along the second dimension.
xedges : ndarray, shape(nx+1,)
The bin edges along the first dimension.
yedges : ndarray, shape(ny+1,)
The bin edges along the second dimension.
See Also
--------
histogram : 1D histogram
histogramdd : Multidimensional histogram
Notes
-----
When `normed` is True, then the returned histogram is the sample
density, defined such that the sum over bins of the product
``bin_value * bin_area`` is 1.
Please note that the histogram does not follow the Cartesian convention
where `x` values are on the abscissa and `y` values on the ordinate
axis. Rather, `x` is histogrammed along the first dimension of the
array (vertical), and `y` along the second dimension of the array
(horizontal). This ensures compatibility with `histogramdd`.
Examples
--------
>>> from matplotlib.image import NonUniformImage
>>> import matplotlib.pyplot as plt
Construct a 2-D histogram with variable bin width. First define the bin
edges:
>>> xedges = [0, 1, 3, 5]
>>> yedges = [0, 2, 3, 4, 6]
Next we create a histogram H with random bin content:
>>> x = np.random.normal(2, 1, 100)
>>> y = np.random.normal(1, 1, 100)
>>> H, xedges, yedges = np.histogram2d(x, y, bins=(xedges, yedges))
>>> H = H.T # Let each row list bins with common y range.
:func:`imshow <matplotlib.pyplot.imshow>` can only display square bins:
>>> fig = plt.figure(figsize=(7, 3))
>>> ax = fig.add_subplot(131, title='imshow: square bins')
>>> plt.imshow(H, interpolation='nearest', origin='low',
... extent=[xedges[0], xedges[-1], yedges[0], yedges[-1]])
<matplotlib.image.AxesImage object at 0x...>
:func:`pcolormesh <matplotlib.pyplot.pcolormesh>` can display actual edges:
>>> ax = fig.add_subplot(132, title='pcolormesh: actual edges',
... aspect='equal')
>>> X, Y = np.meshgrid(xedges, yedges)
>>> ax.pcolormesh(X, Y, H)
<matplotlib.collections.QuadMesh object at 0x...>
:class:`NonUniformImage <matplotlib.image.NonUniformImage>` can be used to
display actual bin edges with interpolation:
>>> ax = fig.add_subplot(133, title='NonUniformImage: interpolated',
... aspect='equal', xlim=xedges[[0, -1]], ylim=yedges[[0, -1]])
>>> im = NonUniformImage(ax, interpolation='bilinear')
>>> xcenters = (xedges[:-1] + xedges[1:]) / 2
>>> ycenters = (yedges[:-1] + yedges[1:]) / 2
>>> im.set_data(xcenters, ycenters, H)
>>> ax.images.append(im)
>>> plt.show()
"""
from numpy import histogramdd
try:
N = len(bins)
except TypeError:
N = 1
if N != 1 and N != 2:
xedges = yedges = asarray(bins)
bins = [xedges, yedges]
hist, edges = histogramdd([x, y], bins, range, normed, weights, density)
return hist, edges[0], edges[1]
@set_module('numpy')
def mask_indices(n, mask_func, k=0):
"""
Return the indices to access (n, n) arrays, given a masking function.
Assume `mask_func` is a function that, for a square array a of size
``(n, n)`` with a possible offset argument `k`, when called as
``mask_func(a, k)`` returns a new array with zeros in certain locations
(functions like `triu` or `tril` do precisely this). Then this function
returns the indices where the non-zero values would be located.
Parameters
----------
n : int
The returned indices will be valid to access arrays of shape (n, n).
mask_func : callable
A function whose call signature is similar to that of `triu`, `tril`.
That is, ``mask_func(x, k)`` returns a boolean array, shaped like `x`.
`k` is an optional argument to the function.
k : scalar
An optional argument which is passed through to `mask_func`. Functions
like `triu`, `tril` take a second argument that is interpreted as an
offset.
Returns
-------
indices : tuple of arrays.
The `n` arrays of indices corresponding to the locations where
``mask_func(np.ones((n, n)), k)`` is True.
See Also
--------
triu, tril, triu_indices, tril_indices
Notes
-----
.. versionadded:: 1.4.0
Examples
--------
These are the indices that would allow you to access the upper triangular
part of any 3x3 array:
>>> iu = np.mask_indices(3, np.triu)
For example, if `a` is a 3x3 array:
>>> a = np.arange(9).reshape(3, 3)
>>> a
array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
>>> a[iu]
array([0, 1, 2, 4, 5, 8])
An offset can be passed also to the masking function. This gets us the
indices starting on the first diagonal right of the main one:
>>> iu1 = np.mask_indices(3, np.triu, 1)
with which we now extract only three elements:
>>> a[iu1]
array([1, 2, 5])
"""
m = ones((n, n), int)
a = mask_func(m, k)
return nonzero(a != 0)
@set_module('numpy')
def tril_indices(n, k=0, m=None):
"""
Return the indices for the lower-triangle of an (n, m) array.
Parameters
----------
n : int
The row dimension of the arrays for which the returned
indices will be valid.
k : int, optional
Diagonal offset (see `tril` for details).
m : int, optional
.. versionadded:: 1.9.0
The column dimension of the arrays for which the returned
arrays will be valid.
By default `m` is taken equal to `n`.
Returns
-------
inds : tuple of arrays
The indices for the triangle. The returned tuple contains two arrays,
each with the indices along one dimension of the array.
See also
--------
triu_indices : similar function, for upper-triangular.
mask_indices : generic function accepting an arbitrary mask function.
tril, triu
Notes
-----
.. versionadded:: 1.4.0
Examples
--------
Compute two different sets of indices to access 4x4 arrays, one for the
lower triangular part starting at the main diagonal, and one starting two
diagonals further right:
>>> il1 = np.tril_indices(4)
>>> il2 = np.tril_indices(4, 2)
Here is how they can be used with a sample array:
>>> a = np.arange(16).reshape(4, 4)
>>> a
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11],
[12, 13, 14, 15]])
Both for indexing:
>>> a[il1]
array([ 0, 4, 5, ..., 13, 14, 15])
And for assigning values:
>>> a[il1] = -1
>>> a
array([[-1, 1, 2, 3],
[-1, -1, 6, 7],
[-1, -1, -1, 11],
[-1, -1, -1, -1]])
These cover almost the whole array (two diagonals right of the main one):
>>> a[il2] = -10
>>> a
array([[-10, -10, -10, 3],
[-10, -10, -10, -10],
[-10, -10, -10, -10],
[-10, -10, -10, -10]])
"""
return nonzero(tri(n, m, k=k, dtype=bool))
def _trilu_indices_form_dispatcher(arr, k=None):
return (arr,)
@array_function_dispatch(_trilu_indices_form_dispatcher)
def tril_indices_from(arr, k=0):
"""
Return the indices for the lower-triangle of arr.
See `tril_indices` for full details.
Parameters
----------
arr : array_like
The indices will be valid for square arrays whose dimensions are
the same as arr.
k : int, optional
Diagonal offset (see `tril` for details).
See Also
--------
tril_indices, tril
Notes
-----
.. versionadded:: 1.4.0
"""
if arr.ndim != 2:
raise ValueError("input array must be 2-d")
return tril_indices(arr.shape[-2], k=k, m=arr.shape[-1])
@set_module('numpy')
def triu_indices(n, k=0, m=None):
"""
Return the indices for the upper-triangle of an (n, m) array.
Parameters
----------
n : int
The size of the arrays for which the returned indices will
be valid.
k : int, optional
Diagonal offset (see `triu` for details).
m : int, optional
.. versionadded:: 1.9.0
The column dimension of the arrays for which the returned
arrays will be valid.
By default `m` is taken equal to `n`.
Returns
-------
inds : tuple, shape(2) of ndarrays, shape(`n`)
The indices for the triangle. The returned tuple contains two arrays,
each with the indices along one dimension of the array. Can be used
to slice a ndarray of shape(`n`, `n`).
See also
--------
tril_indices : similar function, for lower-triangular.
mask_indices : generic function accepting an arbitrary mask function.
triu, tril
Notes
-----
.. versionadded:: 1.4.0
Examples
--------
Compute two different sets of indices to access 4x4 arrays, one for the
upper triangular part starting at the main diagonal, and one starting two
diagonals further right:
>>> iu1 = np.triu_indices(4)
>>> iu2 = np.triu_indices(4, 2)
Here is how they can be used with a sample array:
>>> a = np.arange(16).reshape(4, 4)
>>> a
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11],
[12, 13, 14, 15]])
Both for indexing:
>>> a[iu1]
array([ 0, 1, 2, ..., 10, 11, 15])
And for assigning values:
>>> a[iu1] = -1
>>> a
array([[-1, -1, -1, -1],
[ 4, -1, -1, -1],
[ 8, 9, -1, -1],
[12, 13, 14, -1]])
These cover only a small part of the whole array (two diagonals right
of the main one):
>>> a[iu2] = -10
>>> a
array([[ -1, -1, -10, -10],
[ 4, -1, -1, -10],
[ 8, 9, -1, -1],
[ 12, 13, 14, -1]])
"""
return nonzero(~tri(n, m, k=k-1, dtype=bool))
@array_function_dispatch(_trilu_indices_form_dispatcher)
def triu_indices_from(arr, k=0):
"""
Return the indices for the upper-triangle of arr.
See `triu_indices` for full details.
Parameters
----------
arr : ndarray, shape(N, N)
The indices will be valid for square arrays.
k : int, optional
Diagonal offset (see `triu` for details).
Returns
-------
triu_indices_from : tuple, shape(2) of ndarray, shape(N)
Indices for the upper-triangle of `arr`.
See Also
--------
triu_indices, triu
Notes
-----
.. versionadded:: 1.4.0
"""
if arr.ndim != 2:
raise ValueError("input array must be 2-d")
return triu_indices(arr.shape[-2], k=k, m=arr.shape[-1])
|
bsd-3-clause
|
bkloppenborg/simtoi
|
src/scripts/plot_histogram.py
|
3
|
7152
|
#!/usr/bin/python
from numpy import loadtxt
from optparse import OptionParser
import matplotlib.pyplot as plt
import re
from scipy.stats import norm, cauchy
import matplotlib.mlab as mlab
import os
def plot_histogram(filename,
column_names=[], skip_cols=[], nbins=10, trimends=False,
autosave=False, save_directory='', save_format='svg', delimiter=None):
"""
Plots a histogram formed from the columns of the specified file.
If column_names is specified, the titles of the plots will be renamed
accordingly. Otherwise "Title" is inserted instead.
skip_cols specifies any columns in the data that should be skipped.
Columns at the end of the line may be skipped by using negative numbers.
In this scheme the last column in a row is -1.
"""
infile = open(filename, 'r')
    if(delimiter):
        data = loadtxt(infile, dtype=float, delimiter=delimiter)
    else:
        data = loadtxt(infile, dtype=float)
infile.close()
end_col = data.shape[1]
norm_stats = list()
cauchy_stats = list()
# Reinterpret any negative numbers in skip_cols to be at the end of the line
for column in range(0, len(skip_cols)):
if skip_cols[column] < 0:
skip_cols[column] = end_col + skip_cols[column]
namecol = 0
for column in range(0, end_col):
# Skip the column if instructed to do so:
if(column in skip_cols):
            continue
# extract the data column:
temp = data[:,column]
if(trimends):
minval = min(temp)
maxval = max(temp)
temp = filter(lambda x: x > minval, temp)
temp = filter(lambda x: x < maxval, temp)
# plot a histogram of the data:
[n, bins, patches] = plt.hist(temp, bins=nbins, normed=True, label='Binned data')
# fit a normal distribution:
[norm_mu, norm_sigma] = norm.fit(temp)
y = mlab.normpdf(bins, norm_mu, norm_sigma)
legend_gauss = r'Normal: $\mu=%.3f,\ \sigma=%.3f$' % (norm_mu, norm_sigma)
l = plt.plot(bins, y, 'r--', linewidth=2, label=legend_gauss)
# fit a Lorentz/Cauchy distribution:
# bug workaround for http://projects.scipy.org/scipy/ticket/1530
# - specify a starting centroid value for the fit
[cauchy_mu, cauchy_gamma] = cauchy.fit(temp, loc=norm_mu)
y = cauchy.pdf(bins, loc=cauchy_mu, scale=cauchy_gamma)
legend_cauchy = r'Cauchy: $\mu=%.3f,\ \gamma=%.3f$' % (cauchy_mu, cauchy_gamma)
l = plt.plot(bins, y, 'g--', linewidth=2, label=legend_cauchy)
# now setup the axes labels:
try:
title = column_names[namecol]
namecol += 1
except:
title = "Title"
plt.title(title)
plt.xlabel("Value")
plt.ylabel("Frequency")
plt.legend(loc='best')
if autosave:
plt.savefig(save_directory + '/stats_hist_' + title + '.' + save_format, transparent=True, format=save_format)
plt.close()
else:
plt.show()
# Add in the statistical information.
norm_stats.append([title, norm_mu, norm_sigma])
cauchy_stats.append([title, cauchy_mu, cauchy_gamma])
# Now either print out or save the statistical information
if(not autosave):
print "Normal Statistics:"
write_statistics(save_directory + '/stats_normal.txt', norm_stats, autosave)
if(not autosave):
print "Cauchy Statistics:"
write_statistics(save_directory + '/stats_cauchy.txt', cauchy_stats, autosave)
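# A hedged usage sketch (editor's addition, not in the original script). The
# file name and column names below are assumptions; any whitespace-delimited
# numeric file with one sample per row would do:
# plot_histogram("chain.txt", column_names=["alpha", "beta"], nbins=25,
#                autosave=True, save_directory=".", save_format="png")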
def col_names(filename):
"""
Reads in the column names from the `best_fit.txt` files.
Column names will always occur on the first non-comment line in the file.
"""
columns = list()
try:
infile = open(filename, 'r')
for line in infile:
if re.match('#', line):
continue
line = line.strip()
columns = line.split(',')
break
except:
print "Could not find best_fit.txt file. No graph titles will be created."
return columns
def main():
filename = ""
usage = "Usage: %prog [options] filename"
parser = OptionParser(usage=usage)
parser.add_option("--nbins", dest="nbins", action="store", type="int", default=10,
help="Number of binning columns. [default: 10]")
parser.add_option("--autosave", dest="autosave", action="store_true", default=False,
help="Automatically save the plots. [default: False]")
parser.add_option("--savefmt", dest="savefmt", action="store", type="string", default="svg",
help="Automatic save file format. [default: %default]")
parser.add_option("--trimends", dest="trimends", action="store_true", default=False,
help="Remove the minimum and maximum bins from the histogram [default: %default]")
(options, args) = parser.parse_args()
# now read the filenames
filename = args[0]
# Set parameters specifying columns that should not be plotted
# and attempt to find the namefile:
skip_cols=[]
directory = os.path.dirname(os.path.realpath(filename))
delimiter = None
if re.search('bootstrap_levmar', filename):
skip_cols = [-1]
delimiter = ','
print "Found bootstrap file."
elif re.search('multinest.txt', filename):
skip_cols = [0,1]
print "Found MultiNest file."
else:
print "Unknown file format found, I'll do the best I can."
column_names = []
if len(directory) > 1:
column_names = col_names(directory + '/best_fit.txt')
plot_histogram(filename, column_names=column_names, skip_cols=skip_cols, nbins=options.nbins, autosave=options.autosave, save_directory=directory, save_format=options.savefmt, trimends=options.trimends, delimiter=delimiter)
def write_statistics(filename, statistics, save_to_file):
"""
Writes the statistical information as rows formatted as:
title, mu, sigma
to the specified file.
statistics should be a list of triplets:
[[title, mu, sigma], ..., [title, mu, sigma] ]
Data will be written as follows:
# col1, sig_col1, ..., colN, sig_colN
val1, sig_val1, ..., valN, sig_valN
"""
# first do the title line
titles = list()
values = list()
for [title, value, sigma] in statistics:
titles.append(title)
titles.append("sig_" + title)
values.append(value)
values.append(sigma)
titles = map(str, titles)
values = map(str, values)
title_line = "# " + ', '.join(titles)
value_line = ', '.join(values)
if(save_to_file):
outfile = open(filename, 'w')
outfile.write(title_line + "\n")
outfile.write(value_line)
outfile.close()
else:
print title_line
print value_line
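# A hedged sketch of the console output when save_to_file is False (editor's
# addition; the "alpha" entry is a made-up example):
# write_statistics("unused.txt", [["alpha", 1.0, 0.1]], False)
# prints:
# # alpha, sig_alpha
# 1.0, 0.1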
# Run the main function if this is a top-level script:
if __name__ == "__main__":
main()
|
gpl-3.0
|
loli/sklearn-ensembletrees
|
sklearn/linear_model/tests/test_sparse_coordinate_descent.py
|
1
|
10006
|
import numpy as np
import scipy.sparse as sp
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils.testing import assert_almost_equal
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_less
from sklearn.utils.testing import assert_true
from sklearn.utils.testing import assert_greater
from sklearn.utils.testing import ignore_warnings
from sklearn.linear_model.coordinate_descent import (Lasso, ElasticNet,
LassoCV, ElasticNetCV)
def test_sparse_coef():
""" Check that the sparse_coef propery works """
clf = ElasticNet()
clf.coef_ = [1, 2, 3]
assert_true(sp.isspmatrix(clf.sparse_coef_))
assert_equal(clf.sparse_coef_.todense().tolist()[0], clf.coef_)
def test_normalize_option():
""" Check that the normalize option in enet works """
X = sp.csc_matrix([[-1], [0], [1]])
y = [-1, 0, 1]
clf_dense = ElasticNet(fit_intercept=True, normalize=True)
clf_sparse = ElasticNet(fit_intercept=True, normalize=True)
clf_dense.fit(X, y)
X = sp.csc_matrix(X)
clf_sparse.fit(X, y)
assert_almost_equal(clf_dense.dual_gap_, 0)
assert_array_almost_equal(clf_dense.coef_, clf_sparse.coef_)
def test_lasso_zero():
"""Check that the sparse lasso can handle zero data without crashing"""
X = sp.csc_matrix((3, 1))
y = [0, 0, 0]
T = np.array([[1], [2], [3]])
clf = Lasso().fit(X, y)
pred = clf.predict(T)
assert_array_almost_equal(clf.coef_, [0])
assert_array_almost_equal(pred, [0, 0, 0])
assert_almost_equal(clf.dual_gap_, 0)
def test_enet_toy_list_input():
"""Test ElasticNet for various values of alpha and l1_ratio with list X"""
X = np.array([[-1], [0], [1]])
X = sp.csc_matrix(X)
Y = [-1, 0, 1] # just a straight line
T = np.array([[2], [3], [4]]) # test sample
# this should be the same as unregularized least squares
clf = ElasticNet(alpha=0, l1_ratio=1.0)
# catch warning about alpha=0.
# this is discouraged but should work.
ignore_warnings(clf.fit)(X, Y)
pred = clf.predict(T)
assert_array_almost_equal(clf.coef_, [1])
assert_array_almost_equal(pred, [2, 3, 4])
assert_almost_equal(clf.dual_gap_, 0)
clf = ElasticNet(alpha=0.5, l1_ratio=0.3, max_iter=1000)
clf.fit(X, Y)
pred = clf.predict(T)
assert_array_almost_equal(clf.coef_, [0.50819], decimal=3)
assert_array_almost_equal(pred, [1.0163, 1.5245, 2.0327], decimal=3)
assert_almost_equal(clf.dual_gap_, 0)
clf = ElasticNet(alpha=0.5, l1_ratio=0.5)
clf.fit(X, Y)
pred = clf.predict(T)
assert_array_almost_equal(clf.coef_, [0.45454], 3)
assert_array_almost_equal(pred, [0.9090, 1.3636, 1.8181], 3)
assert_almost_equal(clf.dual_gap_, 0)
def test_enet_toy_explicit_sparse_input():
"""Test ElasticNet for various values of alpha and l1_ratio with sparse
X"""
f = ignore_warnings
# training samples
X = sp.lil_matrix((3, 1))
X[0, 0] = -1
# X[1, 0] = 0
X[2, 0] = 1
Y = [-1, 0, 1] # just a straight line (the identity function)
# test samples
T = sp.lil_matrix((3, 1))
T[0, 0] = 2
T[1, 0] = 3
T[2, 0] = 4
# this should be the same as lasso
clf = ElasticNet(alpha=0, l1_ratio=1.0)
f(clf.fit)(X, Y)
pred = clf.predict(T)
assert_array_almost_equal(clf.coef_, [1])
assert_array_almost_equal(pred, [2, 3, 4])
assert_almost_equal(clf.dual_gap_, 0)
clf = ElasticNet(alpha=0.5, l1_ratio=0.3, max_iter=1000)
clf.fit(X, Y)
pred = clf.predict(T)
assert_array_almost_equal(clf.coef_, [0.50819], decimal=3)
assert_array_almost_equal(pred, [1.0163, 1.5245, 2.0327], decimal=3)
assert_almost_equal(clf.dual_gap_, 0)
clf = ElasticNet(alpha=0.5, l1_ratio=0.5)
clf.fit(X, Y)
pred = clf.predict(T)
assert_array_almost_equal(clf.coef_, [0.45454], 3)
assert_array_almost_equal(pred, [0.9090, 1.3636, 1.8181], 3)
assert_almost_equal(clf.dual_gap_, 0)
def make_sparse_data(n_samples=100, n_features=100, n_informative=10, seed=42,
positive=False, n_targets=1):
random_state = np.random.RandomState(seed)
# build an ill-posed linear regression problem with many noisy features and
# comparatively few samples
# generate a ground truth model
w = random_state.randn(n_features, n_targets)
w[n_informative:] = 0.0 # only the top features are impacting the model
if positive:
w = np.abs(w)
X = random_state.randn(n_samples, n_features)
rnd = random_state.uniform(size=(n_samples, n_features))
X[rnd > 0.5] = 0.0 # 50% of zeros in input signal
# generate training ground truth labels
y = np.dot(X, w)
X = sp.csc_matrix(X)
if n_targets == 1:
y = np.ravel(y)
return X, y
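# A hedged sketch (editor's addition, not part of the test suite): shapes
# returned by the helper above for a small problem.
# >>> X, y = make_sparse_data(n_samples=30, n_features=8, n_informative=3)
# >>> X.shape, y.shape
# ((30, 8), (30,))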
def _test_sparse_enet_not_as_toy_dataset(alpha, fit_intercept, positive):
n_samples, n_features, max_iter = 100, 100, 1000
n_informative = 10
X, y = make_sparse_data(n_samples, n_features, n_informative,
positive=positive)
X_train, X_test = X[n_samples / 2:], X[:n_samples / 2]
y_train, y_test = y[n_samples / 2:], y[:n_samples / 2]
s_clf = ElasticNet(alpha=alpha, l1_ratio=0.8, fit_intercept=fit_intercept,
max_iter=max_iter, tol=1e-7, positive=positive,
warm_start=True)
s_clf.fit(X_train, y_train)
assert_almost_equal(s_clf.dual_gap_, 0, 4)
assert_greater(s_clf.score(X_test, y_test), 0.85)
# check the convergence is the same as the dense version
d_clf = ElasticNet(alpha=alpha, l1_ratio=0.8, fit_intercept=fit_intercept,
max_iter=max_iter, tol=1e-7, positive=positive,
warm_start=True)
d_clf.fit(X_train.todense(), y_train)
assert_almost_equal(d_clf.dual_gap_, 0, 4)
assert_greater(d_clf.score(X_test, y_test), 0.85)
assert_almost_equal(s_clf.coef_, d_clf.coef_, 5)
assert_almost_equal(s_clf.intercept_, d_clf.intercept_, 5)
# check that the coefs are sparse
assert_less(np.sum(s_clf.coef_ != 0.0), 2 * n_informative)
def test_sparse_enet_not_as_toy_dataset():
_test_sparse_enet_not_as_toy_dataset(alpha=0.1, fit_intercept=False,
positive=False)
_test_sparse_enet_not_as_toy_dataset(alpha=0.1, fit_intercept=True,
positive=False)
_test_sparse_enet_not_as_toy_dataset(alpha=1e-3, fit_intercept=False,
positive=True)
_test_sparse_enet_not_as_toy_dataset(alpha=1e-3, fit_intercept=True,
positive=True)
def test_sparse_lasso_not_as_toy_dataset():
n_samples = 100
max_iter = 1000
n_informative = 10
X, y = make_sparse_data(n_samples=n_samples, n_informative=n_informative)
X_train, X_test = X[n_samples / 2:], X[:n_samples / 2]
y_train, y_test = y[n_samples / 2:], y[:n_samples / 2]
s_clf = Lasso(alpha=0.1, fit_intercept=False, max_iter=max_iter, tol=1e-7)
s_clf.fit(X_train, y_train)
assert_almost_equal(s_clf.dual_gap_, 0, 4)
assert_greater(s_clf.score(X_test, y_test), 0.85)
# check the convergence is the same as the dense version
d_clf = Lasso(alpha=0.1, fit_intercept=False, max_iter=max_iter, tol=1e-7)
d_clf.fit(X_train.todense(), y_train)
assert_almost_equal(d_clf.dual_gap_, 0, 4)
assert_greater(d_clf.score(X_test, y_test), 0.85)
# check that the coefs are sparse
assert_equal(np.sum(s_clf.coef_ != 0.0), n_informative)
def test_enet_multitarget():
n_targets = 3
X, y = make_sparse_data(n_targets=n_targets)
estimator = ElasticNet(alpha=0.01, fit_intercept=True, precompute=None)
# XXX: There is a bug when precompute is not None!
estimator.fit(X, y)
coef, intercept, dual_gap = (estimator.coef_,
estimator.intercept_,
estimator.dual_gap_)
for k in range(n_targets):
estimator.fit(X, y[:, k])
assert_array_almost_equal(coef[k, :], estimator.coef_)
assert_array_almost_equal(intercept[k], estimator.intercept_)
assert_array_almost_equal(dual_gap[k], estimator.dual_gap_)
def test_path_parameters():
X, y = make_sparse_data()
max_iter = 50
n_alphas = 10
clf = ElasticNetCV(n_alphas=n_alphas, eps=1e-3, max_iter=max_iter,
l1_ratio=0.5, fit_intercept=False)
ignore_warnings(clf.fit)(X, y) # new params
assert_almost_equal(0.5, clf.l1_ratio)
assert_equal(n_alphas, clf.n_alphas)
assert_equal(n_alphas, len(clf.alphas_))
sparse_mse_path = clf.mse_path_
ignore_warnings(clf.fit)(X.toarray(), y) # compare with dense data
assert_almost_equal(clf.mse_path_, sparse_mse_path)
def test_same_output_sparse_dense_lasso_and_enet_cv():
X, y = make_sparse_data(n_samples=40, n_features=10)
for normalize in [True, False]:
clfs = ElasticNetCV(max_iter=100, cv=5, normalize=normalize)
ignore_warnings(clfs.fit)(X, y)
clfd = ElasticNetCV(max_iter=100, cv=5, normalize=normalize)
ignore_warnings(clfd.fit)(X.todense(), y)
assert_almost_equal(clfs.alpha_, clfd.alpha_, 7)
assert_almost_equal(clfs.intercept_, clfd.intercept_, 7)
assert_array_almost_equal(clfs.mse_path_, clfd.mse_path_)
assert_array_almost_equal(clfs.alphas_, clfd.alphas_)
clfs = LassoCV(max_iter=100, cv=4, normalize=normalize)
ignore_warnings(clfs.fit)(X, y)
clfd = LassoCV(max_iter=100, cv=4, normalize=normalize)
ignore_warnings(clfd.fit)(X.todense(), y)
assert_almost_equal(clfs.alpha_, clfd.alpha_, 7)
assert_almost_equal(clfs.intercept_, clfd.intercept_, 7)
assert_array_almost_equal(clfs.mse_path_, clfd.mse_path_)
assert_array_almost_equal(clfs.alphas_, clfd.alphas_)
|
bsd-3-clause
|
sawsimeon/PCM
|
Descriptors_Selection.py
|
3
|
5025
|
# -*- coding: utf-8 -*-
"""
Created on Thu Jul 17 09:19:41 2014
@author: Fujitsu
"""
def VIP(X, Y, H, NumDes):
from sklearn.cross_decomposition import PLSRegression
import numpy as np
from sklearn.cross_validation import KFold
import PCM_workflow as PW
print '############## VIP is being processed ###############'
M = list(X.viewkeys())
H_VIP, X_VIP, Y_VIP, HArray = {},{},{},{}
NumDesVIP = np.zeros((13,6), dtype=int)
for kk in M:
Xtrain, Ytrain = X[kk], Y
kf = KFold(len(Ytrain), 10, indices=True, shuffle=True, random_state=1)
HH = H[kk]
nrow, ncol = np.shape(Xtrain)
ArrayYpredCV, Q2, RMSE_CV, OptimalPC = PW.CV_Processing(Xtrain,Ytrain,kf)
plsmodel = PLSRegression(n_components=OptimalPC)
plsmodel.fit(Xtrain,Ytrain)
x_scores = plsmodel.x_scores_
x_weighted = plsmodel.x_weights_
m, p = nrow, ncol
m, h = np.shape(x_scores)
p, h = np.shape(x_weighted)
X_S, X_W = x_scores, x_weighted
co=[]
for i in range(h):
corr = np.corrcoef(np.squeeze(Ytrain), X_S[:,i])
co.append(corr[0][1]**2)
s = sum(co)
vip=[]
for j in range(p):
d=[]
for k in range(h):
d.append(co[k]*X_W[j,k]**2)
q=sum(d)
vip.append(np.sqrt(p*q/s))
idx_keep = [idx for idx, val in enumerate(vip) if vip[idx] >= 1]
idxDes = NumDes[int(kk[6:])-1,:]
L,P,LxP,LxL,PxP = [],[],[],[],[]
for idx in idx_keep:
if idx >= 0 and idx < np.sum(idxDes[0:1]):
L.append(idx)
elif idx >= np.sum(idxDes[0:1]) and idx < np.sum(idxDes[0:2]):
P.append(idx)
elif idx >= np.sum(idxDes[0:2]) and idx < np.sum(idxDes[0:3]):
LxP.append(idx)
elif idx >= np.sum(idxDes[0:3]) and idx < np.sum(idxDes[0:4]):
LxL.append(idx)
elif idx >= np.sum(idxDes[0:4]) and idx < np.sum(idxDes):
PxP.append(idx)
NVIP= np.array([len(L),len(P),len(LxP),len(LxL),len(PxP),len(idx_keep)])
NumDesVIP[int(kk[6:])-1,:] = NumDesVIP[int(kk[6:])-1,:]+NVIP
hvip = np.array(HH)[idx_keep]
vvip = np.array(vip)[idx_keep]
H_VIP[kk] = hvip
X_VIP[kk] = Xtrain[:,idx_keep]
Y_VIP = Ytrain
hvip = np.reshape(hvip,(len(hvip),1))
vvip = np.reshape(vvip, (len(vvip),1))
HArray[kk] = np.append(hvip, vvip, axis=1)
return X_VIP, Y_VIP, H_VIP, HArray, NumDesVIP
def VarinceThreshold(X):
import numpy as np
STDEV = np.std(X, axis=0)
return [idx for idx, val in enumerate(STDEV) if val > 0.1]
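# A hedged usage sketch (editor's addition): VarinceThreshold keeps the indices
# of columns whose standard deviation exceeds 0.1; the data below is made up.
# >>> import numpy as np
# >>> X = np.random.randn(50, 5); X[:, 2] = 0.0
# >>> VarinceThreshold(X)  # column 2 (zero variance) is dropped
# [0, 1, 3, 4]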
def Correlation(X, Y):
from scipy.stats import pearsonr
nrow, ncol = X.shape
Corr_XY = []
for i in range(ncol):
Corr_XY.append(pearsonr(X[:,i],Y)[1])
A = [j[0] for j in sorted(enumerate(Corr_XY), key=lambda x:x[1])]
AA = []
while A != []:
i_keep = []
for k in range(len(A)):
if k == 0:
i_keep.append(1)
else:
p = pearsonr(X[:,A[0]], X[:,A[k]])[1]
if p <= 0.05: # highly correlated
i_keep.append(0)
else:
i_keep.append(1)
A = [A[ind] for ind, val in enumerate(i_keep) if val == 1]
AA.append(A.pop(0))
return AA
def VIP_origin(X, Y, H):
from sklearn.cross_decomposition import PLSRegression
import numpy as np
from sklearn.cross_validation import KFold
import PCM_workflow as PW
print '############## VIP is being processed ###############'
Y = Y.astype(np.float)
Xtrain, Ytrain = X, Y
kf = KFold(len(Ytrain), 10, indices=True, shuffle=True, random_state=1)
nrow, ncol = np.shape(Xtrain)
ArrayYpredCV, Q2, RMSE_CV, OptimalPC = PW.CV_Processing(Xtrain,Ytrain,kf)
plsmodel = PLSRegression(n_components=OptimalPC)
plsmodel.fit(Xtrain,Ytrain)
x_scores = plsmodel.x_scores_
x_weighted = plsmodel.x_weights_
m, p = nrow, ncol
m, h = np.shape(x_scores)
p, h = np.shape(x_weighted)
X_S, X_W = x_scores, x_weighted
co=[]
for i in range(h):
corr = np.corrcoef(np.squeeze(Ytrain), X_S[:,i])
co.append(corr[0][1]**2)
s = sum(co)
vip=[]
for j in range(p):
d=[]
for k in range(h):
d.append(co[k]*X_W[j,k]**2)
q=sum(d)
vip.append(np.sqrt(p*q/s))
idx_keep = [idx for idx, val in enumerate(vip) if vip[idx] >= 1]
H_VIP = np.squeeze(np.array(H))[idx_keep]
X_VIP = Xtrain[:,idx_keep]
Y_VIP = Ytrain
return X_VIP, Y_VIP, H_VIP
|
gpl-2.0
|
DGrady/pandas
|
pandas/tests/indexing/test_timedelta.py
|
7
|
1913
|
import pytest
import pandas as pd
from pandas.util import testing as tm
class TestTimedeltaIndexing(object):
def test_boolean_indexing(self):
# GH 14946
df = pd.DataFrame({'x': range(10)})
df.index = pd.to_timedelta(range(10), unit='s')
conditions = [df['x'] > 3, df['x'] == 3, df['x'] < 3]
expected_data = [[0, 1, 2, 3, 10, 10, 10, 10, 10, 10],
[0, 1, 2, 10, 4, 5, 6, 7, 8, 9],
[10, 10, 10, 3, 4, 5, 6, 7, 8, 9]]
for cond, data in zip(conditions, expected_data):
result = df.assign(x=df.mask(cond, 10).astype('int64'))
expected = pd.DataFrame(data,
index=pd.to_timedelta(range(10), unit='s'),
columns=['x'],
dtype='int64')
tm.assert_frame_equal(expected, result)
@pytest.mark.parametrize(
"indexer, expected",
[(0, [20, 1, 2, 3, 4, 5, 6, 7, 8, 9]),
(slice(4, 8), [0, 1, 2, 3, 20, 20, 20, 20, 8, 9]),
([3, 5], [0, 1, 2, 20, 4, 20, 6, 7, 8, 9])])
def test_list_like_indexing(self, indexer, expected):
# GH 16637
df = pd.DataFrame({'x': range(10)}, dtype="int64")
df.index = pd.to_timedelta(range(10), unit='s')
df.loc[df.index[indexer], 'x'] = 20
expected = pd.DataFrame(expected,
index=pd.to_timedelta(range(10), unit='s'),
columns=['x'],
dtype="int64")
tm.assert_frame_equal(expected, df)
def test_string_indexing(self):
# GH 16896
df = pd.DataFrame({'x': range(3)},
index=pd.to_timedelta(range(3), unit='days'))
expected = df.iloc[0]
sliced = df.loc['0 days']
tm.assert_series_equal(sliced, expected)
|
bsd-3-clause
|
lyoshiwo/resume_job_matching
|
Step8_basal_classifier_save.py
|
1
|
17377
|
# encoding=utf8
from sklearn import cross_validation
import pandas as pd
import os
import time
from keras.preprocessing import sequence
import util
print time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
tree_test = False
xgb_test = False
cnn_test = False
rf_test = False
lstm_test = False
# max_depth=12 0.433092948718
# rn>200,rh=21 0.537 to 0.541; rh=21,rn=400; 0.542
CV_FLAG = 1
param = {}
param['objective'] = 'multi:softmax'
param['eta'] = 0.03
param['max_depth'] = 6
param['eval_metric'] = 'merror'
param['silent'] = 1
param['min_child_weight'] = 10
param['subsample'] = 0.7
param['colsample_bytree'] = 0.2
param['nthread'] = 4
param['num_class'] = -1
keys = {'salary': [353, 2, 7], 'size': [223, 3, 6], 'degree': [450, 4, 8], 'position': [390, 7, 16]}
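# Editor's note (assumption inferred from the usage below): each value appears
# to be [xgboost num_boost_round, CNN nb_epoch, LSTM nb_epoch] for that target,
# matching how keys[name][0], keys[name][1] and keys[name][2] are used in
# get_all_by_name.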
# keys = {'salary': [1,1,1], 'size': [1,1,1], 'degree': [1,1,1], 'position': [1,1,1]}
def get_all_by_name(name):
import numpy as np
if os.path.exists(util.features_prefix + name + "_XY.pkl") is False:
print name + ' file does not exist'
exit()
if os.path.exists(util.features_prefix + name + '_XXXYYY.pkl') is False:
[train_X, train_Y] = pd.read_pickle(util.features_prefix + name + '_XY.pkl')
X_train, X_test, y_train, y_test = cross_validation.train_test_split(train_X, train_Y, test_size=0.33,
random_state=0)
X_train, X_validate, y_train, y_validate = cross_validation.train_test_split(X_train, y_train, test_size=0.33,
random_state=0)
X_train = np.array(X_train)
y_train = np.array(y_train)
y_test = np.array(y_test)
X_test = np.array(X_test)
X_validate = np.array(X_validate)
y_validate = np.array(y_validate)
pd.to_pickle([X_train, X_validate, X_test, y_train, y_validate, y_test],
util.features_prefix + name + '_XXXYYY.pkl')
if os.path.exists(util.features_prefix + name + '_XXXYYY.pkl'):
from sklearn.ensemble import RandomForestClassifier
if rf_test is False:
[train_X, train_Y] = pd.read_pickle(util.features_prefix + name + '_XY.pkl')
[X_train, X_validate, X_test, y_train, y_validate, y_test] = pd.read_pickle(
util.features_prefix + name + '_XXXYYY.pkl')
x = np.concatenate([X_train, X_validate], axis=0)
y = np.concatenate([y_train, y_validate], axis=0)
print 'rf'
print time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
# for max in range(12, 23, 5):
clf = RandomForestClassifier(n_jobs=4, n_estimators=400, max_depth=22)
clf.fit(x, y)
print time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
pd.to_pickle(clf, util.models_prefix + name + '_rf.pkl')
y_p = clf.predict(X_test)
print name + ' score:' + util.score_lists(y_test, y_p)
if xgb_test is False:
[train_X, train_Y] = pd.read_pickle(util.features_prefix + name + '_XY.pkl')
[X_train, X_validate, X_test, y_train, y_validate, y_test] = pd.read_pickle(
util.features_prefix + name + '_XXXYYY.pkl')
x = np.concatenate([X_train, X_validate], axis=0)
y = np.concatenate([y_train, y_validate], axis=0)
print 'xg'
import xgboost as xgb
print time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
set_y = set(train_Y)
param["num_class"] = len(set_y)
x = np.concatenate([X_train, X_validate], axis=0)
y = np.concatenate([y_train, y_validate], axis=0)
dtrain = xgb.DMatrix(x, label=y)
param['objective'] = 'multi:softmax'
xgb_2 = xgb.train(param, dtrain, keys[name][0])
print time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
xgb_2.save_model(util.models_prefix + name + '_xgb.pkl')
dtest = xgb.DMatrix(X_test)
y_p = xgb_2.predict(dtest)
print name + ' score:' + util.score_lists(y_test, y_p)
param['objective'] = 'multi:softprob'
dtrain = xgb.DMatrix(x, label=y)
xgb_1 = xgb.train(param, dtrain, keys[name][0])
xgb_1.save_model(util.models_prefix + name + '_xgb_prob.pkl')
if cnn_test is False:
[train_X, train_Y] = pd.read_pickle(util.features_prefix + name + '_XY.pkl')
[X_train, X_validate, X_test, y_train, y_validate, y_test] = pd.read_pickle(
util.features_prefix + name + '_XXXYYY.pkl')
print 'cnn'
import copy
import numpy as np
from sklearn.preprocessing import LabelEncoder
from keras.utils import np_utils
from keras.layers.convolutional import Convolution1D
print time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
label_dict = LabelEncoder().fit(train_Y)
label_num = len(label_dict.classes_)
x = np.concatenate([X_train, X_validate], axis=0)
y = np.concatenate([y_train, y_validate], axis=0)
train_Y = np_utils.to_categorical(y, label_num)
# x = np.concatenate([X_train, X_validate], axis=0)
X_train = x
X_semantic = np.array(copy.deepcopy(X_train[:, range(95, 475)]))
X_manual = np.array(copy.deepcopy(X_train[:, range(0, 95)]))
X_cluster = np.array(copy.deepcopy(X_train[:, range(475, 545)]))
X_document = np.array(copy.deepcopy(X_train[:, range(545, 547)]))
X_document[:, [0]] = X_document[:, [0]] + train_X[:, [-1]].max()
dic_num_cluster = X_cluster.max()
dic_num_manual = train_X.max()
dic_num_document = X_document[:, [0]].max()
from keras.models import Sequential
from keras.layers.embeddings import Embedding
from keras.layers.core import Merge
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.recurrent import LSTM
X_semantic = X_semantic.reshape(X_semantic.shape[0], 10, -1)
X_semantic_1 = np.zeros((X_semantic.shape[0], X_semantic.shape[2], X_semantic.shape[1]))
for i in range(int(X_semantic.shape[0])):
X_semantic_1[i] = np.transpose(X_semantic[i])
model_semantic = Sequential()
model_lstm = Sequential()
model_lstm.add(LSTM(output_dim=30, input_shape=X_semantic_1.shape[1:], go_backwards=True))
model_semantic.add(Convolution1D(nb_filter=32,
filter_length=2,
border_mode='valid',
activation='relu', input_shape=X_semantic_1.shape[1:]))
# model_semantic.add(MaxPooling1D(pool_length=2))
model_semantic.add(Convolution1D(nb_filter=8,
filter_length=2,
border_mode='valid',
activation='relu'))
# model_semantic.add(MaxPooling1D(pool_length=2))
model_semantic.add(Flatten())
# we use standard max pooling (halving the output of the previous layer):
model_manual = Sequential()
model_manual.add(Embedding(input_dim=dic_num_manual + 1, output_dim=20, input_length=X_manual.shape[1]))
# model_manual.add(Convolution1D(nb_filter=2,
# filter_length=2,
# border_mode='valid',
# activation='relu'))
# model_manual.add(MaxPooling1D(pool_length=2))
# model_manual.add(Convolution1D(nb_filter=8,
# filter_length=2,
# border_mode='valid',
# activation='relu'))
# model_manual.add(MaxPooling1D(pool_length=2))
model_manual.add(Flatten())
model_document = Sequential()
model_document.add(
Embedding(input_dim=dic_num_document + 1, output_dim=2, input_length=X_document.shape[1]))
model_document.add(Flatten())
model_cluster = Sequential()
model_cluster.add(Embedding(input_dim=dic_num_cluster + 1, output_dim=5, input_length=X_cluster.shape[1]))
model_cluster.add(Flatten())
model = Sequential()
# model = model_cluster
model.add(Merge([model_document, model_cluster, model_manual, model_semantic], mode='concat',
concat_axis=1))
model.add(Dense(512))
model.add(Dropout(0.5))
model.add(Activation('relu'))
model.add(Dense(128))
model.add(Dropout(0.5))
model.add(Activation('relu'))
# We project onto a single unit output layer, and squash it with a sigmoid:
model.add(Dense(label_num))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adadelta')
# model.fit(X_cluster_1, train_Y, batch_size=100,
# nb_epoch=100, validation_split=0.33, verbose=1)
model.fit([X_document, X_cluster, X_manual, X_semantic_1], train_Y,
batch_size=100, nb_epoch=keys[name][1])
print time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
json_string = model.to_json()
pd.to_pickle(json_string, util.models_prefix + name + '_json_string_cnn.pkl')
model.save_weights(util.models_prefix + name + '_nn_weight_cnn.h5')
X_semantic = np.array(copy.deepcopy(X_test[:, range(95, 475)]))
X_manual = np.array(copy.deepcopy(X_test[:, range(0, 95)]))
X_cluster = np.array(copy.deepcopy(X_test[:, range(475, 545)]))
X_document = np.array(copy.deepcopy(X_test[:, range(545, 547)]))
X_document[:, [0]] = X_document[:, [0]] + train_X[:, [-1]].max()
X_semantic = X_semantic.reshape(X_semantic.shape[0], 10, -1)
X_semantic_1 = np.zeros((X_semantic.shape[0], X_semantic.shape[2], X_semantic.shape[1]))
for i in range(int(X_semantic.shape[0])):
X_semantic_1[i] = np.transpose(X_semantic[i])
cnn_list = model.predict_classes([X_document, X_cluster, X_manual, X_semantic_1])
print name + ' score:' + util.score_lists(y_test, cnn_list)
if lstm_test is False:
import numpy as np
[train_X, train_Y] = pd.read_pickle(util.features_prefix + name + '_XY.pkl')
[X_train, X_validate, X_test, y_train, y_validate, y_test] = pd.read_pickle(
util.features_prefix + name + '_XXXYYY.pkl')
x = np.concatenate([X_train, X_validate], axis=0)
y = np.concatenate([y_train, y_validate], axis=0)
print 'lstm'
import copy
import numpy as np
from sklearn.preprocessing import LabelEncoder
from keras.utils import np_utils
from keras.layers.convolutional import Convolution1D
label_dict = LabelEncoder().fit(train_Y)
label_num = len(label_dict.classes_)
x = np.concatenate([X_train, X_validate], axis=0)
y = np.concatenate([y_train, y_validate], axis=0)
train_Y = np_utils.to_categorical(y, label_num)
# x = np.concatenate([X_train, X_validate], axis=0)
X_train = x
X_semantic = np.array(copy.deepcopy(X_train[:, range(95, 475)]))
X_manual = np.array(copy.deepcopy(X_train[:, range(0, 95)]))
X_cluster = np.array(copy.deepcopy(X_train[:, range(475, 545)]))
X_document = np.array(copy.deepcopy(X_train[:, range(545, 547)]))
X_document[:, [0]] = X_document[:, [0]] + train_X[:, [-1]].max()
dic_num_cluster = X_cluster.max()
dic_num_manual = train_X.max()
dic_num_document = X_document[:, [0]].max()
from keras.models import Sequential
from keras.layers.embeddings import Embedding
from keras.layers.core import Merge
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.recurrent import LSTM
X_semantic = X_semantic.reshape(X_semantic.shape[0], 10, -1)
X_semantic_1 = np.zeros((X_semantic.shape[0], X_semantic.shape[2], X_semantic.shape[1]))
for i in range(int(X_semantic.shape[0])):
X_semantic_1[i] = np.transpose(X_semantic[i])
model_semantic = Sequential()
model_lstm = Sequential()
model_lstm.add(LSTM(output_dim=30, input_shape=X_semantic_1.shape[1:], go_backwards=True))
model_semantic.add(Convolution1D(nb_filter=32,
filter_length=2,
border_mode='valid',
activation='relu', input_shape=X_semantic_1.shape[1:]))
# model_semantic.add(MaxPooling1D(pool_length=2))
model_semantic.add(Convolution1D(nb_filter=8,
filter_length=2,
border_mode='valid',
activation='relu'))
# model_semantic.add(MaxPooling1D(pool_length=2))
model_semantic.add(Flatten())
# we use standard max pooling (halving the output of the previous layer):
model_manual = Sequential()
model_manual.add(Embedding(input_dim=dic_num_manual + 1, output_dim=20, input_length=X_manual.shape[1]))
# model_manual.add(Convolution1D(nb_filter=2,
# filter_length=2,
# border_mode='valid',
# activation='relu'))
# model_manual.add(MaxPooling1D(pool_length=2))
# model_manual.add(Convolution1D(nb_filter=8,
# filter_length=2,
# border_mode='valid',
# activation='relu'))
# model_manual.add(MaxPooling1D(pool_length=2))
model_manual.add(Flatten())
model_document = Sequential()
model_document.add(
Embedding(input_dim=dic_num_document + 1, output_dim=2, input_length=X_document.shape[1]))
model_document.add(Flatten())
model_cluster = Sequential()
model_cluster.add(Embedding(input_dim=dic_num_cluster + 1, output_dim=5, input_length=X_cluster.shape[1]))
model_cluster.add(Flatten())
model = Sequential()
# model = model_cluster
model.add(Merge([model_document, model_cluster, model_manual, model_lstm], mode='concat',
concat_axis=1))
model.add(Dense(512))
model.add(Dropout(0.5))
model.add(Activation('relu'))
model.add(Dense(128))
model.add(Dropout(0.5))
model.add(Activation('relu'))
# We project onto a single unit output layer, and squash it with a sigmoid:
model.add(Dense(label_num))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adadelta')
# model.fit(X_cluster_1, train_Y, batch_size=100,
# nb_epoch=100, validation_split=0.33, verbose=1)
model.fit([X_document, X_cluster, X_manual, X_semantic_1], train_Y,
batch_size=100, nb_epoch=keys[name][2])
print time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
json_string = model.to_json()
pd.to_pickle(json_string, util.models_prefix + name + '_json_string_lstm.pkl')
model.save_weights(util.models_prefix + name + '_nn_weight_lstm.h5')
X_semantic = np.array(copy.deepcopy(X_test[:, range(95, 475)]))
X_manual = np.array(copy.deepcopy(X_test[:, range(0, 95)]))
X_cluster = np.array(copy.deepcopy(X_test[:, range(475, 545)]))
X_document = np.array(copy.deepcopy(X_test[:, range(545, 547)]))
X_document[:, [0]] = X_document[:, [0]] + train_X[:, [-1]].max()
X_semantic = X_semantic.reshape(X_semantic.shape[0], 10, -1)
X_semantic_1 = np.zeros((X_semantic.shape[0], X_semantic.shape[2], X_semantic.shape[1]))
for i in range(int(X_semantic.shape[0])):
X_semantic_1[i] = np.transpose(X_semantic[i])
lstm_list = model.predict_classes([X_document, X_cluster, X_manual, X_semantic_1])
print name + ' score:' + util.score_lists(y_test, lstm_list)
if __name__ == "__main__":
for name in ['salary', 'size', 'degree', 'position']:
print name
get_all_by_name(name)
|
apache-2.0
|
jakobworldpeace/scikit-learn
|
sklearn/cross_decomposition/pls_.py
|
21
|
30770
|
"""
The :mod:`sklearn.pls` module implements Partial Least Squares (PLS).
"""
# Author: Edouard Duchesnay <[email protected]>
# License: BSD 3 clause
from distutils.version import LooseVersion
from sklearn.utils.extmath import svd_flip
from ..base import BaseEstimator, RegressorMixin, TransformerMixin
from ..utils import check_array, check_consistent_length
from ..externals import six
import warnings
from abc import ABCMeta, abstractmethod
import numpy as np
from scipy import linalg
from ..utils import arpack
from ..utils.validation import check_is_fitted, FLOAT_DTYPES
__all__ = ['PLSCanonical', 'PLSRegression', 'PLSSVD']
import scipy
pinv2_args = {}
if LooseVersion(scipy.__version__) >= LooseVersion('0.12'):
# check_finite=False is an optimization available only in scipy >=0.12
pinv2_args = {'check_finite': False}
def _nipals_twoblocks_inner_loop(X, Y, mode="A", max_iter=500, tol=1e-06,
norm_y_weights=False):
"""Inner loop of the iterative NIPALS algorithm.
Provides an alternative to the svd(X'Y); returns the first left and right
singular vectors of X'Y. See PLS for the meaning of the parameters. It is
similar to the Power method for determining the eigenvectors and
eigenvalues of X'Y.
"""
y_score = Y[:, [0]]
x_weights_old = 0
ite = 1
X_pinv = Y_pinv = None
eps = np.finfo(X.dtype).eps
# Inner loop of the Wold algo.
while True:
# 1.1 Update u: the X weights
if mode == "B":
if X_pinv is None:
# We use slower pinv2 (same as np.linalg.pinv) for stability
# reasons
X_pinv = linalg.pinv2(X, **pinv2_args)
x_weights = np.dot(X_pinv, y_score)
else: # mode A
# Mode A regress each X column on y_score
x_weights = np.dot(X.T, y_score) / np.dot(y_score.T, y_score)
# If y_score only has zeros x_weights will only have zeros. In
# this case add an epsilon to converge to a more acceptable
# solution
if np.dot(x_weights.T, x_weights) < eps:
x_weights += eps
# 1.2 Normalize u
x_weights /= np.sqrt(np.dot(x_weights.T, x_weights)) + eps
# 1.3 Update x_score: the X latent scores
x_score = np.dot(X, x_weights)
# 2.1 Update y_weights
if mode == "B":
if Y_pinv is None:
Y_pinv = linalg.pinv2(Y, **pinv2_args) # compute once pinv(Y)
y_weights = np.dot(Y_pinv, x_score)
else:
# Mode A regress each Y column on x_score
y_weights = np.dot(Y.T, x_score) / np.dot(x_score.T, x_score)
# 2.2 Normalize y_weights
if norm_y_weights:
y_weights /= np.sqrt(np.dot(y_weights.T, y_weights)) + eps
# 2.3 Update y_score: the Y latent scores
y_score = np.dot(Y, y_weights) / (np.dot(y_weights.T, y_weights) + eps)
# y_score = np.dot(Y, y_weights) / np.dot(y_score.T, y_score) ## BUG
x_weights_diff = x_weights - x_weights_old
if np.dot(x_weights_diff.T, x_weights_diff) < tol or Y.shape[1] == 1:
break
if ite == max_iter:
warnings.warn('Maximum number of iterations reached')
break
x_weights_old = x_weights
ite += 1
return x_weights, y_weights, ite
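# A hedged sketch (editor's addition, not part of scikit-learn): the inner loop
# returns one pair of weight vectors, comparable to the leading singular pair
# of X'Y returned by _svd_cross_product below.
# >>> rng = np.random.RandomState(0)
# >>> X, Y = rng.randn(20, 5), rng.randn(20, 3)
# >>> u, v, n_iter = _nipals_twoblocks_inner_loop(X - X.mean(0), Y - Y.mean(0))
# >>> u.shape, v.shape
# ((5, 1), (3, 1))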
def _svd_cross_product(X, Y):
C = np.dot(X.T, Y)
U, s, Vh = linalg.svd(C, full_matrices=False)
u = U[:, [0]]
v = Vh.T[:, [0]]
return u, v
def _center_scale_xy(X, Y, scale=True):
""" Center X, Y and scale if the scale parameter==True
Returns
-------
X, Y, x_mean, y_mean, x_std, y_std
"""
# center
x_mean = X.mean(axis=0)
X -= x_mean
y_mean = Y.mean(axis=0)
Y -= y_mean
# scale
if scale:
x_std = X.std(axis=0, ddof=1)
x_std[x_std == 0.0] = 1.0
X /= x_std
y_std = Y.std(axis=0, ddof=1)
y_std[y_std == 0.0] = 1.0
Y /= y_std
else:
x_std = np.ones(X.shape[1])
y_std = np.ones(Y.shape[1])
return X, Y, x_mean, y_mean, x_std, y_std
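# A hedged sketch (editor's addition): after _center_scale_xy with scale=True,
# every non-constant column of X and Y has zero mean and unit std (ddof=1),
# up to floating-point error; X and Y are assumed to be 2-D float arrays.
# >>> Xc, Yc, x_mean, y_mean, x_std, y_std = _center_scale_xy(X.copy(), Y.copy())
# >>> np.allclose(Xc.mean(axis=0), 0.0)
# True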
class _PLS(six.with_metaclass(ABCMeta), BaseEstimator, TransformerMixin,
RegressorMixin):
"""Partial Least Squares (PLS)
This class implements the generic PLS algorithm, constructors' parameters
allow one to obtain a specific implementation such as:
- PLS2 regression, i.e., PLS 2 blocks, mode A, with asymmetric deflation
and unnormalized y weights such as defined by [Tenenhaus 1998] p. 132.
With univariate response it implements PLS1.
- PLS canonical, i.e., PLS 2 blocks, mode A, with symmetric deflation and
normalized y weights such as defined by [Tenenhaus 1998] (p. 132) and
[Wegelin et al. 2000]. This parametrization implements the original Wold
algorithm.
We use the terminology defined by [Wegelin et al. 2000].
This implementation uses the PLS Wold 2 blocks algorithm based on two
nested loops:
(i) The outer loop iterates over components.
(ii) The inner loop estimates the weight vectors. This can be done
with two algorithms: (a) the inner loop of the original NIPALS algorithm,
or (b) an SVD on the residual cross-covariance matrices.
Parameters
----------
n_components : int, number of components to keep. (default 2).
scale : boolean, scale data? (default True)
deflation_mode : str, "canonical" or "regression". See notes.
mode : "A" classical PLS and "B" CCA. See notes.
norm_y_weights : boolean, normalize Y weights to one? (default False)
algorithm : string, "nipals" or "svd"
The algorithm used to estimate the weights. It will be called
n_components times, i.e. once for each iteration of the outer loop.
max_iter : an integer, the maximum number of iterations (default 500)
of the NIPALS inner loop (used only if algorithm="nipals")
tol : non-negative real, default 1e-06
The tolerance used in the iterative algorithm.
copy : boolean, default True
Whether the deflation should be done on a copy. Leave the default
value of True unless you don't care about side effects.
Attributes
----------
x_weights_ : array, [p, n_components]
X block weights vectors.
y_weights_ : array, [q, n_components]
Y block weights vectors.
x_loadings_ : array, [p, n_components]
X block loadings vectors.
y_loadings_ : array, [q, n_components]
Y block loadings vectors.
x_scores_ : array, [n_samples, n_components]
X scores.
y_scores_ : array, [n_samples, n_components]
Y scores.
x_rotations_ : array, [p, n_components]
X block to latents rotations.
y_rotations_ : array, [q, n_components]
Y block to latents rotations.
coef_ : array, [p, q]
The coefficients of the linear model: ``Y = X coef_ + Err``
n_iter_ : array-like
Number of iterations of the NIPALS inner loop for each
component. Not useful if the algorithm given is "svd".
References
----------
Jacob A. Wegelin. A survey of Partial Least Squares (PLS) methods, with
emphasis on the two-block case. Technical Report 371, Department of
Statistics, University of Washington, Seattle, 2000.
In French but still a reference:
Tenenhaus, M. (1998). La regression PLS: theorie et pratique. Paris:
Editions Technic.
See also
--------
PLSCanonical
PLSRegression
CCA
PLS_SVD
"""
@abstractmethod
def __init__(self, n_components=2, scale=True, deflation_mode="regression",
mode="A", algorithm="nipals", norm_y_weights=False,
max_iter=500, tol=1e-06, copy=True):
self.n_components = n_components
self.deflation_mode = deflation_mode
self.mode = mode
self.norm_y_weights = norm_y_weights
self.scale = scale
self.algorithm = algorithm
self.max_iter = max_iter
self.tol = tol
self.copy = copy
def fit(self, X, Y):
"""Fit model to data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of predictors.
Y : array-like of response, shape = [n_samples, n_targets]
Target vectors, where n_samples is the number of samples and
n_targets is the number of response variables.
"""
# copy since this will contains the residuals (deflated) matrices
check_consistent_length(X, Y)
X = check_array(X, dtype=np.float64, copy=self.copy)
Y = check_array(Y, dtype=np.float64, copy=self.copy, ensure_2d=False)
if Y.ndim == 1:
Y = Y.reshape(-1, 1)
n = X.shape[0]
p = X.shape[1]
q = Y.shape[1]
if self.n_components < 1 or self.n_components > p:
raise ValueError('Invalid number of components: %d' %
self.n_components)
if self.algorithm not in ("svd", "nipals"):
raise ValueError("Got algorithm %s when only 'svd' "
"and 'nipals' are known" % self.algorithm)
if self.algorithm == "svd" and self.mode == "B":
raise ValueError('Incompatible configuration: mode B is not '
'implemented with svd algorithm')
if self.deflation_mode not in ["canonical", "regression"]:
raise ValueError('The deflation mode is unknown')
# Scale (in place)
X, Y, self.x_mean_, self.y_mean_, self.x_std_, self.y_std_ = (
_center_scale_xy(X, Y, self.scale))
# Residuals (deflated) matrices
Xk = X
Yk = Y
# Results matrices
self.x_scores_ = np.zeros((n, self.n_components))
self.y_scores_ = np.zeros((n, self.n_components))
self.x_weights_ = np.zeros((p, self.n_components))
self.y_weights_ = np.zeros((q, self.n_components))
self.x_loadings_ = np.zeros((p, self.n_components))
self.y_loadings_ = np.zeros((q, self.n_components))
self.n_iter_ = []
# NIPALS algo: outer loop, over components
for k in range(self.n_components):
if np.all(np.dot(Yk.T, Yk) < np.finfo(np.double).eps):
# Yk constant
warnings.warn('Y residual constant at iteration %s' % k)
break
# 1) weights estimation (inner loop)
# -----------------------------------
if self.algorithm == "nipals":
x_weights, y_weights, n_iter_ = \
_nipals_twoblocks_inner_loop(
X=Xk, Y=Yk, mode=self.mode, max_iter=self.max_iter,
tol=self.tol, norm_y_weights=self.norm_y_weights)
self.n_iter_.append(n_iter_)
elif self.algorithm == "svd":
x_weights, y_weights = _svd_cross_product(X=Xk, Y=Yk)
# Forces sign stability of x_weights and y_weights
# Sign undeterminacy issue from svd if algorithm == "svd"
# and from platform dependent computation if algorithm == 'nipals'
x_weights, y_weights = svd_flip(x_weights, y_weights.T)
y_weights = y_weights.T
# compute scores
x_scores = np.dot(Xk, x_weights)
if self.norm_y_weights:
y_ss = 1
else:
y_ss = np.dot(y_weights.T, y_weights)
y_scores = np.dot(Yk, y_weights) / y_ss
# test for null variance
if np.dot(x_scores.T, x_scores) < np.finfo(np.double).eps:
warnings.warn('X scores are null at iteration %s' % k)
break
# 2) Deflation (in place)
# ----------------------
# A possible memory footprint reduction could be done here: to avoid
# allocating a data chunk for the rank-one approximation matrix that is
# then subtracted from Xk, we suggest performing a column-wise deflation.
#
# - regress Xk's on x_score
x_loadings = np.dot(Xk.T, x_scores) / np.dot(x_scores.T, x_scores)
# - subtract rank-one approximations to obtain remainder matrix
Xk -= np.dot(x_scores, x_loadings.T)
if self.deflation_mode == "canonical":
# - regress Yk's on y_score, then subtract rank-one approx.
y_loadings = (np.dot(Yk.T, y_scores)
/ np.dot(y_scores.T, y_scores))
Yk -= np.dot(y_scores, y_loadings.T)
if self.deflation_mode == "regression":
# - regress Yk's on x_score, then subtract rank-one approx.
y_loadings = (np.dot(Yk.T, x_scores)
/ np.dot(x_scores.T, x_scores))
Yk -= np.dot(x_scores, y_loadings.T)
# 3) Store weights, scores and loadings # Notation:
self.x_scores_[:, k] = x_scores.ravel() # T
self.y_scores_[:, k] = y_scores.ravel() # U
self.x_weights_[:, k] = x_weights.ravel() # W
self.y_weights_[:, k] = y_weights.ravel() # C
self.x_loadings_[:, k] = x_loadings.ravel() # P
self.y_loadings_[:, k] = y_loadings.ravel() # Q
# Such that: X = TP' + Err and Y = UQ' + Err
# 4) rotations from input space to transformed space (scores)
# T = X W(P'W)^-1 = XW* (W* : p x k matrix)
# U = Y C(Q'C)^-1 = YC* (W* : q x k matrix)
self.x_rotations_ = np.dot(
self.x_weights_,
linalg.pinv2(np.dot(self.x_loadings_.T, self.x_weights_),
**pinv2_args))
if Y.shape[1] > 1:
self.y_rotations_ = np.dot(
self.y_weights_,
linalg.pinv2(np.dot(self.y_loadings_.T, self.y_weights_),
**pinv2_args))
else:
self.y_rotations_ = np.ones(1)
if True or self.deflation_mode == "regression":
# FIXME what's with the if?
# Estimate regression coefficient
# Regress Y on T
# Y = TQ' + Err,
# Then express in function of X
# Y = X W(P'W)^-1Q' + Err = XB + Err
# => B = W*Q' (p x q)
self.coef_ = np.dot(self.x_rotations_, self.y_loadings_.T)
self.coef_ = (1. / self.x_std_.reshape((p, 1)) * self.coef_ *
self.y_std_)
return self
def transform(self, X, Y=None, copy=True):
"""Apply the dimension reduction learned on the train data.
Parameters
----------
X : array-like of predictors, shape = [n_samples, p]
Training vectors, where n_samples is the number of samples and
p is the number of predictors.
Y : array-like of response, shape = [n_samples, q], optional
Training vectors, where n_samples is the number of samples and
q is the number of response variables.
copy : boolean, default True
Whether to copy X and Y, or perform in-place normalization.
Returns
-------
x_scores if Y is not given, (x_scores, y_scores) otherwise.
"""
check_is_fitted(self, 'x_mean_')
X = check_array(X, copy=copy, dtype=FLOAT_DTYPES)
# Normalize
X -= self.x_mean_
X /= self.x_std_
# Apply rotation
x_scores = np.dot(X, self.x_rotations_)
if Y is not None:
Y = check_array(Y, ensure_2d=False, copy=copy, dtype=FLOAT_DTYPES)
if Y.ndim == 1:
Y = Y.reshape(-1, 1)
Y -= self.y_mean_
Y /= self.y_std_
y_scores = np.dot(Y, self.y_rotations_)
return x_scores, y_scores
return x_scores
def predict(self, X, copy=True):
"""Apply the dimension reduction learned on the train data.
Parameters
----------
X : array-like of predictors, shape = [n_samples, p]
Training vectors, where n_samples is the number of samples and
p is the number of predictors.
copy : boolean, default True
Whether to copy X and Y, or perform in-place normalization.
Notes
-----
This call requires the estimation of a p x q matrix, which may
be an issue in high dimensional space.
"""
check_is_fitted(self, 'x_mean_')
X = check_array(X, copy=copy, dtype=FLOAT_DTYPES)
# Normalize
X -= self.x_mean_
X /= self.x_std_
Ypred = np.dot(X, self.coef_)
return Ypred + self.y_mean_
def fit_transform(self, X, y=None, **fit_params):
"""Learn and apply the dimension reduction on the train data.
Parameters
----------
X : array-like of predictors, shape = [n_samples, p]
Training vectors, where n_samples is the number of samples and
p is the number of predictors.
Y : array-like of response, shape = [n_samples, q], optional
Training vectors, where n_samples is the number of samples and
q is the number of response variables.
copy : boolean, default True
Whether to copy X and Y, or perform in-place normalization.
Returns
-------
x_scores if Y is not given, (x_scores, y_scores) otherwise.
"""
return self.fit(X, y, **fit_params).transform(X, y)
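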
class PLSRegression(_PLS):
"""PLS regression
PLSRegression implements the PLS 2 blocks regression known as PLS2 or PLS1
in case of one dimensional response.
This class inherits from _PLS with mode="A", deflation_mode="regression",
norm_y_weights=False and algorithm="nipals".
Read more in the :ref:`User Guide <cross_decomposition>`.
Parameters
----------
n_components : int, (default 2)
Number of components to keep.
scale : boolean, (default True)
whether to scale the data
max_iter : an integer, (default 500)
the maximum number of iterations of the NIPALS inner loop (used
only if algorithm="nipals")
tol : non-negative real
Tolerance used in the iterative algorithm default 1e-06.
copy : boolean, default True
Whether the deflation should be done on a copy. Leave the default
value of True unless you don't care about side effects.
Attributes
----------
x_weights_ : array, [p, n_components]
X block weights vectors.
y_weights_ : array, [q, n_components]
Y block weights vectors.
x_loadings_ : array, [p, n_components]
X block loadings vectors.
y_loadings_ : array, [q, n_components]
Y block loadings vectors.
x_scores_ : array, [n_samples, n_components]
X scores.
y_scores_ : array, [n_samples, n_components]
Y scores.
x_rotations_ : array, [p, n_components]
X block to latents rotations.
y_rotations_ : array, [q, n_components]
Y block to latents rotations.
coef_ : array, [p, q]
The coefficients of the linear model: ``Y = X coef_ + Err``
n_iter_ : array-like
Number of iterations of the NIPALS inner loop for each
component.
Notes
-----
Matrices::
T: x_scores_
U: y_scores_
W: x_weights_
C: y_weights_
P: x_loadings_
Q: y_loadings_
Are computed such that::
X = T P.T + Err and Y = U Q.T + Err
T[:, k] = Xk W[:, k] for k in range(n_components)
U[:, k] = Yk C[:, k] for k in range(n_components)
x_rotations_ = W (P.T W)^(-1)
y_rotations_ = C (Q.T C)^(-1)
where Xk and Yk are residual matrices at iteration k.
`Slides explaining PLS <http://www.eigenvector.com/Docs/Wise_pls_properties.pdf>`
For each component k, find weights u, v that optimizes:
``max corr(Xk u, Yk v) * std(Xk u) std(Yk v)``, such that ``|u| = 1``
Note that it maximizes both the correlations between the scores and the
intra-block variances.
The residual matrix of X (Xk+1) block is obtained by the deflation on
the current X score: x_score.
The residual matrix of Y (Yk+1) block is obtained by deflation on the
current X score. This performs the PLS regression known as PLS2. This
mode is prediction oriented.
This implementation provides the same results as 3 PLS packages
available in the R language (R-project):
- "mixOmics" with function pls(X, Y, mode = "regression")
- "plspm " with function plsreg2(X, Y)
- "pls" with function oscorespls.fit(X, Y)
Examples
--------
>>> from sklearn.cross_decomposition import PLSRegression
>>> X = [[0., 0., 1.], [1.,0.,0.], [2.,2.,2.], [2.,5.,4.]]
>>> Y = [[0.1, -0.2], [0.9, 1.1], [6.2, 5.9], [11.9, 12.3]]
>>> pls2 = PLSRegression(n_components=2)
>>> pls2.fit(X, Y)
... # doctest: +NORMALIZE_WHITESPACE
PLSRegression(copy=True, max_iter=500, n_components=2, scale=True,
tol=1e-06)
>>> Y_pred = pls2.predict(X)
References
----------
Jacob A. Wegelin. A survey of Partial Least Squares (PLS) methods, with
emphasis on the two-block case. Technical Report 371, Department of
Statistics, University of Washington, Seattle, 2000.
In French but still a reference:
Tenenhaus, M. (1998). La regression PLS: theorie et pratique. Paris:
Editions Technic.
"""
def __init__(self, n_components=2, scale=True,
max_iter=500, tol=1e-06, copy=True):
super(PLSRegression, self).__init__(
n_components=n_components, scale=scale,
deflation_mode="regression", mode="A",
norm_y_weights=False, max_iter=max_iter, tol=tol,
copy=copy)
class PLSCanonical(_PLS):
""" PLSCanonical implements the 2 blocks canonical PLS of the original Wold
algorithm [Tenenhaus 1998] p.204, referred as PLS-C2A in [Wegelin 2000].
This class inherits from PLS with mode="A" and deflation_mode="canonical",
norm_y_weights=True and algorithm="nipals", but svd should provide similar
results up to numerical errors.
Read more in the :ref:`User Guide <cross_decomposition>`.
Parameters
----------
scale : boolean, scale data? (default True)
algorithm : string, "nipals" or "svd"
The algorithm used to estimate the weights. It will be called
n_components times, i.e. once for each iteration of the outer loop.
max_iter : an integer, (default 500)
the maximum number of iterations of the NIPALS inner loop (used
only if algorithm="nipals")
tol : non-negative real, default 1e-06
the tolerance used in the iterative algorithm
copy : boolean, default True
Whether the deflation should be done on a copy. Leave the default
value of True unless you don't care about side effects.
n_components : int, number of components to keep. (default 2).
Attributes
----------
x_weights_ : array, shape = [p, n_components]
X block weights vectors.
y_weights_ : array, shape = [q, n_components]
Y block weights vectors.
x_loadings_ : array, shape = [p, n_components]
X block loadings vectors.
y_loadings_ : array, shape = [q, n_components]
Y block loadings vectors.
x_scores_ : array, shape = [n_samples, n_components]
X scores.
y_scores_ : array, shape = [n_samples, n_components]
Y scores.
x_rotations_ : array, shape = [p, n_components]
X block to latents rotations.
y_rotations_ : array, shape = [q, n_components]
Y block to latents rotations.
n_iter_ : array-like
Number of iterations of the NIPALS inner loop for each
component. Not useful if the algorithm provided is "svd".
Notes
-----
Matrices::
T: x_scores_
U: y_scores_
W: x_weights_
C: y_weights_
P: x_loadings_
Q: y_loadings_
Are computed such that::
X = T P.T + Err and Y = U Q.T + Err
T[:, k] = Xk W[:, k] for k in range(n_components)
U[:, k] = Yk C[:, k] for k in range(n_components)
x_rotations_ = W (P.T W)^(-1)
y_rotations_ = C (Q.T C)^(-1)
where Xk and Yk are residual matrices at iteration k.
`Slides explaining PLS <http://www.eigenvector.com/Docs/Wise_pls_properties.pdf>`
For each component k, find weights u, v that optimize::
max corr(Xk u, Yk v) * std(Xk u) std(Yk v), such that ``|u| = |v| = 1``
Note that it maximizes both the correlations between the scores and the
intra-block variances.
The residual matrix of X (Xk+1) block is obtained by the deflation on the
current X score: x_score.
The residual matrix of Y (Yk+1) block is obtained by deflation on the
current Y score. This performs a canonical symmetric version of the PLS
regression, but it is slightly different from CCA. This is mostly used
for modeling.
This implementation provides the same results that the "plspm" package
provided in the R language (R-project), using the function plsca(X, Y).
Results are equal or collinear with the function
``pls(..., mode = "canonical")`` of the "mixOmics" package. The difference
lies in the fact that the mixOmics implementation does not exactly implement
the Wold algorithm since it does not normalize y_weights to one.
Examples
--------
>>> from sklearn.cross_decomposition import PLSCanonical
>>> X = [[0., 0., 1.], [1.,0.,0.], [2.,2.,2.], [2.,5.,4.]]
>>> Y = [[0.1, -0.2], [0.9, 1.1], [6.2, 5.9], [11.9, 12.3]]
>>> plsca = PLSCanonical(n_components=2)
>>> plsca.fit(X, Y)
... # doctest: +NORMALIZE_WHITESPACE
PLSCanonical(algorithm='nipals', copy=True, max_iter=500, n_components=2,
scale=True, tol=1e-06)
>>> X_c, Y_c = plsca.transform(X, Y)
References
----------
Jacob A. Wegelin. A survey of Partial Least Squares (PLS) methods, with
emphasis on the two-block case. Technical Report 371, Department of
Statistics, University of Washington, Seattle, 2000.
Tenenhaus, M. (1998). La regression PLS: theorie et pratique. Paris:
Editions Technic.
See also
--------
CCA
PLSSVD
"""
def __init__(self, n_components=2, scale=True, algorithm="nipals",
max_iter=500, tol=1e-06, copy=True):
super(PLSCanonical, self).__init__(
n_components=n_components, scale=scale,
deflation_mode="canonical", mode="A",
norm_y_weights=True, algorithm=algorithm,
max_iter=max_iter, tol=tol, copy=copy)
class PLSSVD(BaseEstimator, TransformerMixin):
"""Partial Least Square SVD
Simply performs an SVD on the cross-covariance matrix X'Y.
There is no iterative deflation here.
Read more in the :ref:`User Guide <cross_decomposition>`.
Parameters
----------
n_components : int, default 2
Number of components to keep.
scale : boolean, default True
Whether to scale X and Y.
copy : boolean, default True
Whether to copy X and Y, or perform in-place computations.
Attributes
----------
x_weights_ : array, [p, n_components]
X block weights vectors.
y_weights_ : array, [q, n_components]
Y block weights vectors.
x_scores_ : array, [n_samples, n_components]
X scores.
y_scores_ : array, [n_samples, n_components]
Y scores.
See also
--------
PLSCanonical
CCA
"""
def __init__(self, n_components=2, scale=True, copy=True):
self.n_components = n_components
self.scale = scale
self.copy = copy
def fit(self, X, Y):
# copy since this will contains the centered data
check_consistent_length(X, Y)
X = check_array(X, dtype=np.float64, copy=self.copy)
Y = check_array(Y, dtype=np.float64, copy=self.copy, ensure_2d=False)
if Y.ndim == 1:
Y = Y.reshape(-1, 1)
if self.n_components > max(Y.shape[1], X.shape[1]):
raise ValueError("Invalid number of components n_components=%d"
" with X of shape %s and Y of shape %s."
% (self.n_components, str(X.shape), str(Y.shape)))
# Scale (in place)
X, Y, self.x_mean_, self.y_mean_, self.x_std_, self.y_std_ = (
_center_scale_xy(X, Y, self.scale))
# svd(X'Y)
C = np.dot(X.T, Y)
# The arpack svds solver only works if the number of extracted
# components is smaller than rank(X) - 1. Hence, if we want to extract
# all the components (C.shape[1]), we have to use another solver. Otherwise,
# we use arpack to compute only the requested components.
if self.n_components >= np.min(C.shape):
U, s, V = linalg.svd(C, full_matrices=False)
else:
U, s, V = arpack.svds(C, k=self.n_components)
# Deterministic output
U, V = svd_flip(U, V)
V = V.T
self.x_scores_ = np.dot(X, U)
self.y_scores_ = np.dot(Y, V)
self.x_weights_ = U
self.y_weights_ = V
return self
def transform(self, X, Y=None):
"""Apply the dimension reduction learned on the train data."""
check_is_fitted(self, 'x_mean_')
X = check_array(X, dtype=np.float64)
Xr = (X - self.x_mean_) / self.x_std_
x_scores = np.dot(Xr, self.x_weights_)
if Y is not None:
if Y.ndim == 1:
Y = Y.reshape(-1, 1)
Yr = (Y - self.y_mean_) / self.y_std_
y_scores = np.dot(Yr, self.y_weights_)
return x_scores, y_scores
return x_scores
def fit_transform(self, X, y=None, **fit_params):
"""Learn and apply the dimension reduction on the train data.
Parameters
----------
X : array-like of predictors, shape = [n_samples, p]
Training vectors, where n_samples is the number of samples and
p is the number of predictors.
Y : array-like of response, shape = [n_samples, q], optional
Training vectors, where n_samples is the number of samples and
q is the number of response variables.
Returns
-------
x_scores if Y is not given, (x_scores, y_scores) otherwise.
"""
return self.fit(X, y, **fit_params).transform(X, y)
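# --- Hedged usage sketch (added for illustration, not part of the original
# module): fit PLSSVD on small random X/Y blocks and project both blocks onto
# the shared components. The shapes and random data below are arbitrary.
def _plssvd_usage_example():
    import numpy as np
    rng = np.random.RandomState(0)
    X = rng.randn(20, 5)
    Y = rng.randn(20, 3)
    pls = PLSSVD(n_components=2).fit(X, Y)
    x_scores, y_scores = pls.transform(X, Y)
    # both score matrices have shape (20, 2): one column per kept component
    return x_scores, y_scores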
|
bsd-3-clause
|
CVML/scikit-learn
|
sklearn/utils/tests/test_shortest_path.py
|
88
|
2828
|
from collections import defaultdict
import numpy as np
from numpy.testing import assert_array_almost_equal
from sklearn.utils.graph import (graph_shortest_path,
single_source_shortest_path_length)
def floyd_warshall_slow(graph, directed=False):
N = graph.shape[0]
    #set zero entries (missing edges) to infinity
graph[np.where(graph == 0)] = np.inf
#set diagonal to zero
graph.flat[::N + 1] = 0
if not directed:
graph = np.minimum(graph, graph.T)
for k in range(N):
for i in range(N):
for j in range(N):
graph[i, j] = min(graph[i, j], graph[i, k] + graph[k, j])
graph[np.where(np.isinf(graph))] = 0
return graph
def generate_graph(N=20):
#sparse grid of distances
rng = np.random.RandomState(0)
dist_matrix = rng.random_sample((N, N))
#make symmetric: distances are not direction-dependent
dist_matrix += dist_matrix.T
#make graph sparse
i = (rng.randint(N, size=N * N // 2), rng.randint(N, size=N * N // 2))
dist_matrix[i] = 0
#set diagonal to zero
dist_matrix.flat[::N + 1] = 0
return dist_matrix
def test_floyd_warshall():
dist_matrix = generate_graph(20)
for directed in (True, False):
graph_FW = graph_shortest_path(dist_matrix, directed, 'FW')
graph_py = floyd_warshall_slow(dist_matrix.copy(), directed)
assert_array_almost_equal(graph_FW, graph_py)
def test_dijkstra():
dist_matrix = generate_graph(20)
for directed in (True, False):
graph_D = graph_shortest_path(dist_matrix, directed, 'D')
graph_py = floyd_warshall_slow(dist_matrix.copy(), directed)
assert_array_almost_equal(graph_D, graph_py)
def test_shortest_path():
dist_matrix = generate_graph(20)
# We compare path length and not costs (-> set distances to 0 or 1)
dist_matrix[dist_matrix != 0] = 1
for directed in (True, False):
if not directed:
dist_matrix = np.minimum(dist_matrix, dist_matrix.T)
graph_py = floyd_warshall_slow(dist_matrix.copy(), directed)
for i in range(dist_matrix.shape[0]):
# Non-reachable nodes have distance 0 in graph_py
dist_dict = defaultdict(int)
dist_dict.update(single_source_shortest_path_length(dist_matrix,
i))
for j in range(graph_py[i].shape[0]):
assert_array_almost_equal(dist_dict[j], graph_py[i, j])
def test_dijkstra_bug_fix():
X = np.array([[0., 0., 4.],
[1., 0., 2.],
[0., 5., 0.]])
dist_FW = graph_shortest_path(X, directed=False, method='FW')
dist_D = graph_shortest_path(X, directed=False, method='D')
assert_array_almost_equal(dist_D, dist_FW)
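# --- Hedged illustration (added; not one of the original tests): run the slow
# reference Floyd-Warshall implementation above on a tiny explicit 3-node graph
# where node 0 reaches node 2 only through node 1 (expected distance 1 + 2 = 3).
if __name__ == '__main__':
    tiny_graph = np.array([[0., 1., 0.],
                           [1., 0., 2.],
                           [0., 2., 0.]])
    # floyd_warshall_slow mutates its input, so pass a copy (as the tests do)
    print(floyd_warshall_slow(tiny_graph.copy(), directed=False))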
|
bsd-3-clause
|
wzbozon/scikit-learn
|
sklearn/mixture/tests/test_dpgmm.py
|
261
|
4490
|
import unittest
import sys
import numpy as np
from sklearn.mixture import DPGMM, VBGMM
from sklearn.mixture.dpgmm import log_normalize
from sklearn.datasets import make_blobs
from sklearn.utils.testing import assert_array_less, assert_equal
from sklearn.mixture.tests.test_gmm import GMMTester
from sklearn.externals.six.moves import cStringIO as StringIO
np.seterr(all='warn')
def test_class_weights():
# check that the class weights are updated
# simple 3 cluster dataset
X, y = make_blobs(random_state=1)
for Model in [DPGMM, VBGMM]:
dpgmm = Model(n_components=10, random_state=1, alpha=20, n_iter=50)
dpgmm.fit(X)
# get indices of components that are used:
indices = np.unique(dpgmm.predict(X))
active = np.zeros(10, dtype=np.bool)
active[indices] = True
# used components are important
assert_array_less(.1, dpgmm.weights_[active])
# others are not
assert_array_less(dpgmm.weights_[~active], .05)
def test_verbose_boolean():
    # checks that the verbose output is the same
# for the flag values '1' and 'True'
# simple 3 cluster dataset
X, y = make_blobs(random_state=1)
for Model in [DPGMM, VBGMM]:
dpgmm_bool = Model(n_components=10, random_state=1, alpha=20,
n_iter=50, verbose=True)
dpgmm_int = Model(n_components=10, random_state=1, alpha=20,
n_iter=50, verbose=1)
old_stdout = sys.stdout
sys.stdout = StringIO()
try:
# generate output with the boolean flag
dpgmm_bool.fit(X)
verbose_output = sys.stdout
verbose_output.seek(0)
bool_output = verbose_output.readline()
# generate output with the int flag
dpgmm_int.fit(X)
verbose_output = sys.stdout
verbose_output.seek(0)
int_output = verbose_output.readline()
assert_equal(bool_output, int_output)
finally:
sys.stdout = old_stdout
def test_verbose_first_level():
# simple 3 cluster dataset
X, y = make_blobs(random_state=1)
for Model in [DPGMM, VBGMM]:
dpgmm = Model(n_components=10, random_state=1, alpha=20, n_iter=50,
verbose=1)
old_stdout = sys.stdout
sys.stdout = StringIO()
try:
dpgmm.fit(X)
finally:
sys.stdout = old_stdout
def test_verbose_second_level():
# simple 3 cluster dataset
X, y = make_blobs(random_state=1)
for Model in [DPGMM, VBGMM]:
dpgmm = Model(n_components=10, random_state=1, alpha=20, n_iter=50,
verbose=2)
old_stdout = sys.stdout
sys.stdout = StringIO()
try:
dpgmm.fit(X)
finally:
sys.stdout = old_stdout
def test_log_normalize():
v = np.array([0.1, 0.8, 0.01, 0.09])
a = np.log(2 * v)
assert np.allclose(v, log_normalize(a), rtol=0.01)
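# --- Hedged plain-numpy sketch of the contract exercised above (an assumed
# reference, not sklearn's internal implementation): log_normalize is expected
# to turn unnormalised log-weights back into a probability vector.
def _log_normalize_reference(log_w):
    log_w = np.asarray(log_w, dtype=float)
    w = np.exp(log_w - log_w.max())  # subtract the max for numerical stability
    return w / w.sum()
# e.g. _log_normalize_reference(np.log(2 * v)) recovers v for the vector used
# in test_log_normalize, since the factor of 2 cancels in the normalisation.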
def do_model(self, **kwds):
return VBGMM(verbose=False, **kwds)
class DPGMMTester(GMMTester):
model = DPGMM
do_test_eval = False
def score(self, g, train_obs):
_, z = g.score_samples(train_obs)
return g.lower_bound(train_obs, z)
class TestDPGMMWithSphericalCovars(unittest.TestCase, DPGMMTester):
covariance_type = 'spherical'
setUp = GMMTester._setUp
class TestDPGMMWithDiagCovars(unittest.TestCase, DPGMMTester):
covariance_type = 'diag'
setUp = GMMTester._setUp
class TestDPGMMWithTiedCovars(unittest.TestCase, DPGMMTester):
covariance_type = 'tied'
setUp = GMMTester._setUp
class TestDPGMMWithFullCovars(unittest.TestCase, DPGMMTester):
covariance_type = 'full'
setUp = GMMTester._setUp
class VBGMMTester(GMMTester):
model = do_model
do_test_eval = False
def score(self, g, train_obs):
_, z = g.score_samples(train_obs)
return g.lower_bound(train_obs, z)
class TestVBGMMWithSphericalCovars(unittest.TestCase, VBGMMTester):
covariance_type = 'spherical'
setUp = GMMTester._setUp
class TestVBGMMWithDiagCovars(unittest.TestCase, VBGMMTester):
covariance_type = 'diag'
setUp = GMMTester._setUp
class TestVBGMMWithTiedCovars(unittest.TestCase, VBGMMTester):
covariance_type = 'tied'
setUp = GMMTester._setUp
class TestVBGMMWithFullCovars(unittest.TestCase, VBGMMTester):
covariance_type = 'full'
setUp = GMMTester._setUp
|
bsd-3-clause
|
lungetech/luigi
|
examples/pyspark_wc.py
|
21
|
3380
|
# -*- coding: utf-8 -*-
#
# Copyright 2012-2015 Spotify AB
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import luigi
from luigi.s3 import S3Target
from luigi.contrib.spark import SparkSubmitTask, PySparkTask
class InlinePySparkWordCount(PySparkTask):
"""
This task runs a :py:class:`luigi.contrib.spark.PySparkTask` task
over the target data in :py:meth:`wordcount.input` (a file in S3) and
writes the result into its :py:meth:`wordcount.output` target (a file in S3).
This class uses :py:meth:`luigi.contrib.spark.PySparkTask.main`.
Example luigi configuration::
[spark]
spark-submit: /usr/local/spark/bin/spark-submit
master: spark://spark.example.org:7077
# py-packages: numpy, pandas
"""
driver_memory = '2g'
executor_memory = '3g'
def input(self):
return S3Target("s3n://bucket.example.org/wordcount.input")
def output(self):
return S3Target('s3n://bucket.example.org/wordcount.output')
def main(self, sc, *args):
sc.textFile(self.input().path) \
.flatMap(lambda line: line.split()) \
.map(lambda word: (word, 1)) \
.reduceByKey(lambda a, b: a + b) \
.saveAsTextFile(self.output().path)
class PySparkWordCount(SparkSubmitTask):
"""
This task is the same as :py:class:`InlinePySparkWordCount` above but uses
an external python driver file specified in :py:meth:`app`
It runs a :py:class:`luigi.contrib.spark.SparkSubmitTask` task
over the target data in :py:meth:`wordcount.input` (a file in S3) and
writes the result into its :py:meth:`wordcount.output` target (a file in S3).
This class uses :py:meth:`luigi.contrib.spark.SparkSubmitTask.run`.
Example luigi configuration::
[spark]
spark-submit: /usr/local/spark/bin/spark-submit
master: spark://spark.example.org:7077
deploy-mode: client
"""
driver_memory = '2g'
executor_memory = '3g'
total_executor_cores = luigi.IntParameter(default=100, significant=False)
name = "PySpark Word Count"
app = 'wordcount.py'
def app_options(self):
# These are passed to the Spark main args in the defined order.
return [self.input().path, self.output().path]
def input(self):
return S3Target("s3n://bucket.example.org/wordcount.input")
def output(self):
return S3Target('s3n://bucket.example.org/wordcount.output')
'''
# Corresponding example Spark job, running word count with Spark's Python API.
# This file would have to be saved as wordcount.py.
import sys
from pyspark import SparkContext
if __name__ == "__main__":
sc = SparkContext()
sc.textFile(sys.argv[1]) \
.flatMap(lambda line: line.split()) \
.map(lambda word: (word, 1)) \
.reduceByKey(lambda a, b: a + b) \
.saveAsTextFile(sys.argv[2])
'''
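# --- Hedged usage sketch (added; not part of the original example): with a
# [spark] section configured as in the docstrings above, plus a local
# spark-submit and valid S3 credentials, the inline task could be scheduled
# with luigi.build. The bucket paths are the illustrative ones used above.
if __name__ == '__main__':
    luigi.build([InlinePySparkWordCount()], local_scheduler=True)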
|
apache-2.0
|
ChinaQuants/zipline
|
zipline/history/history_container.py
|
12
|
33513
|
#
# Copyright 2014 Quantopian, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from bisect import insort_left
from collections import namedtuple
from itertools import groupby, product
import logbook
import numpy as np
import pandas as pd
from six import itervalues, iteritems, iterkeys
from . history import HistorySpec
from zipline.utils.data import RollingPanel, _ensure_index
from zipline.utils.munge import ffill, bfill
logger = logbook.Logger('History Container')
# The closing price is referred to by multiple names,
# allow both for price rollover logic etc.
CLOSING_PRICE_FIELDS = frozenset({'price', 'close_price'})
def ffill_buffer_from_prior_values(freq,
field,
buffer_frame,
digest_frame,
pv_frame,
raw=False):
"""
Forward-fill a buffer frame, falling back to the end-of-period values of a
digest frame if the buffer frame has leading NaNs.
"""
# convert to ndarray if necessary
digest_values = digest_frame
if raw and isinstance(digest_frame, pd.DataFrame):
digest_values = digest_frame.values
buffer_values = buffer_frame
if raw and isinstance(buffer_frame, pd.DataFrame):
buffer_values = buffer_frame.values
nan_sids = pd.isnull(buffer_values[0])
if np.any(nan_sids) and len(digest_values):
# If we have any leading nans in the buffer and we have a non-empty
# digest frame, use the oldest digest values as the initial buffer
# values.
buffer_values[0, nan_sids] = digest_values[-1, nan_sids]
nan_sids = pd.isnull(buffer_values[0])
if np.any(nan_sids):
# If we still have leading nans, fall back to the last known values
# from before the digest.
key_loc = pv_frame.index.get_loc((freq.freq_str, field))
filler = pv_frame.values[key_loc, nan_sids]
buffer_values[0, nan_sids] = filler
if raw:
filled = ffill(buffer_values)
return filled
return buffer_frame.ffill()
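# --- Hedged, generic illustration of the fallback idea above using plain
# pandas objects (not the zipline-specific buffer/digest/prior-value frames):
# seed leading NaNs from the last known values, then forward-fill.
def _ffill_with_fallback_example():
    buffer_frame = pd.DataFrame({'AAPL': [np.nan, np.nan, 11.0],
                                 'MSFT': [20.0, np.nan, np.nan]})
    last_known = pd.Series({'AAPL': 10.0, 'MSFT': 19.0})
    # fill the leading row from the last known values before forward-filling,
    # so an illiquid sid does not start its window with NaNs
    buffer_frame.iloc[0] = buffer_frame.iloc[0].fillna(last_known)
    return buffer_frame.ffill()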
def ffill_digest_frame_from_prior_values(freq,
field,
digest_frame,
pv_frame,
raw=False):
"""
Forward-fill a digest frame, falling back to the last known prior values if
necessary.
"""
# convert to ndarray if necessary
values = digest_frame
if raw and isinstance(digest_frame, pd.DataFrame):
values = digest_frame.values
nan_sids = pd.isnull(values[0])
if np.any(nan_sids):
# If we have any leading nans in the frame, use values from pv_frame to
# seed values for those sids.
key_loc = pv_frame.index.get_loc((freq.freq_str, field))
filler = pv_frame.values[key_loc, nan_sids]
values[0, nan_sids] = filler
if raw:
filled = ffill(values)
return filled
return digest_frame.ffill()
def freq_str_and_bar_count(history_spec):
"""
Helper for getting the frequency string and bar count from a history spec.
"""
return (history_spec.frequency.freq_str, history_spec.bar_count)
def next_bar(spec, env):
"""
Returns a function that will return the next bar for a given datetime.
"""
if spec.frequency.unit_str == 'd':
if spec.frequency.data_frequency == 'minute':
return lambda dt: env.get_open_and_close(
env.next_trading_day(dt),
)[1]
else:
return env.next_trading_day
else:
return env.next_market_minute
def compute_largest_specs(history_specs):
"""
Maps a Frequency to the largest HistorySpec at that frequency from an
iterable of HistorySpecs.
"""
return {key: max(group, key=lambda f: f.bar_count)
for key, group in groupby(
sorted(history_specs, key=freq_str_and_bar_count),
key=lambda spec: spec.frequency)}
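# --- Hedged illustration with hypothetical stand-in spec objects (the real
# HistorySpec carries more state): per frequency, only the spec with the
# largest bar_count is kept.
def _compute_largest_specs_example():
    ToyFreq = namedtuple('ToyFreq', ['freq_str'])
    ToySpec = namedtuple('ToySpec', ['frequency', 'bar_count'])
    specs = [ToySpec(ToyFreq('1d'), 5),
             ToySpec(ToyFreq('1d'), 20),
             ToySpec(ToyFreq('1m'), 390)]
    # -> {ToyFreq('1d'): the 20-bar spec, ToyFreq('1m'): the 390-bar spec}
    return compute_largest_specs(specs)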
# tuples to store a change to the shape of a HistoryContainer
FrequencyDelta = namedtuple(
'FrequencyDelta',
['freq', 'buffer_delta'],
)
LengthDelta = namedtuple(
'LengthDelta',
['freq', 'delta'],
)
HistoryContainerDeltaSuper = namedtuple(
'HistoryContainerDelta',
['field', 'frequency_delta', 'length_delta'],
)
class HistoryContainerDelta(HistoryContainerDeltaSuper):
"""
A class representing a resize of the history container.
"""
def __new__(cls, field=None, frequency_delta=None, length_delta=None):
"""
        field is a new field that was added.
        frequency_delta is a FrequencyDelta representing that a new frequency
        was added.
        length_delta is a LengthDelta, which holds a frequency and a bar_count.
        If any of these is None, then no change of that type occurred.
"""
return super(HistoryContainerDelta, cls).__new__(
cls, field, frequency_delta, length_delta,
)
@property
def empty(self):
"""
Checks if the delta is empty.
"""
return (self.field is None and
self.frequency_delta is None and
self.length_delta is None)
def normalize_to_data_freq(data_frequency, dt):
if data_frequency == 'minute':
return dt
return pd.tslib.normalize_date(dt)
class HistoryContainer(object):
"""
Container for all history panels and frames used by an algoscript.
To be used internally by TradingAlgorithm, but *not* passed directly to the
algorithm.
Entry point for the algoscript is the result of `get_history`.
"""
VALID_FIELDS = {
'price', 'open_price', 'volume', 'high', 'low', 'close_price',
}
def __init__(self,
history_specs,
initial_sids,
initial_dt,
data_frequency,
env,
bar_data=None):
"""
A container to hold a rolling window of historical data within a user's
algorithm.
Args:
history_specs (dict[Frequency:HistorySpec]): The starting history
specs that this container should be able to service.
initial_sids (set[Asset or Int]): The starting sids to watch.
initial_dt (datetime): The datetime to start collecting history from.
bar_data (BarData): If this container is being constructed during
handle_data, this is the BarData for the current bar to fill the
buffer with. If this is constructed elsewhere, it is None.
Returns:
An instance of a new HistoryContainer
"""
# Store a reference to the env
self.env = env
# History specs to be served by this container.
self.history_specs = history_specs
self.largest_specs = compute_largest_specs(
itervalues(self.history_specs)
)
# The set of fields specified by all history specs
self.fields = pd.Index(
sorted(set(spec.field for spec in itervalues(history_specs)))
)
self.sids = pd.Index(
sorted(set(initial_sids or []))
)
self.data_frequency = data_frequency
initial_dt = normalize_to_data_freq(self.data_frequency, initial_dt)
# This panel contains raw minutes for periods that haven't been fully
# completed. When a frequency period rolls over, these minutes are
# digested using some sort of aggregation call on the panel (e.g. `sum`
# for volume, `max` for high, `min` for low, etc.).
self.buffer_panel = self.create_buffer_panel(initial_dt, bar_data)
# Dictionaries with Frequency objects as keys.
self.digest_panels, self.cur_window_starts, self.cur_window_closes = \
self.create_digest_panels(initial_sids, initial_dt)
        # Keeps the last known non-nan values so forward-filling does not
        # reintroduce nans once data has been seen for a sid.
self.last_known_prior_values = pd.DataFrame(
data=None,
index=self.prior_values_index,
columns=self.prior_values_columns,
# Note: For bizarre "intricacies of the spaghetti that is pandas
# indexing logic" reasons, setting this dtype prevents indexing
# errors in update_last_known_values. This is safe for the time
# being because our only forward-fillable fields are floats. If we
# need to add a non-float-typed forward-fillable field, then we may
# find ourselves having to track down and fix a pandas bug.
dtype=np.float64,
)
_ffillable_fields = None
@property
def ffillable_fields(self):
if self._ffillable_fields is None:
fillables = self.fields.intersection(HistorySpec.FORWARD_FILLABLE)
self._ffillable_fields = fillables
return self._ffillable_fields
@property
def prior_values_index(self):
index_values = list(
product(
(freq.freq_str for freq in self.unique_frequencies),
# Only store prior values for forward-fillable fields.
self.ffillable_fields,
)
)
if index_values:
return pd.MultiIndex.from_tuples(index_values)
else:
# MultiIndex doesn't gracefully support empty input, so we return
            # an empty regular Index if we have no values.
return pd.Index(index_values)
@property
def prior_values_columns(self):
return self.sids
@property
def all_panels(self):
yield self.buffer_panel
for panel in self.digest_panels.values():
yield panel
@property
def unique_frequencies(self):
"""
Return an iterator over all the unique frequencies serviced by this
container.
"""
return iterkeys(self.largest_specs)
def _add_frequency(self, spec, dt, data):
"""
Adds a new frequency to the container. This reshapes the buffer_panel
if needed.
"""
freq = spec.frequency
self.largest_specs[freq] = spec
new_buffer_len = 0
if freq.max_bars > self.buffer_panel.window_length:
# More bars need to be held in the buffer_panel to support this
# freq
if freq.data_frequency \
!= self.buffer_spec.frequency.data_frequency:
# If the data_frequencies are not the same, then we need to
# create a fresh buffer.
self.buffer_panel = self.create_buffer_panel(
dt, bar_data=data,
)
new_buffer_len = None
else:
# The frequencies are the same, we just need to add more bars.
self._resize_panel(
self.buffer_panel,
freq.max_bars,
dt,
self.buffer_spec.frequency,
)
new_buffer_len = freq.max_minutes
                # update the current buffer_spec to reflect the new length.
self.buffer_spec.bar_count = new_buffer_len + 1
if spec.bar_count > 1:
# This spec has more than one bar, construct a digest panel for it.
self.digest_panels[freq] = self._create_digest_panel(dt, spec=spec)
else:
self.cur_window_starts[freq] = dt
self.cur_window_closes[freq] = freq.window_close(
self.cur_window_starts[freq]
)
self.last_known_prior_values = self.last_known_prior_values.reindex(
index=self.prior_values_index,
)
return FrequencyDelta(freq, new_buffer_len)
def _add_field(self, field):
"""
Adds a new field to the container.
"""
# self.fields is already sorted, so we just need to insert the new
# field in the correct index.
ls = list(self.fields)
insort_left(ls, field)
self.fields = pd.Index(ls)
# unset fillable fields cache
self._ffillable_fields = None
self._realign_fields()
self.last_known_prior_values = self.last_known_prior_values.reindex(
index=self.prior_values_index,
)
return field
def _add_length(self, spec, dt):
"""
Increases the length of the digest panel for spec.frequency. If this
does not have a panel, and one is needed; a digest panel will be
constructed.
"""
old_count = self.largest_specs[spec.frequency].bar_count
self.largest_specs[spec.frequency] = spec
delta = spec.bar_count - old_count
panel = self.digest_panels.get(spec.frequency)
if panel is None:
# The old length for this frequency was 1 bar, meaning no digest
# panel was held. We must construct a new one here.
panel = self._create_digest_panel(dt, spec=spec)
else:
self._resize_panel(panel, spec.bar_count - 1, dt,
freq=spec.frequency)
self.digest_panels[spec.frequency] = panel
return LengthDelta(spec.frequency, delta)
def _resize_panel(self, panel, size, dt, freq):
"""
Resizes a panel, fills the date_buf with the correct values.
"""
# This is the oldest datetime that will be shown in the current window
# of the panel.
oldest_dt = pd.Timestamp(panel.start_date, tz='utc',)
delta = size - panel.window_length
# Construct the missing dates.
missing_dts = self._create_window_date_buf(
delta, freq.unit_str, freq.data_frequency, oldest_dt,
)
panel.extend_back(missing_dts)
def _create_window_date_buf(self,
window,
unit_str,
data_frequency,
dt):
"""
Creates a window length date_buf looking backwards from dt.
"""
if unit_str == 'd':
# Get the properly key'd datetime64 out of the pandas Timestamp
if data_frequency != 'daily':
arr = self.env.open_close_window(
dt,
window,
offset=-window,
).market_close.astype('datetime64[ns]').values
else:
arr = self.env.open_close_window(
dt,
window,
offset=-window,
).index.values
return arr
else:
return self.env.market_minute_window(
self.env.previous_market_minute(dt),
window,
step=-1,
)[::-1].values
def _create_panel(self, dt, spec):
"""
Constructs a rolling panel with a properly aligned date_buf.
"""
dt = normalize_to_data_freq(spec.frequency.data_frequency, dt)
window = spec.bar_count - 1
date_buf = self._create_window_date_buf(
window,
spec.frequency.unit_str,
spec.frequency.data_frequency,
dt,
)
panel = RollingPanel(
window=window,
items=self.fields,
sids=self.sids,
initial_dates=date_buf,
)
return panel
def _create_digest_panel(self,
dt,
spec,
window_starts=None,
window_closes=None):
"""
Creates a digest panel, setting the window_starts and window_closes.
If window_starts or window_closes are None, then self.cur_window_starts
or self.cur_window_closes will be used.
"""
freq = spec.frequency
window_starts = window_starts if window_starts is not None \
else self.cur_window_starts
window_closes = window_closes if window_closes is not None \
else self.cur_window_closes
window_starts[freq] = freq.normalize(dt)
window_closes[freq] = freq.window_close(window_starts[freq])
return self._create_panel(dt, spec)
def ensure_spec(self, spec, dt, bar_data):
"""
Ensure that this container has enough space to hold the data for the
given spec. This returns a HistoryContainerDelta to represent the
changes in shape that the container made to support the new
HistorySpec.
"""
updated = {}
if spec.field not in self.fields:
updated['field'] = self._add_field(spec.field)
if spec.frequency not in self.largest_specs:
updated['frequency_delta'] = self._add_frequency(
spec, dt, bar_data,
)
if spec.bar_count > self.largest_specs[spec.frequency].bar_count:
updated['length_delta'] = self._add_length(spec, dt)
return HistoryContainerDelta(**updated)
def add_sids(self, to_add):
"""
Add new sids to the container.
"""
self.sids = pd.Index(
sorted(self.sids.union(_ensure_index(to_add))),
)
self._realign_sids()
def drop_sids(self, to_drop):
"""
Remove sids from the container.
"""
self.sids = pd.Index(
sorted(self.sids.difference(_ensure_index(to_drop))),
)
self._realign_sids()
def _realign_sids(self):
"""
Realign our constituent panels after adding or removing sids.
"""
self.last_known_prior_values = self.last_known_prior_values.reindex(
columns=self.sids,
)
for panel in self.all_panels:
panel.set_minor_axis(self.sids)
def _realign_fields(self):
self.last_known_prior_values = self.last_known_prior_values.reindex(
index=self.prior_values_index,
)
for panel in self.all_panels:
panel.set_items(self.fields)
def create_digest_panels(self,
initial_sids,
initial_dt):
"""
Initialize a RollingPanel for each unique panel frequency being stored
by this container. Each RollingPanel pre-allocates enough storage
space to service the highest bar-count of any history call that it
serves.
"""
# Map from frequency -> first/last minute of the next digest to be
# rolled for that frequency.
first_window_starts = {}
first_window_closes = {}
# Map from frequency -> digest_panels.
panels = {}
for freq, largest_spec in iteritems(self.largest_specs):
if largest_spec.bar_count == 1:
# No need to allocate a digest panel; this frequency will only
# ever use data drawn from self.buffer_panel.
first_window_starts[freq] = freq.normalize(initial_dt)
first_window_closes[freq] = freq.window_close(
first_window_starts[freq]
)
continue
dt = initial_dt
rp = self._create_digest_panel(
dt,
spec=largest_spec,
window_starts=first_window_starts,
window_closes=first_window_closes,
)
panels[freq] = rp
return panels, first_window_starts, first_window_closes
def create_buffer_panel(self, initial_dt, bar_data):
"""
Initialize a RollingPanel containing enough minutes to service all our
frequencies.
"""
max_bars_needed = max(
freq.max_bars for freq in self.unique_frequencies
)
freq = '1m' if self.data_frequency == 'minute' else '1d'
spec = HistorySpec(
max_bars_needed + 1, freq, None, None, self.env,
self.data_frequency,
)
rp = self._create_panel(
initial_dt, spec,
)
self.buffer_spec = spec
if bar_data is not None:
frame = self.frame_from_bardata(bar_data, initial_dt)
rp.add_frame(initial_dt, frame)
return rp
def convert_columns(self, values):
"""
If columns have a specific type you want to enforce, overwrite this
method and return the transformed values.
"""
return values
def digest_bars(self, history_spec, do_ffill):
"""
Get the last (history_spec.bar_count - 1) bars from self.digest_panel
for the requested HistorySpec.
"""
bar_count = history_spec.bar_count
if bar_count == 1:
# slicing with [1 - bar_count:] doesn't work when bar_count == 1,
# so special-casing this.
res = pd.DataFrame(index=[], columns=self.sids, dtype=float)
return res.values, res.index
field = history_spec.field
# Panel axes are (field, dates, sids). We want just the entries for
# the requested field, the last (bar_count - 1) data points, and all
# sids.
digest_panel = self.digest_panels[history_spec.frequency]
frame = digest_panel.get_current(field, raw=True)
if do_ffill:
# Do forward-filling *before* truncating down to the requested
# number of bars. This protects us from losing data if an illiquid
# stock has a gap in its price history.
filled = ffill_digest_frame_from_prior_values(
history_spec.frequency,
history_spec.field,
frame,
self.last_known_prior_values,
raw=True
# Truncate only after we've forward-filled
)
indexer = slice(1 - bar_count, None)
return filled[indexer], digest_panel.current_dates()[indexer]
else:
indexer = slice(1 - bar_count, None)
return frame[indexer, :], digest_panel.current_dates()[indexer]
def buffer_panel_minutes(self,
buffer_panel,
earliest_minute=None,
latest_minute=None,
raw=False):
"""
Get the minutes in @buffer_panel between @earliest_minute and
@latest_minute, inclusive.
@buffer_panel can be a RollingPanel or a plain Panel. If a
RollingPanel is supplied, we call `get_current` to extract a Panel
object.
        If no value is specified for @earliest_minute, use all the minutes we
        have up until @latest_minute.
If no value for @latest_minute is specified, use all values up until
the latest minute.
"""
if isinstance(buffer_panel, RollingPanel):
buffer_panel = buffer_panel.get_current(start=earliest_minute,
end=latest_minute,
raw=raw)
return buffer_panel
# Using .ix here rather than .loc because loc requires that the keys
# are actually in the index, whereas .ix returns all the values between
# earliest_minute and latest_minute, which is what we want.
return buffer_panel.ix[:, earliest_minute:latest_minute, :]
def frame_from_bardata(self, data, algo_dt):
"""
Create a DataFrame from the given BarData and algo dt.
"""
data = data._data
frame_data = np.empty((len(self.fields), len(self.sids))) * np.nan
for j, sid in enumerate(self.sids):
sid_data = data.get(sid)
if not sid_data:
continue
if algo_dt != sid_data['dt']:
continue
for i, field in enumerate(self.fields):
frame_data[i, j] = sid_data.get(field, np.nan)
return pd.DataFrame(
frame_data,
index=self.fields.copy(),
columns=self.sids.copy(),
)
def update(self, data, algo_dt):
"""
        Takes the @data for the bar at @algo_dt, checks whether we need to roll
        any new digests, then adds the new data to the buffer panel.
"""
frame = self.frame_from_bardata(data, algo_dt)
self.update_last_known_values()
self.update_digest_panels(algo_dt, self.buffer_panel)
self.buffer_panel.add_frame(algo_dt, frame)
def update_digest_panels(self, algo_dt, buffer_panel, freq_filter=None):
"""
Check whether @algo_dt is greater than cur_window_close for any of our
frequencies. If so, roll a digest for that frequency using data drawn
from @buffer panel and insert it into the appropriate digest panels.
If @freq_filter is specified, only use the given data to update
frequencies on which the filter returns True.
This takes `buffer_panel` as an argument rather than using
self.buffer_panel so that this method can be used to add supplemental
data from an external source.
"""
for frequency in filter(freq_filter, self.unique_frequencies):
# We don't keep a digest panel if we only have a length-1 history
# spec for a given frequency
digest_panel = self.digest_panels.get(frequency, None)
while algo_dt > self.cur_window_closes[frequency]:
earliest_minute = self.cur_window_starts[frequency]
latest_minute = self.cur_window_closes[frequency]
minutes_to_process = self.buffer_panel_minutes(
buffer_panel,
earliest_minute=earliest_minute,
latest_minute=latest_minute,
raw=True
)
if digest_panel is not None:
# Create a digest from minutes_to_process and add it to
# digest_panel.
digest_frame = self.create_new_digest_frame(
minutes_to_process,
self.fields,
self.sids
)
digest_panel.add_frame(
latest_minute,
digest_frame,
self.fields,
self.sids
)
# Update panel start/close for this frequency.
self.cur_window_starts[frequency] = \
frequency.next_window_start(latest_minute)
self.cur_window_closes[frequency] = \
frequency.window_close(self.cur_window_starts[frequency])
def frame_to_series(self, field, frame, columns=None):
"""
Convert a frame with a DatetimeIndex and sid columns into a series with
a sid index, using the aggregator defined by the given field.
"""
if isinstance(frame, pd.DataFrame):
columns = frame.columns
frame = frame.values
if not len(frame):
return pd.Series(
data=(0 if field == 'volume' else np.nan),
index=columns,
).values
if field in ['price', 'close_price']:
# shortcircuit for full last row
vals = frame[-1]
if np.all(~np.isnan(vals)):
return vals
return ffill(frame)[-1]
elif field == 'open_price':
return bfill(frame)[0]
elif field == 'volume':
return np.nansum(frame, axis=0)
elif field == 'high':
return np.nanmax(frame, axis=0)
elif field == 'low':
return np.nanmin(frame, axis=0)
else:
raise ValueError("Unknown field {}".format(field))
def aggregate_ohlcv_panel(self,
fields,
ohlcv_panel,
items=None,
minor_axis=None):
"""
Convert an OHLCV Panel into a DataFrame by aggregating each field's
frame into a Series.
"""
vals = ohlcv_panel
if isinstance(ohlcv_panel, pd.Panel):
vals = ohlcv_panel.values
items = ohlcv_panel.items
minor_axis = ohlcv_panel.minor_axis
data = [
self.frame_to_series(
field,
vals[items.get_loc(field)],
minor_axis
)
for field in fields
]
return np.array(data)
def create_new_digest_frame(self, buffer_minutes, items=None,
minor_axis=None):
"""
Package up minutes in @buffer_minutes into a single digest frame.
"""
return self.aggregate_ohlcv_panel(
self.fields,
buffer_minutes,
items=items,
minor_axis=minor_axis
)
def update_last_known_values(self):
"""
Store the non-NaN values from our oldest frame in each frequency.
"""
ffillable = self.ffillable_fields
if not len(ffillable):
return
for frequency in self.unique_frequencies:
digest_panel = self.digest_panels.get(frequency, None)
if digest_panel:
oldest_known_values = digest_panel.oldest_frame(raw=True)
else:
oldest_known_values = self.buffer_panel.oldest_frame(raw=True)
oldest_vals = oldest_known_values
oldest_columns = self.fields
for field in ffillable:
f_idx = oldest_columns.get_loc(field)
field_vals = oldest_vals[f_idx]
# isnan would be fast, possible to use?
non_nan_sids = np.where(pd.notnull(field_vals))
key = (frequency.freq_str, field)
key_loc = self.last_known_prior_values.index.get_loc(key)
self.last_known_prior_values.values[
key_loc, non_nan_sids
] = field_vals[non_nan_sids]
def get_history(self, history_spec, algo_dt):
"""
Main API used by the algoscript is mapped to this function.
Selects from the overarching history panel the values for the
@history_spec at the given @algo_dt.
"""
field = history_spec.field
do_ffill = history_spec.ffill
# Get our stored values from periods prior to the current period.
digest_frame, index = self.digest_bars(history_spec, do_ffill)
# Get minutes from our buffer panel to build the last row of the
# returned frame.
buffer_panel = self.buffer_panel_minutes(
self.buffer_panel,
earliest_minute=self.cur_window_starts[history_spec.frequency],
raw=True
)
buffer_frame = buffer_panel[self.fields.get_loc(field)]
if do_ffill:
buffer_frame = ffill_buffer_from_prior_values(
history_spec.frequency,
field,
buffer_frame,
digest_frame,
self.last_known_prior_values,
raw=True
)
last_period = self.frame_to_series(field, buffer_frame, self.sids)
return fast_build_history_output(digest_frame,
last_period,
algo_dt,
index=index,
columns=self.sids)
def fast_build_history_output(buffer_frame,
last_period,
algo_dt,
index=None,
columns=None):
"""
Optimized concatenation of DataFrame and Series for use in
HistoryContainer.get_history.
Relies on the fact that the input arrays have compatible shapes.
"""
buffer_values = buffer_frame
if isinstance(buffer_frame, pd.DataFrame):
buffer_values = buffer_frame.values
index = buffer_frame.index
columns = buffer_frame.columns
return pd.DataFrame(
data=np.vstack(
[
buffer_values,
last_period,
]
),
index=fast_append_date_to_index(
index,
pd.Timestamp(algo_dt)
),
columns=columns,
)
def fast_append_date_to_index(index, timestamp):
"""
Append a timestamp to a DatetimeIndex. DatetimeIndex.append does not
appear to work.
"""
return pd.DatetimeIndex(
np.hstack(
[
index.values,
[timestamp.asm8],
]
),
tz='UTC',
)
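# --- Hedged usage sketch (added for illustration): extend a UTC DatetimeIndex
# by one bar, as done when the still-open period is appended in get_history.
def _fast_append_date_example():
    index = pd.DatetimeIndex(['2014-01-02', '2014-01-03'], tz='UTC')
    return fast_append_date_to_index(index,
                                     pd.Timestamp('2014-01-06', tz='UTC'))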
|
apache-2.0
|
BhallaLab/moose-core
|
python/rdesigneur/moogul.py
|
2
|
12667
|
# Moogul.py: MOOSE Graphics Using Lines
# This is a fallback graphics interface for displaying neurons using
# regular matplotlib routines.
# Put in because the GL versions like moogli need all sorts of difficult
# libraries and dependencies.
# Copyright (C) Upinder S. Bhalla NCBS 2018
# This program is licensed under the GNU Public License version 3.
#
import moose
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.art3d import Line3DCollection
class MoogulError( Exception ):
def __init__( self, value ):
self.value = value
def __str__( self ):
return repr( self.value )
class MooView:
''' The MooView class is a window in which to display one or more
    moose cells, using the MooNeuron class.'''
def __init__( self, swx = 10, swy = 12, hideAxis = True
):
plt.ion()
self.fig_ = plt.figure( figsize = (swx, swy) )
self.ax = self.fig_.add_subplot(111, projection='3d' )
self.drawables_ = []
self.fig_.canvas.mpl_connect("key_press_event", self.moveView )
plt.rcParams['keymap.xscale'] = ''
plt.rcParams['keymap.yscale'] = ''
plt.rcParams['keymap.zoom'] = ''
plt.rcParams['keymap.back'] = ''
plt.rcParams['keymap.home'] = ''
plt.rcParams['keymap.forward'] = ''
plt.rcParams['keymap.all_axes'] = ''
self.hideAxis = hideAxis
if self.hideAxis:
self.ax.set_axis_off()
#self.ax.margins( tight = True )
self.ax.margins()
self.sensitivity = 7.0 # degrees rotation
self.zoom = 1.05
def addDrawable( self, n ):
self.drawables_.append( n )
def firstDraw( self, rotation = 0.0, elev = 0.0, azim = 0.0 ):
self.coordMax = 0.0
self.coordMin = 0.0
if rotation == 0.0:
self.doRotation = False
self.rotation = 7 # default rotation per frame, in degrees.
else:
self.doRotation = True
self.rotation = rotation * 180/np.pi # arg units: radians/frame
self.azim = azim * 180/np.pi
self.elev = elev * 180/np.pi
for i in self.drawables_:
cmax, cmin = i.drawForTheFirstTime( self.ax )
self.coordMax = max( cmax, self.coordMax )
self.coordMin = min( cmin, self.coordMin )
if self.coordMin == self.coordMax:
self.coordMax = 1+self.coordMin
self.ax.set_xlim3d( self.coordMin, self.coordMax )
self.ax.set_ylim3d( self.coordMin, self.coordMax )
self.ax.set_zlim3d( self.coordMin, self.coordMax )
self.ax.view_init( elev = self.elev, azim = self.azim )
#self.ax.view_init( elev = -80.0, azim = 90.0 )
#self.colorbar = plt.colorbar( self.drawables_[0].segments )
self.colorbar = self.fig_.colorbar( self.drawables_[0].segments )
self.colorbar.set_label( self.drawables_[0].fieldInfo[3])
self.timeStr = self.ax.text2D( 0.05, 0.05,
"Time= 0.0", transform=self.ax.transAxes)
self.fig_.canvas.draw()
plt.show()
def updateValues( self ):
time = moose.element( '/clock' ).currentTime
self.timeStr.set_text( "Time= " + str(time) )
for i in self.drawables_:
i.updateValues()
if self.doRotation and abs( self.rotation ) < 120:
self.ax.azim += self.rotation
#self.fig_.canvas.draw()
plt.pause(0.001)
def moveView(self, event):
x0 = self.ax.get_xbound()[0]
x1 = self.ax.get_xbound()[1]
xk = (x0 - x1) / self.sensitivity
y0 = self.ax.get_ybound()[0]
y1 = self.ax.get_ybound()[1]
yk = (y0 - y1) / self.sensitivity
z0 = self.ax.get_zbound()[0]
z1 = self.ax.get_zbound()[1]
zk = (z0 - z1) / self.sensitivity
if event.key == "up" or event.key == "k":
self.ax.set_ybound( y0 + yk, y1 + yk )
if event.key == "down" or event.key == "j":
self.ax.set_ybound( y0 - yk, y1 - yk )
if event.key == "left" or event.key == "h":
self.ax.set_xbound( x0 + xk, x1 + xk )
if event.key == "right" or event.key == "l":
self.ax.set_xbound( x0 - xk, x1 - xk )
if event.key == "ctrl+up":
self.ax.set_zbound( z0 + zk, z1 + zk )
if event.key == "ctrl+down":
self.ax.set_zbound( z0 - zk, z1 - zk )
if event.key == "." or event.key == ">": # zoom in, bigger
self.ax.set_xbound( x0/self.zoom, x1/self.zoom )
self.ax.set_ybound( y0/self.zoom, y1/self.zoom )
self.ax.set_zbound( z0/self.zoom, z1/self.zoom )
if event.key == "," or event.key == "<": # zoom out, smaller
self.ax.set_xbound( x0*self.zoom, x1*self.zoom )
self.ax.set_ybound( y0*self.zoom, y1*self.zoom )
self.ax.set_zbound( z0*self.zoom, z1*self.zoom )
if event.key == "a": # autoscale to fill view.
self.ax.set_xlim3d( self.coordMin, self.coordMax )
self.ax.set_ylim3d( self.coordMin, self.coordMax )
self.ax.set_zlim3d( self.coordMin, self.coordMax )
if event.key == "p": # pitch
self.ax.elev += self.sensitivity
if event.key == "P":
self.ax.elev -= self.sensitivity
if event.key == "y": # yaw
self.ax.azim += self.sensitivity
if event.key == "Y":
self.ax.azim -= self.sensitivity
# Don't have anything for roll
if event.key == "g":
self.hideAxis = not self.hideAxis
if self.hideAxis:
self.ax.set_axis_off()
else:
self.ax.set_axis_on()
if event.key == "t": # Turn on/off twisting/autorotate
self.doRotation = not self.doRotation
if event.key == "?": # Print out help for these commands
self.printMoogulHelp()
self.fig_.canvas.draw()
def printMoogulHelp( self ):
print( '''
Key bindings for Moogul:
Up or k: pan object up
Down or j: pan object down
left or h: pan object left. Bug: direction depends on azimuth.
right or l: pan object right Bug: direction depends on azimuth
. or >: Zoom in: make object appear bigger
, or <: Zoom out: make object appear smaller
a: Autoscale to fill view
p: Pitch down
P: Pitch up
y: Yaw counterclockwise
    Y: Yaw clockwise
g: Toggle visibility of grid
t: Toggle turn (rotation along long axis of cell)
?: Print this help page.
''')
#####################################################################
class MooDrawable:
''' Base class for drawing things'''
def __init__( self,
fieldInfo, field, relativeObj, maxLineWidth,
colormap,
lenScale, diaScale, autoscale,
valMin, valMax
):
self.field = field
self.relativeObj = relativeObj
self.maxLineWidth = maxLineWidth
self.lenScale = lenScale
self.diaScale = diaScale
self.colormap = colormap
self.autoscale = autoscale
self.valMin = valMin
self.valMax = valMax
self.fieldInfo = fieldInfo
self.fieldScale = fieldInfo[2]
#FieldInfo = [baseclass, fieldGetFunc, scale, axisText, min, max]
def updateValues( self ):
''' Obtains values from the associated cell'''
self.val = np.array([moose.getField(i, self.field) for i in self.activeObjs]) * self.fieldScale
cmap = plt.get_cmap( self.colormap )
if self.autoscale:
valMin = min( self.val )
valMax = max( self.val )
else:
valMin = self.valMin
valMax = self.valMax
scaleVal = (self.val - valMin) / (valMax - valMin)
self.rgba = [ cmap(i) for i in scaleVal ]
self.segments.set_color( self.rgba )
return
def drawForTheFirstTime( self, ax ):
self.segments = Line3DCollection( self.activeCoords,
linewidths = self.linewidth, cmap = plt.get_cmap(self.colormap) )
self.cax = ax.add_collection3d( self.segments )
self.segments.set_array( self.valMin + np.array( range( len(self.activeCoords) ) ) * (self.valMax-self.valMin)/len(self.activeCoords) )
return self.coordMax, self.coordMin
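# --- Hedged, self-contained sketch (added for illustration) of the colouring
# scheme used in MooDrawable.updateValues: scalar field values are normalised
# to [0, 1], mapped through a matplotlib colormap, and pushed onto a
# Line3DCollection. The segment coordinates and values below are made up.
def _colormap_segments_example():
    segs = [[(0, 0, 0), (1, 0, 0)], [(1, 0, 0), (2, 0, 0)]]
    vals = np.array([-0.07, 0.02])          # e.g. two Vm values, in volts
    valMin, valMax = -0.1, 0.05
    cmap = plt.get_cmap('jet')
    rgba = [cmap(v) for v in (vals - valMin) / (valMax - valMin)]
    coll = Line3DCollection(segs, linewidths=2)
    coll.set_color(rgba)
    return coll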
#####################################################################
class MooNeuron( MooDrawable ):
''' Draws collection of line segments of defined dia and color'''
def __init__( self,
neuronId,
fieldInfo,
field = 'Vm',
relativeObj = '.',
maxLineWidth = 20,
colormap = 'jet',
lenScale = 1e6, diaScale = 1e6, autoscale = False,
valMin = -0.1, valMax = 0.05,
):
#self.isFieldOnCompt =
#field in ( 'Vm', 'Im', 'Rm', 'Cm', 'Ra', 'inject', 'diameter' )
MooDrawable.__init__( self, fieldInfo, field = field,
relativeObj = relativeObj, maxLineWidth = maxLineWidth,
colormap = colormap, lenScale = lenScale,
diaScale = diaScale, autoscale = autoscale,
valMin = valMin, valMax = valMax )
self.neuronId = neuronId
self.updateCoords()
def updateCoords( self ):
''' Obtains coords from the associated cell'''
self.compts_ = moose.wildcardFind( self.neuronId.path + "/#[ISA=CompartmentBase]" )
        # Matplotlib3d isn't able to do full rotations about a y axis,
# which is what the NeuroMorpho models use, so
# here we shuffle the axes around. Should be an option.
#coords = np.array([[[i.x0,i.y0,i.z0],[i.x,i.y,i.z]]
#for i in self.compts_])
coords = np.array([[[i.z0,i.x0,i.y0],[i.z,i.x,i.y]]
for i in self.compts_])
dia = np.array([i.diameter for i in self.compts_])
if self.relativeObj == '.':
self.activeCoords = coords
self.activeDia = dia
self.activeObjs = self.compts_
else:
self.activeObjs = []
self.activeCoords = []
self.activeDia = []
for i,j,k in zip( self.compts_, coords, dia ):
if moose.exists( i.path + '/' + self.relativeObj ):
elm = moose.element( i.path + '/' + self.relativeObj )
self.activeObjs.append( elm )
self.activeCoords.append( j )
self.activeDia.append( k )
self.activeCoords = np.array( self.activeCoords ) * self.lenScale
self.coordMax = np.amax( self.activeCoords )
self.coordMin = np.amin( self.activeCoords )
self.linewidth = np.array( [ min(self.maxLineWidth, 1 + int(i * self.diaScale )) for i in self.activeDia ] )
return
#####################################################################
class MooReacSystem( MooDrawable ):
''' Draws collection of line segments of defined dia and color'''
def __init__( self,
mooObj, fieldInfo,
field = 'conc',
relativeObj = '.',
maxLineWidth = 100,
colormap = 'jet',
lenScale = 1e6, diaScale = 20e6, autoscale = False,
valMin = 0.0, valMax = 1.0
):
MooDrawable.__init__( self, fieldInfo, field = field,
relativeObj = relativeObj, maxLineWidth = maxLineWidth,
colormap = colormap, lenScale = lenScale,
diaScale = diaScale, autoscale = autoscale,
valMin = valMin, valMax = valMax )
self.mooObj = mooObj
self.updateCoords()
def updateCoords( self ):
''' For now a dummy cylinder '''
dx = 1e-6
dummyDia = 20e-6
numObj = len( self.mooObj )
x = np.arange( 0, (numObj+1) * dx, dx )
y = np.zeros( numObj + 1)
z = np.zeros( numObj + 1)
coords = np.array([[[i*dx,0,0],[(i+1)*dx,0,0]] for i in range( numObj )] )
dia = np.ones( numObj ) * dummyDia
self.activeCoords = coords
self.activeDia = dia
self.activeObjs = self.mooObj
self.activeCoords = np.array( self.activeCoords ) * self.lenScale
self.coordMax = np.amax( self.activeCoords )
self.coordMin = np.amin( self.activeCoords )
self.linewidth = np.array( [ min(self.maxLineWidth, 1 + int(i * self.diaScale )) for i in self.activeDia ] )
return
|
gpl-3.0
|
harshaneelhg/scikit-learn
|
sklearn/linear_model/tests/test_randomized_l1.py
|
214
|
4690
|
# Authors: Alexandre Gramfort <[email protected]>
# License: BSD 3 clause
import numpy as np
from scipy import sparse
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import assert_raises
from sklearn.linear_model.randomized_l1 import (lasso_stability_path,
RandomizedLasso,
RandomizedLogisticRegression)
from sklearn.datasets import load_diabetes, load_iris
from sklearn.feature_selection import f_regression, f_classif
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model.base import center_data
diabetes = load_diabetes()
X = diabetes.data
y = diabetes.target
X = StandardScaler().fit_transform(X)
X = X[:, [2, 3, 6, 7, 8]]
# compute the univariate feature scores so the tests can check that the best
# features are recovered
F, _ = f_regression(X, y)
def test_lasso_stability_path():
# Check lasso stability path
# Load diabetes data and add noisy features
scaling = 0.3
coef_grid, scores_path = lasso_stability_path(X, y, scaling=scaling,
random_state=42,
n_resampling=30)
assert_array_equal(np.argsort(F)[-3:],
np.argsort(np.sum(scores_path, axis=1))[-3:])
def test_randomized_lasso():
# Check randomized lasso
scaling = 0.3
selection_threshold = 0.5
    # with a single alpha
clf = RandomizedLasso(verbose=False, alpha=1, random_state=42,
scaling=scaling,
selection_threshold=selection_threshold)
feature_scores = clf.fit(X, y).scores_
assert_array_equal(np.argsort(F)[-3:], np.argsort(feature_scores)[-3:])
# or with many alphas
clf = RandomizedLasso(verbose=False, alpha=[1, 0.8], random_state=42,
scaling=scaling,
selection_threshold=selection_threshold)
feature_scores = clf.fit(X, y).scores_
assert_equal(clf.all_scores_.shape, (X.shape[1], 2))
assert_array_equal(np.argsort(F)[-3:], np.argsort(feature_scores)[-3:])
X_r = clf.transform(X)
X_full = clf.inverse_transform(X_r)
assert_equal(X_r.shape[1], np.sum(feature_scores > selection_threshold))
assert_equal(X_full.shape, X.shape)
clf = RandomizedLasso(verbose=False, alpha='aic', random_state=42,
scaling=scaling)
feature_scores = clf.fit(X, y).scores_
assert_array_equal(feature_scores, X.shape[1] * [1.])
clf = RandomizedLasso(verbose=False, scaling=-0.1)
assert_raises(ValueError, clf.fit, X, y)
clf = RandomizedLasso(verbose=False, scaling=1.1)
assert_raises(ValueError, clf.fit, X, y)
def test_randomized_logistic():
# Check randomized sparse logistic regression
iris = load_iris()
X = iris.data[:, [0, 2]]
y = iris.target
X = X[y != 2]
y = y[y != 2]
F, _ = f_classif(X, y)
scaling = 0.3
clf = RandomizedLogisticRegression(verbose=False, C=1., random_state=42,
scaling=scaling, n_resampling=50,
tol=1e-3)
X_orig = X.copy()
feature_scores = clf.fit(X, y).scores_
assert_array_equal(X, X_orig) # fit does not modify X
assert_array_equal(np.argsort(F), np.argsort(feature_scores))
clf = RandomizedLogisticRegression(verbose=False, C=[1., 0.5],
random_state=42, scaling=scaling,
n_resampling=50, tol=1e-3)
feature_scores = clf.fit(X, y).scores_
assert_array_equal(np.argsort(F), np.argsort(feature_scores))
def test_randomized_logistic_sparse():
# Check randomized sparse logistic regression on sparse data
iris = load_iris()
X = iris.data[:, [0, 2]]
y = iris.target
X = X[y != 2]
y = y[y != 2]
# center here because sparse matrices are usually not centered
X, y, _, _, _ = center_data(X, y, True, True)
X_sp = sparse.csr_matrix(X)
F, _ = f_classif(X, y)
scaling = 0.3
clf = RandomizedLogisticRegression(verbose=False, C=1., random_state=42,
scaling=scaling, n_resampling=50,
tol=1e-3)
feature_scores = clf.fit(X, y).scores_
clf = RandomizedLogisticRegression(verbose=False, C=1., random_state=42,
scaling=scaling, n_resampling=50,
tol=1e-3)
feature_scores_sp = clf.fit(X_sp, y).scores_
assert_array_equal(feature_scores, feature_scores_sp)
|
bsd-3-clause
|
samueljackson92/elo
|
elo.py
|
1
|
4699
|
import numpy as np
import pandas as pd
class Elo(object):
def __init__(self, teams, **kwargs):
self.set_parameters(**kwargs)
self.ratings = self._initalise_ratings(teams)
def set_parameters(self, mean_rating=1500, k=20, home_advantage=0):
if mean_rating < 0:
raise ValueError("Mean rating must be >= 0")
self._mean_rating = mean_rating
if k <= 0:
raise ValueError("K parameter must be > 0")
self._k = k
if home_advantage < 0:
raise ValueError("Home advantage must be > 0")
self._home_advantage = home_advantage
def _initalise_ratings(self, teams):
inital_values = np.empty(teams.size)
inital_values.fill(self._mean_rating)
return pd.Series(data=inital_values, index=teams)
def calculate_likelihood(self, rating_a, rating_b, mov_multiplier=1, bias=0):
""" Compute the expected score for team A given
        team A's rating (rating_a) and team B's rating (rating_b)
"""
if rating_a < 0 or rating_b < 0:
raise ValueError("Ratings must be positive")
diff = (rating_b - (rating_a + bias))
likelihood = 1.0 / (1.0 + 10**(diff / 400.0))
return likelihood
def _marign_of_victory_multiplier(self, rating_a, rating_b, mov):
return np.log(np.fabs(mov)+1) * (2.2/((rating_a-rating_b)*.001+2.2))
def update_rating(self, rating, score, likelihood, mov_multiplier=1):
""" Compute the new rating from team A given their old rating (r_a)
their actual score (s_a) and there expected score (e_a)
"""
if rating < 0 or score < 0 or likelihood < 0:
raise ValueError("Parameters must be positive")
if likelihood > 1:
raise ValueError("likelihood must be between 0 < likelihood < 1")
rating = rating + self._k * mov_multiplier * (score - likelihood)
return int(rating)
def predict(self, games):
if any([not isinstance(el, tuple) for el in games]):
games = [games]
predictions = [self._predict_single_game(game) for game in games]
df = pd.concat(predictions, axis=1).T
df.columns = ['Team A', 'Elo Score', 'Win %', 'Team B', 'Elo Score', 'Win %', 'Predicted Winner']
return df
def _predict_single_game(self, game):
home_team, away_team = game
if home_team not in self.ratings:
raise KeyError('Team %s is not found in the ratings' % home_team)
if away_team not in self.ratings:
raise KeyError('Team %s is not found in the ratings' % away_team)
home_rating = self.ratings[home_team]
away_rating = self.ratings[away_team]
likelihood_home = self.calculate_likelihood(home_rating, away_rating,
bias=self._home_advantage)
likelihood_away = self.calculate_likelihood(away_rating, home_rating,
bias=-self._home_advantage)
winner = home_team if likelihood_home > likelihood_away else away_team
return pd.Series([home_team, home_rating, likelihood_home, away_team, away_rating, likelihood_away, winner])
def train(self, game):
home_team, home_pts, away_team, away_pts = game
home_rating = self.ratings[home_team]
away_rating = self.ratings[away_team]
home_score, away_score = self._calculate_game_points(home_pts, away_pts)
prediction = self.predict((home_team, away_team))
home_likelihood = prediction.ix[:, 2][0]
away_likelihood = prediction.ix[:, 5][0]
mov = (home_pts-away_pts)
mov_multiplier = self._marign_of_victory_multiplier(home_rating, away_rating, mov)
self.ratings[home_team] = self.update_rating(home_rating, home_score, home_likelihood, mov_multiplier)
self.ratings[away_team] = self.update_rating(away_rating, away_score, away_likelihood, mov_multiplier)
def _calculate_game_points(self, home_pts, away_pts):
if home_pts == away_pts:
            # game was a draw; both teams score half a win.
return 0.5, 0.5
elif home_pts > away_pts:
# home team beat away team
return 1, 0
elif home_pts < away_pts:
# away team beat home team
return 0, 1
def revert_ratings_to_mean(self, reversion_weight):
if reversion_weight < 0 or reversion_weight > 1:
raise ValueError('Reversion weight must be 0 < w < 1')
self.ratings = (reversion_weight * self.ratings) + ((1-reversion_weight) * self._mean_rating)
self.ratings = self.ratings.astype(int)
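# --- Hedged usage sketch (added for illustration; team names are made up):
# build ratings for two hypothetical teams and print the pre-match win
# likelihoods implied by the Elo expected-score formula.
if __name__ == '__main__':
    elo = Elo(pd.Index(['Home FC', 'Away FC']), home_advantage=50)
    print(elo.predict(('Home FC', 'Away FC')))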
|
mit
|
fuzzy-id/midas
|
midas/plots.py
|
1
|
9460
|
# -*- coding: utf-8 -*-
import collections
import datetime
import os.path
import string
import matplotlib.dates
import matplotlib.pyplot as plt
import numpy
import operator
import pandas
from midas.compat import imap
from midas.compat import str_type
from midas.see5 import calculate_recall_precision
from midas.see5 import calculate_tpr
from midas.see5 import calculate_fpr
from midas.tools import iter_files_content
from midas.pig_schema import FLATTENED_PARSER
from midas.pig_schema import SITES_W_COMPANY_PARSER
def iter_sites_w_company(directory_or_file):
contents = iter_files_content(directory_or_file)
for swc in imap(SITES_W_COMPANY_PARSER, contents):
ranks = map(operator.attrgetter('rank'), swc.ranking)
index = pandas.DatetimeIndex(map(operator.attrgetter('tstamp'),
swc.ranking))
ts = pandas.Series(ranks, index=index)
tstamp = pandas.Timestamp(swc.tstamp)
yield (swc.site, ts, swc.company, swc.code, tstamp)
##################################
## Funding Rounds per date Plot ##
##################################
def make_fr_per_date_plot(companies, plot_file=None):
contents = iter_files_content(companies)
d = collections.defaultdict(list)
min_date = datetime.date(2011, 3, 1)
months = set()
for c in imap(FLATTENED_PARSER, contents):
if c.tstamp >= min_date:
d[c.code].append(matplotlib.dates.date2num(c.tstamp))
months.add(datetime.date(c.tstamp.year, c.tstamp.month, 1))
months = sorted(months)
right_border = months[-1] + datetime.timedelta(31)
right_border = datetime.date(right_border.year, right_border.month, 1)
months.append(right_border)
fig = plt.figure(figsize=(4*1.4, 3*1.4))
ax = fig.add_subplot(111)
ax.hist(d.values(), label=map(str.title, d.keys()),
bins=matplotlib.dates.date2num(months))
ax.set_xlim(matplotlib.dates.date2num(months[0]),
matplotlib.dates.date2num(months[-1]))
ax.legend()
ax.xaxis.set_major_locator(
matplotlib.dates.MonthLocator(bymonthday=15, interval=2)
)
ax.xaxis.set_major_formatter(
matplotlib.ticker.FuncFormatter(
lambda d, _: matplotlib.dates.num2date(d).strftime('%B %Y')
)
)
fig.autofmt_xdate()
ax.set_ylabel('Number of Funding Rounds')
ax.grid(True, axis='y')
if plot_file:
fig.savefig(plot_file)
return fig
#########################################
## Available Days Before Funding Round ##
#########################################
def get_available_days_before_fr(ts, fr):
site, date, code = fr
date = pandas.Timestamp(date)
ts_site = ts[site].dropna()
return code, (ts_site.index[0] - date).days
def make_available_days_before_funding_rounds_plot_data(sites_w_company):
collected = collections.defaultdict(list)
for site, ts, company, code, tstamp in sites_w_company:
ts = ts.dropna()
available_days = (ts.index[0] - tstamp).days
if available_days > 365:
available_days = 400
elif available_days < 0:
available_days = -40
collected[code].append(available_days)
return collected
def make_available_days_before_funding_rounds_plot(sites_w_company,
plot_file=None):
data = make_available_days_before_funding_rounds_plot_data(
sites_w_company
)
fig = plt.figure()
ax = fig.add_subplot('111')
res = ax.hist(data.values(),
bins=10,
histtype='bar',
label=map(string.capitalize, data.keys()),
log=True)
ax.legend(loc='best')
ax.set_ylabel('Number of Funding Rounds')
ax.set_xlabel('Number of Days')
ax.grid(which='both')
if plot_file:
fig.savefig(plot_file)
return fig
#########################################
## Median of Rank before Funding Round ##
#########################################
def make_final_rank_before_funding_plot(sites_w_company):
before_days = [5, 95, 215]
offset_days = 10
data = [ make_rank_before_funding_plot_data(sites_w_company,
days,
offset_days)
for days in before_days ]
fig = plt.figure()
axes = [ fig.add_subplot(len(before_days), 1, i)
for i in range(1, len(before_days) + 1) ]
bins = range(0, 1000001, 100000)
for ax, d, days in zip(axes, data, before_days):
ax.hist(d.values(), bins=bins, label=map(string.capitalize, d.keys()))
ax.legend(loc='best')
ax.grid(True)
title = make_rank_before_funding_plot_title(days, offset_days)
ax.set_title(title, fontsize='medium')
axes[1].set_ylabel('Number of Funding Rounds', fontsize='x-large')
axes[2].set_xlabel('Rank', fontsize='x-large')
return fig
def make_rank_before_funding_plot(sites_w_company,
before_days,
offset_days=10,
plot_file=None):
collected = make_rank_before_funding_plot_data(sites_w_company,
before_days,
offset_days)
fig = plt.figure()
    ax = fig.add_subplot(111)
res = ax.hist(collected.values(),
bins=9,
label=map(string.capitalize, collected.keys()))
ax.legend(loc='best')
ax.grid(True)
ax.set_xlabel('Rank')
ax.set_ylabel('Number of Funding Rounds')
ax.set_title(make_rank_before_funding_plot_title(before_days,
offset_days))
if plot_file:
fig.savefig(plot_file)
return fig
def make_rank_before_funding_plot_fname(directory,
before_days,
days_offset=10,
prefix='rank_before_funding_plot'):
return os.path.join(directory,
'{0}_-_before_days_{1}_-_days_offset_{2}.png'\
.format(prefix, before_days, days_offset))
def make_rank_before_funding_plot_data(sites_w_company, before_days,
days_offset=10):
offset = pandas.DateOffset(days=days_offset)
start = pandas.DateOffset(days=before_days)
collected = collections.defaultdict(list)
for site, ts, company, code, tstamp in sites_w_company:
try:
median = median_rank_of_ts_in_period(ts, tstamp - start, offset)
except KeyError:
continue
if numpy.isnan(median):
continue
collected[code].append(median)
return collected
def median_rank_of_ts_in_period(ts, start_date, offset):
period = ts[start_date:(start_date + offset)].dropna()
return period.median()
def make_rank_before_funding_plot_title(before_days, offset_days):
end_days = before_days - offset_days
if 0 > offset_days:
        raise ValueError('offset_days must be greater than zero')
elif 0 < end_days < before_days:
first = before_days
snd = '{} days before'.format(end_days)
elif end_days < 0 < before_days:
first = '{0} days before'.format(before_days)
snd = '{0} days after'.format(-1*end_days)
elif end_days < 0 == before_days:
first = 'Fund Raise'
snd = '{} days after'.format(end_days * -1)
elif end_days == 0 < before_days:
first = '{} days before'.format(before_days)
snd = ''
title = 'Median of Rank from {} to {} Fund Raise'.format(first, snd)
return title
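# Added, hedged helper (not part of the original module; illustrative only).
# It exercises the title logic above with hypothetical numbers.
def _rank_before_funding_title_sketch():
    # A 10-day window starting 95 days before the round yields
    #   'Median of Rank from 95 to 85 days before Fund Raise'
    # while a window straddling the round (5 days before, 10-day offset) yields
    #   'Median of Rank from 5 days before to 5 days after Fund Raise'.
    return [make_rank_before_funding_plot_title(95, 10),
            make_rank_before_funding_plot_title(5, 10)]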
###########################
## Recall Precision Plot ##
###########################
def make_recall_precision_plot(results, plot_file=None):
"""
``results`` should be the result of `midas.see5.main`
"""
fig = plt.figure()
    ax = fig.add_subplot(111)
ax.set_ylabel('Precision')
ax.set_xlabel('Recall')
for args, per_cost_result in results.items():
xs = []
ys = []
for cm in per_cost_result.values():
x, y = calculate_recall_precision(cm)
if numpy.isnan(y):
continue
xs.append(x)
ys.append(y)
if not isinstance(args, str_type):
args = ' '.join(args)
ax.plot(xs, ys, 'o', label=args)
ax.legend(loc='best')
ax.grid(True)
if plot_file:
fig.savefig(plot_file)
return fig
def make_tpr_fpr_plot(results, plot_file=None):
"""
``results`` should be the result of `midas.see5.main`
"""
fig = plt.figure()
    ax = fig.add_subplot(111)
ax.set_ylabel('True Positive Rate')
ax.set_xlabel('False Positive Rate')
for args, per_cost_result in results.items():
xs = []
ys = []
for confusion_matrix in per_cost_result.values():
xs.append(calculate_fpr(confusion_matrix))
ys.append(calculate_tpr(confusion_matrix))
if not isinstance(args, str_type):
args = ' '.join(args)
ax.plot(xs, ys, 'o', label=args)
ax.legend(loc='best')
ax.grid(True)
ax.plot([0.0, 0.5, 1.0], [0.0, 0.5, 1.0], ':k')
if plot_file:
fig.savefig(plot_file)
return fig
|
bsd-3-clause
|
florian-wagner/gimli
|
doc/tutorials/modelling/develop/divergenceTest.py
|
1
|
2822
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
"""
import pygimli as pg
from pygimli.viewer import show
import matplotlib.pyplot as plt
import numpy as np
from solverFVM import boundaryToCellDistances
from solverFVM import cellDataToCellGrad, cellDataToCellGrad2
from solverFVM import cellDataToBoundaryData, cellDataToBoundaryGrad
def divergenceCell(c, F):
ret = 0
for bi in range(c.boundaryCount()):
b = pg.findBoundary(c.boundaryNodes(bi))
#print(b.norm(c).dot(F[b.id()]))
ret += b.norm(c).dot(F[b.id()]) * b.size()
return ret
def divergence(mesh, F):
div = pg.RVector(mesh.cellCount())
for c in mesh.cells():
div[c.id()] = divergenceCell(c, F)
return div
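# Added aside (not in the original script; illustrative only, independent of
# pygimli): the loop in divergenceCell() is the discrete Gauss theorem,
# summing (F . n) * |face| over the faces of a cell.  For a unit square and a
# constant field F = (1, 0) the contributions of opposite faces cancel, so the
# discrete divergence is zero.
_unit_square_normals = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
_constant_field = np.array([1., 0.])
assert abs(np.sum(_unit_square_normals.dot(_constant_field))) < 1e-12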
grid = pg.createGrid(x=np.arange(3.+1), y=np.arange(2.+1))
pot = np.arange(3.*2)
print(grid, pot)
plt.ion()
show(grid,pot)
pN = pg.cellDataToPointData(grid, pot)
ax, cbar = show(grid, pN)
ax, cbar = show(grid, axes=ax)
vF = np.zeros((grid.boundaryCount(), 2))
cellGrad = cellDataToCellGrad(grid, pot)
print(cellGrad)
cellGrad = cellDataToCellGrad2(grid, pot)
print(cellGrad)
boundGrad = cellDataToBoundaryGrad(grid, pot)
boundGrad2 = cellDataToBoundaryGrad(grid, pot, 1)
for c in grid.cells():
#gr = c.grad(c.center(), pN)
gr = cellGrad[c.id()]
ax.arrow(c.center()[0], c.center()[1], gr[0], gr[1])
for b in grid.boundaries():
#gr = c.grad(c.center(), pN)
gr = boundGrad[b.id()]
ax.arrow(b.center()[0], b.center()[1], gr[0], gr[1], color='red')
for b in grid.boundaries():
#gr = c.grad(c.center(), pN)
gr = boundGrad2[b.id()]
ax.arrow(b.center()[0], b.center()[1], gr[0], gr[1], color='green')
ax.set_xlim([-0.5, 3.5])
ax.set_ylim([-0.5, 3.1])
#F = grid.grad(pot)
#print(F)
print("div:", divergence(grid, boundGrad))
import sys
sys.path.append('/home/carsten/src/fipy')
from fipy.meshes import Grid2D
from fipy.variables.cellVariable import CellVariable
mesh = Grid2D(nx=3, ny=2)
val = np.arange(3*2.)
var = CellVariable(mesh=mesh, value=val)
print(var.faceGrad._calcValueNoInline())
print('-____________')
#print(var.faceGrad)
print(var.faceGrad.divergence)
#print(var.__pos__)
##[ 4. 3. 2. -2. -3. -4.]
ax, cbar = show(grid, np.array(var))
show(grid, axes=ax)
X, Y = mesh.getCellCenters()
gr = var.grad.numericValue
m = 1
for i in range(len(X)):
ax.arrow(X[i], Y[i], gr[0][i], gr[1][i])
gr = var.faceGrad
#gr = var.faceGrad._calcValueInline()
X, Y = mesh.getFaceCenters()
m = 1
for i in range(len(X)):
ax.arrow(X[i], Y[i], gr[0][i], gr[1][i], color='red')
#from fipy.variables.faceGradVariable import _FaceGradVariable
#gr = _FaceGradVariable(var)
#for i in range(len(X)):
#ax.arrow(X[i], Y[i], gr[0][i], gr[1][i], color='green')
ax.set_xlim([-0.5,3.5])
ax.set_ylim([-0.5,3.1])
plt.ioff()
plt.show()
|
gpl-3.0
|
rvraghav93/scikit-learn
|
examples/linear_model/plot_omp.py
|
385
|
2263
|
"""
===========================
Orthogonal Matching Pursuit
===========================
Using orthogonal matching pursuit for recovering a sparse signal from a noisy
measurement encoded with a dictionary
"""
print(__doc__)
import matplotlib.pyplot as plt
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit
from sklearn.linear_model import OrthogonalMatchingPursuitCV
from sklearn.datasets import make_sparse_coded_signal
n_components, n_features = 512, 100
n_nonzero_coefs = 17
# generate the data
###################
# y = Xw
# |x|_0 = n_nonzero_coefs
y, X, w = make_sparse_coded_signal(n_samples=1,
n_components=n_components,
n_features=n_features,
n_nonzero_coefs=n_nonzero_coefs,
random_state=0)
idx, = w.nonzero()
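# Added sanity aside (not part of the original example): the generated signal
# really is y = X w with exactly n_nonzero_coefs active dictionary atoms.
assert np.allclose(np.dot(X, w), y)
assert len(idx) == n_nonzero_coefs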
# distort the clean signal
##########################
y_noisy = y + 0.05 * np.random.randn(len(y))
# plot the sparse signal
########################
plt.figure(figsize=(7, 7))
plt.subplot(4, 1, 1)
plt.xlim(0, 512)
plt.title("Sparse signal")
plt.stem(idx, w[idx])
# plot the noise-free reconstruction
####################################
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero_coefs)
omp.fit(X, y)
coef = omp.coef_
idx_r, = coef.nonzero()
plt.subplot(4, 1, 2)
plt.xlim(0, 512)
plt.title("Recovered signal from noise-free measurements")
plt.stem(idx_r, coef[idx_r])
# plot the noisy reconstruction
###############################
omp.fit(X, y_noisy)
coef = omp.coef_
idx_r, = coef.nonzero()
plt.subplot(4, 1, 3)
plt.xlim(0, 512)
plt.title("Recovered signal from noisy measurements")
plt.stem(idx_r, coef[idx_r])
# plot the noisy reconstruction with number of non-zeros set by CV
##################################################################
omp_cv = OrthogonalMatchingPursuitCV()
omp_cv.fit(X, y_noisy)
coef = omp_cv.coef_
idx_r, = coef.nonzero()
plt.subplot(4, 1, 4)
plt.xlim(0, 512)
plt.title("Recovered signal from noisy measurements with CV")
plt.stem(idx_r, coef[idx_r])
plt.subplots_adjust(0.06, 0.04, 0.94, 0.90, 0.20, 0.38)
plt.suptitle('Sparse signal recovery with Orthogonal Matching Pursuit',
fontsize=16)
plt.show()
|
bsd-3-clause
|
mojoboss/scikit-learn
|
sklearn/svm/tests/test_svm.py
|
11
|
31158
|
"""
Testing for Support Vector Machine module (sklearn.svm)
TODO: remove hard coded numerical results when possible
"""
import numpy as np
import itertools
from numpy.testing import assert_array_equal, assert_array_almost_equal
from numpy.testing import assert_almost_equal
from scipy import sparse
from nose.tools import assert_raises, assert_true, assert_equal, assert_false
from sklearn.base import ChangedBehaviorWarning
from sklearn import svm, linear_model, datasets, metrics, base
from sklearn.cross_validation import train_test_split
from sklearn.datasets import make_classification, make_blobs
from sklearn.metrics import f1_score
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.utils import check_random_state
from sklearn.utils import ConvergenceWarning
from sklearn.utils.testing import assert_greater, assert_in, assert_less
from sklearn.utils.testing import assert_raises_regexp, assert_warns
from sklearn.utils.testing import assert_warns_message, assert_raise_message
from sklearn.utils.testing import ignore_warnings
# toy sample
X = [[-2, -1], [-1, -1], [-1, -2], [1, 1], [1, 2], [2, 1]]
Y = [1, 1, 1, 2, 2, 2]
T = [[-1, -1], [2, 2], [3, 2]]
true_result = [1, 2, 2]
# also load the iris dataset
iris = datasets.load_iris()
rng = check_random_state(42)
perm = rng.permutation(iris.target.size)
iris.data = iris.data[perm]
iris.target = iris.target[perm]
def test_libsvm_parameters():
# Test parameters on classes that make use of libsvm.
clf = svm.SVC(kernel='linear').fit(X, Y)
assert_array_equal(clf.dual_coef_, [[-0.25, .25]])
assert_array_equal(clf.support_, [1, 3])
assert_array_equal(clf.support_vectors_, (X[1], X[3]))
assert_array_equal(clf.intercept_, [0.])
assert_array_equal(clf.predict(X), Y)
def test_libsvm_iris():
# Check consistency on dataset iris.
# shuffle the dataset so that labels are not ordered
for k in ('linear', 'rbf'):
clf = svm.SVC(kernel=k).fit(iris.data, iris.target)
assert_greater(np.mean(clf.predict(iris.data) == iris.target), 0.9)
assert_array_equal(clf.classes_, np.sort(clf.classes_))
# check also the low-level API
model = svm.libsvm.fit(iris.data, iris.target.astype(np.float64))
pred = svm.libsvm.predict(iris.data, *model)
assert_greater(np.mean(pred == iris.target), .95)
model = svm.libsvm.fit(iris.data, iris.target.astype(np.float64),
kernel='linear')
pred = svm.libsvm.predict(iris.data, *model, kernel='linear')
assert_greater(np.mean(pred == iris.target), .95)
pred = svm.libsvm.cross_validation(iris.data,
iris.target.astype(np.float64), 5,
kernel='linear',
random_seed=0)
assert_greater(np.mean(pred == iris.target), .95)
    # If random_seed >= 0, the libsvm rng is seeded (by calling `srand`), so we
    # should get deterministic results (assuming that no other thread calls
    # this wrapper, and hence `srand`, concurrently).
pred2 = svm.libsvm.cross_validation(iris.data,
iris.target.astype(np.float64), 5,
kernel='linear',
random_seed=0)
assert_array_equal(pred, pred2)
def test_single_sample_1d():
# Test whether SVCs work on a single sample given as a 1-d array
clf = svm.SVC().fit(X, Y)
clf.predict(X[0])
clf = svm.LinearSVC(random_state=0).fit(X, Y)
clf.predict(X[0])
def test_precomputed():
# SVC with a precomputed kernel.
# We test it with a toy dataset and with iris.
clf = svm.SVC(kernel='precomputed')
# Gram matrix for train data (square matrix)
# (we use just a linear kernel)
K = np.dot(X, np.array(X).T)
clf.fit(K, Y)
# Gram matrix for test data (rectangular matrix)
KT = np.dot(T, np.array(X).T)
pred = clf.predict(KT)
assert_raises(ValueError, clf.predict, KT.T)
assert_array_equal(clf.dual_coef_, [[-0.25, .25]])
assert_array_equal(clf.support_, [1, 3])
assert_array_equal(clf.intercept_, [0])
assert_array_almost_equal(clf.support_, [1, 3])
assert_array_equal(pred, true_result)
# Gram matrix for test data but compute KT[i,j]
# for support vectors j only.
KT = np.zeros_like(KT)
for i in range(len(T)):
for j in clf.support_:
KT[i, j] = np.dot(T[i], X[j])
pred = clf.predict(KT)
assert_array_equal(pred, true_result)
# same as before, but using a callable function instead of the kernel
# matrix. kernel is just a linear kernel
kfunc = lambda x, y: np.dot(x, y.T)
clf = svm.SVC(kernel=kfunc)
clf.fit(X, Y)
pred = clf.predict(T)
assert_array_equal(clf.dual_coef_, [[-0.25, .25]])
assert_array_equal(clf.intercept_, [0])
assert_array_almost_equal(clf.support_, [1, 3])
assert_array_equal(pred, true_result)
# test a precomputed kernel with the iris dataset
# and check parameters against a linear SVC
clf = svm.SVC(kernel='precomputed')
clf2 = svm.SVC(kernel='linear')
K = np.dot(iris.data, iris.data.T)
clf.fit(K, iris.target)
clf2.fit(iris.data, iris.target)
pred = clf.predict(K)
assert_array_almost_equal(clf.support_, clf2.support_)
assert_array_almost_equal(clf.dual_coef_, clf2.dual_coef_)
assert_array_almost_equal(clf.intercept_, clf2.intercept_)
assert_almost_equal(np.mean(pred == iris.target), .99, decimal=2)
# Gram matrix for test data but compute KT[i,j]
# for support vectors j only.
K = np.zeros_like(K)
for i in range(len(iris.data)):
for j in clf.support_:
K[i, j] = np.dot(iris.data[i], iris.data[j])
pred = clf.predict(K)
assert_almost_equal(np.mean(pred == iris.target), .99, decimal=2)
clf = svm.SVC(kernel=kfunc)
clf.fit(iris.data, iris.target)
assert_almost_equal(np.mean(pred == iris.target), .99, decimal=2)
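# Added note (not in the original test): the support-vector-only Gram checks
# above pass because the SVC decision function only multiplies kernel values
# by the dual coefficients of the support vectors, so K(test, train) entries
# for non-support training points never influence the prediction.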
def test_svr():
# Test Support Vector Regression
diabetes = datasets.load_diabetes()
for clf in (svm.NuSVR(kernel='linear', nu=.4, C=1.0),
svm.NuSVR(kernel='linear', nu=.4, C=10.),
svm.SVR(kernel='linear', C=10.),
svm.LinearSVR(C=10.),
svm.LinearSVR(C=10.),
):
clf.fit(diabetes.data, diabetes.target)
assert_greater(clf.score(diabetes.data, diabetes.target), 0.02)
# non-regression test; previously, BaseLibSVM would check that
# len(np.unique(y)) < 2, which must only be done for SVC
svm.SVR().fit(diabetes.data, np.ones(len(diabetes.data)))
svm.LinearSVR().fit(diabetes.data, np.ones(len(diabetes.data)))
def test_linearsvr():
# check that SVR(kernel='linear') and LinearSVC() give
# comparable results
diabetes = datasets.load_diabetes()
lsvr = svm.LinearSVR(C=1e3).fit(diabetes.data, diabetes.target)
score1 = lsvr.score(diabetes.data, diabetes.target)
svr = svm.SVR(kernel='linear', C=1e3).fit(diabetes.data, diabetes.target)
score2 = svr.score(diabetes.data, diabetes.target)
assert np.linalg.norm(lsvr.coef_ - svr.coef_) / np.linalg.norm(svr.coef_) < .1
assert np.abs(score1 - score2) < 0.1
def test_svr_errors():
X = [[0.0], [1.0]]
y = [0.0, 0.5]
# Bad kernel
clf = svm.SVR(kernel=lambda x, y: np.array([[1.0]]))
clf.fit(X, y)
assert_raises(ValueError, clf.predict, X)
def test_oneclass():
# Test OneClassSVM
clf = svm.OneClassSVM()
clf.fit(X)
pred = clf.predict(T)
assert_array_almost_equal(pred, [-1, -1, -1])
assert_array_almost_equal(clf.intercept_, [-1.008], decimal=3)
assert_array_almost_equal(clf.dual_coef_,
[[0.632, 0.233, 0.633, 0.234, 0.632, 0.633]],
decimal=3)
assert_raises(ValueError, lambda: clf.coef_)
def test_oneclass_decision_function():
# Test OneClassSVM decision function
clf = svm.OneClassSVM()
rnd = check_random_state(2)
# Generate train data
X = 0.3 * rnd.randn(100, 2)
X_train = np.r_[X + 2, X - 2]
# Generate some regular novel observations
X = 0.3 * rnd.randn(20, 2)
X_test = np.r_[X + 2, X - 2]
# Generate some abnormal novel observations
X_outliers = rnd.uniform(low=-4, high=4, size=(20, 2))
# fit the model
clf = svm.OneClassSVM(nu=0.1, kernel="rbf", gamma=0.1)
clf.fit(X_train)
# predict things
y_pred_test = clf.predict(X_test)
assert_greater(np.mean(y_pred_test == 1), .9)
y_pred_outliers = clf.predict(X_outliers)
assert_greater(np.mean(y_pred_outliers == -1), .9)
dec_func_test = clf.decision_function(X_test)
assert_array_equal((dec_func_test > 0).ravel(), y_pred_test == 1)
dec_func_outliers = clf.decision_function(X_outliers)
assert_array_equal((dec_func_outliers > 0).ravel(), y_pred_outliers == 1)
def test_tweak_params():
# Make sure some tweaking of parameters works.
# We change clf.dual_coef_ at run time and expect .predict() to change
# accordingly. Notice that this is not trivial since it involves a lot
# of C/Python copying in the libsvm bindings.
# The success of this test ensures that the mapping between libsvm and
# the python classifier is complete.
clf = svm.SVC(kernel='linear', C=1.0)
clf.fit(X, Y)
assert_array_equal(clf.dual_coef_, [[-.25, .25]])
assert_array_equal(clf.predict([[-.1, -.1]]), [1])
clf._dual_coef_ = np.array([[.0, 1.]])
assert_array_equal(clf.predict([[-.1, -.1]]), [2])
def test_probability():
# Predict probabilities using SVC
# This uses cross validation, so we use a slightly bigger testing set.
for clf in (svm.SVC(probability=True, random_state=0, C=1.0),
svm.NuSVC(probability=True, random_state=0)):
clf.fit(iris.data, iris.target)
prob_predict = clf.predict_proba(iris.data)
assert_array_almost_equal(
np.sum(prob_predict, 1), np.ones(iris.data.shape[0]))
assert_true(np.mean(np.argmax(prob_predict, 1)
== clf.predict(iris.data)) > 0.9)
assert_almost_equal(clf.predict_proba(iris.data),
np.exp(clf.predict_log_proba(iris.data)), 8)
def test_decision_function():
# Test decision_function
# Sanity check, test that decision_function implemented in python
# returns the same as the one in libsvm
# multi class:
clf = svm.SVC(kernel='linear', C=0.1,
decision_function_shape='ovo').fit(iris.data, iris.target)
dec = np.dot(iris.data, clf.coef_.T) + clf.intercept_
assert_array_almost_equal(dec, clf.decision_function(iris.data))
# binary:
clf.fit(X, Y)
dec = np.dot(X, clf.coef_.T) + clf.intercept_
prediction = clf.predict(X)
assert_array_almost_equal(dec.ravel(), clf.decision_function(X))
assert_array_almost_equal(
prediction,
clf.classes_[(clf.decision_function(X) > 0).astype(np.int)])
expected = np.array([-1., -0.66, -1., 0.66, 1., 1.])
assert_array_almost_equal(clf.decision_function(X), expected, 2)
# kernel binary:
clf = svm.SVC(kernel='rbf', gamma=1, decision_function_shape='ovo')
clf.fit(X, Y)
rbfs = rbf_kernel(X, clf.support_vectors_, gamma=clf.gamma)
dec = np.dot(rbfs, clf.dual_coef_.T) + clf.intercept_
assert_array_almost_equal(dec.ravel(), clf.decision_function(X))
def test_decision_function_shape():
# check that decision_function_shape='ovr' gives
# correct shape and is consistent with predict
clf = svm.SVC(kernel='linear', C=0.1,
decision_function_shape='ovr').fit(iris.data, iris.target)
dec = clf.decision_function(iris.data)
assert_equal(dec.shape, (len(iris.data), 3))
assert_array_equal(clf.predict(iris.data), np.argmax(dec, axis=1))
# with five classes:
X, y = make_blobs(n_samples=80, centers=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = svm.SVC(kernel='linear', C=0.1,
decision_function_shape='ovr').fit(X_train, y_train)
dec = clf.decision_function(X_test)
assert_equal(dec.shape, (len(X_test), 5))
assert_array_equal(clf.predict(X_test), np.argmax(dec, axis=1))
    # check the shape for decision_function_shape='ovo'
clf = svm.SVC(kernel='linear', C=0.1,
decision_function_shape='ovo').fit(X_train, y_train)
dec = clf.decision_function(X_train)
assert_equal(dec.shape, (len(X_train), 10))
# check deprecation warning
clf.decision_function_shape = None
msg = "change the shape of the decision function"
dec = assert_warns_message(ChangedBehaviorWarning, msg,
clf.decision_function, X_train)
assert_equal(dec.shape, (len(X_train), 10))
def test_svr_decision_function():
# Test SVR's decision_function
# Sanity check, test that decision_function implemented in python
# returns the same as the one in libsvm
X = iris.data
y = iris.target
# linear kernel
reg = svm.SVR(kernel='linear', C=0.1).fit(X, y)
dec = np.dot(X, reg.coef_.T) + reg.intercept_
assert_array_almost_equal(dec.ravel(), reg.decision_function(X).ravel())
# rbf kernel
reg = svm.SVR(kernel='rbf', gamma=1).fit(X, y)
rbfs = rbf_kernel(X, reg.support_vectors_, gamma=reg.gamma)
dec = np.dot(rbfs, reg.dual_coef_.T) + reg.intercept_
assert_array_almost_equal(dec.ravel(), reg.decision_function(X).ravel())
def test_weight():
# Test class weights
clf = svm.SVC(class_weight={1: 0.1})
# we give a small weights to class 1
clf.fit(X, Y)
# so all predicted values belong to class 2
assert_array_almost_equal(clf.predict(X), [2] * 6)
X_, y_ = make_classification(n_samples=200, n_features=10,
weights=[0.833, 0.167], random_state=2)
for clf in (linear_model.LogisticRegression(),
svm.LinearSVC(random_state=0), svm.SVC()):
clf.set_params(class_weight={0: .1, 1: 10})
clf.fit(X_[:100], y_[:100])
y_pred = clf.predict(X_[100:])
assert_true(f1_score(y_[100:], y_pred) > .3)
def test_sample_weights():
# Test weights on individual samples
# TODO: check on NuSVR, OneClass, etc.
clf = svm.SVC()
clf.fit(X, Y)
assert_array_equal(clf.predict(X[2]), [1.])
sample_weight = [.1] * 3 + [10] * 3
clf.fit(X, Y, sample_weight=sample_weight)
assert_array_equal(clf.predict(X[2]), [2.])
# test that rescaling all samples is the same as changing C
clf = svm.SVC()
clf.fit(X, Y)
dual_coef_no_weight = clf.dual_coef_
clf.set_params(C=100)
clf.fit(X, Y, sample_weight=np.repeat(0.01, len(X)))
assert_array_almost_equal(dual_coef_no_weight, clf.dual_coef_)
def test_auto_weight():
# Test class weights for imbalanced data
from sklearn.linear_model import LogisticRegression
    # We take as dataset the two-dimensional projection of iris so
    # that it is not separable and remove half of the samples from
    # one class to make it imbalanced.
    # We add one to the targets as a non-regression test: class_weight="balanced"
    # used to work only when the labels were a range [0..K).
from sklearn.utils import compute_class_weight
X, y = iris.data[:, :2], iris.target + 1
unbalanced = np.delete(np.arange(y.size), np.where(y > 2)[0][::2])
classes = np.unique(y[unbalanced])
class_weights = compute_class_weight('balanced', classes, y[unbalanced])
assert_true(np.argmax(class_weights) == 2)
for clf in (svm.SVC(kernel='linear'), svm.LinearSVC(random_state=0),
LogisticRegression()):
# check that score is better when class='balanced' is set.
y_pred = clf.fit(X[unbalanced], y[unbalanced]).predict(X)
clf.set_params(class_weight='balanced')
y_pred_balanced = clf.fit(X[unbalanced], y[unbalanced],).predict(X)
assert_true(metrics.f1_score(y, y_pred, average='weighted')
<= metrics.f1_score(y, y_pred_balanced,
average='weighted'))
def test_bad_input():
# Test that it gives proper exception on deficient input
# impossible value of C
assert_raises(ValueError, svm.SVC(C=-1).fit, X, Y)
# impossible value of nu
clf = svm.NuSVC(nu=0.0)
assert_raises(ValueError, clf.fit, X, Y)
Y2 = Y[:-1] # wrong dimensions for labels
assert_raises(ValueError, clf.fit, X, Y2)
# Test with arrays that are non-contiguous.
for clf in (svm.SVC(), svm.LinearSVC(random_state=0)):
Xf = np.asfortranarray(X)
assert_false(Xf.flags['C_CONTIGUOUS'])
yf = np.ascontiguousarray(np.tile(Y, (2, 1)).T)
yf = yf[:, -1]
assert_false(yf.flags['F_CONTIGUOUS'])
assert_false(yf.flags['C_CONTIGUOUS'])
clf.fit(Xf, yf)
assert_array_equal(clf.predict(T), true_result)
    # error for precomputed kernels
clf = svm.SVC(kernel='precomputed')
assert_raises(ValueError, clf.fit, X, Y)
# sample_weight bad dimensions
clf = svm.SVC()
assert_raises(ValueError, clf.fit, X, Y, sample_weight=range(len(X) - 1))
# predict with sparse input when trained with dense
clf = svm.SVC().fit(X, Y)
assert_raises(ValueError, clf.predict, sparse.lil_matrix(X))
Xt = np.array(X).T
clf.fit(np.dot(X, Xt), Y)
assert_raises(ValueError, clf.predict, X)
clf = svm.SVC()
clf.fit(X, Y)
assert_raises(ValueError, clf.predict, Xt)
def test_sparse_precomputed():
clf = svm.SVC(kernel='precomputed')
sparse_gram = sparse.csr_matrix([[1, 0], [0, 1]])
try:
clf.fit(sparse_gram, [0, 1])
assert not "reached"
except TypeError as e:
assert_in("Sparse precomputed", str(e))
def test_linearsvc_parameters():
# Test possible parameter combinations in LinearSVC
# Generate list of possible parameter combinations
losses = ['hinge', 'squared_hinge', 'logistic_regression', 'foo']
penalties, duals = ['l1', 'l2', 'bar'], [True, False]
X, y = make_classification(n_samples=5, n_features=5)
for loss, penalty, dual in itertools.product(losses, penalties, duals):
clf = svm.LinearSVC(penalty=penalty, loss=loss, dual=dual)
if ((loss, penalty) == ('hinge', 'l1') or
(loss, penalty, dual) == ('hinge', 'l2', False) or
(penalty, dual) == ('l1', True) or
loss == 'foo' or penalty == 'bar'):
assert_raises_regexp(ValueError,
"Unsupported set of arguments.*penalty='%s.*"
"loss='%s.*dual=%s"
% (penalty, loss, dual),
clf.fit, X, y)
else:
clf.fit(X, y)
# Incorrect loss value - test if explicit error message is raised
assert_raises_regexp(ValueError, ".*loss='l3' is not supported.*",
svm.LinearSVC(loss="l3").fit, X, y)
# FIXME remove in 1.0
def test_linearsvx_loss_penalty_deprecations():
X, y = [[0.0], [1.0]], [0, 1]
msg = ("loss='%s' has been deprecated in favor of "
"loss='%s' as of 0.16. Backward compatibility"
" for the %s will be removed in %s")
# LinearSVC
# loss l1/L1 --> hinge
assert_warns_message(DeprecationWarning,
msg % ("l1", "hinge", "loss='l1'", "1.0"),
svm.LinearSVC(loss="l1").fit, X, y)
# loss l2/L2 --> squared_hinge
assert_warns_message(DeprecationWarning,
msg % ("L2", "squared_hinge", "loss='L2'", "1.0"),
svm.LinearSVC(loss="L2").fit, X, y)
# LinearSVR
# loss l1/L1 --> epsilon_insensitive
assert_warns_message(DeprecationWarning,
msg % ("L1", "epsilon_insensitive", "loss='L1'",
"1.0"),
svm.LinearSVR(loss="L1").fit, X, y)
# loss l2/L2 --> squared_epsilon_insensitive
assert_warns_message(DeprecationWarning,
msg % ("l2", "squared_epsilon_insensitive",
"loss='l2'", "1.0"),
svm.LinearSVR(loss="l2").fit, X, y)
# FIXME remove in 0.18
def test_linear_svx_uppercase_loss_penalty():
# Check if Upper case notation is supported by _fit_liblinear
# which is called by fit
X, y = [[0.0], [1.0]], [0, 1]
msg = ("loss='%s' has been deprecated in favor of "
"loss='%s' as of 0.16. Backward compatibility"
" for the uppercase notation will be removed in %s")
# loss SQUARED_hinge --> squared_hinge
assert_warns_message(DeprecationWarning,
msg % ("SQUARED_hinge", "squared_hinge", "0.18"),
svm.LinearSVC(loss="SQUARED_hinge").fit, X, y)
# penalty L2 --> l2
assert_warns_message(DeprecationWarning,
msg.replace("loss", "penalty")
% ("L2", "l2", "0.18"),
svm.LinearSVC(penalty="L2").fit, X, y)
# loss EPSILON_INSENSITIVE --> epsilon_insensitive
assert_warns_message(DeprecationWarning,
msg % ("EPSILON_INSENSITIVE", "epsilon_insensitive",
"0.18"),
svm.LinearSVR(loss="EPSILON_INSENSITIVE").fit, X, y)
def test_linearsvc():
# Test basic routines using LinearSVC
clf = svm.LinearSVC(random_state=0).fit(X, Y)
# by default should have intercept
assert_true(clf.fit_intercept)
assert_array_equal(clf.predict(T), true_result)
assert_array_almost_equal(clf.intercept_, [0], decimal=3)
# the same with l1 penalty
clf = svm.LinearSVC(penalty='l1', loss='squared_hinge', dual=False, random_state=0).fit(X, Y)
assert_array_equal(clf.predict(T), true_result)
# l2 penalty with dual formulation
clf = svm.LinearSVC(penalty='l2', dual=True, random_state=0).fit(X, Y)
assert_array_equal(clf.predict(T), true_result)
# l2 penalty, l1 loss
clf = svm.LinearSVC(penalty='l2', loss='hinge', dual=True, random_state=0)
clf.fit(X, Y)
assert_array_equal(clf.predict(T), true_result)
# test also decision function
dec = clf.decision_function(T)
res = (dec > 0).astype(np.int) + 1
assert_array_equal(res, true_result)
def test_linearsvc_crammer_singer():
# Test LinearSVC with crammer_singer multi-class svm
ovr_clf = svm.LinearSVC(random_state=0).fit(iris.data, iris.target)
cs_clf = svm.LinearSVC(multi_class='crammer_singer', random_state=0)
cs_clf.fit(iris.data, iris.target)
# similar prediction for ovr and crammer-singer:
assert_true((ovr_clf.predict(iris.data) ==
cs_clf.predict(iris.data)).mean() > .9)
# classifiers shouldn't be the same
assert_true((ovr_clf.coef_ != cs_clf.coef_).all())
# test decision function
assert_array_equal(cs_clf.predict(iris.data),
np.argmax(cs_clf.decision_function(iris.data), axis=1))
dec_func = np.dot(iris.data, cs_clf.coef_.T) + cs_clf.intercept_
assert_array_almost_equal(dec_func, cs_clf.decision_function(iris.data))
def test_crammer_singer_binary():
# Test Crammer-Singer formulation in the binary case
X, y = make_classification(n_classes=2, random_state=0)
for fit_intercept in (True, False):
acc = svm.LinearSVC(fit_intercept=fit_intercept,
multi_class="crammer_singer",
random_state=0).fit(X, y).score(X, y)
assert_greater(acc, 0.9)
def test_linearsvc_iris():
# Test that LinearSVC gives plausible predictions on the iris dataset
# Also, test symbolic class names (classes_).
target = iris.target_names[iris.target]
clf = svm.LinearSVC(random_state=0).fit(iris.data, target)
assert_equal(set(clf.classes_), set(iris.target_names))
assert_greater(np.mean(clf.predict(iris.data) == target), 0.8)
dec = clf.decision_function(iris.data)
pred = iris.target_names[np.argmax(dec, 1)]
assert_array_equal(pred, clf.predict(iris.data))
def test_dense_liblinear_intercept_handling(classifier=svm.LinearSVC):
# Test that dense liblinear honours intercept_scaling param
X = [[2, 1],
[3, 1],
[1, 3],
[2, 3]]
y = [0, 0, 1, 1]
clf = classifier(fit_intercept=True, penalty='l1', loss='squared_hinge',
dual=False, C=4, tol=1e-7, random_state=0)
assert_true(clf.intercept_scaling == 1, clf.intercept_scaling)
assert_true(clf.fit_intercept)
# when intercept_scaling is low the intercept value is highly "penalized"
# by regularization
clf.intercept_scaling = 1
clf.fit(X, y)
assert_almost_equal(clf.intercept_, 0, decimal=5)
# when intercept_scaling is sufficiently high, the intercept value
# is not affected by regularization
clf.intercept_scaling = 100
clf.fit(X, y)
intercept1 = clf.intercept_
assert_less(intercept1, -1)
# when intercept_scaling is sufficiently high, the intercept value
# doesn't depend on intercept_scaling value
clf.intercept_scaling = 1000
clf.fit(X, y)
intercept2 = clf.intercept_
assert_array_almost_equal(intercept1, intercept2, decimal=2)
def test_liblinear_set_coef():
# multi-class case
clf = svm.LinearSVC().fit(iris.data, iris.target)
values = clf.decision_function(iris.data)
clf.coef_ = clf.coef_.copy()
clf.intercept_ = clf.intercept_.copy()
values2 = clf.decision_function(iris.data)
assert_array_almost_equal(values, values2)
# binary-class case
X = [[2, 1],
[3, 1],
[1, 3],
[2, 3]]
y = [0, 0, 1, 1]
clf = svm.LinearSVC().fit(X, y)
values = clf.decision_function(X)
clf.coef_ = clf.coef_.copy()
clf.intercept_ = clf.intercept_.copy()
values2 = clf.decision_function(X)
assert_array_equal(values, values2)
def test_immutable_coef_property():
    # Check that primal coef modifications are not silently ignored
svms = [
svm.SVC(kernel='linear').fit(iris.data, iris.target),
svm.NuSVC(kernel='linear').fit(iris.data, iris.target),
svm.SVR(kernel='linear').fit(iris.data, iris.target),
svm.NuSVR(kernel='linear').fit(iris.data, iris.target),
svm.OneClassSVM(kernel='linear').fit(iris.data),
]
for clf in svms:
assert_raises(AttributeError, clf.__setattr__, 'coef_', np.arange(3))
assert_raises((RuntimeError, ValueError),
clf.coef_.__setitem__, (0, 0), 0)
def test_linearsvc_verbose():
# stdout: redirect
import os
stdout = os.dup(1) # save original stdout
os.dup2(os.pipe()[1], 1) # replace it
# actual call
clf = svm.LinearSVC(verbose=1)
clf.fit(X, Y)
# stdout: restore
os.dup2(stdout, 1) # restore original stdout
def test_svc_clone_with_callable_kernel():
# create SVM with callable linear kernel, check that results are the same
# as with built-in linear kernel
svm_callable = svm.SVC(kernel=lambda x, y: np.dot(x, y.T),
probability=True, random_state=0,
decision_function_shape='ovr')
# clone for checking clonability with lambda functions..
svm_cloned = base.clone(svm_callable)
svm_cloned.fit(iris.data, iris.target)
svm_builtin = svm.SVC(kernel='linear', probability=True, random_state=0,
decision_function_shape='ovr')
svm_builtin.fit(iris.data, iris.target)
assert_array_almost_equal(svm_cloned.dual_coef_,
svm_builtin.dual_coef_)
assert_array_almost_equal(svm_cloned.intercept_,
svm_builtin.intercept_)
assert_array_equal(svm_cloned.predict(iris.data),
svm_builtin.predict(iris.data))
assert_array_almost_equal(svm_cloned.predict_proba(iris.data),
svm_builtin.predict_proba(iris.data),
decimal=4)
assert_array_almost_equal(svm_cloned.decision_function(iris.data),
svm_builtin.decision_function(iris.data))
def test_svc_bad_kernel():
svc = svm.SVC(kernel=lambda x, y: x)
assert_raises(ValueError, svc.fit, X, Y)
def test_timeout():
a = svm.SVC(kernel=lambda x, y: np.dot(x, y.T), probability=True,
random_state=0, max_iter=1)
assert_warns(ConvergenceWarning, a.fit, X, Y)
def test_unfitted():
X = "foo!" # input validation not required when SVM not fitted
clf = svm.SVC()
assert_raises_regexp(Exception, r".*\bSVC\b.*\bnot\b.*\bfitted\b",
clf.predict, X)
clf = svm.NuSVR()
assert_raises_regexp(Exception, r".*\bNuSVR\b.*\bnot\b.*\bfitted\b",
clf.predict, X)
# ignore convergence warnings from max_iter=1
@ignore_warnings
def test_consistent_proba():
a = svm.SVC(probability=True, max_iter=1, random_state=0)
proba_1 = a.fit(X, Y).predict_proba(X)
a = svm.SVC(probability=True, max_iter=1, random_state=0)
proba_2 = a.fit(X, Y).predict_proba(X)
assert_array_almost_equal(proba_1, proba_2)
def test_linear_svc_convergence_warnings():
# Test that warnings are raised if model does not converge
lsvc = svm.LinearSVC(max_iter=2, verbose=1)
assert_warns(ConvergenceWarning, lsvc.fit, X, Y)
assert_equal(lsvc.n_iter_, 2)
def test_svr_coef_sign():
# Test that SVR(kernel="linear") has coef_ with the right sign.
# Non-regression test for #2933.
X = np.random.RandomState(21).randn(10, 3)
y = np.random.RandomState(12).randn(10)
for svr in [svm.SVR(kernel='linear'), svm.NuSVR(kernel='linear'),
svm.LinearSVR()]:
svr.fit(X, y)
assert_array_almost_equal(svr.predict(X),
np.dot(X, svr.coef_.ravel()) + svr.intercept_)
def test_linear_svc_intercept_scaling():
# Test that the right error message is thrown when intercept_scaling <= 0
for i in [-1, 0]:
lsvc = svm.LinearSVC(intercept_scaling=i)
msg = ('Intercept scaling is %r but needs to be greater than 0.'
' To disable fitting an intercept,'
' set fit_intercept=False.' % lsvc.intercept_scaling)
assert_raise_message(ValueError, msg, lsvc.fit, X, Y)
def test_lsvc_intercept_scaling_zero():
# Test that intercept_scaling is ignored when fit_intercept is False
lsvc = svm.LinearSVC(fit_intercept=False)
lsvc.fit(X, Y)
assert_equal(lsvc.intercept_, 0.)
def test_trick_proba():
# Test that the right error message is thrown when self.probability is manually set to be True
G = svm.SVC()
G.fit(iris.data, iris.target)
G.probability = True
msg = "predict_proba is not available when fitted with probability=False"
assert_raise_message(AttributeError, msg, getattr, G, "predict_proba")
|
bsd-3-clause
|
mblondel/scikit-learn
|
sklearn/linear_model/ridge.py
|
4
|
38949
|
"""
Ridge regression
"""
# Author: Mathieu Blondel <[email protected]>
# Reuben Fletcher-Costin <[email protected]>
# Fabian Pedregosa <[email protected]>
# Michael Eickenberg <[email protected]>
# License: BSD 3 clause
from abc import ABCMeta, abstractmethod
import warnings
import numpy as np
from scipy import linalg
from scipy import sparse
from scipy.sparse import linalg as sp_linalg
from .base import LinearClassifierMixin, LinearModel
from ..base import RegressorMixin
from ..utils.extmath import safe_sparse_dot
from ..utils import check_X_y
from ..utils import compute_sample_weight, compute_class_weight
from ..utils import column_or_1d
from ..preprocessing import LabelBinarizer
from ..grid_search import GridSearchCV
from ..externals import six
from ..metrics.scorer import check_scoring
def _solve_sparse_cg(X, y, alpha, max_iter=None, tol=1e-3, verbose=0):
n_samples, n_features = X.shape
X1 = sp_linalg.aslinearoperator(X)
coefs = np.empty((y.shape[1], n_features))
if n_features > n_samples:
def create_mv(curr_alpha):
def _mv(x):
return X1.matvec(X1.rmatvec(x)) + curr_alpha * x
return _mv
else:
def create_mv(curr_alpha):
def _mv(x):
return X1.rmatvec(X1.matvec(x)) + curr_alpha * x
return _mv
for i in range(y.shape[1]):
y_column = y[:, i]
mv = create_mv(alpha[i])
if n_features > n_samples:
# kernel ridge
# w = X.T * inv(X X^t + alpha*Id) y
C = sp_linalg.LinearOperator(
(n_samples, n_samples), matvec=mv, dtype=X.dtype)
coef, info = sp_linalg.cg(C, y_column, tol=tol)
coefs[i] = X1.rmatvec(coef)
else:
# linear ridge
# w = inv(X^t X + alpha*Id) * X.T y
y_column = X1.rmatvec(y_column)
C = sp_linalg.LinearOperator(
(n_features, n_features), matvec=mv, dtype=X.dtype)
coefs[i], info = sp_linalg.cg(C, y_column, maxiter=max_iter,
tol=tol)
if info < 0:
raise ValueError("Failed with error code %d" % info)
if max_iter is None and info > 0 and verbose:
warnings.warn("sparse_cg did not converge after %d iterations." %
info)
return coefs
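# Added note (comment only, not part of scikit-learn): depending on the shape
# of X the CG loop above works either in the kernel space
# (n_features > n_samples, solving (X X^T + alpha*I) u = y and mapping back
# with w = X^T u) or directly on the normal equations
# (X^T X + alpha*I) w = X^T y, both without materializing the matrix thanks to
# the LinearOperator wrapper.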
def _solve_lsqr(X, y, alpha, max_iter=None, tol=1e-3):
n_samples, n_features = X.shape
coefs = np.empty((y.shape[1], n_features))
# According to the lsqr documentation, alpha = damp^2.
sqrt_alpha = np.sqrt(alpha)
for i in range(y.shape[1]):
y_column = y[:, i]
coefs[i] = sp_linalg.lsqr(X, y_column, damp=sqrt_alpha[i],
atol=tol, btol=tol, iter_lim=max_iter)[0]
return coefs
def _solve_cholesky(X, y, alpha):
# w = inv(X^t X + alpha*Id) * X.T y
n_samples, n_features = X.shape
n_targets = y.shape[1]
A = safe_sparse_dot(X.T, X, dense_output=True)
Xy = safe_sparse_dot(X.T, y, dense_output=True)
one_alpha = np.array_equal(alpha, len(alpha) * [alpha[0]])
if one_alpha:
A.flat[::n_features + 1] += alpha[0]
return linalg.solve(A, Xy, sym_pos=True,
overwrite_a=True).T
else:
coefs = np.empty([n_targets, n_features])
for coef, target, current_alpha in zip(coefs, Xy.T, alpha):
A.flat[::n_features + 1] += current_alpha
coef[:] = linalg.solve(A, target, sym_pos=True,
overwrite_a=False).ravel()
A.flat[::n_features + 1] -= current_alpha
return coefs
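# Added, hedged sketch (not part of scikit-learn; illustrative only): for a
# single shared penalty the solver above reduces to the closed form
# w = (X^T X + alpha * I)^-1 X^T y, checked here on a tiny random problem.
def _cholesky_closed_form_sketch():
    rng = np.random.RandomState(0)
    X = rng.randn(20, 3)
    y = rng.randn(20, 1)
    w = _solve_cholesky(X, y, np.array([0.5]))
    w_direct = np.linalg.solve(X.T.dot(X) + 0.5 * np.eye(3), X.T.dot(y)).T
    return np.allclose(w, w_direct)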
def _solve_cholesky_kernel(K, y, alpha, sample_weight=None, copy=False):
# dual_coef = inv(X X^t + alpha*Id) y
n_samples = K.shape[0]
n_targets = y.shape[1]
if copy:
K = K.copy()
alpha = np.atleast_1d(alpha)
one_alpha = (alpha == alpha[0]).all()
has_sw = isinstance(sample_weight, np.ndarray) \
or sample_weight not in [1.0, None]
if has_sw:
# Unlike other solvers, we need to support sample_weight directly
# because K might be a pre-computed kernel.
sw = np.sqrt(np.atleast_1d(sample_weight))
y = y * sw[:, np.newaxis]
K *= np.outer(sw, sw)
if one_alpha:
        # Only one penalty, so we can solve all targets at once.
K.flat[::n_samples + 1] += alpha[0]
try:
# Note: we must use overwrite_a=False in order to be able to
# use the fall-back solution below in case a LinAlgError
# is raised
dual_coef = linalg.solve(K, y, sym_pos=True,
overwrite_a=False)
except np.linalg.LinAlgError:
warnings.warn("Singular matrix in solving dual problem. Using "
"least-squares solution instead.")
dual_coef = linalg.lstsq(K, y)[0]
# K is expensive to compute and store in memory so change it back in
# case it was user-given.
K.flat[::n_samples + 1] -= alpha[0]
if has_sw:
dual_coef *= sw[:, np.newaxis]
return dual_coef
else:
# One penalty per target. We need to solve each target separately.
dual_coefs = np.empty([n_targets, n_samples])
for dual_coef, target, current_alpha in zip(dual_coefs, y.T, alpha):
K.flat[::n_samples + 1] += current_alpha
dual_coef[:] = linalg.solve(K, target, sym_pos=True,
overwrite_a=False).ravel()
K.flat[::n_samples + 1] -= current_alpha
if has_sw:
dual_coefs *= sw[np.newaxis, :]
return dual_coefs.T
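# Added, hedged sketch (not part of scikit-learn; illustrative only): the dual
# solution c = (X X^T + alpha * I)^-1 y recovers the primal ridge weights via
# w = X^T c, matching _solve_cholesky on the same data.
def _kernel_vs_primal_cholesky_sketch():
    rng = np.random.RandomState(1)
    X = rng.randn(12, 4)
    y = rng.randn(12, 2)
    alpha = np.array([0.7, 0.7])
    dual_coef = _solve_cholesky_kernel(np.dot(X, X.T), y, alpha, copy=True)
    w_dual = np.dot(X.T, dual_coef).T
    w_primal = _solve_cholesky(X, y, alpha)
    return np.allclose(w_dual, w_primal)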
def _solve_svd(X, y, alpha):
U, s, Vt = linalg.svd(X, full_matrices=False)
idx = s > 1e-15 # same default value as scipy.linalg.pinv
s_nnz = s[idx][:, np.newaxis]
UTy = np.dot(U.T, y)
d = np.zeros((s.size, alpha.size))
d[idx] = s_nnz / (s_nnz ** 2 + alpha)
d_UT_y = d * UTy
return np.dot(Vt.T, d_UT_y).T
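# Added note (comment only, not part of scikit-learn): the SVD route applies
# the spectral filter s_i / (s_i**2 + alpha) to U^T y, which is the same ridge
# shrinkage as the Cholesky closed form but remains stable when X is singular
# or ill-conditioned.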
def _deprecate_dense_cholesky(solver):
if solver == 'dense_cholesky':
warnings.warn(DeprecationWarning(
"The name 'dense_cholesky' is deprecated and will "
"be removed in 0.17. Use 'cholesky' instead. "))
solver = 'cholesky'
return solver
def _rescale_data(X, y, sample_weight):
"""Rescale data so as to support sample_weight"""
n_samples = X.shape[0]
sample_weight = sample_weight * np.ones(n_samples)
sample_weight = np.sqrt(sample_weight)
sw_matrix = sparse.dia_matrix((sample_weight, 0),
shape=(n_samples, n_samples))
X = safe_sparse_dot(sw_matrix, X)
y = safe_sparse_dot(sw_matrix, y)
return X, y
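# Added, hedged sketch (not part of scikit-learn; illustrative only): scaling
# the rows of X and y by sqrt(sample_weight) turns the ordinary normal
# equations into the weighted ones, i.e. X_r^T X_r == X^T diag(w) X and
# X_r^T y_r == X^T diag(w) y.
def _rescale_data_sketch():
    rng = np.random.RandomState(2)
    X = rng.randn(8, 3)
    y = rng.randn(8, 1)
    w = rng.rand(8) + 0.1
    X_r, y_r = _rescale_data(X, y, w)
    return (np.allclose(np.dot(X_r.T, X_r), X.T.dot(np.diag(w)).dot(X)) and
            np.allclose(np.dot(X_r.T, y_r), X.T.dot(np.diag(w)).dot(y)))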
def ridge_regression(X, y, alpha, sample_weight=None, solver='auto',
max_iter=None, tol=1e-3, verbose=0):
"""Solve the ridge equation by the method of normal equations.
Parameters
----------
X : {array-like, sparse matrix, LinearOperator},
shape = [n_samples, n_features]
Training data
y : array-like, shape = [n_samples] or [n_samples, n_targets]
Target values
alpha : {float, array-like},
shape = [n_targets] if array-like
The l_2 penalty to be used. If an array is passed, penalties are
assumed to be specific to targets
max_iter : int, optional
Maximum number of iterations for conjugate gradient solver.
The default value is determined by scipy.sparse.linalg.
sample_weight : float or numpy array of shape [n_samples]
Individual weights for each sample. If sample_weight is set, then
the solver will automatically be set to 'cholesky'
solver : {'auto', 'svd', 'cholesky', 'lsqr', 'sparse_cg'}
Solver to use in the computational routines:
- 'auto' chooses the solver automatically based on the type of data.
- 'svd' uses a Singular Value Decomposition of X to compute the Ridge
coefficients. More stable for singular matrices than
'cholesky'.
- 'cholesky' uses the standard scipy.linalg.solve function to
obtain a closed-form solution via a Cholesky decomposition of
dot(X.T, X)
- 'sparse_cg' uses the conjugate gradient solver as found in
scipy.sparse.linalg.cg. As an iterative algorithm, this solver is
more appropriate than 'cholesky' for large-scale data
(possibility to set `tol` and `max_iter`).
- 'lsqr' uses the dedicated regularized least-squares routine
          scipy.sparse.linalg.lsqr. It is the fastest but may not be available
          in old scipy versions. It also uses an iterative procedure.
        The 'cholesky', 'sparse_cg' and 'lsqr' solvers support both dense and
        sparse data; the 'svd' solver requires a dense ``X``.
tol : float
Precision of the solution.
verbose : int
Verbosity level. Setting verbose > 0 will display additional information
depending on the solver used.
Returns
-------
coef : array, shape = [n_features] or [n_targets, n_features]
Weight vector(s).
Notes
-----
This function won't compute the intercept.
"""
n_samples, n_features = X.shape
if y.ndim > 2:
raise ValueError("Target y has the wrong shape %s" % str(y.shape))
ravel = False
if y.ndim == 1:
y = y.reshape(-1, 1)
ravel = True
n_samples_, n_targets = y.shape
if n_samples != n_samples_:
raise ValueError("Number of samples in X and y does not correspond:"
" %d != %d" % (n_samples, n_samples_))
has_sw = sample_weight is not None
solver = _deprecate_dense_cholesky(solver)
if solver == 'auto':
# cholesky if it's a dense array and cg in
# any other case
if not sparse.issparse(X) or has_sw:
solver = 'cholesky'
else:
solver = 'sparse_cg'
elif solver == 'lsqr' and not hasattr(sp_linalg, 'lsqr'):
warnings.warn("""lsqr not available on this machine, falling back
to sparse_cg.""")
solver = 'sparse_cg'
if has_sw:
if np.atleast_1d(sample_weight).ndim > 1:
raise ValueError("Sample weights must be 1D array or scalar")
# Sample weight can be implemented via a simple rescaling.
X, y = _rescale_data(X, y, sample_weight)
# There should be either 1 or n_targets penalties
alpha = np.asarray(alpha).ravel()
if alpha.size not in [1, n_targets]:
raise ValueError("Number of targets and number of penalties "
"do not correspond: %d != %d"
% (alpha.size, n_targets))
if alpha.size == 1 and n_targets > 1:
alpha = np.repeat(alpha, n_targets)
if solver not in ('sparse_cg', 'cholesky', 'svd', 'lsqr'):
raise ValueError('Solver %s not understood' % solver)
if solver == 'sparse_cg':
coef = _solve_sparse_cg(X, y, alpha, max_iter, tol, verbose)
elif solver == "lsqr":
coef = _solve_lsqr(X, y, alpha, max_iter, tol)
elif solver == 'cholesky':
if n_features > n_samples:
K = safe_sparse_dot(X, X.T, dense_output=True)
try:
dual_coef = _solve_cholesky_kernel(K, y, alpha)
coef = safe_sparse_dot(X.T, dual_coef, dense_output=True).T
except linalg.LinAlgError:
# use SVD solver if matrix is singular
solver = 'svd'
else:
try:
coef = _solve_cholesky(X, y, alpha)
except linalg.LinAlgError:
# use SVD solver if matrix is singular
solver = 'svd'
if solver == 'svd':
if sparse.issparse(X):
raise TypeError('SVD solver does not support sparse'
' inputs currently')
coef = _solve_svd(X, y, alpha)
if ravel:
# When y was passed as a 1d-array, we flatten the coefficients.
coef = coef.ravel()
return coef
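# Added, hedged usage sketch (not part of scikit-learn; illustrative only):
# with a 1-d target the coefficients returned by ridge_regression are
# flattened to shape (n_features,).
def _ridge_regression_usage_sketch():
    rng = np.random.RandomState(3)
    X = rng.randn(30, 5)
    y = rng.randn(30)
    coef = ridge_regression(X, y, alpha=1.0)
    return coef.shape == (5,)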
class _BaseRidge(six.with_metaclass(ABCMeta, LinearModel)):
@abstractmethod
def __init__(self, alpha=1.0, fit_intercept=True, normalize=False,
copy_X=True, max_iter=None, tol=1e-3, solver="auto"):
self.alpha = alpha
self.fit_intercept = fit_intercept
self.normalize = normalize
self.copy_X = copy_X
self.max_iter = max_iter
self.tol = tol
self.solver = solver
def fit(self, X, y, sample_weight=None):
X, y = check_X_y(X, y, ['csr', 'csc', 'coo'], dtype=np.float,
multi_output=True, y_numeric=True)
if ((sample_weight is not None) and
np.atleast_1d(sample_weight).ndim > 1):
raise ValueError("Sample weights must be 1D array or scalar")
X, y, X_mean, y_mean, X_std = self._center_data(
X, y, self.fit_intercept, self.normalize, self.copy_X,
sample_weight=sample_weight)
solver = _deprecate_dense_cholesky(self.solver)
self.coef_ = ridge_regression(X, y,
alpha=self.alpha,
sample_weight=sample_weight,
max_iter=self.max_iter,
tol=self.tol,
solver=solver)
self._set_intercept(X_mean, y_mean, X_std)
return self
class Ridge(_BaseRidge, RegressorMixin):
"""Linear least squares with l2 regularization.
This model solves a regression model where the loss function is
the linear least squares function and regularization is given by
the l2-norm. Also known as Ridge Regression or Tikhonov regularization.
This estimator has built-in support for multi-variate regression
(i.e., when y is a 2d-array of shape [n_samples, n_targets]).
Parameters
----------
alpha : {float, array-like}
shape = [n_targets]
Small positive values of alpha improve the conditioning of the problem
and reduce the variance of the estimates. Alpha corresponds to
``(2*C)^-1`` in other linear models such as LogisticRegression or
LinearSVC. If an array is passed, penalties are assumed to be specific
to the targets. Hence they must correspond in number.
copy_X : boolean, optional, default True
If True, X will be copied; else, it may be overwritten.
fit_intercept : boolean
Whether to calculate the intercept for this model. If set
to false, no intercept will be used in calculations
(e.g. data is expected to be already centered).
max_iter : int, optional
Maximum number of iterations for conjugate gradient solver.
The default value is determined by scipy.sparse.linalg.
normalize : boolean, optional, default False
If True, the regressors X will be normalized before regression.
solver : {'auto', 'svd', 'cholesky', 'lsqr', 'sparse_cg'}
Solver to use in the computational routines:
- 'auto' chooses the solver automatically based on the type of data.
- 'svd' uses a Singular Value Decomposition of X to compute the Ridge
coefficients. More stable for singular matrices than
'cholesky'.
- 'cholesky' uses the standard scipy.linalg.solve function to
obtain a closed-form solution.
- 'sparse_cg' uses the conjugate gradient solver as found in
scipy.sparse.linalg.cg. As an iterative algorithm, this solver is
more appropriate than 'cholesky' for large-scale data
(possibility to set `tol` and `max_iter`).
- 'lsqr' uses the dedicated regularized least-squares routine
          scipy.sparse.linalg.lsqr. It is the fastest but may not be available
          in old scipy versions. It also uses an iterative procedure.
        The 'cholesky', 'sparse_cg' and 'lsqr' solvers support both dense and
        sparse data; the 'svd' solver requires a dense ``X``.
tol : float
Precision of the solution.
Attributes
----------
coef_ : array, shape = [n_features] or [n_targets, n_features]
Weight vector(s).
See also
--------
RidgeClassifier, RidgeCV, KernelRidge
Examples
--------
>>> from sklearn.linear_model import Ridge
>>> import numpy as np
>>> n_samples, n_features = 10, 5
>>> np.random.seed(0)
>>> y = np.random.randn(n_samples)
>>> X = np.random.randn(n_samples, n_features)
>>> clf = Ridge(alpha=1.0)
>>> clf.fit(X, y) # doctest: +NORMALIZE_WHITESPACE
Ridge(alpha=1.0, copy_X=True, fit_intercept=True, max_iter=None,
normalize=False, solver='auto', tol=0.001)
"""
def __init__(self, alpha=1.0, fit_intercept=True, normalize=False,
copy_X=True, max_iter=None, tol=1e-3, solver="auto"):
super(Ridge, self).__init__(alpha=alpha, fit_intercept=fit_intercept,
normalize=normalize, copy_X=copy_X,
max_iter=max_iter, tol=tol, solver=solver)
def fit(self, X, y, sample_weight=None):
"""Fit Ridge regression model
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Training data
y : array-like, shape = [n_samples] or [n_samples, n_targets]
Target values
sample_weight : float or numpy array of shape [n_samples]
Individual weights for each sample
Returns
-------
self : returns an instance of self.
"""
return super(Ridge, self).fit(X, y, sample_weight=sample_weight)
class RidgeClassifier(LinearClassifierMixin, _BaseRidge):
"""Classifier using Ridge regression.
Parameters
----------
alpha : float
Small positive values of alpha improve the conditioning of the problem
and reduce the variance of the estimates. Alpha corresponds to
``(2*C)^-1`` in other linear models such as LogisticRegression or
LinearSVC.
class_weight : dict, optional
Weights associated with classes in the form
``{class_label : weight}``. If not given, all classes are
supposed to have weight one.
copy_X : boolean, optional, default True
If True, X will be copied; else, it may be overwritten.
fit_intercept : boolean
Whether to calculate the intercept for this model. If set to false, no
intercept will be used in calculations (e.g. data is expected to be
already centered).
max_iter : int, optional
Maximum number of iterations for conjugate gradient solver.
The default value is determined by scipy.sparse.linalg.
normalize : boolean, optional, default False
If True, the regressors X will be normalized before regression.
solver : {'auto', 'svd', 'cholesky', 'lsqr', 'sparse_cg'}
Solver to use in the computational
routines. 'svd' will use a Singular value decomposition to obtain
the solution, 'cholesky' will use the standard
scipy.linalg.solve function, 'sparse_cg' will use the
conjugate gradient solver as found in
        scipy.sparse.linalg.cg while 'auto' will choose the most
appropriate depending on the matrix X. 'lsqr' uses
a direct regularized least-squares routine provided by scipy.
tol : float
Precision of the solution.
Attributes
----------
coef_ : array, shape = [n_features] or [n_classes, n_features]
Weight vector(s).
See also
--------
Ridge, RidgeClassifierCV
Notes
-----
For multi-class classification, n_class classifiers are trained in
a one-versus-all approach. Concretely, this is implemented by taking
advantage of the multi-variate response support in Ridge.
"""
def __init__(self, alpha=1.0, fit_intercept=True, normalize=False,
copy_X=True, max_iter=None, tol=1e-3, class_weight=None,
solver="auto"):
super(RidgeClassifier, self).__init__(
alpha=alpha, fit_intercept=fit_intercept, normalize=normalize,
copy_X=copy_X, max_iter=max_iter, tol=tol, solver=solver)
self.class_weight = class_weight
def fit(self, X, y):
"""Fit Ridge regression model.
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples,n_features]
Training data
y : array-like, shape = [n_samples]
Target values
Returns
-------
self : returns an instance of self.
"""
self._label_binarizer = LabelBinarizer(pos_label=1, neg_label=-1)
Y = self._label_binarizer.fit_transform(y)
if not self._label_binarizer.y_type_.startswith('multilabel'):
y = column_or_1d(y, warn=True)
if self.class_weight:
# get the class weight corresponding to each sample
sample_weight = compute_sample_weight(self.class_weight, y)
else:
sample_weight = None
super(RidgeClassifier, self).fit(X, Y, sample_weight=sample_weight)
return self
@property
def classes_(self):
return self._label_binarizer.classes_
class _RidgeGCV(LinearModel):
"""Ridge regression with built-in Generalized Cross-Validation
It allows efficient Leave-One-Out cross-validation.
This class is not intended to be used directly. Use RidgeCV instead.
Notes
-----
We want to solve (K + alpha*Id)c = y,
where K = X X^T is the kernel matrix.
Let G = (K + alpha*Id)^-1.
Dual solution: c = Gy
Primal solution: w = X^T c
Compute eigendecomposition K = Q V Q^T.
Then G = Q (V + alpha*Id)^-1 Q^T,
where (V + alpha*Id) is diagonal.
    It is thus inexpensive to invert for many alphas.
Let loov be the vector of prediction values for each example
when the model was fitted with all examples but this example.
loov = (KGY - diag(KG)Y) / diag(I-KG)
Let looe be the vector of prediction errors for each example
when the model was fitted with all examples but this example.
looe = y - loov = c / diag(G)
References
----------
http://cbcl.mit.edu/projects/cbcl/publications/ps/MIT-CSAIL-TR-2007-025.pdf
http://www.mit.edu/~9.520/spring07/Classes/rlsslides.pdf
"""
def __init__(self, alphas=[0.1, 1.0, 10.0],
fit_intercept=True, normalize=False,
scoring=None, copy_X=True,
gcv_mode=None, store_cv_values=False):
self.alphas = np.asarray(alphas)
self.fit_intercept = fit_intercept
self.normalize = normalize
self.scoring = scoring
self.copy_X = copy_X
self.gcv_mode = gcv_mode
self.store_cv_values = store_cv_values
def _pre_compute(self, X, y):
# even if X is very sparse, K is usually very dense
K = safe_sparse_dot(X, X.T, dense_output=True)
v, Q = linalg.eigh(K)
QT_y = np.dot(Q.T, y)
return v, Q, QT_y
def _decomp_diag(self, v_prime, Q):
# compute diagonal of the matrix: dot(Q, dot(diag(v_prime), Q^T))
return (v_prime * Q ** 2).sum(axis=-1)
def _diag_dot(self, D, B):
# compute dot(diag(D), B)
if len(B.shape) > 1:
# handle case where B is > 1-d
D = D[(slice(None), ) + (np.newaxis, ) * (len(B.shape) - 1)]
return D * B
def _errors(self, alpha, y, v, Q, QT_y):
# don't construct matrix G, instead compute action on y & diagonal
w = 1.0 / (v + alpha)
c = np.dot(Q, self._diag_dot(w, QT_y))
G_diag = self._decomp_diag(w, Q)
# handle case where y is 2-d
if len(y.shape) != 1:
G_diag = G_diag[:, np.newaxis]
return (c / G_diag) ** 2, c
def _values(self, alpha, y, v, Q, QT_y):
# don't construct matrix G, instead compute action on y & diagonal
w = 1.0 / (v + alpha)
c = np.dot(Q, self._diag_dot(w, QT_y))
G_diag = self._decomp_diag(w, Q)
# handle case where y is 2-d
if len(y.shape) != 1:
G_diag = G_diag[:, np.newaxis]
return y - (c / G_diag), c
def _pre_compute_svd(self, X, y):
if sparse.issparse(X):
raise TypeError("SVD not supported for sparse matrices")
U, s, _ = linalg.svd(X, full_matrices=0)
v = s ** 2
UT_y = np.dot(U.T, y)
return v, U, UT_y
def _errors_svd(self, alpha, y, v, U, UT_y):
w = ((v + alpha) ** -1) - (alpha ** -1)
c = np.dot(U, self._diag_dot(w, UT_y)) + (alpha ** -1) * y
G_diag = self._decomp_diag(w, U) + (alpha ** -1)
if len(y.shape) != 1:
# handle case where y is 2-d
G_diag = G_diag[:, np.newaxis]
return (c / G_diag) ** 2, c
def _values_svd(self, alpha, y, v, U, UT_y):
w = ((v + alpha) ** -1) - (alpha ** -1)
c = np.dot(U, self._diag_dot(w, UT_y)) + (alpha ** -1) * y
G_diag = self._decomp_diag(w, U) + (alpha ** -1)
if len(y.shape) != 1:
# handle case when y is 2-d
G_diag = G_diag[:, np.newaxis]
return y - (c / G_diag), c
def fit(self, X, y, sample_weight=None):
"""Fit Ridge regression model
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Training data
y : array-like, shape = [n_samples] or [n_samples, n_targets]
Target values
sample_weight : float or array-like of shape [n_samples]
Sample weight
Returns
-------
self : Returns self.
"""
X, y = check_X_y(X, y, ['csr', 'csc', 'coo'], dtype=np.float,
multi_output=True, y_numeric=True)
n_samples, n_features = X.shape
X, y, X_mean, y_mean, X_std = LinearModel._center_data(
X, y, self.fit_intercept, self.normalize, self.copy_X,
sample_weight=sample_weight)
gcv_mode = self.gcv_mode
with_sw = len(np.shape(sample_weight))
if gcv_mode is None or gcv_mode == 'auto':
if sparse.issparse(X) or n_features > n_samples or with_sw:
gcv_mode = 'eigen'
else:
gcv_mode = 'svd'
elif gcv_mode == "svd" and with_sw:
# FIXME non-uniform sample weights not yet supported
warnings.warn("non-uniform sample weights unsupported for svd, "
"forcing usage of eigen")
gcv_mode = 'eigen'
if gcv_mode == 'eigen':
_pre_compute = self._pre_compute
_errors = self._errors
_values = self._values
elif gcv_mode == 'svd':
# assert n_samples >= n_features
_pre_compute = self._pre_compute_svd
_errors = self._errors_svd
_values = self._values_svd
else:
raise ValueError('bad gcv_mode "%s"' % gcv_mode)
v, Q, QT_y = _pre_compute(X, y)
n_y = 1 if len(y.shape) == 1 else y.shape[1]
cv_values = np.zeros((n_samples * n_y, len(self.alphas)))
C = []
scorer = check_scoring(self, scoring=self.scoring, allow_none=True)
error = scorer is None
for i, alpha in enumerate(self.alphas):
weighted_alpha = (sample_weight * alpha
if sample_weight is not None
else alpha)
if error:
out, c = _errors(weighted_alpha, y, v, Q, QT_y)
else:
out, c = _values(weighted_alpha, y, v, Q, QT_y)
cv_values[:, i] = out.ravel()
C.append(c)
if error:
best = cv_values.mean(axis=0).argmin()
else:
# The scorer wants an object that will make the predictions, but
# they are already computed efficiently by _RidgeGCV. This
# identity_estimator will just return them.
def identity_estimator():
pass
identity_estimator.decision_function = lambda y_predict: y_predict
identity_estimator.predict = lambda y_predict: y_predict
out = [scorer(identity_estimator, y.ravel(), cv_values[:, i])
for i in range(len(self.alphas))]
best = np.argmax(out)
self.alpha_ = self.alphas[best]
self.dual_coef_ = C[best]
self.coef_ = safe_sparse_dot(self.dual_coef_.T, X)
self._set_intercept(X_mean, y_mean, X_std)
if self.store_cv_values:
if len(y.shape) == 1:
cv_values_shape = n_samples, len(self.alphas)
else:
cv_values_shape = n_samples, n_y, len(self.alphas)
self.cv_values_ = cv_values.reshape(cv_values_shape)
return self
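# --- Added for clarity (not part of the original scikit-learn module) ---
# A minimal, hedged sanity check of the eigen-decomposition shortcut used in
# _RidgeGCV._errors/_values above: it verifies against a brute-force inverse
# that c = G*y and diag(G) are reproduced, where G = (X X^T + alpha*I)^-1.
# The function name is illustrative only; it assumes `np` (numpy) and
# `linalg` (scipy.linalg) are imported at the top of this module, as they are
# used elsewhere in this file. Nothing here runs at import time.
def _check_gcv_eigen_shortcut(n_samples=6, n_features=3, alpha=0.7, seed=0):
    rng = np.random.RandomState(seed)
    X = rng.randn(n_samples, n_features)
    y = rng.randn(n_samples)
    K = np.dot(X, X.T)
    # brute-force reference: explicitly invert (K + alpha * I)
    G = linalg.inv(K + alpha * np.eye(n_samples))
    # shortcut: eigendecompose K once; each alpha then costs O(n^2)
    v, Q = linalg.eigh(K)
    w = 1.0 / (v + alpha)
    c_fast = np.dot(Q, w * np.dot(Q.T, y))
    G_diag_fast = (w * Q ** 2).sum(axis=-1)
    assert np.allclose(c_fast, np.dot(G, y))
    assert np.allclose(G_diag_fast, np.diag(G))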
class _BaseRidgeCV(LinearModel):
def __init__(self, alphas=np.array([0.1, 1.0, 10.0]),
fit_intercept=True, normalize=False, scoring=None,
cv=None, gcv_mode=None,
store_cv_values=False):
self.alphas = alphas
self.fit_intercept = fit_intercept
self.normalize = normalize
self.scoring = scoring
self.cv = cv
self.gcv_mode = gcv_mode
self.store_cv_values = store_cv_values
def fit(self, X, y, sample_weight=None):
"""Fit Ridge regression model
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Training data
y : array-like, shape = [n_samples] or [n_samples, n_targets]
Target values
sample_weight : float or array-like of shape [n_samples]
Sample weight
Returns
-------
self : Returns self.
"""
if self.cv is None:
estimator = _RidgeGCV(self.alphas,
fit_intercept=self.fit_intercept,
normalize=self.normalize,
scoring=self.scoring,
gcv_mode=self.gcv_mode,
store_cv_values=self.store_cv_values)
estimator.fit(X, y, sample_weight=sample_weight)
self.alpha_ = estimator.alpha_
if self.store_cv_values:
self.cv_values_ = estimator.cv_values_
else:
if self.store_cv_values:
raise ValueError("cv!=None and store_cv_values=True "
" are incompatible")
parameters = {'alpha': self.alphas}
# FIXME: sample_weight must be split into training/validation data
# too!
#fit_params = {'sample_weight' : sample_weight}
fit_params = {}
gs = GridSearchCV(Ridge(fit_intercept=self.fit_intercept),
parameters, fit_params=fit_params, cv=self.cv)
gs.fit(X, y)
estimator = gs.best_estimator_
self.alpha_ = gs.best_estimator_.alpha
self.coef_ = estimator.coef_
self.intercept_ = estimator.intercept_
return self
class RidgeCV(_BaseRidgeCV, RegressorMixin):
"""Ridge regression with built-in cross-validation.
By default, it performs Generalized Cross-Validation, which is a form of
efficient Leave-One-Out cross-validation.
Parameters
----------
alphas : numpy array of shape [n_alphas]
Array of alpha values to try.
Small positive values of alpha improve the conditioning of the
problem and reduce the variance of the estimates.
Alpha corresponds to ``(2*C)^-1`` in other linear models such as
LogisticRegression or LinearSVC.
fit_intercept : boolean
Whether to calculate the intercept for this model. If set
to false, no intercept will be used in calculations
(e.g. data is expected to be already centered).
normalize : boolean, optional, default False
If True, the regressors X will be normalized before regression.
scoring : string, callable or None, optional, default: None
A string (see model evaluation documentation) or
a scorer callable object / function with signature
``scorer(estimator, X, y)``.
cv : integer or cross-validation generator, optional
If None, Generalized Cross-Validation (efficient Leave-One-Out)
will be used.
If an integer is passed, it is the number of folds for KFold cross
validation. Specific cross-validation objects can be passed, see
sklearn.cross_validation module for the list of possible objects
gcv_mode : {None, 'auto', 'svd', 'eigen'}, optional
Flag indicating which strategy to use when performing
Generalized Cross-Validation. Options are::
'auto' : use svd if n_samples > n_features or when X is a sparse
matrix, otherwise use eigen
'svd' : force computation via singular value decomposition of X
(does not work for sparse matrices)
'eigen' : force computation via eigendecomposition of X^T X
The 'auto' mode is the default and is intended to pick the cheaper
option of the two depending upon the shape and format of the training
data.
store_cv_values : boolean, default=False
Flag indicating if the cross-validation values corresponding to
each alpha should be stored in the `cv_values_` attribute (see
below). This flag is only compatible with `cv=None` (i.e. using
Generalized Cross-Validation).
Attributes
----------
cv_values_ : array, shape = [n_samples, n_alphas] or \
shape = [n_samples, n_targets, n_alphas], optional
Cross-validation values for each alpha (if `store_cv_values=True` and \
`cv=None`). After `fit()` has been called, this attribute will \
contain the mean squared errors (by default) or the values of the \
`scoring` function (if provided in the constructor).
coef_ : array, shape = [n_features] or [n_targets, n_features]
Weight vector(s).
alpha_ : float
Estimated regularization parameter.
intercept_ : float | array, shape = (n_targets,)
Independent term in decision function. Set to 0.0 if
``fit_intercept = False``.
See also
--------
Ridge: Ridge regression
RidgeClassifier: Ridge classifier
RidgeClassifierCV: Ridge classifier with built-in cross validation
"""
pass
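# --- Added for clarity (not part of the original scikit-learn module) ---
# A minimal, hedged usage sketch for RidgeCV: fit on a small synthetic
# problem and read off the alpha selected by (generalized) cross-validation.
# Wrapped in a function so nothing executes at import time; the data and the
# alpha grid are purely illustrative.
def _example_ridge_cv():
    rng = np.random.RandomState(0)
    X = rng.randn(50, 10)
    y = np.dot(X, rng.randn(10)) + 0.1 * rng.randn(50)
    model = RidgeCV(alphas=np.array([0.01, 0.1, 1.0, 10.0]))
    model.fit(X, y)
    # alpha_ is the selected regularization strength, coef_ the fitted weights
    return model.alpha_, model.coef_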
class RidgeClassifierCV(LinearClassifierMixin, _BaseRidgeCV):
"""Ridge classifier with built-in cross-validation.
By default, it performs Generalized Cross-Validation, which is a form of
efficient Leave-One-Out cross-validation. Currently, only the n_features >
n_samples case is handled efficiently.
Parameters
----------
alphas : numpy array of shape [n_alphas]
Array of alpha values to try.
Small positive values of alpha improve the conditioning of the
problem and reduce the variance of the estimates.
Alpha corresponds to ``(2*C)^-1`` in other linear models such as
LogisticRegression or LinearSVC.
fit_intercept : boolean
Whether to calculate the intercept for this model. If set
to false, no intercept will be used in calculations
(e.g. data is expected to be already centered).
normalize : boolean, optional, default False
If True, the regressors X will be normalized before regression.
scoring : string, callable or None, optional, default: None
A string (see model evaluation documentation) or
a scorer callable object / function with signature
``scorer(estimator, X, y)``.
cv : cross-validation generator, optional
If None, Generalized Cross-Validation (efficient Leave-One-Out)
will be used.
class_weight : dict, optional
Weights associated with classes in the form
``{class_label : weight}``. If not given, all classes are
supposed to have weight one.
Attributes
----------
cv_values_ : array, shape = [n_samples, n_alphas] or \
shape = [n_samples, n_responses, n_alphas], optional
Cross-validation values for each alpha (if `store_cv_values=True` and
`cv=None`). After `fit()` has been called, this attribute will contain \
the mean squared errors (by default) or the values of the \
`scoring` function (if provided in the constructor).
coef_ : array, shape = [n_features] or [n_targets, n_features]
Weight vector(s).
alpha_ : float
Estimated regularization parameter
See also
--------
Ridge: Ridge regression
RidgeClassifier: Ridge classifier
RidgeCV: Ridge regression with built-in cross validation
Notes
-----
For multi-class classification, n_class classifiers are trained in
a one-versus-all approach. Concretely, this is implemented by taking
advantage of the multi-variate response support in Ridge.
"""
def __init__(self, alphas=np.array([0.1, 1.0, 10.0]), fit_intercept=True,
normalize=False, scoring=None, cv=None, class_weight=None):
super(RidgeClassifierCV, self).__init__(
alphas=alphas, fit_intercept=fit_intercept, normalize=normalize,
scoring=scoring, cv=cv)
self.class_weight = class_weight
def fit(self, X, y, sample_weight=None):
"""Fit the ridge classifier.
Parameters
----------
X : array-like, shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples
and n_features is the number of features.
y : array-like, shape (n_samples,)
Target values.
sample_weight : float or numpy array of shape (n_samples,)
Sample weight.
Returns
-------
self : object
Returns self.
"""
if sample_weight is None:
sample_weight = 1.
self._label_binarizer = LabelBinarizer(pos_label=1, neg_label=-1)
Y = self._label_binarizer.fit_transform(y)
if not self._label_binarizer.y_type_.startswith('multilabel'):
y = column_or_1d(y, warn=True)
# modify the sample weights with the corresponding class weight
sample_weight = (sample_weight *
compute_sample_weight(self.class_weight, y))
_BaseRidgeCV.fit(self, X, Y, sample_weight=sample_weight)
return self
@property
def classes_(self):
return self._label_binarizer.classes_
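# --- Added for clarity (not part of the original scikit-learn module) ---
# A minimal, hedged usage sketch for RidgeClassifierCV on a toy two-class
# problem; the multi-class case works the same way (one-versus-all under the
# hood, as noted in the class docstring). Illustrative only; wrapped in a
# function so nothing runs at import time.
def _example_ridge_classifier_cv():
    rng = np.random.RandomState(0)
    X = rng.randn(40, 5)
    y = (X[:, 0] + 0.1 * rng.randn(40) > 0).astype(int)
    clf = RidgeClassifierCV(alphas=np.array([0.1, 1.0, 10.0]))
    clf.fit(X, y)
    # predictions come back in the original label space via the LabelBinarizer
    return clf.alpha_, clf.predict(X)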
|
bsd-3-clause
|
rubikloud/scikit-learn
|
examples/applications/plot_tomography_l1_reconstruction.py
|
81
|
5461
|
"""
======================================================================
Compressive sensing: tomography reconstruction with L1 prior (Lasso)
======================================================================
This example shows the reconstruction of an image from a set of parallel
projections, acquired along different angles. Such a dataset is acquired in
**computed tomography** (CT).
Without any prior information on the sample, the number of projections
required to reconstruct the image is of the order of the linear size
``l`` of the image (in pixels). For simplicity we consider here a sparse
image, where only pixels on the boundary of objects have a non-zero
value. Such data could correspond for example to a cellular material.
Note however that most images are sparse in a different basis, such as
the Haar wavelets. Only ``l/7`` projections are acquired, therefore it is
necessary to use prior information available on the sample (its
sparsity): this is an example of **compressive sensing**.
The tomography projection operation is a linear transformation. In
addition to the data-fidelity term corresponding to a linear regression,
we penalize the L1 norm of the image to account for its sparsity. The
resulting optimization problem is called the :ref:`lasso`. We use the
class :class:`sklearn.linear_model.Lasso`, which uses the coordinate descent
algorithm. Importantly, this implementation is more computationally efficient
on a sparse matrix than the projection operator used here.
The reconstruction with L1 penalization gives a result with zero error
(all pixels are successfully labeled with 0 or 1), even if noise was
added to the projections. In comparison, an L2 penalization
(:class:`sklearn.linear_model.Ridge`) produces a large number of labeling
errors for the pixels. Important artifacts are observed on the
reconstructed image, contrary to the L1 penalization. Note in particular
the circular artifact separating the pixels in the corners, which have
contributed to fewer projections than the central disk.
"""
print(__doc__)
# Author: Emmanuelle Gouillart <[email protected]>
# License: BSD 3 clause
import numpy as np
from scipy import sparse
from scipy import ndimage
from sklearn.linear_model import Lasso
from sklearn.linear_model import Ridge
import matplotlib.pyplot as plt
def _weights(x, dx=1, orig=0):
x = np.ravel(x)
floor_x = np.floor((x - orig) / dx)
alpha = (x - orig - floor_x * dx) / dx
return np.hstack((floor_x, floor_x + 1)), np.hstack((1 - alpha, alpha))
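# Added note (not in the original example): _weights maps each (rotated)
# x-coordinate onto its two neighbouring detector bins, floor(x) and
# floor(x) + 1, with linear-interpolation weights (1 - alpha) and alpha, so
# every pixel spreads its contribution over two adjacent projection bins.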
def _generate_center_coordinates(l_x):
X, Y = np.mgrid[:l_x, :l_x].astype(np.float64)
center = l_x / 2.
X += 0.5 - center
Y += 0.5 - center
return X, Y
def build_projection_operator(l_x, n_dir):
""" Compute the tomography design matrix.
Parameters
----------
l_x : int
linear size of image array
n_dir : int
number of angles at which projections are acquired.
Returns
-------
p : sparse matrix of shape (n_dir l_x, l_x**2)
"""
X, Y = _generate_center_coordinates(l_x)
angles = np.linspace(0, np.pi, n_dir, endpoint=False)
data_inds, weights, camera_inds = [], [], []
data_unravel_indices = np.arange(l_x ** 2)
data_unravel_indices = np.hstack((data_unravel_indices,
data_unravel_indices))
for i, angle in enumerate(angles):
Xrot = np.cos(angle) * X - np.sin(angle) * Y
inds, w = _weights(Xrot, dx=1, orig=X.min())
mask = np.logical_and(inds >= 0, inds < l_x)
weights += list(w[mask])
camera_inds += list(inds[mask] + i * l_x)
data_inds += list(data_unravel_indices[mask])
proj_operator = sparse.coo_matrix((weights, (camera_inds, data_inds)))
return proj_operator
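# --- Added for clarity (not part of the original example) ---
# A small, hedged sanity check of the design matrix built above: for an image
# of linear size l_x and n_dir acquisition angles, the operator has one column
# per pixel (l_x ** 2) and at most one row per (angle, detector-bin) pair
# (n_dir * l_x). The function name and default sizes are illustrative only and
# it is never called by the script.
def _check_projection_operator_shape(l_x=16, n_dir=4):
    op = build_projection_operator(l_x, n_dir)
    assert op.shape[1] == l_x ** 2
    assert op.shape[0] <= n_dir * l_x
    return op.shape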
def generate_synthetic_data():
""" Synthetic binary data """
rs = np.random.RandomState(0)
n_pts = 36
x, y = np.ogrid[0:l, 0:l]
mask_outer = (x - l / 2) ** 2 + (y - l / 2) ** 2 < (l / 2) ** 2
mask = np.zeros((l, l))
points = l * rs.rand(2, n_pts)
mask[(points[0]).astype(int), (points[1]).astype(int)] = 1
mask = ndimage.gaussian_filter(mask, sigma=l / n_pts)
res = np.logical_and(mask > mask.mean(), mask_outer)
return res - ndimage.binary_erosion(res)
# Generate synthetic images, and projections
l = 128
proj_operator = build_projection_operator(l, l // 7)
data = generate_synthetic_data()
proj = proj_operator * data.ravel()[:, np.newaxis]
proj += 0.15 * np.random.randn(*proj.shape)
# Reconstruction with L2 (Ridge) penalization
rgr_ridge = Ridge(alpha=0.2)
rgr_ridge.fit(proj_operator, proj.ravel())
rec_l2 = rgr_ridge.coef_.reshape(l, l)
# Reconstruction with L1 (Lasso) penalization
# the best value of alpha was determined using cross validation
# with LassoCV
rgr_lasso = Lasso(alpha=0.001)
rgr_lasso.fit(proj_operator, proj.ravel())
rec_l1 = rgr_lasso.coef_.reshape(l, l)
plt.figure(figsize=(8, 3.3))
plt.subplot(131)
plt.imshow(data, cmap=plt.cm.gray, interpolation='nearest')
plt.axis('off')
plt.title('original image')
plt.subplot(132)
plt.imshow(rec_l2, cmap=plt.cm.gray, interpolation='nearest')
plt.title('L2 penalization')
plt.axis('off')
plt.subplot(133)
plt.imshow(rec_l1, cmap=plt.cm.gray, interpolation='nearest')
plt.title('L1 penalization')
plt.axis('off')
plt.subplots_adjust(hspace=0.01, wspace=0.01, top=1, bottom=0, left=0,
right=1)
plt.show()
|
bsd-3-clause
|
Akshay0724/scikit-learn
|
benchmarks/bench_glm.py
|
100
|
1515
|
"""
A comparison of different methods in GLM
Data comes from a random square matrix.
"""
from datetime import datetime
import numpy as np
from sklearn import linear_model
from sklearn.utils.bench import total_seconds
if __name__ == '__main__':
import matplotlib.pyplot as plt
n_iter = 40
time_ridge = np.empty(n_iter)
time_ols = np.empty(n_iter)
time_lasso = np.empty(n_iter)
dimensions = 10 * np.arange(n_iter) + 3  # matches n_samples/n_features used below
for i in range(n_iter):
print('Iteration %s of %s' % (i, n_iter))
n_samples, n_features = 10 * i + 3, 10 * i + 3
X = np.random.randn(n_samples, n_features)
Y = np.random.randn(n_samples)
start = datetime.now()
ridge = linear_model.Ridge(alpha=1.)
ridge.fit(X, Y)
time_ridge[i] = total_seconds(datetime.now() - start)
start = datetime.now()
ols = linear_model.LinearRegression()
ols.fit(X, Y)
time_ols[i] = total_seconds(datetime.now() - start)
start = datetime.now()
lasso = linear_model.LassoLars()
lasso.fit(X, Y)
time_lasso[i] = total_seconds(datetime.now() - start)
plt.figure('scikit-learn GLM benchmark results')
plt.xlabel('Dimensions')
plt.ylabel('Time (s)')
plt.plot(dimensions, time_ridge, color='r')
plt.plot(dimensions, time_ols, color='g')
plt.plot(dimensions, time_lasso, color='b')
plt.legend(['Ridge', 'OLS', 'LassoLars'], loc='upper left')
plt.axis('tight')
plt.show()
|
bsd-3-clause
|
jigargandhi/UdemyMachineLearning
|
Machine Learning A-Z Template Folder/Part 2 - Regression/Section 5 - Multiple Linear Regression/j_multiple_linear_regression.py
|
1
|
1546
|
# Multiple Linear Regression
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
dataset = pd.read_csv('50_Startups.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, 4].values
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
# convert the categorical labels to numbers
labelEncoder_X = LabelEncoder()
X[:,3]= labelEncoder_X.fit_transform(X[:,3])
# one-hot encode the categorical column (referenced by its index, 3, in the array)
onehotencoder = OneHotEncoder(categorical_features=[3])
X = onehotencoder.fit_transform(X).toarray()
# avoiding the dummy variable trap: drop one dummy column
X = X[: ,1:]
# splitting the dataset into training and test sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
# fitting multiple linear regression to the training set
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, y_train)
# predicting the test set results
y_pred = regressor.predict(X_test)
# backward elimination: add a column of ones for the intercept term
import statsmodels.api as sm
X = np.append(arr=np.ones((50, 1)).astype(int), values=X, axis=1)
X_opt = X[:, [0, 1, 2, 3, 4, 5]]
regressor_ols = sm.OLS(endog = y, exog = X_opt).fit()
regressor_ols.summary()
X_opt = X[:, [0, 3, 4, 5]]
regressor_ols = sm.OLS(endog = y, exog = X_opt).fit()
regressor_ols.summary()
X_opt = X[:, [0, 3, 5]]
regressor_ols = sm.OLS(endog = y, exog = X_opt).fit()
regressor_ols.summary()
X_opt = X[:, [0, 3]]
regressor_ols = sm.OLS(endog = y, exog = X_opt).fit()
regressor_ols.summary()
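# --- Added for clarity (not part of the original course script) ---
# A hedged sketch that automates the manual backward-elimination steps above:
# repeatedly drop the predictor with the highest p-value until every remaining
# p-value falls below a chosen significance level. The function name and the
# 0.05 threshold are illustrative choices, not part of the original material.
def backward_elimination(X_be, y_be, significance_level=0.05):
    X_be = X_be.copy()
    while True:
        model = sm.OLS(endog=y_be, exog=X_be).fit()
        max_p = model.pvalues.max()
        if max_p <= significance_level or X_be.shape[1] == 1:
            return model, X_be
        # drop the column whose coefficient has the largest p-value
        X_be = np.delete(X_be, model.pvalues.argmax(), axis=1)
# e.g. final_model, X_selected = backward_elimination(X, y)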
|
mit
|
GoogleCloudPlatform/keras-idiomatic-programmer
|
zoo/dcgan/dcgan_c.py
|
1
|
8985
|
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# DCGAN + composable (2016)
# Paper: https://arxiv.org/pdf/1511.06434.pdf
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Flatten, Reshape, Dropout, Dense, ReLU
from tensorflow.keras.layers import LeakyReLU, Activation, ZeroPadding2D
from tensorflow.keras.layers import Conv2D, Conv2DTranspose, BatchNormalization
from tensorflow.keras.regularizers import l2
from tensorflow.keras.optimizers import Adam
import numpy as np
import matplotlib.pyplot as plt
import sys
sys.path.append('../')
from models_c import Composable
class DCGAN(Composable):
# Initial Hyperparameters
hyperparameters = { 'initializer': 'glorot_uniform',
'regularizer': None,
'relu_clip' : None,
'bn_epsilon' : None,
'use_bias' : False
}
def __init__(self, latent=100, input_shape=(28, 28, 1),
**hyperparameters):
""" Construct a Deep Convolutional GAN (DC-GAN)
latent : dimension of latent space
input_shape : input shape
initializer : kernel initializer
regularizer : kernel regularizer
relu_clip : max value for ReLU
bn_epsilon : epsilon for batch normalization
use_bias : whether to include bias
"""
Composable.__init__(self, input_shape, None, self.hyperparameters, **hyperparameters)
# Construct the generator
self.g = self.generator(latent=latent, height=input_shape[0], channels=input_shape[2])
# Construct the discriminator
self.d = self.discriminator(input_shape=input_shape, optimizer=Adam(0.0002, 0.5))
# Construct the combined (stacked) generator/discriminator model (GAN)
self.model = self.gan(latent=latent, optimizer=Adam(0.0002, 0.5))
def generator(self, latent=100, height=28, channels=1):
""" Construct the Generator
latent : dimension of latent space
channels : number of channels
"""
def stem(inputs):
factor = height // 4
x = self.Dense(inputs, 128 * factor * factor)
x = self.ReLU(x)
x = Reshape((factor, factor, 128))(x)
return x
def learner(x):
x = self.Conv2DTranspose(x, 128, (3, 3), strides=2, padding='same')
x = self.Conv2D(x, 128, (3, 3), padding="same")
x = self.BatchNormalization(x, momentum=0.8)
x = self.ReLU(x)
x = self.Conv2DTranspose(x, 64, (3, 3), strides=2, padding='same')
x = self.Conv2D(x, 64, (3, 3), padding="same")
x = self.BatchNormalization(x, momentum=0.8)
x = self.ReLU(x)
return x
def classifier(x):
outputs = self.Conv2D(x, channels, (3, 3), activation='tanh', padding="same")
return outputs
# Construct the Generator
inputs = Input(shape=(latent,))
x = stem(inputs)
x = learner(x)
outputs = classifier(x)
return Model(inputs, outputs)
def discriminator(self, input_shape=(28, 28, 1), optimizer=Adam(0.0002, 0.5)):
""" Construct the discriminator
input_shape : the input shape of the images
optimizer : the optimizer
"""
def stem(inputs):
x = self.Conv2D(inputs, 32, (3, 3), strides=2, padding="same")
x = LeakyReLU(alpha=0.2)(x)
x = Dropout(0.25)(x)
return x
def learner(x):
x = self.Conv2D(x, 64, (3, 3), strides=2, padding="same")
x = ZeroPadding2D(padding=((0,1),(0,1)))(x)
x = self.BatchNormalization(x, momentum=0.8)
x = LeakyReLU(alpha=0.2)(x)
x = Dropout(0.25)(x)
x = self.Conv2D(x, 128, (3, 3), strides=2, padding="same")
x = self.BatchNormalization(x, momentum=0.8)
x = LeakyReLU(alpha=0.2)(x)
x = Dropout(0.25)(x)
x = self.Conv2D(x, 256, (3, 3), strides=1, padding="same")
x = self.BatchNormalization(x, momentum=0.8)
x = LeakyReLU(alpha=0.2)(x)
x = Dropout(0.25)(x)
return x
def classifier(x):
x = Flatten()(x)
outputs = self.Dense(x, 1, activation='sigmoid')
return outputs
# Construct the discriminator
inputs = Input(shape=input_shape)
x = stem(inputs)
x = learner(x)
outputs = classifier(x)
model = Model(inputs, outputs)
model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
# For the combined model we will only train the generator: freezing the
# discriminator here (after it has been compiled above) leaves the
# standalone discriminator trainable, while its weights stay fixed
# inside the stacked GAN model built in gan().
model.trainable = False
return model
def gan(self, latent=100, optimizer=Adam(0.0002, 0.5)):
""" Construct the Combined Generator/Discrimator (GAN)
latent : the latent space dimension
optimizer : the optimizer
"""
# The generator takes noise as input and generates fake images
noise = Input(shape=(latent,))
fake = self.g(noise)
# The discriminator takes generated images as input and determines if real or fake
valid = self.d(fake)
# The combined model (stacked generator and discriminator)
# Trains the generator to fool the discriminator
model = Model(noise, valid)
model.compile(loss='binary_crossentropy', optimizer=optimizer)
return model
def train(self, images, latent=100, epochs=4000, batch_size=128, save_interval=50):
""" Train the GAN
images : images from the training data
latent : dimension of the latent space
credit: https://github.com/eriklindernoren
"""
# Adversarial ground truths
valid_labels = np.ones((batch_size, 1))
fake_labels = np.zeros((batch_size, 1))
for epoch in range(epochs):
# Train the Discriminator
# Select a random half of the images
idx = np.random.randint(0, images.shape[0], batch_size)
batch = images[idx]
# Sample noise and generate a batch of new images
noise = np.random.normal(0, 1, (batch_size, latent))
fakes = self.g.predict(noise)
# Train the discriminator (real classified as ones and generated as zeros)
d_loss_real = self.d.train_on_batch(batch, valid_labels)
d_loss_fake = self.d.train_on_batch(fakes, fake_labels)
d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)
# Train the Generator
# Train the generator (wants discriminator to mistake images as real)
g_loss = self.model.train_on_batch(noise, valid_labels)
# Plot the progress
print ("%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" % (epoch, d_loss[0], 100*d_loss[1], g_loss))
# If at save interval => save generated image samples
if epoch % save_interval == 0:
self.save_imgs(epoch)
def save_imgs(self, epoch, latent=100):
import os
if not os.path.isdir('images'):
os.mkdir('images')
r, c = 5, 5
noise = np.random.normal(0, 1, (r * c, latent))
gen_imgs = self.g.predict(noise)
# Rescale images 0 - 1
gen_imgs = 0.5 * gen_imgs + 0.5
fig, axs = plt.subplots(r, c)
cnt = 0
for i in range(r):
for j in range(c):
#MNIST: axs[i,j].imshow(gen_imgs[cnt, :,:,0], cmap='gray')
axs[i,j].imshow(gen_imgs[cnt, :,:,0])
axs[i,j].axis('off')
cnt += 1
fig.savefig("images/mnist_%d.png" % epoch)
plt.close()
# Example
# model = DCGAN()
def example():
# Build/Train a DCGAN for CIFAR-10
gan = DCGAN(input_shape=(32, 32, 3))
gan.model.summary()
from tensorflow.keras.datasets import cifar10
(x_train, _), (_, _) = cifar10.load_data()
x_train, _ = gan.normalization(x_train, centered=True)
gan.train(x_train, latent=100, epochs=6000)
# example()
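# --- Added for clarity (not part of the original file) ---
# A hedged sketch showing how new images could be sampled from a trained
# generator: feed Gaussian noise of the latent dimension through gan.g and
# rescale the tanh output from [-1, 1] back to [0, 1] for display. The
# function name is illustrative; `gan` is assumed to be a trained DCGAN
# instance such as the one produced by example().
def sample_images(gan, n_images=16, latent=100):
    noise = np.random.normal(0, 1, (n_images, latent))
    images = gan.g.predict(noise)
    return 0.5 * images + 0.5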
|
apache-2.0
|
philouc/pyhrf
|
python/pyhrf/jde/beta.py
|
1
|
252614
|
# -*- coding: utf-8 -*-
import numpy as np
import numpy as _np  # kept: the float_hires checks below use the _np alias
import scipy.interpolate
from pyhrf.boldsynth.pottsfield.swendsenwang import *
from pyhrf.boldsynth.field import *
import pyhrf
from pyhrf.stats.random import truncRandn, erf
from pyhrf.tools import resampleToGrid, get_2Dtable_string
from pyhrf import xmlio
from pyhrf.xmlio.xmlnumpy import NumpyXMLHandler
from pyhrf.ndarray import xndarray
from samplerbase import *
if hasattr(_np, 'float96'):
float_hires = np.float96
elif hasattr(_np, 'float128'):
float_hires = np.float128
else:
float_hires = np.float64
#################################
# Partition Function estimation #
#################################
def Cpt_Expected_U_graph(RefGraph,beta,LabelsNb,SamplesNb,GraphWeight=None,GraphNodesLabels=None,GraphLinks=None,RefGrphNgbhPosi=None):
"""
Useless now!
Estimates the expectation of U for a given normalization constant Beta and
a given mask shape.
Swendsen-Wang sampling is used to assess the expectation on significant
images depending of beta.
input:
* RefGraph: List which contains the connectivity graph. Each entry
represents a node of the graph and contains the list of
its neighbors entry location in the graph.
ex: RefGraph[2][3]=10 means 3rd neighbour of the 2nd
node is the 10th node.
=> There exists i such that RefGraph[10][i]=2
* beta: normalization constant
* LabelsNb: Labels number
* SamplesNb: Samples number for the U expectation estimation
* GraphWeight: Same shape as RefGraph. Each entry is the weight of the
corresponding edge in RefGraph. If not defined
the weights are set to 1.0.
* GraphNodesLabels: Optional list containing the nodes labels.
The sampler modifies its values as a function
of beta and NbLabels. At this level this variable
is seen as temporary and will be modified.
Defining it slightly increases the calculation
times.
* GraphLinks: Same shape as RefGraph. Each entry indicates if the link
of the corresponding edge in RefGraph is considered
(if yes ...=1 else ...=0). At this level this variable
is seen as temporary and will be modified. Defining
it slightly increases the calculation times.
* RefGrphNgbhPosi: Same shape as RefGraph. RefGrphNgbhPosi[i][j]
indicates for which k is the link to i in
RefGraph[RefGraph[i][j]][k]. This optional list
is never modified.
output:
* ExpectU: U expectation
"""
#initialization
SumU=0.
if GraphWeight is None:
GraphWeight=CptDefaultGraphWeight(RefGraph)
if GraphNodesLabels is None:
GraphNodesLabels=CptDefaultGraphNodesLabels(RefGraph)
if GraphLinks is None:
GraphLinks=CptDefaultGraphLinks(RefGraph)
if RefGrphNgbhPosi is None:
RefGrphNgbhPosi=CptRefGrphNgbhPosi(RefGraph)
#initialize the labels and run a burn-in sweep so that subsequent samples are significant in the expectation calculation
for i in xrange(len(GraphNodesLabels)):
GraphNodesLabels[i]=0
SwendsenWangSampler_graph(RefGraph,GraphNodesLabels,beta,LabelsNb,
GraphLinks=GraphLinks,
RefGrphNgbhPosi=RefGrphNgbhPosi)
#estimation
for i in xrange(SamplesNb):
SwendsenWangSampler_graph(RefGraph,GraphNodesLabels,beta,LabelsNb,
GraphLinks=GraphLinks,
RefGrphNgbhPosi=RefGrphNgbhPosi)
Utemp=Cpt_U_graph(RefGraph,GraphNodesLabels,GraphWeight=GraphWeight)
SumU=SumU+Utemp
ExpectU=SumU/SamplesNb
return ExpectU
def Estim_lnZ_ngbhd_graph(RefGraph,beta_Ngbhd,beta_Ref,lnZ_ref,VecU_ref,
LabelsNb):
"""
Estimates ln(Z) for beta=beta_Ngbhd. beta_Ngbhd is assumed to be close to
beta_Ref, for which ln(Z) is known (lnZ_ref) and the energies U of fields
generated according to it have already been computed (VecU_ref).
input:
* RefGraph: List which contains the connectivity graph. Each entry
represents a node of the graph and contains the list of
its neighbors entry location in the graph.
ex: RefGraph[2][3]=10 means 3rd neighbour of the 2nd node
is the 10th node.
=> There exists i such that RefGraph[10][i]=2
* beta_Ngbhd: normalization constant for which ln(Z) will be estimated
* beta_Ref: normalization constant close to beta_Ngbhd for which ln(Z)
already known
* lnZ_ref: ln(Z) for beta=beta_Ref
* VecU_ref: energy U of fields generated according to beta_Ref
* LabelsNb: Labels number
output:
* lnZ_Ngbhd: ln(Z) for beta=beta_Ngbhd
"""
#print 'VecU_ref:'
#print VecU_ref
LocSum = 0.
#reference formulation
#for i in xrange(VecU_ref.shape[0]):
# LocSum=LocSum+(np.exp(beta_Ngbhd*VecU_ref[i])/np.exp(beta_Ref*VecU_ref[i]))
#LocSum=np.log(LocSum/VecU_ref.shape[0])
#equivalent and numerically more stable formulation
for i in xrange(VecU_ref.shape[0]):
LocSum = LocSum + (np.exp((beta_Ngbhd - beta_Ref) * VecU_ref[i] -
np.log(VecU_ref.shape[0])))
LocSum = np.log(LocSum)
lnZ_Ngbhd = lnZ_ref + LocSum
#print 'lnZ_Ngbhd:', lnZ_Ngbhd
return lnZ_Ngbhd
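# Added note (not in the original file): the update above is the standard
# importance-sampling identity for partition functions,
#   Z(beta') / Z(beta) = E_beta[ exp((beta' - beta) * U) ],
# estimated with the Monte Carlo energies U_i stored in VecU_ref, so that
#   ln Z(beta') ~ ln Z(beta) + ln( (1/N) * sum_i exp((beta' - beta) * U_i) ).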
def Cpt_Vec_Estim_lnZ_Graph_fast3(RefGraph,LabelsNb,MaxErrorAllowed=5,
BetaMax=1.4,BetaStep=0.05):
"""
Estimate ln(Z(beta)) of Potts fields. The default beta grid runs from 0. to 1.4 with
a step of 0.05. An extrapolation algorithm is used. Fast estimates are only performed
for fields with 2 or 3 labels. Reference partition functions were pre-computed on fields
defined on regular and non-regular grids, all with a 6-connectivity system.
input:
* RefGraph: List which contains the connectivity graph. Each entry represents a node of the graph
and contains the list of its neighbors entry location in the graph.
ex: RefGraph[2][3]=10 means 3rd neighbour of the 2nd node is the 10th node. => There exists i such that RefGraph[10][i]=2
* LabelsNb: possible number of labels in each site of the graph
* MaxErrorAllowed: maximum error allowed in the graph estimation (in percents).
* BetaMax: Z(beta,mask) will be computed for beta between 0 and BetaMax. Maximum considered value is 1.4
* BetaStep: gap between two considered values of beta. Actual gaps are not exactly those asked but very close.
output:
* Est_lnZ: Vector containing the ln(Z(beta)) estimates
* V_Beta: Vector of the same size as VecExpectZ containing the corresponding beta value
"""
#launch a more general algorithm if the inputs are not appropriate
if (LabelsNb!=2 and LabelsNb!=3) or BetaMax>1.4:
[Est_lnZ,V_Beta]=Cpt_Vec_Estim_lnZ_Graph(RefGraph,LabelsNb,SamplesNb=30,BetaMax=BetaMax,BetaStep=BetaStep,GraphWeight=None)
return Est_lnZ,V_Beta
#initialisation
#...default returned values
V_Beta=np.array([0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0,1.1,1.2,1.3,1.4])
Est_lnZ=np.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
#...load reference partition functions
[BaseLogPartFctRef,V_Beta_Ref]=LoadBaseLogPartFctRef()
NbSites=[len(RefGraph)]
NbCliques=[sum( [len(nl) for nl in RefGraph] ) ]
#...NbSites
s=len(RefGraph)
#...NbCliques
NbCliquesTmp=0
for j in xrange(len(RefGraph)):
NbCliquesTmp=NbCliquesTmp+len(RefGraph[j])
c=NbCliquesTmp/2
NbCliques.append(c)
#...StdVal Nb neighbors / Moy Nb neighbors
StdValCliquesPerSiteTmp=0.
nc = NbCliques[-1] + 0.
ns = NbSites[-1] + 0.
for j in xrange(len(RefGraph)):
if ns==1: #HACK
ns_1 = 1.
else:
ns_1 = ns-1.
StdValCliquesPerSiteTmp = StdValCliquesPerSiteTmp \
+ ( (nc/ns-len(RefGraph[j])/2.)**2. ) / ns
StdNgbhDivMoyNgbh = np.sqrt(StdValCliquesPerSiteTmp) \
/ ( nc/(ns_1) )
#extrapolation algorithm
Best_MaxError=10000000.
logN2=np.log(2.)
logN3=np.log(3.)
if LabelsNb==2:
for i in BaseLogPartFctRef.keys():
if BaseLogPartFctRef[i]['NbLabels']==2:
MaxError=np.abs((BaseLogPartFctRef[i]['NbSites']-1.)*((1.*c)/(1.*BaseLogPartFctRef[i]['NbCliques']))-(s-1.))*logN2 #error at beta=0
MaxError=MaxError+(np.abs(BaseLogPartFctRef[i]['StdNgbhDivMoyNgbh']-StdNgbhDivMoyNgbh)) #penalty added to the error at zero to penalize different homogeneities of the neighbourhood (a barrier function would be cleaner for the conversion to percents)
MaxError=MaxError*100./(s*logN2) #to have a percentage of error
if MaxError<Best_MaxError:
Best_MaxError=MaxError
BestI=i
if Best_MaxError<MaxErrorAllowed:
Est_lnZ=((c*1.)/(BaseLogPartFctRef[BestI]['NbCliques']*1.))*BaseLogPartFctRef[BestI]['LogPF']+(1-(c*1.)/(BaseLogPartFctRef[BestI]['NbCliques']*1.))*logN2
V_Beta=V_Beta_Ref.copy()
else:
pyhrf.verbose(1, 'LnZ: path sampling')
[Est_lnZ,V_Beta] = Cpt_Vec_Estim_lnZ_Graph(RefGraph,LabelsNb,SamplesNb=30,BetaMax=BetaMax,BetaStep=BetaStep,GraphWeight=None)
if LabelsNb==3:
for i in BaseLogPartFctRef.keys():
if BaseLogPartFctRef[i]['NbLabels']==3:
MaxError=np.abs((BaseLogPartFctRef[i]['NbSites']-1.)*((1.*c)/(1.*BaseLogPartFctRef[i]['NbCliques']))-(s-1.))*logN3 #error at beta=0
MaxError=MaxError+(np.abs(BaseLogPartFctRef[i]['StdNgbhDivMoyNgbh']-StdNgbhDivMoyNgbh)) #penalty added to the error at zero to penalize different homogeneities of the neighbourhood (a barrier function would be cleaner for the conversion to percents)
MaxError=MaxError*100./(s*logN3) #to have a percentage of error
if MaxError<Best_MaxError:
Best_MaxError=MaxError
BestI=i
if Best_MaxError<MaxErrorAllowed:
Est_lnZ=((c*1.)/(BaseLogPartFctRef[BestI]['NbCliques']*1.))*BaseLogPartFctRef[BestI]['LogPF']+(1-(c*1.)/(BaseLogPartFctRef[BestI]['NbCliques']*1.))*logN3
V_Beta=V_Beta_Ref.copy()
else:
pyhrf.verbose(1, 'LnZ: path sampling')
[Est_lnZ,V_Beta] = Cpt_Vec_Estim_lnZ_Graph(RefGraph,LabelsNb,SamplesNb=30,BetaMax=BetaMax,BetaStep=BetaStep,GraphWeight=None)
#reduction of the domain
if (BetaMax<1.4):
temp=0
while V_Beta[temp]<BetaMax and temp<V_Beta.shape[0]-2:
temp=temp+1
V_Beta=V_Beta[:temp]
Est_lnZ=Est_lnZ[:temp]
#domain resampling
if (abs(BetaStep-0.05)>0.0001):
v_Beta_Resample=[]
cpt=0.
while cpt<BetaMax+0.0001:
v_Beta_Resample.append(cpt)
cpt=cpt+BetaStep
Est_lnZ=resampleToGrid(np.array(V_Beta),np.array(Est_lnZ),np.array(v_Beta_Resample))
V_Beta=v_Beta_Resample
return Est_lnZ,V_Beta
def Cpt_Vec_Estim_lnZ_Graph(RefGraph,LabelsNb,SamplesNb=40,BetaMax=1.4,
BetaStep=0.05,GraphWeight=None):
"""
Estimates ln(Z) for fields of a given size and Beta values between 0 and
BetaMax. Estimates of ln(Z) are first computed on a coarse grid of Beta
values. They are then computed and returned on a fine grid. No
approximation using precomputed partition function is performed here.
input:
* RefGraph: List which contains the connectivity graph. Each entry
represents a node of the graph and contains the list of
its neighbors entry location in the graph.
ex: RefGraph[2][3]=10 means 3rd neighbour of the 2nd node
is the 10th node.
=> There exists i such that RefGraph[10][i]=2
* LabelsNb: number of labels
* SamplesNb: number of fields estimated for each beta
* BetaMax: Z(beta,mask) will be computed for beta between 0 and BetaMax
* BetaStep: gap between two considered values of beta (...in the fine
grid. This gap in the coarse grid is automatically fixed
and depends on the graph size.)
* GraphWeight: Same shape as RefGraph. Each entry is the weight of
the corresponding edge in RefGraph. If not defined
the weights are set to 1.0.
output:
* VecEstim_lnZ: Vector containing the ln(Z(beta,mask)) estimates
* VecBetaVal: Vector of the same size as VecExpectZ containing the
corresponding beta value
"""
#initialization
if GraphWeight is None:
GraphWeight=CptDefaultGraphWeight(RefGraph)
GraphNodesLabels=CptDefaultGraphNodesLabels(RefGraph)
GraphLinks=CptDefaultGraphLinks(RefGraph)
RefGrphNgbhPosi=CptRefGrphNgbhPosi(RefGraph)
if LabelsNb==2:
if len(RefGraph)<20:
BetaStepCoarse=0.01
elif len(RefGraph)<50:
BetaStepCoarse=0.05
else:
BetaStepCoarse=0.1
else: # 3 in particular
if len(RefGraph)<20:
BetaStepCoarse=0.005
elif len(RefGraph)<50:
BetaStepCoarse=0.025
else:
BetaStepCoarse=0.05
#BetaStepCoarse = BetaStep
BetaLoc=0.
ListEstim_lnZ=[]
ListBetaVal=[]
VecU=[]
ListEstim_lnZ.append(len(RefGraph)*np.log(LabelsNb))
ListBetaVal.append(BetaLoc)
#print 'RefGraph:', len(RefGraph)
#print 'GraphWeight:', len(GraphWeight)
#compute the Z(beta_i) at a coarse resolution
while BetaLoc<BetaMax+0.000001:
VecU.append(Cpt_Vec_U_graph(RefGraph,BetaLoc,LabelsNb,SamplesNb,
GraphWeight=GraphWeight,
GraphNodesLabels=GraphNodesLabels,
GraphLinks=GraphLinks,
RefGrphNgbhPosi=RefGrphNgbhPosi))
BetaLoc=BetaLoc+BetaStepCoarse
Estim_lnZ=Estim_lnZ_ngbhd_graph(RefGraph,BetaLoc,ListBetaVal[-1],
ListEstim_lnZ[-1],VecU[-1],LabelsNb)
ListEstim_lnZ.append(Estim_lnZ)
ListBetaVal.append(BetaLoc)
pyhrf.verbose(4, 'beta=%1.4f -> ln(Z)=%1.4f' \
%(ListBetaVal[-1],ListEstim_lnZ[-1]))
VecU.append(Cpt_Vec_U_graph(RefGraph,BetaLoc,LabelsNb,SamplesNb,
GraphWeight=GraphWeight,
GraphNodesLabels=GraphNodesLabels,
GraphLinks=GraphLinks,
RefGrphNgbhPosi=RefGrphNgbhPosi))
#compute the Z(beta_j) at a fine resolution
BetaLoc=0.
ListEstim_lnZ_f=[]
ListBetaVal_f=[]
while BetaLoc < BetaMax + 0.000001:
ListBetaVal_f.append(BetaLoc)
i_cor = int(ListBetaVal_f[-1] / BetaStepCoarse)
LEZnew = Estim_lnZ_ngbhd_graph(RefGraph, ListBetaVal_f[-1],
ListBetaVal[i_cor], ListEstim_lnZ[i_cor],
VecU[i_cor],LabelsNb) * \
(ListBetaVal[i_cor+1] - ListBetaVal_f[-1])/BetaStepCoarse
LEZnew = LEZnew + Estim_lnZ_ngbhd_graph(RefGraph,ListBetaVal_f[-1],
ListBetaVal[i_cor+1],
ListEstim_lnZ[i_cor+1],
VecU[i_cor+1],LabelsNb) * \
(-ListBetaVal[i_cor]+ListBetaVal_f[-1])/BetaStepCoarse
ListEstim_lnZ_f.append(LEZnew)
BetaLoc = BetaLoc + BetaStep
#cast the lists into vectors
VecEstim_lnZ=np.zeros(len(ListEstim_lnZ_f))
VecBetaVal=np.zeros(len(ListBetaVal_f))
for i in xrange(len(ListBetaVal_f)):
VecBetaVal[i]=ListBetaVal_f[i]
VecEstim_lnZ[i]=ListEstim_lnZ_f[i]
return np.array(VecEstim_lnZ),np.array(VecBetaVal)
#+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
def LoadBaseLogPartFctRef():
"""
output:
* BaseLogPartFctRef: dictionary that contains the database of log-PF values (first value = nb. labels / second value = nb. sites / third value = nb. cliques)
* V_Beta_Ref: Beta grid corresponding to the log-PF values in 'Est_lnZ_Ref'
"""
BaseLogPartFctRef={}
#non-regular grids: Graphes10_02.pickle
BaseLogPartFctRef[0]={}
BaseLogPartFctRef[0]['LogPF']=np.array([16.64,17.29,17.97,18.66,19.37,20.09,20.82,21.58,22.36,23.15,23.95,24.77,25.61,26.46,27.32,28.21,29.12,30.03,30.97,31.92,32.88,33.86,34.85,35.86,36.88,37.91,38.95,40.02,41.08,42.17])
BaseLogPartFctRef[0]['NbLabels']=2
BaseLogPartFctRef[0]['NbCliques']=26
BaseLogPartFctRef[0]['NbSites']=24
BaseLogPartFctRef[0]['StdNgbhDivMoyNgbh']=0.3970
BaseLogPartFctRef[1]={}
BaseLogPartFctRef[1]['LogPF']=np.array([17.33,18.22,19.12,20.05,21.01,21.98,22.98,24.00,25.04,26.11,27.21,28.33,29.49,30.67,31.88,33.11,34.38,35.69,37.02,38.38,39.78,41.19,42.64,44.11,45.61,47.13,48.68,50.25,51.83,53.44])
BaseLogPartFctRef[1]['NbLabels']=2
BaseLogPartFctRef[1]['NbCliques']=35
BaseLogPartFctRef[1]['NbSites']=25
BaseLogPartFctRef[1]['StdNgbhDivMoyNgbh']=0.4114
BaseLogPartFctRef[2]={}
BaseLogPartFctRef[2]['LogPF']=np.array([15.94,16.63,17.34,18.05,18.79,19.53,20.30,21.08,21.88,22.70,23.54,24.38,25.25,26.13,27.04,27.97,28.91,29.88,30.85,31.86,32.87,33.90,34.94,36.00,37.08,38.17,39.28,40.40,41.54,42.69])
BaseLogPartFctRef[2]['NbLabels']=2
BaseLogPartFctRef[2]['NbCliques']=27
BaseLogPartFctRef[2]['NbSites']=23
BaseLogPartFctRef[2]['StdNgbhDivMoyNgbh']=0.4093
BaseLogPartFctRef[3]={}
BaseLogPartFctRef[3]['LogPF']=np.array([19.41,20.47,21.56,22.67,23.81,24.99,26.18,27.41,28.67,29.95,31.28,32.64,34.03,35.46,36.93,38.45,39.99,41.58,43.21,44.89,46.61,48.37,50.16,51.98,53.82,55.69,57.60,59.50,61.44,63.39])
BaseLogPartFctRef[3]['NbLabels']=2
BaseLogPartFctRef[3]['NbCliques']=42
BaseLogPartFctRef[3]['NbSites']=28
BaseLogPartFctRef[3]['StdNgbhDivMoyNgbh']=0.3542
BaseLogPartFctRef[4]={}
BaseLogPartFctRef[4]['LogPF']=np.array([16.64,17.47,18.33,19.21,20.11,21.02,21.96,22.92,23.90,24.91,25.94,27.00,28.08,29.19,30.33,31.47,32.66,33.88,35.14,36.40,37.70,39.02,40.37,41.74,43.15,44.58,46.03,47.49,48.98,50.48])
BaseLogPartFctRef[4]['NbLabels']=2
BaseLogPartFctRef[4]['NbCliques']=33
BaseLogPartFctRef[4]['NbSites']=24
BaseLogPartFctRef[4]['StdNgbhDivMoyNgbh']=0.4527
BaseLogPartFctRef[5]={}
BaseLogPartFctRef[5]['LogPF']=np.array([9.01,9.42,9.84,10.26,10.69,11.14,11.59,12.06,12.54,13.03,13.53,14.04,14.56,15.09,15.63,16.18,16.75,17.33,17.91,18.51,19.12,19.74,20.38,21.01,21.66,22.32,22.99,23.67,24.36,25.06])
BaseLogPartFctRef[5]['NbLabels']=2
BaseLogPartFctRef[5]['NbCliques']=16
BaseLogPartFctRef[5]['NbSites']=13
BaseLogPartFctRef[5]['StdNgbhDivMoyNgbh']=0.3160
BaseLogPartFctRef[6]={}
BaseLogPartFctRef[6]['LogPF']=np.array([20.10,21.04,21.99,22.98,23.99,25.02,26.08,27.15,28.26,29.38,30.54,31.72,32.93,34.17,35.42,36.70,38.02,39.36,40.73,42.12,43.53,44.99,46.46,47.97,49.49,51.03,52.60,54.18,55.79,57.42])
BaseLogPartFctRef[6]['NbLabels']=2
BaseLogPartFctRef[6]['NbCliques']=37
BaseLogPartFctRef[6]['NbSites']=29
BaseLogPartFctRef[6]['StdNgbhDivMoyNgbh']=0.3663
BaseLogPartFctRef[7]={}
BaseLogPartFctRef[7]['LogPF']=np.array([17.33,18.11,18.91,19.73,20.58,21.44,22.32,23.22,24.15,25.10,26.07,27.06,28.07,29.09,30.14,31.22,32.31,33.43,34.58,35.75,36.94,38.14,39.37,40.62,41.90,43.19,44.50,45.83,47.16,48.51])
BaseLogPartFctRef[7]['NbLabels']=2
BaseLogPartFctRef[7]['NbCliques']=31
BaseLogPartFctRef[7]['NbSites']=25
BaseLogPartFctRef[7]['StdNgbhDivMoyNgbh']=0.4257
BaseLogPartFctRef[8]={}
BaseLogPartFctRef[8]['LogPF']=np.array([27.03,28.52,30.05,31.61,33.21,34.85,36.53,38.26,40.04,41.84,43.68,45.58,47.54,49.53,51.58,53.68,55.84,58.06,60.35,62.69,65.07,67.51,70.02,72.56,75.14,77.75,80.42,83.12,85.83,88.56])
BaseLogPartFctRef[8]['NbLabels']=2
BaseLogPartFctRef[8]['NbCliques']=59
BaseLogPartFctRef[8]['NbSites']=39
BaseLogPartFctRef[8]['StdNgbhDivMoyNgbh']=0.3682
BaseLogPartFctRef[9]={}
BaseLogPartFctRef[9]['LogPF']=np.array([16.64,17.47,18.33,19.20,20.10,21.02,21.95,22.92,23.91,24.92,25.95,27.00,28.08,29.19,30.32,31.48,32.66,33.88,35.12,36.38,37.68,39.00,40.35,41.73,43.12,44.53,45.97,47.43,48.90,50.39])
BaseLogPartFctRef[9]['NbLabels']=2
BaseLogPartFctRef[9]['NbCliques']=33
BaseLogPartFctRef[9]['NbSites']=24
BaseLogPartFctRef[9]['StdNgbhDivMoyNgbh']=0.3521
BaseLogPartFctRef[10]={}
BaseLogPartFctRef[10]['LogPF']=np.array([15.25,15.93,16.62,17.34,18.07,18.83,19.60,20.38,21.18,22.00,22.85,23.70,24.58,25.48,26.40,27.33,28.28,29.25,30.25,31.26,32.28,33.33,34.40,35.48,36.58,37.70,38.83,39.98,41.14,42.32])
BaseLogPartFctRef[10]['NbLabels']=2
BaseLogPartFctRef[10]['NbCliques']=27
BaseLogPartFctRef[10]['NbSites']=22
BaseLogPartFctRef[10]['StdNgbhDivMoyNgbh']=0.4344
BaseLogPartFctRef[11]={}
BaseLogPartFctRef[11]['LogPF']=np.array([11.09,11.59,12.11,12.65,13.19,13.74,14.31,14.90,15.49,16.10,16.73,17.36,18.01,18.67,19.36,20.05,20.77,21.49,22.23,22.98,23.74,24.52,25.33,26.14,26.97,27.81,28.66,29.52,30.39,31.27])
BaseLogPartFctRef[11]['NbLabels']=2
BaseLogPartFctRef[11]['NbCliques']=20
BaseLogPartFctRef[11]['NbSites']=16
BaseLogPartFctRef[11]['StdNgbhDivMoyNgbh']=0.3508
BaseLogPartFctRef[12]={}
BaseLogPartFctRef[12]['LogPF']=np.array([18.02,18.96,19.91,20.91,21.92,22.95,24.00,25.08,26.18,27.32,28.47,29.66,30.87,32.11,33.38,34.67,36.01,37.37,38.77,40.20,41.66,43.15,44.68,46.23,47.79,49.39,51.02,52.67,54.34,56.03])
BaseLogPartFctRef[12]['NbLabels']=2
BaseLogPartFctRef[12]['NbCliques']=37
BaseLogPartFctRef[12]['NbSites']=26
BaseLogPartFctRef[12]['StdNgbhDivMoyNgbh']=0.3712
BaseLogPartFctRef[13]={}
BaseLogPartFctRef[13]['LogPF']=np.array([22.87,23.89,24.93,25.99,27.08,28.20,29.34,30.51,31.70,32.91,34.16,35.43,36.72,38.03,39.39,40.77,42.17,43.60,45.05,46.55,48.05,49.59,51.16,52.76,54.38,56.02,57.68,59.37,61.08,62.81])
BaseLogPartFctRef[13]['NbLabels']=2
BaseLogPartFctRef[13]['NbCliques']=40
BaseLogPartFctRef[13]['NbSites']=33
BaseLogPartFctRef[13]['StdNgbhDivMoyNgbh']=0.4181
BaseLogPartFctRef[14]={}
BaseLogPartFctRef[14]['LogPF']=np.array([17.33,18.19,19.07,19.97,20.90,21.85,22.81,23.81,24.82,25.86,26.92,28.02,29.13,30.27,31.45,32.65,33.88,35.14,36.42,37.75,39.09,40.46,41.86,43.28,44.72,46.18,47.66,49.17,50.68,52.21])
BaseLogPartFctRef[14]['NbLabels']=2
BaseLogPartFctRef[14]['NbCliques']=34
BaseLogPartFctRef[14]['NbSites']=25
BaseLogPartFctRef[14]['StdNgbhDivMoyNgbh']=0.4057
BaseLogPartFctRef[15]={}
BaseLogPartFctRef[15]['LogPF']=np.array([15.94,16.75,17.58,18.43,19.30,20.19,21.10,22.04,22.99,23.97,24.97,25.99,27.04,28.10,29.20,30.31,31.46,32.63,33.83,35.06,36.31,37.59,38.90,40.22,41.57,42.95,44.34,45.75,47.19,48.63])
BaseLogPartFctRef[15]['NbLabels']=2
BaseLogPartFctRef[15]['NbCliques']=32
BaseLogPartFctRef[15]['NbSites']=23
BaseLogPartFctRef[15]['StdNgbhDivMoyNgbh']=0.3787
BaseLogPartFctRef[16]={}
BaseLogPartFctRef[16]['LogPF']=np.array([32.58,34.22,35.91,37.63,39.40,41.20,43.06,44.95,46.90,48.89,50.92,52.99,55.13,57.32,59.56,61.85,64.19,66.59,69.06,71.59,74.17,76.79,79.47,82.21,84.96,87.77,90.63,93.50,96.41,99.35])
BaseLogPartFctRef[16]['NbLabels']=2
BaseLogPartFctRef[16]['NbCliques']=65
BaseLogPartFctRef[16]['NbSites']=47
BaseLogPartFctRef[16]['StdNgbhDivMoyNgbh']=0.3945
BaseLogPartFctRef[17]={}
BaseLogPartFctRef[17]['LogPF']=np.array([21.49,22.62,23.79,24.99,26.21,27.46,28.75,30.07,31.40,32.78,34.19,35.64,37.11,38.63,40.18,41.78,43.42,45.11,46.85,48.62,50.44,52.30,54.20,56.13,58.09,60.09,62.11,64.14,66.20,68.28])
BaseLogPartFctRef[17]['NbLabels']=2
BaseLogPartFctRef[17]['NbCliques']=45
BaseLogPartFctRef[17]['NbSites']=31
BaseLogPartFctRef[17]['StdNgbhDivMoyNgbh']=0.4178
BaseLogPartFctRef[18]={}
BaseLogPartFctRef[18]['LogPF']=np.array([13.17,13.78,14.40,15.04,15.69,16.36,17.05,17.75,18.47,19.19,19.94,20.70,21.48,22.29,23.10,23.93,24.77,25.64,26.52,27.43,28.35,29.28,30.24,31.21,32.19,33.20,34.21,35.25,36.30,37.36])
BaseLogPartFctRef[18]['NbLabels']=2
BaseLogPartFctRef[18]['NbCliques']=24
BaseLogPartFctRef[18]['NbSites']=19
BaseLogPartFctRef[18]['StdNgbhDivMoyNgbh']=0.3724
BaseLogPartFctRef[19]={}
BaseLogPartFctRef[19]['LogPF']=np.array([13.86,14.45,15.04,15.66,16.29,16.92,17.57,18.24,18.93,19.63,20.34,21.07,21.82,22.59,23.36,24.15,24.96,25.79,26.63,27.48,28.36,29.25,30.15,31.06,31.99,32.93,33.89,34.86,35.84,36.83])
BaseLogPartFctRef[19]['NbLabels']=2
BaseLogPartFctRef[19]['NbCliques']=23
BaseLogPartFctRef[19]['NbSites']=20
BaseLogPartFctRef[19]['StdNgbhDivMoyNgbh']=0.3940
BaseLogPartFctRef[20]={}
BaseLogPartFctRef[20]['LogPF']=np.array([20.10,21.11,22.15,23.22,24.30,25.42,26.56,27.72,28.92,30.14,31.39,32.68,34.00,35.36,36.73,38.14,39.59,41.09,42.63,44.19,45.79,47.42,49.07,50.75,52.46,54.21,55.98,57.75,59.56,61.37])
BaseLogPartFctRef[20]['NbLabels']=2
BaseLogPartFctRef[20]['NbCliques']=40
BaseLogPartFctRef[20]['NbSites']=29
BaseLogPartFctRef[20]['StdNgbhDivMoyNgbh']=0.3970
BaseLogPartFctRef[21]={}
BaseLogPartFctRef[21]['LogPF']=np.array([16.64,17.34,18.07,18.82,19.58,20.36,21.16,21.97,22.80,23.65,24.52,25.41,26.32,27.24,28.18,29.15,30.13,31.12,32.13,33.17,34.23,35.31,36.39,37.50,38.63,39.76,40.92,42.10,43.29,44.49])
BaseLogPartFctRef[21]['NbLabels']=2
BaseLogPartFctRef[21]['NbCliques']=28
BaseLogPartFctRef[21]['NbSites']=24
BaseLogPartFctRef[21]['StdNgbhDivMoyNgbh']=0.3872
BaseLogPartFctRef[22]={}
BaseLogPartFctRef[22]['LogPF']=np.array([22.87,24.03,25.22,26.43,27.68,28.96,30.27,31.61,32.97,34.38,35.81,37.28,38.78,40.32,41.87,43.48,45.13,46.81,48.52,50.29,52.11,53.93,55.82,57.73,59.66,61.64,63.64,65.67,67.71,69.79])
BaseLogPartFctRef[22]['NbLabels']=2
BaseLogPartFctRef[22]['NbCliques']=46
BaseLogPartFctRef[22]['NbSites']=33
BaseLogPartFctRef[22]['StdNgbhDivMoyNgbh']=0.4085
BaseLogPartFctRef[23]={}
BaseLogPartFctRef[23]['LogPF']=np.array([13.86,14.52,15.19,15.88,16.59,17.31,18.05,18.81,19.59,20.38,21.19,22.02,22.87,23.74,24.63,25.53,26.46,27.40,28.38,29.37,30.38,31.41,32.46,33.52,34.60,35.70,36.82,37.95,39.09,40.24])
BaseLogPartFctRef[23]['NbLabels']=2
BaseLogPartFctRef[23]['NbCliques']=26
BaseLogPartFctRef[23]['NbSites']=20
BaseLogPartFctRef[23]['StdNgbhDivMoyNgbh']=0.3543
BaseLogPartFctRef[24]={}
BaseLogPartFctRef[24]['LogPF']=np.array([15.25,15.98,16.73,17.49,18.28,19.09,19.91,20.75,21.61,22.50,23.40,24.32,25.25,26.22,27.21,28.21,29.25,30.31,31.38,32.48,33.60,34.74,35.91,37.10,38.30,39.52,40.76,42.02,43.31,44.60])
BaseLogPartFctRef[24]['NbLabels']=2
BaseLogPartFctRef[24]['NbCliques']=29
BaseLogPartFctRef[24]['NbSites']=22
BaseLogPartFctRef[24]['StdNgbhDivMoyNgbh']=0.3709
BaseLogPartFctRef[25]={}
BaseLogPartFctRef[25]['LogPF']=np.array([30.50,31.97,33.47,35.02,36.60,38.20,39.86,41.55,43.28,45.04,46.84,48.68,50.57,52.50,54.46,56.47,58.51,60.60,62.75,64.94,67.18,69.45,71.76,74.12,76.51,78.93,81.40,83.92,86.46,89.03])
BaseLogPartFctRef[25]['NbLabels']=2
BaseLogPartFctRef[25]['NbCliques']=58
BaseLogPartFctRef[25]['NbSites']=44
BaseLogPartFctRef[25]['StdNgbhDivMoyNgbh']=0.3879
BaseLogPartFctRef[26]={}
BaseLogPartFctRef[26]['LogPF']=np.array([20.10,21.04,21.99,22.97,23.98,25.01,26.07,27.13,28.23,29.36,30.51,31.68,32.89,34.12,35.37,36.66,37.98,39.31,40.69,42.08,43.50,44.96,46.43,47.96,49.49,51.05,52.63,54.23,55.85,57.49])
BaseLogPartFctRef[26]['NbLabels']=2
BaseLogPartFctRef[26]['NbCliques']=37
BaseLogPartFctRef[26]['NbSites']=29
BaseLogPartFctRef[26]['StdNgbhDivMoyNgbh']=0.4167
BaseLogPartFctRef[27]={}
BaseLogPartFctRef[27]['LogPF']=np.array([14.56,15.16,15.78,16.42,17.07,17.74,18.43,19.12,19.83,20.56,21.30,22.06,22.83,23.62,24.42,25.25,26.09,26.94,27.81,28.69,29.58,30.50,31.42,32.36,33.31,34.27,35.26,36.25,37.25,38.27])
BaseLogPartFctRef[27]['NbLabels']=2
BaseLogPartFctRef[27]['NbCliques']=24
BaseLogPartFctRef[27]['NbSites']=21
BaseLogPartFctRef[27]['StdNgbhDivMoyNgbh']=0.3669
BaseLogPartFctRef[28]={}
BaseLogPartFctRef[28]['LogPF']=np.array([30.50,32.10,33.73,35.41,37.11,38.86,40.65,42.50,44.39,46.32,48.29,50.32,52.39,54.51,56.67,58.89,61.18,63.52,65.94,68.40,70.91,73.47,76.10,78.79,81.53,84.31,87.12,89.97,92.82,95.71])
BaseLogPartFctRef[28]['NbLabels']=2
BaseLogPartFctRef[28]['NbCliques']=63
BaseLogPartFctRef[28]['NbSites']=44
BaseLogPartFctRef[28]['StdNgbhDivMoyNgbh']=0.4089
BaseLogPartFctRef[29]={}
BaseLogPartFctRef[29]['LogPF']=np.array([20.79,21.70,22.64,23.60,24.58,25.59,26.62,27.67,28.74,29.83,30.95,32.11,33.28,34.46,35.68,36.92,38.20,39.50,40.82,42.17,43.54,44.93,46.36,47.80,49.26,50.74,52.24,53.77,55.31,56.88])
BaseLogPartFctRef[29]['NbLabels']=2
BaseLogPartFctRef[29]['NbCliques']=36
BaseLogPartFctRef[29]['NbSites']=30
BaseLogPartFctRef[29]['StdNgbhDivMoyNgbh']=0.3835
#non-regular grids: Graphes10_04.pickle
BaseLogPartFctRef[30]={}
BaseLogPartFctRef[30]['LogPF']=np.array([487.3,527.2,568.2,610.1,653.1,697.1,742.2,788.6,836.3,885.4,936.2,989.5,1045.,1106.,1170.,1238.,1308.,1379.,1453.,1527.,1602.,1678.,1754.,1831.,1908.,1985.,2063.,2141.,2219.,2297.])
BaseLogPartFctRef[30]['NbLabels']=2
BaseLogPartFctRef[30]['NbCliques']=1578
BaseLogPartFctRef[30]['NbSites']=703
BaseLogPartFctRef[30]['StdNgbhDivMoyNgbh']=0.2508
BaseLogPartFctRef[31]={}
BaseLogPartFctRef[31]['LogPF']=np.array([463.0,500.5,539.0,578.3,618.7,660.0,702.4,745.9,790.7,836.8,884.5,934.4,987.0,1044.,1104.,1167.,1233.,1300.,1368.,1437.,1507.,1578.,1649.,1721.,1793.,1866.,1938.,2011.,2084.,2157.])
BaseLogPartFctRef[31]['NbLabels']=2
BaseLogPartFctRef[31]['NbCliques']=1481
BaseLogPartFctRef[31]['NbSites']=668
BaseLogPartFctRef[31]['StdNgbhDivMoyNgbh']=0.2677
BaseLogPartFctRef[32]={}
BaseLogPartFctRef[32]['LogPF']=np.array([310.5,333.2,356.4,380.2,404.5,429.4,454.9,481.1,508.0,535.6,564.1,593.4,623.9,655.7,688.8,723.7,760.2,798.1,837.2,877.4,918.3,959.8,1002.,1044.,1087.,1130.,1173.,1217.,1261.,1304.])
BaseLogPartFctRef[32]['NbLabels']=2
BaseLogPartFctRef[32]['NbCliques']=894
BaseLogPartFctRef[32]['NbSites']=448
BaseLogPartFctRef[32]['StdNgbhDivMoyNgbh']=0.2893
BaseLogPartFctRef[33]={}
BaseLogPartFctRef[33]['LogPF']=np.array([470.0,508.6,548.3,589.0,630.6,673.3,717.0,762.1,808.3,856.0,905.5,957.6,1013.,1072.,1135.,1201.,1269.,1339.,1410.,1482.,1554.,1628.,1702.,1776.,1850.,1925.,2000.,2076.,2151.,2227.])
BaseLogPartFctRef[33]['NbLabels']=2
BaseLogPartFctRef[33]['NbCliques']=1529
BaseLogPartFctRef[33]['NbSites']=678
BaseLogPartFctRef[33]['StdNgbhDivMoyNgbh']=0.2701
BaseLogPartFctRef[34]={}
BaseLogPartFctRef[34]['LogPF']=np.array([496.3,536.3,577.4,619.4,662.5,706.6,751.9,798.3,846.0,895.1,945.9,998.5,1054.,1113.,1175.,1242.,1312.,1383.,1456.,1530.,1606.,1681.,1758.,1835.,1912.,1990.,2067.,2145.,2224.,2302.])
BaseLogPartFctRef[34]['NbLabels']=2
BaseLogPartFctRef[34]['NbCliques']=1582
BaseLogPartFctRef[34]['NbSites']=716
BaseLogPartFctRef[34]['StdNgbhDivMoyNgbh']=0.2517
BaseLogPartFctRef[35]={}
BaseLogPartFctRef[35]['LogPF']=np.array([512.2,554.6,597.9,642.5,688.0,734.6,782.4,831.6,882.0,933.8,987.8,1044.,1103.,1166.,1234.,1305.,1379.,1456.,1534.,1613.,1692.,1773.,1854.,1936.,2018.,2100.,2183.,2265.,2348.,2431.])
BaseLogPartFctRef[35]['NbLabels']=2
BaseLogPartFctRef[35]['NbCliques']=1672
BaseLogPartFctRef[35]['NbSites']=739
BaseLogPartFctRef[35]['StdNgbhDivMoyNgbh']=0.2348
BaseLogPartFctRef[36]={}
BaseLogPartFctRef[36]['LogPF']=np.array([520.6,563.0,606.6,651.1,696.7,743.5,791.4,840.6,891.1,943.3,997.2,1053.,1112.,1175.,1243.,1315.,1389.,1465.,1542.,1621.,1701.,1781.,1862.,1944.,2026.,2108.,2191.,2273.,2356.,2439.])
BaseLogPartFctRef[36]['NbLabels']=2
BaseLogPartFctRef[36]['NbCliques']=1677
BaseLogPartFctRef[36]['NbSites']=751
BaseLogPartFctRef[36]['StdNgbhDivMoyNgbh']=0.2473
BaseLogPartFctRef[37]={}
BaseLogPartFctRef[37]['LogPF']=np.array([499.8,540.1,581.3,623.6,667.0,711.4,756.9,803.6,851.6,901.0,952.0,1005.,1061.,1120.,1183.,1250.,1320.,1392.,1465.,1539.,1615.,1691.,1768.,1845.,1923.,2001.,2079.,2157.,2236.,2315.])
BaseLogPartFctRef[37]['NbLabels']=2
BaseLogPartFctRef[37]['NbCliques']=1591
BaseLogPartFctRef[37]['NbSites']=721
BaseLogPartFctRef[37]['StdNgbhDivMoyNgbh']=0.2585
BaseLogPartFctRef[38]={}
BaseLogPartFctRef[38]['LogPF']=np.array([526.8,570.6,615.6,661.6,708.8,757.1,806.7,857.6,910.0,964.1,1020.,1079.,1141.,1208.,1280.,1355.,1432.,1511.,1592.,1674.,1756.,1839.,1923.,2008.,2093.,2178.,2263.,2348.,2434.,2520.])
BaseLogPartFctRef[38]['NbLabels']=2
BaseLogPartFctRef[38]['NbCliques']=1732
BaseLogPartFctRef[38]['NbSites']=760
BaseLogPartFctRef[38]['StdNgbhDivMoyNgbh']=0.2602
BaseLogPartFctRef[39]={}
BaseLogPartFctRef[39]['LogPF']=np.array([497.7,537.3,577.9,619.5,662.1,705.8,750.5,796.4,843.6,892.2,942.3,994.2,1049.,1106.,1168.,1233.,1301.,1371.,1443.,1516.,1590.,1665.,1741.,1816.,1893.,1969.,2046.,2123.,2201.,2278.])
BaseLogPartFctRef[39]['NbLabels']=2
BaseLogPartFctRef[39]['NbCliques']=1565
BaseLogPartFctRef[39]['NbSites']=718
BaseLogPartFctRef[39]['StdNgbhDivMoyNgbh']=0.2564
BaseLogPartFctRef[40]={}
BaseLogPartFctRef[40]['LogPF']=np.array([455.4,492.3,530.1,568.9,608.7,649.3,691.0,733.9,777.9,823.2,870.0,919.0,970.3,1025.,1084.,1146.,1210.,1276.,1344.,1412.,1481.,1551.,1622.,1692.,1764.,1835.,1907.,1979.,2051.,2123.])
BaseLogPartFctRef[40]['NbLabels']=2
BaseLogPartFctRef[40]['NbCliques']=1459
BaseLogPartFctRef[40]['NbSites']=657
BaseLogPartFctRef[40]['StdNgbhDivMoyNgbh']=0.2619
BaseLogPartFctRef[41]={}
BaseLogPartFctRef[41]['LogPF']=np.array([499.1,540.3,582.6,626.1,670.5,716.0,762.7,810.6,859.8,910.6,963.4,1018.,1076.,1139.,1205.,1276.,1348.,1423.,1498.,1575.,1653.,1732.,1811.,1890.,1970.,2050.,2131.,2211.,2292.,2373.])
BaseLogPartFctRef[41]['NbLabels']=2
BaseLogPartFctRef[41]['NbCliques']=1631
BaseLogPartFctRef[41]['NbSites']=720
BaseLogPartFctRef[41]['StdNgbhDivMoyNgbh']=0.2496
BaseLogPartFctRef[42]={}
BaseLogPartFctRef[42]['LogPF']=np.array([429.1,463.3,498.4,534.3,571.2,608.9,647.6,687.4,728.3,770.3,813.9,859.3,907.0,957.9,1012.,1069.,1128.,1188.,1250.,1313.,1377.,1441.,1506.,1571.,1637.,1703.,1770.,1836.,1903.,1970.])
BaseLogPartFctRef[42]['NbLabels']=2
BaseLogPartFctRef[42]['NbCliques']=1353
BaseLogPartFctRef[42]['NbSites']=619
BaseLogPartFctRef[42]['StdNgbhDivMoyNgbh']=0.2770
BaseLogPartFctRef[43]={}
BaseLogPartFctRef[43]['LogPF']=np.array([445.7,481.0,517.2,554.3,592.3,631.2,671.1,712.1,754.1,797.4,842.2,888.8,937.8,989.9,1046.,1104.,1164.,1227.,1290.,1355.,1421.,1487.,1554.,1621.,1689.,1757.,1825.,1894.,1963.,2031.])
BaseLogPartFctRef[43]['NbLabels']=2
BaseLogPartFctRef[43]['NbCliques']=1395
BaseLogPartFctRef[43]['NbSites']=643
BaseLogPartFctRef[43]['StdNgbhDivMoyNgbh']=0.2785
BaseLogPartFctRef[44]={}
BaseLogPartFctRef[44]['LogPF']=np.array([501.1,541.7,583.2,625.7,669.2,713.8,759.6,806.5,854.8,904.5,955.8,1009.,1065.,1125.,1190.,1258.,1328.,1401.,1475.,1550.,1626.,1702.,1779.,1857.,1935.,2013.,2092.,2171.,2250.,2329.])
BaseLogPartFctRef[44]['NbLabels']=2
BaseLogPartFctRef[44]['NbCliques']=1599
BaseLogPartFctRef[44]['NbSites']=723
BaseLogPartFctRef[44]['StdNgbhDivMoyNgbh']=0.2602
BaseLogPartFctRef[45]={}
BaseLogPartFctRef[45]['LogPF']=np.array([473.4,512.1,551.7,592.2,633.8,676.3,720.0,764.8,811.0,858.5,907.6,958.8,1013.,1071.,1133.,1199.,1266.,1335.,1406.,1477.,1550.,1623.,1697.,1771.,1845.,1920.,1995.,2070.,2146.,2221.])
BaseLogPartFctRef[45]['NbLabels']=2
BaseLogPartFctRef[45]['NbCliques']=1526
BaseLogPartFctRef[45]['NbSites']=683
BaseLogPartFctRef[45]['StdNgbhDivMoyNgbh']=0.2644
BaseLogPartFctRef[46]={}
BaseLogPartFctRef[46]['LogPF']=np.array([388.9,418.0,447.8,478.4,509.8,541.8,574.7,608.4,643.1,678.7,715.5,753.6,793.4,835.3,879.7,926.1,974.4,1025.,1076.,1129.,1182.,1236.,1291.,1346.,1401.,1457.,1513.,1569.,1626.,1682.])
BaseLogPartFctRef[46]['NbLabels']=2
BaseLogPartFctRef[46]['NbCliques']=1151
BaseLogPartFctRef[46]['NbSites']=561
BaseLogPartFctRef[46]['StdNgbhDivMoyNgbh']=0.3102
BaseLogPartFctRef[47]={}
BaseLogPartFctRef[47]['LogPF']=np.array([555.9,603.4,652.0,701.8,752.9,805.1,858.9,913.8,970.6,1029.,1090.,1154.,1222.,1296.,1375.,1457.,1541.,1628.,1716.,1805.,1895.,1985.,2076.,2168.,2260.,2352.,2445.,2538.,2630.,2723.])
BaseLogPartFctRef[47]['NbLabels']=2
BaseLogPartFctRef[47]['NbCliques']=1874
BaseLogPartFctRef[47]['NbSites']=802
BaseLogPartFctRef[47]['StdNgbhDivMoyNgbh']=0.2385
BaseLogPartFctRef[48]={}
BaseLogPartFctRef[48]['LogPF']=np.array([454.7,490.6,527.3,564.9,603.6,643.1,683.6,725.2,767.9,811.8,857.2,904.3,953.5,1005.,1061.,1119.,1180.,1243.,1308.,1374.,1441.,1508.,1576.,1645.,1714.,1783.,1852.,1922.,1992.,2062.])
BaseLogPartFctRef[48]['NbLabels']=2
BaseLogPartFctRef[48]['NbCliques']=1417
BaseLogPartFctRef[48]['NbSites']=656
BaseLogPartFctRef[48]['StdNgbhDivMoyNgbh']=0.2654
BaseLogPartFctRef[49]={}
BaseLogPartFctRef[49]['LogPF']=np.array([441.5,477.3,514.0,551.5,590.0,629.5,669.8,711.4,754.0,798.0,843.6,891.3,942.0,996.0,1054.,1114.,1177.,1241.,1306.,1373.,1440.,1507.,1575.,1644.,1712.,1781.,1851.,1920.,1990.,2059.])
BaseLogPartFctRef[49]['NbLabels']=2
BaseLogPartFctRef[49]['NbCliques']=1413
BaseLogPartFctRef[49]['NbSites']=637
BaseLogPartFctRef[49]['StdNgbhDivMoyNgbh']=0.2899
BaseLogPartFctRef[50]={}
BaseLogPartFctRef[50]['LogPF']=np.array([547.6,595.0,643.5,693.2,744.2,796.4,850.0,905.1,961.7,1020.,1081.,1145.,1214.,1289.,1369.,1451.,1536.,1623.,1711.,1801.,1891.,1982.,2073.,2165.,2257.,2350.,2442.,2535.,2628.,2721.])
BaseLogPartFctRef[50]['NbLabels']=2
BaseLogPartFctRef[50]['NbCliques']=1872
BaseLogPartFctRef[50]['NbSites']=790
BaseLogPartFctRef[50]['StdNgbhDivMoyNgbh']=0.2272
BaseLogPartFctRef[51]={}
BaseLogPartFctRef[51]['LogPF']=np.array([396.5,427.5,459.3,491.9,525.3,559.6,594.7,630.6,667.6,705.6,745.0,785.8,828.7,874.6,923.2,974.3,1027.,1081.,1137.,1194.,1251.,1309.,1368.,1427.,1486.,1546.,1606.,1666.,1727.,1787.])
BaseLogPartFctRef[51]['NbLabels']=2
BaseLogPartFctRef[51]['NbCliques']=1226
BaseLogPartFctRef[51]['NbSites']=572
BaseLogPartFctRef[51]['StdNgbhDivMoyNgbh']=0.2877
BaseLogPartFctRef[52]={}
BaseLogPartFctRef[52]['LogPF']=np.array([469.3,507.6,547.1,587.6,628.9,671.4,714.9,759.5,805.5,852.7,901.9,953.3,1007.,1066.,1129.,1194.,1262.,1331.,1402.,1473.,1545.,1618.,1691.,1765.,1839.,1914.,1988.,2063.,2138.,2213.])
BaseLogPartFctRef[52]['NbLabels']=2
BaseLogPartFctRef[52]['NbCliques']=1520
BaseLogPartFctRef[52]['NbSites']=677
BaseLogPartFctRef[52]['StdNgbhDivMoyNgbh']=0.2647
BaseLogPartFctRef[53]={}
BaseLogPartFctRef[53]['LogPF']=np.array([445.0,481.0,517.9,555.7,594.5,634.1,674.9,716.6,759.5,803.9,849.8,897.8,948.8,1004.,1061.,1122.,1185.,1249.,1314.,1381.,1448.,1516.,1584.,1653.,1722.,1792.,1861.,1931.,2001.,2071.])
BaseLogPartFctRef[53]['NbLabels']=2
BaseLogPartFctRef[53]['NbCliques']=1422
BaseLogPartFctRef[53]['NbSites']=642
BaseLogPartFctRef[53]['StdNgbhDivMoyNgbh']=0.2876
BaseLogPartFctRef[54]={}
BaseLogPartFctRef[54]['LogPF']=np.array([474.8,512.9,551.9,591.9,632.9,674.8,717.9,762.1,807.4,854.2,902.4,952.7,1005.,1061.,1121.,1184.,1250.,1317.,1387.,1457.,1528.,1600.,1673.,1746.,1819.,1893.,1967.,2041.,2115.,2190.])
BaseLogPartFctRef[54]['NbLabels']=2
BaseLogPartFctRef[54]['NbCliques']=1504
BaseLogPartFctRef[54]['NbSites']=685
BaseLogPartFctRef[54]['StdNgbhDivMoyNgbh']=0.2660
BaseLogPartFctRef[55]={}
BaseLogPartFctRef[55]['LogPF']=np.array([448.5,484.6,521.7,559.7,598.6,638.5,679.4,721.3,764.5,809.0,855.0,903.1,953.8,1008.,1065.,1125.,1188.,1252.,1318.,1385.,1452.,1521.,1589.,1659.,1728.,1798.,1869.,1939.,2010.,2080.])
BaseLogPartFctRef[55]['NbLabels']=2
BaseLogPartFctRef[55]['NbCliques']=1429
BaseLogPartFctRef[55]['NbSites']=647
BaseLogPartFctRef[55]['StdNgbhDivMoyNgbh']=0.2674
BaseLogPartFctRef[56]={}
BaseLogPartFctRef[56]['LogPF']=np.array([488.7,528.1,568.6,610.1,652.5,695.9,740.6,786.3,833.4,881.9,932.0,984.3,1039.,1098.,1161.,1227.,1295.,1366.,1438.,1511.,1584.,1659.,1734.,1810.,1886.,1962.,2039.,2115.,2192.,2269.])
BaseLogPartFctRef[56]['NbLabels']=2
BaseLogPartFctRef[56]['NbCliques']=1559
BaseLogPartFctRef[56]['NbSites']=705
BaseLogPartFctRef[56]['StdNgbhDivMoyNgbh']=0.2582
BaseLogPartFctRef[57]={}
BaseLogPartFctRef[57]['LogPF']=np.array([478.3,517.6,557.9,599.3,641.5,684.7,729.1,774.7,821.6,870.0,920.0,972.4,1028.,1087.,1151.,1217.,1286.,1357.,1429.,1502.,1575.,1650.,1725.,1800.,1876.,1952.,2029.,2105.,2182.,2259.])
BaseLogPartFctRef[57]['NbLabels']=2
BaseLogPartFctRef[57]['NbCliques']=1552
BaseLogPartFctRef[57]['NbSites']=690
BaseLogPartFctRef[57]['StdNgbhDivMoyNgbh']=0.2564
BaseLogPartFctRef[58]={}
BaseLogPartFctRef[58]['LogPF']=np.array([486.6,526.0,566.4,607.8,650.3,693.7,738.3,784.0,831.1,879.5,929.6,981.8,1037.,1096.,1159.,1225.,1293.,1364.,1436.,1509.,1583.,1658.,1733.,1808.,1884.,1960.,2037.,2114.,2190.,2267.])
BaseLogPartFctRef[58]['NbLabels']=2
BaseLogPartFctRef[58]['NbCliques']=1557
BaseLogPartFctRef[58]['NbSites']=702
BaseLogPartFctRef[58]['StdNgbhDivMoyNgbh']=0.2720
BaseLogPartFctRef[59]={}
BaseLogPartFctRef[59]['LogPF']=np.array([417.3,450.5,484.7,519.6,555.5,592.2,629.8,668.4,708.2,749.1,791.4,835.6,882.3,932.6,986.0,1042.,1099.,1158.,1219.,1280.,1342.,1404.,1467.,1530.,1594.,1658.,1723.,1787.,1852.,1917.])
BaseLogPartFctRef[59]['NbLabels']=2
BaseLogPartFctRef[59]['NbCliques']=1316
BaseLogPartFctRef[59]['NbSites']=602
BaseLogPartFctRef[59]['StdNgbhDivMoyNgbh']=0.2885
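# ---------------------------------------------------------------------------
# Illustrative note (not part of the original tables): each
# BaseLogPartFctRef[i] entry appears to describe one reference graph, with
#   'LogPF'             : 30 tabulated log partition-function values, sampled
#                         on a regular grid of the interaction parameter
#                         (the grid itself is defined where these tables are
#                         consumed, not here),
#   'NbLabels'          : number of labels of the field (2 in these entries),
#   'NbCliques'         : number of pairwise cliques (edges) of the graph,
#   'NbSites'           : number of sites (nodes) of the graph,
#   'StdNgbhDivMoyNgbh' : standard deviation of the per-site neighbour count
#                         divided by its mean, i.e. a measure of how
#                         irregular the graph is.
#
# Minimal access sketch, assuming a hypothetical regular parameter grid; the
# helper name and the default grid below are illustrative only.
def _interp_log_pf_sketch(entry, beta, beta_grid=None):
    """Linearly interpolate entry['LogPF'] at the interaction value `beta`.

    `beta_grid` must match the grid used to tabulate 'LogPF'; the default
    (30 equally spaced values on [0, 1]) is only a placeholder assumption.
    """
    log_pf = entry['LogPF']
    if beta_grid is None:
        # assumed regular grid with the same number of samples as LogPF
        beta_grid = np.linspace(0.0, 1.0, len(log_pf))
    return float(np.interp(beta, beta_grid, log_pf))
# Example (hypothetical): _interp_log_pf_sketch(BaseLogPartFctRef[35], 0.4)
# ---------------------------------------------------------------------------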
#non-regular grids: Graphes10_03.pickle
BaseLogPartFctRef[90]={}
BaseLogPartFctRef[90]['LogPF']=np.array([59.61,62.87,66.21,69.64,73.14,76.75,80.43,84.19,88.05,92.00,96.03,100.2,104.4,108.7,113.2,117.8,122.5,127.3,132.2,137.2,142.4,147.7,153.1,158.6,164.2,169.9,175.6,181.5,187.4,193.3])
BaseLogPartFctRef[90]['NbLabels']=2
BaseLogPartFctRef[90]['NbCliques']=129
BaseLogPartFctRef[90]['NbSites']=86
BaseLogPartFctRef[90]['StdNgbhDivMoyNgbh']=0.3588
BaseLogPartFctRef[91]={}
BaseLogPartFctRef[91]['LogPF']=np.array([270.3,289.0,308.1,327.7,347.7,368.3,389.3,410.9,433.0,455.7,479.0,503.1,528.0,553.8,580.8,608.9,638.2,668.8,700.3,732.6,765.5,799.0,833.0,867.4,902.1,937.1,972.4,1008.,1043.,1079.])
BaseLogPartFctRef[91]['NbLabels']=2
BaseLogPartFctRef[91]['NbCliques']=737
BaseLogPartFctRef[91]['NbSites']=390
BaseLogPartFctRef[91]['StdNgbhDivMoyNgbh']=0.3378
BaseLogPartFctRef[92]={}
BaseLogPartFctRef[92]['LogPF']=np.array([71.39,75.52,79.74,84.08,88.51,93.04,97.70,102.5,107.3,112.3,117.4,122.7,128.1,133.7,139.4,145.2,151.3,157.5,164.0,170.5,177.3,184.1,191.1,198.3,205.6,212.9,220.4,227.9,235.5,243.2])
BaseLogPartFctRef[92]['NbLabels']=2
BaseLogPartFctRef[92]['NbCliques']=163
BaseLogPartFctRef[92]['NbSites']=103
BaseLogPartFctRef[92]['StdNgbhDivMoyNgbh']=0.4274
BaseLogPartFctRef[93]={}
BaseLogPartFctRef[93]['LogPF']=np.array([207.3,220.6,234.3,248.4,262.8,277.5,292.6,308.1,323.9,340.2,356.9,374.0,391.7,409.8,428.6,448.0,468.1,489.0,510.6,533.0,555.8,579.2,603.1,627.3,651.8,676.6,701.6,726.8,752.2,777.6])
BaseLogPartFctRef[93]['NbLabels']=2
BaseLogPartFctRef[93]['NbCliques']=529
BaseLogPartFctRef[93]['NbSites']=299
BaseLogPartFctRef[93]['StdNgbhDivMoyNgbh']=0.3502
BaseLogPartFctRef[94]={}
BaseLogPartFctRef[94]['LogPF']=np.array([94.27,99.47,104.8,110.2,115.8,121.5,127.4,133.4,139.5,145.8,152.2,158.8,165.5,172.5,179.5,186.8,194.3,202.0,209.9,218.0,226.2,234.7,243.3,252.1,261.1,270.2,279.4,288.7,298.1,307.6])
BaseLogPartFctRef[94]['NbLabels']=2
BaseLogPartFctRef[94]['NbCliques']=205
BaseLogPartFctRef[94]['NbSites']=136
BaseLogPartFctRef[94]['StdNgbhDivMoyNgbh']=0.3809
BaseLogPartFctRef[95]={}
BaseLogPartFctRef[95]['LogPF']=np.array([88.03,93.55,99.21,105.0,111.0,117.1,123.3,129.6,136.2,142.9,149.8,156.9,164.2,171.8,179.5,187.5,195.8,204.4,213.3,222.5,231.8,241.4,251.1,261.0,271.0,281.1,291.3,301.6,312.0,322.4])
BaseLogPartFctRef[95]['NbLabels']=2
BaseLogPartFctRef[95]['NbCliques']=218
BaseLogPartFctRef[95]['NbSites']=127
BaseLogPartFctRef[95]['StdNgbhDivMoyNgbh']=0.3673
BaseLogPartFctRef[96]={}
BaseLogPartFctRef[96]['LogPF']=np.array([117.1,124.4,131.8,139.4,147.1,155.1,163.2,171.5,180.1,188.8,197.8,207.0,216.4,226.1,236.1,246.5,257.2,268.1,279.5,291.2,303.3,315.6,328.2,341.1,354.1,367.3,380.6,394.1,407.6,421.2])
BaseLogPartFctRef[96]['NbLabels']=2
BaseLogPartFctRef[96]['NbCliques']=285
BaseLogPartFctRef[96]['NbSites']=169
BaseLogPartFctRef[96]['StdNgbhDivMoyNgbh']=0.3804
BaseLogPartFctRef[97]={}
BaseLogPartFctRef[97]['LogPF']=np.array([64.46,68.38,72.40,76.53,80.75,85.06,89.51,94.03,98.66,103.4,108.3,113.3,118.5,123.7,129.2,134.8,140.5,146.5,152.7,159.0,165.5,172.1,178.9,185.8,192.9,200.0,207.2,214.5,221.8,229.1])
BaseLogPartFctRef[97]['NbLabels']=2
BaseLogPartFctRef[97]['NbCliques']=155
BaseLogPartFctRef[97]['NbSites']=93
BaseLogPartFctRef[97]['StdNgbhDivMoyNgbh']=0.3436
BaseLogPartFctRef[98]={}
BaseLogPartFctRef[98]['LogPF']=np.array([94.96,100.3,105.8,111.4,117.1,123.0,129.0,135.2,141.5,147.9,154.6,161.3,168.3,175.5,182.8,190.4,198.2,206.3,214.5,222.9,231.6,240.5,249.6,258.7,268.1,277.5,287.1,296.8,306.5,316.3])
BaseLogPartFctRef[98]['NbLabels']=2
BaseLogPartFctRef[98]['NbCliques']=211
BaseLogPartFctRef[98]['NbSites']=137
BaseLogPartFctRef[98]['StdNgbhDivMoyNgbh']=0.4104
BaseLogPartFctRef[99]={}
BaseLogPartFctRef[99]['LogPF']=np.array([86.64,91.87,97.23,102.7,108.3,114.0,119.9,126.0,132.1,138.5,145.0,151.6,158.4,165.5,172.7,180.2,187.9,195.9,204.1,212.6,221.2,230.1,239.1,248.3,257.5,267.0,276.5,286.1,295.7,305.5])
BaseLogPartFctRef[99]['NbLabels']=2
BaseLogPartFctRef[99]['NbCliques']=206
BaseLogPartFctRef[99]['NbSites']=125
BaseLogPartFctRef[99]['StdNgbhDivMoyNgbh']=0.3692
BaseLogPartFctRef[100]={}
BaseLogPartFctRef[100]['LogPF']=np.array([94.96,101.0,107.1,113.5,119.9,126.6,133.4,140.3,147.5,154.8,162.2,169.9,177.8,185.9,194.3,203.0,212.0,221.3,230.9,240.8,251.0,261.4,272.0,282.8,293.7,304.8,316.0,327.3,338.7,350.1])
BaseLogPartFctRef[100]['NbLabels']=2
BaseLogPartFctRef[100]['NbCliques']=238
BaseLogPartFctRef[100]['NbSites']=137
BaseLogPartFctRef[100]['StdNgbhDivMoyNgbh']=0.3203
BaseLogPartFctRef[101]={}
BaseLogPartFctRef[101]['LogPF']=np.array([115.8,122.9,130.2,137.6,145.3,153.1,161.1,169.3,177.7,186.3,195.1,204.2,213.6,223.2,233.2,243.5,254.2,265.1,276.4,288.1,300.0,312.3,324.7,337.3,350.1,363.0,376.1,389.3,402.6,416.0])
BaseLogPartFctRef[101]['NbLabels']=2
BaseLogPartFctRef[101]['NbCliques']=281
BaseLogPartFctRef[101]['NbSites']=167
BaseLogPartFctRef[101]['StdNgbhDivMoyNgbh']=0.3747
BaseLogPartFctRef[102]={}
BaseLogPartFctRef[102]['LogPF']=np.array([160.1,170.5,181.1,192.0,203.1,214.5,226.2,238.1,250.4,262.9,275.8,289.1,302.7,316.8,331.4,346.7,362.4,378.6,395.5,412.9,430.7,448.9,467.4,486.2,505.1,524.3,543.7,563.1,582.6,602.3])
BaseLogPartFctRef[102]['NbLabels']=2
BaseLogPartFctRef[102]['NbCliques']=409
BaseLogPartFctRef[102]['NbSites']=231
BaseLogPartFctRef[102]['StdNgbhDivMoyNgbh']=0.3669
BaseLogPartFctRef[103]={}
BaseLogPartFctRef[103]['LogPF']=np.array([105.4,112.0,118.9,125.9,133.1,140.5,148.0,155.7,163.6,171.7,180.0,188.6,197.4,206.5,215.9,225.7,235.8,246.2,257.0,268.0,279.3,290.9,302.7,314.7,326.8,339.1,351.5,363.9,376.5,389.1])
BaseLogPartFctRef[103]['NbLabels']=2
BaseLogPartFctRef[103]['NbCliques']=264
BaseLogPartFctRef[103]['NbSites']=152
BaseLogPartFctRef[103]['StdNgbhDivMoyNgbh']=0.3455
BaseLogPartFctRef[104]={}
BaseLogPartFctRef[104]['LogPF']=np.array([90.80,96.28,101.9,107.6,113.5,119.5,125.7,132.0,138.5,145.1,151.9,158.9,166.1,173.4,181.0,188.8,196.9,205.2,213.8,222.7,231.8,241.1,250.6,260.3,270.1,280.1,290.1,300.2,310.4,320.6])
BaseLogPartFctRef[104]['NbLabels']=2
BaseLogPartFctRef[104]['NbCliques']=216
BaseLogPartFctRef[104]['NbSites']=131
BaseLogPartFctRef[104]['StdNgbhDivMoyNgbh']=0.3877
BaseLogPartFctRef[105]={}
BaseLogPartFctRef[105]['LogPF']=np.array([84.56,89.31,94.17,99.15,104.2,109.4,114.7,120.2,125.8,131.5,137.3,143.4,149.5,155.8,162.3,168.9,175.7,182.7,189.8,197.1,204.7,212.4,220.2,228.2,236.4,244.6,253.1,261.6,270.2,278.9])
BaseLogPartFctRef[105]['NbLabels']=2
BaseLogPartFctRef[105]['NbCliques']=187
BaseLogPartFctRef[105]['NbSites']=122
BaseLogPartFctRef[105]['StdNgbhDivMoyNgbh']=0.3629
BaseLogPartFctRef[106]={}
BaseLogPartFctRef[106]['LogPF']=np.array([53.37,56.03,58.75,61.54,64.39,67.32,70.31,73.36,76.49,79.70,82.98,86.34,89.76,93.27,96.83,100.5,104.2,108.0,112.0,116.0,120.1,124.2,128.5,132.8,137.1,141.6,146.1,150.7,155.3,160.0])
BaseLogPartFctRef[106]['NbLabels']=2
BaseLogPartFctRef[106]['NbCliques']=105
BaseLogPartFctRef[106]['NbSites']=77
BaseLogPartFctRef[106]['StdNgbhDivMoyNgbh']=0.3897
BaseLogPartFctRef[107]={}
BaseLogPartFctRef[107]['LogPF']=np.array([83.18,88.08,93.10,98.22,103.5,108.9,114.4,120.0,125.7,131.7,137.7,143.9,150.3,156.9,163.6,170.6,177.7,185.0,192.6,200.4,208.3,216.5,224.8,233.3,241.9,250.6,259.5,268.4,277.4,286.4])
BaseLogPartFctRef[107]['NbLabels']=2
BaseLogPartFctRef[107]['NbCliques']=193
BaseLogPartFctRef[107]['NbSites']=120
BaseLogPartFctRef[107]['StdNgbhDivMoyNgbh']=0.3673
BaseLogPartFctRef[108]={}
BaseLogPartFctRef[108]['LogPF']=np.array([135.9,144.3,153.0,161.8,170.9,180.2,189.7,199.5,209.5,219.8,230.3,241.1,252.3,263.7,275.5,287.7,300.3,313.3,326.7,340.6,354.7,369.1,383.9,398.9,414.1,429.5,445.1,460.8,476.6,492.6])
BaseLogPartFctRef[108]['NbLabels']=2
BaseLogPartFctRef[108]['NbCliques']=334
BaseLogPartFctRef[108]['NbSites']=196
BaseLogPartFctRef[108]['StdNgbhDivMoyNgbh']=0.3608
BaseLogPartFctRef[109]={}
BaseLogPartFctRef[109]['LogPF']=np.array([123.4,131.5,139.9,148.5,157.3,166.3,175.5,185.0,194.7,204.6,214.8,225.4,236.3,247.5,259.1,271.2,283.7,296.8,310.4,324.2,338.5,353.0,367.7,382.6,397.7,412.9,428.3,443.7,459.2,474.8])
BaseLogPartFctRef[109]['NbLabels']=2
BaseLogPartFctRef[109]['NbCliques']=323
BaseLogPartFctRef[109]['NbSites']=178
BaseLogPartFctRef[109]['StdNgbhDivMoyNgbh']=0.3482
BaseLogPartFctRef[110]={}
BaseLogPartFctRef[110]['LogPF']=np.array([64.46,67.91,71.46,75.12,78.83,82.65,86.53,90.54,94.63,98.82,103.1,107.5,112.0,116.7,121.4,126.3,131.4,136.5,141.8,147.3,152.8,158.5,164.3,170.1,176.1,182.1,188.2,194.4,200.7,206.9])
BaseLogPartFctRef[110]['NbLabels']=2
BaseLogPartFctRef[110]['NbCliques']=137
BaseLogPartFctRef[110]['NbSites']=93
BaseLogPartFctRef[110]['StdNgbhDivMoyNgbh']=0.4218
BaseLogPartFctRef[111]={}
BaseLogPartFctRef[111]['LogPF']=np.array([161.5,172.4,183.5,194.9,206.5,218.5,230.7,243.2,256.1,269.3,282.8,296.8,311.2,326.1,341.5,357.5,374.1,391.4,409.3,427.6,446.5,465.8,485.4,505.2,525.3,545.6,566.0,586.6,607.2,628.0])
BaseLogPartFctRef[111]['NbLabels']=2
BaseLogPartFctRef[111]['NbCliques']=428
BaseLogPartFctRef[111]['NbSites']=233
BaseLogPartFctRef[111]['StdNgbhDivMoyNgbh']=0.3430
BaseLogPartFctRef[112]={}
BaseLogPartFctRef[112]['LogPF']=np.array([63.77,67.94,72.22,76.58,81.09,85.70,90.42,95.26,100.2,105.3,110.5,115.9,121.5,127.2,133.2,139.4,146.0,152.7,159.7,166.8,174.1,181.5,189.0,196.7,204.4,212.2,220.0,227.9,235.8,243.7])
BaseLogPartFctRef[112]['NbLabels']=2
BaseLogPartFctRef[112]['NbCliques']=165
BaseLogPartFctRef[112]['NbSites']=92
BaseLogPartFctRef[112]['StdNgbhDivMoyNgbh']=0.3796
BaseLogPartFctRef[113]={}
BaseLogPartFctRef[113]['LogPF']=np.array([261.3,279.8,298.7,318.1,338.0,358.4,379.2,400.7,422.6,445.2,468.3,492.4,517.3,543.1,570.2,598.5,628.2,658.9,690.5,723.0,756.1,789.7,823.8,858.1,892.8,927.8,962.9,998.1,1034.,1069.])
BaseLogPartFctRef[113]['NbLabels']=2
BaseLogPartFctRef[113]['NbCliques']=730
BaseLogPartFctRef[113]['NbSites']=377
BaseLogPartFctRef[113]['StdNgbhDivMoyNgbh']=0.3418
BaseLogPartFctRef[114]={}
BaseLogPartFctRef[114]['LogPF']=np.array([94.96,100.3,105.8,111.4,117.2,123.1,129.1,135.3,141.7,148.1,154.8,161.6,168.6,175.7,183.1,190.6,198.4,206.4,214.6,223.0,231.7,240.5,249.6,258.8,268.1,277.5,287.1,296.8,306.6,316.4])
BaseLogPartFctRef[114]['NbLabels']=2
BaseLogPartFctRef[114]['NbCliques']=212
BaseLogPartFctRef[114]['NbSites']=137
BaseLogPartFctRef[114]['StdNgbhDivMoyNgbh']=0.3695
BaseLogPartFctRef[115]={}
BaseLogPartFctRef[115]['LogPF']=np.array([77.63,82.91,88.35,93.92,99.60,105.4,111.4,117.5,123.8,130.2,136.8,143.7,150.8,158.1,165.8,173.7,182.1,190.6,199.5,208.6,217.9,227.4,237.0,246.7,256.5,266.4,276.4,286.4,296.5,306.6])
BaseLogPartFctRef[115]['NbLabels']=2
BaseLogPartFctRef[115]['NbCliques']=209
BaseLogPartFctRef[115]['NbSites']=112
BaseLogPartFctRef[115]['StdNgbhDivMoyNgbh']=0.3290
BaseLogPartFctRef[116]={}
BaseLogPartFctRef[116]['LogPF']=np.array([110.9,117.5,124.3,131.2,138.2,145.5,152.9,160.5,168.3,176.3,184.5,192.9,201.5,210.3,219.4,228.7,238.4,248.3,258.5,268.9,279.7,290.7,302.0,313.5,325.2,337.0,349.0,361.1,373.3,385.6])
BaseLogPartFctRef[116]['NbLabels']=2
BaseLogPartFctRef[116]['NbCliques']=260
BaseLogPartFctRef[116]['NbSites']=160
BaseLogPartFctRef[116]['StdNgbhDivMoyNgbh']=0.3698
BaseLogPartFctRef[117]={}
BaseLogPartFctRef[117]['LogPF']=np.array([59.61,63.17,66.80,70.53,74.33,78.24,82.22,86.32,90.50,94.80,99.20,103.7,108.4,113.2,118.1,123.2,128.4,133.9,139.4,145.2,151.0,157.0,163.1,169.3,175.6,181.9,188.4,194.9,201.4,208.0])
BaseLogPartFctRef[117]['NbLabels']=2
BaseLogPartFctRef[117]['NbCliques']=140
BaseLogPartFctRef[117]['NbSites']=86
BaseLogPartFctRef[117]['StdNgbhDivMoyNgbh']=0.3851
BaseLogPartFctRef[118]={}
BaseLogPartFctRef[118]['LogPF']=np.array([141.4,150.6,160.1,169.8,179.7,189.8,200.2,210.9,221.8,233.0,244.5,256.4,268.6,281.2,294.3,307.9,322.1,336.7,351.9,367.5,383.4,399.7,416.2,432.9,449.8,466.8,484.0,501.4,518.8,536.3])
BaseLogPartFctRef[118]['NbLabels']=2
BaseLogPartFctRef[118]['NbCliques']=364
BaseLogPartFctRef[118]['NbSites']=204
BaseLogPartFctRef[118]['StdNgbhDivMoyNgbh']=0.3663
BaseLogPartFctRef[119]={}
BaseLogPartFctRef[119]['LogPF']=np.array([108.1,115.3,122.7,130.3,138.1,146.0,154.1,162.5,171.0,179.8,188.8,198.1,207.7,217.7,228.1,239.0,250.2,261.8,273.8,286.1,298.7,311.5,324.5,337.7,351.0,364.5,378.0,391.6,405.4,419.1])
BaseLogPartFctRef[119]['NbLabels']=2
BaseLogPartFctRef[119]['NbCliques']=285
BaseLogPartFctRef[119]['NbSites']=156
BaseLogPartFctRef[119]['StdNgbhDivMoyNgbh']=0.3677
#non-regular grids: Graphes10_06.pickle
BaseLogPartFctRef[120]={}
BaseLogPartFctRef[120]['LogPF']=np.array([638.4,698.4,760.0,822.9,887.6,953.8,1022.,1092.,1164.,1239.,1319.,1406.,1501.,1602.,1708.,1817.,1928.,2041.,2155.,2270.,2386.,2503.,2620.,2737.,2854.,2972.,3090.,3208.,3326.,3444.])
BaseLogPartFctRef[120]['NbLabels']=2
BaseLogPartFctRef[120]['NbCliques']=2370
BaseLogPartFctRef[120]['NbSites']=921
BaseLogPartFctRef[120]['StdNgbhDivMoyNgbh']=0.1781
BaseLogPartFctRef[121]={}
BaseLogPartFctRef[121]['LogPF']=np.array([642.5,702.6,764.1,827.2,891.9,958.2,1026.,1096.,1168.,1243.,1323.,1409.,1503.,1604.,1710.,1818.,1929.,2042.,2156.,2272.,2388.,2504.,2621.,2739.,2856.,2974.,3092.,3210.,3328.,3446.])
BaseLogPartFctRef[121]['NbLabels']=2
BaseLogPartFctRef[121]['NbCliques']=2373
BaseLogPartFctRef[121]['NbSites']=927
BaseLogPartFctRef[121]['StdNgbhDivMoyNgbh']=0.1773
BaseLogPartFctRef[122]={}
BaseLogPartFctRef[122]['LogPF']=np.array([641.9,701.9,763.5,826.5,891.2,957.5,1026.,1096.,1168.,1243.,1322.,1409.,1504.,1605.,1710.,1819.,1930.,2043.,2157.,2272.,2388.,2505.,2622.,2739.,2857.,2975.,3093.,3211.,3329.,3448.])
BaseLogPartFctRef[122]['NbLabels']=2
BaseLogPartFctRef[122]['NbCliques']=2374
BaseLogPartFctRef[122]['NbSites']=926
BaseLogPartFctRef[122]['StdNgbhDivMoyNgbh']=0.1803
BaseLogPartFctRef[123]={}
BaseLogPartFctRef[123]['LogPF']=np.array([640.5,700.8,762.5,825.8,890.7,957.1,1025.,1096.,1168.,1244.,1324.,1411.,1507.,1609.,1715.,1824.,1936.,2050.,2165.,2280.,2397.,2514.,2631.,2749.,2867.,2985.,3104.,3222.,3341.,3459.])
BaseLogPartFctRef[123]['NbLabels']=2
BaseLogPartFctRef[123]['NbCliques']=2381
BaseLogPartFctRef[123]['NbSites']=924
BaseLogPartFctRef[123]['StdNgbhDivMoyNgbh']=0.1770
BaseLogPartFctRef[124]={}
BaseLogPartFctRef[124]['LogPF']=np.array([652.3,714.1,777.4,842.4,909.0,977.2,1047.,1119.,1194.,1271.,1353.,1442.,1541.,1646.,1755.,1868.,1983.,2100.,2218.,2336.,2456.,2576.,2697.,2818.,2939.,3060.,3182.,3303.,3425.,3547.])
BaseLogPartFctRef[124]['NbLabels']=2
BaseLogPartFctRef[124]['NbCliques']=2442
BaseLogPartFctRef[124]['NbSites']=941
BaseLogPartFctRef[124]['StdNgbhDivMoyNgbh']=0.1680
BaseLogPartFctRef[125]={}
BaseLogPartFctRef[125]['LogPF']=np.array([641.9,702.3,764.1,827.5,892.4,959.0,1027.,1098.,1170.,1246.,1326.,1413.,1509.,1611.,1717.,1826.,1938.,2051.,2166.,2282.,2399.,2516.,2633.,2751.,2869.,2987.,3106.,3224.,3343.,3462.])
BaseLogPartFctRef[125]['NbLabels']=2
BaseLogPartFctRef[125]['NbCliques']=2383
BaseLogPartFctRef[125]['NbSites']=926
BaseLogPartFctRef[125]['StdNgbhDivMoyNgbh']=0.1770
BaseLogPartFctRef[126]={}
BaseLogPartFctRef[126]['LogPF']=np.array([637.0,696.6,757.7,820.3,884.4,950.1,1018.,1087.,1159.,1233.,1312.,1397.,1491.,1592.,1696.,1805.,1915.,2027.,2140.,2255.,2370.,2485.,2602.,2718.,2835.,2951.,3068.,3185.,3303.,3420.])
BaseLogPartFctRef[126]['NbLabels']=2
BaseLogPartFctRef[126]['NbCliques']=2354
BaseLogPartFctRef[126]['NbSites']=919
BaseLogPartFctRef[126]['StdNgbhDivMoyNgbh']=0.1802
BaseLogPartFctRef[127]={}
BaseLogPartFctRef[127]['LogPF']=np.array([650.9,712.3,775.1,839.6,905.6,973.4,1043.,1114.,1188.,1265.,1346.,1435.,1533.,1637.,1745.,1857.,1971.,2086.,2203.,2321.,2439.,2558.,2678.,2798.,2918.,3039.,3159.,3280.,3401.,3521.])
BaseLogPartFctRef[127]['NbLabels']=2
BaseLogPartFctRef[127]['NbCliques']=2424
BaseLogPartFctRef[127]['NbSites']=939
BaseLogPartFctRef[127]['StdNgbhDivMoyNgbh']=0.1725
BaseLogPartFctRef[128]={}
BaseLogPartFctRef[128]['LogPF']=np.array([643.2,703.9,765.9,829.6,894.9,961.7,1030.,1101.,1174.,1250.,1331.,1419.,1515.,1618.,1725.,1835.,1947.,2062.,2177.,2294.,2411.,2528.,2647.,2765.,2884.,3003.,3122.,3241.,3360.,3480.])
BaseLogPartFctRef[128]['NbLabels']=2
BaseLogPartFctRef[128]['NbCliques']=2395
BaseLogPartFctRef[128]['NbSites']=928
BaseLogPartFctRef[128]['StdNgbhDivMoyNgbh']=0.1779
BaseLogPartFctRef[129]={}
BaseLogPartFctRef[129]['LogPF']=np.array([628.0,686.2,745.7,806.6,869.2,933.3,999.1,1067.,1137.,1209.,1286.,1368.,1459.,1556.,1658.,1762.,1869.,1978.,2088.,2199.,2311.,2424.,2537.,2651.,2764.,2878.,2992.,3107.,3221.,3335.])
BaseLogPartFctRef[129]['NbLabels']=2
BaseLogPartFctRef[129]['NbCliques']=2296
BaseLogPartFctRef[129]['NbSites']=906
BaseLogPartFctRef[129]['StdNgbhDivMoyNgbh']=0.1849
BaseLogPartFctRef[130]={}
BaseLogPartFctRef[130]['LogPF']=np.array([648.1,708.9,771.4,835.3,900.9,968.1,1037.,1108.,1181.,1257.,1338.,1425.,1522.,1624.,1732.,1842.,1955.,2070.,2186.,2303.,2421.,2539.,2658.,2776.,2896.,3015.,3135.,3254.,3374.,3494.])
BaseLogPartFctRef[130]['NbLabels']=2
BaseLogPartFctRef[130]['NbCliques']=2405
BaseLogPartFctRef[130]['NbSites']=935
BaseLogPartFctRef[130]['StdNgbhDivMoyNgbh']=0.1743
BaseLogPartFctRef[131]={}
BaseLogPartFctRef[131]['LogPF']=np.array([643.9,704.5,766.6,830.3,895.5,962.2,1031.,1101.,1174.,1250.,1330.,1419.,1514.,1616.,1723.,1833.,1945.,2059.,2175.,2291.,2408.,2525.,2643.,2762.,2880.,2999.,3118.,3237.,3356.,3476.])
BaseLogPartFctRef[131]['NbLabels']=2
BaseLogPartFctRef[131]['NbCliques']=2392
BaseLogPartFctRef[131]['NbSites']=929
BaseLogPartFctRef[131]['StdNgbhDivMoyNgbh']=0.1722
BaseLogPartFctRef[132]={}
BaseLogPartFctRef[132]['LogPF']=np.array([630.8,689.2,749.0,810.3,873.0,937.5,1004.,1072.,1142.,1215.,1291.,1375.,1466.,1563.,1665.,1770.,1878.,1987.,2098.,2210.,2322.,2435.,2549.,2663.,2777.,2891.,3006.,3120.,3235.,3350.])
BaseLogPartFctRef[132]['NbLabels']=2
BaseLogPartFctRef[132]['NbCliques']=2306
BaseLogPartFctRef[132]['NbSites']=910
BaseLogPartFctRef[132]['StdNgbhDivMoyNgbh']=0.1862
BaseLogPartFctRef[133]={}
BaseLogPartFctRef[133]['LogPF']=np.array([651.6,713.5,776.9,842.0,908.6,977.0,1047.,1119.,1194.,1271.,1354.,1444.,1543.,1649.,1758.,1871.,1987.,2104.,2222.,2341.,2461.,2581.,2702.,2823.,2944.,3066.,3187.,3309.,3431.,3553.])
BaseLogPartFctRef[133]['NbLabels']=2
BaseLogPartFctRef[133]['NbCliques']=2446
BaseLogPartFctRef[133]['NbSites']=940
BaseLogPartFctRef[133]['StdNgbhDivMoyNgbh']=0.1669
BaseLogPartFctRef[134]={}
BaseLogPartFctRef[134]['LogPF']=np.array([650.2,711.4,774.3,838.7,904.6,972.4,1042.,1113.,1187.,1264.,1345.,1434.,1532.,1635.,1743.,1855.,1969.,2084.,2201.,2319.,2437.,2557.,2676.,2796.,2916.,3036.,3157.,3278.,3398.,3519.])
BaseLogPartFctRef[134]['NbLabels']=2
BaseLogPartFctRef[134]['NbCliques']=2422
BaseLogPartFctRef[134]['NbSites']=938
BaseLogPartFctRef[134]['StdNgbhDivMoyNgbh']=0.1660
BaseLogPartFctRef[135]={}
BaseLogPartFctRef[135]['LogPF']=np.array([639.1,699.6,761.7,825.3,890.5,957.4,1026.,1096.,1169.,1245.,1326.,1414.,1511.,1614.,1721.,1831.,1943.,2057.,2173.,2289.,2406.,2524.,2642.,2760.,2879.,2998.,3117.,3236.,3355.,3474.])
BaseLogPartFctRef[135]['NbLabels']=2
BaseLogPartFctRef[135]['NbCliques']=2392
BaseLogPartFctRef[135]['NbSites']=922
BaseLogPartFctRef[135]['StdNgbhDivMoyNgbh']=0.1761
BaseLogPartFctRef[136]={}
BaseLogPartFctRef[136]['LogPF']=np.array([635.6,694.9,755.6,817.8,881.6,946.9,1014.,1083.,1154.,1228.,1307.,1391.,1484.,1583.,1687.,1795.,1904.,2015.,2128.,2242.,2356.,2471.,2586.,2702.,2817.,2934.,3050.,3166.,3283.,3399.])
BaseLogPartFctRef[136]['NbLabels']=2
BaseLogPartFctRef[136]['NbCliques']=2340
BaseLogPartFctRef[136]['NbSites']=917
BaseLogPartFctRef[136]['StdNgbhDivMoyNgbh']=0.1841
BaseLogPartFctRef[137]={}
BaseLogPartFctRef[137]['LogPF']=np.array([647.4,708.3,770.8,834.9,900.3,967.6,1037.,1108.,1181.,1257.,1338.,1427.,1523.,1627.,1734.,1845.,1958.,2073.,2189.,2306.,2423.,2542.,2660.,2779.,2899.,3018.,3138.,3257.,3377.,3497.])
BaseLogPartFctRef[137]['NbLabels']=2
BaseLogPartFctRef[137]['NbCliques']=2406
BaseLogPartFctRef[137]['NbSites']=934
BaseLogPartFctRef[137]['StdNgbhDivMoyNgbh']=0.1698
BaseLogPartFctRef[138]={}
BaseLogPartFctRef[138]['LogPF']=np.array([638.4,698.2,759.3,821.9,886.1,952.0,1020.,1089.,1161.,1235.,1314.,1400.,1494.,1594.,1698.,1806.,1917.,2029.,2143.,2257.,2373.,2489.,2605.,2722.,2838.,2956.,3073.,3190.,3308.,3425.])
BaseLogPartFctRef[138]['NbLabels']=2
BaseLogPartFctRef[138]['NbCliques']=2359
BaseLogPartFctRef[138]['NbSites']=921
BaseLogPartFctRef[138]['StdNgbhDivMoyNgbh']=0.1759
BaseLogPartFctRef[139]={}
BaseLogPartFctRef[139]['LogPF']=np.array([652.9,714.5,777.7,842.4,908.7,976.8,1047.,1119.,1193.,1270.,1352.,1442.,1540.,1644.,1753.,1865.,1979.,2096.,2213.,2332.,2451.,2571.,2691.,2812.,2932.,3053.,3174.,3296.,3417.,3539.])
BaseLogPartFctRef[139]['NbLabels']=2
BaseLogPartFctRef[139]['NbCliques']=2436
BaseLogPartFctRef[139]['NbSites']=942
BaseLogPartFctRef[139]['StdNgbhDivMoyNgbh']=0.1721
BaseLogPartFctRef[140]={}
BaseLogPartFctRef[140]['LogPF']=np.array([636.3,695.6,756.4,818.7,882.6,948.0,1015.,1084.,1156.,1230.,1308.,1393.,1486.,1586.,1690.,1797.,1907.,2018.,2131.,2245.,2359.,2474.,2590.,2705.,2821.,2938.,3054.,3171.,3287.,3404.])
BaseLogPartFctRef[140]['NbLabels']=2
BaseLogPartFctRef[140]['NbCliques']=2343
BaseLogPartFctRef[140]['NbSites']=918
BaseLogPartFctRef[140]['StdNgbhDivMoyNgbh']=0.1785
BaseLogPartFctRef[141]={}
BaseLogPartFctRef[141]['LogPF']=np.array([628.0,686.7,746.7,808.2,871.4,936.2,1003.,1071.,1141.,1215.,1292.,1377.,1470.,1568.,1671.,1777.,1885.,1995.,2107.,2219.,2332.,2446.,2560.,2675.,2789.,2904.,3019.,3135.,3250.,3365.])
BaseLogPartFctRef[141]['NbLabels']=2
BaseLogPartFctRef[141]['NbCliques']=2316
BaseLogPartFctRef[141]['NbSites']=906
BaseLogPartFctRef[141]['StdNgbhDivMoyNgbh']=0.1839
BaseLogPartFctRef[142]={}
BaseLogPartFctRef[142]['LogPF']=np.array([643.2,703.3,765.0,828.1,892.7,959.1,1027.,1097.,1170.,1245.,1324.,1411.,1506.,1607.,1712.,1821.,1932.,2045.,2160.,2275.,2391.,2508.,2625.,2742.,2860.,2978.,3096.,3214.,3332.,3450.])
BaseLogPartFctRef[142]['NbLabels']=2
BaseLogPartFctRef[142]['NbCliques']=2374
BaseLogPartFctRef[142]['NbSites']=928
BaseLogPartFctRef[142]['StdNgbhDivMoyNgbh']=0.1787
BaseLogPartFctRef[143]={}
BaseLogPartFctRef[143]['LogPF']=np.array([638.4,697.8,758.6,821.0,884.9,950.6,1018.,1087.,1159.,1233.,1311.,1396.,1489.,1588.,1693.,1800.,1910.,2022.,2135.,2249.,2363.,2479.,2594.,2710.,2827.,2943.,3060.,3177.,3294.,3411.])
BaseLogPartFctRef[143]['NbLabels']=2
BaseLogPartFctRef[143]['NbCliques']=2348
BaseLogPartFctRef[143]['NbSites']=921
BaseLogPartFctRef[143]['StdNgbhDivMoyNgbh']=0.1739
BaseLogPartFctRef[144]={}
BaseLogPartFctRef[144]['LogPF']=np.array([622.4,680.0,739.0,799.6,861.6,925.1,990.3,1057.,1126.,1198.,1274.,1356.,1446.,1542.,1642.,1746.,1852.,1960.,2069.,2179.,2290.,2401.,2513.,2626.,2738.,2851.,2964.,3077.,3190.,3303.])
BaseLogPartFctRef[144]['NbLabels']=2
BaseLogPartFctRef[144]['NbCliques']=2274
BaseLogPartFctRef[144]['NbSites']=898
BaseLogPartFctRef[144]['StdNgbhDivMoyNgbh']=0.1871
BaseLogPartFctRef[145]={}
BaseLogPartFctRef[145]['LogPF']=np.array([642.5,702.9,764.9,828.3,893.3,959.9,1028.,1099.,1171.,1247.,1327.,1414.,1509.,1611.,1717.,1827.,1939.,2053.,2168.,2284.,2401.,2518.,2636.,2754.,2873.,2991.,3110.,3229.,3348.,3467.])
BaseLogPartFctRef[145]['NbLabels']=2
BaseLogPartFctRef[145]['NbCliques']=2387
BaseLogPartFctRef[145]['NbSites']=927
BaseLogPartFctRef[145]['StdNgbhDivMoyNgbh']=0.1717
BaseLogPartFctRef[146]={}
BaseLogPartFctRef[146]['LogPF']=np.array([619.0,675.5,733.5,792.9,853.7,916.0,980.1,1046.,1114.,1184.,1257.,1336.,1423.,1516.,1615.,1716.,1820.,1926.,2033.,2141.,2250.,2359.,2469.,2579.,2689.,2800.,2911.,3022.,3133.,3244.])
BaseLogPartFctRef[146]['NbLabels']=2
BaseLogPartFctRef[146]['NbCliques']=2233
BaseLogPartFctRef[146]['NbSites']=893
BaseLogPartFctRef[146]['StdNgbhDivMoyNgbh']=0.1868
BaseLogPartFctRef[147]={}
BaseLogPartFctRef[147]['LogPF']=np.array([642.5,702.4,763.8,826.8,891.4,957.7,1026.,1095.,1167.,1242.,1321.,1407.,1502.,1602.,1707.,1816.,1927.,2039.,2153.,2269.,2384.,2501.,2617.,2734.,2852.,2969.,3087.,3205.,3323.,3441.])
BaseLogPartFctRef[147]['NbLabels']=2
BaseLogPartFctRef[147]['NbCliques']=2369
BaseLogPartFctRef[147]['NbSites']=927
BaseLogPartFctRef[147]['StdNgbhDivMoyNgbh']=0.1785
BaseLogPartFctRef[148]={}
BaseLogPartFctRef[148]['LogPF']=np.array([637.0,696.7,757.7,820.3,884.5,950.3,1018.,1087.,1159.,1233.,1312.,1398.,1492.,1592.,1697.,1805.,1915.,2027.,2141.,2255.,2370.,2486.,2602.,2718.,2835.,2952.,3069.,3186.,3303.,3421.])
BaseLogPartFctRef[148]['NbLabels']=2
BaseLogPartFctRef[148]['NbCliques']=2355
BaseLogPartFctRef[148]['NbSites']=919
BaseLogPartFctRef[148]['StdNgbhDivMoyNgbh']=0.1801
BaseLogPartFctRef[149]={}
BaseLogPartFctRef[149]['LogPF']=np.array([643.9,704.3,766.1,829.5,894.4,961.0,1029.,1099.,1172.,1247.,1327.,1414.,1509.,1610.,1716.,1825.,1936.,2050.,2164.,2280.,2397.,2514.,2631.,2749.,2867.,2985.,3104.,3222.,3341.,3460.])
BaseLogPartFctRef[149]['NbLabels']=2
BaseLogPartFctRef[149]['NbCliques']=2382
BaseLogPartFctRef[149]['NbSites']=929
BaseLogPartFctRef[149]['StdNgbhDivMoyNgbh']=0.1741
#non-regular grids: Graphes15_02.pickle
BaseLogPartFctRef[150]={}
BaseLogPartFctRef[150]['LogPF']=np.array([25.65,26.93,28.25,29.60,30.99,32.41,33.86,35.35,36.87,38.44,40.04,41.67,43.34,45.05,46.79,48.59,50.45,52.32,54.24,56.22,58.24,60.28,62.37,64.50,66.68,68.87,71.09,73.35,75.62,77.93])
BaseLogPartFctRef[150]['NbLabels']=2
BaseLogPartFctRef[150]['NbCliques']=51
BaseLogPartFctRef[150]['NbSites']=37
BaseLogPartFctRef[150]['StdNgbhDivMoyNgbh']=0.4217
BaseLogPartFctRef[151]={}
BaseLogPartFctRef[151]['LogPF']=np.array([20.10,21.17,22.25,23.37,24.51,25.68,26.87,28.10,29.35,30.64,31.97,33.31,34.69,36.11,37.55,39.02,40.54,42.09,43.68,45.31,46.99,48.72,50.47,52.23,54.04,55.89,57.76,59.66,61.56,63.50])
BaseLogPartFctRef[151]['NbLabels']=2
BaseLogPartFctRef[151]['NbCliques']=42
BaseLogPartFctRef[151]['NbSites']=29
BaseLogPartFctRef[151]['StdNgbhDivMoyNgbh']=0.3948
BaseLogPartFctRef[152]={}
BaseLogPartFctRef[152]['LogPF']=np.array([24.26,25.45,26.66,27.91,29.19,30.50,31.84,33.21,34.61,36.04,37.50,39.00,40.55,42.12,43.73,45.37,47.04,48.76,50.51,52.30,54.13,55.99,57.89,59.84,61.81,63.80,65.83,67.90,69.98,72.09])
BaseLogPartFctRef[152]['NbLabels']=2
BaseLogPartFctRef[152]['NbCliques']=47
BaseLogPartFctRef[152]['NbSites']=35
BaseLogPartFctRef[152]['StdNgbhDivMoyNgbh']=0.4034
BaseLogPartFctRef[153]={}
BaseLogPartFctRef[153]['LogPF']=np.array([39.51,41.33,43.21,45.11,47.06,49.06,51.11,53.21,55.35,57.54,59.77,62.05,64.39,66.78,69.21,71.67,74.20,76.77,79.40,82.08,84.81,87.59,90.41,93.30,96.21,99.17,102.2,105.2,108.3,111.4])
BaseLogPartFctRef[153]['NbLabels']=2
BaseLogPartFctRef[153]['NbCliques']=72
BaseLogPartFctRef[153]['NbSites']=57
BaseLogPartFctRef[153]['StdNgbhDivMoyNgbh']=0.3793
BaseLogPartFctRef[154]={}
BaseLogPartFctRef[154]['LogPF']=np.array([30.50,31.89,33.32,34.77,36.27,37.81,39.37,40.97,42.61,44.28,45.99,47.74,49.53,51.36,53.23,55.14,57.08,59.06,61.09,63.16,65.27,67.41,69.59,71.81,74.05,76.33,78.65,80.99,83.36,85.77])
BaseLogPartFctRef[154]['NbLabels']=2
BaseLogPartFctRef[154]['NbCliques']=55
BaseLogPartFctRef[154]['NbSites']=44
BaseLogPartFctRef[154]['StdNgbhDivMoyNgbh']=0.3953
BaseLogPartFctRef[155]={}
BaseLogPartFctRef[155]['LogPF']=np.array([22.87,23.99,25.13,26.30,27.50,28.73,29.99,31.27,32.58,33.91,35.28,36.68,38.10,39.57,41.07,42.59,44.15,45.75,47.38,49.04,50.73,52.47,54.23,56.02,57.85,59.70,61.57,63.47,65.41,67.37])
BaseLogPartFctRef[155]['NbLabels']=2
BaseLogPartFctRef[155]['NbCliques']=44
BaseLogPartFctRef[155]['NbSites']=33
BaseLogPartFctRef[155]['StdNgbhDivMoyNgbh']=0.3655
BaseLogPartFctRef[156]={}
BaseLogPartFctRef[156]['LogPF']=np.array([34.66,36.61,38.60,40.65,42.75,44.88,47.07,49.32,51.63,53.99,56.41,58.90,61.43,64.04,66.73,69.50,72.34,75.27,78.27,81.36,84.51,87.73,91.00,94.34,97.73,101.2,104.6,108.2,111.7,115.3])
BaseLogPartFctRef[156]['NbLabels']=2
BaseLogPartFctRef[156]['NbCliques']=77
BaseLogPartFctRef[156]['NbSites']=50
BaseLogPartFctRef[156]['StdNgbhDivMoyNgbh']=0.3297
BaseLogPartFctRef[157]={}
BaseLogPartFctRef[157]['LogPF']=np.array([35.35,37.22,39.15,41.12,43.14,45.20,47.30,49.46,51.67,53.94,56.24,58.61,61.04,63.55,66.12,68.75,71.45,74.23,77.05,79.94,82.92,85.96,89.06,92.23,95.45,98.70,102.0,105.3,108.7,112.1])
BaseLogPartFctRef[157]['NbLabels']=2
BaseLogPartFctRef[157]['NbCliques']=74
BaseLogPartFctRef[157]['NbSites']=51
BaseLogPartFctRef[157]['StdNgbhDivMoyNgbh']=0.4138
BaseLogPartFctRef[158]={}
BaseLogPartFctRef[158]['LogPF']=np.array([15.94,16.75,17.59,18.44,19.30,20.19,21.10,22.03,22.99,23.96,24.97,25.99,27.04,28.12,29.21,30.34,31.51,32.69,33.90,35.13,36.39,37.68,39.00,40.33,41.69,43.08,44.48,45.91,47.36,48.81])
BaseLogPartFctRef[158]['NbLabels']=2
BaseLogPartFctRef[158]['NbCliques']=32
BaseLogPartFctRef[158]['NbSites']=23
BaseLogPartFctRef[158]['StdNgbhDivMoyNgbh']=0.3505
BaseLogPartFctRef[159]={}
BaseLogPartFctRef[159]['LogPF']=np.array([38.82,40.84,42.91,45.04,47.22,49.44,51.71,54.04,56.44,58.89,61.38,63.94,66.57,69.27,72.03,74.84,77.74,80.71,83.73,86.83,90.00,93.25,96.57,99.94,103.3,106.8,110.3,113.9,117.5,121.1])
BaseLogPartFctRef[159]['NbLabels']=2
BaseLogPartFctRef[159]['NbCliques']=80
BaseLogPartFctRef[159]['NbSites']=56
BaseLogPartFctRef[159]['StdNgbhDivMoyNgbh']=0.4079
BaseLogPartFctRef[160]={}
BaseLogPartFctRef[160]['LogPF']=np.array([22.18,23.27,24.39,25.54,26.71,27.91,29.13,30.39,31.67,32.98,34.33,35.71,37.11,38.55,40.02,41.54,43.09,44.68,46.31,47.97,49.67,51.39,53.15,54.94,56.76,58.61,60.48,62.38,64.30,66.24])
BaseLogPartFctRef[160]['NbLabels']=2
BaseLogPartFctRef[160]['NbCliques']=43
BaseLogPartFctRef[160]['NbSites']=32
BaseLogPartFctRef[160]['StdNgbhDivMoyNgbh']=0.4074
BaseLogPartFctRef[161]={}
BaseLogPartFctRef[161]['LogPF']=np.array([31.19,32.66,34.17,35.71,37.28,38.89,40.54,42.23,43.95,45.71,47.51,49.36,51.24,53.17,55.13,57.14,59.19,61.26,63.38,65.55,67.76,70.01,72.30,74.63,76.99,79.38,81.83,84.29,86.80,89.34])
BaseLogPartFctRef[161]['NbLabels']=2
BaseLogPartFctRef[161]['NbCliques']=58
BaseLogPartFctRef[161]['NbSites']=45
BaseLogPartFctRef[161]['StdNgbhDivMoyNgbh']=0.3792
BaseLogPartFctRef[162]={}
BaseLogPartFctRef[162]['LogPF']=np.array([21.49,22.58,23.69,24.83,26.00,27.20,28.41,29.66,30.94,32.26,33.60,34.97,36.38,37.82,39.30,40.81,42.35,43.94,45.57,47.24,48.93,50.65,52.43,54.24,56.06,57.92,59.81,61.71,63.63,65.57])
BaseLogPartFctRef[162]['NbLabels']=2
BaseLogPartFctRef[162]['NbCliques']=43
BaseLogPartFctRef[162]['NbSites']=31
BaseLogPartFctRef[162]['StdNgbhDivMoyNgbh']=0.4579
BaseLogPartFctRef[163]={}
BaseLogPartFctRef[163]['LogPF']=np.array([33.27,34.93,36.65,38.40,40.19,42.03,43.90,45.83,47.80,49.81,51.87,53.99,56.15,58.37,60.64,62.97,65.34,67.79,70.27,72.82,75.44,78.13,80.86,83.65,86.46,89.32,92.22,95.17,98.12,101.1])
BaseLogPartFctRef[163]['NbLabels']=2
BaseLogPartFctRef[163]['NbCliques']=66
BaseLogPartFctRef[163]['NbSites']=48
BaseLogPartFctRef[163]['StdNgbhDivMoyNgbh']=0.4269
BaseLogPartFctRef[164]={}
BaseLogPartFctRef[164]['LogPF']=np.array([26.34,27.64,28.96,30.31,31.71,33.13,34.58,36.07,37.60,39.16,40.76,42.39,44.06,45.76,47.51,49.29,51.12,52.97,54.86,56.78,58.75,60.76,62.81,64.89,67.01,69.16,71.34,73.54,75.78,78.04])
BaseLogPartFctRef[164]['NbLabels']=2
BaseLogPartFctRef[164]['NbCliques']=51
BaseLogPartFctRef[164]['NbSites']=38
BaseLogPartFctRef[164]['StdNgbhDivMoyNgbh']=0.4086
BaseLogPartFctRef[165]={}
BaseLogPartFctRef[165]['LogPF']=np.array([31.88,33.32,34.80,36.31,37.87,39.46,41.08,42.75,44.44,46.17,47.94,49.75,51.60,53.49,55.42,57.40,59.42,61.47,63.57,65.71,67.88,70.10,72.35,74.65,76.96,79.32,81.70,84.13,86.58,89.07])
BaseLogPartFctRef[165]['NbLabels']=2
BaseLogPartFctRef[165]['NbCliques']=57
BaseLogPartFctRef[165]['NbSites']=46
BaseLogPartFctRef[165]['StdNgbhDivMoyNgbh']=0.3925
BaseLogPartFctRef[166]={}
BaseLogPartFctRef[166]['LogPF']=np.array([29.81,31.35,32.93,34.55,36.21,37.90,39.64,41.42,43.24,45.10,47.01,48.97,50.96,53.01,55.10,57.25,59.43,61.66,63.96,66.30,68.69,71.13,73.62,76.16,78.73,81.34,84.01,86.69,89.40,92.15])
BaseLogPartFctRef[166]['NbLabels']=2
BaseLogPartFctRef[166]['NbCliques']=61
BaseLogPartFctRef[166]['NbSites']=43
BaseLogPartFctRef[166]['StdNgbhDivMoyNgbh']=0.3632
BaseLogPartFctRef[167]={}
BaseLogPartFctRef[167]['LogPF']=np.array([27.03,28.45,29.90,31.39,32.91,34.47,36.07,37.71,39.38,41.10,42.86,44.67,46.52,48.41,50.35,52.33,54.36,56.45,58.57,60.76,63.02,65.31,67.66,70.05,72.47,74.93,77.42,79.93,82.48,85.05])
BaseLogPartFctRef[167]['NbLabels']=2
BaseLogPartFctRef[167]['NbCliques']=56
BaseLogPartFctRef[167]['NbSites']=39
BaseLogPartFctRef[167]['StdNgbhDivMoyNgbh']=0.4150
BaseLogPartFctRef[168]={}
BaseLogPartFctRef[168]['LogPF']=np.array([41.59,43.57,45.59,47.67,49.80,51.96,54.20,56.47,58.80,61.17,63.61,66.09,68.63,71.23,73.89,76.61,79.37,82.21,85.12,88.09,91.13,94.22,97.35,100.6,103.8,107.1,110.4,113.8,117.2,120.7])
BaseLogPartFctRef[168]['NbLabels']=2
BaseLogPartFctRef[168]['NbCliques']=78
BaseLogPartFctRef[168]['NbSites']=60
BaseLogPartFctRef[168]['StdNgbhDivMoyNgbh']=0.4212
BaseLogPartFctRef[169]={}
BaseLogPartFctRef[169]['LogPF']=np.array([36.04,37.77,39.53,41.34,43.18,45.06,47.00,48.97,51.00,53.07,55.18,57.35,59.55,61.82,64.14,66.52,68.95,71.42,73.96,76.56,79.20,81.90,84.65,87.44,90.26,93.14,96.06,99.00,102.0,105.0])
BaseLogPartFctRef[169]['NbLabels']=2
BaseLogPartFctRef[169]['NbCliques']=68
BaseLogPartFctRef[169]['NbSites']=52
BaseLogPartFctRef[169]['StdNgbhDivMoyNgbh']=0.3902
BaseLogPartFctRef[170]={}
BaseLogPartFctRef[170]['LogPF']=np.array([38.12,40.25,42.43,44.67,46.95,49.29,51.70,54.15,56.67,59.24,61.88,64.59,67.36,70.21,73.14,76.15,79.24,82.40,85.67,89.02,92.45,95.95,99.51,103.1,106.8,110.6,114.4,118.2,122.1,126.1])
BaseLogPartFctRef[170]['NbLabels']=2
BaseLogPartFctRef[170]['NbCliques']=84
BaseLogPartFctRef[170]['NbSites']=55
BaseLogPartFctRef[170]['StdNgbhDivMoyNgbh']=0.3897
BaseLogPartFctRef[171]={}
BaseLogPartFctRef[171]['LogPF']=np.array([33.96,35.58,37.24,38.94,40.68,42.46,44.29,46.15,48.06,50.01,52.00,54.05,56.14,58.26,60.45,62.68,64.97,67.30,69.69,72.12,74.60,77.14,79.72,82.32,84.98,87.66,90.40,93.16,95.96,98.79])
BaseLogPartFctRef[171]['NbLabels']=2
BaseLogPartFctRef[171]['NbCliques']=64
BaseLogPartFctRef[171]['NbSites']=49
BaseLogPartFctRef[171]['StdNgbhDivMoyNgbh']=0.4340
BaseLogPartFctRef[172]={}
BaseLogPartFctRef[172]['LogPF']=np.array([24.26,25.32,26.41,27.53,28.67,29.84,31.03,32.26,33.52,34.80,36.11,37.43,38.80,40.19,41.61,43.06,44.54,46.04,47.59,49.15,50.75,52.37,54.02,55.69,57.39,59.12,60.86,62.62,64.42,66.23])
BaseLogPartFctRef[172]['NbLabels']=2
BaseLogPartFctRef[172]['NbCliques']=42
BaseLogPartFctRef[172]['NbSites']=35
BaseLogPartFctRef[172]['StdNgbhDivMoyNgbh']=0.4128
BaseLogPartFctRef[173]={}
BaseLogPartFctRef[173]['LogPF']=np.array([42.98,45.15,47.38,49.66,51.99,54.38,56.82,59.33,61.88,64.50,67.18,69.93,72.73,75.62,78.57,81.58,84.68,87.85,91.09,94.42,97.82,101.3,104.8,108.4,112.1,115.8,119.5,123.3,127.2,131.1])
BaseLogPartFctRef[173]['NbLabels']=2
BaseLogPartFctRef[173]['NbCliques']=86
BaseLogPartFctRef[173]['NbSites']=62
BaseLogPartFctRef[173]['StdNgbhDivMoyNgbh']=0.3737
BaseLogPartFctRef[174]={}
BaseLogPartFctRef[174]['LogPF']=np.array([22.87,23.98,25.12,26.29,27.48,28.70,29.96,31.23,32.55,33.89,35.27,36.67,38.10,39.58,41.07,42.60,44.17,45.78,47.43,49.10,50.81,52.56,54.33,56.13,57.97,59.84,61.73,63.66,65.61,67.58])
BaseLogPartFctRef[174]['NbLabels']=2
BaseLogPartFctRef[174]['NbCliques']=44
BaseLogPartFctRef[174]['NbSites']=33
BaseLogPartFctRef[174]['StdNgbhDivMoyNgbh']=0.4262
BaseLogPartFctRef[175]={}
BaseLogPartFctRef[175]['LogPF']=np.array([58.92,61.86,64.87,67.95,71.11,74.34,77.66,81.05,84.50,88.03,91.63,95.36,99.13,103.0,107.0,111.0,115.2,119.4,123.8,128.2,132.7,137.3,142.0,146.8,151.6,156.6,161.6,166.6,171.7,176.9])
BaseLogPartFctRef[175]['NbLabels']=2
BaseLogPartFctRef[175]['NbCliques']=116
BaseLogPartFctRef[175]['NbSites']=85
BaseLogPartFctRef[175]['StdNgbhDivMoyNgbh']=0.4058
BaseLogPartFctRef[176]={}
BaseLogPartFctRef[176]['LogPF']=np.array([26.34,27.94,29.57,31.24,32.96,34.71,36.51,38.36,40.24,42.18,44.16,46.22,48.32,50.50,52.74,55.05,57.42,59.86,62.39,64.97,67.61,70.32,73.08,75.89,78.74,81.63,84.55,87.50,90.47,93.46])
BaseLogPartFctRef[176]['NbLabels']=2
BaseLogPartFctRef[176]['NbCliques']=63
BaseLogPartFctRef[176]['NbSites']=38
BaseLogPartFctRef[176]['StdNgbhDivMoyNgbh']=0.3238
BaseLogPartFctRef[177]={}
BaseLogPartFctRef[177]['LogPF']=np.array([33.27,35.14,37.06,39.03,41.04,43.10,45.21,47.38,49.60,51.88,54.21,56.62,59.05,61.58,64.19,66.88,69.65,72.49,75.44,78.46,81.53,84.66,87.85,91.10,94.39,97.73,101.1,104.5,107.9,111.4])
BaseLogPartFctRef[177]['NbLabels']=2
BaseLogPartFctRef[177]['NbCliques']=74
BaseLogPartFctRef[177]['NbSites']=48
BaseLogPartFctRef[177]['StdNgbhDivMoyNgbh']=0.4340
BaseLogPartFctRef[178]={}
BaseLogPartFctRef[178]['LogPF']=np.array([24.95,26.22,27.52,28.84,30.20,31.60,33.02,34.48,35.98,37.51,39.07,40.67,42.32,44.00,45.72,47.48,49.28,51.14,53.03,54.96,56.94,58.94,61.00,63.08,65.20,67.35,69.54,71.75,73.99,76.25])
BaseLogPartFctRef[178]['NbLabels']=2
BaseLogPartFctRef[178]['NbCliques']=50
BaseLogPartFctRef[178]['NbSites']=36
BaseLogPartFctRef[178]['StdNgbhDivMoyNgbh']=0.3966
BaseLogPartFctRef[179]={}
BaseLogPartFctRef[179]['LogPF']=np.array([27.03,28.32,29.64,31.00,32.38,33.81,35.26,36.74,38.26,39.82,41.41,43.02,44.68,46.38,48.11,49.88,51.69,53.55,55.43,57.36,59.31,61.31,63.36,65.44,67.52,69.67,71.83,74.03,76.25,78.49])
BaseLogPartFctRef[179]['NbLabels']=2
BaseLogPartFctRef[179]['NbCliques']=51
BaseLogPartFctRef[179]['NbSites']=39
BaseLogPartFctRef[179]['StdNgbhDivMoyNgbh']=0.3922
#non-regular grids: Graphes15_03.pickle
BaseLogPartFctRef[180]={}
BaseLogPartFctRef[180]['LogPF']=np.array([701.5,748.9,797.5,847.2,898.2,950.3,1004.,1059.,1115.,1172.,1232.,1293.,1355.,1420.,1488.,1558.,1632.,1708.,1787.,1868.,1951.,2036.,2122.,2209.,2297.,2386.,2475.,2565.,2655.,2746.])
BaseLogPartFctRef[180]['NbLabels']=2
BaseLogPartFctRef[180]['NbCliques']=1872
BaseLogPartFctRef[180]['NbSites']=1012
BaseLogPartFctRef[180]['StdNgbhDivMoyNgbh']=0.3409
BaseLogPartFctRef[181]={}
BaseLogPartFctRef[181]['LogPF']=np.array([385.4,410.2,435.7,461.8,488.4,515.8,543.8,572.5,601.9,632.1,663.0,694.9,727.6,761.4,796.2,832.3,869.8,908.5,948.6,989.9,1032.,1076.,1120.,1165.,1210.,1256.,1303.,1349.,1396.,1444.])
BaseLogPartFctRef[181]['NbLabels']=2
BaseLogPartFctRef[181]['NbCliques']=981
BaseLogPartFctRef[181]['NbSites']=556
BaseLogPartFctRef[181]['StdNgbhDivMoyNgbh']=0.3564
BaseLogPartFctRef[182]={}
BaseLogPartFctRef[182]['LogPF']=np.array([524.7,560.5,597.2,634.8,673.2,712.6,752.9,794.3,836.7,880.2,925.0,971.0,1019.,1068.,1119.,1173.,1229.,1287.,1348.,1410.,1473.,1537.,1602.,1668.,1734.,1801.,1869.,1937.,2005.,2074.])
BaseLogPartFctRef[182]['NbLabels']=2
BaseLogPartFctRef[182]['NbCliques']=1414
BaseLogPartFctRef[182]['NbSites']=757
BaseLogPartFctRef[182]['StdNgbhDivMoyNgbh']=0.3404
BaseLogPartFctRef[183]={}
BaseLogPartFctRef[183]['LogPF']=np.array([406.9,432.3,458.4,485.1,512.5,540.6,569.3,598.7,628.8,659.7,691.4,723.9,757.5,792.2,827.9,865.0,903.4,943.1,984.1,1026.,1069.,1113.,1158.,1204.,1250.,1297.,1344.,1391.,1439.,1487.])
BaseLogPartFctRef[183]['NbLabels']=2
BaseLogPartFctRef[183]['NbCliques']=1006
BaseLogPartFctRef[183]['NbSites']=587
BaseLogPartFctRef[183]['StdNgbhDivMoyNgbh']=0.3831
BaseLogPartFctRef[184]={}
BaseLogPartFctRef[184]['LogPF']=np.array([262.0,279.4,297.2,315.5,334.1,353.2,372.7,392.8,413.4,434.5,456.1,478.4,501.4,525.3,550.1,575.8,602.4,630.1,658.6,688.0,718.1,748.7,779.9,811.5,843.5,875.7,908.2,940.9,973.8,1007.])
BaseLogPartFctRef[184]['NbLabels']=2
BaseLogPartFctRef[184]['NbCliques']=686
BaseLogPartFctRef[184]['NbSites']=378
BaseLogPartFctRef[184]['StdNgbhDivMoyNgbh']=0.3491
BaseLogPartFctRef[185]={}
BaseLogPartFctRef[185]['LogPF']=np.array([492.1,524.5,557.6,591.5,626.2,661.7,698.2,735.5,773.7,813.0,853.2,894.6,937.3,981.3,1027.,1074.,1123.,1174.,1227.,1281.,1337.,1394.,1452.,1511.,1570.,1630.,1691.,1751.,1813.,1874.])
BaseLogPartFctRef[185]['NbLabels']=2
BaseLogPartFctRef[185]['NbCliques']=1276
BaseLogPartFctRef[185]['NbSites']=710
BaseLogPartFctRef[185]['StdNgbhDivMoyNgbh']=0.3444
BaseLogPartFctRef[186]={}
BaseLogPartFctRef[186]['LogPF']=np.array([256.5,273.2,290.3,307.8,325.8,344.2,363.0,382.3,402.1,422.4,443.3,464.7,486.7,509.5,533.1,557.7,583.2,609.7,637.1,665.4,694.4,723.9,754.0,784.4,815.1,846.2,877.5,908.9,940.5,972.3])
BaseLogPartFctRef[186]['NbLabels']=2
BaseLogPartFctRef[186]['NbCliques']=660
BaseLogPartFctRef[186]['NbSites']=370
BaseLogPartFctRef[186]['StdNgbhDivMoyNgbh']=0.3615
BaseLogPartFctRef[187]={}
BaseLogPartFctRef[187]['LogPF']=np.array([407.6,434.7,462.6,491.1,520.3,550.3,581.0,612.4,644.7,677.7,711.7,746.7,782.8,820.2,858.9,899.2,941.1,984.8,1030.,1076.,1123.,1172.,1221.,1270.,1321.,1371.,1422.,1473.,1525.,1577.])
BaseLogPartFctRef[187]['NbLabels']=2
BaseLogPartFctRef[187]['NbCliques']=1074
BaseLogPartFctRef[187]['NbSites']=588
BaseLogPartFctRef[187]['StdNgbhDivMoyNgbh']=0.3498
BaseLogPartFctRef[188]={}
BaseLogPartFctRef[188]['LogPF']=np.array([283.5,299.9,316.7,334.0,351.7,369.7,388.3,407.2,426.6,446.5,466.9,487.9,509.3,531.4,554.0,577.3,601.3,625.9,651.3,677.4,704.1,731.3,759.1,787.4,816.1,845.2,874.7,904.5,934.6,965.0])
BaseLogPartFctRef[188]['NbLabels']=2
BaseLogPartFctRef[188]['NbCliques']=649
BaseLogPartFctRef[188]['NbSites']=409
BaseLogPartFctRef[188]['StdNgbhDivMoyNgbh']=0.3758
BaseLogPartFctRef[189]={}
BaseLogPartFctRef[189]['LogPF']=np.array([263.4,281.2,299.4,318.1,337.2,356.8,376.8,397.4,418.5,440.2,462.4,485.3,509.1,533.7,559.2,585.8,613.6,642.5,672.3,702.9,734.2,766.0,798.3,831.0,864.0,897.3,930.8,964.5,998.4,1032.])
BaseLogPartFctRef[189]['NbLabels']=2
BaseLogPartFctRef[189]['NbCliques']=703
BaseLogPartFctRef[189]['NbSites']=380
BaseLogPartFctRef[189]['StdNgbhDivMoyNgbh']=0.3573
BaseLogPartFctRef[190]={}
BaseLogPartFctRef[190]['LogPF']=np.array([467.9,499.4,531.7,564.7,598.6,633.3,668.8,705.2,742.6,780.9,820.3,860.8,902.7,946.0,991.0,1038.,1087.,1138.,1191.,1245.,1300.,1356.,1413.,1471.,1529.,1588.,1647.,1707.,1767.,1827.])
BaseLogPartFctRef[190]['NbLabels']=2
BaseLogPartFctRef[190]['NbCliques']=1244
BaseLogPartFctRef[190]['NbSites']=675
BaseLogPartFctRef[190]['StdNgbhDivMoyNgbh']=0.3489
BaseLogPartFctRef[191]={}
BaseLogPartFctRef[191]['LogPF']=np.array([519.2,553.9,589.5,625.8,663.2,701.5,740.7,780.8,821.9,864.1,907.5,952.2,998.2,1046.,1095.,1147.,1201.,1257.,1315.,1374.,1435.,1497.,1559.,1623.,1687.,1752.,1817.,1883.,1949.,2015.])
BaseLogPartFctRef[191]['NbLabels']=2
BaseLogPartFctRef[191]['NbCliques']=1371
BaseLogPartFctRef[191]['NbSites']=749
BaseLogPartFctRef[191]['StdNgbhDivMoyNgbh']=0.3458
BaseLogPartFctRef[192]={}
BaseLogPartFctRef[192]['LogPF']=np.array([434.6,462.4,490.8,520.0,549.9,580.4,611.7,643.8,676.6,710.4,745.0,780.6,817.3,855.1,894.2,934.8,976.9,1020.,1065.,1112.,1159.,1207.,1256.,1306.,1357.,1408.,1459.,1511.,1564.,1616.])
BaseLogPartFctRef[192]['NbLabels']=2
BaseLogPartFctRef[192]['NbCliques']=1097
BaseLogPartFctRef[192]['NbSites']=627
BaseLogPartFctRef[192]['StdNgbhDivMoyNgbh']=0.3576
BaseLogPartFctRef[193]={}
BaseLogPartFctRef[193]['LogPF']=np.array([264.8,283.0,301.6,320.6,340.2,360.2,380.7,401.7,423.3,445.5,468.2,491.7,516.0,541.4,567.8,595.3,624.2,653.9,684.5,715.9,748.0,780.6,813.7,847.1,880.9,914.9,949.2,983.7,1018.,1053.])
BaseLogPartFctRef[193]['NbLabels']=2
BaseLogPartFctRef[193]['NbCliques']=717
BaseLogPartFctRef[193]['NbSites']=382
BaseLogPartFctRef[193]['StdNgbhDivMoyNgbh']=0.3624
BaseLogPartFctRef[194]={}
BaseLogPartFctRef[194]['LogPF']=np.array([480.4,513.2,546.8,581.3,616.6,652.7,689.7,727.7,766.7,806.7,848.0,890.5,934.5,980.4,1028.,1078.,1131.,1185.,1241.,1298.,1356.,1415.,1475.,1536.,1597.,1659.,1720.,1783.,1845.,1908.])
BaseLogPartFctRef[194]['NbLabels']=2
BaseLogPartFctRef[194]['NbCliques']=1297
BaseLogPartFctRef[194]['NbSites']=693
BaseLogPartFctRef[194]['StdNgbhDivMoyNgbh']=0.3674
BaseLogPartFctRef[195]={}
BaseLogPartFctRef[195]['LogPF']=np.array([228.7,242.7,257.0,271.7,286.7,302.1,317.9,334.0,350.5,367.4,384.8,402.7,420.9,439.7,459.2,479.2,499.9,521.3,543.3,566.1,589.4,613.2,637.5,662.3,687.4,712.9,738.5,764.4,790.5,816.7])
BaseLogPartFctRef[195]['NbLabels']=2
BaseLogPartFctRef[195]['NbCliques']=552
BaseLogPartFctRef[195]['NbSites']=330
BaseLogPartFctRef[195]['StdNgbhDivMoyNgbh']=0.3641
BaseLogPartFctRef[196]={}
BaseLogPartFctRef[196]['LogPF']=np.array([406.9,433.7,461.2,489.4,518.2,547.8,578.0,609.0,640.7,673.4,706.9,741.3,776.9,813.5,851.5,891.0,932.2,974.7,1019.,1064.,1111.,1158.,1207.,1255.,1305.,1355.,1405.,1456.,1507.,1558.])
BaseLogPartFctRef[196]['NbLabels']=2
BaseLogPartFctRef[196]['NbCliques']=1060
BaseLogPartFctRef[196]['NbSites']=587
BaseLogPartFctRef[196]['StdNgbhDivMoyNgbh']=0.3567
BaseLogPartFctRef[197]={}
BaseLogPartFctRef[197]['LogPF']=np.array([381.2,406.3,431.9,458.2,485.1,512.6,540.8,569.7,599.3,629.7,660.9,693.0,726.1,760.3,795.6,832.3,870.4,910.0,951.0,993.2,1036.,1080.,1125.,1171.,1217.,1263.,1310.,1357.,1404.,1452.])
BaseLogPartFctRef[197]['NbLabels']=2
BaseLogPartFctRef[197]['NbCliques']=988
BaseLogPartFctRef[197]['NbSites']=550
BaseLogPartFctRef[197]['StdNgbhDivMoyNgbh']=0.3431
BaseLogPartFctRef[198]={}
BaseLogPartFctRef[198]['LogPF']=np.array([544.1,580.1,617.0,654.8,693.6,733.2,773.9,815.4,858.0,901.7,946.5,992.7,1040.,1089.,1140.,1193.,1248.,1305.,1364.,1425.,1488.,1552.,1617.,1683.,1749.,1816.,1884.,1952.,2020.,2089.])
BaseLogPartFctRef[198]['NbLabels']=2
BaseLogPartFctRef[198]['NbCliques']=1423
BaseLogPartFctRef[198]['NbSites']=785
BaseLogPartFctRef[198]['StdNgbhDivMoyNgbh']=0.3462
BaseLogPartFctRef[199]={}
BaseLogPartFctRef[199]['LogPF']=np.array([262.7,279.9,297.6,315.7,334.2,353.2,372.7,392.6,413.0,433.9,455.5,477.7,500.4,524.1,548.5,574.1,600.5,627.9,656.1,685.2,714.9,745.3,776.2,807.4,839.0,871.0,903.2,935.5,968.1,1001.])
BaseLogPartFctRef[199]['NbLabels']=2
BaseLogPartFctRef[199]['NbCliques']=681
BaseLogPartFctRef[199]['NbSites']=379
BaseLogPartFctRef[199]['StdNgbhDivMoyNgbh']=0.3542
BaseLogPartFctRef[200]={}
BaseLogPartFctRef[200]['LogPF']=np.array([150.4,159.8,169.4,179.3,189.4,199.7,210.3,221.1,232.2,243.6,255.3,267.3,279.6,292.4,305.6,319.1,333.2,347.7,362.6,378.1,393.8,410.0,426.6,443.4,460.4,477.6,494.9,512.5,530.1,547.8])
BaseLogPartFctRef[200]['NbLabels']=2
BaseLogPartFctRef[200]['NbCliques']=371
BaseLogPartFctRef[200]['NbSites']=217
BaseLogPartFctRef[200]['StdNgbhDivMoyNgbh']=0.3551
BaseLogPartFctRef[201]={}
BaseLogPartFctRef[201]['LogPF']=np.array([286.3,305.2,324.6,344.5,364.8,385.7,407.0,428.9,451.4,474.4,498.1,522.4,547.6,573.5,600.8,629.2,658.7,689.5,720.9,753.3,786.3,819.8,853.9,888.4,923.2,958.3,993.7,1029.,1065.,1101.])
BaseLogPartFctRef[201]['NbLabels']=2
BaseLogPartFctRef[201]['NbCliques']=748
BaseLogPartFctRef[201]['NbSites']=413
BaseLogPartFctRef[201]['StdNgbhDivMoyNgbh']=0.3603
BaseLogPartFctRef[202]={}
BaseLogPartFctRef[202]['LogPF']=np.array([490.1,522.9,556.7,591.2,626.5,662.8,699.9,738.0,776.9,817.0,858.1,900.5,944.0,989.1,1036.,1084.,1135.,1187.,1242.,1298.,1355.,1413.,1473.,1533.,1594.,1656.,1718.,1780.,1843.,1906.])
BaseLogPartFctRef[202]['NbLabels']=2
BaseLogPartFctRef[202]['NbCliques']=1300
BaseLogPartFctRef[202]['NbSites']=707
BaseLogPartFctRef[202]['StdNgbhDivMoyNgbh']=0.3424
BaseLogPartFctRef[203]={}
BaseLogPartFctRef[203]['LogPF']=np.array([433.9,461.4,489.6,518.5,548.1,578.3,609.4,641.2,673.7,707.1,741.4,776.6,812.9,850.3,889.1,929.2,970.8,1014.,1058.,1104.,1151.,1198.,1247.,1296.,1346.,1397.,1448.,1499.,1551.,1603.])
BaseLogPartFctRef[203]['NbLabels']=2
BaseLogPartFctRef[203]['NbCliques']=1087
BaseLogPartFctRef[203]['NbSites']=626
BaseLogPartFctRef[203]['StdNgbhDivMoyNgbh']=0.3691
BaseLogPartFctRef[204]={}
BaseLogPartFctRef[204]['LogPF']=np.array([287.7,306.4,325.5,345.1,365.2,385.8,406.8,428.4,450.5,473.2,496.6,520.7,545.5,571.2,598.0,625.7,654.5,684.3,715.0,746.4,778.5,811.2,844.6,878.3,912.4,946.9,981.8,1017.,1052.,1088.])
BaseLogPartFctRef[204]['NbLabels']=2
BaseLogPartFctRef[204]['NbCliques']=738
BaseLogPartFctRef[204]['NbSites']=415
BaseLogPartFctRef[204]['StdNgbhDivMoyNgbh']=0.3716
BaseLogPartFctRef[205]={}
BaseLogPartFctRef[205]['LogPF']=np.array([294.6,314.1,334.1,354.5,375.6,397.0,419.0,441.5,464.5,488.3,512.7,537.8,563.8,590.7,618.6,647.9,678.3,709.6,741.9,775.1,808.9,843.4,878.3,913.8,949.6,985.7,1022.,1059.,1096.,1133.])
BaseLogPartFctRef[205]['NbLabels']=2
BaseLogPartFctRef[205]['NbCliques']=770
BaseLogPartFctRef[205]['NbSites']=425
BaseLogPartFctRef[205]['StdNgbhDivMoyNgbh']=0.3628
BaseLogPartFctRef[206]={}
BaseLogPartFctRef[206]['LogPF']=np.array([334.1,356.8,380.0,403.8,428.1,453.0,478.6,504.8,531.7,559.3,587.6,616.7,646.9,678.1,710.3,743.9,779.0,815.4,853.2,892.2,932.0,972.6,1014.,1056.,1098.,1140.,1183.,1226.,1270.,1313.])
BaseLogPartFctRef[206]['NbLabels']=2
BaseLogPartFctRef[206]['NbCliques']=895
BaseLogPartFctRef[206]['NbSites']=482
BaseLogPartFctRef[206]['StdNgbhDivMoyNgbh']=0.3485
BaseLogPartFctRef[207]={}
BaseLogPartFctRef[207]['LogPF']=np.array([627.3,669.2,712.2,756.2,801.3,847.4,894.7,943.2,992.8,1044.,1096.,1150.,1205.,1263.,1322.,1384.,1448.,1515.,1584.,1655.,1728.,1802.,1878.,1955.,2032.,2110.,2189.,2268.,2348.,2428.])
BaseLogPartFctRef[207]['NbLabels']=2
BaseLogPartFctRef[207]['NbCliques']=1656
BaseLogPartFctRef[207]['NbSites']=905
BaseLogPartFctRef[207]['StdNgbhDivMoyNgbh']=0.3428
BaseLogPartFctRef[208]={}
BaseLogPartFctRef[208]['LogPF']=np.array([339.6,362.1,385.1,408.7,432.8,457.5,482.9,508.8,535.4,562.7,590.7,619.4,649.0,679.5,711.3,744.3,778.6,814.3,851.3,889.5,928.4,968.2,1009.,1049.,1091.,1133.,1175.,1217.,1260.,1302.])
BaseLogPartFctRef[208]['NbLabels']=2
BaseLogPartFctRef[208]['NbCliques']=887
BaseLogPartFctRef[208]['NbSites']=490
BaseLogPartFctRef[208]['StdNgbhDivMoyNgbh']=0.3409
BaseLogPartFctRef[209]={}
BaseLogPartFctRef[209]['LogPF']=np.array([333.4,356.1,379.3,403.0,427.4,452.3,477.9,504.1,530.9,558.5,586.8,616.0,646.2,677.5,710.2,744.3,779.8,816.7,854.8,894.1,934.2,974.8,1016.,1058.,1100.,1142.,1185.,1228.,1271.,1314.])
BaseLogPartFctRef[209]['NbLabels']=2
BaseLogPartFctRef[209]['NbCliques']=895
BaseLogPartFctRef[209]['NbSites']=481
BaseLogPartFctRef[209]['StdNgbhDivMoyNgbh']=0.3583
#non-regular grids: Graphes15_05.pickle
BaseLogPartFctRef[210]={}
BaseLogPartFctRef[210]['LogPF']=np.array([2086.,2284.,2487.,2694.,2907.,3125.,3349.,3579.,3817.,4070.,4350.,4649.,4971.,5308.,5659.,6020.,6388.,6761.,7138.,7518.,7901.,8285.,8670.,9057.,9444.,9832.,10220.,10609.,10998.,11387.])
BaseLogPartFctRef[210]['NbLabels']=2
BaseLogPartFctRef[210]['NbCliques']=7810
BaseLogPartFctRef[210]['NbSites']=3010
BaseLogPartFctRef[210]['StdNgbhDivMoyNgbh']=0.1767
BaseLogPartFctRef[211]={}
BaseLogPartFctRef[211]['LogPF']=np.array([1987.,2171.,2360.,2554.,2753.,2956.,3165.,3380.,3602.,3836.,4095.,4368.,4663.,4974.,5298.,5631.,5972.,6318.,6668.,7021.,7377.,7735.,8094.,8454.,8814.,9176.,9538.,9900.,10263.,10626.])
BaseLogPartFctRef[211]['NbLabels']=2
BaseLogPartFctRef[211]['NbCliques']=7288
BaseLogPartFctRef[211]['NbSites']=2866
BaseLogPartFctRef[211]['StdNgbhDivMoyNgbh']=0.1894
BaseLogPartFctRef[212]={}
BaseLogPartFctRef[212]['LogPF']=np.array([2013.,2199.,2389.,2584.,2784.,2988.,3199.,3415.,3638.,3871.,4129.,4407.,4702.,5012.,5337.,5672.,6015.,6363.,6715.,7070.,7428.,7787.,8148.,8511.,8874.,9237.,9602.,9966.,10332.,10697.])
BaseLogPartFctRef[212]['NbLabels']=2
BaseLogPartFctRef[212]['NbCliques']=7335
BaseLogPartFctRef[212]['NbSites']=2904
BaseLogPartFctRef[212]['StdNgbhDivMoyNgbh']=0.1867
BaseLogPartFctRef[213]={}
BaseLogPartFctRef[213]['LogPF']=np.array([2043.,2233.,2427.,2627.,2832.,3041.,3257.,3478.,3707.,3952.,4217.,4501.,4807.,5127.,5461.,5805.,6156.,6513.,6874.,7239.,7605.,7974.,8344.,8715.,9086.,9459.,9832.,10205.,10579.,10953.])
BaseLogPartFctRef[213]['NbLabels']=2
BaseLogPartFctRef[213]['NbCliques']=7509
BaseLogPartFctRef[213]['NbSites']=2947
BaseLogPartFctRef[213]['StdNgbhDivMoyNgbh']=0.1871
BaseLogPartFctRef[214]={}
BaseLogPartFctRef[214]['LogPF']=np.array([2050.,2242.,2438.,2639.,2845.,3056.,3273.,3496.,3726.,3970.,4236.,4527.,4836.,5161.,5498.,5846.,6201.,6561.,6926.,7293.,7663.,8034.,8407.,8781.,9156.,9531.,9907.,10284.,10660.,11037.])
BaseLogPartFctRef[214]['NbLabels']=2
BaseLogPartFctRef[214]['NbCliques']=7564
BaseLogPartFctRef[214]['NbSites']=2958
BaseLogPartFctRef[214]['StdNgbhDivMoyNgbh']=0.1783
BaseLogPartFctRef[215]={}
BaseLogPartFctRef[215]['LogPF']=np.array([2028.,2218.,2412.,2612.,2816.,3025.,3240.,3461.,3690.,3934.,4197.,4483.,4788.,5109.,5443.,5787.,6138.,6495.,6857.,7221.,7587.,7955.,8324.,8695.,9066.,9438.,9811.,10184.,10557.,10931.])
BaseLogPartFctRef[215]['NbLabels']=2
BaseLogPartFctRef[215]['NbCliques']=7496
BaseLogPartFctRef[215]['NbSites']=2926
BaseLogPartFctRef[215]['StdNgbhDivMoyNgbh']=0.1825
BaseLogPartFctRef[216]={}
BaseLogPartFctRef[216]['LogPF']=np.array([2044.,2234.,2429.,2628.,2832.,3042.,3257.,3478.,3706.,3948.,4211.,4497.,4803.,5123.,5457.,5801.,6152.,6509.,6870.,7234.,7600.,7969.,8338.,8709.,9080.,9452.,9825.,10198.,10572.,10945.])
BaseLogPartFctRef[216]['NbLabels']=2
BaseLogPartFctRef[216]['NbCliques']=7504
BaseLogPartFctRef[216]['NbSites']=2949
BaseLogPartFctRef[216]['StdNgbhDivMoyNgbh']=0.1890
BaseLogPartFctRef[217]={}
BaseLogPartFctRef[217]['LogPF']=np.array([2076.,2271.,2471.,2675.,2885.,3101.,3321.,3549.,3783.,4032.,4306.,4603.,4919.,5250.,5594.,5948.,6310.,6677.,7048.,7423.,7800.,8178.,8558.,8939.,9321.,9703.,10086.,10469.,10853.,11237.])
BaseLogPartFctRef[217]['NbLabels']=2
BaseLogPartFctRef[217]['NbCliques']=7703
BaseLogPartFctRef[217]['NbSites']=2995
BaseLogPartFctRef[217]['StdNgbhDivMoyNgbh']=0.1787
BaseLogPartFctRef[218]={}
BaseLogPartFctRef[218]['LogPF']=np.array([2070.,2264.,2464.,2668.,2877.,3092.,3312.,3538.,3772.,4023.,4298.,4597.,4911.,5243.,5587.,5940.,6301.,6667.,7037.,7411.,7786.,8164.,8543.,8923.,9304.,9685.,10067.,10449.,10832.,11215.])
BaseLogPartFctRef[218]['NbLabels']=2
BaseLogPartFctRef[218]['NbCliques']=7683
BaseLogPartFctRef[218]['NbSites']=2986
BaseLogPartFctRef[218]['StdNgbhDivMoyNgbh']=0.1788
BaseLogPartFctRef[219]={}
BaseLogPartFctRef[219]['LogPF']=np.array([2069.,2263.,2461.,2665.,2873.,3087.,3306.,3531.,3765.,4015.,4283.,4574.,4885.,5214.,5555.,5906.,6265.,6629.,6998.,7369.,7743.,8119.,8496.,8874.,9253.,9633.,10013.,10394.,10775.,11156.])
BaseLogPartFctRef[219]['NbLabels']=2
BaseLogPartFctRef[219]['NbCliques']=7650
BaseLogPartFctRef[219]['NbSites']=2985
BaseLogPartFctRef[219]['StdNgbhDivMoyNgbh']=0.1807
BaseLogPartFctRef[220]={}
BaseLogPartFctRef[220]['LogPF']=np.array([2023.,2210.,2402.,2599.,2801.,3008.,3220.,3439.,3664.,3902.,4165.,4447.,4745.,5061.,5389.,5727.,6073.,6423.,6779.,7138.,7499.,7862.,8226.,8592.,8958.,9326.,9693.,10062.,10430.,10799.])
BaseLogPartFctRef[220]['NbLabels']=2
BaseLogPartFctRef[220]['NbCliques']=7408
BaseLogPartFctRef[220]['NbSites']=2918
BaseLogPartFctRef[220]['StdNgbhDivMoyNgbh']=0.1983
BaseLogPartFctRef[221]={}
BaseLogPartFctRef[221]['LogPF']=np.array([2014.,2202.,2393.,2590.,2791.,2998.,3210.,3428.,3654.,3891.,4153.,4437.,4738.,5054.,5383.,5722.,6068.,6419.,6775.,7134.,7496.,7859.,8224.,8589.,8956.,9323.,9691.,10059.,10428.,10796.])
BaseLogPartFctRef[221]['NbLabels']=2
BaseLogPartFctRef[221]['NbCliques']=7401
BaseLogPartFctRef[221]['NbSites']=2906
BaseLogPartFctRef[221]['StdNgbhDivMoyNgbh']=0.1820
BaseLogPartFctRef[222]={}
BaseLogPartFctRef[222]['LogPF']=np.array([1992.,2176.,2365.,2559.,2757.,2961.,3170.,3385.,3607.,3840.,4096.,4371.,4664.,4975.,5298.,5631.,5972.,6318.,6668.,7021.,7376.,7733.,8092.,8452.,8813.,9174.,9536.,9898.,10261.,10624.])
BaseLogPartFctRef[222]['NbLabels']=2
BaseLogPartFctRef[222]['NbCliques']=7285
BaseLogPartFctRef[222]['NbSites']=2874
BaseLogPartFctRef[222]['StdNgbhDivMoyNgbh']=0.1896
BaseLogPartFctRef[223]={}
BaseLogPartFctRef[223]['LogPF']=np.array([2059.,2253.,2451.,2654.,2862.,3075.,3294.,3519.,3752.,3998.,4271.,4563.,4875.,5202.,5543.,5894.,6252.,6616.,6984.,7355.,7728.,8104.,8480.,8858.,9236.,9615.,9995.,10375.,10755.,11136.])
BaseLogPartFctRef[223]['NbLabels']=2
BaseLogPartFctRef[223]['NbCliques']=7636
BaseLogPartFctRef[223]['NbSites']=2971
BaseLogPartFctRef[223]['StdNgbhDivMoyNgbh']=0.1798
BaseLogPartFctRef[224]={}
BaseLogPartFctRef[224]['LogPF']=np.array([2071.,2265.,2463.,2667.,2875.,3089.,3308.,3534.,3767.,4014.,4283.,4578.,4890.,5219.,5560.,5911.,6270.,6635.,7003.,7375.,7749.,8125.,8502.,8880.,9259.,9639.,10019.,10400.,10781.,11163.])
BaseLogPartFctRef[224]['NbLabels']=2
BaseLogPartFctRef[224]['NbCliques']=7652
BaseLogPartFctRef[224]['NbSites']=2988
BaseLogPartFctRef[224]['StdNgbhDivMoyNgbh']=0.1819
BaseLogPartFctRef[225]={}
BaseLogPartFctRef[225]['LogPF']=np.array([2053.,2245.,2442.,2644.,2851.,3064.,3281.,3506.,3737.,3983.,4254.,4545.,4855.,5182.,5521.,5870.,6227.,6588.,6954.,7324.,7695.,8068.,8443.,8818.,9195.,9572.,9949.,10327.,10706.,11085.])
BaseLogPartFctRef[225]['NbLabels']=2
BaseLogPartFctRef[225]['NbCliques']=7599
BaseLogPartFctRef[225]['NbSites']=2962
BaseLogPartFctRef[225]['StdNgbhDivMoyNgbh']=0.1821
BaseLogPartFctRef[226]={}
BaseLogPartFctRef[226]['LogPF']=np.array([2075.,2269.,2468.,2673.,2882.,3097.,3317.,3543.,3778.,4029.,4302.,4597.,4909.,5239.,5581.,5934.,6295.,6661.,7032.,7405.,7781.,8158.,8537.,8917.,9298.,9680.,10062.,10444.,10827.,11210.])
BaseLogPartFctRef[226]['NbLabels']=2
BaseLogPartFctRef[226]['NbCliques']=7685
BaseLogPartFctRef[226]['NbSites']=2993
BaseLogPartFctRef[226]['StdNgbhDivMoyNgbh']=0.1803
BaseLogPartFctRef[227]={}
BaseLogPartFctRef[227]['LogPF']=np.array([2066.,2260.,2459.,2663.,2872.,3086.,3306.,3532.,3766.,4014.,4288.,4582.,4896.,5227.,5570.,5923.,6283.,6649.,7018.,7391.,7766.,8143.,8522.,8901.,9281.,9662.,10044.,10425.,10808.,11190.])
BaseLogPartFctRef[227]['NbLabels']=2
BaseLogPartFctRef[227]['NbCliques']=7673
BaseLogPartFctRef[227]['NbSites']=2980
BaseLogPartFctRef[227]['StdNgbhDivMoyNgbh']=0.1807
BaseLogPartFctRef[228]={}
BaseLogPartFctRef[228]['LogPF']=np.array([2075.,2270.,2470.,2676.,2886.,3101.,3323.,3550.,3786.,4041.,4316.,4614.,4930.,5261.,5607.,5963.,6325.,6693.,7065.,7440.,7818.,8197.,8577.,8959.,9342.,9725.,10108.,10492.,10877.,11262.])
BaseLogPartFctRef[228]['NbLabels']=2
BaseLogPartFctRef[228]['NbCliques']=7718
BaseLogPartFctRef[228]['NbSites']=2993
BaseLogPartFctRef[228]['StdNgbhDivMoyNgbh']=0.1810
BaseLogPartFctRef[229]={}
BaseLogPartFctRef[229]['LogPF']=np.array([2049.,2240.,2436.,2637.,2843.,3055.,3271.,3494.,3725.,3971.,4239.,4529.,4835.,5158.,5495.,5841.,6195.,6555.,6919.,7286.,7655.,8026.,8399.,8773.,9147.,9522.,9898.,10274.,10651.,11027.])
BaseLogPartFctRef[229]['NbLabels']=2
BaseLogPartFctRef[229]['NbCliques']=7560
BaseLogPartFctRef[229]['NbSites']=2956
BaseLogPartFctRef[229]['StdNgbhDivMoyNgbh']=0.1824
BaseLogPartFctRef[230]={}
BaseLogPartFctRef[230]['LogPF']=np.array([2041.,2231.,2427.,2628.,2834.,3044.,3261.,3483.,3713.,3955.,4226.,4512.,4821.,5146.,5483.,5830.,6184.,6544.,6907.,7274.,7643.,8014.,8386.,8759.,9133.,9508.,9883.,10259.,10635.,11011.])
BaseLogPartFctRef[230]['NbLabels']=2
BaseLogPartFctRef[230]['NbCliques']=7546
BaseLogPartFctRef[230]['NbSites']=2944
BaseLogPartFctRef[230]['StdNgbhDivMoyNgbh']=0.1754
BaseLogPartFctRef[231]={}
BaseLogPartFctRef[231]['LogPF']=np.array([2075.,2270.,2470.,2676.,2886.,3102.,3323.,3550.,3785.,4036.,4313.,4612.,4929.,5262.,5608.,5964.,6327.,6695.,7067.,7442.,7820.,8199.,8580.,8962.,9344.,9727.,10111.,10495.,10880.,11265.])
BaseLogPartFctRef[231]['NbLabels']=2
BaseLogPartFctRef[231]['NbCliques']=7720
BaseLogPartFctRef[231]['NbSites']=2993
BaseLogPartFctRef[231]['StdNgbhDivMoyNgbh']=0.1826
BaseLogPartFctRef[232]={}
BaseLogPartFctRef[232]['LogPF']=np.array([2065.,2258.,2456.,2659.,2867.,3080.,3299.,3524.,3757.,4002.,4273.,4564.,4875.,5201.,5542.,5893.,6250.,6614.,6981.,7352.,7725.,8100.,8476.,8853.,9231.,9610.,9990.,10370.,10750.,11130.])
BaseLogPartFctRef[232]['NbLabels']=2
BaseLogPartFctRef[232]['NbCliques']=7633
BaseLogPartFctRef[232]['NbSites']=2979
BaseLogPartFctRef[232]['StdNgbhDivMoyNgbh']=0.1800
BaseLogPartFctRef[233]={}
BaseLogPartFctRef[233]['LogPF']=np.array([2031.,2219.,2412.,2610.,2813.,3021.,3235.,3454.,3680.,3920.,4180.,4460.,4760.,5076.,5407.,5748.,6096.,6449.,6806.,7167.,7531.,7896.,8262.,8630.,8998.,9368.,9737.,10108.,10478.,10849.])
BaseLogPartFctRef[233]['NbLabels']=2
BaseLogPartFctRef[233]['NbCliques']=7442
BaseLogPartFctRef[233]['NbSites']=2930
BaseLogPartFctRef[233]['StdNgbhDivMoyNgbh']=0.1851
BaseLogPartFctRef[234]={}
BaseLogPartFctRef[234]['LogPF']=np.array([2073.,2268.,2468.,2673.,2883.,3098.,3320.,3547.,3782.,4035.,4313.,4610.,4925.,5257.,5602.,5958.,6320.,6688.,7060.,7435.,7813.,8192.,8572.,8954.,9336.,9719.,10103.,10486.,10871.,11255.])
BaseLogPartFctRef[234]['NbLabels']=2
BaseLogPartFctRef[234]['NbCliques']=7713
BaseLogPartFctRef[234]['NbSites']=2990
BaseLogPartFctRef[234]['StdNgbhDivMoyNgbh']=0.1761
BaseLogPartFctRef[235]={}
BaseLogPartFctRef[235]['LogPF']=np.array([2075.,2269.,2469.,2673.,2882.,3097.,3318.,3544.,3779.,4025.,4296.,4593.,4905.,5235.,5579.,5932.,6292.,6659.,7030.,7403.,7779.,8157.,8536.,8916.,9297.,9679.,10061.,10444.,10826.,11210.])
BaseLogPartFctRef[235]['NbLabels']=2
BaseLogPartFctRef[235]['NbCliques']=7686
BaseLogPartFctRef[235]['NbSites']=2993
BaseLogPartFctRef[235]['StdNgbhDivMoyNgbh']=0.1758
BaseLogPartFctRef[236]={}
BaseLogPartFctRef[236]['LogPF']=np.array([2070.,2266.,2466.,2672.,2883.,3099.,3321.,3550.,3786.,4039.,4316.,4614.,4931.,5266.,5613.,5971.,6335.,6705.,7079.,7455.,7834.,8215.,8597.,8980.,9363.,9748.,10133.,10518.,10904.,11289.])
BaseLogPartFctRef[236]['NbLabels']=2
BaseLogPartFctRef[236]['NbCliques']=7741
BaseLogPartFctRef[236]['NbSites']=2986
BaseLogPartFctRef[236]['StdNgbhDivMoyNgbh']=0.1784
BaseLogPartFctRef[237]={}
BaseLogPartFctRef[237]['LogPF']=np.array([2054.,2246.,2443.,2644.,2850.,3062.,3279.,3502.,3733.,3979.,4247.,4536.,4842.,5165.,5503.,5850.,6204.,6564.,6929.,7296.,7666.,8037.,8410.,8785.,9160.,9535.,9911.,10288.,10665.,11042.])
BaseLogPartFctRef[237]['NbLabels']=2
BaseLogPartFctRef[237]['NbCliques']=7569
BaseLogPartFctRef[237]['NbSites']=2964
BaseLogPartFctRef[237]['StdNgbhDivMoyNgbh']=0.1798
BaseLogPartFctRef[238]={}
BaseLogPartFctRef[238]['LogPF']=np.array([2076.,2272.,2472.,2678.,2889.,3105.,3326.,3554.,3790.,4044.,4318.,4618.,4933.,5266.,5612.,5968.,6331.,6700.,7073.,7449.,7828.,8208.,8589.,8972.,9355.,9739.,10124.,10509.,10894.,11279.])
BaseLogPartFctRef[238]['NbLabels']=2
BaseLogPartFctRef[238]['NbCliques']=7732
BaseLogPartFctRef[238]['NbSites']=2995
BaseLogPartFctRef[238]['StdNgbhDivMoyNgbh']=0.1768
BaseLogPartFctRef[239]={}
BaseLogPartFctRef[239]['LogPF']=np.array([2066.,2261.,2460.,2664.,2873.,3088.,3308.,3535.,3769.,4019.,4292.,4586.,4898.,5229.,5573.,5926.,6287.,6652.,7022.,7395.,7771.,8148.,8526.,8906.,9286.,9668.,10049.,10431.,10814.,11197.])
BaseLogPartFctRef[239]['NbLabels']=2
BaseLogPartFctRef[239]['NbCliques']=7679
BaseLogPartFctRef[239]['NbSites']=2981
BaseLogPartFctRef[239]['StdNgbhDivMoyNgbh']=0.1819
#regular grids: lines - squares - cubes (1-D chains, 2-D squares and 3-D cubes; see the illustrative check after entry 254)
BaseLogPartFctRef[240]={}
BaseLogPartFctRef[240]['LogPF']=np.array([693.1,718.2,743.9,770.3,797.6,825.2,853.6,882.7,912.4,942.8,973.6,1005.,1037.,1070.,1103.,1137.,1171.,1206.,1241.,1277.,1314.,1351.,1388.,1426.,1464.,1503.,1543.,1583.,1623.,1663.])
BaseLogPartFctRef[240]['NbLabels']=2
BaseLogPartFctRef[240]['NbCliques']=999
BaseLogPartFctRef[240]['NbSites']=1000
BaseLogPartFctRef[240]['StdNgbhDivMoyNgbh']=0.0223
BaseLogPartFctRef[241]={}
BaseLogPartFctRef[241]['LogPF']=np.array([693.1,761.6,831.6,903.3,977.0,1052.,1130.,1211.,1295.,1386.,1485.,1592.,1708.,1828.,1953.,2080.,2209.,2340.,2471.,2603.,2736.,2870.,3004.,3138.,3272.,3407.,3541.,3676.,3811.,3946.])
BaseLogPartFctRef[241]['NbLabels']=2
BaseLogPartFctRef[241]['NbCliques']=2700
BaseLogPartFctRef[241]['NbSites']=1000
BaseLogPartFctRef[241]['StdNgbhDivMoyNgbh']=0.1282
BaseLogPartFctRef[242]={}
BaseLogPartFctRef[242]['LogPF']=np.array([1198.,1318.,1441.,1567.,1697.,1830.,1966.,2107.,2257.,2425.,2606.,2805.,3012.,3227.,3449.,3674.,3903.,4134.,4366.,4600.,4835.,5070.,5306.,5543.,5779.,6016.,6253.,6490.,6727.,6964.])
BaseLogPartFctRef[242]['NbLabels']=2
BaseLogPartFctRef[242]['NbCliques']=4752
BaseLogPartFctRef[242]['NbSites']=1728
BaseLogPartFctRef[242]['StdNgbhDivMoyNgbh']=0.1173
BaseLogPartFctRef[243]={}
BaseLogPartFctRef[243]['LogPF']=np.array([10830.,11614.,12417.,13239.,14084.,14952.,15845.,16763.,17713.,18690.,19705.,20756.,21857.,23002.,24195.,25449.,26770.,28140.,29560.,31007.,32472.,33955.,35452.,36961.,38477.,39999.,41528.,43061.,44598.,46135.])
BaseLogPartFctRef[243]['NbLabels']=2
BaseLogPartFctRef[243]['NbCliques']=31000
BaseLogPartFctRef[243]['NbSites']=15625
BaseLogPartFctRef[243]['StdNgbhDivMoyNgbh']=0.0447
BaseLogPartFctRef[244]={}
BaseLogPartFctRef[244]['LogPF']=np.array([2339.,2578.,2822.,3074.,3332.,3596.,3869.,4156.,4468.,4810.,5192.,5597.,6014.,6449.,6894.,7347.,7804.,8265.,8729.,9195.,9663.,10132.,10601.,11071.,11541.,12013.,12484.,12956.,13428.,13900.])
BaseLogPartFctRef[244]['NbLabels']=2
BaseLogPartFctRef[244]['NbCliques']=9450
BaseLogPartFctRef[244]['NbSites']=3375
BaseLogPartFctRef[244]['StdNgbhDivMoyNgbh']=0.1051
BaseLogPartFctRef[245]={}
BaseLogPartFctRef[245]['LogPF']=np.array([10830.,11227.,11632.,12049.,12475.,12913.,13359.,13818.,14285.,14763.,15254.,15754.,16263.,16785.,17315.,17853.,18401.,18959.,19524.,20101.,20686.,21278.,21875.,22482.,23097.,23719.,24348.,24982.,25627.,26275.])
BaseLogPartFctRef[245]['NbLabels']=2
BaseLogPartFctRef[245]['NbCliques']=15624
BaseLogPartFctRef[245]['NbSites']=15625
BaseLogPartFctRef[245]['StdNgbhDivMoyNgbh']=0.0057
BaseLogPartFctRef[246]={}
BaseLogPartFctRef[246]['LogPF']=np.array([1198.,1241.,1286.,1332.,1379.,1427.,1476.,1526.,1578.,1630.,1684.,1739.,1794.,1851.,1908.,1967.,2026.,2086.,2147.,2210.,2273.,2337.,2402.,2467.,2534.,2601.,2669.,2738.,2807.,2877.])
BaseLogPartFctRef[246]['NbLabels']=2
BaseLogPartFctRef[246]['NbCliques']=1727
BaseLogPartFctRef[246]['NbSites']=1728
BaseLogPartFctRef[246]['StdNgbhDivMoyNgbh']=0.0170
BaseLogPartFctRef[247]={}
BaseLogPartFctRef[247]['LogPF']=np.array([5545.,6122.,6715.,7322.,7946.,8585.,9250.,9949.,10720.,11592.,12535.,13531.,14569.,15629.,16710.,17808.,18917.,20035.,21158.,22285.,23414.,24546.,25680.,26815.,27952.,29090.,30228.,31367.,32505.,33645.])
BaseLogPartFctRef[247]['NbLabels']=2
BaseLogPartFctRef[247]['NbCliques']=22800
BaseLogPartFctRef[247]['NbSites']=8000
BaseLogPartFctRef[247]['StdNgbhDivMoyNgbh']=0.0911
BaseLogPartFctRef[248]={}
BaseLogPartFctRef[248]['LogPF']=np.array([10830.,11968.,13133.,14329.,15560.,16832.,18149.,19549.,21106.,22849.,24742.,26729.,28789.,30902.,33054.,35232.,37428.,39637.,41856.,44085.,46317.,48554.,50795.,53037.,55282.,57528.,59775.,62022.,64270.,66519.])
BaseLogPartFctRef[248]['NbLabels']=2
BaseLogPartFctRef[248]['NbCliques']=45000
BaseLogPartFctRef[248]['NbSites']=15625
BaseLogPartFctRef[248]['StdNgbhDivMoyNgbh']=0.0816
BaseLogPartFctRef[249]={}
BaseLogPartFctRef[249]['LogPF']=np.array([666.1,713.0,761.3,810.8,861.5,913.5,966.7,1021.,1077.,1134.,1193.,1254.,1317.,1382.,1449.,1518.,1591.,1667.,1746.,1828.,1914.,2000.,2088.,2177.,2266.,2357.,2447.,2539.,2631.,2723.])
BaseLogPartFctRef[249]['NbLabels']=2
BaseLogPartFctRef[249]['NbCliques']=1860
BaseLogPartFctRef[249]['NbSites']=961
BaseLogPartFctRef[249]['StdNgbhDivMoyNgbh']=0.0897
BaseLogPartFctRef[250]={}
BaseLogPartFctRef[250]['LogPF']=np.array([2339.,2425.,2512.,2603.,2694.,2788.,2884.,2982.,3083.,3185.,3290.,3396.,3505.,3616.,3729.,3843.,3960.,4079.,4199.,4320.,4444.,4570.,4698.,4828.,4958.,5089.,5223.,5357.,5494.,5631.])
BaseLogPartFctRef[250]['NbLabels']=2
BaseLogPartFctRef[250]['NbCliques']=3374
BaseLogPartFctRef[250]['NbSites']=3375
BaseLogPartFctRef[250]['StdNgbhDivMoyNgbh']=0.0122
BaseLogPartFctRef[251]={}
BaseLogPartFctRef[251]['LogPF']=np.array([1165.,1248.,1333.,1420.,1509.,1601.,1695.,1791.,1890.,1991.,2095.,2203.,2314.,2429.,2548.,2674.,2806.,2943.,3086.,3234.,3385.,3539.,3695.,3853.,4012.,4172.,4333.,4494.,4656.,4818.])
BaseLogPartFctRef[251]['NbLabels']=2
BaseLogPartFctRef[251]['NbCliques']=3280
BaseLogPartFctRef[251]['NbSites']=1681
BaseLogPartFctRef[251]['StdNgbhDivMoyNgbh']=0.0780
BaseLogPartFctRef[252]={}
BaseLogPartFctRef[252]['LogPF']=np.array([2332.,2499.,2671.,2846.,3026.,3210.,3400.,3594.,3794.,4000.,4213.,4433.,4661.,4895.,5142.,5403.,5674.,5955.,6250.,6553.,6861.,7173.,7491.,7811.,8133.,8456.,8781.,9107.,9434.,9761.])
BaseLogPartFctRef[252]['NbLabels']=2
BaseLogPartFctRef[252]['NbCliques']=6612
BaseLogPartFctRef[252]['NbSites']=3364
BaseLogPartFctRef[252]['StdNgbhDivMoyNgbh']=0.0656
BaseLogPartFctRef[253]={}
BaseLogPartFctRef[253]['LogPF']=np.array([5545.,5747.,5956.,6168.,6386.,6609.,6837.,7069.,7307.,7551.,7799.,8053.,8311.,8575.,8844.,9117.,9394.,9677.,9964.,10256.,10552.,10852.,11156.,11463.,11775.,12090.,12409.,12730.,13056.,13384.])
BaseLogPartFctRef[253]['NbLabels']=2
BaseLogPartFctRef[253]['NbCliques']=7999
BaseLogPartFctRef[253]['NbSites']=8000
BaseLogPartFctRef[253]['StdNgbhDivMoyNgbh']=0.0079
BaseLogPartFctRef[254]={}
BaseLogPartFctRef[254]['LogPF']=np.array([5490.,5888.,6294.,6711.,7139.,7577.,8028.,8489.,8964.,9455.,9963.,10495.,11038.,11610.,12207.,12837.,13496.,14183.,14893.,15616.,16352.,17099.,17852.,18611.,19376.,20143.,20914.,21689.,22464.,23241.])
BaseLogPartFctRef[254]['NbLabels']=2
BaseLogPartFctRef[254]['NbCliques']=15664
BaseLogPartFctRef[254]['NbSites']=7921
BaseLogPartFctRef[254]['StdNgbhDivMoyNgbh']=0.0530
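#Illustrative check (not part of the original tables; the shapes below are assumptions read off the NbSites/NbCliques values): the "regular grids" entries above
#look like nearest-neighbour lattices, so NbCliques follows directly from the lattice shape -- a 1-D chain of N sites has N-1 edges, an LxL square has 2*L*(L-1)
#edges and an LxLxL cube has 3*L*L*(L-1) edges. For example:
#  1000 - 1     = 999   = BaseLogPartFctRef[240]['NbCliques']   (assumed 1000-site chain)
#  2*125*124    = 31000 = BaseLogPartFctRef[243]['NbCliques']   (assumed 125x125 square)
#  3*25*25*24   = 45000 = BaseLogPartFctRef[248]['NbCliques']   (assumed 25x25x25 cube)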
#ComputeBaseLogPartFctRef_NonReg(FirstIndex=(255+0),NbExtraIndex=30,BetaMax=1.45,DeltaBeta=0.05,NbLabels=3,NBX=10,NBY=10,NBZ=10,BetaGeneration=0.2) -- the beta grid implied by BetaMax/DeltaBeta is illustrated in the sketch after entry 284
BaseLogPartFctRef[255 ]={}
BaseLogPartFctRef[255]['LogPF']=np.array([71.41,73.01,74.72,76.50,78.23,80.06,81.98,84.04,86.13,88.26,90.30,92.47,94.95,97.21,99.74,102.3,104.9,107.9,111.0,114.0,117.1,120.1,123.3,126.7,130.4,134.2,138.0,142.0,146.0,150.2])
BaseLogPartFctRef[ 255 ]['NbLabels']= 3
BaseLogPartFctRef[ 255 ]['NbCliques']= 96
BaseLogPartFctRef[ 255 ]['NbSites']= 65
BaseLogPartFctRef[ 255 ]['StdNgbhDivMoyNgbh']= 0.357726769776
BaseLogPartFctRef[ 256 ]={}
BaseLogPartFctRef[256]['LogPF']=np.array([32.96,33.70,34.47,35.25,36.05,36.87,37.72,38.60,39.52,40.50,41.49,42.56,43.62,44.72,45.88,47.08,48.31,49.59,50.87,52.31,53.77,55.33,56.95,58.62,60.30,61.95,63.72,65.55,67.39,69.26])
BaseLogPartFctRef[ 256 ]['NbLabels']= 3
BaseLogPartFctRef[ 256 ]['NbCliques']= 43
BaseLogPartFctRef[ 256 ]['NbSites']= 30
BaseLogPartFctRef[ 256 ]['StdNgbhDivMoyNgbh']= 0.405898164708
BaseLogPartFctRef[ 257 ]={}
BaseLogPartFctRef[257]['LogPF']=np.array([24.17,24.63,25.11,25.59,26.13,26.67,27.22,27.82,28.42,29.01,29.68,30.34,31.00,31.67,32.40,33.15,33.94,34.74,35.59,36.46,37.35,38.27,39.19,40.20,41.17,42.22,43.29,44.34,45.43,46.60])
BaseLogPartFctRef[ 257 ]['NbLabels']= 3
BaseLogPartFctRef[ 257 ]['NbCliques']= 28
BaseLogPartFctRef[ 257 ]['NbSites']= 22
BaseLogPartFctRef[ 257 ]['StdNgbhDivMoyNgbh']= 0.352639105663
BaseLogPartFctRef[ 258 ]={}
BaseLogPartFctRef[258]['LogPF']=np.array([18.68,19.04,19.41,19.78,20.18,20.62,21.08,21.55,22.02,22.52,23.01,23.52,24.07,24.58,25.15,25.78,26.40,27.05,27.69,28.38,29.06,29.80,30.55,31.31,32.10,32.93,33.78,34.68,35.60,36.48])
BaseLogPartFctRef[ 258 ]['NbLabels']= 3
BaseLogPartFctRef[ 258 ]['NbCliques']= 22
BaseLogPartFctRef[ 258 ]['NbSites']= 17
BaseLogPartFctRef[ 258 ]['StdNgbhDivMoyNgbh']= 0.414774747159
BaseLogPartFctRef[ 259 ]={}
BaseLogPartFctRef[259]['LogPF']=np.array([28.56,29.17,29.80,30.44,31.12,31.84,32.54,33.30,34.07,34.86,35.71,36.54,37.37,38.28,39.24,40.22,41.27,42.36,43.51,44.69,45.91,47.20,48.49,49.81,51.21,52.62,54.08,55.63,57.09,58.60])
BaseLogPartFctRef[ 259 ]['NbLabels']= 3
BaseLogPartFctRef[ 259 ]['NbCliques']= 36
BaseLogPartFctRef[ 259 ]['NbSites']= 26
BaseLogPartFctRef[ 259 ]['StdNgbhDivMoyNgbh']= 0.433976410504
BaseLogPartFctRef[ 260 ]={}
BaseLogPartFctRef[260]['LogPF']=np.array([40.65,41.50,42.37,43.26,44.18,45.13,46.15,47.21,48.26,49.41,50.52,51.70,52.89,54.14,55.44,56.79,58.17,59.56,61.13,62.71,64.21,65.97,67.69,69.47,71.31,73.24,75.22,77.21,79.28,81.33])
BaseLogPartFctRef[ 260 ]['NbLabels']= 3
BaseLogPartFctRef[ 260 ]['NbCliques']= 50
BaseLogPartFctRef[ 260 ]['NbSites']= 37
BaseLogPartFctRef[ 260 ]['StdNgbhDivMoyNgbh']= 0.400221998294
BaseLogPartFctRef[261 ]={}
BaseLogPartFctRef[261]['LogPF']=np.array([29.66,30.27,30.92,31.58,32.25,32.96,33.71,34.48,35.25,36.09,36.92,37.84,38.75,39.69,40.66,41.64,42.68,43.76,44.86,46.07,47.29,48.55,49.86,51.17,52.56,54.01,55.47,56.95,58.54,60.07])
BaseLogPartFctRef[ 261 ]['NbLabels']= 3
BaseLogPartFctRef[ 261 ]['NbCliques']= 37
BaseLogPartFctRef[ 261 ]['NbSites']= 27
BaseLogPartFctRef[ 261 ]['StdNgbhDivMoyNgbh']= 0.401087998191
BaseLogPartFctRef[262 ]={}
BaseLogPartFctRef[262]['LogPF']=np.array([42.85,43.63,44.46,45.36,46.28,47.20,48.17,49.18,50.21,51.25,52.33,53.43,54.61,55.82,57.10,58.39,59.74,61.03,62.41,63.81,65.31,66.85,68.44,70.07,71.76,73.44,75.17,76.89,78.64,80.45])
BaseLogPartFctRef[ 262 ]['NbLabels']= 3
BaseLogPartFctRef[ 262 ]['NbCliques']= 48
BaseLogPartFctRef[ 262 ]['NbSites']= 39
BaseLogPartFctRef[ 262 ]['StdNgbhDivMoyNgbh']= 0.367913258811
BaseLogPartFctRef[263 ]={}
BaseLogPartFctRef[263]['LogPF']=np.array([37.35,38.18,39.02,39.88,40.77,41.70,42.68,43.68,44.71,45.78,46.92,48.05,49.21,50.47,51.76,53.06,54.47,55.90,57.39,59.03,60.57,62.21,63.98,65.79,67.55,69.39,71.34,73.24,75.26,77.30])
BaseLogPartFctRef[ 263 ]['NbLabels']= 3
BaseLogPartFctRef[ 263 ]['NbCliques']= 48
BaseLogPartFctRef[ 263 ]['NbSites']= 34
BaseLogPartFctRef[ 263 ]['StdNgbhDivMoyNgbh']= 0.367881359164
BaseLogPartFctRef[264 ]={}
BaseLogPartFctRef[264]['LogPF']=np.array([38.45,39.27,40.13,41.00,41.87,42.84,43.86,44.86,45.91,47.01,48.13,49.30,50.51,51.79,53.04,54.41,55.81,57.23,58.77,60.41,62.10,63.77,65.56,67.33,69.16,71.09,73.08,75.09,77.19,79.23])
BaseLogPartFctRef[ 264 ]['NbLabels']= 3
BaseLogPartFctRef[ 264 ]['NbCliques']= 49
BaseLogPartFctRef[ 264 ]['NbSites']= 35
BaseLogPartFctRef[ 264 ]['StdNgbhDivMoyNgbh']= 0.437276577262
BaseLogPartFctRef[265 ]={}
BaseLogPartFctRef[265]['LogPF']=np.array([25.27,25.70,26.15,26.63,27.10,27.60,28.12,28.64,29.18,29.75,30.32,30.92,31.57,32.23,32.88,33.60,34.30,35.01,35.79,36.58,37.32,38.14,38.98,39.83,40.65,41.53,42.45,43.38,44.37,45.29])
BaseLogPartFctRef[ 265 ]['NbLabels']= 3
BaseLogPartFctRef[ 265 ]['NbCliques']= 26
BaseLogPartFctRef[ 265 ]['NbSites']= 23
BaseLogPartFctRef[ 265 ]['StdNgbhDivMoyNgbh']= 0.417846099022
BaseLogPartFctRef[266 ]={}
BaseLogPartFctRef[266]['LogPF']=np.array([30.76,31.43,32.09,32.77,33.51,34.27,35.04,35.82,36.63,37.47,38.28,39.17,40.10,41.08,42.07,43.13,44.12,45.24,46.37,47.57,48.83,50.09,51.39,52.74,54.17,55.67,57.14,58.70,60.24,61.78])
BaseLogPartFctRef[ 266 ]['NbLabels']= 3
BaseLogPartFctRef[ 266 ]['NbCliques']= 38
BaseLogPartFctRef[ 266 ]['NbSites']= 28
BaseLogPartFctRef[ 266 ]['StdNgbhDivMoyNgbh']= 0.412310219784
BaseLogPartFctRef[267 ]={}
BaseLogPartFctRef[267]['LogPF']=np.array([26.37,26.93,27.53,28.17,28.81,29.47,30.12,30.84,31.56,32.31,33.04,33.86,34.71,35.55,36.45,37.40,38.33,39.35,40.37,41.47,42.58,43.75,45.01,46.26,47.52,48.83,50.16,51.57,52.99,54.45])
BaseLogPartFctRef[ 267 ]['NbLabels']= 3
BaseLogPartFctRef[ 267 ]['NbCliques']= 34
BaseLogPartFctRef[ 267 ]['NbSites']= 24
BaseLogPartFctRef[ 267 ]['StdNgbhDivMoyNgbh']= 0.34750373056
BaseLogPartFctRef[268 ]={}
BaseLogPartFctRef[268]['LogPF']=np.array([23.07,23.53,23.99,24.44,24.93,25.42,25.92,26.44,26.98,27.53,28.13,28.76,29.39,30.07,30.73,31.41,32.10,32.88,33.62,34.38,35.23,36.05,36.90,37.82,38.73,39.66,40.66,41.67,42.68,43.68])
BaseLogPartFctRef[ 268 ]['NbLabels']= 3
BaseLogPartFctRef[ 268 ]['NbCliques']= 26
BaseLogPartFctRef[ 268 ]['NbSites']= 21
BaseLogPartFctRef[ 268 ]['StdNgbhDivMoyNgbh']= 0.348466988836
BaseLogPartFctRef[269 ]={}
BaseLogPartFctRef[269]['LogPF']=np.array([36.25,37.01,37.78,38.56,39.37,40.22,41.11,42.02,42.97,43.99,45.00,46.02,47.11,48.19,49.35,50.50,51.67,52.91,54.27,55.67,57.06,58.50,59.98,61.52,63.14,64.79,66.47,68.15,69.86,71.66])
BaseLogPartFctRef[ 269 ]['NbLabels']= 3
BaseLogPartFctRef[ 269 ]['NbCliques']= 44
BaseLogPartFctRef[ 269 ]['NbSites']= 33
BaseLogPartFctRef[ 269 ]['StdNgbhDivMoyNgbh']= 0.416697970274
BaseLogPartFctRef[270 ]={}
BaseLogPartFctRef[270]['LogPF']=np.array([35.16,35.90,36.66,37.41,38.23,39.07,39.95,40.85,41.75,42.75,43.71,44.75,45.82,46.94,48.06,49.26,50.47,51.75,53.08,54.46,55.90,57.39,58.92,60.48,62.14,63.77,65.54,67.27,69.03,70.90])
BaseLogPartFctRef[ 270 ]['NbLabels']= 3
BaseLogPartFctRef[ 270 ]['NbCliques']= 44
BaseLogPartFctRef[ 270 ]['NbSites']= 32
BaseLogPartFctRef[ 270 ]['StdNgbhDivMoyNgbh']= 0.32952096304
BaseLogPartFctRef[271 ]={}
BaseLogPartFctRef[271]['LogPF']=np.array([24.17,24.63,25.12,25.61,26.10,26.62,27.18,27.74,28.33,28.96,29.62,30.32,31.01,31.71,32.46,33.23,33.99,34.81,35.64,36.44,37.32,38.26,39.19,40.18,41.21,42.21,43.31,44.40,45.48,46.59])
BaseLogPartFctRef[ 271 ]['NbLabels']= 3
BaseLogPartFctRef[ 271 ]['NbCliques']= 28
BaseLogPartFctRef[ 271 ]['NbSites']= 22
BaseLogPartFctRef[ 271 ]['StdNgbhDivMoyNgbh']= 0.387198296305
BaseLogPartFctRef[272 ]={}
BaseLogPartFctRef[272]['LogPF']=np.array([31.86,32.54,33.22,33.93,34.67,35.44,36.24,37.08,37.90,38.74,39.64,40.56,41.47,42.44,43.50,44.60,45.70,46.82,48.02,49.23,50.53,51.86,53.26,54.70,56.12,57.62,59.06,60.57,62.12,63.79])
BaseLogPartFctRef[ 272 ]['NbLabels']= 3
BaseLogPartFctRef[ 272 ]['NbCliques']= 39
BaseLogPartFctRef[ 272 ]['NbSites']= 29
BaseLogPartFctRef[ 272 ]['StdNgbhDivMoyNgbh']= 0.354031644642
BaseLogPartFctRef[273 ]={}
BaseLogPartFctRef[273]['LogPF']=np.array([54.93,56.03,57.20,58.37,59.63,60.90,62.29,63.57,64.88,66.47,68.00,69.60,71.18,72.81,74.67,76.56,78.53,80.31,82.27,84.29,86.25,88.65,90.88,93.13,95.45,97.80,100.3,102.8,105.6,108.1])
BaseLogPartFctRef[ 273 ]['NbLabels']= 3
BaseLogPartFctRef[ 273 ]['NbCliques']= 66
BaseLogPartFctRef[ 273 ]['NbSites']= 50
BaseLogPartFctRef[ 273 ]['StdNgbhDivMoyNgbh']= 0.361888983479
BaseLogPartFctRef[274 ]={}
BaseLogPartFctRef[274]['LogPF']=np.array([27.47,28.01,28.58,29.16,29.78,30.42,31.06,31.72,32.43,33.17,33.92,34.68,35.49,36.31,37.19,38.06,39.03,40.03,41.00,42.04,43.06,44.15,45.23,46.40,47.66,48.96,50.28,51.63,52.94,54.31])
BaseLogPartFctRef[ 274 ]['NbLabels']= 3
BaseLogPartFctRef[ 274 ]['NbCliques']= 33
BaseLogPartFctRef[ 274 ]['NbSites']= 25
BaseLogPartFctRef[ 274 ]['StdNgbhDivMoyNgbh']= 0.383183705377
BaseLogPartFctRef[275 ]={}
BaseLogPartFctRef[275]['LogPF']=np.array([46.14,47.08,48.05,49.03,50.02,51.06,52.20,53.34,54.49,55.73,56.92,58.22,59.53,60.95,62.37,63.82,65.39,66.98,68.60,70.35,72.06,73.84,75.68,77.53,79.47,81.40,83.46,85.55,87.65,89.83])
BaseLogPartFctRef[ 275 ]['NbLabels']= 3
BaseLogPartFctRef[ 275 ]['NbCliques']= 55
BaseLogPartFctRef[ 275 ]['NbSites']= 42
BaseLogPartFctRef[ 275 ]['StdNgbhDivMoyNgbh']= 0.398066719122
BaseLogPartFctRef[276 ]={}
BaseLogPartFctRef[276]['LogPF']=np.array([40.65,41.41,42.22,43.07,43.92,44.78,45.66,46.57,47.49,48.50,49.52,50.60,51.69,52.75,53.95,55.12,56.35,57.64,58.98,60.33,61.73,63.13,64.58,66.08,67.61,69.15,70.72,72.40,74.08,75.80])
BaseLogPartFctRef[ 276 ]['NbLabels']= 3
BaseLogPartFctRef[ 276 ]['NbCliques']= 45
BaseLogPartFctRef[ 276 ]['NbSites']= 37
BaseLogPartFctRef[ 276 ]['StdNgbhDivMoyNgbh']= 0.411095207391
BaseLogPartFctRef[277 ]={}
BaseLogPartFctRef[277]['LogPF']=np.array([24.17,24.63,25.11,25.60,26.09,26.60,27.15,27.69,28.26,28.84,29.47,30.09,30.77,31.43,32.14,32.87,33.62,34.37,35.14,36.03,36.84,37.68,38.58,39.47,40.42,41.41,42.45,43.47,44.55,45.62])
BaseLogPartFctRef[ 277 ]['NbLabels']= 3
BaseLogPartFctRef[ 277 ]['NbCliques']= 27
BaseLogPartFctRef[ 277 ]['NbSites']= 22
BaseLogPartFctRef[ 277 ]['StdNgbhDivMoyNgbh']= 0.384037694133
BaseLogPartFctRef[278 ]={}
BaseLogPartFctRef[278]['LogPF']=np.array([26.37,26.94,27.53,28.15,28.80,29.45,30.13,30.84,31.56,32.26,33.01,33.83,34.69,35.57,36.52,37.49,38.48,39.45,40.53,41.64,42.78,43.98,45.17,46.43,47.75,49.14,50.50,51.94,53.37,54.83])
BaseLogPartFctRef[ 278 ]['NbLabels']= 3
BaseLogPartFctRef[ 278 ]['NbCliques']= 34
BaseLogPartFctRef[ 278 ]['NbSites']= 24
BaseLogPartFctRef[ 278 ]['StdNgbhDivMoyNgbh']= 0.318891293476
BaseLogPartFctRef[279 ]={}
BaseLogPartFctRef[279]['LogPF']=np.array([23.07,23.58,24.09,24.62,25.15,25.75,26.32,26.92,27.52,28.16,28.81,29.51,30.22,31.00,31.76,32.55,33.38,34.21,35.12,35.98,36.96,37.93,38.92,39.94,41.01,42.03,43.13,44.27,45.47,46.67])
BaseLogPartFctRef[ 279 ]['NbLabels']= 3
BaseLogPartFctRef[ 279 ]['NbCliques']= 29
BaseLogPartFctRef[ 279 ]['NbSites']= 21
BaseLogPartFctRef[ 279 ]['StdNgbhDivMoyNgbh']= 0.382280680684
BaseLogPartFctRef[280 ]={}
BaseLogPartFctRef[280]['LogPF']=np.array([32.96,33.53,34.13,34.73,35.36,36.02,36.69,37.40,38.11,38.82,39.59,40.40,41.22,42.08,43.00,43.90,44.79,45.76,46.76,47.77,48.83,49.91,50.98,52.11,53.29,54.48,55.73,56.90,58.16,59.46])
BaseLogPartFctRef[ 280 ]['NbLabels']= 3
BaseLogPartFctRef[ 280 ]['NbCliques']= 34
BaseLogPartFctRef[ 280 ]['NbSites']= 30
BaseLogPartFctRef[ 280 ]['StdNgbhDivMoyNgbh']= 0.46630918092
BaseLogPartFctRef[281 ]={}
BaseLogPartFctRef[281]['LogPF']=np.array([37.35,38.11,38.92,39.74,40.61,41.50,42.42,43.35,44.36,45.37,46.44,47.47,48.63,49.78,50.99,52.29,53.56,54.96,56.33,57.82,59.44,61.03,62.69,64.38,66.12,67.90,69.71,71.51,73.48,75.41])
BaseLogPartFctRef[ 281 ]['NbLabels']= 3
BaseLogPartFctRef[ 281 ]['NbCliques']= 46
BaseLogPartFctRef[ 281 ]['NbSites']= 34
BaseLogPartFctRef[ 281 ]['StdNgbhDivMoyNgbh']= 0.456457258658
BaseLogPartFctRef[282 ]={}
BaseLogPartFctRef[282]['LogPF']=np.array([35.16,35.87,36.61,37.39,38.20,39.01,39.88,40.80,41.73,42.67,43.63,44.64,45.73,46.82,48.01,49.14,50.38,51.68,53.10,54.45,55.85,57.28,58.81,60.36,62.01,63.72,65.40,67.16,68.90,70.67])
BaseLogPartFctRef[ 282 ]['NbLabels']= 3
BaseLogPartFctRef[ 282 ]['NbCliques']= 43
BaseLogPartFctRef[ 282 ]['NbSites']= 32
BaseLogPartFctRef[ 282 ]['StdNgbhDivMoyNgbh']= 0.480561482005
BaseLogPartFctRef[283 ]={}
BaseLogPartFctRef[283]['LogPF']=np.array([50.54,51.68,52.87,54.09,55.33,56.64,58.01,59.43,60.84,62.36,63.89,65.42,67.07,68.75,70.60,72.39,74.33,76.30,78.40,80.55,82.84,85.17,87.65,90.19,92.89,95.61,98.32,101.2,104.0,106.9])
BaseLogPartFctRef[ 283 ]['NbLabels']= 3
BaseLogPartFctRef[ 283 ]['NbCliques']= 67
BaseLogPartFctRef[ 283 ]['NbSites']= 46
BaseLogPartFctRef[ 283 ]['StdNgbhDivMoyNgbh']= 0.413234085927
BaseLogPartFctRef[284 ]={}
BaseLogPartFctRef[284]['LogPF']=np.array([59.33,60.63,61.96,63.41,64.85,66.36,67.91,69.51,71.22,72.97,74.82,76.62,78.46,80.45,82.41,84.47,86.59,88.69,91.15,93.66,96.40,99.02,101.9,104.7,107.8,110.8,113.8,117.0,120.2,123.4])
BaseLogPartFctRef[ 284 ]['NbLabels']= 3
BaseLogPartFctRef[ 284 ]['NbCliques']= 78
BaseLogPartFctRef[ 284 ]['NbSites']= 54
BaseLogPartFctRef[ 284 ]['StdNgbhDivMoyNgbh']= 0.41179611259
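#Illustrative helper (an assumption, not part of the original tables): every entry stores 30 LogPF samples, and the ComputeBaseLogPartFctRef_NonReg calls
#documented in the surrounding comments (BetaMax=1.45, DeltaBeta=0.05), together with LogPF[0] matching NbSites*log(NbLabels), suggest the samples correspond
#to beta = 0.0, 0.05, ..., 1.45. Under that assumption the log-partition function at an arbitrary beta can be read off by linear interpolation, as sketched
#below; the helper name is hypothetical.
def _interp_log_part_fct(entry, beta, delta_beta=0.05):
    """Linearly interpolate an entry's tabulated LogPF curve at the given beta (assumed grid 0.0, 0.05, ..., 1.45)."""
    log_pf = entry['LogPF']
    betas = np.arange(log_pf.size) * delta_beta  # assumed beta grid
    return np.interp(beta, betas, log_pf)
#Example (hypothetical usage): _interp_log_part_fct(BaseLogPartFctRef[255], 0.) gives 71.41, i.e. the independent-site limit 65*log(3) for the 65-site
#graph of entry 255.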
#ComputeBaseLogPartFctRef_NonReg(FirstIndex=(255+30),NbExtraIndex=30,BetaMax=1.45,DeltaBeta=0.05,NbLabels=3,NBX=10,NBY=10,NBZ=10,BetaGeneration=0.4)
BaseLogPartFctRef[285 ]={}
BaseLogPartFctRef[285]['LogPF']=np.array([691.0,713.8,737.8,762.5,787.5,814.1,842.4,870.4,899.3,929.5,964.6,1003.7,1042.4,1086.1,1136.3,1186.0,1242.4,1299.1,1359.9,1418.3,1479.8,1542.0,1605.7,1669.4,1734.8,1799.7,1865.2,1931.2,1997.7,2063.7])
BaseLogPartFctRef[ 285 ]['NbLabels']= 3
BaseLogPartFctRef[ 285 ]['NbCliques']= 1358
BaseLogPartFctRef[ 285 ]['NbSites']= 629
BaseLogPartFctRef[ 285 ]['StdNgbhDivMoyNgbh']= 0.282723872571
BaseLogPartFctRef[286 ]={}
BaseLogPartFctRef[286]['LogPF']=np.array([793.2,822.3,851.8,882.8,914.5,947.4,981.4,1017.1,1055.9,1095.6,1145.2,1198.3,1258.3,1323.9,1390.5,1461.3,1533.9,1609.2,1686.6,1765.6,1844.5,1925.2,2006.7,2088.6,2171.4,2253.6,2336.5,2420.1,2503.7,2587.5])
BaseLogPartFctRef[ 286 ]['NbLabels']= 3
BaseLogPartFctRef[ 286 ]['NbCliques']= 1697
BaseLogPartFctRef[ 286 ]['NbSites']= 722
BaseLogPartFctRef[ 286 ]['StdNgbhDivMoyNgbh']= 0.239718877916
BaseLogPartFctRef[287 ]={}
BaseLogPartFctRef[287]['LogPF']=np.array([725.1,750.1,775.7,801.8,829.4,857.8,886.6,917.0,949.6,983.0,1020.9,1063.2,1108.6,1160.3,1216.3,1275.8,1340.0,1402.5,1466.5,1532.7,1600.4,1667.7,1736.5,1806.4,1876.5,1946.3,2017.4,2088.5,2160.6,2232.1])
BaseLogPartFctRef[ 287 ]['NbLabels']= 3
BaseLogPartFctRef[ 287 ]['NbCliques']= 1464
BaseLogPartFctRef[ 287 ]['NbSites']= 660
BaseLogPartFctRef[ 287 ]['StdNgbhDivMoyNgbh']= 0.268932478729
BaseLogPartFctRef[288 ]={}
BaseLogPartFctRef[288]['LogPF']=np.array([695.4,719.2,743.1,768.0,793.4,820.1,848.3,876.3,906.9,940.0,975.5,1016.8,1063.1,1111.4,1164.1,1217.2,1276.1,1335.6,1396.4,1459.0,1522.7,1587.6,1651.9,1717.0,1783.6,1850.8,1918.5,1986.1,2053.6,2121.8])
BaseLogPartFctRef[ 288 ]['NbLabels']= 3
BaseLogPartFctRef[ 288 ]['NbCliques']= 1389
BaseLogPartFctRef[ 288 ]['NbSites']= 633
BaseLogPartFctRef[ 288 ]['StdNgbhDivMoyNgbh']= 0.276406948361
BaseLogPartFctRef[289 ]={}
BaseLogPartFctRef[289]['LogPF']=np.array([643.8,665.3,688.0,711.9,737.0,762.2,787.8,814.7,843.0,874.0,907.4,945.1,984.6,1032.2,1081.8,1133.2,1186.8,1243.0,1297.5,1357.5,1418.3,1479.2,1539.5,1600.7,1663.3,1725.9,1788.4,1851.3,1915.2,1979.0])
BaseLogPartFctRef[ 289 ]['NbLabels']= 3
BaseLogPartFctRef[ 289 ]['NbCliques']= 1301
BaseLogPartFctRef[ 289 ]['NbSites']= 586
BaseLogPartFctRef[ 289 ]['StdNgbhDivMoyNgbh']= 0.279615990334
BaseLogPartFctRef[290 ]={}
BaseLogPartFctRef[290]['LogPF']=np.array([718.5,742.9,767.6,793.0,819.4,847.2,874.9,904.7,938.0,969.5,1008.4,1050.2,1097.8,1144.1,1195.6,1250.4,1308.7,1369.6,1433.3,1495.9,1560.2,1624.7,1691.5,1757.6,1825.9,1892.4,1960.9,2029.5,2098.6,2167.3])
BaseLogPartFctRef[ 290 ]['NbLabels']= 3
BaseLogPartFctRef[ 290 ]['NbCliques']= 1415
BaseLogPartFctRef[ 290 ]['NbSites']= 654
BaseLogPartFctRef[ 290 ]['StdNgbhDivMoyNgbh']= 0.280427716308
BaseLogPartFctRef[291 ]={}
BaseLogPartFctRef[291]['LogPF']=np.array([705.3,729.4,753.6,778.9,804.8,832.3,860.4,889.2,918.8,950.8,987.0,1027.1,1073.6,1124.6,1176.8,1231.5,1290.1,1351.3,1412.8,1475.9,1538.5,1603.1,1669.9,1735.8,1802.1,1869.1,1936.2,2004.5,2072.6,2140.9])
BaseLogPartFctRef[ 291 ]['NbLabels']= 3
BaseLogPartFctRef[ 291 ]['NbCliques']= 1396
BaseLogPartFctRef[ 291 ]['NbSites']= 642
BaseLogPartFctRef[ 291 ]['StdNgbhDivMoyNgbh']= 0.274809819809
BaseLogPartFctRef[292 ]={}
BaseLogPartFctRef[292]['LogPF']=np.array([637.2,658.2,680.2,703.0,726.4,751.1,776.2,802.6,830.1,859.6,891.8,925.9,965.3,1007.4,1054.7,1105.5,1155.1,1211.0,1266.0,1323.3,1381.2,1439.5,1498.0,1557.5,1617.6,1678.1,1739.1,1800.3,1861.1,1922.2])
BaseLogPartFctRef[ 292 ]['NbLabels']= 3
BaseLogPartFctRef[ 292 ]['NbCliques']= 1262
BaseLogPartFctRef[ 292 ]['NbSites']= 580
BaseLogPartFctRef[ 292 ]['StdNgbhDivMoyNgbh']= 0.286430368058
BaseLogPartFctRef[293 ]={}
BaseLogPartFctRef[293]['LogPF']=np.array([695.4,719.2,744.0,769.0,795.0,822.2,850.3,878.7,909.8,941.2,980.0,1020.3,1068.5,1116.8,1170.0,1222.6,1282.8,1342.1,1403.7,1466.6,1532.0,1596.1,1659.9,1726.3,1793.2,1861.2,1928.1,1996.4,2064.9,2133.1])
BaseLogPartFctRef[ 293 ]['NbLabels']= 3
BaseLogPartFctRef[ 293 ]['NbCliques']= 1399
BaseLogPartFctRef[ 293 ]['NbSites']= 633
BaseLogPartFctRef[ 293 ]['StdNgbhDivMoyNgbh']= 0.270857570679
BaseLogPartFctRef[294 ]={}
BaseLogPartFctRef[294]['LogPF']=np.array([692.1,716.0,740.7,766.5,793.2,820.1,847.7,876.7,907.6,939.6,976.2,1015.5,1057.5,1110.7,1163.0,1220.8,1279.5,1339.6,1402.3,1466.0,1530.7,1596.2,1662.3,1728.9,1795.3,1862.7,1930.2,1998.1,2067.1,2135.7])
BaseLogPartFctRef[ 294 ]['NbLabels']= 3
BaseLogPartFctRef[ 294 ]['NbCliques']= 1404
BaseLogPartFctRef[ 294 ]['NbSites']= 630
BaseLogPartFctRef[ 294 ]['StdNgbhDivMoyNgbh']= 0.279144327234
BaseLogPartFctRef[295 ]={}
BaseLogPartFctRef[295]['LogPF']=np.array([798.7,826.6,855.5,885.4,916.9,949.1,982.2,1018.0,1054.9,1095.1,1142.6,1196.1,1252.8,1309.6,1371.3,1438.1,1509.6,1580.3,1654.1,1729.5,1806.4,1883.6,1963.0,2042.2,2122.5,2202.1,2282.2,2363.7,2445.2,2527.2])
BaseLogPartFctRef[ 295 ]['NbLabels']= 3
BaseLogPartFctRef[ 295 ]['NbCliques']= 1662
BaseLogPartFctRef[ 295 ]['NbSites']= 727
BaseLogPartFctRef[ 295 ]['StdNgbhDivMoyNgbh']= 0.2566648115
BaseLogPartFctRef[296 ]={}
BaseLogPartFctRef[296]['LogPF']=np.array([705.3,729.0,754.1,778.7,804.5,831.5,859.6,889.0,920.4,953.5,987.2,1029.2,1078.3,1131.1,1184.6,1238.4,1297.3,1356.8,1417.6,1479.6,1543.2,1608.1,1673.6,1739.0,1805.9,1871.9,1938.9,2006.7,2074.3,2142.3])
BaseLogPartFctRef[ 296 ]['NbLabels']= 3
BaseLogPartFctRef[ 296 ]['NbCliques']= 1392
BaseLogPartFctRef[ 296 ]['NbSites']= 642
BaseLogPartFctRef[ 296 ]['StdNgbhDivMoyNgbh']= 0.27311390062
BaseLogPartFctRef[297 ]={}
BaseLogPartFctRef[297]['LogPF']=np.array([635.0,655.5,677.2,699.1,721.6,745.1,769.6,795.2,821.9,849.5,882.6,918.9,959.2,1000.9,1047.3,1095.1,1143.5,1194.9,1247.7,1300.9,1356.3,1411.4,1468.1,1525.7,1583.8,1641.7,1700.5,1759.6,1819.4,1878.6])
BaseLogPartFctRef[ 297 ]['NbLabels']= 3
BaseLogPartFctRef[ 297 ]['NbCliques']= 1224
BaseLogPartFctRef[ 297 ]['NbSites']= 578
BaseLogPartFctRef[ 297 ]['StdNgbhDivMoyNgbh']= 0.282448665186
BaseLogPartFctRef[298 ]={}
BaseLogPartFctRef[298]['LogPF']=np.array([785.5,812.7,841.2,869.8,900.0,930.6,962.7,996.0,1031.0,1070.1,1109.6,1158.4,1211.2,1267.9,1329.5,1393.6,1460.4,1531.4,1600.3,1674.2,1748.4,1822.5,1898.3,1974.0,2051.2,2129.3,2207.3,2285.4,2364.5,2443.6])
BaseLogPartFctRef[ 298 ]['NbLabels']= 3
BaseLogPartFctRef[ 298 ]['NbCliques']= 1606
BaseLogPartFctRef[ 298 ]['NbSites']= 715
BaseLogPartFctRef[ 298 ]['StdNgbhDivMoyNgbh']= 0.248105993557
BaseLogPartFctRef[299 ]={}
BaseLogPartFctRef[299]['LogPF']=np.array([821.8,851.0,881.0,912.3,944.9,978.2,1011.8,1048.7,1087.3,1130.1,1179.6,1230.6,1292.4,1356.2,1425.2,1498.2,1571.9,1650.6,1730.7,1811.3,1892.3,1974.6,2056.5,2139.6,2223.2,2307.8,2392.5,2477.5,2562.8,2647.9])
BaseLogPartFctRef[ 299 ]['NbLabels']= 3
BaseLogPartFctRef[ 299 ]['NbCliques']= 1737
BaseLogPartFctRef[ 299 ]['NbSites']= 748
BaseLogPartFctRef[ 299 ]['StdNgbhDivMoyNgbh']= 0.247301841432
BaseLogPartFctRef[300 ]={}
BaseLogPartFctRef[300]['LogPF']=np.array([680.0,702.3,725.2,748.8,773.8,799.1,825.5,853.3,881.6,911.1,944.8,981.5,1024.2,1070.1,1116.8,1169.2,1223.4,1278.1,1335.7,1394.8,1455.3,1517.1,1578.5,1641.8,1705.5,1769.7,1833.2,1897.3,1961.5,2026.6])
BaseLogPartFctRef[ 300 ]['NbLabels']= 3
BaseLogPartFctRef[ 300 ]['NbCliques']= 1324
BaseLogPartFctRef[ 300 ]['NbSites']= 619
BaseLogPartFctRef[ 300 ]['StdNgbhDivMoyNgbh']= 0.279879197433
BaseLogPartFctRef[301 ]={}
BaseLogPartFctRef[301]['LogPF']=np.array([717.4,741.6,766.7,793.1,820.1,847.6,876.9,907.6,940.3,973.4,1008.9,1052.1,1099.0,1151.2,1206.4,1262.7,1323.0,1386.4,1451.9,1516.0,1582.6,1649.5,1718.2,1787.1,1857.5,1927.3,1997.9,2068.4,2139.1,2210.3])
BaseLogPartFctRef[ 301 ]['NbLabels']= 3
BaseLogPartFctRef[ 301 ]['NbCliques']= 1454
BaseLogPartFctRef[ 301 ]['NbSites']= 653
BaseLogPartFctRef[ 301 ]['StdNgbhDivMoyNgbh']= 0.273239438847
BaseLogPartFctRef[302 ]={}
BaseLogPartFctRef[302]['LogPF']=np.array([767.9,793.9,820.2,848.1,876.3,905.6,936.0,967.7,1000.3,1037.5,1078.4,1124.2,1169.1,1222.9,1281.6,1342.7,1405.4,1470.1,1536.5,1604.9,1674.2,1744.6,1814.9,1887.2,1960.5,2034.0,2107.1,2180.7,2255.0,2329.8])
BaseLogPartFctRef[ 302 ]['NbLabels']= 3
BaseLogPartFctRef[ 302 ]['NbCliques']= 1522
BaseLogPartFctRef[ 302 ]['NbSites']= 699
BaseLogPartFctRef[ 302 ]['StdNgbhDivMoyNgbh']= 0.26619523446
BaseLogPartFctRef[303 ]={}
BaseLogPartFctRef[303]['LogPF']=np.array([776.7,802.8,829.5,856.9,885.3,915.4,946.1,977.9,1011.9,1049.4,1089.0,1136.3,1185.0,1240.0,1294.5,1355.0,1419.1,1484.6,1553.0,1621.9,1692.8,1763.6,1836.2,1908.5,1982.6,2056.2,2131.3,2205.6,2280.9,2356.7])
BaseLogPartFctRef[ 303 ]['NbLabels']= 3
BaseLogPartFctRef[ 303 ]['NbCliques']= 1543
BaseLogPartFctRef[ 303 ]['NbSites']= 707
BaseLogPartFctRef[ 303 ]['StdNgbhDivMoyNgbh']= 0.267395399121
BaseLogPartFctRef[304 ]={}
BaseLogPartFctRef[304]['LogPF']=np.array([732.8,756.7,780.8,806.5,832.7,859.8,888.2,916.8,948.3,979.0,1012.2,1050.8,1092.5,1137.1,1187.5,1240.7,1296.8,1354.0,1411.7,1474.4,1536.6,1599.0,1663.5,1729.1,1794.2,1859.9,1926.7,1994.4,2061.7,2128.6])
BaseLogPartFctRef[ 304 ]['NbLabels']= 3
BaseLogPartFctRef[ 304 ]['NbCliques']= 1392
BaseLogPartFctRef[ 304 ]['NbSites']= 667
BaseLogPartFctRef[ 304 ]['StdNgbhDivMoyNgbh']= 0.288462266949
BaseLogPartFctRef[305 ]={}
BaseLogPartFctRef[305]['LogPF']=np.array([777.8,804.6,832.3,860.1,889.5,920.5,951.0,983.6,1017.3,1054.0,1092.3,1136.0,1186.0,1237.7,1295.7,1359.9,1427.7,1492.2,1560.4,1632.3,1703.8,1777.0,1851.3,1925.5,2001.1,2076.9,2153.1,2229.3,2305.9,2383.1])
BaseLogPartFctRef[ 305 ]['NbLabels']= 3
BaseLogPartFctRef[ 305 ]['NbCliques']= 1569
BaseLogPartFctRef[ 305 ]['NbSites']= 708
BaseLogPartFctRef[ 305 ]['StdNgbhDivMoyNgbh']= 0.248564683861
BaseLogPartFctRef[306 ]={}
BaseLogPartFctRef[306]['LogPF']=np.array([717.4,741.8,767.4,793.5,820.5,848.9,877.9,908.9,941.5,977.4,1020.3,1063.1,1114.2,1165.5,1224.0,1283.2,1341.8,1405.3,1469.5,1535.1,1600.5,1668.8,1738.2,1807.5,1878.3,1948.9,2019.4,2090.6,2162.3,2234.2])
BaseLogPartFctRef[ 306 ]['NbLabels']= 3
BaseLogPartFctRef[ 306 ]['NbCliques']= 1461
BaseLogPartFctRef[ 306 ]['NbSites']= 653
BaseLogPartFctRef[ 306 ]['StdNgbhDivMoyNgbh']= 0.270948215172
BaseLogPartFctRef[307 ]={}
BaseLogPartFctRef[307]['LogPF']=np.array([683.3,705.6,729.7,753.7,779.3,805.0,831.9,859.1,887.9,920.0,954.9,989.1,1030.9,1080.2,1128.5,1180.2,1236.2,1293.2,1350.4,1410.1,1470.1,1532.4,1594.8,1656.7,1719.0,1783.2,1847.7,1912.5,1977.1,2042.2])
BaseLogPartFctRef[ 307 ]['NbLabels']= 3
BaseLogPartFctRef[ 307 ]['NbCliques']= 1335
BaseLogPartFctRef[ 307 ]['NbSites']= 622
BaseLogPartFctRef[ 307 ]['StdNgbhDivMoyNgbh']= 0.276450145133
BaseLogPartFctRef[308 ]={}
BaseLogPartFctRef[308]['LogPF']=np.array([795.4,822.3,850.0,879.0,908.8,939.8,971.2,1004.6,1039.0,1076.4,1119.3,1165.3,1217.1,1272.0,1332.9,1395.5,1457.3,1526.3,1598.1,1673.1,1746.4,1821.0,1896.0,1972.1,2048.6,2125.5,2203.4,2280.9,2358.8,2436.9])
BaseLogPartFctRef[ 308 ]['NbLabels']= 3
BaseLogPartFctRef[ 308 ]['NbCliques']= 1596
BaseLogPartFctRef[ 308 ]['NbSites']= 724
BaseLogPartFctRef[ 308 ]['StdNgbhDivMoyNgbh']= 0.258224305712
BaseLogPartFctRef[309 ]={}
BaseLogPartFctRef[309]['LogPF']=np.array([630.6,652.1,674.3,697.5,720.8,745.3,770.7,797.4,824.6,854.6,886.3,917.6,959.0,1003.4,1050.4,1100.6,1153.0,1206.7,1262.5,1318.7,1377.1,1435.3,1495.1,1554.9,1616.1,1677.4,1738.2,1800.3,1862.1,1924.1])
BaseLogPartFctRef[ 309 ]['NbLabels']= 3
BaseLogPartFctRef[ 309 ]['NbCliques']= 1266
BaseLogPartFctRef[ 309 ]['NbSites']= 574
BaseLogPartFctRef[ 309 ]['StdNgbhDivMoyNgbh']= 0.269653103733
BaseLogPartFctRef[310 ]={}
BaseLogPartFctRef[310]['LogPF']=np.array([662.5,684.0,707.2,729.6,753.8,778.7,804.8,831.8,859.7,890.6,922.5,957.7,994.7,1040.6,1085.4,1133.4,1183.9,1238.8,1296.3,1353.7,1412.4,1471.2,1532.0,1591.9,1653.2,1714.8,1776.5,1838.3,1900.4,1963.1])
BaseLogPartFctRef[ 310 ]['NbLabels']= 3
BaseLogPartFctRef[ 310 ]['NbCliques']= 1285
BaseLogPartFctRef[ 310 ]['NbSites']= 603
BaseLogPartFctRef[ 310 ]['StdNgbhDivMoyNgbh']= 0.291932109486
BaseLogPartFctRef[311 ]={}
BaseLogPartFctRef[311]['LogPF']=np.array([560.3,578.7,597.1,616.5,636.8,657.4,678.4,700.5,724.9,750.9,776.2,803.5,836.3,871.2,910.5,952.4,995.3,1040.0,1086.9,1134.4,1182.1,1230.0,1280.0,1329.8,1380.5,1431.6,1483.0,1534.7,1586.4,1638.9])
BaseLogPartFctRef[ 311 ]['NbLabels']= 3
BaseLogPartFctRef[ 311 ]['NbCliques']= 1071
BaseLogPartFctRef[ 311 ]['NbSites']= 510
BaseLogPartFctRef[ 311 ]['StdNgbhDivMoyNgbh']= 0.289240565804
BaseLogPartFctRef[312 ]={}
BaseLogPartFctRef[312]['LogPF']=np.array([785.5,811.6,839.6,867.6,897.5,928.4,960.1,993.0,1028.3,1068.4,1109.9,1154.2,1201.7,1256.7,1318.9,1384.1,1450.8,1521.5,1593.2,1665.4,1738.5,1811.0,1885.7,1960.7,2037.2,2114.2,2191.1,2268.1,2346.0,2424.1])
BaseLogPartFctRef[ 312 ]['NbLabels']= 3
BaseLogPartFctRef[ 312 ]['NbCliques']= 1592
BaseLogPartFctRef[ 312 ]['NbSites']= 715
BaseLogPartFctRef[ 312 ]['StdNgbhDivMoyNgbh']= 0.260541184501
BaseLogPartFctRef[313 ]={}
BaseLogPartFctRef[313]['LogPF']=np.array([660.3,681.9,704.7,727.9,751.9,776.3,802.6,829.2,857.4,888.0,920.6,957.8,997.8,1039.2,1087.7,1138.6,1188.3,1245.0,1302.1,1357.6,1416.1,1473.3,1533.7,1593.5,1654.2,1715.4,1776.8,1838.2,1900.7,1962.7])
BaseLogPartFctRef[ 313 ]['NbLabels']= 3
BaseLogPartFctRef[ 313 ]['NbCliques']= 1281
BaseLogPartFctRef[ 313 ]['NbSites']= 601
BaseLogPartFctRef[ 313 ]['StdNgbhDivMoyNgbh']= 0.29667269453
BaseLogPartFctRef[314 ]={}
BaseLogPartFctRef[314]['LogPF']=np.array([691.0,714.6,739.7,765.2,792.1,819.1,847.7,877.1,907.2,939.6,979.4,1022.9,1067.5,1119.7,1177.0,1233.5,1292.5,1355.9,1418.8,1482.8,1548.4,1613.4,1679.4,1745.9,1813.0,1881.1,1949.5,2018.5,2087.6,2157.0])
BaseLogPartFctRef[ 314 ]['NbLabels']= 3
BaseLogPartFctRef[ 314 ]['NbCliques']= 1410
BaseLogPartFctRef[ 314 ]['NbSites']= 629
BaseLogPartFctRef[ 314 ]['StdNgbhDivMoyNgbh']= 0.269776205412
#ComputeBaseLogPartFctRef_NonReg(FirstIndex=(255+60),NbExtraIndex=30,BetaMax=1.45,DeltaBeta=0.05,NbLabels=3,NBX=10,NBY=10,NBZ=10,BetaGeneration=0.3)
BaseLogPartFctRef[315 ]={}
BaseLogPartFctRef[315]['LogPF']=np.array([366.9,377.2,388.1,398.9,410.4,422.2,434.2,446.5,460.3,474.5,489.2,503.9,520.8,538.1,557.1,576.7,599.0,621.6,643.0,667.7,694.0,719.3,746.0,772.9,800.3,828.1,856.0,884.2,912.6,941.5])
BaseLogPartFctRef[ 315 ]['NbLabels']= 3
BaseLogPartFctRef[ 315 ]['NbCliques']= 615
BaseLogPartFctRef[ 315 ]['NbSites']= 334
BaseLogPartFctRef[ 315 ]['StdNgbhDivMoyNgbh']= 0.36598110595
BaseLogPartFctRef[316 ]={}
BaseLogPartFctRef[316]['LogPF']=np.array([114.3,116.9,119.8,122.8,125.8,128.8,132.1,135.5,139.1,142.6,146.4,150.4,154.4,158.6,163.0,167.6,172.3,177.2,182.2,187.7,193.1,198.6,204.7,210.9,217.8,224.6,231.4,238.4,245.5,253.1])
BaseLogPartFctRef[ 316 ]['NbLabels']= 3
BaseLogPartFctRef[ 316 ]['NbCliques']= 164
BaseLogPartFctRef[ 316 ]['NbSites']= 104
BaseLogPartFctRef[ 316 ]['StdNgbhDivMoyNgbh']= 0.363742189418
BaseLogPartFctRef[317 ]={}
BaseLogPartFctRef[317]['LogPF']=np.array([80.20,82.09,84.00,86.03,88.18,90.30,92.55,94.90,97.13,99.55,102.1,104.7,107.5,110.3,113.3,116.3,119.6,123.1,126.6,130.2,134.0,138.1,141.8,146.1,150.5,154.9,159.2,163.5,168.2,173.0])
BaseLogPartFctRef[ 317 ]['NbLabels']= 3
BaseLogPartFctRef[ 317 ]['NbCliques']= 110
BaseLogPartFctRef[ 317 ]['NbSites']= 73
BaseLogPartFctRef[ 317 ]['StdNgbhDivMoyNgbh']= 0.39988429009
BaseLogPartFctRef[318 ]={}
BaseLogPartFctRef[318]['LogPF']=np.array([81.30,83.19,85.15,87.18,89.26,91.49,93.68,95.88,98.28,100.7,103.2,105.9,108.7,111.7,114.7,118.2,121.3,124.7,128.2,131.8,135.7,139.9,144.1,148.3,152.5,157.3,162.1,166.7,171.5,176.4])
BaseLogPartFctRef[ 318 ]['NbLabels']= 3
BaseLogPartFctRef[ 318 ]['NbCliques']= 111
BaseLogPartFctRef[ 318 ]['NbSites']= 74
BaseLogPartFctRef[ 318 ]['StdNgbhDivMoyNgbh']= 0.40813745446
BaseLogPartFctRef[319 ]={}
BaseLogPartFctRef[319]['LogPF']=np.array([147.2,151.0,154.9,158.9,163.1,167.5,171.8,176.3,181.1,186.1,191.2,196.8,202.6,208.6,214.6,221.3,228.2,235.8,243.3,251.4,259.8,268.4,277.1,286.4,295.4,304.8,314.3,324.0,334.0,344.0])
BaseLogPartFctRef[ 319 ]['NbLabels']= 3
BaseLogPartFctRef[ 319 ]['NbCliques']= 221
BaseLogPartFctRef[ 319 ]['NbSites']= 134
BaseLogPartFctRef[ 319 ]['StdNgbhDivMoyNgbh']= 0.38560906263
BaseLogPartFctRef[320 ]={}
BaseLogPartFctRef[320]['LogPF']=np.array([126.3,129.5,132.7,136.0,139.6,143.2,147.0,150.7,154.6,158.7,163.0,167.4,171.9,177.0,182.2,187.6,193.3,199.2,205.5,211.2,218.0,225.3,232.5,240.2,247.6,255.3,263.2,271.5,280.0,288.5])
BaseLogPartFctRef[ 320 ]['NbLabels']= 3
BaseLogPartFctRef[ 320 ]['NbCliques']= 187
BaseLogPartFctRef[ 320 ]['NbSites']= 115
BaseLogPartFctRef[ 320 ]['StdNgbhDivMoyNgbh']= 0.350072864142
BaseLogPartFctRef[321 ]={}
BaseLogPartFctRef[321]['LogPF']=np.array([104.4,107.1,110.0,113.1,116.1,119.2,122.4,125.7,129.1,132.8,136.6,140.4,144.9,149.1,153.4,158.0,162.7,168.0,173.0,179.2,185.3,191.9,198.2,205.1,212.3,219.4,226.8,234.0,241.6,249.3])
BaseLogPartFctRef[ 321 ]['NbLabels']= 3
BaseLogPartFctRef[ 321 ]['NbCliques']= 163
BaseLogPartFctRef[ 321 ]['NbSites']= 95
BaseLogPartFctRef[ 321 ]['StdNgbhDivMoyNgbh']= 0.382651352394
BaseLogPartFctRef[322 ]={}
BaseLogPartFctRef[322]['LogPF']=np.array([316.4,325.0,333.7,343.1,352.7,362.8,372.6,382.7,393.4,405.2,417.6,430.3,443.3,457.4,473.8,489.8,508.4,527.1,546.6,566.8,585.9,607.1,628.4,650.9,673.5,696.5,719.9,743.6,767.5,791.6])
BaseLogPartFctRef[ 322 ]['NbLabels']= 3
BaseLogPartFctRef[ 322 ]['NbCliques']= 515
BaseLogPartFctRef[ 322 ]['NbSites']= 288
BaseLogPartFctRef[ 322 ]['StdNgbhDivMoyNgbh']= 0.365049934797
BaseLogPartFctRef[323 ]={}
BaseLogPartFctRef[323]['LogPF']=np.array([207.6,213.5,219.6,225.8,232.2,238.9,245.9,253.2,260.8,268.4,276.5,285.0,294.3,303.8,313.4,323.7,335.3,346.9,358.8,371.5,385.4,399.8,414.2,429.6,444.7,460.4,476.3,492.3,508.4,524.7])
BaseLogPartFctRef[ 323 ]['NbLabels']= 3
BaseLogPartFctRef[ 323 ]['NbCliques']= 345
BaseLogPartFctRef[ 323 ]['NbSites']= 189
BaseLogPartFctRef[ 323 ]['StdNgbhDivMoyNgbh']= 0.331011355529
BaseLogPartFctRef[324 ]={}
BaseLogPartFctRef[324]['LogPF']=np.array([209.8,214.8,220.0,225.4,231.0,236.9,242.9,249.0,255.3,261.9,268.8,275.9,283.3,291.0,299.2,308.0,316.9,326.8,336.6,347.1,357.7,368.8,380.0,391.6,403.6,416.4,429.0,441.9,455.2,468.8])
BaseLogPartFctRef[ 324 ]['NbLabels']= 3
BaseLogPartFctRef[ 324 ]['NbCliques']= 301
BaseLogPartFctRef[ 324 ]['NbSites']= 191
BaseLogPartFctRef[ 324 ]['StdNgbhDivMoyNgbh']= 0.368659147247
BaseLogPartFctRef[325 ]={}
BaseLogPartFctRef[325]['LogPF']=np.array([141.7,145.5,149.4,153.4,157.4,161.7,166.2,170.9,175.4,180.3,185.4,190.6,196.1,201.7,208.1,214.7,221.5,228.4,236.0,243.8,252.0,260.6,269.4,278.8,288.1,297.5,307.2,317.1,327.0,337.1])
BaseLogPartFctRef[ 325 ]['NbLabels']= 3
BaseLogPartFctRef[ 325 ]['NbCliques']= 218
BaseLogPartFctRef[ 325 ]['NbSites']= 129
BaseLogPartFctRef[ 325 ]['StdNgbhDivMoyNgbh']= 0.387193491953
BaseLogPartFctRef[326 ]={}
BaseLogPartFctRef[326]['LogPF']=np.array([234.0,240.5,247.3,254.3,261.4,269.2,277.0,284.9,293.4,301.9,310.5,319.7,329.8,340.7,352.1,365.1,378.7,393.0,407.6,423.3,439.0,455.2,471.6,488.5,505.5,523.3,541.6,559.4,577.8,595.9])
BaseLogPartFctRef[ 326 ]['NbLabels']= 3
BaseLogPartFctRef[ 326 ]['NbCliques']= 389
BaseLogPartFctRef[ 326 ]['NbSites']= 213
BaseLogPartFctRef[ 326 ]['StdNgbhDivMoyNgbh']= 0.325694986108
BaseLogPartFctRef[327 ]={}
BaseLogPartFctRef[327]['LogPF']=np.array([294.4,302.2,310.4,318.8,327.3,336.3,345.4,355.0,365.2,375.8,386.5,397.3,409.4,422.6,437.2,451.3,466.8,481.4,498.5,515.5,534.0,553.3,572.9,592.7,613.4,634.3,654.9,675.4,696.5,718.1])
BaseLogPartFctRef[ 327 ]['NbLabels']= 3
BaseLogPartFctRef[ 327 ]['NbCliques']= 465
BaseLogPartFctRef[ 327 ]['NbSites']= 268
BaseLogPartFctRef[ 327 ]['StdNgbhDivMoyNgbh']= 0.365667449166
BaseLogPartFctRef[328 ]={}
BaseLogPartFctRef[328]['LogPF']=np.array([292.2,300.2,308.6,317.1,326.0,335.1,344.7,354.2,364.2,374.8,386.1,397.0,408.9,422.2,436.4,450.6,465.8,482.3,499.2,517.6,536.7,555.6,576.0,596.3,617.8,639.7,661.3,683.4,705.4,727.8])
BaseLogPartFctRef[ 328 ]['NbLabels']= 3
BaseLogPartFctRef[ 328 ]['NbCliques']= 476
BaseLogPartFctRef[ 328 ]['NbSites']= 266
BaseLogPartFctRef[ 328 ]['StdNgbhDivMoyNgbh']= 0.337418413588
BaseLogPartFctRef[329 ]={}
BaseLogPartFctRef[329]['LogPF']=np.array([107.7,110.5,113.4,116.4,119.4,122.7,126.0,129.4,133.0,136.5,140.5,144.9,148.9,153.2,157.6,162.5,167.1,172.0,177.4,182.9,188.9,195.2,201.6,208.3,215.1,222.4,229.4,236.9,244.4,251.8])
BaseLogPartFctRef[ 329 ]['NbLabels']= 3
BaseLogPartFctRef[ 329 ]['NbCliques']= 164
BaseLogPartFctRef[ 329 ]['NbSites']= 98
BaseLogPartFctRef[ 329 ]['StdNgbhDivMoyNgbh']= 0.363676245603
BaseLogPartFctRef[330 ]={}
BaseLogPartFctRef[330]['LogPF']=np.array([130.7,134.0,137.4,140.8,144.4,148.2,151.9,155.8,159.6,163.9,168.5,173.2,178.0,183.0,188.3,193.8,199.5,206.0,212.3,219.2,226.3,233.1,240.2,247.9,256.0,264.0,272.2,280.8,289.3,297.9])
BaseLogPartFctRef[ 330 ]['NbLabels']= 3
BaseLogPartFctRef[ 330 ]['NbCliques']= 192
BaseLogPartFctRef[ 330 ]['NbSites']= 119
BaseLogPartFctRef[ 330 ]['StdNgbhDivMoyNgbh']= 0.372511651842
BaseLogPartFctRef[331 ]={}
BaseLogPartFctRef[331]['LogPF']=np.array([336.2,346.0,356.4,366.7,377.6,388.6,400.1,412.2,424.8,437.5,451.2,466.2,481.2,496.8,515.9,535.2,555.4,576.3,598.7,621.3,644.9,670.5,695.7,722.1,748.8,775.6,803.3,830.4,857.7,885.8])
BaseLogPartFctRef[ 331 ]['NbLabels']= 3
BaseLogPartFctRef[ 331 ]['NbCliques']= 580
BaseLogPartFctRef[ 331 ]['NbSites']= 306
BaseLogPartFctRef[ 331 ]['StdNgbhDivMoyNgbh']= 0.329518283642
BaseLogPartFctRef[332 ]={}
BaseLogPartFctRef[332]['LogPF']=np.array([167.0,171.1,175.3,179.6,184.2,188.9,193.9,198.8,204.0,209.3,214.8,220.6,226.4,232.9,239.3,246.4,253.6,261.1,269.1,277.1,285.7,294.4,303.2,312.6,322.6,332.2,342.1,352.7,363.4,374.2])
BaseLogPartFctRef[ 332 ]['NbLabels']= 3
BaseLogPartFctRef[ 332 ]['NbCliques']= 242
BaseLogPartFctRef[ 332 ]['NbSites']= 152
BaseLogPartFctRef[ 332 ]['StdNgbhDivMoyNgbh']= 0.360403372577
BaseLogPartFctRef[333 ]={}
BaseLogPartFctRef[333]['LogPF']=np.array([218.6,223.9,229.4,235.2,241.2,247.3,253.5,260.2,267.0,274.3,281.6,289.2,296.9,305.0,313.8,323.0,332.1,342.0,352.2,363.0,374.5,385.9,398.4,410.6,423.3,436.7,449.9,463.6,477.6,491.5])
BaseLogPartFctRef[ 333 ]['NbLabels']= 3
BaseLogPartFctRef[ 333 ]['NbCliques']= 316
BaseLogPartFctRef[ 333 ]['NbSites']= 199
BaseLogPartFctRef[ 333 ]['StdNgbhDivMoyNgbh']= 0.391555179039
BaseLogPartFctRef[334 ]={}
BaseLogPartFctRef[334]['LogPF']=np.array([151.6,155.6,159.7,163.8,168.2,172.5,177.2,181.9,186.7,192.0,197.5,202.9,208.6,214.5,220.9,227.3,233.9,241.1,248.7,256.9,265.4,274.4,283.7,293.1,302.5,312.3,322.3,332.7,342.6,353.1])
BaseLogPartFctRef[ 334 ]['NbLabels']= 3
BaseLogPartFctRef[ 334 ]['NbCliques']= 229
BaseLogPartFctRef[ 334 ]['NbSites']= 138
BaseLogPartFctRef[ 334 ]['StdNgbhDivMoyNgbh']= 0.354636934759
BaseLogPartFctRef[335 ]={}
BaseLogPartFctRef[335]['LogPF']=np.array([99.97,102.7,105.4,108.3,111.4,114.4,117.5,120.9,124.4,127.9,131.5,135.4,139.5,143.5,147.7,152.0,157.2,162.3,168.0,173.8,179.5,185.6,191.7,198.1,204.9,211.9,218.9,226.0,233.4,240.7])
BaseLogPartFctRef[ 335 ]['NbLabels']= 3
BaseLogPartFctRef[ 335 ]['NbCliques']= 159
BaseLogPartFctRef[ 335 ]['NbSites']= 91
BaseLogPartFctRef[ 335 ]['StdNgbhDivMoyNgbh']= 0.33924306586
BaseLogPartFctRef[336 ]={}
BaseLogPartFctRef[336]['LogPF']=np.array([90.09,92.50,95.06,97.71,100.3,103.1,105.9,108.8,111.9,115.3,118.7,122.1,125.8,129.5,133.6,138.1,143.2,148.2,153.6,159.0,164.4,170.0,176.1,182.3,188.6,195.0,201.6,208.2,215.0,221.6])
BaseLogPartFctRef[ 336 ]['NbLabels']= 3
BaseLogPartFctRef[ 336 ]['NbCliques']= 144
BaseLogPartFctRef[ 336 ]['NbSites']= 82
BaseLogPartFctRef[ 336 ]['StdNgbhDivMoyNgbh']= 0.362854513411
BaseLogPartFctRef[337 ]={}
BaseLogPartFctRef[337]['LogPF']=np.array([60.42,61.87,63.44,65.10,66.80,68.56,70.46,72.47,74.42,76.39,78.39,80.60,82.91,85.38,87.92,90.45,93.06,96.24,99.38,102.4,105.6,108.6,112.4,116.2,120.0,124.1,127.9,131.8,135.9,140.1])
BaseLogPartFctRef[ 337 ]['NbLabels']= 3
BaseLogPartFctRef[ 337 ]['NbCliques']= 90
BaseLogPartFctRef[ 337 ]['NbSites']= 55
BaseLogPartFctRef[ 337 ]['StdNgbhDivMoyNgbh']= 0.38569460792
BaseLogPartFctRef[338 ]={}
BaseLogPartFctRef[338]['LogPF']=np.array([148.3,152.3,156.4,160.6,165.1,169.9,174.6,179.6,184.7,190.0,195.4,201.2,207.5,213.8,220.5,228.0,236.1,244.7,253.4,262.4,272.1,281.4,291.0,301.2,311.9,322.6,333.4,344.1,354.9,366.0])
BaseLogPartFctRef[ 338 ]['NbLabels']= 3
BaseLogPartFctRef[ 338 ]['NbCliques']= 239
BaseLogPartFctRef[ 338 ]['NbSites']= 135
BaseLogPartFctRef[ 338 ]['StdNgbhDivMoyNgbh']= 0.378058229611
BaseLogPartFctRef[339 ]={}
BaseLogPartFctRef[339]['LogPF']=np.array([81.30,83.23,85.26,87.31,89.42,91.62,93.94,96.20,98.54,101.0,103.7,106.6,109.4,112.3,115.7,118.9,122.2,126.2,130.0,133.8,137.8,141.9,146.2,150.6,155.1,159.6,164.5,169.4,174.2,179.3])
BaseLogPartFctRef[ 339 ]['NbLabels']= 3
BaseLogPartFctRef[ 339 ]['NbCliques']= 115
BaseLogPartFctRef[ 339 ]['NbSites']= 74
BaseLogPartFctRef[ 339 ]['StdNgbhDivMoyNgbh']= 0.367360929892
BaseLogPartFctRef[340 ]={}
BaseLogPartFctRef[340]['LogPF']=np.array([450.4,463.6,477.2,490.9,506.0,521.2,536.5,553.1,569.8,587.4,607.3,626.7,648.5,671.8,696.1,725.1,751.4,779.6,811.0,841.8,875.1,910.3,945.5,981.5,1016.7,1052.5,1089.4,1126.5,1163.5,1200.9])
BaseLogPartFctRef[ 340 ]['NbLabels']= 3
BaseLogPartFctRef[ 340 ]['NbCliques']= 785
BaseLogPartFctRef[ 340 ]['NbSites']= 410
BaseLogPartFctRef[ 340 ]['StdNgbhDivMoyNgbh']= 0.312518467356
BaseLogPartFctRef[341 ]={}
BaseLogPartFctRef[341]['LogPF']=np.array([232.9,238.6,245.0,251.7,258.4,265.5,272.8,280.4,288.2,296.5,304.7,313.2,322.4,331.9,342.2,353.8,364.3,376.2,390.3,403.4,417.1,432.4,446.9,462.4,477.5,493.5,510.3,526.5,543.3,560.2])
BaseLogPartFctRef[ 341 ]['NbLabels']= 3
BaseLogPartFctRef[ 341 ]['NbCliques']= 363
BaseLogPartFctRef[ 341 ]['NbSites']= 212
BaseLogPartFctRef[ 341 ]['StdNgbhDivMoyNgbh']= 0.332688757734
BaseLogPartFctRef[342 ]={}
BaseLogPartFctRef[342]['LogPF']=np.array([260.4,267.7,275.2,283.0,290.9,299.5,308.5,317.6,327.1,337.0,347.4,358.4,371.1,384.1,398.4,412.2,426.8,441.6,458.6,475.6,494.2,513.5,532.0,550.8,570.7,590.9,610.7,631.1,652.0,672.6])
BaseLogPartFctRef[ 342 ]['NbLabels']= 3
BaseLogPartFctRef[ 342 ]['NbCliques']= 441
BaseLogPartFctRef[ 342 ]['NbSites']= 237
BaseLogPartFctRef[ 342 ]['StdNgbhDivMoyNgbh']= 0.386940124834
BaseLogPartFctRef[343 ]={}
BaseLogPartFctRef[343]['LogPF']=np.array([236.2,242.1,248.4,254.9,261.7,268.9,276.3,283.8,291.4,299.5,308.3,317.7,327.4,337.2,347.3,357.6,369.6,381.1,394.4,406.4,420.4,435.3,450.0,465.2,480.3,496.3,512.2,529.1,545.7,562.9])
BaseLogPartFctRef[ 343 ]['NbLabels']= 3
BaseLogPartFctRef[ 343 ]['NbCliques']= 365
BaseLogPartFctRef[ 343 ]['NbSites']= 215
BaseLogPartFctRef[ 343 ]['StdNgbhDivMoyNgbh']= 0.357302435905
BaseLogPartFctRef[344 ]={}
BaseLogPartFctRef[344]['LogPF']=np.array([107.7,110.4,113.1,115.9,118.9,121.9,125.1,128.1,131.5,135.1,138.8,142.5,146.4,150.7,154.9,159.2,164.3,169.3,174.4,180.0,185.6,191.6,197.9,204.3,210.7,217.8,225.0,231.8,238.9,246.2])
BaseLogPartFctRef[ 344 ]['NbLabels']= 3
BaseLogPartFctRef[ 344 ]['NbCliques']= 159
BaseLogPartFctRef[ 344 ]['NbSites']= 98
BaseLogPartFctRef[ 344 ]['StdNgbhDivMoyNgbh']= 0.385003124456
#In [5]: ComputeBaseLogPartFctRef_NonReg(FirstIndex=(255+90),NbExtraIndex=30,BetaMax=1.45,DeltaBeta=0.05,NbLabels=3,NBX=10,NBY=10,NBZ=10,BetaGeneration=0.6)
BaseLogPartFctRef[345 ]={}
BaseLogPartFctRef[345]['LogPF']=np.array([1004.1,1043.5,1084.2,1126.3,1171.2,1215.7,1263.0,1314.5,1368.5,1427.8,1494.6,1571.3,1659.2,1759.1,1857.7,1962.2,2068.9,2176.0,2285.6,2397.1,2509.4,2622.9,2737.1,2851.6,2966.1,3081.3,3196.6,3312.4,3428.2,3544.1])
BaseLogPartFctRef[ 345 ]['NbLabels']= 3
BaseLogPartFctRef[ 345 ]['NbCliques']= 2334
BaseLogPartFctRef[ 345 ]['NbSites']= 914
BaseLogPartFctRef[ 345 ]['StdNgbhDivMoyNgbh']= 0.184520051268
BaseLogPartFctRef[346 ]={}
BaseLogPartFctRef[346]['LogPF']=np.array([1018.4,1058.8,1100.6,1143.0,1187.3,1233.0,1280.8,1332.5,1388.4,1453.6,1523.1,1605.0,1687.7,1781.5,1884.0,1987.9,2095.0,2204.2,2316.2,2428.8,2541.4,2655.9,2771.3,2887.1,3004.2,3120.9,3237.6,3354.9,3472.2,3589.8])
BaseLogPartFctRef[ 346 ]['NbLabels']= 3
BaseLogPartFctRef[ 346 ]['NbCliques']= 2366
BaseLogPartFctRef[ 346 ]['NbSites']= 927
BaseLogPartFctRef[ 346 ]['StdNgbhDivMoyNgbh']= 0.177286604214
BaseLogPartFctRef[347 ]={}
BaseLogPartFctRef[347]['LogPF']=np.array([1008.5,1048.1,1088.7,1131.6,1176.0,1220.9,1267.1,1319.5,1376.9,1439.5,1511.0,1587.1,1672.6,1765.5,1865.3,1965.9,2072.0,2182.1,2293.7,2404.4,2516.8,2629.5,2742.6,2857.0,2971.9,3087.2,3203.1,3319.3,3435.2,3551.8])
BaseLogPartFctRef[ 347 ]['NbLabels']= 3
BaseLogPartFctRef[ 347 ]['NbCliques']= 2337
BaseLogPartFctRef[ 347 ]['NbSites']= 918
BaseLogPartFctRef[ 347 ]['StdNgbhDivMoyNgbh']= 0.182266929196
BaseLogPartFctRef[348 ]={}
BaseLogPartFctRef[348]['LogPF']=np.array([1030.5,1070.9,1113.3,1157.0,1201.8,1248.9,1297.8,1348.1,1404.8,1466.7,1541.0,1630.4,1724.4,1819.8,1924.2,2031.6,2140.2,2253.0,2367.4,2483.5,2600.4,2717.9,2837.0,2955.8,3075.2,3194.0,3314.3,3435.2,3556.1,3676.8])
BaseLogPartFctRef[ 348 ]['NbLabels']= 3
BaseLogPartFctRef[ 348 ]['NbCliques']= 2424
BaseLogPartFctRef[ 348 ]['NbSites']= 938
BaseLogPartFctRef[ 348 ]['StdNgbhDivMoyNgbh']= 0.173444289427
BaseLogPartFctRef[349 ]={}
BaseLogPartFctRef[349]['LogPF']=np.array([1027.2,1067.7,1109.3,1152.4,1196.8,1242.8,1290.4,1342.5,1396.1,1456.1,1535.3,1614.4,1706.5,1800.5,1905.4,2010.0,2119.2,2232.0,2345.8,2460.5,2575.3,2692.3,2810.1,2928.8,3047.8,3166.5,3285.7,3405.2,3524.6,3644.6])
BaseLogPartFctRef[ 349 ]['NbLabels']= 3
BaseLogPartFctRef[ 349 ]['NbCliques']= 2409
BaseLogPartFctRef[ 349 ]['NbSites']= 935
BaseLogPartFctRef[ 349 ]['StdNgbhDivMoyNgbh']= 0.173752738849
BaseLogPartFctRef[350 ]={}
BaseLogPartFctRef[350]['LogPF']=np.array([1022.8,1063.4,1106.2,1148.6,1193.7,1240.6,1288.8,1340.4,1393.8,1453.9,1530.0,1615.0,1705.3,1806.4,1908.8,2018.0,2125.8,2238.3,2351.2,2465.9,2580.9,2697.8,2815.6,2932.9,3051.1,3169.4,3288.2,3407.4,3526.4,3645.8])
BaseLogPartFctRef[ 350 ]['NbLabels']= 3
BaseLogPartFctRef[ 350 ]['NbCliques']= 2396
BaseLogPartFctRef[ 350 ]['NbSites']= 931
BaseLogPartFctRef[ 350 ]['StdNgbhDivMoyNgbh']= 0.16981940557
BaseLogPartFctRef[351 ]={}
BaseLogPartFctRef[351]['LogPF']=np.array([1021.7,1061.8,1103.3,1147.2,1190.6,1236.9,1284.7,1335.0,1390.6,1451.2,1524.1,1607.1,1699.5,1798.5,1901.5,2006.8,2117.9,2228.8,2338.2,2451.9,2565.9,2681.6,2797.6,2915.2,3031.5,3149.0,3266.8,3384.5,3503.0,3621.3])
BaseLogPartFctRef[ 351 ]['NbLabels']= 3
BaseLogPartFctRef[ 351 ]['NbCliques']= 2384
BaseLogPartFctRef[ 351 ]['NbSites']= 930
BaseLogPartFctRef[ 351 ]['StdNgbhDivMoyNgbh']= 0.178554698558
BaseLogPartFctRef[352 ]={}
BaseLogPartFctRef[352]['LogPF']=np.array([1049.2,1091.5,1134.9,1179.9,1225.7,1273.2,1323.6,1376.8,1435.2,1497.8,1577.9,1669.8,1763.4,1861.3,1966.6,2077.1,2192.3,2309.9,2426.2,2543.6,2663.2,2784.4,2906.2,3028.3,3151.2,3274.3,3397.4,3520.4,3644.0,3768.0])
BaseLogPartFctRef[ 352 ]['NbLabels']= 3
BaseLogPartFctRef[ 352 ]['NbCliques']= 2487
BaseLogPartFctRef[ 352 ]['NbSites']= 955
BaseLogPartFctRef[ 352 ]['StdNgbhDivMoyNgbh']= 0.162737513894
BaseLogPartFctRef[353 ]={}
BaseLogPartFctRef[353]['LogPF']=np.array([1021.7,1062.4,1105.4,1148.4,1193.4,1239.0,1287.2,1337.1,1390.0,1447.7,1519.0,1604.4,1693.9,1788.3,1893.0,2002.7,2112.3,2223.2,2336.6,2452.1,2566.7,2682.0,2798.1,2915.4,3033.0,3150.7,3269.5,3388.2,3507.1,3625.8])
BaseLogPartFctRef[ 353 ]['NbLabels']= 3
BaseLogPartFctRef[ 353 ]['NbCliques']= 2393
BaseLogPartFctRef[ 353 ]['NbSites']= 930
BaseLogPartFctRef[ 353 ]['StdNgbhDivMoyNgbh']= 0.171279097606
BaseLogPartFctRef[354 ]={}
BaseLogPartFctRef[354]['LogPF']=np.array([1043.7,1085.0,1127.2,1171.5,1217.8,1265.0,1313.9,1365.6,1421.8,1487.5,1563.7,1648.9,1741.8,1838.0,1945.5,2053.4,2166.7,2279.3,2395.2,2511.9,2631.0,2750.0,2869.0,2989.3,3110.5,3231.9,3353.8,3475.5,3597.8,3720.1])
BaseLogPartFctRef[ 354 ]['NbLabels']= 3
BaseLogPartFctRef[ 354 ]['NbCliques']= 2458
BaseLogPartFctRef[ 354 ]['NbSites']= 950
BaseLogPartFctRef[ 354 ]['StdNgbhDivMoyNgbh']= 0.16747255202
BaseLogPartFctRef[355 ]={}
BaseLogPartFctRef[355]['LogPF']=np.array([1020.6,1060.3,1102.0,1145.4,1190.1,1235.7,1284.4,1334.7,1391.5,1450.2,1527.6,1607.0,1702.3,1800.2,1903.8,2008.4,2118.9,2229.3,2342.7,2457.0,2571.6,2688.0,2804.9,2921.9,3040.0,3158.3,3276.3,3394.6,3513.5,3632.5])
BaseLogPartFctRef[ 355 ]['NbLabels']= 3
BaseLogPartFctRef[ 355 ]['NbCliques']= 2391
BaseLogPartFctRef[ 355 ]['NbSites']= 929
BaseLogPartFctRef[ 355 ]['StdNgbhDivMoyNgbh']= 0.177453346132
BaseLogPartFctRef[356 ]={}
BaseLogPartFctRef[356]['LogPF']=np.array([1037.1,1078.0,1121.0,1165.6,1211.4,1258.6,1308.4,1359.7,1415.6,1478.1,1555.0,1639.7,1736.0,1836.0,1942.7,2053.9,2169.2,2285.1,2402.9,2519.1,2637.8,2758.7,2877.9,2998.6,3119.3,3240.6,3361.9,3483.4,3605.5,3727.7])
BaseLogPartFctRef[ 356 ]['NbLabels']= 3
BaseLogPartFctRef[ 356 ]['NbCliques']= 2455
BaseLogPartFctRef[ 356 ]['NbSites']= 944
BaseLogPartFctRef[ 356 ]['StdNgbhDivMoyNgbh']= 0.167934586556
BaseLogPartFctRef[357 ]={}
BaseLogPartFctRef[357]['LogPF']=np.array([1044.8,1087.0,1130.1,1174.7,1221.5,1268.5,1318.0,1369.8,1427.7,1494.3,1565.5,1657.4,1755.9,1858.6,1969.6,2077.3,2187.7,2303.5,2420.9,2540.2,2657.8,2779.0,2899.9,3021.3,3143.2,3264.7,3387.0,3509.8,3632.6,3755.5])
BaseLogPartFctRef[ 357 ]['NbLabels']= 3
BaseLogPartFctRef[ 357 ]['NbCliques']= 2474
BaseLogPartFctRef[ 357 ]['NbSites']= 951
BaseLogPartFctRef[ 357 ]['StdNgbhDivMoyNgbh']= 0.159926437281
BaseLogPartFctRef[358 ]={}
BaseLogPartFctRef[358]['LogPF']=np.array([1027.2,1068.7,1110.8,1154.3,1199.3,1246.3,1294.3,1345.3,1398.9,1463.1,1530.8,1614.8,1707.3,1805.0,1907.5,2012.8,2122.8,2235.2,2348.0,2460.8,2575.3,2690.8,2807.7,2925.6,3043.4,3161.9,3281.1,3399.8,3519.0,3638.4])
BaseLogPartFctRef[ 358 ]['NbLabels']= 3
BaseLogPartFctRef[ 358 ]['NbCliques']= 2398
BaseLogPartFctRef[ 358 ]['NbSites']= 935
BaseLogPartFctRef[ 358 ]['StdNgbhDivMoyNgbh']= 0.168424957433
BaseLogPartFctRef[359 ]={}
BaseLogPartFctRef[359]['LogPF']=np.array([1005.2,1044.5,1084.6,1126.3,1169.2,1214.1,1261.3,1310.8,1367.8,1429.1,1494.6,1579.2,1660.0,1752.5,1853.8,1953.7,2062.1,2171.8,2281.7,2394.6,2506.9,2620.5,2734.9,2848.5,2963.6,3078.8,3194.5,3309.9,3426.2,3542.5])
BaseLogPartFctRef[ 359 ]['NbLabels']= 3
BaseLogPartFctRef[ 359 ]['NbCliques']= 2335
BaseLogPartFctRef[ 359 ]['NbSites']= 915
BaseLogPartFctRef[ 359 ]['StdNgbhDivMoyNgbh']= 0.185181218268
BaseLogPartFctRef[360 ]={}
BaseLogPartFctRef[360]['LogPF']=np.array([999.7,1039.3,1078.8,1119.7,1163.3,1208.3,1254.9,1302.4,1352.9,1416.3,1486.6,1566.9,1656.7,1753.3,1852.5,1955.1,2059.3,2166.0,2274.1,2384.5,2496.6,2608.4,2721.7,2835.1,2949.3,3063.8,3178.6,3293.4,3408.5,3524.1])
BaseLogPartFctRef[ 360 ]['NbLabels']= 3
BaseLogPartFctRef[ 360 ]['NbCliques']= 2318
BaseLogPartFctRef[ 360 ]['NbSites']= 910
BaseLogPartFctRef[ 360 ]['StdNgbhDivMoyNgbh']= 0.178226331535
BaseLogPartFctRef[361 ]={}
BaseLogPartFctRef[361]['LogPF']=np.array([1044.8,1087.3,1130.7,1174.9,1221.0,1270.3,1321.3,1375.1,1434.9,1498.8,1575.2,1663.4,1757.9,1861.7,1970.0,2081.3,2198.2,2313.7,2432.1,2551.0,2672.0,2793.0,2915.0,3037.7,3160.5,3283.9,3408.2,3531.9,3656.0,3779.9])
BaseLogPartFctRef[ 361 ]['NbLabels']= 3
BaseLogPartFctRef[ 361 ]['NbCliques']= 2492
BaseLogPartFctRef[ 361 ]['NbSites']= 951
BaseLogPartFctRef[ 361 ]['StdNgbhDivMoyNgbh']= 0.154629683739
BaseLogPartFctRef[362 ]={}
BaseLogPartFctRef[362]['LogPF']=np.array([1004.1,1044.2,1085.4,1128.0,1172.5,1217.5,1264.9,1314.3,1366.5,1429.4,1504.5,1584.0,1675.4,1770.6,1869.5,1974.5,2081.1,2188.0,2298.1,2408.1,2521.4,2636.5,2750.5,2865.0,2980.4,3096.1,3211.6,3328.1,3444.3,3560.7])
BaseLogPartFctRef[ 362 ]['NbLabels']= 3
BaseLogPartFctRef[ 362 ]['NbCliques']= 2339
BaseLogPartFctRef[ 362 ]['NbSites']= 914
BaseLogPartFctRef[ 362 ]['StdNgbhDivMoyNgbh']= 0.181130100505
BaseLogPartFctRef[363 ]={}
BaseLogPartFctRef[363]['LogPF']=np.array([1009.6,1049.8,1091.4,1134.6,1178.4,1223.9,1271.1,1321.1,1374.6,1433.9,1502.3,1590.1,1682.6,1775.7,1875.2,1981.6,2088.9,2197.8,2310.8,2423.5,2537.7,2653.5,2768.6,2885.0,3001.2,3118.2,3236.1,3353.0,3470.8,3589.1])
BaseLogPartFctRef[ 363 ]['NbLabels']= 3
BaseLogPartFctRef[ 363 ]['NbCliques']= 2376
BaseLogPartFctRef[ 363 ]['NbSites']= 919
BaseLogPartFctRef[ 363 ]['StdNgbhDivMoyNgbh']= 0.180711050403
BaseLogPartFctRef[364 ]={}
BaseLogPartFctRef[364]['LogPF']=np.array([1014.0,1053.8,1095.0,1138.0,1182.2,1228.5,1275.0,1325.0,1378.9,1439.9,1512.2,1599.5,1691.3,1790.0,1891.0,1997.1,2108.1,2218.1,2328.8,2443.4,2558.0,2673.3,2789.9,2905.9,3022.7,3139.8,3257.0,3374.3,3491.9,3609.7])
BaseLogPartFctRef[ 364 ]['NbLabels']= 3
BaseLogPartFctRef[ 364 ]['NbCliques']= 2374
BaseLogPartFctRef[ 364 ]['NbSites']= 923
BaseLogPartFctRef[ 364 ]['StdNgbhDivMoyNgbh']= 0.180087914656
BaseLogPartFctRef[365 ]={}
BaseLogPartFctRef[365]['LogPF']=np.array([1012.9,1052.3,1093.5,1135.2,1179.2,1225.4,1273.5,1322.0,1373.3,1438.5,1509.5,1584.3,1672.8,1768.1,1867.5,1972.7,2081.4,2193.3,2303.3,2414.5,2525.7,2640.6,2755.1,2871.2,2987.2,3103.1,3219.7,3336.3,3453.1,3570.2])
BaseLogPartFctRef[ 365 ]['NbLabels']= 3
BaseLogPartFctRef[ 365 ]['NbCliques']= 2352
BaseLogPartFctRef[ 365 ]['NbSites']= 922
BaseLogPartFctRef[ 365 ]['StdNgbhDivMoyNgbh']= 0.17804221797
BaseLogPartFctRef[366 ]={}
BaseLogPartFctRef[366]['LogPF']=np.array([1027.2,1068.6,1110.9,1154.2,1198.9,1246.0,1294.5,1344.1,1401.2,1461.7,1535.7,1617.1,1710.3,1809.3,1915.5,2024.9,2134.3,2247.4,2361.0,2475.1,2591.2,2709.6,2827.3,2945.2,3063.6,3182.9,3302.5,3422.2,3542.0,3662.0])
BaseLogPartFctRef[ 366 ]['NbLabels']= 3
BaseLogPartFctRef[ 366 ]['NbCliques']= 2413
BaseLogPartFctRef[ 366 ]['NbSites']= 935
BaseLogPartFctRef[ 366 ]['StdNgbhDivMoyNgbh']= 0.168958832152
BaseLogPartFctRef[367 ]={}
BaseLogPartFctRef[367]['LogPF']=np.array([1036.0,1077.8,1120.3,1164.2,1209.2,1256.6,1305.3,1356.4,1412.5,1475.1,1552.1,1631.2,1722.6,1822.1,1927.6,2035.9,2149.2,2262.2,2377.2,2494.6,2612.6,2730.9,2849.3,2967.6,3087.9,3208.2,3328.8,3449.7,3570.7,3692.0])
BaseLogPartFctRef[ 367 ]['NbLabels']= 3
BaseLogPartFctRef[ 367 ]['NbCliques']= 2435
BaseLogPartFctRef[ 367 ]['NbSites']= 943
BaseLogPartFctRef[ 367 ]['StdNgbhDivMoyNgbh']= 0.158793113095
BaseLogPartFctRef[368 ]={}
BaseLogPartFctRef[368]['LogPF']=np.array([998.6,1037.1,1077.1,1118.4,1161.1,1204.5,1249.7,1297.4,1349.4,1406.2,1470.2,1548.3,1634.2,1725.1,1818.8,1921.9,2021.0,2123.5,2229.0,2337.9,2448.8,2559.5,2670.2,2782.3,2894.4,3006.7,3119.6,3233.2,3346.4,3460.0])
BaseLogPartFctRef[ 368 ]['NbLabels']= 3
BaseLogPartFctRef[ 368 ]['NbCliques']= 2284
BaseLogPartFctRef[ 368 ]['NbSites']= 909
BaseLogPartFctRef[ 368 ]['StdNgbhDivMoyNgbh']= 0.184650972235
BaseLogPartFctRef[369 ]={}
BaseLogPartFctRef[369]['LogPF']=np.array([1010.7,1050.8,1091.9,1134.4,1178.6,1224.2,1272.4,1321.7,1374.5,1437.9,1508.8,1588.6,1678.4,1777.1,1876.9,1980.7,2088.4,2197.7,2307.4,2418.2,2530.8,2645.3,2759.8,2874.8,2991.0,3106.5,3222.6,3339.4,3455.9,3572.2])
BaseLogPartFctRef[ 369 ]['NbLabels']= 3
BaseLogPartFctRef[ 369 ]['NbCliques']= 2350
BaseLogPartFctRef[ 369 ]['NbSites']= 920
BaseLogPartFctRef[ 369 ]['StdNgbhDivMoyNgbh']= 0.174309545746
BaseLogPartFctRef[370 ]={}
BaseLogPartFctRef[370]['LogPF']=np.array([1016.2,1055.7,1097.0,1140.2,1184.0,1229.6,1278.0,1327.4,1381.6,1443.0,1512.4,1594.6,1685.4,1782.4,1886.0,1991.3,2096.8,2207.3,2319.9,2431.5,2545.3,2659.3,2774.2,2890.0,3006.1,3122.2,3238.5,3355.4,3472.5,3589.9])
BaseLogPartFctRef[ 370 ]['NbLabels']= 3
BaseLogPartFctRef[ 370 ]['NbCliques']= 2360
BaseLogPartFctRef[ 370 ]['NbSites']= 925
BaseLogPartFctRef[ 370 ]['StdNgbhDivMoyNgbh']= 0.177824088142
BaseLogPartFctRef[371 ]={}
BaseLogPartFctRef[371]['LogPF']=np.array([1036.0,1076.9,1119.4,1163.3,1209.1,1256.1,1305.3,1355.9,1412.8,1477.0,1551.5,1633.0,1724.9,1823.6,1927.3,2037.1,2146.0,2258.9,2373.7,2489.9,2606.5,2723.6,2842.0,2960.9,3080.3,3199.9,3320.0,3440.1,3560.0,3680.2])
BaseLogPartFctRef[ 371 ]['NbLabels']= 3
BaseLogPartFctRef[ 371 ]['NbCliques']= 2420
BaseLogPartFctRef[ 371 ]['NbSites']= 943
BaseLogPartFctRef[ 371 ]['StdNgbhDivMoyNgbh']= 0.163616515935
BaseLogPartFctRef[372 ]={}
BaseLogPartFctRef[372]['LogPF']=np.array([1017.3,1056.9,1098.5,1141.1,1184.6,1231.2,1279.4,1330.7,1383.6,1441.7,1512.0,1595.1,1683.0,1779.5,1881.6,1986.2,2095.9,2208.2,2318.7,2431.9,2545.7,2659.9,2775.8,2892.3,3009.2,3126.3,3243.6,3361.1,3478.7,3596.3])
BaseLogPartFctRef[ 372 ]['NbLabels']= 3
BaseLogPartFctRef[ 372 ]['NbCliques']= 2369
BaseLogPartFctRef[ 372 ]['NbSites']= 926
BaseLogPartFctRef[ 372 ]['StdNgbhDivMoyNgbh']= 0.169912475535
BaseLogPartFctRef[373 ]={}
BaseLogPartFctRef[373]['LogPF']=np.array([1015.1,1055.1,1095.8,1138.5,1183.0,1229.0,1275.3,1324.3,1375.4,1435.9,1511.5,1587.4,1679.3,1775.9,1875.5,1981.3,2087.8,2198.5,2308.9,2421.6,2534.0,2648.0,2762.4,2878.7,2994.5,3110.9,3227.6,3344.7,3462.3,3579.2])
BaseLogPartFctRef[ 373 ]['NbLabels']= 3
BaseLogPartFctRef[ 373 ]['NbCliques']= 2361
BaseLogPartFctRef[ 373 ]['NbSites']= 924
BaseLogPartFctRef[ 373 ]['StdNgbhDivMoyNgbh']= 0.170465756475
BaseLogPartFctRef[374 ]={}
BaseLogPartFctRef[374]['LogPF']=np.array([1017.3,1057.8,1099.8,1142.6,1186.0,1232.0,1280.3,1330.6,1385.5,1447.7,1525.9,1606.6,1694.2,1787.5,1888.3,1995.2,2100.7,2211.0,2322.1,2435.0,2548.7,2662.5,2777.4,2893.5,3010.1,3126.9,3243.7,3361.0,3478.7,3596.3])
BaseLogPartFctRef[ 374 ]['NbLabels']= 3
BaseLogPartFctRef[ 374 ]['NbCliques']= 2365
BaseLogPartFctRef[ 374 ]['NbSites']= 926
BaseLogPartFctRef[ 374 ]['StdNgbhDivMoyNgbh']= 0.176840058158
#In [6]: ComputeBaseLogPartFctRef_NonReg(FirstIndex=(255+120),NbExtraIndex=30,BetaMax=1.45,DeltaBeta=0.05,NbLabels=3,NBX=15,NBY=15,NBZ=15,BetaGeneration=0.2)
BaseLogPartFctRef[ 375 ]={}
BaseLogPartFctRef[375]['LogPF']=np.array([50.54,51.58,52.63,53.71,54.85,56.04,57.22,58.50,59.78,61.13,62.51,63.94,65.41,66.99,68.57,70.17,71.92,73.66,75.43,77.24,79.19,81.18,83.15,85.28,87.42,89.62,91.80,94.13,96.51,98.91])
BaseLogPartFctRef[ 375 ]['NbLabels']= 3
BaseLogPartFctRef[ 375 ]['NbCliques']= 60
BaseLogPartFctRef[ 375 ]['NbSites']= 46
BaseLogPartFctRef[ 375 ]['StdNgbhDivMoyNgbh']= 0.43870513197
BaseLogPartFctRef[376 ]={}
BaseLogPartFctRef[376]['LogPF']=np.array([29.66,30.30,30.98,31.67,32.39,33.12,33.91,34.67,35.48,36.31,37.15,38.05,38.98,39.96,40.95,41.99,43.08,44.31,45.50,46.72,48.03,49.35,50.72,52.21,53.69,55.27,56.84,58.46,60.05,61.74])
BaseLogPartFctRef[ 376 ]['NbLabels']= 3
BaseLogPartFctRef[ 376 ]['NbCliques']= 38
BaseLogPartFctRef[ 376 ]['NbSites']= 27
BaseLogPartFctRef[ 376 ]['StdNgbhDivMoyNgbh']= 0.475103961735
BaseLogPartFctRef[377 ]={}
BaseLogPartFctRef[377]['LogPF']=np.array([32.96,33.53,34.14,34.76,35.42,36.10,36.81,37.52,38.25,39.04,39.80,40.64,41.46,42.32,43.19,44.11,45.07,46.04,47.02,48.02,49.10,50.21,51.29,52.50,53.65,54.85,56.08,57.41,58.69,59.98])
BaseLogPartFctRef[ 377 ]['NbLabels']= 3
BaseLogPartFctRef[ 377 ]['NbCliques']= 35
BaseLogPartFctRef[ 377 ]['NbSites']= 30
BaseLogPartFctRef[ 377 ]['StdNgbhDivMoyNgbh']= 0.37565966167
BaseLogPartFctRef[378 ]={}
BaseLogPartFctRef[378]['LogPF']=np.array([98.88,101.2,103.6,106.1,108.5,111.2,113.9,116.9,119.7,122.8,126.1,129.2,132.8,136.3,139.8,143.7,147.7,151.9,156.3,160.6,165.2,170.1,175.2,180.7,186.0,191.4,197.1,202.7,208.6,214.4])
BaseLogPartFctRef[ 378 ]['NbLabels']= 3
BaseLogPartFctRef[ 378 ]['NbCliques']= 137
BaseLogPartFctRef[ 378 ]['NbSites']= 90
BaseLogPartFctRef[ 378 ]['StdNgbhDivMoyNgbh']= 0.407746584779
BaseLogPartFctRef[379 ]={}
BaseLogPartFctRef[379]['LogPF']=np.array([51.63,52.93,54.28,55.68,57.12,58.61,60.17,61.82,63.49,65.20,67.05,68.93,70.87,72.87,74.96,77.12,79.37,81.78,84.48,87.16,89.92,92.77,95.83,98.92,102.1,105.4,108.7,112.0,115.6,119.0])
BaseLogPartFctRef[ 379 ]['NbLabels']= 3
BaseLogPartFctRef[ 379 ]['NbCliques']= 77
BaseLogPartFctRef[ 379 ]['NbSites']= 47
BaseLogPartFctRef[ 379 ]['StdNgbhDivMoyNgbh']= 0.346934416658
BaseLogPartFctRef[380 ]={}
BaseLogPartFctRef[380]['LogPF']=np.array([36.25,37.00,37.78,38.62,39.51,40.39,41.31,42.27,43.25,44.29,45.32,46.45,47.58,48.78,49.94,51.18,52.49,53.91,55.23,56.72,58.17,59.75,61.39,63.03,64.66,66.42,68.23,69.97,71.83,73.79])
BaseLogPartFctRef[ 380 ]['NbLabels']= 3
BaseLogPartFctRef[ 380 ]['NbCliques']= 46
BaseLogPartFctRef[ 380 ]['NbSites']= 33
BaseLogPartFctRef[ 380 ]['StdNgbhDivMoyNgbh']= 0.370858752796
BaseLogPartFctRef[381 ]={}
BaseLogPartFctRef[381]['LogPF']=np.array([34.06,34.81,35.62,36.43,37.26,38.13,39.02,39.93,40.85,41.84,42.90,43.97,45.06,46.22,47.35,48.58,49.87,51.19,52.62,54.01,55.53,57.08,58.70,60.45,62.20,63.93,65.73,67.53,69.41,71.40])
BaseLogPartFctRef[ 381 ]['NbLabels']= 3
BaseLogPartFctRef[ 381 ]['NbCliques']= 45
BaseLogPartFctRef[ 381 ]['NbSites']= 31
BaseLogPartFctRef[ 381 ]['StdNgbhDivMoyNgbh']= 0.409167690117
BaseLogPartFctRef[382 ]={}
BaseLogPartFctRef[382]['LogPF']=np.array([35.16,35.88,36.63,37.39,38.21,39.03,39.91,40.77,41.67,42.62,43.64,44.69,45.75,46.88,48.05,49.23,50.43,51.73,53.02,54.41,55.86,57.31,58.82,60.44,62.04,63.77,65.46,67.19,68.94,70.75])
BaseLogPartFctRef[ 382 ]['NbLabels']= 3
BaseLogPartFctRef[ 382 ]['NbCliques']= 43
BaseLogPartFctRef[ 382 ]['NbSites']= 32
BaseLogPartFctRef[ 382 ]['StdNgbhDivMoyNgbh']= 0.407396352413
BaseLogPartFctRef[383 ]={}
BaseLogPartFctRef[383]['LogPF']=np.array([50.54,51.54,52.59,53.65,54.78,55.95,57.15,58.41,59.70,61.00,62.39,63.79,65.25,66.76,68.29,69.94,71.61,73.26,75.07,76.99,78.84,80.80,82.91,84.98,87.19,89.43,91.74,94.09,96.48,98.88])
BaseLogPartFctRef[ 383 ]['NbLabels']= 3
BaseLogPartFctRef[ 383 ]['NbCliques']= 60
BaseLogPartFctRef[ 383 ]['NbSites']= 46
BaseLogPartFctRef[ 383 ]['StdNgbhDivMoyNgbh']= 0.417276648654
BaseLogPartFctRef[384 ]={}
BaseLogPartFctRef[384]['LogPF']=np.array([36.25,37.14,38.06,39.05,40.01,41.02,42.09,43.16,44.28,45.40,46.59,47.83,49.22,50.58,52.02,53.47,55.09,56.86,58.73,60.73,62.68,64.65,66.74,68.90,71.10,73.26,75.61,77.96,80.31,82.71])
BaseLogPartFctRef[ 384 ]['NbLabels']= 3
BaseLogPartFctRef[ 384 ]['NbCliques']= 53
BaseLogPartFctRef[ 384 ]['NbSites']= 33
BaseLogPartFctRef[ 384 ]['StdNgbhDivMoyNgbh']= 0.398545837133
BaseLogPartFctRef[385 ]={}
BaseLogPartFctRef[385]['LogPF']=np.array([47.24,48.18,49.15,50.18,51.23,52.32,53.49,54.67,55.89,57.15,58.46,59.81,61.20,62.63,64.17,65.71,67.29,68.95,70.64,72.30,74.08,75.95,77.88,79.88,81.93,83.98,86.14,88.33,90.59,92.91])
BaseLogPartFctRef[ 385 ]['NbLabels']= 3
BaseLogPartFctRef[ 385 ]['NbCliques']= 57
BaseLogPartFctRef[ 385 ]['NbSites']= 43
BaseLogPartFctRef[ 385 ]['StdNgbhDivMoyNgbh']= 0.345278070397
BaseLogPartFctRef[386 ]={}
BaseLogPartFctRef[386]['LogPF']=np.array([48.34,49.42,50.57,51.76,52.99,54.25,55.56,56.89,58.28,59.72,61.21,62.73,64.36,66.00,67.85,69.61,71.54,73.53,75.61,77.82,80.07,82.35,84.82,87.16,89.76,92.33,94.97,97.78,100.7,103.5])
BaseLogPartFctRef[ 386 ]['NbLabels']= 3
BaseLogPartFctRef[ 386 ]['NbCliques']= 65
BaseLogPartFctRef[ 386 ]['NbSites']= 44
BaseLogPartFctRef[ 386 ]['StdNgbhDivMoyNgbh']= 0.410925150947
BaseLogPartFctRef[387 ]={}
BaseLogPartFctRef[387]['LogPF']=np.array([28.56,29.15,29.75,30.38,31.01,31.68,32.36,33.07,33.79,34.54,35.34,36.12,36.92,37.79,38.69,39.64,40.62,41.59,42.57,43.57,44.59,45.67,46.83,48.00,49.29,50.50,51.72,52.99,54.32,55.66])
BaseLogPartFctRef[ 387 ]['NbLabels']= 3
BaseLogPartFctRef[ 387 ]['NbCliques']= 34
BaseLogPartFctRef[ 387 ]['NbSites']= 26
BaseLogPartFctRef[ 387 ]['StdNgbhDivMoyNgbh']= 0.323685609227
BaseLogPartFctRef[388 ]={}
BaseLogPartFctRef[388]['LogPF']=np.array([46.14,47.17,48.23,49.32,50.46,51.62,52.84,54.10,55.35,56.71,58.15,59.66,61.15,62.72,64.31,65.97,67.69,69.65,71.55,73.47,75.60,77.79,80.02,82.35,84.83,87.33,89.76,92.35,94.89,97.53])
BaseLogPartFctRef[ 388 ]['NbLabels']= 3
BaseLogPartFctRef[ 388 ]['NbCliques']= 61
BaseLogPartFctRef[ 388 ]['NbSites']= 42
BaseLogPartFctRef[ 388 ]['StdNgbhDivMoyNgbh']= 0.438847523211
BaseLogPartFctRef[389 ]={}
BaseLogPartFctRef[389]['LogPF']=np.array([60.42,61.70,63.00,64.34,65.71,67.20,68.64,70.16,71.77,73.35,74.97,76.66,78.52,80.33,82.13,84.36,86.35,88.49,90.66,92.96,95.15,97.76,100.5,103.1,105.7,108.7,111.7,114.7,117.8,120.9])
BaseLogPartFctRef[ 389 ]['NbLabels']= 3
BaseLogPartFctRef[ 389 ]['NbCliques']= 74
BaseLogPartFctRef[ 389 ]['NbSites']= 55
BaseLogPartFctRef[ 389 ]['StdNgbhDivMoyNgbh']= 0.404940094816
BaseLogPartFctRef[390 ]={}
BaseLogPartFctRef[390]['LogPF']=np.array([40.65,41.46,42.30,43.15,44.04,44.97,45.91,46.92,47.95,49.04,50.10,51.24,52.38,53.58,54.85,56.14,57.44,58.81,60.24,61.66,63.15,64.75,66.43,68.17,69.91,71.71,73.55,75.41,77.31,79.25])
BaseLogPartFctRef[ 390 ]['NbLabels']= 3
BaseLogPartFctRef[ 390 ]['NbCliques']= 48
BaseLogPartFctRef[ 390 ]['NbSites']= 37
BaseLogPartFctRef[ 390 ]['StdNgbhDivMoyNgbh']= 0.448472572091
BaseLogPartFctRef[391 ]={}
BaseLogPartFctRef[391]['LogPF']=np.array([39.55,40.31,41.09,41.92,42.75,43.62,44.50,45.43,46.37,47.34,48.38,49.47,50.60,51.74,52.90,54.11,55.33,56.65,57.92,59.28,60.72,62.14,63.61,65.08,66.62,68.21,69.83,71.49,73.21,74.98])
BaseLogPartFctRef[ 391 ]['NbLabels']= 3
BaseLogPartFctRef[ 391 ]['NbCliques']= 45
BaseLogPartFctRef[ 391 ]['NbSites']= 36
BaseLogPartFctRef[ 391 ]['StdNgbhDivMoyNgbh']= 0.324074074074
BaseLogPartFctRef[392 ]={}
BaseLogPartFctRef[392]['LogPF']=np.array([28.56,29.12,29.69,30.29,30.92,31.57,32.22,32.88,33.57,34.30,35.04,35.82,36.62,37.41,38.30,39.16,40.04,40.97,41.96,43.00,44.01,45.06,46.19,47.30,48.50,49.67,50.90,52.13,53.45,54.76])
BaseLogPartFctRef[ 392 ]['NbLabels']= 3
BaseLogPartFctRef[ 392 ]['NbCliques']= 33
BaseLogPartFctRef[ 392 ]['NbSites']= 26
BaseLogPartFctRef[ 392 ]['StdNgbhDivMoyNgbh']= 0.319185639572
BaseLogPartFctRef[393 ]={}
BaseLogPartFctRef[393]['LogPF']=np.array([62.62,64.15,65.75,67.37,69.06,70.77,72.51,74.38,76.36,78.34,80.38,82.36,84.41,86.77,89.12,91.65,94.23,96.95,99.79,102.9,106.0,109.5,113.0,116.8,120.4,124.4,128.5,132.5,136.6,140.8])
BaseLogPartFctRef[ 393 ]['NbLabels']= 3
BaseLogPartFctRef[ 393 ]['NbCliques']= 91
BaseLogPartFctRef[ 393 ]['NbSites']= 57
BaseLogPartFctRef[ 393 ]['StdNgbhDivMoyNgbh']= 0.361955295314
BaseLogPartFctRef[394 ]={}
BaseLogPartFctRef[394]['LogPF']=np.array([51.63,52.79,53.99,55.20,56.44,57.73,59.06,60.47,61.88,63.36,64.90,66.51,68.16,69.83,71.62,73.54,75.46,77.41,79.53,81.70,83.96,86.38,88.72,91.15,93.74,96.27,99.02,101.8,104.5,107.4])
BaseLogPartFctRef[ 394 ]['NbLabels']= 3
BaseLogPartFctRef[ 394 ]['NbCliques']= 67
BaseLogPartFctRef[ 394 ]['NbSites']= 47
BaseLogPartFctRef[ 394 ]['StdNgbhDivMoyNgbh']= 0.406664275469
BaseLogPartFctRef[395 ]={}
BaseLogPartFctRef[395]['LogPF']=np.array([49.44,50.53,51.64,52.82,54.01,55.24,56.48,57.81,59.25,60.68,62.10,63.64,65.22,66.87,68.57,70.33,72.06,73.93,75.99,78.07,80.20,82.38,84.61,87.04,89.48,92.07,94.68,97.28,99.96,102.7])
BaseLogPartFctRef[ 395 ]['NbLabels']= 3
BaseLogPartFctRef[ 395 ]['NbCliques']= 64
BaseLogPartFctRef[ 395 ]['NbSites']= 45
BaseLogPartFctRef[ 395 ]['StdNgbhDivMoyNgbh']= 0.389957608886
BaseLogPartFctRef[396 ]={}
BaseLogPartFctRef[396]['LogPF']=np.array([43.94,45.08,46.20,47.35,48.56,49.77,51.03,52.34,53.71,55.15,56.62,58.16,59.76,61.44,63.13,64.89,66.86,68.91,71.15,73.28,75.53,77.93,80.42,82.88,85.60,88.37,91.23,94.02,96.82,99.69])
BaseLogPartFctRef[ 396 ]['NbLabels']= 3
BaseLogPartFctRef[ 396 ]['NbCliques']= 64
BaseLogPartFctRef[ 396 ]['NbSites']= 40
BaseLogPartFctRef[ 396 ]['StdNgbhDivMoyNgbh']= 0.386605096936
BaseLogPartFctRef[397 ]={}
BaseLogPartFctRef[397]['LogPF']=np.array([54.93,56.00,57.14,58.31,59.45,60.72,62.02,63.29,64.62,66.07,67.50,68.99,70.43,72.02,73.64,75.42,77.15,78.99,80.83,82.72,84.45,86.30,88.33,90.62,92.97,95.32,97.73,100.1,102.4,104.8])
BaseLogPartFctRef[ 397 ]['NbLabels']= 3
BaseLogPartFctRef[ 397 ]['NbCliques']= 64
BaseLogPartFctRef[ 397 ]['NbSites']= 50
BaseLogPartFctRef[ 397 ]['StdNgbhDivMoyNgbh']= 0.384035546247
BaseLogPartFctRef[398 ]={}
BaseLogPartFctRef[398]['LogPF']=np.array([49.44,50.34,51.28,52.24,53.27,54.33,55.38,56.48,57.64,58.84,60.09,61.38,62.68,64.04,65.39,66.78,68.28,69.80,71.33,72.96,74.60,76.30,78.09,79.84,81.71,83.62,85.60,87.63,89.59,91.66])
BaseLogPartFctRef[ 398 ]['NbLabels']= 3
BaseLogPartFctRef[ 398 ]['NbCliques']= 54
BaseLogPartFctRef[ 398 ]['NbSites']= 45
BaseLogPartFctRef[ 398 ]['StdNgbhDivMoyNgbh']= 0.387929445501
BaseLogPartFctRef[399 ]={}
BaseLogPartFctRef[399]['LogPF']=np.array([76.90,78.46,80.09,81.77,83.51,85.39,87.25,89.14,91.11,93.21,95.36,97.66,100.1,102.6,105.1,107.6,110.3,113.1,116.0,118.8,121.6,124.8,128.1,131.5,135.0,138.6,142.0,145.6,149.3,152.9])
BaseLogPartFctRef[ 399 ]['NbLabels']= 3
BaseLogPartFctRef[ 399 ]['NbCliques']= 95
BaseLogPartFctRef[ 399 ]['NbSites']= 70
BaseLogPartFctRef[ 399 ]['StdNgbhDivMoyNgbh']= 0.369039376788
BaseLogPartFctRef[400 ]={}
BaseLogPartFctRef[400]['LogPF']=np.array([64.82,66.29,67.82,69.35,70.93,72.55,74.29,76.10,77.90,79.79,81.64,83.70,85.96,88.21,90.46,92.78,95.13,97.77,100.2,103.0,105.8,108.7,111.9,115.0,118.4,121.7,125.0,128.5,131.8,135.4])
BaseLogPartFctRef[ 400 ]['NbLabels']= 3
BaseLogPartFctRef[ 400 ]['NbCliques']= 86
BaseLogPartFctRef[ 400 ]['NbSites']= 59
BaseLogPartFctRef[ 400 ]['StdNgbhDivMoyNgbh']= 0.358206345182
BaseLogPartFctRef[401 ]={}
BaseLogPartFctRef[401]['LogPF']=np.array([73.61,75.16,76.75,78.41,80.19,81.91,83.69,85.65,87.56,89.45,91.46,93.83,96.00,98.55,101.0,103.5,106.0,108.7,111.6,114.4,117.3,120.5,123.8,127.0,130.5,133.8,137.6,141.3,144.8,148.6])
BaseLogPartFctRef[ 401 ]['NbLabels']= 3
BaseLogPartFctRef[ 401 ]['NbCliques']= 93
BaseLogPartFctRef[ 401 ]['NbSites']= 67
BaseLogPartFctRef[ 401 ]['StdNgbhDivMoyNgbh']= 0.377028770411
BaseLogPartFctRef[402 ]={}
BaseLogPartFctRef[402]['LogPF']=np.array([38.45,39.21,40.00,40.84,41.68,42.56,43.47,44.40,45.35,46.35,47.34,48.39,49.49,50.61,51.76,52.97,54.20,55.53,56.95,58.35,59.81,61.36,62.91,64.46,66.08,67.73,69.45,71.14,72.91,74.74])
BaseLogPartFctRef[ 402 ]['NbLabels']= 3
BaseLogPartFctRef[ 402 ]['NbCliques']= 45
BaseLogPartFctRef[ 402 ]['NbSites']= 35
BaseLogPartFctRef[ 402 ]['StdNgbhDivMoyNgbh']= 0.3861653904
BaseLogPartFctRef[403 ]={}
BaseLogPartFctRef[403]['LogPF']=np.array([34.06,34.76,35.46,36.20,36.99,37.76,38.59,39.42,40.29,41.19,42.14,43.13,44.11,45.12,46.23,47.34,48.51,49.76,51.02,52.32,53.69,55.11,56.55,58.03,59.60,61.18,62.85,64.51,66.21,67.92])
BaseLogPartFctRef[ 403 ]['NbLabels']= 3
BaseLogPartFctRef[ 403 ]['NbCliques']= 41
BaseLogPartFctRef[ 403 ]['NbSites']= 31
BaseLogPartFctRef[ 403 ]['StdNgbhDivMoyNgbh']= 0.478807146541
BaseLogPartFctRef[404 ]={}
BaseLogPartFctRef[404]['LogPF']=np.array([53.83,55.20,56.57,58.01,59.45,60.95,62.56,64.17,65.88,67.65,69.48,71.35,73.29,75.31,77.53,79.74,82.01,84.41,87.14,89.82,92.62,95.69,98.77,101.9,105.2,108.6,112.0,115.5,119.1,122.6])
BaseLogPartFctRef[ 404 ]['NbLabels']= 3
BaseLogPartFctRef[ 404 ]['NbCliques']= 79
BaseLogPartFctRef[ 404 ]['NbSites']= 49
BaseLogPartFctRef[ 404 ]['StdNgbhDivMoyNgbh']= 0.389463364081
#In [7]: ComputeBaseLogPartFctRef_NonReg(FirstIndex=(255+150),NbExtraIndex=30,BetaMax=1.45,DeltaBeta=0.05,NbLabels=3,NBX=15,NBY=15,NBZ=15,BetaGeneration=0.3)
BaseLogPartFctRef[405 ]={}
BaseLogPartFctRef[405]['LogPF']=np.array([508.7,523.6,538.6,554.7,570.8,587.1,604.1,621.4,640.4,660.1,681.2,702.8,728.3,753.8,783.3,811.6,842.0,875.1,908.3,944.1,980.3,1016.5,1053.4,1091.6,1131.5,1171.4,1211.5,1250.7,1290.6,1331.0])
BaseLogPartFctRef[ 405 ]['NbLabels']= 3
BaseLogPartFctRef[ 405 ]['NbCliques']= 863
BaseLogPartFctRef[ 405 ]['NbSites']= 463
BaseLogPartFctRef[ 405 ]['StdNgbhDivMoyNgbh']= 0.351784832347
BaseLogPartFctRef[406 ]={}
BaseLogPartFctRef[406]['LogPF']=np.array([738.3,760.5,782.7,806.4,830.6,855.2,881.3,907.9,936.5,967.3,998.2,1030.5,1067.6,1106.7,1148.7,1192.9,1240.5,1289.2,1341.6,1394.4,1449.9,1507.0,1565.2,1624.1,1684.5,1744.5,1805.6,1867.4,1930.2,1992.1])
BaseLogPartFctRef[ 406 ]['NbLabels']= 3
BaseLogPartFctRef[ 406 ]['NbCliques']= 1299
BaseLogPartFctRef[ 406 ]['NbSites']= 672
BaseLogPartFctRef[ 406 ]['StdNgbhDivMoyNgbh']= 0.33520591512
BaseLogPartFctRef[407 ]={}
BaseLogPartFctRef[407]['LogPF']=np.array([195.6,200.8,206.2,211.6,217.4,223.4,229.7,236.0,242.6,249.3,256.3,263.7,271.3,279.4,287.8,296.8,306.9,317.1,327.3,338.5,349.6,361.5,374.2,387.3,400.7,414.0,427.6,441.6,455.2,469.2])
BaseLogPartFctRef[ 407 ]['NbLabels']= 3
BaseLogPartFctRef[ 407 ]['NbCliques']= 306
BaseLogPartFctRef[ 407 ]['NbSites']= 178
BaseLogPartFctRef[ 407 ]['StdNgbhDivMoyNgbh']= 0.377850658458
BaseLogPartFctRef[408 ]={}
BaseLogPartFctRef[408]['LogPF']=np.array([522.9,537.2,552.1,567.6,583.3,599.7,617.0,634.7,653.3,671.9,692.6,714.9,736.2,761.5,788.9,817.2,845.1,875.4,908.1,942.4,976.8,1013.6,1049.7,1087.4,1126.1,1164.3,1202.8,1242.2,1281.7,1322.0])
BaseLogPartFctRef[ 408 ]['NbLabels']= 3
BaseLogPartFctRef[ 408 ]['NbCliques']= 859
BaseLogPartFctRef[ 408 ]['NbSites']= 476
BaseLogPartFctRef[ 408 ]['StdNgbhDivMoyNgbh']= 0.347821584232
BaseLogPartFctRef[409 ]={}
BaseLogPartFctRef[409]['LogPF']=np.array([551.5,566.4,581.7,597.7,614.2,630.9,648.6,667.5,687.0,706.8,728.1,749.6,773.2,798.2,824.4,853.3,882.0,913.2,946.0,980.2,1013.0,1049.1,1086.2,1123.2,1161.0,1199.8,1240.2,1280.0,1320.1,1360.7])
BaseLogPartFctRef[ 409 ]['NbLabels']= 3
BaseLogPartFctRef[ 409 ]['NbCliques']= 883
BaseLogPartFctRef[ 409 ]['NbSites']= 502
BaseLogPartFctRef[ 409 ]['StdNgbhDivMoyNgbh']= 0.356185598361
BaseLogPartFctRef[410 ]={}
BaseLogPartFctRef[410]['LogPF']=np.array([927.2,955.0,983.8,1013.0,1043.9,1074.9,1106.5,1139.9,1174.8,1212.6,1254.0,1296.0,1339.7,1389.9,1440.6,1496.7,1555.2,1616.5,1683.7,1752.2,1820.8,1889.9,1961.5,2033.1,2108.2,2181.8,2256.7,2332.7,2408.5,2485.0])
BaseLogPartFctRef[ 410 ]['NbLabels']= 3
BaseLogPartFctRef[ 410 ]['NbCliques']= 1602
BaseLogPartFctRef[ 410 ]['NbSites']= 844
BaseLogPartFctRef[ 410 ]['StdNgbhDivMoyNgbh']= 0.330651445748
BaseLogPartFctRef[411 ]={}
BaseLogPartFctRef[411]['LogPF']=np.array([432.9,444.3,456.5,468.7,481.7,494.7,508.2,522.3,538.1,553.1,568.7,585.2,603.3,621.5,641.0,661.0,684.2,709.9,737.3,763.5,791.6,818.2,846.4,876.5,906.1,937.2,968.5,999.4,1030.5,1062.5])
BaseLogPartFctRef[ 411 ]['NbLabels']= 3
BaseLogPartFctRef[ 411 ]['NbCliques']= 688
BaseLogPartFctRef[ 411 ]['NbSites']= 394
BaseLogPartFctRef[ 411 ]['StdNgbhDivMoyNgbh']= 0.387826059151
BaseLogPartFctRef[412 ]={}
BaseLogPartFctRef[412]['LogPF']=np.array([189.0,193.6,198.4,203.4,208.6,213.9,219.4,225.2,231.3,237.3,243.7,250.3,257.4,264.9,272.0,280.1,288.7,297.2,306.1,315.8,326.2,336.6,347.3,358.3,370.0,381.8,394.0,406.3,418.6,431.1])
BaseLogPartFctRef[ 412 ]['NbLabels']= 3
BaseLogPartFctRef[ 412 ]['NbCliques']= 279
BaseLogPartFctRef[ 412 ]['NbSites']= 172
BaseLogPartFctRef[ 412 ]['StdNgbhDivMoyNgbh']= 0.373680618184
BaseLogPartFctRef[413 ]={}
BaseLogPartFctRef[413]['LogPF']=np.array([512.0,526.0,540.3,554.7,570.0,585.9,602.9,620.0,637.5,656.9,675.9,696.5,717.4,740.2,765.5,790.2,818.8,849.8,881.9,913.0,947.5,981.7,1016.8,1052.4,1088.2,1125.4,1162.4,1199.6,1238.6,1277.1])
BaseLogPartFctRef[ 413 ]['NbLabels']= 3
BaseLogPartFctRef[ 413 ]['NbCliques']= 825
BaseLogPartFctRef[ 413 ]['NbSites']= 466
BaseLogPartFctRef[ 413 ]['StdNgbhDivMoyNgbh']= 0.376500894395
BaseLogPartFctRef[414 ]={}
BaseLogPartFctRef[414]['LogPF']=np.array([206.5,212.2,218.1,224.0,230.1,236.4,242.7,249.5,256.4,263.5,271.3,279.2,287.4,295.9,305.0,314.4,324.5,335.0,346.0,357.9,369.9,382.6,395.8,409.6,423.1,437.0,451.7,466.6,481.8,496.8])
BaseLogPartFctRef[ 414 ]['NbLabels']= 3
BaseLogPartFctRef[ 414 ]['NbCliques']= 325
BaseLogPartFctRef[ 414 ]['NbSites']= 188
BaseLogPartFctRef[ 414 ]['StdNgbhDivMoyNgbh']= 0.347733632011
BaseLogPartFctRef[415 ]={}
BaseLogPartFctRef[415]['LogPF']=np.array([370.2,380.2,390.2,400.8,411.8,423.0,434.6,446.9,459.3,472.1,485.9,500.2,515.4,531.2,548.4,567.2,586.4,605.6,626.5,647.6,669.6,693.2,717.3,741.0,766.4,791.7,817.7,843.9,870.2,896.8])
BaseLogPartFctRef[ 415 ]['NbLabels']= 3
BaseLogPartFctRef[ 415 ]['NbCliques']= 579
BaseLogPartFctRef[ 415 ]['NbSites']= 337
BaseLogPartFctRef[ 415 ]['StdNgbhDivMoyNgbh']= 0.368270578511
BaseLogPartFctRef[416 ]={}
BaseLogPartFctRef[416]['LogPF']=np.array([466.9,480.1,493.9,507.8,522.4,537.4,552.7,569.0,585.7,603.0,621.1,640.8,661.5,683.7,709.4,732.9,760.5,789.6,821.0,853.3,886.0,918.9,952.1,987.2,1021.7,1056.8,1093.6,1130.1,1166.2,1202.7])
BaseLogPartFctRef[ 416 ]['NbLabels']= 3
BaseLogPartFctRef[ 416 ]['NbCliques']= 780
BaseLogPartFctRef[ 416 ]['NbSites']= 425
BaseLogPartFctRef[ 416 ]['StdNgbhDivMoyNgbh']= 0.355694695333
BaseLogPartFctRef[417 ]={}
BaseLogPartFctRef[417]['LogPF']=np.array([1316.1,1355.1,1395.9,1437.3,1479.8,1523.6,1570.1,1618.1,1666.9,1722.6,1779.0,1838.6,1900.8,1968.1,2038.2,2117.3,2201.7,2289.6,2383.0,2477.3,2578.4,2679.5,2783.4,2885.1,2990.8,3097.2,3204.4,3312.3,3420.5,3530.0])
BaseLogPartFctRef[ 417 ]['NbLabels']= 3
BaseLogPartFctRef[ 417 ]['NbCliques']= 2283
BaseLogPartFctRef[ 417 ]['NbSites']= 1198
BaseLogPartFctRef[ 417 ]['StdNgbhDivMoyNgbh']= 0.329917495293
BaseLogPartFctRef[418 ]={}
BaseLogPartFctRef[418]['LogPF']=np.array([329.6,338.8,348.1,357.9,368.3,379.4,390.0,401.8,413.6,425.9,438.5,452.0,466.8,482.0,497.8,516.7,535.6,556.5,574.5,596.1,618.3,640.6,664.2,688.1,712.2,736.9,761.7,786.7,812.0,837.8])
BaseLogPartFctRef[ 418 ]['NbLabels']= 3
BaseLogPartFctRef[ 418 ]['NbCliques']= 546
BaseLogPartFctRef[ 418 ]['NbSites']= 300
BaseLogPartFctRef[ 418 ]['StdNgbhDivMoyNgbh']= 0.363634442206
BaseLogPartFctRef[419 ]={}
BaseLogPartFctRef[419]['LogPF']=np.array([483.4,495.9,508.6,522.6,536.8,551.3,566.0,581.0,597.3,613.4,630.8,648.2,666.5,686.2,707.4,731.8,756.4,780.4,807.6,835.7,864.6,894.2,924.2,956.0,988.6,1021.2,1053.7,1087.2,1119.8,1153.7])
BaseLogPartFctRef[ 419 ]['NbLabels']= 3
BaseLogPartFctRef[ 419 ]['NbCliques']= 741
BaseLogPartFctRef[ 419 ]['NbSites']= 440
BaseLogPartFctRef[ 419 ]['StdNgbhDivMoyNgbh']= 0.377967091594
BaseLogPartFctRef[420 ]={}
BaseLogPartFctRef[420]['LogPF']=np.array([565.8,581.5,597.8,614.3,632.0,649.8,668.4,687.5,708.1,728.4,751.1,775.3,801.0,829.1,857.6,885.8,918.2,951.8,988.2,1023.6,1061.1,1100.1,1138.6,1179.8,1220.4,1261.5,1303.5,1345.9,1388.7,1431.6])
BaseLogPartFctRef[ 420 ]['NbLabels']= 3
BaseLogPartFctRef[ 420 ]['NbCliques']= 926
BaseLogPartFctRef[ 420 ]['NbSites']= 515
BaseLogPartFctRef[ 420 ]['StdNgbhDivMoyNgbh']= 0.368924387478
BaseLogPartFctRef[421 ]={}
BaseLogPartFctRef[421]['LogPF']=np.array([672.4,691.0,711.1,731.8,752.7,774.0,797.0,819.8,845.4,871.2,897.1,924.9,956.1,989.2,1026.3,1066.4,1106.1,1147.0,1191.4,1237.7,1285.0,1334.0,1382.6,1432.9,1483.7,1535.2,1587.3,1639.8,1692.8,1745.9])
BaseLogPartFctRef[ 421 ]['NbLabels']= 3
BaseLogPartFctRef[ 421 ]['NbCliques']= 1128
BaseLogPartFctRef[ 421 ]['NbSites']= 612
BaseLogPartFctRef[ 421 ]['StdNgbhDivMoyNgbh']= 0.354709146661
BaseLogPartFctRef[422 ]={}
BaseLogPartFctRef[422]['LogPF']=np.array([588.9,604.9,621.8,639.7,658.0,676.1,695.9,716.0,735.9,757.0,780.8,804.8,832.4,860.7,890.8,922.0,956.8,994.2,1032.0,1071.2,1111.3,1153.3,1195.5,1238.8,1283.1,1326.5,1371.7,1417.5,1463.3,1509.1])
BaseLogPartFctRef[ 422 ]['NbLabels']= 3
BaseLogPartFctRef[ 422 ]['NbCliques']= 978
BaseLogPartFctRef[ 422 ]['NbSites']= 536
BaseLogPartFctRef[ 422 ]['StdNgbhDivMoyNgbh']= 0.351271838626
BaseLogPartFctRef[423 ]={}
BaseLogPartFctRef[423]['LogPF']=np.array([586.7,603.2,619.5,637.1,654.6,673.5,692.6,712.6,733.6,754.2,777.0,800.9,825.9,852.5,879.6,910.3,943.8,977.1,1013.6,1052.4,1090.7,1132.2,1173.0,1214.9,1257.9,1301.0,1344.7,1388.1,1431.7,1476.2])
BaseLogPartFctRef[ 423 ]['NbLabels']= 3
BaseLogPartFctRef[ 423 ]['NbCliques']= 956
BaseLogPartFctRef[ 423 ]['NbSites']= 534
BaseLogPartFctRef[ 423 ]['StdNgbhDivMoyNgbh']= 0.367098803098
BaseLogPartFctRef[424 ]={}
BaseLogPartFctRef[424]['LogPF']=np.array([337.3,345.7,354.3,363.6,373.2,383.1,393.5,404.2,415.3,426.7,438.5,451.1,464.6,478.6,492.8,507.3,521.9,537.5,554.6,573.5,592.7,612.6,631.9,653.4,674.4,696.6,718.6,741.2,764.5,788.3])
BaseLogPartFctRef[ 424 ]['NbLabels']= 3
BaseLogPartFctRef[ 424 ]['NbCliques']= 511
BaseLogPartFctRef[ 424 ]['NbSites']= 307
BaseLogPartFctRef[ 424 ]['StdNgbhDivMoyNgbh']= 0.344224972374
BaseLogPartFctRef[425 ]={}
BaseLogPartFctRef[425]['LogPF']=np.array([448.2,460.6,473.5,487.0,500.3,514.0,528.6,543.8,559.2,575.4,592.7,611.4,630.3,650.9,672.1,694.1,719.0,747.5,777.2,805.5,835.9,867.5,898.4,930.5,961.9,993.2,1025.7,1058.0,1090.7,1124.1])
BaseLogPartFctRef[ 425 ]['NbLabels']= 3
BaseLogPartFctRef[ 425 ]['NbCliques']= 727
BaseLogPartFctRef[ 425 ]['NbSites']= 408
BaseLogPartFctRef[ 425 ]['StdNgbhDivMoyNgbh']= 0.385626302272
BaseLogPartFctRef[426 ]={}
BaseLogPartFctRef[426]['LogPF']=np.array([307.6,316.1,324.7,333.0,342.5,352.0,361.7,372.0,382.8,393.6,405.2,416.6,430.9,445.8,461.1,477.4,493.4,512.3,528.5,545.9,565.4,585.7,607.2,628.5,650.3,672.3,694.5,716.4,739.1,762.3])
BaseLogPartFctRef[ 426 ]['NbLabels']= 3
BaseLogPartFctRef[ 426 ]['NbCliques']= 493
BaseLogPartFctRef[ 426 ]['NbSites']= 280
BaseLogPartFctRef[ 426 ]['StdNgbhDivMoyNgbh']= 0.381837528918
BaseLogPartFctRef[427 ]={}
BaseLogPartFctRef[427]['LogPF']=np.array([767.9,788.8,810.4,832.0,855.1,879.6,903.7,930.1,957.2,983.9,1014.0,1043.4,1076.3,1110.2,1149.5,1187.2,1228.3,1270.4,1318.0,1369.0,1418.9,1469.4,1523.3,1574.8,1629.0,1684.0,1741.4,1798.5,1856.3,1914.9])
BaseLogPartFctRef[ 427 ]['NbLabels']= 3
BaseLogPartFctRef[ 427 ]['NbCliques']= 1233
BaseLogPartFctRef[ 427 ]['NbSites']= 699
BaseLogPartFctRef[ 427 ]['StdNgbhDivMoyNgbh']= 0.346283953722
BaseLogPartFctRef[428 ]={}
BaseLogPartFctRef[428]['LogPF']=np.array([557.0,572.9,589.4,606.3,624.2,642.2,660.9,680.6,700.6,722.2,744.5,770.9,798.4,824.6,854.7,886.0,921.5,955.9,993.0,1031.1,1070.2,1110.0,1150.1,1191.1,1232.9,1275.5,1318.4,1361.5,1405.6,1449.7])
BaseLogPartFctRef[ 428 ]['NbLabels']= 3
BaseLogPartFctRef[ 428 ]['NbCliques']= 939
BaseLogPartFctRef[ 428 ]['NbSites']= 507
BaseLogPartFctRef[ 428 ]['StdNgbhDivMoyNgbh']= 0.356102461985
BaseLogPartFctRef[429 ]={}
BaseLogPartFctRef[429]['LogPF']=np.array([332.9,341.1,349.8,358.9,368.0,377.5,387.0,396.9,407.4,418.4,429.3,441.5,454.5,467.6,480.7,494.4,509.2,523.6,539.4,555.9,573.7,592.1,611.1,630.9,651.4,671.1,692.0,714.0,735.9,758.5])
BaseLogPartFctRef[ 429 ]['NbLabels']= 3
BaseLogPartFctRef[ 429 ]['NbCliques']= 489
BaseLogPartFctRef[ 429 ]['NbSites']= 303
BaseLogPartFctRef[ 429 ]['StdNgbhDivMoyNgbh']= 0.376854102508
BaseLogPartFctRef[430 ]={}
BaseLogPartFctRef[430]['LogPF']=np.array([581.2,595.9,611.6,627.8,644.3,661.5,680.1,698.8,718.3,739.5,760.4,783.2,808.9,834.1,860.8,888.1,919.9,950.2,983.1,1018.2,1054.5,1090.4,1128.9,1167.4,1205.6,1246.5,1287.0,1328.0,1368.0,1409.5])
BaseLogPartFctRef[ 430 ]['NbLabels']= 3
BaseLogPartFctRef[ 430 ]['NbCliques']= 906
BaseLogPartFctRef[ 430 ]['NbSites']= 529
BaseLogPartFctRef[ 430 ]['StdNgbhDivMoyNgbh']= 0.37396580405
BaseLogPartFctRef[431 ]={}
BaseLogPartFctRef[431]['LogPF']=np.array([438.3,450.1,462.4,475.4,488.5,502.0,515.5,530.0,544.5,559.7,575.5,593.3,612.5,632.0,653.3,673.8,699.2,723.6,748.3,774.2,801.9,830.4,859.2,888.5,919.7,951.4,983.6,1014.9,1046.8,1079.4])
BaseLogPartFctRef[ 431 ]['NbLabels']= 3
BaseLogPartFctRef[ 431 ]['NbCliques']= 700
BaseLogPartFctRef[ 431 ]['NbSites']= 399
BaseLogPartFctRef[ 431 ]['StdNgbhDivMoyNgbh']= 0.387604113959
BaseLogPartFctRef[432 ]={}
BaseLogPartFctRef[432]['LogPF']=np.array([414.2,426.8,439.8,453.1,466.6,480.6,495.2,511.3,527.2,544.1,563.5,583.1,605.2,627.7,653.3,680.1,707.9,735.1,764.4,796.9,829.7,862.1,895.4,929.8,963.4,998.3,1033.0,1068.0,1103.3,1138.5])
BaseLogPartFctRef[ 432 ]['NbLabels']= 3
BaseLogPartFctRef[ 432 ]['NbCliques']= 739
BaseLogPartFctRef[ 432 ]['NbSites']= 377
BaseLogPartFctRef[ 432 ]['StdNgbhDivMoyNgbh']= 0.351961232139
BaseLogPartFctRef[433 ]={}
BaseLogPartFctRef[433]['LogPF']=np.array([472.4,485.9,499.6,513.8,528.4,543.4,559.0,575.0,591.4,609.2,627.9,647.6,668.6,691.5,715.8,739.4,767.4,795.7,824.0,856.6,888.3,921.9,954.9,988.6,1022.6,1057.7,1093.9,1129.4,1165.7,1201.7])
BaseLogPartFctRef[ 433 ]['NbLabels']= 3
BaseLogPartFctRef[ 433 ]['NbCliques']= 778
BaseLogPartFctRef[ 433 ]['NbSites']= 430
BaseLogPartFctRef[ 433 ]['StdNgbhDivMoyNgbh']= 0.356626926853
BaseLogPartFctRef[434 ]={}
BaseLogPartFctRef[434]['LogPF']=np.array([640.5,659.2,678.7,698.1,718.2,739.3,761.1,783.4,806.8,830.9,855.9,883.8,916.8,947.4,983.6,1018.9,1058.7,1100.9,1144.3,1188.8,1234.1,1280.0,1328.0,1376.1,1425.7,1474.4,1524.0,1573.2,1623.7,1674.4])
BaseLogPartFctRef[ 434 ]['NbLabels']= 3
BaseLogPartFctRef[ 434 ]['NbCliques']= 1083
BaseLogPartFctRef[ 434 ]['NbSites']= 583
BaseLogPartFctRef[ 434 ]['StdNgbhDivMoyNgbh']= 0.362779359663
#In [8]: ComputeBaseLogPartFctRef_NonReg(FirstIndex=(255+180),NbExtraIndex=30,BetaMax=1.45,DeltaBeta=0.05,NbLabels=3,NBX=15,NBY=15,NBZ=15,BetaGeneration=0.5)
BaseLogPartFctRef[435 ]={}
BaseLogPartFctRef[435]['LogPF']=np.array([3246.4,3372.1,3503.8,3639.8,3778.6,3926.2,4081.7,4244.7,4424.1,4614.2,4838.0,5097.4,5388.1,5699.5,6018.5,6356.4,6703.3,7051.5,7405.2,7761.2,8123.4,8485.1,8848.1,9216.1,9584.6,9953.5,10322.0,10693.0,11064.9,11436.4])
BaseLogPartFctRef[ 435 ]['NbLabels']= 3
BaseLogPartFctRef[ 435 ]['NbCliques']= 7487
BaseLogPartFctRef[ 435 ]['NbSites']= 2955
BaseLogPartFctRef[ 435 ]['StdNgbhDivMoyNgbh']= 0.190295523564
BaseLogPartFctRef[436 ]={}
BaseLogPartFctRef[436]['LogPF']=np.array([3299.1,3430.6,3566.7,3706.0,3848.8,4000.4,4158.9,4323.0,4506.5,4713.0,4964.8,5240.2,5547.9,5874.9,6212.9,6558.0,6913.2,7271.6,7642.4,8015.9,8391.9,8769.6,9147.9,9528.3,9910.8,10293.5,10677.1,11062.5,11447.8,11834.2])
BaseLogPartFctRef[ 436 ]['NbLabels']= 3
BaseLogPartFctRef[ 436 ]['NbCliques']= 7755
BaseLogPartFctRef[ 436 ]['NbSites']= 3003
BaseLogPartFctRef[ 436 ]['StdNgbhDivMoyNgbh']= 0.177143540895
BaseLogPartFctRef[437 ]={}
BaseLogPartFctRef[437]['LogPF']=np.array([3228.8,3354.9,3484.2,3620.1,3761.2,3908.5,4060.7,4222.6,4397.8,4586.3,4806.9,5076.8,5369.9,5678.4,6010.0,6356.2,6707.0,7055.3,7409.4,7767.3,8130.5,8497.9,8865.2,9234.5,9604.5,9975.2,10347.6,10720.5,11093.2,11467.1])
BaseLogPartFctRef[ 437 ]['NbLabels']= 3
BaseLogPartFctRef[ 437 ]['NbCliques']= 7520
BaseLogPartFctRef[ 437 ]['NbSites']= 2939
BaseLogPartFctRef[ 437 ]['StdNgbhDivMoyNgbh']= 0.190823564003
BaseLogPartFctRef[438 ]={}
BaseLogPartFctRef[438]['LogPF']=np.array([3173.9,3296.5,3423.2,3555.4,3690.7,3831.4,3978.9,4137.3,4311.5,4507.1,4723.6,4979.9,5251.6,5536.4,5844.6,6171.9,6510.2,6852.1,7197.1,7547.5,7897.5,8252.1,8608.4,8967.4,9326.2,9686.4,10047.6,10408.7,10770.4,11132.8])
BaseLogPartFctRef[ 438 ]['NbLabels']= 3
BaseLogPartFctRef[ 438 ]['NbCliques']= 7298
BaseLogPartFctRef[ 438 ]['NbSites']= 2889
BaseLogPartFctRef[ 438 ]['StdNgbhDivMoyNgbh']= 0.188896482419
BaseLogPartFctRef[439 ]={}
BaseLogPartFctRef[439]['LogPF']=np.array([3234.3,3360.1,3491.4,3626.2,3767.1,3911.2,4063.2,4231.0,4400.4,4602.5,4845.9,5110.4,5401.3,5709.5,6034.0,6371.5,6712.4,7066.9,7423.3,7784.4,8147.7,8512.0,8879.5,9247.3,9616.6,9986.5,10358.8,10731.7,11104.2,11477.8])
BaseLogPartFctRef[ 439 ]['NbLabels']= 3
BaseLogPartFctRef[ 439 ]['NbCliques']= 7516
BaseLogPartFctRef[ 439 ]['NbSites']= 2944
BaseLogPartFctRef[ 439 ]['StdNgbhDivMoyNgbh']= 0.187317659209
BaseLogPartFctRef[440 ]={}
BaseLogPartFctRef[440]['LogPF']=np.array([3276.1,3407.1,3541.9,3679.2,3822.7,3972.1,4130.9,4300.6,4481.3,4682.1,4920.1,5198.0,5500.4,5825.0,6159.3,6505.0,6857.4,7214.1,7572.6,7938.2,8309.2,8683.1,9057.7,9432.1,9809.5,10187.0,10565.6,10945.0,11324.8,11704.9])
BaseLogPartFctRef[ 440 ]['NbLabels']= 3
BaseLogPartFctRef[ 440 ]['NbCliques']= 7649
BaseLogPartFctRef[ 440 ]['NbSites']= 2982
BaseLogPartFctRef[ 440 ]['StdNgbhDivMoyNgbh']= 0.178045474733
BaseLogPartFctRef[441 ]={}
BaseLogPartFctRef[441]['LogPF']=np.array([3202.5,3325.7,3452.9,3586.0,3723.3,3866.2,4013.7,4180.0,4350.2,4542.0,4768.4,5017.3,5297.2,5604.2,5918.9,6250.4,6584.7,6930.0,7277.4,7629.9,7983.4,8339.0,8699.3,9059.3,9421.0,9782.5,10146.6,10511.9,10876.1,11241.6])
BaseLogPartFctRef[ 441 ]['NbLabels']= 3
BaseLogPartFctRef[ 441 ]['NbCliques']= 7359
BaseLogPartFctRef[ 441 ]['NbSites']= 2915
BaseLogPartFctRef[ 441 ]['StdNgbhDivMoyNgbh']= 0.193835579538
BaseLogPartFctRef[442 ]={}
BaseLogPartFctRef[442]['LogPF']=np.array([3248.6,3377.4,3507.9,3644.8,3787.5,3932.5,4083.0,4247.3,4422.7,4637.9,4872.3,5143.2,5441.7,5754.0,6079.6,6421.7,6766.7,7123.1,7485.2,7846.0,8213.2,8580.5,8952.3,9324.4,9697.2,10071.3,10445.2,10820.9,11196.7,11573.7])
BaseLogPartFctRef[ 442 ]['NbLabels']= 3
BaseLogPartFctRef[ 442 ]['NbCliques']= 7576
BaseLogPartFctRef[ 442 ]['NbSites']= 2957
BaseLogPartFctRef[ 442 ]['StdNgbhDivMoyNgbh']= 0.184386902331
BaseLogPartFctRef[443 ]={}
BaseLogPartFctRef[443]['LogPF']=np.array([3197.0,3321.4,3452.2,3586.4,3723.9,3869.1,4020.5,4178.0,4350.9,4550.1,4779.6,5037.7,5323.7,5637.9,5962.0,6296.9,6632.2,6981.5,7332.1,7686.7,8044.1,8404.5,8767.6,9130.0,9495.5,9860.6,10226.0,10593.4,10960.6,11328.4])
BaseLogPartFctRef[ 443 ]['NbLabels']= 3
BaseLogPartFctRef[ 443 ]['NbCliques']= 7402
BaseLogPartFctRef[ 443 ]['NbSites']= 2910
BaseLogPartFctRef[ 443 ]['StdNgbhDivMoyNgbh']= 0.185876474335
BaseLogPartFctRef[444 ]={}
BaseLogPartFctRef[444]['LogPF']=np.array([3218.9,3348.0,3479.1,3614.7,3754.2,3900.8,4053.5,4214.5,4389.3,4592.9,4823.4,5085.7,5374.7,5676.4,6004.2,6341.0,6682.7,7032.1,7386.5,7746.8,8107.7,8472.3,8839.3,9207.4,9576.5,9946.5,10317.3,10687.8,11059.5,11432.2])
BaseLogPartFctRef[ 444 ]['NbLabels']= 3
BaseLogPartFctRef[ 444 ]['NbCliques']= 7491
BaseLogPartFctRef[ 444 ]['NbSites']= 2930
BaseLogPartFctRef[ 444 ]['StdNgbhDivMoyNgbh']= 0.188793217594
BaseLogPartFctRef[445 ]={}
BaseLogPartFctRef[445]['LogPF']=np.array([3276.1,3405.3,3539.7,3679.7,3822.3,3971.0,4126.7,4288.0,4475.9,4677.3,4922.6,5209.6,5505.6,5816.0,6149.9,6498.9,6854.2,7212.8,7578.4,7945.7,8316.7,8691.4,9068.0,9446.2,9825.6,10205.6,10586.8,10969.3,11351.6,11734.7])
BaseLogPartFctRef[ 445 ]['NbLabels']= 3
BaseLogPartFctRef[ 445 ]['NbCliques']= 7693
BaseLogPartFctRef[ 445 ]['NbSites']= 2982
BaseLogPartFctRef[ 445 ]['StdNgbhDivMoyNgbh']= 0.180774156288
BaseLogPartFctRef[446 ]={}
BaseLogPartFctRef[446]['LogPF']=np.array([3201.4,3327.5,3456.8,3590.9,3729.2,3875.2,4025.3,4190.0,4367.6,4559.8,4793.6,5058.7,5340.1,5647.4,5973.2,6309.0,6649.2,6997.9,7350.3,7703.5,8063.1,8422.9,8786.4,9149.0,9514.0,9881.2,10248.7,10617.1,10986.4,11355.6])
BaseLogPartFctRef[ 446 ]['NbLabels']= 3
BaseLogPartFctRef[ 446 ]['NbCliques']= 7436
BaseLogPartFctRef[ 446 ]['NbSites']= 2914
BaseLogPartFctRef[ 446 ]['StdNgbhDivMoyNgbh']= 0.188007135336
BaseLogPartFctRef[447 ]={}
BaseLogPartFctRef[447]['LogPF']=np.array([3194.8,3320.0,3448.9,3582.9,3723.9,3868.6,4022.5,4182.7,4357.8,4549.8,4760.5,5023.1,5324.8,5641.1,5965.8,6302.4,6645.5,6990.8,7343.7,7702.0,8058.5,8421.5,8784.8,9150.9,9518.0,9885.3,10253.6,10623.0,10992.4,11362.6])
BaseLogPartFctRef[ 447 ]['NbLabels']= 3
BaseLogPartFctRef[ 447 ]['NbCliques']= 7448
BaseLogPartFctRef[ 447 ]['NbSites']= 2908
BaseLogPartFctRef[ 447 ]['StdNgbhDivMoyNgbh']= 0.190136547931
BaseLogPartFctRef[448 ]={}
BaseLogPartFctRef[448]['LogPF']=np.array([3268.4,3396.3,3530.5,3667.4,3808.6,3954.7,4108.8,4269.6,4446.9,4639.4,4875.9,5147.9,5447.2,5764.9,6092.5,6431.7,6781.7,7142.8,7503.9,7867.8,8234.6,8606.3,8979.3,9352.1,9727.5,10104.4,10481.8,10860.0,11238.3,11616.9])
BaseLogPartFctRef[ 448 ]['NbLabels']= 3
BaseLogPartFctRef[ 448 ]['NbCliques']= 7624
BaseLogPartFctRef[ 448 ]['NbSites']= 2975
BaseLogPartFctRef[ 448 ]['StdNgbhDivMoyNgbh']= 0.187188178033
BaseLogPartFctRef[449 ]={}
BaseLogPartFctRef[449]['LogPF']=np.array([3284.9,3414.7,3548.6,3686.3,3829.8,3978.2,4134.3,4302.9,4485.8,4687.8,4922.4,5192.6,5486.0,5790.9,6118.6,6462.1,6814.8,7174.7,7539.7,7908.1,8278.6,8652.0,9026.0,9404.1,9782.3,10162.6,10543.6,10923.5,11304.6,11685.6])
BaseLogPartFctRef[ 449 ]['NbLabels']= 3
BaseLogPartFctRef[ 449 ]['NbCliques']= 7681
BaseLogPartFctRef[ 449 ]['NbSites']= 2990
BaseLogPartFctRef[ 449 ]['StdNgbhDivMoyNgbh']= 0.179728652519
BaseLogPartFctRef[450 ]={}
BaseLogPartFctRef[450]['LogPF']=np.array([3189.3,3313.5,3444.1,3576.7,3713.4,3855.6,4006.1,4168.8,4340.7,4528.2,4758.5,5022.5,5308.8,5619.4,5940.4,6267.4,6605.6,6951.5,7298.7,7653.0,8009.4,8370.8,8732.5,9096.7,9459.4,9824.6,10190.9,10556.6,10924.3,11292.2])
BaseLogPartFctRef[ 450 ]['NbLabels']= 3
BaseLogPartFctRef[ 450 ]['NbCliques']= 7399
BaseLogPartFctRef[ 450 ]['NbSites']= 2903
BaseLogPartFctRef[ 450 ]['StdNgbhDivMoyNgbh']= 0.19326362073
BaseLogPartFctRef[451 ]={}
BaseLogPartFctRef[451]['LogPF']=np.array([3247.5,3376.0,3507.8,3643.6,3786.9,3932.4,4089.3,4254.6,4434.9,4633.5,4859.4,5131.3,5435.8,5749.9,6080.5,6428.3,6777.0,7130.5,7489.8,7850.9,8216.3,8586.0,8955.8,9329.3,9703.7,10078.8,10453.8,10829.5,11206.6,11583.9])
BaseLogPartFctRef[ 451 ]['NbLabels']= 3
BaseLogPartFctRef[ 451 ]['NbCliques']= 7590
BaseLogPartFctRef[ 451 ]['NbSites']= 2956
BaseLogPartFctRef[ 451 ]['StdNgbhDivMoyNgbh']= 0.184517318445
BaseLogPartFctRef[452 ]={}
BaseLogPartFctRef[452]['LogPF']=np.array([3228.8,3356.3,3485.7,3621.0,3761.3,3909.7,4059.5,4221.9,4396.6,4595.5,4822.8,5088.9,5379.4,5698.5,6030.3,6363.4,6709.9,7060.6,7419.8,7782.6,8142.1,8507.5,8875.6,9243.1,9612.2,9983.1,10354.9,10726.1,11099.1,11471.8])
BaseLogPartFctRef[ 452 ]['NbLabels']= 3
BaseLogPartFctRef[ 452 ]['NbCliques']= 7500
BaseLogPartFctRef[ 452 ]['NbSites']= 2939
BaseLogPartFctRef[ 452 ]['StdNgbhDivMoyNgbh']= 0.182635741583
BaseLogPartFctRef[453 ]={}
BaseLogPartFctRef[453]['LogPF']=np.array([3284.9,3413.8,3547.5,3686.1,3829.4,3978.0,4133.6,4303.0,4487.5,4688.3,4919.4,5179.1,5473.2,5790.5,6125.7,6466.6,6821.5,7185.8,7553.7,7921.9,8291.8,8666.3,9040.8,9417.7,9795.6,10174.2,10554.7,10934.9,11316.7,11698.6])
BaseLogPartFctRef[ 453 ]['NbLabels']= 3
BaseLogPartFctRef[ 453 ]['NbCliques']= 7674
BaseLogPartFctRef[ 453 ]['NbSites']= 2990
BaseLogPartFctRef[ 453 ]['StdNgbhDivMoyNgbh']= 0.177543041584
BaseLogPartFctRef[454 ]={}
BaseLogPartFctRef[454]['LogPF']=np.array([3186.0,3310.2,3438.2,3571.8,3707.3,3850.0,3997.3,4155.2,4330.9,4538.3,4765.1,5017.3,5298.8,5598.4,5913.2,6240.5,6578.6,6925.8,7271.4,7620.9,7975.4,8330.6,8687.6,9048.2,9408.4,9770.0,10132.7,10495.8,10859.6,11224.0])
BaseLogPartFctRef[ 454 ]['NbLabels']= 3
BaseLogPartFctRef[ 454 ]['NbCliques']= 7337
BaseLogPartFctRef[ 454 ]['NbSites']= 2900
BaseLogPartFctRef[ 454 ]['StdNgbhDivMoyNgbh']= 0.188337653603
BaseLogPartFctRef[455 ]={}
BaseLogPartFctRef[455]['LogPF']=np.array([3248.6,3373.7,3506.3,3643.3,3784.2,3931.3,4085.1,4249.7,4427.4,4617.6,4853.3,5132.5,5425.4,5740.6,6069.6,6406.2,6752.3,7109.4,7463.9,7822.5,8187.2,8553.2,8919.9,9290.3,9661.8,10033.2,10406.1,10780.3,11154.6,11528.9])
BaseLogPartFctRef[ 455 ]['NbLabels']= 3
BaseLogPartFctRef[ 455 ]['NbCliques']= 7534
BaseLogPartFctRef[ 455 ]['NbSites']= 2957
BaseLogPartFctRef[ 455 ]['StdNgbhDivMoyNgbh']= 0.181601409976
BaseLogPartFctRef[456 ]={}
BaseLogPartFctRef[456]['LogPF']=np.array([3193.7,3318.8,3447.5,3579.9,3717.8,3863.1,4009.1,4167.0,4343.3,4533.5,4760.0,5007.1,5299.8,5606.9,5919.3,6242.7,6583.8,6928.4,7276.8,7633.5,7991.1,8350.2,8710.9,9072.6,9435.8,9801.2,10167.6,10533.9,10900.9,11269.2])
BaseLogPartFctRef[ 456 ]['NbLabels']= 3
BaseLogPartFctRef[ 456 ]['NbCliques']= 7394
BaseLogPartFctRef[ 456 ]['NbSites']= 2907
BaseLogPartFctRef[ 456 ]['StdNgbhDivMoyNgbh']= 0.18430088024
BaseLogPartFctRef[457 ]={}
BaseLogPartFctRef[457]['LogPF']=np.array([3267.3,3397.3,3530.6,3670.7,3812.5,3964.1,4122.8,4285.3,4470.0,4670.4,4906.2,5182.3,5485.5,5803.4,6132.0,6478.7,6827.4,7187.7,7550.2,7921.7,8292.4,8661.7,9035.2,9409.8,9787.1,10164.6,10542.1,10920.6,11300.4,11680.3])
BaseLogPartFctRef[ 457 ]['NbLabels']= 3
BaseLogPartFctRef[ 457 ]['NbCliques']= 7639
BaseLogPartFctRef[ 457 ]['NbSites']= 2974
BaseLogPartFctRef[ 457 ]['StdNgbhDivMoyNgbh']= 0.181737755074
BaseLogPartFctRef[458 ]={}
BaseLogPartFctRef[458]['LogPF']=np.array([3206.8,3332.3,3462.2,3598.1,3737.9,3881.7,4031.3,4191.1,4375.1,4570.4,4805.7,5066.9,5351.0,5662.8,5990.8,6323.3,6663.8,7013.8,7363.3,7722.3,8082.5,8442.8,8809.3,9175.4,9543.2,9912.0,10280.9,10651.4,11022.2,11392.9])
BaseLogPartFctRef[ 458 ]['NbLabels']= 3
BaseLogPartFctRef[ 458 ]['NbCliques']= 7468
BaseLogPartFctRef[ 458 ]['NbSites']= 2919
BaseLogPartFctRef[ 458 ]['StdNgbhDivMoyNgbh']= 0.182857692166
BaseLogPartFctRef[459 ]={}
BaseLogPartFctRef[459]['LogPF']=np.array([3251.9,3380.2,3512.2,3650.0,3794.5,3943.3,4102.2,4267.4,4450.3,4656.9,4888.6,5160.7,5453.6,5767.8,6107.2,6449.7,6802.8,7160.3,7525.8,7892.2,8261.2,8631.8,9006.4,9381.9,9760.1,10137.9,10516.0,10895.5,11275.2,11655.8])
BaseLogPartFctRef[ 459 ]['NbLabels']= 3
BaseLogPartFctRef[ 459 ]['NbCliques']= 7644
BaseLogPartFctRef[ 459 ]['NbSites']= 2960
BaseLogPartFctRef[ 459 ]['StdNgbhDivMoyNgbh']= 0.18019317349
BaseLogPartFctRef[460 ]={}
BaseLogPartFctRef[460]['LogPF']=np.array([3298.0,3429.9,3565.3,3704.0,3850.9,4000.8,4159.4,4324.6,4506.0,4712.3,4948.8,5228.2,5528.9,5863.0,6205.0,6558.1,6911.7,7274.6,7643.9,8016.0,8391.0,8768.7,9150.1,9532.8,9916.1,10299.9,10684.6,11069.3,11455.1,11841.2])
BaseLogPartFctRef[ 460 ]['NbLabels']= 3
BaseLogPartFctRef[ 460 ]['NbCliques']= 7761
BaseLogPartFctRef[ 460 ]['NbSites']= 3002
BaseLogPartFctRef[ 460 ]['StdNgbhDivMoyNgbh']= 0.175609354911
BaseLogPartFctRef[461 ]={}
BaseLogPartFctRef[461]['LogPF']=np.array([3256.3,3383.5,3516.4,3653.7,3795.6,3943.7,4099.6,4263.6,4439.1,4638.1,4885.2,5152.7,5439.5,5758.8,6092.7,6437.8,6785.5,7146.4,7501.6,7865.9,8232.9,8601.0,8970.8,9344.1,9718.3,10093.2,10469.5,10845.5,11221.9,11599.4])
BaseLogPartFctRef[ 461 ]['NbLabels']= 3
BaseLogPartFctRef[ 461 ]['NbCliques']= 7590
BaseLogPartFctRef[ 461 ]['NbSites']= 2964
BaseLogPartFctRef[ 461 ]['StdNgbhDivMoyNgbh']= 0.179845616885
BaseLogPartFctRef[462 ]={}
BaseLogPartFctRef[462]['LogPF']=np.array([3172.8,3294.7,3422.6,3553.8,3691.1,3830.1,3978.7,4137.5,4305.3,4492.3,4721.5,4975.7,5257.2,5551.4,5861.3,6181.5,6517.3,6859.0,7200.4,7545.4,7894.4,8245.6,8601.4,8958.2,9316.9,9674.2,10033.6,10393.7,10754.7,11116.5])
BaseLogPartFctRef[ 462 ]['NbLabels']= 3
BaseLogPartFctRef[ 462 ]['NbCliques']= 7272
BaseLogPartFctRef[ 462 ]['NbSites']= 2888
BaseLogPartFctRef[ 462 ]['StdNgbhDivMoyNgbh']= 0.192011604121
BaseLogPartFctRef[463 ]={}
BaseLogPartFctRef[463]['LogPF']=np.array([3245.3,3371.0,3502.2,3638.7,3780.3,3927.4,4079.9,4247.1,4427.2,4628.6,4868.6,5137.0,5439.6,5749.4,6073.5,6407.1,6752.9,7106.4,7464.9,7827.4,8192.5,8559.9,8930.8,9301.3,9672.3,10045.1,10418.6,10792.9,11168.0,11543.7])
BaseLogPartFctRef[ 463 ]['NbLabels']= 3
BaseLogPartFctRef[ 463 ]['NbCliques']= 7558
BaseLogPartFctRef[ 463 ]['NbSites']= 2954
BaseLogPartFctRef[ 463 ]['StdNgbhDivMoyNgbh']= 0.188403104636
BaseLogPartFctRef[464 ]={}
BaseLogPartFctRef[464]['LogPF']=np.array([3224.4,3351.2,3482.4,3617.1,3758.1,3904.4,4055.5,4217.8,4394.6,4599.1,4831.0,5104.9,5405.5,5722.0,6053.2,6389.7,6734.4,7087.1,7446.2,7809.8,8172.7,8539.6,8906.5,9274.7,9645.1,10017.0,10389.3,10763.0,11137.1,11511.9])
BaseLogPartFctRef[ 464 ]['NbLabels']= 3
BaseLogPartFctRef[ 464 ]['NbCliques']= 7535
BaseLogPartFctRef[ 464 ]['NbSites']= 2935
BaseLogPartFctRef[ 464 ]['StdNgbhDivMoyNgbh']= 0.183660633269
V_Beta_Ref=np.array([0.,0.05,0.1,0.15,0.2,0.25,0.3,0.35,0.4,0.45,0.5,0.55,0.6,0.65,0.7,0.75,0.8,0.85,0.9,0.95,1.,1.05,1.1,1.15,1.2,1.25,1.3,1.35,1.4,1.45])
return BaseLogPartFctRef,V_Beta_Ref
#+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
def Cpt_Vec_Estim_lnZ_Graph_fast(RefGraph,LabelsNb,MaxErrorAllowed=5,BetaMax=1.4,BetaStep=0.05):
"""
    Estimate ln(Z(beta)) of Potts fields. The default beta grid runs from 0. to 1.4 with
    a step of 0.05. An extrapolation algorithm is used. Fast estimates are only performed for
    fields with 2 or 3 labels. Reference partition functions were pre-computed on fields
    designed on regular and non-regular grids. They all respect a 6-connectivity system.
input:
* RefGraph: List which contains the connectivity graph. Each entry represents a node of the graph
and contains the list of its neighbors entry location in the graph.
ex: RefGraph[2][3]=10 means 3rd neighbour of the 2nd node is the 10th node. => There exists i such that RefGraph[10][i]=2
* LabelsNb: possible number of labels in each site of the graph
* MaxErrorAllowed: maximum error allowed in the graph estimation (in percents).
* BetaMax: Z(beta,mask) will be computed for beta between 0 and BetaMax. Maximum considered value is 1.4
* BetaStep: gap between two considered values of beta. Actual gaps are not exactly those asked but very close.
output:
* Est_lnZ: Vector containing the ln(Z(beta)) estimates
* V_Beta: Vector of the same size as VecExpectZ containing the corresponding beta value
"""
#launch a more general algorithm if the inputs are not appropriate
if (LabelsNb!=2 and LabelsNb!=3) or BetaMax>1.4:
[Est_lnZ,V_Beta]=Cpt_Vec_Estim_lnZ_Graph(RefGraph,LabelsNb,SamplesNb=30,BetaMax=BetaMax,BetaStep=BetaStep,GraphWeight=None)
return Est_lnZ,V_Beta
#initialisation
#...default returned values
V_Beta=np.array([0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0,1.1,1.2,1.3,1.4])
Est_lnZ=np.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
#...load reference partition functions
[BaseLogPartFctRef,V_Beta_Ref]=LoadBaseLogPartFctRef()
NbSites=[len(RefGraph)]
NbCliques=[sum( [len(nl) for nl in RefGraph] ) ]
#...NbSites
s=len(RefGraph)
#...NbCliques
NbCliquesTmp=0
for j in xrange(len(RefGraph)):
NbCliquesTmp=NbCliquesTmp+len(RefGraph[j])
c=NbCliquesTmp/2
NbCliques.append(c)
#...StdVal Nb neighbors / Moy Nb neighbors
StdValCliquesPerSiteTmp=0.
nc = NbCliques[-1] + 0.
ns = NbSites[-1] + 0.
for j in xrange(len(RefGraph)):
StdValCliquesPerSiteTmp = StdValCliquesPerSiteTmp \
+ ( (nc/ns-len(RefGraph[j])/2.)**2. ) / ns
StdNgbhDivMoyNgbh = np.sqrt(StdValCliquesPerSiteTmp) \
/ ( nc/(ns-1.) )
#extrapolation algorithm
Best_MaxError=10000000.
for i in BaseLogPartFctRef.keys():
if BaseLogPartFctRef[i]['NbLabels']==LabelsNb:
            MaxError=np.abs((BaseLogPartFctRef[i]['NbSites']-1.)*((1.*c)/(1.*BaseLogPartFctRef[i]['NbCliques']))-(s-1.))*np.log(LabelsNb*1.) #error at beta=0
            MaxError=MaxError+(np.abs(BaseLogPartFctRef[i]['StdNgbhDivMoyNgbh']-StdNgbhDivMoyNgbh)) #penalty added to the error at zero to penalize different homogeneities of the neighbourhood (a barrier function would be cleaner for the conversion into percents)
            MaxError=MaxError*100./(s*np.log(LabelsNb*1.)) #to obtain a percentage of error
if MaxError<Best_MaxError:
Best_MaxError=MaxError
BestI=i
if Best_MaxError<MaxErrorAllowed:
Est_lnZ=((c*1.)/(BaseLogPartFctRef[BestI]['NbCliques']*1.))*BaseLogPartFctRef[BestI]['LogPF']+(1-(c*1.)/(BaseLogPartFctRef[BestI]['NbCliques']*1.))*np.log(LabelsNb*1.)
V_Beta=V_Beta_Ref.copy()
else:
#print 'launch an adapted function'
[Est_lnZ,V_Beta] = Cpt_Vec_Estim_lnZ_Graph(RefGraph,LabelsNb,SamplesNb=30,BetaMax=BetaMax,BetaStep=BetaStep,GraphWeight=None)
#print 'V_Beta be4 resampling:', V_Beta
#reduction of the domain
if (BetaMax<1.4):
temp=0
while V_Beta[temp]<BetaMax and temp<V_Beta.shape[0]-2:
temp=temp+1
V_Beta=V_Beta[:temp]
Est_lnZ=Est_lnZ[:temp]
#domain resampling
resamplingMethod = 'ply'
if (abs(BetaStep-0.05)>0.0001):
if resamplingMethod == 'linear':
v_Beta_Resample = []
cpt=0.
while cpt<BetaMax+0.0001:
v_Beta_Resample.append(cpt)
cpt=cpt+BetaStep
Est_lnZ = resampleToGrid(np.array(V_Beta),np.array(Est_lnZ),
np.array(v_Beta_Resample))
V_Beta = v_Beta_Resample
elif resamplingMethod == 'ply':
interpolator = scipy.interpolate.interp1d(V_Beta, Est_lnZ,
kind='cubic')
#print 'V_Beta[-1]+BetaStep:', V_Beta[-1],'+',BetaStep
targetBeta = np.arange(0, BetaMax + BetaStep, BetaStep)
Est_lnZ = interpolator(targetBeta)
#Est_lnZ = scipy.interpolate.spline(V_Beta, Est_lnZ, targetBeta,
# order=3)
#Est_lnZ = scipy.interpolate.krogh_interpolate(V_Beta, Est_lnZ,
# targetBeta)
#print 'Est_lnZ:', Est_lnZ
#print 'targetBeta:', targetBeta
V_Beta = targetBeta
else:
raise Exception('Unknown resampling method: %s' %resamplingMethod)
return Est_lnZ,V_Beta
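#Illustrative usage sketch (not part of the original module). The toy graph
#below is hypothetical; in practice RefGraph comes from the data mask.
#
#    toy_graph = [[1], [0, 2], [1, 3], [2]]      # 4-node chain, 3 cliques
#    lnz, betas = Cpt_Vec_Estim_lnZ_Graph_fast(toy_graph, LabelsNb=2,
#                                              MaxErrorAllowed=5)
#    # lnz[i] approximates ln(Z(betas[i])) for this graph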
#+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
def Cpt_Vec_Estim_lnZ_Graph_fast2(RefGraph,BetaMax=1.4,BetaStep=0.05):
"""
Estimate ln(Z(beta)) of Ising fields (2 labels). The default Beta grid is between 0. and 1.4 with
    a step of 0.05. Bilinear estimation with the number of sites and cliques is used. The bilinear functions
were estimated using bilinear regression on reference partition functions on 240 non-regular grids and with
respect to a 6-connectivity system. (Pfs are found in LoadBaseLogPartFctRef -> PFs 0:239)
input:
* RefGraph: List which contains the connectivity graph. Each entry represents a node of the graph
and contains the list of its neighbors entry location in the graph.
ex: RefGraph[2][3]=10 means 3rd neighbour of the 2nd node is the 10th node. => There exists i such that RefGraph[10][i]=2
* BetaMax: Z(beta,mask) will be computed for beta between 0 and BetaMax. Maximum considered value is 1.4
* BetaStep: gap between two considered values of beta. Actual gaps are not exactly those asked but very close.
output:
* Est_lnZ: Vector containing the ln(Z(beta)) estimates
* V_Beta: Vector of the same size as VecExpectZ containing the corresponding beta value
"""
    #this fast variant is specific to Ising fields (2 labels)
    LabelsNb=2
    #launch a more general algorithm if the inputs are not appropriate
    if BetaMax>1.4:
        [Est_lnZ,V_Beta]=Cpt_Vec_Estim_lnZ_Graph(RefGraph,LabelsNb,SamplesNb=20,BetaMax=BetaMax,BetaStep=BetaStep,GraphWeight=None)
        return Est_lnZ,V_Beta
#initialisation
#...default returned values
V_Beta=np.array([0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0,1.1,1.2,1.3,1.4])
Est_lnZ=np.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
#...NbSites
s=len(RefGraph)
#...NbCliques
NbCliquesTmp=0
for j in xrange(len(RefGraph)):
NbCliquesTmp=NbCliquesTmp+len(RefGraph[j])
c=NbCliquesTmp/2
#extrapolation algorithm
Est_lnZ[0]= 0.051 * c + 0.693 * s - 0.0004
Est_lnZ[1]= 0.105 * c + 0.692 * s + 0.003
Est_lnZ[2]= 0.162 * c + 0.691 * s + 0.012
Est_lnZ[3]= 0.224 * c + 0.686 * s + 0.058
Est_lnZ[4]= 0.298 * c + 0.663 * s + 0.314
Est_lnZ[5]= 0.406 * c + 0.580 * s + 1.26
Est_lnZ[6]= 0.537 * c + 0.467 * s + 2.34
Est_lnZ[7]= 0.669 * c + 0.363 * s + 3.07
Est_lnZ[8]= 0.797 * c + 0.281 * s + 3.39
Est_lnZ[9]= 0.919 * c + 0.219 * s + 3.41
Est_lnZ[10]=1.035 * c + 0.173 * s + 3.28
Est_lnZ[11]=1.148 * c + 0.137 * s + 3.08
Est_lnZ[12]=1.258 * c + 0.110 * s + 2.87
Est_lnZ[13]=1.366 * c + 0.089 * s + 2.66
#reduction of the domain
if (BetaMax<1.4):
temp=0
while V_Beta[temp]<BetaMax and temp<V_Beta.shape[0]-2:
temp=temp+1
V_Beta=V_Beta[:temp]
Est_lnZ=Est_lnZ[:temp]
#domain resampling
if (abs(BetaStep-0.05)>0.0001):
v_Beta_Resample=[]
cpt=0.
while cpt<BetaMax+0.0001:
v_Beta_Resample.append(cpt)
cpt=cpt+BetaStep
Est_lnZ=resampleToGrid(np.array(V_Beta),np.array(Est_lnZ),np.array(v_Beta_Resample))
V_Beta=v_Beta_Resample
return Est_lnZ,V_Beta
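#Illustrative usage sketch (not part of the original module): the bilinear
#approximation only needs the site and clique counts of the hypothetical
#graph below.
#
#    toy_graph = [[1, 2], [0, 2], [0, 1]]        # 3-node fully connected graph
#    lnz, betas = Cpt_Vec_Estim_lnZ_Graph_fast2(toy_graph, BetaMax=1.4,
#                                               BetaStep=0.05)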
def logpf_ising_onsager(size, beta):
"""
Calculate log partition function in terms of beta for an Ising field
of size 'size'. 'beta' can be scalar or numpy.array.
    Assumptions: the field is 2D, square, toroidal and has 4-connectivity
"""
coshb = np.cosh(beta)
u = 2 * np.sinh(beta) / (coshb*coshb)
if np.isscalar(beta):
psi = np.zeros(1)
u = [u]
else:
psi = np.zeros(len(beta))
for iu,vu in enumerate(u):
x = np.arange(0, np.pi, 0.01)
sinx = np.sin(x)
y = np.log( (1 + (1 - vu*vu*sinx*sinx)**.5)/2 )
psi[iu] = 1/(2*np.pi) * np.trapz(y,x)
    #psi is added outside the log, consistently with Estim_lnZ_Onsager below
    if np.isscalar(beta):
        return size * (beta + np.log(2*coshb) + psi[0])
    else:
        return size * (beta + np.log(2*coshb) + psi)
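#Quick consistency sketch (not part of the original module): for a toroidal
#4-connected lattice of N sites, logpf_ising_onsager(N, beta) and
#Estim_lnZ_Onsager(N, beta) below should agree closely.
#
#    betas = np.arange(0., 1.2, 0.05)
#    lnz_vec = logpf_ising_onsager(64, betas)     # vectorised over beta
#    lnz_scalar = logpf_ising_onsager(64, 0.5)    # scalar beta also accepted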
#+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
def Estim_lnZ_Onsager(n,beta):
"""
Estimate ln(Z(beta)) using Onsager technique (2D periodic fields - 2 labels - 4 connectivity)
input:
* n: number of sites
* beta: beta
output:
* LogZ: ln(Z(beta)) estimate
"""
#estimate u(beta)
u=2.*scipy.sinh(beta)/(scipy.cosh(beta)*scipy.cosh(beta))
#estimate psi(u(beta))
NbSteps=1000
DeltaX=scipy.pi/NbSteps
integra=0.
for i in xrange(NbSteps):
x=scipy.pi*(i+0.5)/NbSteps
integra+=(scipy.log((1.+scipy.sqrt(1-u*u*scipy.sin(x)*scipy.sin(x)))/2.))*DeltaX
Psi=integra/(2.*scipy.pi)
#estimate Log(Z)
LogZ=n*(beta+scipy.log(2.*scipy.cosh(beta))+Psi)
return LogZ
#+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
def Cpt_Vec_Estim_lnZ_Onsager(n,BetaMax=1.2,BetaStep=0.05):
"""
Estimate ln(Z(beta)) Onsager using Onsager technique (2D periodic fields - 2 labels - 4 connectivity)
input:
* n: number of sites
* BetaMax: Z(beta,mask) will be computed for beta between 0 and BetaMax. Maximum considered value is 1.2.
* BetaStep: gap between two considered values of beta. Actual gaps are not exactly those asked but very close.
output:
* Est_lnZ: Vector containing the ln(Z(beta)) estimates
* V_Beta: Vector of the same size as VecExpectZ containing the corresponding beta value
"""
#initialization
BetaLoc=0.
ListBetaVal=[]
ListLnZ=[]
#compute the values of beta and lnZ
while BetaLoc<BetaMax:
LnZLoc=Estim_lnZ_Onsager(n,BetaLoc)
ListLnZ.append(LnZLoc)
ListBetaVal.append(BetaLoc)
BetaLoc=BetaLoc+BetaStep
#cast the result into an array
    Est_lnZ=np.array(ListLnZ)
    V_Beta=np.array(ListBetaVal)
#return the estimated values
return Est_lnZ,V_Beta
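#Illustrative usage sketch (not part of the original module): tabulate the
#Onsager approximation on the default beta grid for a hypothetical 8x8 field.
#
#    lnz, betas = Cpt_Vec_Estim_lnZ_Onsager(64, BetaMax=1.2, BetaStep=0.05)
#    # len(lnz) == len(betas); lnz[i] approximates ln(Z(betas[i]))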
#+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
def Cpt_Vec_Estim_lnZ_OLD_Graph(RefGraph,LabelsNb,SamplesNb=50,BetaMax=1.0,BetaStep=0.01,GraphWeight=None):
"""
    Deprecated: kept for reference only.
Estimates ln(Z) for fields of a given size and Beta values between 0 and BetaMax
input:
* RefGraph: List which contains the connectivity graph. Each entry represents a node of the graph
and contains the list of its neighbors entry location in the graph.
ex: RefGraph[2][3]=10 means 3rd neighbour of the 2nd node is the 10th node.
=> There exists i such that RefGraph[10][i]=2
* LabelsNb: number of labels
* BetaMax: Z(beta,mask) will be computed for beta between 0 and BetaMax
    * BetaStep: gap between two considered values of beta
* GraphWeight: Same shape as RefGraph. Each entry is the weight of the corresponding
edge in RefGraph. If not defined the weights are set to 1.0.
output:
* VecEstim_lnZ: Vector containing the ln(Z(beta,mask)) estimates
* VecBetaVal: Vector of the same size as VecExpectZ containing the corresponding beta value
"""
#initialization
BetaLoc=0
ListExpectU=[]
ListBetaVal=[]
if GraphWeight==None:
GraphWeight=CptDefaultGraphWeight(RefGraph)
GraphNodesLabels=CptDefaultGraphNodesLabels(RefGraph)
GraphLinks=CptDefaultGraphLinks(RefGraph)
RefGrphNgbhPosi=CptRefGrphNgbhPosi(RefGraph)
#compute all E(U|beta)...
while BetaLoc<BetaMax:
BetaLoc=BetaLoc+BetaStep
ExpU_loc=Cpt_Expected_U_graph(RefGraph,BetaLoc,LabelsNb,SamplesNb,GraphWeight=GraphWeight,GraphNodesLabels=GraphNodesLabels,GraphLinks=GraphLinks,RefGrphNgbhPosi=RefGrphNgbhPosi)
ListExpectU.append(ExpU_loc)
ListBetaVal.append(BetaLoc)
pyhrf.verbose(2, 'beta=%1.4f -> exp(U)=%1.4f' \
%(BetaLoc,ExpU_loc))
VecEstim_lnZ=np.zeros(len(ListExpectU))
VecBetaVal=np.zeros(len(ListExpectU))
for i in xrange(len(ListExpectU)):
VecBetaVal[i]=ListBetaVal[i]
if i==0:
VecEstim_lnZ[i]=len(RefGraph)*np.log(LabelsNb)+ListExpectU[0]*BetaStep/2
else:
VecEstim_lnZ[i]=VecEstim_lnZ[i-1]+(ListExpectU[i-1]+ListExpectU[i])*BetaStep/2
return VecEstim_lnZ,VecBetaVal
#+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
def Cpt_Exact_lnZ_graph(RefGraph,beta,LabelsNb,GraphWeight=None):
"""
    Computes the logarithm of the exact partition function Z(beta).
input:
* RefGraph: List which contains the connectivity graph. Each entry represents a node of the graph
and contains the list of its neighbors entry location in the graph.
ex: RefGraph[2][3]=10 means 3rd neighbour of the 2nd node is the 10th node.
=> There exists i such that RefGraph[10][i]=2
* beta: spatial regularization parameter
* LabelsNb: number of labels in each site (typically 2 or 3)
* GraphWeight: Same shape as RefGraph. Each entry is the weight of the corresponding
edge in RefGraph. If not defined the weights are set to 1.0.
output:
* exact_lnZ: exact value of ln(Z)
"""
#initialization
if GraphWeight==None:
GraphWeight=CptDefaultGraphWeight(RefGraph)
GraphNodesLabels=CptDefaultGraphNodesLabels(RefGraph)
VoxelsNb=len(RefGraph)
#sum of the energies U for all possible configurations of GraphNodesLabels
Config_ID_max=LabelsNb**VoxelsNb
Z_exact=0
if LabelsNb==2: #use of np.binary_repr instead of np.base_repr because it is far faster but only designed for base two numbers
for Config_ID in xrange(Config_ID_max):
if Config_ID==0: #handle a problem with 'binary_repr' which write binary_repr(0,VoxelsNb)='0'
for i in xrange(VoxelsNb):
GraphNodesLabels[i]=0
else:
for i in xrange(VoxelsNb):
GraphNodesLabels[i]=int(np.binary_repr(Config_ID,VoxelsNb)[i])
#print GraphNodesLabels
Z_exact=Z_exact+np.exp(beta*Cpt_U_graph(RefGraph,GraphNodesLabels,GraphWeight=GraphWeight))
else:
for Config_ID in xrange(Config_ID_max):
for i in xrange(VoxelsNb):
GraphNodesLabels[i]=int(np.base_repr(Config_ID,base=LabelsNb,padding=VoxelsNb)[-1-i])
#print GraphNodesLabels
Z_exact=Z_exact+np.exp(beta*Cpt_U_graph(RefGraph,GraphNodesLabels,GraphWeight=GraphWeight))
exact_lnZ=np.log(Z_exact)
return exact_lnZ
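#Illustrative usage sketch (not part of the original module). The exact
#computation enumerates LabelsNb**NbSites configurations, so it is only
#tractable for very small hypothetical graphs such as this 4-node cycle.
#
#    cycle4 = [[1, 3], [0, 2], [1, 3], [0, 2]]
#    exact = Cpt_Exact_lnZ_graph(cycle4, beta=0.5, LabelsNb=2)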
#+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
def Cpt_Distrib_P_beta_graph(RefGraph,GraphNodesLabels,VecEstim_lnZ,VecBetaVal,
thresh=1.5,GraphWeight=None):
"""
Computes the distribution P(beta|q)
input:
* RefGraph: List which contains the connectivity graph. Each entry represents a node of the graph
and contains the list of its neighbors entry location in the graph.
ex: RefGraph[2][3]=10 means 3rd neighbour of the 2nd node is the 10th node.
=> There exists i such that RefGraph[10][i]=2
* GraphNodesLabels: Nodes labels. GraphNodesLabels[i] is the node i label.
* VecEstim_lnZ: Vector containing the ln(Z(beta,mask)) estimates (in accordance with the defined graph).
* VecBetaVal: Vector of the same size as VecExpectZ containing the corresponding beta
value (in accordance with the defined graph).
* thresh: the prior on beta is uniform between 0 and thresh and linearly decrease between thresh and VecBetaVal[-1]
* GraphWeight: Same shape as RefGraph. Each entry is the weight of the corresponding
edge in RefGraph. If not defined the weights are set to 1.0.
output:
    * Vec_P_Beta: contains the P(beta|q) values (consistent with VecBetaVal).
"""
#initialization
if GraphWeight==None:
GraphWeight=CptDefaultGraphWeight(RefGraph)
BetaStep=VecBetaVal[1]-VecBetaVal[0]
BetaLoc=0
Vec_P_Beta=VecEstim_lnZ*0.0
#computes all P(beta_i|q_i)
#cpt the Energy
Energy=Cpt_U_graph(RefGraph,GraphNodesLabels,GraphWeight=GraphWeight)
#print 'Energy:', Energy
#prior normalization
if thresh>VecBetaVal[-1]:
thresh=VecBetaVal[-1]
PriorDenomi=(thresh)+((VecBetaVal[-1]-thresh)/2.)
#print "PriorDenomi:", PriorDenomi
for i in xrange(VecEstim_lnZ.shape[0]):
#print 'VecBetaVal:', VecBetaVal[i]
#print thresh
if VecBetaVal[i] < thresh:
log_P_Beta = -VecEstim_lnZ[i]+Energy*VecBetaVal[i]
#print 'log_P_Beta:', log_P_Beta
Vec_P_Beta[i]=np.exp(log_P_Beta.astype(float_hires))
elif (VecBetaVal[-1] - VecBetaVal[i]) == 0.:
Vec_P_Beta[i] = 0.
else:
P_beta = (VecBetaVal[-1] - VecBetaVal[i]) / (VecBetaVal[-1]-thresh)
#print 'P_beta:', P_beta
log_P_Beta = -VecEstim_lnZ[i]+Energy*VecBetaVal[i] + \
np.log(P_beta/PriorDenomi)
#print 'log_P_Beta:', log_P_Beta
Vec_P_Beta[i]=np.exp(log_P_Beta.astype(float_hires))
if np.isnan(Vec_P_Beta[i]):
Vec_P_Beta[i] = 0.
#print 'Vec_P_Beta[i]', Vec_P_Beta[i]
#print 'BetaStep:', BetaStep
#print 'Vec_P_Beta:', Vec_P_Beta, Vec_P_Beta.sum()
#print '(BetaStep*Vec_P_Beta.sum())', (BetaStep*Vec_P_Beta.sum())
#Vec_P_Beta normalization
Vec_P_Beta=Vec_P_Beta/(BetaStep*Vec_P_Beta.sum())
return Vec_P_Beta
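#Illustrative usage sketch (not part of the original module): combine a
#pre-computed ln(Z) grid with an observed label field to tabulate p(beta|q).
#Both the graph and the labels below are hypothetical.
#
#    toy_graph = [[1], [0, 2], [1, 3], [2]]
#    labels = np.array([0, 0, 1, 1])
#    lnz, betas = Cpt_Vec_Estim_lnZ_Graph_fast(toy_graph, 2)
#    p_beta = Cpt_Distrib_P_beta_graph(toy_graph, labels, lnz, betas)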
#+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
def Cpt_AcceptNewBeta_Graph(RefGraph,GraphNodesLabels,VecEstim_lnZ,VecBetaVal,CurrentBeta,sigma,thresh=1.2,GraphWeight=None):
"""
Starting from a given Beta vector (1 value for each condition) 'CurrentBeta', computes new Beta values in 'NewBeta'
using a Metropolis-Hastings step.
input:
* RefGraph: List which contains the connectivity graph. Each entry represents a node of the graph
and contains the list of its neighbors entry location in the graph.
ex: RefGraph[2][3]=10 means 3rd neighbour of the 2nd node is the 10th node.
=> There exists i such that RefGraph[10][i]=2
* GraphWeight: Same shape as RefGraph. Each entry is the weight of the corresponding
edge in RefGraph. If not defined the weights are set to 1.0.
* GraphNodesLabels: Nodes labels. GraphNodesLabels[i] is the node i label.
* VecEstim_lnZ: Vector containing the ln(Z(beta,mask)) estimates (in accordance with the defined mask).
* VecBetaVal: Vector of the same size as VecExpectZ containing the corresponding beta
value (in accordance with the defined mask).
* CurrentBeta: Beta at the current iteration
* sigma: such as NewBeta = CurrentBeta + N(0,sigma)
* thresh: the prior on beta is uniform between 0 and thresh and linearly decrease between thresh and VecBetaVal[-1]
* GraphWeight: Same shape as RefGraph. Each entry is the weight of the corresponding
edge in RefGraph. If not defined the weights are set to 1.0.
output:
* NewBeta: Contains the accepted beta value at the next iteration
"""
#1) initialization
betaMax = VecBetaVal[-1]
betaMin = VecBetaVal[0]
priorBetaMax = betaMax
BetaStep=VecBetaVal[1]-VecBetaVal[0]
if thresh>betaMax or thresh<0:
thresh = betaMax
if GraphWeight==None:
GraphWeight=CptDefaultGraphWeight(RefGraph)
bInf = (betaMin-CurrentBeta)/sigma
bSup = (betaMax-CurrentBeta)/sigma
## print 'betaMax :', betaMax, 'CurrentBeta :', CurrentBeta, 'sigma:', sigma
## print 'cur beta :', CurrentBeta
## print ' bInf =', bInf, 'bSup =', bSup
u = truncRandn(1, a=bInf, b=bSup)
#print ' u = ', u, 'sigma = ', sigma, '-> u*s=', u*sigma
dBeta = sigma*u[0]
NewBeta = CurrentBeta + dBeta
#print '-> proposed beta:', NewBeta
assert (NewBeta <= betaMax) and (NewBeta >= betaMin)
#3.1) computes ln(Z(CurrentBeta|Mask)) estimation
i=0
while VecBetaVal[i]<CurrentBeta:
i=i+1
# First order interpolation of precomputed log-PF at currentBeta:
ln_Z1_estim= VecEstim_lnZ[i-1] * (VecBetaVal[i]-CurrentBeta)/(VecBetaVal[i]-VecBetaVal[i-1]) \
+VecEstim_lnZ[i] * (CurrentBeta-VecBetaVal[i-1])/(VecBetaVal[i]-VecBetaVal[i-1])
#3.1.b) compute the prior P(CurrentBeta) thresh=VecBetaVal[-1]
#prior normalization
if thresh>betaMax:
thresh=betaMax
PriorDenomi=(thresh)+((betaMax-thresh)/2.)
if CurrentBeta<thresh:
P_cur_beta=1.
elif CurrentBeta<betaMax:
P_cur_beta=(betaMax-CurrentBeta)/(betaMax-thresh)
else:
P_cur_beta=0.000001
#3.2) computes ln(P(CurrentBeta|q,Mask)) estimation
Energy=Cpt_U_graph(RefGraph,GraphNodesLabels,GraphWeight=GraphWeight)
log_P_CurrentBeta = -ln_Z1_estim + Energy*CurrentBeta + np.log(P_cur_beta/PriorDenomi)
#4.1) computes ln(Z(NewBeta|Mask)) estimation
i=0
while VecBetaVal[i]<NewBeta:
i=i+1
# First order interpolation of precomputed log-PF at NewBeta:
ln_Z2_estim=VecEstim_lnZ[i-1]*(VecBetaVal[i]-NewBeta)/(VecBetaVal[i]-VecBetaVal[i-1])+VecEstim_lnZ[i]*(NewBeta-VecBetaVal[i-1])/(VecBetaVal[i]-VecBetaVal[i-1])
#4.1.b) compute the prior P(NewBeta)
if NewBeta<thresh:
P_new_beta=1.
elif NewBeta<betaMax:
P_new_beta=(betaMax-NewBeta)/(betaMax-thresh)
else:
P_new_beta=0.000001
#4.2) computes ln(P(NewBeta|q,Mask)) estimation
log_P_NewBeta=-ln_Z2_estim+Energy*NewBeta+np.log(P_new_beta/PriorDenomi)
#5) compute A_NewBeta and accept or not the new beta value
sigsqrt2 = sigma*2**.5
    # the proposal normalisation constants enter the acceptance ratio through their logs
    log_g_new = np.log(erf((betaMax-NewBeta)/sigsqrt2) - erf((betaMin-NewBeta)/sigsqrt2))
    log_g_cur = np.log(erf((betaMax-CurrentBeta)/sigsqrt2) - erf((betaMin-CurrentBeta)/sigsqrt2))
temp = np.exp(log_P_NewBeta - log_P_CurrentBeta + log_g_cur - log_g_new)
A_NewBeta=min(1,temp)
#print 'Accept ratio :', A_NewBeta
if np.random.rand() > A_NewBeta:
#print ' -> rejected !'
NewBeta = CurrentBeta
return NewBeta, dBeta, A_NewBeta
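#Illustrative usage sketch (not part of the original module): one
#Metropolis-Hastings update of beta given hypothetical labels and a
#pre-computed ln(Z) grid.
#
#    toy_graph = [[1], [0, 2], [1, 3], [2]]
#    labels = [0, 0, 1, 1]
#    lnz, betas = Cpt_Vec_Estim_lnZ_Graph_fast(toy_graph, 2)
#    beta, d_beta, accept_ratio = Cpt_AcceptNewBeta_Graph(toy_graph, labels,
#                                                         lnz, betas,
#                                                         CurrentBeta=0.5,
#                                                         sigma=0.05)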
def beta_estim_obs_field(graph, labels, gridLnz, method='MAP',weights=None):
"""
Estimate the amount of spatial correlation of an Ising observed field.
'graph' is the neighbours list defining the topology
'labels' is the field realisation
'gridLnz' is the log-partition function associated to the topology, ie a grid
where gridLnz[0] stores values of lnz and gridLnz[1] stores corresponding values of
beta.
Return :
- estimated beta
- tabulated distribution p(beta|labels)
"""
if method == 'MAP':
# p(beta | labels):
pBeta = Cpt_Distrib_P_beta_graph(graph, labels, gridLnz[0], gridLnz[1])
#print 'pBeta:', pBeta/pBeta.sum()
#print 'gridLnz:', gridLnz[1]
pBeta /= pBeta.sum()
postMean = (gridLnz[1]*pBeta).sum()
#varBetaEstim = (gridLnz[1]*gridLnz[1]*pBeta).sum() - postMean*postMean
return postMean, pBeta
elif method == 'ML':
hc = count_homo_cliques(graph, labels, weights)
#print 'hc:', hc
logll = gridLnz[1] * hc - gridLnz[0]
#print 'log likelihood:', logll.dtype
#print logll
pBeta = np.exp(logll.astype(float_hires))
#print 'pBeta unnorm:', pBeta
pBeta = pBeta/pBeta.sum()
#print 'pBeta:', pBeta
betaML = gridLnz[1][np.argmax(logll)]
#dlpf = np.diff(lpf) / dbeta
#gamma = dlpf/lpf[1:]
#print 'gamma:', gamma
#dbetaGrid = betaGrid[1:]
#print 'dbetaGrid:', dbetaGrid
#betaML = dbetaGrid[closestsorted(gamma, hc)]
return betaML, pBeta
else:
raise Exception('Unknown method %s' %method)
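#Illustrative usage sketch (not part of the original module): MAP and ML
#estimates of beta from an observed hypothetical label field, using the fast
#partition-function grid computed above.
#
#    toy_graph = [[1], [0, 2], [1, 3], [2]]
#    labels = np.array([0, 0, 1, 1])
#    grid_lnz = Cpt_Vec_Estim_lnZ_Graph_fast(toy_graph, 2)
#    beta_map, p_beta = beta_estim_obs_field(toy_graph, labels, grid_lnz, 'MAP')
#    beta_ml, _ = beta_estim_obs_field(toy_graph, labels, grid_lnz, method='ML')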
#################
# Beta Sampling #
#################
class BetaSampler(xmlio.XMLParamDrivenClass, GibbsSamplerVariable):
P_VAL_INI = 'initialValue'
P_SAMPLE_FLAG = 'sampleFlag'
P_USE_TRUE_VALUE = 'useTrueValue'
P_PR_BETA_CUT = 'priorBetaCut'
P_SIGMA = 'MH_sigma'
P_PARTITION_FUNCTION = 'partitionFunction'
P_PARTITION_FUNCTION_METH = 'partitionFunctionMethod'
# parameters definitions and default values :
defaultParameters = {
P_SAMPLE_FLAG : True,
P_USE_TRUE_VALUE : False,
P_VAL_INI : np.array([0.7]),
P_SIGMA : 0.05,
P_PR_BETA_CUT : 1.2,
P_PARTITION_FUNCTION_METH : 'es',
P_PARTITION_FUNCTION : None,
}
if pyhrf.__usemode__ == pyhrf.DEVEL:
        parametersToShow = [P_SAMPLE_FLAG, P_VAL_INI, P_SIGMA, P_PR_BETA_CUT,
P_USE_TRUE_VALUE,
P_PARTITION_FUNCTION, P_PARTITION_FUNCTION_METH,]
elif pyhrf.__usemode__ == pyhrf.ENDUSER:
parametersToShow = [P_SAMPLE_FLAG, P_VAL_INI]
parametersComments = {
P_PARTITION_FUNCTION_METH : \
'either "es" (extrapolation scheme) or "ps" (path sampling)',
}
#P_BETA : 'Amount of spatial correlation.\n Recommanded between 0.0 and'\
# ' 0.7',
def __init__(self, parameters=None, xmlHandler=NumpyXMLHandler(),
xmlLabel=None, xmlComment=None):
xmlio.XMLParamDrivenClass.__init__(self, parameters, xmlHandler,
xmlLabel, xmlComment)
sampleFlag = self.parameters[self.P_SAMPLE_FLAG]
valIni = self.parameters[self.P_VAL_INI]
useTrueValue = self.parameters[self.P_USE_TRUE_VALUE]
an = ['condition']
GibbsSamplerVariable.__init__(self,'beta', valIni=valIni,
sampleFlag=sampleFlag,
useTrueValue=useTrueValue,
axes_names=an,
value_label='PM Beta')
self.priorBetaCut = self.parameters[self.P_PR_BETA_CUT]
self.gridLnZ = self.parameters[self.P_PARTITION_FUNCTION]
self.pfMethod = self.parameters[self.P_PARTITION_FUNCTION_METH]
self.currentDB = None
self.currentAcceptRatio = None
def linkToData(self, dataInput):
self.dataInput = dataInput
nbc = self.nbConditions = self.dataInput.nbConditions
self.sigma = np.zeros(nbc, dtype=float) + self.parameters[self.P_SIGMA]
self.nbClasses = self.samplerEngine.getVariable('nrl').nbClasses
self.nbVox = dataInput.nbVoxels
self.pBeta = [ [] for c in xrange(self.nbConditions) ]
self.betaWalk = [ [] for c in xrange(self.nbConditions) ]
self.acceptBeta = [ [] for c in xrange(self.nbConditions) ]
self.valIni = self.parameters[self.P_VAL_INI]
def checkAndSetInitValue(self, variables):
if self.useTrueValue :
if self.trueValue is not None:
self.currentValue = self.trueValue
elif self.valIni is not None:
self.currentValue = self.valIni
else:
raise Exception('Needed a true value for drift init but '\
'None defined')
if self.currentValue is not None:
self.currentValue = np.zeros(self.nbConditions, dtype=float) \
+ self.currentValue[0]
def samplingWarmUp(self, variables):
if self.sampleFlag:
self.loadBetaGrid()
def loadBetaGrid(self):
if self.gridLnZ is None:
g = self.dataInput.neighboursIndexes
g = np.array([l[l!=-1] for l in g],dtype=object)
if self.pfMethod == 'es':
self.gridLnZ = Cpt_Vec_Estim_lnZ_Graph_fast3(g, self.nbClasses)
pyhrf.verbose(2, 'lnz ES ...')
#print self.gridLnZ
elif self.pfMethod == 'ps':
self.gridLnZ = Cpt_Vec_Estim_lnZ_Graph(g, self.nbClasses)
pyhrf.verbose(2, 'lnz PS ...')
#HACK
#import matplotlib.pyplot as plt
#print 'nbClasses:', self.nbClasses
#print 'g:', g
#lnz_ps = Cpt_Vec_Estim_lnZ_Graph(g, self.nbClasses)
#lnz_es = Cpt_Vec_Estim_lnZ_Graph_fast3(g, self.nbClasses)
#plt.plot(lnz_es[1][:29],lnz_es[0][:29],'r-')
#plt.plot(lnz_ps[1],lnz_ps[0],'b-')
#plt.show()
#sys.exit(0)
#HACK
def sampleNextInternal(self, variables):
snrls = self.samplerEngine.getVariable('nrl')
for cond in xrange(self.nbConditions):
vlnz, vb = self.gridLnZ
g = self.dataInput.neighboursIndexes
labs = self.samplerEngine.getVariable('nrl').labels[cond,:]
t = self.priorBetaCut
b, db, a = Cpt_AcceptNewBeta_Graph(g, labs, vlnz, vb,
self.currentValue[cond],
self.sigma[0], thresh=t)
self.currentDB = db
self.currentAcceptRatio = a
self.currentValue[cond] = b
msg = "beta cond %d: %f" %(cond, self.currentValue[cond])
pyhrf.verbose.printNdarray(5, msg)
def get_string_value(self, v):
return get_2Dtable_string(v, self.dataInput.cNames, ['pm_beta'])
def saveCurrentValue(self, it):
GibbsSamplerVariable.saveCurrentValue(self,it)
if 0 and self.sampleFlag and self.currentDB is not None:
for cond in xrange(self.nbConditions):
vlnz, vb = self.gridLnZ
g = self.dataInput.neighboursIndexes
labs = self.samplerEngine.getVariable('nrl').labels[cond,:]
t = self.priorBetaCut
self.betaWalk[cond].append(self.currentDB)
self.pBeta[cond].append(Cpt_Distrib_P_beta_graph(g, labs, vlnz,
vb, thresh=t))
self.acceptBeta[cond].append(self.currentAcceptRatio)
def getOutputs(self):
outputs = {}
if pyhrf.__usemode__ == pyhrf.DEVEL:
outputs = GibbsSamplerVariable.getOutputs(self)
cn = self.dataInput.cNames
axes_names = ['condition', 'voxel']
axes_domains = {'condition' : cn}
nbv, nbc = self.nbVox, self.nbConditions
repeatedBeta = np.repeat(self.mean, nbv).reshape(nbc, nbv)
outputs['pm_beta_mapped'] = xndarray(repeatedBeta,
axes_names=axes_names,
axes_domains=axes_domains,
value_label="pm Beta")
if self.sampleFlag:
if self.pBeta is not None and len(self.pBeta[0])>0:
axes_names = ['condition', 'iteration', 'beta']
axes_domains = {'beta':self.gridLnZ[1], 'condition':cn,
'iteration':self.smplHistoryIts[1:]}
pBeta = np.array(self.pBeta)
#print 'pBeta.shape:', pBeta.shape
outputs['pBetaApost'] = xndarray(pBeta, axes_names=axes_names,
value_label="proba",
axes_domains=axes_domains)
if self.betaWalk is not None and len(self.betaWalk[0])>0:
axes_names = ['condition', 'iteration']
axes_domains = {'condition' : cn,
'iteration':self.smplHistoryIts[1:]}
betaWalks = np.array(self.betaWalk)
#print 'betaWalk hist :', betaWalks.shape
outputs['betaWalkHist'] = xndarray(betaWalks,
axes_names=axes_names,
value_label="dBeta",
axes_domains=axes_domains)
if self.acceptBeta is not None and len(self.acceptBeta[0])>0:
axes_names = ['condition', 'iteration']
axes_domains = {'condition' : cn,
'iteration':self.smplHistoryIts[1:]}
acceptBetaHist = np.array(self.acceptBeta)
#print 'acceptBeta hist :', acceptBetaHist.shape
outputs['acceptBetaHist'] = xndarray(acceptBetaHist,
axes_names=axes_names,
value_label="Accept",
axes_domains=axes_domains)
axes_names = ['beta']
axes_domains = {'beta':self.gridLnZ[1]}
nc = self.dataInput.nbCliques
outputs['lnZ_div_NbCliques'] = xndarray(self.gridLnZ[0]/nc,
axes_names=axes_names,
value_label="lnZ/#cliques",
axes_domains=axes_domains)
return outputs
|
gpl-3.0
|
elijah513/scikit-learn
|
examples/ensemble/plot_forest_iris.py
|
335
|
6271
|
"""
====================================================================
Plot the decision surfaces of ensembles of trees on the iris dataset
====================================================================
Plot the decision surfaces of forests of randomized trees trained on pairs of
features of the iris dataset.
This plot compares the decision surfaces learned by a decision tree classifier
(first column), by a random forest classifier (second column), by an extra-
trees classifier (third column) and by an AdaBoost classifier (fourth column).
In the first row, the classifiers are built using the sepal width and the sepal
length features only, on the second row using the petal length and sepal length
only, and on the third row using the petal width and the petal length only.
In descending order of quality, when trained (outside of this example) on all
4 features using 30 estimators and scored using 10 fold cross validation, we see::
ExtraTreesClassifier() # 0.95 score
RandomForestClassifier() # 0.94 score
AdaBoost(DecisionTree(max_depth=3)) # 0.94 score
DecisionTree(max_depth=None) # 0.94 score
Increasing `max_depth` for AdaBoost lowers the standard deviation of the scores (but
the average score does not improve).
See the console's output for further details about each model.
In this example you might try to:
1) vary the ``max_depth`` for the ``DecisionTreeClassifier`` and
``AdaBoostClassifier``, perhaps try ``max_depth=3`` for the
``DecisionTreeClassifier`` or ``max_depth=None`` for ``AdaBoostClassifier``
2) vary ``n_estimators``
It is worth noting that RandomForests and ExtraTrees can be fitted in parallel
on many cores as each tree is built independently of the others. AdaBoost's
estimators are built sequentially and so cannot make use of multiple cores.
"""
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from sklearn import clone
from sklearn.datasets import load_iris
from sklearn.ensemble import (RandomForestClassifier, ExtraTreesClassifier,
AdaBoostClassifier)
from sklearn.externals.six.moves import xrange
from sklearn.tree import DecisionTreeClassifier
# Parameters
n_classes = 3
n_estimators = 30
plot_colors = "ryb"
cmap = plt.cm.RdYlBu
plot_step = 0.02 # fine step width for decision surface contours
plot_step_coarser = 0.5 # step widths for coarse classifier guesses
RANDOM_SEED = 13 # fix the seed on each iteration
# Load data
iris = load_iris()
plot_idx = 1
models = [DecisionTreeClassifier(max_depth=None),
RandomForestClassifier(n_estimators=n_estimators),
ExtraTreesClassifier(n_estimators=n_estimators),
AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),
n_estimators=n_estimators)]
for pair in ([0, 1], [0, 2], [2, 3]):
for model in models:
# We only take the two corresponding features
X = iris.data[:, pair]
y = iris.target
# Shuffle
idx = np.arange(X.shape[0])
np.random.seed(RANDOM_SEED)
np.random.shuffle(idx)
X = X[idx]
y = y[idx]
# Standardize
mean = X.mean(axis=0)
std = X.std(axis=0)
X = (X - mean) / std
# Train
clf = clone(model)
clf = model.fit(X, y)
scores = clf.score(X, y)
# Create a title for each column and the console by using str() and
# slicing away useless parts of the string
model_title = str(type(model)).split(".")[-1][:-2][:-len("Classifier")]
model_details = model_title
if hasattr(model, "estimators_"):
model_details += " with {} estimators".format(len(model.estimators_))
print( model_details + " with features", pair, "has a score of", scores )
plt.subplot(3, 4, plot_idx)
if plot_idx <= len(models):
# Add a title at the top of each column
plt.title(model_title)
# Now plot the decision boundary using a fine mesh as input to a
# filled contour plot
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step),
np.arange(y_min, y_max, plot_step))
# Plot either a single DecisionTreeClassifier or alpha blend the
# decision surfaces of the ensemble of classifiers
if isinstance(model, DecisionTreeClassifier):
Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, cmap=cmap)
else:
# Choose alpha blend level with respect to the number of estimators
# that are in use (noting that AdaBoost can use fewer estimators
# than its maximum if it achieves a good enough fit early on)
estimator_alpha = 1.0 / len(model.estimators_)
for tree in model.estimators_:
Z = tree.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, alpha=estimator_alpha, cmap=cmap)
# Build a coarser grid to plot a set of ensemble classifications
# to show how these are different to what we see in the decision
        # surfaces. These points are regularly spaced and do not have a black outline
xx_coarser, yy_coarser = np.meshgrid(np.arange(x_min, x_max, plot_step_coarser),
np.arange(y_min, y_max, plot_step_coarser))
Z_points_coarser = model.predict(np.c_[xx_coarser.ravel(), yy_coarser.ravel()]).reshape(xx_coarser.shape)
cs_points = plt.scatter(xx_coarser, yy_coarser, s=15, c=Z_points_coarser, cmap=cmap, edgecolors="none")
# Plot the training points, these are clustered together and have a
# black outline
for i, c in zip(xrange(n_classes), plot_colors):
idx = np.where(y == i)
plt.scatter(X[idx, 0], X[idx, 1], c=c, label=iris.target_names[i],
cmap=cmap)
plot_idx += 1 # move on to the next plot in sequence
plt.suptitle("Classifiers on feature subsets of the Iris dataset")
plt.axis("tight")
plt.show()
|
bsd-3-clause
|
vybstat/scikit-learn
|
sklearn/feature_selection/tests/test_from_model.py
|
244
|
1593
|
import numpy as np
import scipy.sparse as sp
from nose.tools import assert_raises, assert_true
from sklearn.utils.testing import assert_less
from sklearn.utils.testing import assert_greater
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import SGDClassifier
from sklearn.svm import LinearSVC
iris = load_iris()
def test_transform_linear_model():
for clf in (LogisticRegression(C=0.1),
LinearSVC(C=0.01, dual=False),
SGDClassifier(alpha=0.001, n_iter=50, shuffle=True,
random_state=0)):
for thresh in (None, ".09*mean", "1e-5 * median"):
for func in (np.array, sp.csr_matrix):
X = func(iris.data)
clf.set_params(penalty="l1")
clf.fit(X, iris.target)
X_new = clf.transform(X, thresh)
if isinstance(clf, SGDClassifier):
assert_true(X_new.shape[1] <= X.shape[1])
else:
assert_less(X_new.shape[1], X.shape[1])
clf.set_params(penalty="l2")
clf.fit(X_new, iris.target)
pred = clf.predict(X_new)
assert_greater(np.mean(pred == iris.target), 0.7)
def test_invalid_input():
clf = SGDClassifier(alpha=0.1, n_iter=10, shuffle=True, random_state=None)
clf.fit(iris.data, iris.target)
assert_raises(ValueError, clf.transform, iris.data, "gobbledigook")
assert_raises(ValueError, clf.transform, iris.data, ".5 * gobbledigook")
|
bsd-3-clause
|
XuhanLiu/lemp
|
lemp.py
|
1
|
4850
|
#! /usr/bin/env python
# -*- coding: utf-8 -*-
import getopt
import os
import sys
from collections import Counter
import numpy as np
from keras.models import load_model
from sklearn.externals import joblib
AA = 'ARNDCQEGHILKMFPSTWYV'
CFG = joblib.load('model/config.pkl')
USAGE = """
USAGE
python lemp.py -i <input_file> [-o <output_file>] [-h] [-v]
OPTIONAL ARGUMENTS
-i <input_file> : dataset file containing protein sequences as FASTA file format.
    -o <output_file> : the output file to which the prediction results of each sample are written.
-v : version information of this software.
-h : Help information, print USAGE and ARGUMENTS messages.
Note: Please designate each protein sequence in FASTA file with distinct name!
"""
VERSION = """
DESCRIPTION
Name : LEMP (LSTM-based Ensemble Malonylation Predictor)
Version : 1.0
Update Time : 2017-07-01
Author : Xuhan Liu & Zhen Chen
"""
def Snippet(fasta):
fas = open(fasta).read().replace('\r\n', '\n')[1:].split('>')
dic = {}
for fa in fas:
lines = fa.split('\n')
kb = lines[0].split()[0]
seq = ''.join(lines[1:]).upper()
frags = []
for i, res in enumerate(seq):
if res != 'K': continue
frag = [i+1]
for j in range(i - 15, i + 16):
if j < 0 or j >= len(seq):
frag.append(20)
else:
frag.append(AA.index(seq[j]) if seq[j] in AA else 20)
frags.append(frag)
# print(str(i+1) + '\t' + tmpStr + '\n')
dic[kb] = np.array(frags)
return dic
def EAAC(frags):
eaacs = []
for frag in frags:
eaac = []
for i in range(24):
count = Counter(frag[i: i + 8])
if 20 in count: count.pop(20)
sums = sum(count.values()) + 1e-6
aac = [count[i] / sums for i in range(20)]
eaac += aac
eaacs.append(eaac)
return np.array(eaacs)
def ScoreEAAC(dic):
model = joblib.load('model/eaac.pkl')
scores = {}
for kb, frags in dic.items():
score = np.zeros((len(frags), 2))
score[:, 0] = frags[:, 0]
score[:, 1] = model.predict_proba(EAAC(frags[:, 1:]))[:, 1]
scores[kb] = score
return scores
def ScoreLSTM(dic):
scores = {}
models = [load_model('model/lstm.%d.h5' % i) for i in range(5)]
for kb, frags in dic.items():
score = np.zeros((len(frags), 2))
for model in models:
score[:, 0] += frags[:, 0]
score[:, 1] += model.predict_proba(frags[:, 1:])[:, 0]
scores[kb] = score / 5
return scores
def Predict(EAACscores, LSTMscores):
scores = {}
for kb in LSTMscores:
EAACscore = EAACscores[kb]
LSTMscore = LSTMscores[kb]
score = np.zeros(LSTMscores[kb].shape)
score[:, 0] = LSTMscore[:, 0]
score[:, 1] = 1 / (1 + np.exp(-EAACscore[:, 1] * CFG['w_eaac'] - LSTMscore[:, 1] * CFG['w_lstm'] - CFG['bias']))
scores[kb] = score
return scores
if __name__ == '__main__':
try:
opts, args = getopt.getopt(sys.argv[1:], "hvi:o:")
OPT = dict(opts)
except getopt.GetoptError:
print('ERROR: Invalid arguments usage. Please type \'-h\' for help.')
sys.exit(2)
if '-h' in OPT:
print(USAGE + '\n')
elif '-v' in OPT:
print(VERSION + '\n')
else:
if '-i' not in OPT:
print('ERROR: Input file is missing. Please type \'-h\' for help.')
sys.exit(2)
elif not os.path.exists(OPT['-i']):
print('ERROR: Input file cannot be found. Please type \'-h\' for help.')
sys.exit(2)
# Process train and predict submit
else:
dic = Snippet(OPT['-i'])
LSTMscores = ScoreLSTM(dic)
EAACscores = ScoreEAAC(dic)
scores = Predict(LSTMscores=LSTMscores, EAACscores=EAACscores)
results = 'ID\tSite\tResidue\tScore\tY/N(Sp=90%)\tY/N(Sp=95%)\tY/N(Sp=99%)\n'
for kb, score in scores.items():
for i in score:
flag90 = 'Y' if i[1] > CFG['cut90'] else 'N'
flag95 = 'Y' if i[1] > CFG['cut95'] else 'N'
flag99 = 'Y' if i[1] > CFG['cut99'] else 'N'
results += '%s\t%d\tK\t%f\t%s\t%s\t%s\n' % (kb, i[0], i[1], flag90, flag95, flag99)
if '-o' not in OPT:
print(results)
else:
output = open(OPT['-o'], 'w')
output.write(results)
output.close()
print('=== SUCCESS ===')
|
gpl-3.0
|
r-mart/scikit-learn
|
sklearn/cross_decomposition/cca_.py
|
209
|
3150
|
from .pls_ import _PLS
__all__ = ['CCA']
class CCA(_PLS):
"""CCA Canonical Correlation Analysis.
CCA inherits from PLS with mode="B" and deflation_mode="canonical".
Read more in the :ref:`User Guide <cross_decomposition>`.
Parameters
----------
n_components : int, (default 2).
number of components to keep.
scale : boolean, (default True)
        whether to scale the data.
max_iter : an integer, (default 500)
the maximum number of iterations of the NIPALS inner loop
tol : non-negative real, default 1e-06.
the tolerance used in the iterative algorithm
copy : boolean
        Whether the deflation should be done on a copy. Leave the default
        value set to True unless you do not care about side effects.
Attributes
----------
x_weights_ : array, [p, n_components]
X block weights vectors.
y_weights_ : array, [q, n_components]
Y block weights vectors.
x_loadings_ : array, [p, n_components]
X block loadings vectors.
y_loadings_ : array, [q, n_components]
Y block loadings vectors.
x_scores_ : array, [n_samples, n_components]
X scores.
y_scores_ : array, [n_samples, n_components]
Y scores.
x_rotations_ : array, [p, n_components]
X block to latents rotations.
y_rotations_ : array, [q, n_components]
Y block to latents rotations.
n_iter_ : array-like
Number of iterations of the NIPALS inner loop for each
component.
Notes
-----
For each component k, find the weights u, v that maximizes
max corr(Xk u, Yk v), such that ``|u| = |v| = 1``
Note that it maximizes only the correlations between the scores.
The residual matrix of X (Xk+1) block is obtained by the deflation on the
current X score: x_score.
The residual matrix of Y (Yk+1) block is obtained by deflation on the
current Y score.
Examples
--------
>>> from sklearn.cross_decomposition import CCA
>>> X = [[0., 0., 1.], [1.,0.,0.], [2.,2.,2.], [3.,5.,4.]]
>>> Y = [[0.1, -0.2], [0.9, 1.1], [6.2, 5.9], [11.9, 12.3]]
>>> cca = CCA(n_components=1)
>>> cca.fit(X, Y)
... # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE
CCA(copy=True, max_iter=500, n_components=1, scale=True, tol=1e-06)
>>> X_c, Y_c = cca.transform(X, Y)
References
----------
Jacob A. Wegelin. A survey of Partial Least Squares (PLS) methods, with
emphasis on the two-block case. Technical Report 371, Department of
Statistics, University of Washington, Seattle, 2000.
In french but still a reference:
Tenenhaus, M. (1998). La regression PLS: theorie et pratique. Paris:
Editions Technic.
See also
--------
PLSCanonical
PLSSVD
"""
def __init__(self, n_components=2, scale=True,
max_iter=500, tol=1e-06, copy=True):
_PLS.__init__(self, n_components=n_components, scale=scale,
deflation_mode="canonical", mode="B",
norm_y_weights=True, algorithm="nipals",
max_iter=max_iter, tol=tol, copy=copy)
|
bsd-3-clause
|
MTG/sms-tools
|
lectures/04-STFT/plots-code/windows-2.py
|
24
|
1026
|
import matplotlib.pyplot as plt
import numpy as np
import time, os, sys
sys.path.append(os.path.join(os.path.dirname(os.path.realpath(__file__)), '../../../software/models/'))
import dftModel as DF
import utilFunctions as UF
import math
(fs, x) = UF.wavread('../../../sounds/violin-B3.wav')
N = 1024
pin = 5000
w = np.ones(801)
hM1 = int(math.floor((w.size+1)/2))
hM2 = int(math.floor(w.size/2))
x1 = x[pin-hM1:pin+hM2]
plt.figure(1, figsize=(9.5, 5))
plt.subplot(3,1,1)
plt.plot(np.arange(-hM1, hM2), x1, lw=1.5)
plt.axis([-hM1, hM2, min(x1), max(x1)])
plt.title('x (violin-B3.wav)')
mX, pX = DF.dftAnal(x1, w, N)
mX = mX - max(mX)
plt.subplot(3,1,2)
plt.plot(np.arange(mX.size), mX, 'r', lw=1.5)
plt.axis([0,N/4,-70,0])
plt.title ('mX (rectangular window)')
w = np.blackman(801)
mX, pX = DF.dftAnal(x1, w, N)
mX = mX - max(mX)
plt.subplot(3,1,3)
plt.plot(np.arange(mX.size), mX, 'r', lw=1.5)
plt.axis([0,N/4,-70,0])
plt.title ('mX (blackman window)')
plt.tight_layout()
plt.savefig('windows-2.png')
plt.show()
|
agpl-3.0
|
ucd-cws/arcproject-wq-processing
|
arcproject/scripts/mapping.py
|
1
|
11152
|
import calendar
import datetime
import os
import logging
log = logging.getLogger("arcproject")
import arcpy
from sqlalchemy import extract
import pandas as pd
import amaptor
from arcproject.waterquality.classes import get_new_session, WaterQuality
from .exceptions import NoRecordsError, SpatialReferenceError
from .wqt_timestamp_match import pd2np
from arcproject.waterquality import classes
from arcproject.scripts import funcs as wq_funcs
_BASE_FOLDER = os.path.split(os.path.dirname(__file__))[0]
_TEMPLATES_FOLDER = os.path.join(_BASE_FOLDER, "templates", )
_LAYERS_FOLDER = os.path.join(_TEMPLATES_FOLDER, "layers")
arcgis_10_template = os.path.join(_TEMPLATES_FOLDER, "base_template.mxd")
arcgis_pro_template = os.path.join(_TEMPLATES_FOLDER, "arcproject_template_pro", "arcproject_template_pro.aprx")
arcgis_pro_layout_template = os.path.join(_TEMPLATES_FOLDER, "arcproject_template_pro", "main_layout.pagx")
arcgis_pro_layer_symbology = os.path.join(_LAYERS_FOLDER, "wq_points.lyrx")
arcgis_10_layer_symbology = os.path.join(_LAYERS_FOLDER, "wq_points.lyr")
def set_output_symbology(parameter):
## Output symbology
if arcpy.GetInstallInfo()["ProductName"] == "ArcGISPro":
parameter.symbology = arcgis_pro_layer_symbology
else:
parameter.symbology = arcgis_10_layer_symbology
return parameter
def layer_from_date(date_to_use, output_location):
"""
Given a date and output location, exports records to a feature class
:param date_to_use: date in MM/DD/YYYY format or a datetime object
:param output_location: full path to output feature class
:return: returns nothing
"""
wq = classes.WaterQuality
session = classes.get_new_session()
try:
arcpy.AddMessage("Using Date {}".format(date_to_use))
upper_bound = date_to_use.date() + datetime.timedelta(days=1)
query = session.query(wq).filter(wq.date_time > date_to_use.date(), wq.date_time < upper_bound, wq.x_coord != None, wq.y_coord != None) # add 1 day's worth of nanoseconds
try:
query_to_features(query, output_location)
except NoRecordsError:
arcpy.AddWarning("No records for date {}".format(date_to_use))
raise
finally:
session.close()
def generate_layer_for_month(month_to_use, year_to_use, output_location):
wq = classes.WaterQuality
session = classes.get_new_session()
try:
lower_bound = datetime.date(year_to_use, month_to_use, 1)
upper_bound = datetime.date(year_to_use, month_to_use, int(calendar.monthrange(year_to_use, month_to_use)[1]))
arcpy.AddMessage("Pulling data for {} through {}".format(lower_bound, upper_bound))
query = session.query(wq).filter(wq.date_time > lower_bound, wq.date_time < upper_bound, wq.x_coord != None, wq.y_coord != None) # add 1 day's worth of nanoseconds
query_to_features(query, output_location)
finally:
session.close()
def generate_layer_for_site(siteid, output_location):
wq = classes.WaterQuality
session = classes.get_new_session()
try:
query = session.query(wq).filter(wq.site_id == siteid, wq.x_coord != None, wq.y_coord != None)
query_to_features(query, output_location)
finally:
session.close()
def replaceDefaultNull(fc, placeholder=-9999):
if fc.endswith(".shp"): # skip shapefiles because they don't support null
return
try:
with arcpy.da.UpdateCursor(fc, '*') as cursor:
for row in cursor:
for i in range(len(row)):
if row[i] == placeholder or row[i] == str(placeholder):
row[i] = None
cursor.updateRow(row)
except Exception as e:
print(e.message)
def query_to_features(query, export_path):
"""
Given a SQLAlchemy query for water quality data, exports it to a feature class
:param query: a SQLAlchemy query object
:param export_path: the path to write out a feature class to
:return:
"""
# read the SQLAlchemy query into a Pandas Data Frame since that can talk to ArcGIS
df = pd.read_sql(query.statement, query.session.bind)
# confirm that all of the items are in the same spatial reference - this should always be the case, but we should
# make sure so nothing weird happens
sr_code = wq_funcs.get_wq_df_spatial_reference(df)
sr = arcpy.SpatialReference(int(sr_code)) # make a spatial reference object
arcpy.da.NumPyArrayToFeatureClass(
in_array=pd2np(df),
out_table=export_path,
shape_fields=["x_coord", "y_coord"],
spatial_reference=sr,
)
# replace -9999 with <null> for geodatabase. Shapefile no data will remain -9999
replaceDefaultNull(export_path)
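# Illustrative usage sketch (not part of the original module): export every
# record for a hypothetical site id to a hypothetical file geodatabase path.
#
#   generate_layer_for_site("bk1", r"C:\temp\scratch.gdb\bk1_wq_points")
#   # layer_from_date works the same way but filters on a single sampling date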
def map_missing_segments(summary_file, loaded_data, output_location, template=os.path.join(_TEMPLATES_FOLDER, "base_template.mxd")):
"""
A mapping function used by data validation code to create maps when data is invalid
:param summary_file:
:param loaded_data:
:param output_location:
:param template:
:return:
"""
transect_template = os.path.join(_LAYERS_FOLDER, "added_data.lyr")
summary_file_template = os.path.join(_LAYERS_FOLDER, "summary_file_review.lyr")
project = amaptor.Project(template)
map = project.maps[0]
map.insert_feature_class_with_symbology(summary_file, layer_file=summary_file_template, layer_name="Summary File",
near_name="Arc_DeltaWaterways_0402", insert_position="BEFORE")
map.insert_feature_class_with_symbology(loaded_data, layer_file=transect_template, layer_name="Loaded Transects",
near_name="Arc_DeltaWaterways_0402", insert_position="BEFORE")
project.save_a_copy(output_location)
return project
class WQMappingBase(object):
"""
A base class for tools that want to provide a choice for how to symbolize water quality data. To use, subclass it
for any of the other tools, add the defined parameter to the list of params,
and make sure the tool init includes a call to super(NewClassName, self).__init__() at the beginning of __init__
"""
def __init__(self):
self.table_workspace = "in_memory"
self.table_name = "arcproject_temp_date_table"
self.temporary_date_table = "{}\\{}".format(self.table_workspace, self.table_name)
self.select_wq_param = arcpy.Parameter(
displayName="Symbolize Data by",
name="symbology",
datatype="GPString",
multiValue=False,
direction="Input",
)
self.year_to_generate = arcpy.Parameter( # optional to use, but available
displayName="Year",
name="year_to_generate",
datatype="GPString",
multiValue=False,
direction="Input"
)
self.month_to_generate = arcpy.Parameter( # optional to use, but available
displayName="Month",
name="month_to_generate",
datatype="GPString",
multiValue=False,
direction="Input"
)
self.month_to_generate.filter.type = 'ValueList'
t = list(calendar.month_name)
t.pop(0)
self.month_to_generate.filter.list = t
self._filter_to_layer_mapping = {
"CHL": "CHL_regular.lyr",
"CHL Corrected": "CHL_corrected.lyr",
"Dissolved Oxygen": "DO_v2.lyr",
"DO Percent Saturation": "DOPerCentSat_v2.lyr",
"pH": "pH.lyr",
"RPAR": "RPAR_v2.lyr",
"Salinity": "Sal.lyr",
"SpCond": "SpCond.lyr",
"Temperature": "Temp.lyr",
"Turbidity": "Turbid.lyr",
}
self.select_wq_param.filter.type = "ValueList"
self.select_wq_param.filter.list = ["CHL", "Corrected CHL", "Dissolved Oxygen", "DO Percent Saturation", "pH", "RPAR", "Salinity", "SpCond",
"Temperature", "Turbidity"]
def insert_layer(self, data_path, symbology_param, map_or_project="CURRENT"):
"""
Symbolizes a WQ layer based on the specified parameter and then inserts it into a map
:param data_path:
:param symbology_param:
:param map_or_project: a reference to a map document (including "CURRENT"), an instance of amaptor.Project, or
and instance of amaptor.Map
:return:
"""
if isinstance(map_or_project, amaptor.Project):
project = map_or_project
l_map = project.get_active_map()
elif isinstance(map_or_project, amaptor.Map):
l_map = map_or_project
else:
project = amaptor.Project(map_or_project)
l_map = project.get_active_map()
layer_name = self._filter_to_layer_mapping[symbology_param.valueAsText]
layer_path = os.path.join(_LAYERS_FOLDER, layer_name)
layer = amaptor.functions.make_layer_with_file_symbology(data_path, layer_path)
layer.name = os.path.split(data_path)[1]
l_map.add_layer(layer)
def cleanup(self):
if arcpy.Exists(self.temporary_date_table):
arcpy.Delete_management(self.temporary_date_table)
def update_month_fields(self, parameters, year_field_index=0, month_field_index=1):
"""
Retrieve months from the temporary data table in memory
:param parameters:
:param year_field_index:
:param month_field_index:
:return:
"""
if parameters[year_field_index].filter.list is None or parameters[year_field_index].filter.list == "" or len(parameters[year_field_index].filter.list) == 0: # if this is our first time through, set it all up
self.initialize_year_and_month_fields(parameters, year_field_index)
if not arcpy.Exists(self.temporary_date_table):
return # this seems to occur in Pro, when running the tool - the data doesn't get loaded, but it is calling this function - may be a bug to squash somewhere here.
year = int(parameters[year_field_index].value)
months = arcpy.SearchCursor(self.temporary_date_table, where_clause="data_year={}".format(year))
filter_months = []
for month in months:
filter_months.append(month.getValue("data_month"))
parameters[month_field_index].filter.type = 'ValueList'
parameters[month_field_index].filter.list = filter_months
def initialize_year_and_month_fields(self, parameters, year_field_index=0):
"""
Used on Generate Month and Generate Map
:param parameters:
:return:
"""
self.cleanup() # cleans up the temporary table. If it exists, it's stale
arcpy.CreateTable_management(self.table_workspace, self.table_name)
arcpy.AddField_management(self.temporary_date_table, "data_year", "LONG")
arcpy.AddField_management(self.temporary_date_table, "data_month", "TEXT")
# load the data from the DB
session = get_new_session()
try:
# get years with data from the database to use as selection for tool input
curs = arcpy.InsertCursor(self.temporary_date_table)
q = session.query(extract('year', WaterQuality.date_time), extract('month', WaterQuality.date_time)).distinct()
years = {}
month_names = list(calendar.month_name) # helps translate numeric months to
for row in q: # translate the results to the temporary table
new_record = curs.newRow()
new_record.setValue("data_year", row[0])
new_record.setValue("data_month", month_names[row[1]])
curs.insertRow(new_record)
years[row[0]] = True # indicate we have data for a year
parameters[year_field_index].filter.type = 'ValueList'
parameters[year_field_index].filter.list = sorted(list(years.keys())) # get the distinct set of years
finally:
session.close()
return self.temporary_date_table
def convert_year_and_month(self, year, month):
year_to_use = int(year.value)
month = month.valueAsText
# look up index position in calender.monthname
t = list(calendar.month_name)
month_to_use = int(t.index(month))
return year_to_use, month_to_use
|
mit
|
tectronics/ambhas
|
ambhas/soil_texture.py
|
3
|
8180
|
# -*- coding: utf-8 -*-
"""
Created on Tue Nov 1 18:57:30 2011
@author: Sat Kumar Tomer
@website: www.ambhas.com
@email: [email protected]
http://nowlin.css.msu.edu/software/triangle_form.html
http://ag.arizona.edu/research/rosetta/rosetta.html#download
"""
# matplotlib.nxutils has been removed from matplotlib; the point-in-polygon
# tests below rely on matplotlib.path.Path instead
import numpy as np
from matplotlib.path import Path
class soil_texture:
"""
Input:
sand: percentage sand
clay: percentage clay
output:
"""
def __init__(self, sand, clay, warning=True):
self.sand = sand
self.clay = clay
soil_names = ['silty_loam', 'sand', 'silty_clay_loam', 'loam', 'clay_loam',
                      'sandy_loam', 'silty_clay', 'sandy_clay_loam', 'loamy_sand',
'clay', 'silt', 'sandy_clay']
self.soil_names = soil_names
# sand, clay
t0 = np.array([ [0,12], [0,27], [23,27], [50,0], [20,0], [8,12]], float)
t1 = np.array([ [85,0], [90,10], [100,0]], float)
t2 = np.array([ [0,27], [0,40], [20,40], [20,27]], float)
t3 = np.array([ [43,7], [23,27], [45,27], [52,20], [52,7]], float)
t4 = np.array([ [20,27], [20,40], [45,40], [45,27]], float)
t5 = np.array([ [50,0], [43,7], [52,7], [52,20], [80,20], [85,15], [70,0]], float)
t6 = np.array([ [0,40], [0,60], [20,40]], float)
t7 = np.array([ [52,20], [45,27], [45,35], [65,35], [80,20]], float)
t8 = np.array([ [70,0], [85,15], [90,10], [85,0]], float)
t9 = np.array([ [20,40], [0,60], [0,100], [45,55], [45,40]], float)
t10 = np.array([ [0,0], [0,12], [8,12], [20,0]], float)
t11 = np.array([ [45,35], [45,55], [65,35]], float)
#N θr θs log(α) log(n) Ks Ko L
#-- θr -- cm3/cm3
#-- θs -- cm3/cm3
#-- log(α) -- log10(1/cm)
#-- log(n) -- log10
#-- Ks -- log(cm/day)
#-- Ko -- log(cm/day)
#-- L --
shp = [[330, 0.065, 0.073, 0.439, 0.093, -2.296, 0.57, 0.221, 0.14, 1.261, 0.74, 0.243, 0.26, 0.365, 1.42],
[308, 0.053, 0.029, 0.375, 0.055, -1.453, 0.25, 0.502, 0.18, 2.808, 0.59, 1.389, 0.24, -0.930, 0.49],
[172, 0.090, 0.082, 0.482, 0.086, -2.076, 0.59, 0.182, 0.13, 1.046, 0.76, 0.349, 0.26, -0.156, 1.23],
[242, 0.061, 0.073, 0.399, 0.098, -1.954, 0.73, 0.168, 0.13, 1.081, 0.92, 0.568, 0.21, -0.371, 0.84],
[140, 0.079, 0.076, 0.442, 0.079, -1.801, 0.69, 0.151, 0.12, 0.913, 1.09, 0.699, 0.23, -0.763, 0.90],
[476, 0.039, 0.054, 0.387, 0.085, -1.574, 0.56, 0.161, 0.11, 1.583, 0.66, 1.190, 0.21, -0.861, 0.73],
[28, 0.111, 0.119, 0.481, 0.080, -1.790, 0.64, 0.121, 0.10, 0.983, 0.57, 0.501, 0.27, -1.287, 1.23],
[87, 0.063, 0.078, 0.384, 0.061, -1.676, 0.71, 0.124, 0.12, 1.120, 0.85, 0.841, 0.24, -1.280, 0.99],
[201, 0.049, 0.042, 0.390, 0.070, -1.459, 0.47, 0.242, 0.16, 2.022, 0.64, 1.386, 0.24, -0.874, 0.59],
[84, 0.098, 0.107, 0.459, 0.079, -1.825, 0.68, 0.098, 0.07, 1.169, 0.92, 0.472, 0.26, 1.561, 1.39],
[6, 0.050, 0.041, 0.489, 0.078, -2.182, 0.30, 0.225, 0.13, 1.641, 0.27, 0.524, 0.32, 0.624, 1.57],
[11, 0.117, 0.114, 0.385, 0.046, -1.476, 0.57, 0.082, 0.06, 1.055, 0.89, 0.637, 0.34, -3.665, 1.80]]
        for i, t in enumerate([t0, t1, t2, t3, t4, t5,
                               t6, t7, t8, t9, t10, t11]):
            path = Path(t)
            if path.contains_point((sand, clay)):
                tt = i
if ~np.isnan(sand*clay):
if sand+clay<100:
self.soil_type = soil_names[tt]
self.theta_r = shp[tt][1]
self.theta_s = shp[tt][3]
self.alpha = 10**shp[tt][5]*100
self.n = 10**shp[tt][7]
self.ks= np.exp(shp[tt][9])/100/86400
self.l= shp[tt][13]
else:
if warning:
print("sand+clay is more than 100 percent")
self.soil_type = np.nan
self.theta_r = np.nan
self.theta_s = np.nan
self.alpha = np.nan
self.n = np.nan
self.ks= np.nan
self.l= np.nan
else:
if warning:
print("sand or clay contains nan")
self.soil_type = np.nan
self.theta_r = np.nan
self.theta_s = np.nan
self.alpha = np.nan
self.n = np.nan
self.ks= np.nan
self.l= np.nan
def get_color(self, scaling=1.0):
"""
gives the standard soil color
sand---> yellow
clay---> magenta
silt---> cyan
based on the article, "TOWARDS A STANDARDISED APPROACH FOR THE
SELECTION OF COLOURS IN SOIL MAPS BASED ON THEIR TEXTURAL COMPOSITION
AND ROCK FRAGMENT ABUNDANCE: AN IMPLEMENTATION WITHIN MACROMEDIA FREEHAND"
by Graciela Metternicht and Jasmin Goetting
"""
# yellow magenta cyan
ymc = {'silty_loam':(17,14,69), 'sand':(92,3,5),
'silty_clay_loam':(10,33,57), 'loam':(43,18,39),
'clay_loam':(33,33,34), 'sandy_loam':(65,10,25),
'silty_clay':(7,46,47), 'sandy_clay_loam':(58,28,14),
'loamy_sand':(83,6,11), 'clay':(22,59,19),
'silt':(7,6,87), 'sandy_clay':(51,42,7)}
y, m, c = ymc[self.soil_type]
self.y = y/100.0
self.m = m/100.0
self.c = c/100.0
self._ymc_to_rgb(scaling)
return self.r, self.g, self.b
def _ymc_to_rgb(self, scaling):
"""
converts ymc(yellow, magenta, cyan) to rgb (red, green, blue)
"""
g = 0.5*(self.c+self.m+self.y - self.m)
r = 0.5*(self.c+self.m+self.y - self.c)
b = 0.5*(self.c+self.m+self.y - self.y)
self.g = scaling*g/(r+g+b)
self.r = scaling*r/(r+g+b)
self.b = scaling*b/(r+g+b)
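# --- Hedged usage sketch (added for illustration; not part of the original module) ---
# Shows how get_color() turns a texture class into an RGB triple via the
# yellow/magenta/cyan weights defined above. The sand/clay percentages are
# arbitrary example values.
def _demo_soil_color(sand=30.0, clay=20.0):
    st = soil_texture(sand, clay)
    r, g, b = st.get_color(scaling=1.0)
    print("%s -> rgb = (%.2f, %.2f, %.2f)" % (st.soil_type, r, g, b))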
def se_fun(psi,alpha,n):
"""
psi: pressure head
n: shape index
"""
m = 1-1/n
se = (1+(np.abs(psi/alpha))**n)**(-m)
return se
def wp_fun(f,qr,alpha,n):
wp = qr+(f-qr)*se_fun(-150,alpha,n)
return wp
def fc_fun(f,qr,alpha,n):
fc = qr+(f-qr)*se_fun(-3.3,alpha,n)
return fc
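# --- Hedged worked example (added for illustration; not part of the original module) ---
# Demonstrates the van Genuchten helpers above: se_fun() gives the effective
# saturation for a pressure head, while wp_fun() and fc_fun() evaluate the
# water content at the wilting-point and field-capacity heads (-150 and -3.3)
# used in this file. The retention parameters below are arbitrary example
# numbers, not measured data.
def _demo_retention(theta_s=0.40, theta_r=0.05, alpha=3.5, n=1.5):
    print("Se at field capacity head: %.3f" % se_fun(-3.3, alpha, n))
    print("theta at field capacity  : %.3f" % fc_fun(theta_s, theta_r, alpha, n))
    print("theta at wilting point   : %.3f" % wp_fun(theta_s, theta_r, alpha, n))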
class saxton_rawls:
"""
Soil Water Characteristic Estimates by Texture and Organic Matter for Hydrologic Solutions
Input:
sand: percentage sand
clay: percentage clay
output:
"""
def __init__(self, sand, clay, organic_matter):
self.S = sand
self.C = clay
self.OM = organic_matter
def sm_33(self):
S = self.S
C = self.C
OM = self.OM
        theta_33t = -0.251*S + 0.195*C + 0.011*OM + 0.006*(S*OM) - 0.027*(C*OM) + 0.452*(S*C) + 0.299
        # Saxton & Rawls (2006) square the first-solution term in this correction
        theta_33 = theta_33t + (1.283*theta_33t**2 - 0.374*theta_33t - 0.015)
        print(theta_33)
return theta_33
def sm_s_33(self):
S = self.S
C = self.C
OM = self.OM
theta_s_33t = 0.278*S + 0.034*C + 0.022*OM -0.018*(S*OM) - 0.027*(C*OM) -0.584*(S*C) +0.078
theta_s_33 = theta_s_33t + 0.636*theta_s_33t - 0.107
return theta_s_33
def sm_s(self):
S = self.S
C = self.C
theta_33 = self.sm_33()
theta_s_33 = self.sm_s_33()
theta_s = theta_33 + theta_s_33 - 0.097*S + 0.043
return theta_s
def sm_s_df(self, density):
theta_s = self.sm_s()
theta_s_df = theta_s*(1-density/2.65)
return theta_s_df
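# --- Hedged usage sketch (added for illustration; not part of the original module) ---
# Shows the full saxton_rawls chain that the __main__ block below only partly
# exercises: sm_33() estimates the 33 kPa (field-capacity) moisture, sm_s()
# the saturated moisture, and sm_s_df() the density-adjusted saturated
# moisture. The bulk density of 1.49 g/cm3 is an arbitrary example value.
def _demo_saxton_rawls(sand=49, clay=23, organic_matter=2, density=1.49):
    sr = saxton_rawls(sand, clay, organic_matter)
    print("theta_33   = %.3f" % sr.sm_33())
    print("theta_s    = %.3f" % sr.sm_s())
    print("theta_s_df = %.3f" % sr.sm_s_df(density))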
if __name__=='__main__':
sand = 20.0
clay = 10.0
foo = soil_texture(sand,clay)
print foo.soil_type
print("theta_r = %.3f"%foo.theta_r)
wp = wp_fun(foo.theta_s, foo.theta_r, foo.alpha, foo.n)
print("theta_wp = %.3f"%wp)
sand = 49
clay = 23
organic_matter = 2
foo = saxton_rawls(sand, clay, organic_matter)
#print foo.sm_s_df(1.49)
|
lgpl-2.1
|
mbayon/TFG-MachineLearning
|
vbig/lib/python2.7/site-packages/pandas/tests/io/test_packers.py
|
3
|
32058
|
import pytest
from warnings import catch_warnings
import os
import datetime
import numpy as np
import sys
from distutils.version import LooseVersion
from pandas import compat
from pandas.compat import u, PY3
from pandas import (Series, DataFrame, Panel, MultiIndex, bdate_range,
date_range, period_range, Index, Categorical)
from pandas.errors import PerformanceWarning
from pandas.io.packers import to_msgpack, read_msgpack
import pandas.util.testing as tm
from pandas.util.testing import (ensure_clean,
assert_categorical_equal,
assert_frame_equal,
assert_index_equal,
assert_series_equal,
patch)
from pandas.tests.test_panel import assert_panel_equal
import pandas
from pandas import Timestamp, NaT
from pandas._libs.tslib import iNaT
nan = np.nan
try:
import blosc # NOQA
except ImportError:
_BLOSC_INSTALLED = False
else:
_BLOSC_INSTALLED = True
try:
import zlib # NOQA
except ImportError:
_ZLIB_INSTALLED = False
else:
_ZLIB_INSTALLED = True
@pytest.fixture(scope='module')
def current_packers_data():
# our current version packers data
from pandas.tests.io.generate_legacy_storage_files import (
create_msgpack_data)
return create_msgpack_data()
@pytest.fixture(scope='module')
def all_packers_data():
    # all of our current version packers data
from pandas.tests.io.generate_legacy_storage_files import (
create_data)
return create_data()
def check_arbitrary(a, b):
if isinstance(a, (list, tuple)) and isinstance(b, (list, tuple)):
assert(len(a) == len(b))
for a_, b_ in zip(a, b):
check_arbitrary(a_, b_)
elif isinstance(a, Panel):
assert_panel_equal(a, b)
elif isinstance(a, DataFrame):
assert_frame_equal(a, b)
elif isinstance(a, Series):
assert_series_equal(a, b)
elif isinstance(a, Index):
assert_index_equal(a, b)
elif isinstance(a, Categorical):
# Temp,
# Categorical.categories is changed from str to bytes in PY3
# maybe the same as GH 13591
if PY3 and b.categories.inferred_type == 'string':
pass
else:
tm.assert_categorical_equal(a, b)
elif a is NaT:
assert b is NaT
elif isinstance(a, Timestamp):
assert a == b
assert a.freq == b.freq
else:
assert(a == b)
class TestPackers(object):
def setup_method(self, method):
self.path = '__%s__.msg' % tm.rands(10)
def teardown_method(self, method):
pass
def encode_decode(self, x, compress=None, **kwargs):
with ensure_clean(self.path) as p:
to_msgpack(p, x, compress=compress, **kwargs)
return read_msgpack(p, **kwargs)
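# Hedged note (added for clarity; not in the original test file): encode_decode
# above is the round-trip helper the tests below rely on -- it writes the object
# to a temporary '.msg' file with to_msgpack() and immediately reads it back
# with read_msgpack(), so each test compares the reconstructed object against
# the original.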
class TestAPI(TestPackers):
def test_string_io(self):
df = DataFrame(np.random.randn(10, 2))
s = df.to_msgpack(None)
result = read_msgpack(s)
tm.assert_frame_equal(result, df)
s = df.to_msgpack()
result = read_msgpack(s)
tm.assert_frame_equal(result, df)
s = df.to_msgpack()
result = read_msgpack(compat.BytesIO(s))
tm.assert_frame_equal(result, df)
s = to_msgpack(None, df)
result = read_msgpack(s)
tm.assert_frame_equal(result, df)
with ensure_clean(self.path) as p:
s = df.to_msgpack()
fh = open(p, 'wb')
fh.write(s)
fh.close()
result = read_msgpack(p)
tm.assert_frame_equal(result, df)
@pytest.mark.xfail(reason="msgpack currently doesn't work with pathlib")
def test_path_pathlib(self):
df = tm.makeDataFrame()
result = tm.round_trip_pathlib(df.to_msgpack, read_msgpack)
tm.assert_frame_equal(df, result)
@pytest.mark.xfail(reason="msgpack currently doesn't work with localpath")
def test_path_localpath(self):
df = tm.makeDataFrame()
result = tm.round_trip_localpath(df.to_msgpack, read_msgpack)
tm.assert_frame_equal(df, result)
def test_iterator_with_string_io(self):
dfs = [DataFrame(np.random.randn(10, 2)) for i in range(5)]
s = to_msgpack(None, *dfs)
for i, result in enumerate(read_msgpack(s, iterator=True)):
tm.assert_frame_equal(result, dfs[i])
def test_invalid_arg(self):
# GH10369
class A(object):
def __init__(self):
self.read = 0
pytest.raises(ValueError, read_msgpack, path_or_buf=None)
pytest.raises(ValueError, read_msgpack, path_or_buf={})
pytest.raises(ValueError, read_msgpack, path_or_buf=A())
class TestNumpy(TestPackers):
def test_numpy_scalar_float(self):
x = np.float32(np.random.rand())
x_rec = self.encode_decode(x)
tm.assert_almost_equal(x, x_rec)
def test_numpy_scalar_complex(self):
x = np.complex64(np.random.rand() + 1j * np.random.rand())
x_rec = self.encode_decode(x)
assert np.allclose(x, x_rec)
def test_scalar_float(self):
x = np.random.rand()
x_rec = self.encode_decode(x)
tm.assert_almost_equal(x, x_rec)
def test_scalar_complex(self):
x = np.random.rand() + 1j * np.random.rand()
x_rec = self.encode_decode(x)
assert np.allclose(x, x_rec)
def test_list_numpy_float(self):
x = [np.float32(np.random.rand()) for i in range(5)]
x_rec = self.encode_decode(x)
# current msgpack cannot distinguish list/tuple
tm.assert_almost_equal(tuple(x), x_rec)
x_rec = self.encode_decode(tuple(x))
tm.assert_almost_equal(tuple(x), x_rec)
def test_list_numpy_float_complex(self):
if not hasattr(np, 'complex128'):
pytest.skip('numpy cant handle complex128')
x = [np.float32(np.random.rand()) for i in range(5)] + \
[np.complex128(np.random.rand() + 1j * np.random.rand())
for i in range(5)]
x_rec = self.encode_decode(x)
assert np.allclose(x, x_rec)
def test_list_float(self):
x = [np.random.rand() for i in range(5)]
x_rec = self.encode_decode(x)
# current msgpack cannot distinguish list/tuple
tm.assert_almost_equal(tuple(x), x_rec)
x_rec = self.encode_decode(tuple(x))
tm.assert_almost_equal(tuple(x), x_rec)
def test_list_float_complex(self):
x = [np.random.rand() for i in range(5)] + \
[(np.random.rand() + 1j * np.random.rand()) for i in range(5)]
x_rec = self.encode_decode(x)
assert np.allclose(x, x_rec)
def test_dict_float(self):
x = {'foo': 1.0, 'bar': 2.0}
x_rec = self.encode_decode(x)
tm.assert_almost_equal(x, x_rec)
def test_dict_complex(self):
x = {'foo': 1.0 + 1.0j, 'bar': 2.0 + 2.0j}
x_rec = self.encode_decode(x)
tm.assert_dict_equal(x, x_rec)
for key in x:
tm.assert_class_equal(x[key], x_rec[key], obj="complex value")
def test_dict_numpy_float(self):
x = {'foo': np.float32(1.0), 'bar': np.float32(2.0)}
x_rec = self.encode_decode(x)
tm.assert_almost_equal(x, x_rec)
def test_dict_numpy_complex(self):
x = {'foo': np.complex128(1.0 + 1.0j),
'bar': np.complex128(2.0 + 2.0j)}
x_rec = self.encode_decode(x)
tm.assert_dict_equal(x, x_rec)
for key in x:
tm.assert_class_equal(x[key], x_rec[key], obj="numpy complex128")
def test_numpy_array_float(self):
# run multiple times
for n in range(10):
x = np.random.rand(10)
for dtype in ['float32', 'float64']:
x = x.astype(dtype)
x_rec = self.encode_decode(x)
tm.assert_almost_equal(x, x_rec)
def test_numpy_array_complex(self):
x = (np.random.rand(5) + 1j * np.random.rand(5)).astype(np.complex128)
x_rec = self.encode_decode(x)
assert (all(map(lambda x, y: x == y, x, x_rec)) and
x.dtype == x_rec.dtype)
def test_list_mixed(self):
x = [1.0, np.float32(3.5), np.complex128(4.25), u('foo')]
x_rec = self.encode_decode(x)
# current msgpack cannot distinguish list/tuple
tm.assert_almost_equal(tuple(x), x_rec)
x_rec = self.encode_decode(tuple(x))
tm.assert_almost_equal(tuple(x), x_rec)
class TestBasic(TestPackers):
def test_timestamp(self):
for i in [Timestamp(
'20130101'), Timestamp('20130101', tz='US/Eastern'),
Timestamp('201301010501')]:
i_rec = self.encode_decode(i)
assert i == i_rec
def test_nat(self):
nat_rec = self.encode_decode(NaT)
assert NaT is nat_rec
def test_datetimes(self):
# fails under 2.6/win32 (np.datetime64 seems broken)
if LooseVersion(sys.version) < '2.7':
pytest.skip('2.6 with np.datetime64 is broken')
for i in [datetime.datetime(2013, 1, 1),
datetime.datetime(2013, 1, 1, 5, 1),
datetime.date(2013, 1, 1),
np.datetime64(datetime.datetime(2013, 1, 5, 2, 15))]:
i_rec = self.encode_decode(i)
assert i == i_rec
def test_timedeltas(self):
for i in [datetime.timedelta(days=1),
datetime.timedelta(days=1, seconds=10),
np.timedelta64(1000000)]:
i_rec = self.encode_decode(i)
assert i == i_rec
class TestIndex(TestPackers):
def setup_method(self, method):
super(TestIndex, self).setup_method(method)
self.d = {
'string': tm.makeStringIndex(100),
'date': tm.makeDateIndex(100),
'int': tm.makeIntIndex(100),
'rng': tm.makeRangeIndex(100),
'float': tm.makeFloatIndex(100),
'empty': Index([]),
'tuple': Index(zip(['foo', 'bar', 'baz'], [1, 2, 3])),
'period': Index(period_range('2012-1-1', freq='M', periods=3)),
'date2': Index(date_range('2013-01-1', periods=10)),
'bdate': Index(bdate_range('2013-01-02', periods=10)),
'cat': tm.makeCategoricalIndex(100)
}
self.mi = {
'reg': MultiIndex.from_tuples([('bar', 'one'), ('baz', 'two'),
('foo', 'two'),
('qux', 'one'), ('qux', 'two')],
names=['first', 'second']),
}
def test_basic_index(self):
for s, i in self.d.items():
i_rec = self.encode_decode(i)
tm.assert_index_equal(i, i_rec)
# datetime with no freq (GH5506)
i = Index([Timestamp('20130101'), Timestamp('20130103')])
i_rec = self.encode_decode(i)
tm.assert_index_equal(i, i_rec)
# datetime with timezone
i = Index([Timestamp('20130101 9:00:00'), Timestamp(
'20130103 11:00:00')]).tz_localize('US/Eastern')
i_rec = self.encode_decode(i)
tm.assert_index_equal(i, i_rec)
def test_multi_index(self):
for s, i in self.mi.items():
i_rec = self.encode_decode(i)
tm.assert_index_equal(i, i_rec)
def test_unicode(self):
i = tm.makeUnicodeIndex(100)
i_rec = self.encode_decode(i)
tm.assert_index_equal(i, i_rec)
def categorical_index(self):
# GH15487
df = DataFrame(np.random.randn(10, 2))
df = df.astype({0: 'category'}).set_index(0)
result = self.encode_decode(df)
tm.assert_frame_equal(result, df)
class TestSeries(TestPackers):
def setup_method(self, method):
super(TestSeries, self).setup_method(method)
self.d = {}
s = tm.makeStringSeries()
s.name = 'string'
self.d['string'] = s
s = tm.makeObjectSeries()
s.name = 'object'
self.d['object'] = s
s = Series(iNaT, dtype='M8[ns]', index=range(5))
self.d['date'] = s
data = {
'A': [0., 1., 2., 3., np.nan],
'B': [0, 1, 0, 1, 0],
'C': ['foo1', 'foo2', 'foo3', 'foo4', 'foo5'],
'D': date_range('1/1/2009', periods=5),
'E': [0., 1, Timestamp('20100101'), 'foo', 2.],
'F': [Timestamp('20130102', tz='US/Eastern')] * 2 +
[Timestamp('20130603', tz='CET')] * 3,
'G': [Timestamp('20130102', tz='US/Eastern')] * 5,
'H': Categorical([1, 2, 3, 4, 5]),
'I': Categorical([1, 2, 3, 4, 5], ordered=True),
}
self.d['float'] = Series(data['A'])
self.d['int'] = Series(data['B'])
self.d['mixed'] = Series(data['E'])
self.d['dt_tz_mixed'] = Series(data['F'])
self.d['dt_tz'] = Series(data['G'])
self.d['cat_ordered'] = Series(data['H'])
self.d['cat_unordered'] = Series(data['I'])
def test_basic(self):
# run multiple times here
for n in range(10):
for s, i in self.d.items():
i_rec = self.encode_decode(i)
assert_series_equal(i, i_rec)
class TestCategorical(TestPackers):
def setup_method(self, method):
super(TestCategorical, self).setup_method(method)
self.d = {}
self.d['plain_str'] = Categorical(['a', 'b', 'c', 'd', 'e'])
self.d['plain_str_ordered'] = Categorical(['a', 'b', 'c', 'd', 'e'],
ordered=True)
self.d['plain_int'] = Categorical([5, 6, 7, 8])
self.d['plain_int_ordered'] = Categorical([5, 6, 7, 8], ordered=True)
def test_basic(self):
# run multiple times here
for n in range(10):
for s, i in self.d.items():
i_rec = self.encode_decode(i)
assert_categorical_equal(i, i_rec)
class TestNDFrame(TestPackers):
def setup_method(self, method):
super(TestNDFrame, self).setup_method(method)
data = {
'A': [0., 1., 2., 3., np.nan],
'B': [0, 1, 0, 1, 0],
'C': ['foo1', 'foo2', 'foo3', 'foo4', 'foo5'],
'D': date_range('1/1/2009', periods=5),
'E': [0., 1, Timestamp('20100101'), 'foo', 2.],
'F': [Timestamp('20130102', tz='US/Eastern')] * 5,
'G': [Timestamp('20130603', tz='CET')] * 5,
'H': Categorical(['a', 'b', 'c', 'd', 'e']),
'I': Categorical(['a', 'b', 'c', 'd', 'e'], ordered=True),
}
self.frame = {
'float': DataFrame(dict(A=data['A'], B=Series(data['A']) + 1)),
'int': DataFrame(dict(A=data['B'], B=Series(data['B']) + 1)),
'mixed': DataFrame(data)}
with catch_warnings(record=True):
self.panel = {
'float': Panel(dict(ItemA=self.frame['float'],
ItemB=self.frame['float'] + 1))}
def test_basic_frame(self):
for s, i in self.frame.items():
i_rec = self.encode_decode(i)
assert_frame_equal(i, i_rec)
def test_basic_panel(self):
with catch_warnings(record=True):
for s, i in self.panel.items():
i_rec = self.encode_decode(i)
assert_panel_equal(i, i_rec)
def test_multi(self):
i_rec = self.encode_decode(self.frame)
for k in self.frame.keys():
assert_frame_equal(self.frame[k], i_rec[k])
l = tuple([self.frame['float'], self.frame['float'].A,
self.frame['float'].B, None])
l_rec = self.encode_decode(l)
check_arbitrary(l, l_rec)
# this is an oddity in that packed lists will be returned as tuples
l = [self.frame['float'], self.frame['float']
.A, self.frame['float'].B, None]
l_rec = self.encode_decode(l)
assert isinstance(l_rec, tuple)
check_arbitrary(l, l_rec)
def test_iterator(self):
l = [self.frame['float'], self.frame['float']
.A, self.frame['float'].B, None]
with ensure_clean(self.path) as path:
to_msgpack(path, *l)
for i, packed in enumerate(read_msgpack(path, iterator=True)):
check_arbitrary(packed, l[i])
def tests_datetimeindex_freq_issue(self):
# GH 5947
# inferring freq on the datetimeindex
df = DataFrame([1, 2, 3], index=date_range('1/1/2013', '1/3/2013'))
result = self.encode_decode(df)
assert_frame_equal(result, df)
df = DataFrame([1, 2], index=date_range('1/1/2013', '1/2/2013'))
result = self.encode_decode(df)
assert_frame_equal(result, df)
def test_dataframe_duplicate_column_names(self):
# GH 9618
expected_1 = DataFrame(columns=['a', 'a'])
expected_2 = DataFrame(columns=[1] * 100)
expected_2.loc[0] = np.random.randn(100)
expected_3 = DataFrame(columns=[1, 1])
expected_3.loc[0] = ['abc', np.nan]
result_1 = self.encode_decode(expected_1)
result_2 = self.encode_decode(expected_2)
result_3 = self.encode_decode(expected_3)
assert_frame_equal(result_1, expected_1)
assert_frame_equal(result_2, expected_2)
assert_frame_equal(result_3, expected_3)
class TestSparse(TestPackers):
def _check_roundtrip(self, obj, comparator, **kwargs):
        # currently these are not implemented
# i_rec = self.encode_decode(obj)
# comparator(obj, i_rec, **kwargs)
pytest.raises(NotImplementedError, self.encode_decode, obj)
def test_sparse_series(self):
s = tm.makeStringSeries()
s[3:5] = np.nan
ss = s.to_sparse()
self._check_roundtrip(ss, tm.assert_series_equal,
check_series_type=True)
ss2 = s.to_sparse(kind='integer')
self._check_roundtrip(ss2, tm.assert_series_equal,
check_series_type=True)
ss3 = s.to_sparse(fill_value=0)
self._check_roundtrip(ss3, tm.assert_series_equal,
check_series_type=True)
def test_sparse_frame(self):
s = tm.makeDataFrame()
s.loc[3:5, 1:3] = np.nan
s.loc[8:10, -2] = np.nan
ss = s.to_sparse()
self._check_roundtrip(ss, tm.assert_frame_equal,
check_frame_type=True)
ss2 = s.to_sparse(kind='integer')
self._check_roundtrip(ss2, tm.assert_frame_equal,
check_frame_type=True)
ss3 = s.to_sparse(fill_value=0)
self._check_roundtrip(ss3, tm.assert_frame_equal,
check_frame_type=True)
class TestCompression(TestPackers):
"""See https://github.com/pandas-dev/pandas/pull/9783
"""
def setup_method(self, method):
try:
from sqlalchemy import create_engine
self._create_sql_engine = create_engine
except ImportError:
self._SQLALCHEMY_INSTALLED = False
else:
self._SQLALCHEMY_INSTALLED = True
super(TestCompression, self).setup_method(method)
data = {
'A': np.arange(1000, dtype=np.float64),
'B': np.arange(1000, dtype=np.int32),
'C': list(100 * 'abcdefghij'),
'D': date_range(datetime.datetime(2015, 4, 1), periods=1000),
'E': [datetime.timedelta(days=x) for x in range(1000)],
}
self.frame = {
'float': DataFrame(dict((k, data[k]) for k in ['A', 'A'])),
'int': DataFrame(dict((k, data[k]) for k in ['B', 'B'])),
'mixed': DataFrame(data),
}
def test_plain(self):
i_rec = self.encode_decode(self.frame)
for k in self.frame.keys():
assert_frame_equal(self.frame[k], i_rec[k])
def _test_compression(self, compress):
i_rec = self.encode_decode(self.frame, compress=compress)
for k in self.frame.keys():
value = i_rec[k]
expected = self.frame[k]
assert_frame_equal(value, expected)
# make sure that we can write to the new frames
for block in value._data.blocks:
assert block.values.flags.writeable
def test_compression_zlib(self):
if not _ZLIB_INSTALLED:
pytest.skip('no zlib')
self._test_compression('zlib')
def test_compression_blosc(self):
if not _BLOSC_INSTALLED:
pytest.skip('no blosc')
self._test_compression('blosc')
def _test_compression_warns_when_decompress_caches(self, compress):
not_garbage = []
control = [] # copied data
compress_module = globals()[compress]
real_decompress = compress_module.decompress
def decompress(ob):
"""mock decompress function that delegates to the real
decompress but caches the result and a copy of the result.
"""
res = real_decompress(ob)
not_garbage.append(res) # hold a reference to this bytes object
control.append(bytearray(res)) # copy the data here to check later
return res
# types mapped to values to add in place.
rhs = {
np.dtype('float64'): 1.0,
np.dtype('int32'): 1,
np.dtype('object'): 'a',
np.dtype('datetime64[ns]'): np.timedelta64(1, 'ns'),
np.dtype('timedelta64[ns]'): np.timedelta64(1, 'ns'),
}
with patch(compress_module, 'decompress', decompress), \
tm.assert_produces_warning(PerformanceWarning) as ws:
i_rec = self.encode_decode(self.frame, compress=compress)
for k in self.frame.keys():
value = i_rec[k]
expected = self.frame[k]
assert_frame_equal(value, expected)
# make sure that we can write to the new frames even though
# we needed to copy the data
for block in value._data.blocks:
assert block.values.flags.writeable
# mutate the data in some way
block.values[0] += rhs[block.dtype]
for w in ws:
# check the messages from our warnings
assert str(w.message) == ('copying data after decompressing; '
'this may mean that decompress is '
'caching its result')
for buf, control_buf in zip(not_garbage, control):
# make sure none of our mutations above affected the
# original buffers
assert buf == control_buf
def test_compression_warns_when_decompress_caches_zlib(self):
if not _ZLIB_INSTALLED:
pytest.skip('no zlib')
self._test_compression_warns_when_decompress_caches('zlib')
def test_compression_warns_when_decompress_caches_blosc(self):
if not _BLOSC_INSTALLED:
pytest.skip('no blosc')
self._test_compression_warns_when_decompress_caches('blosc')
def _test_small_strings_no_warn(self, compress):
empty = np.array([], dtype='uint8')
with tm.assert_produces_warning(None):
empty_unpacked = self.encode_decode(empty, compress=compress)
tm.assert_numpy_array_equal(empty_unpacked, empty)
assert empty_unpacked.flags.writeable
char = np.array([ord(b'a')], dtype='uint8')
with tm.assert_produces_warning(None):
char_unpacked = self.encode_decode(char, compress=compress)
tm.assert_numpy_array_equal(char_unpacked, char)
assert char_unpacked.flags.writeable
# if this test fails I am sorry because the interpreter is now in a
# bad state where b'a' points to 98 == ord(b'b').
char_unpacked[0] = ord(b'b')
        # we compare the ord of bytes b'a' with unicode u'a' because they should
        # always be the same (unless we were able to mutate the shared
        # character singleton, in which case ord(b'a') == ord(b'b')).
assert ord(b'a') == ord(u'a')
tm.assert_numpy_array_equal(
char_unpacked,
np.array([ord(b'b')], dtype='uint8'),
)
def test_small_strings_no_warn_zlib(self):
if not _ZLIB_INSTALLED:
pytest.skip('no zlib')
self._test_small_strings_no_warn('zlib')
def test_small_strings_no_warn_blosc(self):
if not _BLOSC_INSTALLED:
pytest.skip('no blosc')
self._test_small_strings_no_warn('blosc')
def test_readonly_axis_blosc(self):
# GH11880
if not _BLOSC_INSTALLED:
pytest.skip('no blosc')
df1 = DataFrame({'A': list('abcd')})
df2 = DataFrame(df1, index=[1., 2., 3., 4.])
assert 1 in self.encode_decode(df1['A'], compress='blosc')
assert 1. in self.encode_decode(df2['A'], compress='blosc')
def test_readonly_axis_zlib(self):
# GH11880
df1 = DataFrame({'A': list('abcd')})
df2 = DataFrame(df1, index=[1., 2., 3., 4.])
assert 1 in self.encode_decode(df1['A'], compress='zlib')
assert 1. in self.encode_decode(df2['A'], compress='zlib')
def test_readonly_axis_blosc_to_sql(self):
# GH11880
if not _BLOSC_INSTALLED:
pytest.skip('no blosc')
if not self._SQLALCHEMY_INSTALLED:
pytest.skip('no sqlalchemy')
expected = DataFrame({'A': list('abcd')})
df = self.encode_decode(expected, compress='blosc')
eng = self._create_sql_engine("sqlite:///:memory:")
df.to_sql('test', eng, if_exists='append')
result = pandas.read_sql_table('test', eng, index_col='index')
result.index.names = [None]
assert_frame_equal(expected, result)
def test_readonly_axis_zlib_to_sql(self):
# GH11880
if not _ZLIB_INSTALLED:
pytest.skip('no zlib')
if not self._SQLALCHEMY_INSTALLED:
pytest.skip('no sqlalchemy')
expected = DataFrame({'A': list('abcd')})
df = self.encode_decode(expected, compress='zlib')
eng = self._create_sql_engine("sqlite:///:memory:")
df.to_sql('test', eng, if_exists='append')
result = pandas.read_sql_table('test', eng, index_col='index')
result.index.names = [None]
assert_frame_equal(expected, result)
class TestEncoding(TestPackers):
def setup_method(self, method):
super(TestEncoding, self).setup_method(method)
data = {
'A': [compat.u('\u2019')] * 1000,
'B': np.arange(1000, dtype=np.int32),
'C': list(100 * 'abcdefghij'),
'D': date_range(datetime.datetime(2015, 4, 1), periods=1000),
'E': [datetime.timedelta(days=x) for x in range(1000)],
'G': [400] * 1000
}
self.frame = {
'float': DataFrame(dict((k, data[k]) for k in ['A', 'A'])),
'int': DataFrame(dict((k, data[k]) for k in ['B', 'B'])),
'mixed': DataFrame(data),
}
self.utf_encodings = ['utf8', 'utf16', 'utf32']
def test_utf(self):
# GH10581
for encoding in self.utf_encodings:
for frame in compat.itervalues(self.frame):
result = self.encode_decode(frame, encoding=encoding)
assert_frame_equal(result, frame)
def test_default_encoding(self):
for frame in compat.itervalues(self.frame):
result = frame.to_msgpack()
expected = frame.to_msgpack(encoding='utf8')
assert result == expected
result = self.encode_decode(frame)
assert_frame_equal(result, frame)
def legacy_packers_versions():
# yield the packers versions
path = tm.get_data_path('legacy_msgpack')
for v in os.listdir(path):
p = os.path.join(path, v)
if os.path.isdir(p):
yield v
class TestMsgpack(object):
"""
How to add msgpack tests:
1. Install pandas version intended to output the msgpack.
2. Execute "generate_legacy_storage_files.py" to create the msgpack.
$ python generate_legacy_storage_files.py <output_dir> msgpack
    3. Move the created msgpack file to "data/legacy_msgpack/<version>" directory.
"""
minimum_structure = {'series': ['float', 'int', 'mixed',
'ts', 'mi', 'dup'],
'frame': ['float', 'int', 'mixed', 'mi'],
'panel': ['float'],
'index': ['int', 'date', 'period'],
'mi': ['reg2']}
def check_min_structure(self, data, version):
for typ, v in self.minimum_structure.items():
assert typ in data, '"{0}" not found in unpacked data'.format(typ)
for kind in v:
msg = '"{0}" not found in data["{1}"]'.format(kind, typ)
assert kind in data[typ], msg
def compare(self, current_data, all_data, vf, version):
# GH12277 encoding default used to be latin-1, now utf-8
if LooseVersion(version) < '0.18.0':
data = read_msgpack(vf, encoding='latin-1')
else:
data = read_msgpack(vf)
self.check_min_structure(data, version)
for typ, dv in data.items():
assert typ in all_data, ('unpacked data contains '
'extra key "{0}"'
.format(typ))
for dt, result in dv.items():
assert dt in current_data[typ], ('data["{0}"] contains extra '
'key "{1}"'.format(typ, dt))
try:
expected = current_data[typ][dt]
except KeyError:
continue
# use a specific comparator
# if available
comp_method = "compare_{typ}_{dt}".format(typ=typ, dt=dt)
comparator = getattr(self, comp_method, None)
if comparator is not None:
comparator(result, expected, typ, version)
else:
check_arbitrary(result, expected)
return data
def compare_series_dt_tz(self, result, expected, typ, version):
# 8260
# dtype is object < 0.17.0
if LooseVersion(version) < '0.17.0':
expected = expected.astype(object)
tm.assert_series_equal(result, expected)
else:
tm.assert_series_equal(result, expected)
def compare_frame_dt_mixed_tzs(self, result, expected, typ, version):
# 8260
# dtype is object < 0.17.0
if LooseVersion(version) < '0.17.0':
expected = expected.astype(object)
tm.assert_frame_equal(result, expected)
else:
tm.assert_frame_equal(result, expected)
@pytest.mark.parametrize('version', legacy_packers_versions())
def test_msgpacks_legacy(self, current_packers_data, all_packers_data,
version):
pth = tm.get_data_path('legacy_msgpack/{0}'.format(version))
n = 0
for f in os.listdir(pth):
# GH12142 0.17 files packed in P2 can't be read in P3
if (compat.PY3 and version.startswith('0.17.') and
f.split('.')[-4][-1] == '2'):
continue
vf = os.path.join(pth, f)
try:
with catch_warnings(record=True):
self.compare(current_packers_data, all_packers_data,
vf, version)
except ImportError:
# blosc not installed
continue
n += 1
assert n > 0, 'Msgpack files are not tested'
|
mit
|
jstraub/rtmf
|
python/evaluateGravityOrientation.py
|
1
|
5661
|
import numpy as np
import os.path, re, sys
import scipy.io as scio
from scipy.linalg import det
import cv2
import itertools
from js.data.rgbd.rgbdframe import *
import mayavi.mlab as mlab
import matplotlib.pyplot as plt
def plotMF(fig,R,col=None):
mfColor = []
mfColor.append((232/255.0,65/255.0,32/255.0)) # red
mfColor.append((32/255.0,232/255.0,59/255.0)) # green
mfColor.append((32/255.0,182/255.0,232/255.0)) # tuerkis
pts = np.zeros((3,6))
for i in range(0,3):
pts[:,i*2] = -R[:,i]
pts[:,i*2+1] = R[:,i]
if col is None:
mlab.plot3d(pts[0,0:2],pts[1,0:2],pts[2,0:2],figure=fig,color=mfColor[0])
mlab.plot3d(pts[0,2:4],pts[1,2:4],pts[2,2:4],figure=fig,color=mfColor[1])
mlab.plot3d(pts[0,4:6],pts[1,4:6],pts[2,4:6],figure=fig,color=mfColor[2])
else:
mlab.plot3d(pts[0,0:2],pts[1,0:2],pts[2,0:2],figure=fig,color=col)
mlab.plot3d(pts[0,2:4],pts[1,2:4],pts[2,2:4],figure=fig,color=col)
mlab.plot3d(pts[0,4:6],pts[1,4:6],pts[2,4:6],figure=fig,color=col)
def ExtractFloorDirection(pathLabelImage, nImg, lFloor=11):
errosionSize=8
print pathLabelImage
L = cv2.imread(pathLabelImage,cv2.CV_LOAD_IMAGE_UNCHANGED)
floorMap = ((L==lFloor)*255).astype(np.uint8)
kernel = np.ones((errosionSize, errosionSize),np.uint8)
floorMapE = cv2.erode(floorMap,kernel,iterations=1)
# plt.imshow(np.concatenate((floorMap,floorMapE),axis=1))
# plt.show()
print L.shape
print nImg.shape
nFloor = nImg[floorMapE>128,:].T
print nFloor.shape, np.isnan(nFloor).sum()
nFloor = nFloor[:,np.logical_not(np.isnan(nFloor[0,:]))]
print nFloor.shape, np.isnan(nFloor).sum()
nMean = nFloor.sum(axis=1)
nMean /= np.sqrt((nMean**2).sum())
return nMean
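# Hedged helper sketch (added for illustration; not in the original script): the
# evaluation loop below repeatedly measures how far a unit vector deviates from
# the camera Y axis via arccos(|n_y|); wrapping that in a small function makes
# the metric explicit.
def angleToVerticalDeg(n):
    n = np.asarray(n, dtype=float)
    n = n / np.sqrt((n**2).sum())
    return np.arccos(np.abs(n[1]))*180./np.pi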
mode = "approx"
mode = "vmf"
mode = "vmfCF"
mode = "approxGD"
mode = "directGD"
mode = "direct"
mode = "mmfvmf"
nyuPath = "/data/vision/fisher/data1/nyu_depth_v2/"
rtmfPath = "/data/vision/scratch/fisher/jstraub/rtmf/nyu/"
if False and os.path.isfile("./angularFloorDeviations_rtmf_"+mode+".csv"):
error = np.loadtxt("./angularFloorDeviations_rtmf_"+mode+".csv")
print "nans: ", np.isnan(error[1,:]).sum(), "of", error[1,:].size
error = error[:,np.logical_not(np.isnan(error[1,:]))]
print error.shape
labels = ["unaligned","RTMF "+mode]
plt.figure()
for i in range(2):
errorS = error[i,:].tolist()
errorS.sort()
plt.plot(errorS,1.*np.arange(len(errorS))/(len(errorS)-1),label=labels[i])
plt.ylim([0,1])
plt.xlim([0,25])
plt.legend(loc="best")
plt.ylabel("precentage of scenes")
plt.xlabel("degrees from vertical")
plt.grid(True)
plt.show()
with open(os.path.join(nyuPath,"labels.txt")) as f:
labels = [label[:-1] for label in f.readlines()]
print labels[:20]
lFloor = 0
for i,label in enumerate(labels):
if label == "floor":
lFloor = i+1
break
print "label of floor: ", lFloor
if os.path.isfile("./rtmfPaths_"+mode+".txt"):
with open("./rtmfPaths_"+mode+".txt","r") as f:
rtmfPaths = [path[:-1] for path in f.readlines()]
else:
rtmfPaths = []
for root, dirs, files in os.walk(rtmfPath):
for f in files:
if re.search("[a-z_]+_[0-9]+_[0-9]+_mode_"+mode+"-[-_.0-9a-zA-Z]+_cRmf.csv", f):
rtmfPaths.append(os.path.join(root,f))
rtmfPaths.sort()
with open("./rtmfPaths_"+mode+".txt","w") as f:
f.writelines([path+"\n" for path in rtmfPaths])
print len(rtmfPaths)
labelImgPaths = []
for root, dirs, files in os.walk(nyuPath):
for f in files:
if re.search("[a-z_]+_[0-9]+_[0-9]+_l.png", f):
labelImgPaths.append(os.path.join(root,f))
labelImgPaths.sort()
print len(labelImgPaths)
#import matplotlib.pyplot as plt
#plt.figure()
error = np.zeros((2,len(rtmfPaths)))
for i,rtmfPath in enumerate(rtmfPaths):
rtmfName = re.sub("_mode_"+mode+"-[-_.0-9a-zA-Z]+_cRmf.csv","",os.path.split(rtmfPath)[1])
labelImgPathMatch = ""
for labelImgPath in labelImgPaths:
labelName = re.sub("_l.png","",os.path.split(labelImgPath)[1])
if labelName == rtmfName:
labelImgPathMatch = labelImgPath
break
labelName = re.sub("_l.png","",os.path.split(labelImgPathMatch)[1])
if not rtmfName == labelName:
print " !!!!!!!!!!!! "
print os.path.split(rtmfPath)[1], rtmfName
print os.path.split(labelImgPathMatch)[1], labelName
raw_input()
continue
# try:
R = np.loadtxt(rtmfPath)
rgbd = RgbdFrame(540.)
rgbd.load(re.sub("_l.png","",labelImgPathMatch ))
  nMean = ExtractFloorDirection(labelImgPathMatch, rgbd.getNormals(), lFloor)
error[0,i] = np.arccos(np.abs(nMean[1]))*180./np.pi
print "direction of floor surface normals: ", nMean
print "R_rtmf", R
pcC = rgbd.getPc()[rgbd.mask,:].T
anglesToY = []
M = np.concatenate((R, -R),axis=1)
# print M
for ids in itertools.combinations(np.arange(6),3):
Rc = np.zeros((3,3))
for l in range(3):
Rc[:,l] = M[:,ids[l]]
if det(Rc) > 0:
Rn = Rc.T.dot(nMean)
anglesToY.append(np.arccos(np.abs(Rn[1]))*180./np.pi)
print anglesToY[-1], Rn
# figm = mlab.figure(bgcolor=(1,1,1))
# pc = Rc.T.dot(pcC)
# mlab.points3d(pc[0,:],pc[1,:],pc[2,:],
# rgbd.gray[rgbd.mask],colormap='gray',scale_factor=0.01,
# figure=figm,mode='point',mask_points=1)
# mlab.show(stop=True)
# mlab.close(figm)
error[1,i] = min(anglesToY)
print error[:,i]
if False:
n = rgbd.getNormals()[rgbd.mask,:]
figm = mlab.figure(bgcolor=(1,1,1))
mlab.points3d(n[:,0],n[:,1],n[:,2], color=(0.5,0.5,0.5),mode="point")
plotMF(figm,R)
mlab.show(stop=True)
# except:
# print "Unexpected error:", sys.exc_info()[0]
# error[i] = np.nan
np.savetxt("./angularFloorDeviations_rtmf_"+mode+".csv",error)
|
mit
|
nsdf/nsdf
|
examples/moose_Multi/minchan.py
|
3
|
12176
|
# minimal.py ---
# Upi Bhalla, NCBS Bangalore 2014.
#
# Commentary:
#
# Minimal model for loading rdesigneur: reac-diff elec signaling in neurons
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; see the file COPYING. If not, write to
# the Free Software Foundation, Inc., 51 Franklin Street, Fifth
# Floor, Boston, MA 02110-1301, USA.
#
# Code:
import sys
sys.path.append('../../python')
import os
os.environ['NUMPTHREADS'] = '1'
import math
import numpy
import matplotlib.pyplot as plt
import moose
import proto18
EREST_ACT = -70e-3
def loadElec():
library = moose.Neutral( '/library' )
moose.setCwe( '/library' )
proto18.make_Ca()
proto18.make_Ca_conc()
proto18.make_K_AHP()
proto18.make_K_C()
proto18.make_Na()
proto18.make_K_DR()
proto18.make_K_A()
proto18.make_glu()
proto18.make_NMDA()
proto18.make_Ca_NMDA()
proto18.make_NMDA_Ca_conc()
proto18.make_axon()
moose.setCwe( '/library' )
model = moose.Neutral( '/model' )
cellId = moose.loadModel( 'mincell2.p', '/model/elec', "Neutral" )
return cellId
def loadChem( diffLength ):
chem = moose.Neutral( '/model/chem' )
neuroCompt = moose.NeuroMesh( '/model/chem/kinetics' )
neuroCompt.separateSpines = 1
neuroCompt.geometryPolicy = 'cylinder'
spineCompt = moose.SpineMesh( '/model/chem/compartment_1' )
moose.connect( neuroCompt, 'spineListOut', spineCompt, 'spineList', 'OneToOne' )
psdCompt = moose.PsdMesh( '/model/chem/compartment_2' )
#print 'Meshvolume[neuro, spine, psd] = ', neuroCompt.mesh[0].volume, spineCompt.mesh[0].volume, psdCompt.mesh[0].volume
moose.connect( neuroCompt, 'psdListOut', psdCompt, 'psdList', 'OneToOne' )
modelId = moose.loadModel( 'minimal.g', '/model/chem', 'ee' )
neuroCompt.name = 'dend'
spineCompt.name = 'spine'
psdCompt.name = 'psd'
def makeNeuroMeshModel():
diffLength = 6e-6 # Aim for 2 soma compartments.
elec = loadElec()
loadChem( diffLength )
neuroCompt = moose.element( '/model/chem/dend' )
neuroCompt.diffLength = diffLength
neuroCompt.cellPortion( elec, '/model/elec/#' )
for x in moose.wildcardFind( '/model/chem/##[ISA=PoolBase]' ):
if (x.diffConst > 0):
x.diffConst = 1e-11
for x in moose.wildcardFind( '/model/chem/##/Ca' ):
x.diffConst = 1e-10
# Put in dend solvers
ns = neuroCompt.numSegments
ndc = neuroCompt.numDiffCompts
print 'ns = ', ns, ', ndc = ', ndc
assert( neuroCompt.numDiffCompts == neuroCompt.mesh.num )
assert( ns == 1 ) # soma/dend only
assert( ndc == 2 ) # split into 2.
nmksolve = moose.Ksolve( '/model/chem/dend/ksolve' )
nmdsolve = moose.Dsolve( '/model/chem/dend/dsolve' )
nmstoich = moose.Stoich( '/model/chem/dend/stoich' )
nmstoich.compartment = neuroCompt
nmstoich.ksolve = nmksolve
nmstoich.dsolve = nmdsolve
nmstoich.path = "/model/chem/dend/##"
print 'done setting path, numPools = ', nmdsolve.numPools
assert( nmdsolve.numPools == 1 )
assert( nmdsolve.numAllVoxels == 2 )
assert( nmstoich.numAllPools == 1 )
# oddly, numLocalFields does not work.
ca = moose.element( '/model/chem/dend/DEND/Ca' )
assert( ca.numData == ndc )
# Put in spine solvers. Note that these get info from the neuroCompt
spineCompt = moose.element( '/model/chem/spine' )
sdc = spineCompt.mesh.num
print 'sdc = ', sdc
assert( sdc == 1 )
smksolve = moose.Ksolve( '/model/chem/spine/ksolve' )
smdsolve = moose.Dsolve( '/model/chem/spine/dsolve' )
smstoich = moose.Stoich( '/model/chem/spine/stoich' )
smstoich.compartment = spineCompt
smstoich.ksolve = smksolve
smstoich.dsolve = smdsolve
smstoich.path = "/model/chem/spine/##"
assert( smstoich.numAllPools == 3 )
assert( smdsolve.numPools == 3 )
assert( smdsolve.numAllVoxels == 1 )
# Put in PSD solvers. Note that these get info from the neuroCompt
psdCompt = moose.element( '/model/chem/psd' )
pdc = psdCompt.mesh.num
assert( pdc == 1 )
pmksolve = moose.Ksolve( '/model/chem/psd/ksolve' )
pmdsolve = moose.Dsolve( '/model/chem/psd/dsolve' )
pmstoich = moose.Stoich( '/model/chem/psd/stoich' )
pmstoich.compartment = psdCompt
pmstoich.ksolve = pmksolve
pmstoich.dsolve = pmdsolve
pmstoich.path = "/model/chem/psd/##"
assert( pmstoich.numAllPools == 3 )
assert( pmdsolve.numPools == 3 )
assert( pmdsolve.numAllVoxels == 1 )
foo = moose.element( '/model/chem/psd/Ca' )
print 'PSD: numfoo = ', foo.numData
print 'PSD: numAllVoxels = ', pmksolve.numAllVoxels
# Put in junctions between the diffusion solvers
nmdsolve.buildNeuroMeshJunctions( smdsolve, pmdsolve )
"""
CaNpsd = moose.vec( '/model/chem/psdMesh/PSD/PP1_PSD/CaN' )
print 'numCaN in PSD = ', CaNpsd.nInit, ', vol = ', CaNpsd.volume
CaNspine = moose.vec( '/model/chem/spine/SPINE/CaN_BULK/CaN' )
print 'numCaN in spine = ', CaNspine.nInit, ', vol = ', CaNspine.volume
"""
# set up adaptors
aCa = moose.Adaptor( '/model/chem/dend/DEND/adaptCa', ndc )
adaptCa = moose.vec( '/model/chem/dend/DEND/adaptCa' )
chemCa = moose.vec( '/model/chem/dend/DEND/Ca' )
print 'aCa = ', aCa, ' foo = ', foo, "len( ChemCa ) = ", len( chemCa ), ", numData = ", chemCa.numData, "len( adaptCa ) = ", len( adaptCa )
assert( len( adaptCa ) == ndc )
assert( len( chemCa ) == ndc )
path = '/model/elec/soma/Ca_conc'
elecCa = moose.element( path )
print "=========="
print elecCa
print adaptCa
print chemCa
moose.connect( elecCa, 'concOut', adaptCa[0], 'input', 'Single' )
moose.connect( adaptCa, 'output', chemCa, 'setConc', 'OneToOne' )
adaptCa.inputOffset = 0.0 #
adaptCa.outputOffset = 0.00008 # 80 nM offset in chem.
adaptCa.scale = 1e-3 # 520 to 0.0052 mM
#print adaptCa.outputOffset
#print adaptCa.scale
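# Hedged note (added for clarity; not in the original script): with
# inputOffset = 0, the adaptor above maps a Ca_conc reading x to roughly
# scale*x + outputOffset = 1e-3*x + 8e-5 mM on the chemical side, which is what
# the "520 to 0.0052 mM" and "80 nM offset" comments refer to; the exact
# formula is defined by MOOSE's Adaptor class, so treat this as an
# approximation rather than its definition.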
def addPlot( objpath, field, plot ):
#assert moose.exists( objpath )
if moose.exists( objpath ):
tab = moose.Table( '/graphs/' + plot )
obj = moose.element( objpath )
if obj.className == 'Neutral':
print "addPlot failed: object is a Neutral: ", objpath
return moose.element( '/' )
else:
#print "object was found: ", objpath, obj.className
moose.connect( tab, 'requestOut', obj, field )
return tab
else:
print "addPlot failed: object not found: ", objpath
return moose.element( '/' )
def makeElecPlots():
graphs = moose.Neutral( '/graphs' )
elec = moose.Neutral( '/graphs/elec' )
addPlot( '/model/elec/soma', 'getVm', 'elec/somaVm' )
addPlot( '/model/elec/spine_head', 'getVm', 'elec/spineVm' )
addPlot( '/model/elec/soma/Ca_conc', 'getCa', 'elec/somaCa' )
def makeChemPlots():
graphs = moose.Neutral( '/graphs' )
chem = moose.Neutral( '/graphs/chem' )
addPlot( '/model/chem/psd/Ca_CaM', 'getConc', 'chem/psdCaCam' )
addPlot( '/model/chem/psd/Ca', 'getConc', 'chem/psdCa' )
addPlot( '/model/chem/spine/Ca_CaM', 'getConc', 'chem/spineCaCam' )
addPlot( '/model/chem/spine/Ca', 'getConc', 'chem/spineCa' )
addPlot( '/model/chem/dend/DEND/Ca', 'getConc', 'chem/dendCa' )
def testNeuroMeshMultiscale():
elecDt = 50e-6
chemDt = 0.01
ePlotDt = 0.5e-3
cPlotDt = 0.01
plotName = 'nm.plot'
makeNeuroMeshModel()
print "after model is completely done"
for i in moose.wildcardFind( '/model/chem/#/#/#/transloc#' ):
print i[0].name, i[0].Kf, i[0].Kb, i[0].kf, i[0].kb
"""
for i in moose.wildcardFind( '/model/chem/##[ISA=PoolBase]' ):
if ( i[0].diffConst > 0 ):
grandpaname = i.parent[0].parent.name + '/'
paname = i.parent[0].name + '/'
print grandpaname + paname + i[0].name, i[0].diffConst
print 'Neighbors:'
for t in moose.element( '/model/chem/spine/ksolve/junction' ).neighbors['masterJunction']:
print 'masterJunction <-', t.path
for t in moose.wildcardFind( '/model/chem/#/ksolve' ):
k = moose.element( t[0] )
print k.path + ' localVoxels=', k.numLocalVoxels, ', allVoxels= ', k.numAllVoxels
"""
'''
moose.useClock( 4, '/model/chem/dend/dsolve', 'process' )
moose.useClock( 5, '/model/chem/dend/ksolve', 'process' )
moose.useClock( 5, '/model/chem/spine/ksolve', 'process' )
moose.useClock( 5, '/model/chem/psd/ksolve', 'process' )
'''
makeChemPlots()
makeElecPlots()
moose.setClock( 0, elecDt )
moose.setClock( 1, elecDt )
moose.setClock( 2, elecDt )
moose.setClock( 4, chemDt )
moose.setClock( 5, chemDt )
moose.setClock( 6, chemDt )
moose.setClock( 7, cPlotDt )
moose.setClock( 8, ePlotDt )
moose.useClock( 0, '/model/elec/##[ISA=Compartment]', 'init' )
moose.useClock( 1, '/model/elec/##[ISA=Compartment]', 'process' )
moose.useClock( 1, '/model/elec/##[ISA=SpikeGen]', 'process' )
moose.useClock( 2, '/model/elec/##[ISA=ChanBase],/model/##[ISA=SynBase],/model/##[ISA=CaConc]','process')
#moose.useClock( 5, '/model/chem/##[ISA=PoolBase],/model/##[ISA=ReacBase],/model/##[ISA=EnzBase]', 'process' )
#moose.useClock( 4, '/model/chem/##[ISA=Adaptor]', 'process' )
moose.useClock( 4, '/model/chem/#/dsolve', 'process' )
moose.useClock( 5, '/model/chem/#/ksolve', 'process' )
moose.useClock( 6, '/model/chem/dend/DEND/adaptCa', 'process' )
moose.useClock( 7, '/graphs/chem/#', 'process' )
moose.useClock( 8, '/graphs/elec/#', 'process' )
#hsolve = moose.HSolve( '/model/elec/hsolve' )
#moose.useClock( 1, '/model/elec/hsolve', 'process' )
#hsolve.dt = elecDt
#hsolve.target = '/model/elec/compt'
#moose.reinit()
moose.element( '/model/elec/spine_head' ).inject = 5e-12
moose.element( '/model/chem/psd/Ca' ).concInit = 0.001
moose.element( '/model/chem/spine/Ca' ).concInit = 0.002
moose.element( '/model/chem/dend/DEND/Ca' ).concInit = 0.003
moose.reinit()
"""
print 'pre'
eca = moose.vec( '/model/chem/psd/PSD/CaM/Ca' )
for i in range( 3 ):
print eca[i].concInit, eca[i].conc, eca[i].nInit, eca[i].n, eca[i].volume
print 'dend'
eca = moose.vec( '/model/chem/dend/DEND/Ca' )
#for i in ( 0, 1, 2, 30, 60, 90, 120, 144 ):
for i in range( 13 ):
print i, eca[i].concInit, eca[i].conc, eca[i].nInit, eca[i].n, eca[i].volume
print 'PSD'
eca = moose.vec( '/model/chem/psd/PSD/CaM/Ca' )
for i in range( 3 ):
print eca[i].concInit, eca[i].conc, eca[i].nInit, eca[i].n, eca[i].volume
print 'spine'
eca = moose.vec( '/model/chem/spine/SPINE/CaM/Ca' )
for i in range( 3 ):
print eca[i].concInit, eca[i].conc, eca[i].nInit, eca[i].n, eca[i].volume
"""
moose.start( 0.5 )
plt.ion()
fig = plt.figure( figsize=(8,8) )
chem = fig.add_subplot( 211 )
chem.set_ylim( 0, 0.004 )
plt.ylabel( 'Conc (mM)' )
plt.xlabel( 'time (seconds)' )
for x in moose.wildcardFind( '/graphs/chem/#[ISA=Table]' ):
pos = numpy.arange( 0, x.vector.size, 1 ) * cPlotDt
line1, = chem.plot( pos, x.vector, label=x.name )
plt.legend()
elec = fig.add_subplot( 212 )
plt.ylabel( 'Vm (V)' )
plt.xlabel( 'time (seconds)' )
for x in moose.wildcardFind( '/graphs/elec/#[ISA=Table]' ):
pos = numpy.arange( 0, x.vector.size, 1 ) * ePlotDt
line1, = elec.plot( pos, x.vector, label=x.name )
plt.legend()
fig.canvas.draw()
raw_input()
'''
for x in moose.wildcardFind( '/graphs/##[ISA=Table]' ):
t = numpy.arange( 0, x.vector.size, 1 )
pylab.plot( t, x.vector, label=x.name )
pylab.legend()
pylab.show()
'''
    plt.show()
print 'All done'
def main():
testNeuroMeshMultiscale()
if __name__ == '__main__':
main()
#
# minimal.py ends here.
|
gpl-3.0
|
LarsDu/DeepNuc
|
deepnuc/nucinference.py
|
2
|
29422
|
import tensorflow as tf
import numpy as np
import time
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import pprint
import numpy as np
import os
import sys
import glob
import dubiotools as dbt
from onehotseqmutator import OnehotSeqMutator
sys.path.append(
os.path.abspath(os.path.join(os.path.dirname(__file__), os.path.pardir)))
from duseqlogo import LogoTools
import nucheatmap
import nucconvmodel
from collections import OrderedDict
import pickle
class NucInference(object):
"""
Base class for NucBinaryClassifier and NucRegressor
This class should contain all methods that work for both child classes.
This includes train(),save(),and load(). Child classes must contain
method eval_model_metrics()
build_model() should be different due to different loss functions and lack
of classification metrics.
"""
use_onehot_labels = True
def __init__(self,
sess,
train_batcher,
test_batcher,
num_epochs,
learning_rate,
batch_size,
seq_len,
save_dir,
keep_prob,
beta1,
concat_revcom_input,
nn_method_key):
self.sess = sess
self.train_batcher = train_batcher
self.test_batcher = test_batcher
self.seq_len = self.train_batcher.seq_len
self.num_epochs = num_epochs
self.learning_rate = learning_rate
self.batch_size = batch_size
self.seq_len = seq_len
self.save_dir = save_dir
self.summary_dir = self.save_dir+os.sep+'summaries'
self.checkpoint_dir = self.save_dir+os.sep+'checkpoints'
self.metrics_dir = self.save_dir+os.sep+'metrics'
#One minus the dropout_probability if dropout is enabled for a particular model
        self.keep_prob = keep_prob
#beta1 is a parameter for the AdamOptimizer
self.beta1 = beta1
#This flag will tell the inference method to concatenate
#the reverse complemented version of the input sequence
#to the input vector
self.concat_revcom_input = concat_revcom_input
self.nn_method_key = nn_method_key
self.nn_method = nucconvmodel.methods_dict[nn_method_key]
self.train_steps_per_epoch = int(self.train_batcher.num_records//self.batch_size)
if self.test_batcher:
self.test_steps_per_epoch = int(self.test_batcher.num_records//self.batch_size)
self.num_steps = int(self.train_steps_per_epoch*self.num_epochs)
self.save_on_epoch = 5 #This will be overrided in child class __init__
self.train_metrics_vector = [] #a list of metrics recorded on each save_on_epoch
self.test_metrics_vector =[]
self.epoch = 0
self.step=0
#http://stackoverflow.com/questions/43218731/
#Deprecated method of saving step on graph
#self.global_step = tf.Variable(0, trainable=False,name='global_step')
#Saver should be set in build_model() after all ops are declared
self.saver = None
def save(self):
if not os.path.exists(self.checkpoint_dir):
os.makedirs(self.checkpoint_dir)
#Save checkpoint in tensorflow
checkpoint_name = self.checkpoint_dir+os.sep+'checkpoints'
self.saver.save(self.sess,checkpoint_name,global_step=self.step)
#Save metrics using pickle in the metrics folder
if not os.path.exists(self.metrics_dir):
os.makedirs(self.metrics_dir)
metrics_file = self.metrics_dir+os.sep+'metrics-'+str(self.step)+'.p'
with open(metrics_file,'w') as of:
pickle.dump(self.train_metrics_vector,of)
pickle.dump(self.test_metrics_vector,of)
#Clean the metrics directory of old pickle files (to save storage space)
        flist = glob.glob(os.path.join(self.metrics_dir, '*.p'))
        flist_steps = [int(os.path.basename(f).strip('.p').split('-')[1]) for f in flist]
        max_metric = max(flist_steps+[0])
        for f in flist:
            if max_metric != int(os.path.basename(f).strip('.p').split('-')[1]):
                os.remove(f)
def load(self,checkpoint_dir):
'''
Load saved model from checkpoint directory.
'''
print(" Retrieving checkpoints from", checkpoint_dir)
ckpt = tf.train.get_checkpoint_state(checkpoint_dir)
if ckpt and ckpt.model_checkpoint_path:
self.saver.restore(self.sess,ckpt.model_checkpoint_path)
print "\n\n\n\nSuccessfully loaded checkpoint from",ckpt.model_checkpoint_path
#Extract step from checkpoint filename
self.step = int(os.path.basename(ckpt.model_checkpoint_path).split('-')[1])
self.epoch = int(self.step//self.train_steps_per_epoch)
self.load_pickle_metrics(self.step)
return True
else:
print ("Failed to load checkpoint",checkpoint_dir)
return False
def load_pickle_metrics(self,step):
#Load metrics from pickled metrics file
metrics_file = self.metrics_dir+os.sep+'metrics-'+str(self.step)+'.p'
with open(metrics_file,'r') as of:
self.train_metrics_vector = pickle.load(of)
if self.test_batcher:
self.test_metrics_vector = pickle.load(of)
print "Successfully loaded recorded metrics data from {}".\
format(metrics_file)
def train(self):
"""
Train a model
:returns: Tuple of two dicts: training metrics, and testing metrics
Note: This method was designed to work for both nucregressor and nucclassifier
However, those objects should have different eval_model_metrics() methods, since
classification and regression produce different metrics
"""
#coord = tf.train.Coordinator()
#threads = tf.train.start_queue_runners(self.sess.coord)
start_time = time.time()
#If model already finished training, just return last metrics
if self.step >= self.num_steps or self.epoch>self.num_epochs:
print "Loaded model already finished training"
print "Model was loaded at step {} epoch {} and num_steps set to {} and num epochs set to {}".format(self.step,self.epoch,self.num_steps,self.num_epochs)
#Important note: epochs go from 1 to num_epochs inclusive. The
# last epoch index is equal to num_epochs
for _ in xrange(self.epoch,self.num_epochs):
self.epoch += 1
for _ in xrange(self.train_steps_per_epoch):
self.step += 1
(labels_batch,dna_seq_batch) = self.train_batcher.pull_batch(self.batch_size)
feed_dict={
self.dna_seq_placeholder:dna_seq_batch,
self.labels_placeholder:labels_batch,
self.keep_prob_placeholder:self.keep_prob
}
_,loss_value,_ =self.sess.run([self.train_op, self.loss, self.logits],
feed_dict=feed_dict)
assert not np.isnan(loss_value), 'Model diverged with loss = NaN'
duration = time.time() - start_time
# Write the summaries and print an overview fairly often.
if (self.step % self.train_steps_per_epoch == 0):
# Print status to stdout.
print('Epoch %d Step %d loss = %.4f (%.3f sec)' % (self.epoch, self.step,
loss_value,
duration))
#Writer summary
summary_str = self.sess.run(self.summary_op, feed_dict=feed_dict)
self.summary_writer.add_summary(summary_str, self.step)
self.summary_writer.flush() #ensure summaries written to disk
#Save checkpoint and evaluate training and test sets
if ( self.epoch % self.save_on_epoch == 0
and self.epoch > 0
and self.epoch !=self.num_epochs
and self.step % self.train_steps_per_epoch == 0):
print('Training data eval:')
train_metrics=self.eval_model_metrics(self.train_batcher)
self.print_metrics(train_metrics)
self.train_metrics_vector.append(train_metrics)
if self.test_batcher != None:
print('Testing data eval:')
test_metrics=self.eval_model_metrics(self.test_batcher)
self.test_metrics_vector.append(test_metrics)
self.print_metrics(test_metrics)
print "Saving checkpoints"
#self.save()
if (self.epoch == self.num_epochs and self.step % self.train_steps_per_epoch ==0):
# This is the final step and epoch, save metrics
# Evaluate the entire training set.
print('Training data eval:')
#self.eval_model_accuracy(self.train_batcher)
self.train_metrics_vector.append( self.eval_model_metrics(self.train_batcher,
save_plots=True,
image_name='train_metrics.png'))
if self.test_batcher != None:
print('Testing data eval:')
self.test_metrics_vector.append(self.eval_model_metrics(self.test_batcher,
save_plots=True,
image_name='test_metrics.png'))
print "Saving final checkpoint"
self.save()
#Set return values
ret_metrics = []
if self.train_metrics_vector != []:
ret_metrics.append(self.train_metrics_vector[-1])
else:
ret_metrics.append([])
if self.test_metrics_vector != []:
ret_metrics.append(self.test_metrics_vector[-1])
else:
ret_metrics.append([])
return ret_metrics
def eval_batchers(self,save_plots=True):
# Evaluate training and test batcher data.
print('Training data eval:')
#self.eval_model_accuracy(self.train_batcher)
train_results_dict = self.eval_model_metrics(self.train_batcher,
save_plots=save_plots)
self.print_metrics(train_results_dict)
if self.test_batcher != None:
print('Testing data eval:')
test_results_dict = self.eval_model_metrics(self.test_batcher,
save_plots=save_plots)
self.print_metrics(test_results_dict)
def print_metrics(self,metrics_dict):
for key,value in metrics_dict.viewitems():
#Do not print out arrays!
if type(value) != np.ndarray:
print '\t',key,":\t",value
def eval_batch(self,dna_seq_batch,labels_batch):
""" Evaluate a single batch of labels and data """
feed_dict = {
self.dna_seq_placeholder: dna_seq_batch,
self.labels_placeholder: labels_batch,
self.keep_prob_placeholder: 1.0
}
batch_logits,batch_network = self.sess.run(self.nn_method,feed_dict=feed_dict)
return batch_logits,batch_network
def plot_test_epoch_vs_metric(self,
metric_key="auroc",
suffix = '',
save_plot=True,
xmin = 0.0,
ymin=0.5):
        format_dict = {"auroc": "auROC", "auprc": "auPRC", "f1_score": "F1-Score"}
num_mets = len(self.test_metrics_vector)
if num_mets == 0:
print "Test metrics vector is empty!"
return None
met_y = [m[metric_key] for m in self.test_metrics_vector]
ep_x = [m["epoch"] for m in self.test_metrics_vector]
fig,ax = plt.subplots(1)
ax.plot(ep_x,met_y)
ax.set_xlabel("Number of epochs")
ax.set_xlim(xmin,ep_x[-1]+5)
ax.set_ylim(ymin,1.0)
if metric_key in format_dict:
ax.set_title("Epoch vs. {} {}".format(format_dict[metric_key],suffix))
ax.set_ylabel("{}".format(format_dict[metric_key]))
else:
ax.set_title("Epoch vs.{} {}".format(metric_key,suffix))
ax.set_ylabel("{}".format(metric_key))
if save_plot:
plot_file = self.save_dir+os.sep+"epoch_vs_{}_{}.png".format(metric_key,suffix)
fig.savefig(plot_file)
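    # Hedged usage sketch (added for illustration; not in the original class):
    # once train() has populated self.test_metrics_vector, a typical call might
    # be
    #   model.plot_test_epoch_vs_metric(metric_key="auroc", suffix="holdout")
    # which saves epoch_vs_auroc_holdout.png under save_dir. "model" and
    # "holdout" are hypothetical names used only for this example.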
###Relevance batch methods###
    def relevance_from_nucs(self,nuc_seq,label):
"""
Return the relevance of a nucleotide sequence and corresponding label
:nuc_seq: string nucleotide sequence
:label: 1 x num_classes numpy indicator array
:returns: 4xn relevance matrix
:rtype: numpy array
"""
return self.run_relevance(dbt.seq_to_onehot(nuc_seq),label)
def run_relevance(self,onehot_seq,label):
"""
Return the relevance of a onehot encoded sequence and corresponding label
:onehot_seq: nx4 onehot representation of a nucleotide sequence
:label: 1 x num_classes numpy indicator array
:returns: 4xn relevance matrix
:rtype: numpy array
"""
feed_dict = {
self.dna_seq_placeholder: np.expand_dims(onehot_seq,axis=0),
self.labels_placeholder: np.expand_dims(label,axis=0),
self.keep_prob_placeholder: 1.0
}
relevance_batch = self.sess.run(self.relevance,feed_dict=feed_dict)
relevance = np.squeeze(relevance_batch[0],axis=0).T
return relevance
def relevance_from_batcher(self,batcher,index):
"""
:param batcher: DataBatcher object
:param index: integer index of item in DataBatcher object
:returns: 4xn relevance matrix
:rtype: numpy array
"""
batch_size=1 #Needs to be 1 for now due to conv2d_transpose issue
label, onehot_seq = batcher.pull_batch_by_index(index,batch_size)
rel_mat = self.run_relevance(onehot_seq[0],label[0])
return rel_mat
def plot_relevance_logo_from_batcher(self,batcher,index):
batch_size=1 #Needs to be 1 for now due to conv2d_transpose issue
label, onehot_seq = batcher.pull_batch_by_index(index,batch_size)
        numeric_label = label[0].tolist().index(1)
        save_fig = self.save_dir+os.sep+'relevance_logo_ind{}_lab{}.png'.format(index,
                                                                                numeric_label)
        self.plot_relevance_logo(onehot_seq[0], label[0], save_fig)
def plot_relevance_heatmap_from_batcher(self,batcher,index):
batch_size=1 #Needs to be 1 for now due to conv2d_transpose issue
labels_batch,dna_batch = batcher.pull_batch_by_index(index,batch_size)
numeric_label = labels_batch[0].tolist().index(1)
save_fig = self.save_dir+os.sep+'relevance_heat_ind{}_lab{}.png'.format(index,
numeric_label)
self.plot_relevance_heatmap(dna_batch[0],
labels_batch[0],
save_fig)
def plot_relevance_heatmap(self,onehot_seq,label,save_fig):
relevance = self.run_relevance(onehot_seq,label)
seq = dbt.onehot_to_nuc(onehot_seq.T)
fig,ax = nucheatmap.nuc_heatmap(seq,
relevance,
save_fig=save_fig,
clims = [0,np.max(relevance)],
cmap='Blues')
def plot_relevance_logo(self,onehot_seq,label,save_fig):
logosheets=[]
input_seqs=[]
np.set_printoptions(linewidth=500,precision=4)
        save_file = save_fig #Callers already pass a full path that includes save_dir
        relevance = self.run_relevance(onehot_seq,label)
r_img = np.squeeze(relevance).T
###Build a "relevance scaled position weight matrix"
#Convert each position to a position probability matrix
r_ppm = r_img/np.sum(r_img,axis=0)
lh = LogoTools.PwmTools.ppm_to_logo_heights(r_ppm)
#Relevance scale logo_heights
        r_rel = np.sum(r_img,axis=0) #relevance by position
max_relevance = np.max(r_rel)
min_relevance = np.min(r_rel)
#print "r_rel max", max_relevance
#print "r_rel min", min_relevance
#lh is in bits of information
#Rescale logo_heights to r_rel
scaled_lh = lh * r_rel/(max_relevance - min_relevance)
logosheets.append(scaled_lh*25)
input_seqs.append(onehot_seq.T)
rel_sheet = LogoTools.LogoNucSheet(logosheets,input_seqs,input_type='heights')
rel_sheet.write_to_png(save_file)
#plt.pcolor(r_img,cmap=plt.cm.Reds)
#print "A relevance"
#plt.plot(r_img[0,:])
#print "Relevance by position"
#plt.plot(np.sum(r_img,axis=0))
#logits_np = self.sess.run(self.logits,
# feed_dict=feed_dict)
#Print actual label and inference if classification
#guess = logits_np.tolist()
#guess = guess[0].index(max(guess[0]))
#actual = labels_batch[0].tolist().index(1.)
#print logits_np
#print self.sess.run(self.probs,feed_dict=feed_dict)
#print ("Guess:",(guess))
#print ("Actual:",(actual))
##Alipanahi mut map methods###
def alipanahi_mutmap(self,onehot_seq,label):
"""
        Create a matrix representing the effects of every
possible mutation on classification score as described in Alipanahi et al 2015
:onehot_seq: nx4 onehot representation of a nucleotide sequence
:label: 1 x num_classes numpy indicator array
:returns: nx4 mutation map numpy array
"""
#Mutate the pulled batch sequence.
#OnehotSeqMutator will produce every SNP for the input sequence
oh_iter = OnehotSeqMutator(onehot_seq.T) #4xn inputs
eval_batch_size = 75 #Number of generated examples to process in parallel
# with each step
single_pulls = oh_iter.n%eval_batch_size
        num_whole_batches = int(oh_iter.n//eval_batch_size)
num_pulls = num_whole_batches+single_pulls
all_probs = np.zeros((oh_iter.n,self.num_classes))
for i in range(num_pulls):
if i<num_whole_batches:
iter_batch_size = eval_batch_size
else:
iter_batch_size=1
labels_batch = np.asarray(iter_batch_size*[label])
dna_seq_batch = oh_iter.pull_batch(iter_batch_size)
feed_dict = {
self.dna_seq_placeholder: dna_seq_batch,
self.labels_placeholder: labels_batch,
self.keep_prob_placeholder: 1.0
}
cur_probs = self.sess.run(self.probs,feed_dict=feed_dict)
#TODO: Map these values back to the original nuc array
if iter_batch_size > 1:
start_ind = iter_batch_size*i
elif iter_batch_size == 1:
start_ind = num_whole_batches*eval_batch_size+(i-num_whole_batches)
else:
print "Never reach this condition"
start_ind = iter_batch_size*i
all_probs[start_ind:start_ind+iter_batch_size,:] = cur_probs
#print "OHseqshape",onehot_seq.shape
seq_len = onehot_seq.shape[0]
amutmap_ds=np.zeros((seq_len,4))
label_index = label.tolist().index(1)
#Onehot seq mutator created SNPs in order
#Fill output matrix with logits except where nucleotides unchanged
#Remember onehot_seq is nx4 while nuc_heatmap takes inputs that are 4xn
ps_feed_dict = {
self.dna_seq_placeholder:np.expand_dims(onehot_seq,axis=0),
self.labels_placeholder: np.expand_dims(label,axis=0),
self.keep_prob_placeholder: 1.0
}
#ps is the original score of the original input sequence
ps = self.sess.run(self.probs,feed_dict=ps_feed_dict)[0][label_index]
k=0
for i in range(seq_len):
for j in range(4):
if onehot_seq[i,j] == 1:
amutmap_ds[i,j] = 0 #Set original letter to 0
else:
#ps_hat is the score for a given snp
ps_hat = all_probs[k,label_index]
amutmap_ds[i,j] = (ps_hat - ps)*max(0,ps_hat,ps)
k+=1
#amutmap_ds is nx4
return amutmap_ds
def alipanahi_mutmap_from_batcher(self,batcher,index):
        batch_size = 1
        label,onehot_seq = batcher.pull_batch_by_index(index,batch_size)
        return self.alipanahi_mutmap(onehot_seq[0],label[0])
def plot_alipanahi_mutmap(self,onehot_seq,label,save_fig):
seq = dbt.onehot_to_nuc(onehot_seq.T)
amut_onehot = self.alipanahi_mutmap(onehot_seq,label)
nucheatmap.nuc_heatmap(seq,amut_onehot.T,save_fig=save_fig)
def plot_alipanahi_mutmap_from_batcher(self,batcher,index):
batch_size = 1
labels_batch, dna_seq_batch = batcher.pull_batch_by_index(index,batch_size)
#print "Index {} has label {}".format(index,labels_batch[0])
numeric_label = labels_batch[0].tolist().index(1)
save_fig = self.save_dir+os.sep+'alipanahi_mut_map_ind{}_lab{}.png'.format(index,
numeric_label)
self.plot_alipanahi_mutmap(dna_seq_batch[0],labels_batch[0],save_fig)
def avg_alipanahi_mutmap_of_batcher(self,batcher):
"""Get every mutmap from a given batcher, then average over all
mutation maps,
Works for num_classes = 2"""
all_labels, all_dna_seqs = batcher.pull_batch_by_index(0,batcher.num_records)
        amutmaps = [np.zeros((batcher.num_records,self.seq_len,4)) for _ in range(self.num_classes)]
for ci in range(self.num_classes):
for ri in range(batcher.num_records):
#double check this for errors
#amutmap is nx4
amutmaps[ci][ri,:,:] = self.alipanahi_mutmap(all_dna_seqs[ri,:,:],all_labels[ri,:])
return [np.mean(amutmap,axis=0) for amutmap in amutmaps]
def plot_avg_alipanahi_mutmap_of_batcher(self,batcher,fsuffix=''):
amutmaps = self.avg_alipanahi_mutmap_of_batcher(batcher)
for i,amap in enumerate(amutmaps):
# Note: amax(arr, axis=1) give greatest val for each row (nuc for nx4)
max_avg_nuc =(amap == np.amax(amap,axis=1,keepdims=True)).astype(np.float32)
seq = dbt.onehot_to_nuc(max_avg_nuc.T)
alipanahi_mutmap_dir = self.save_dir + os.sep+'alipanahi_mutmap_dir'
if not os.path.exists(alipanahi_mutmap_dir):
os.makedirs(alipanahi_mutmap_dir)
save_fname = alipanahi_mutmap_dir+os.sep+'avg_batcher_mutmap_{}recs_class{}{}.png'.\
format(batcher.num_records,i,fsuffix)
nucheatmap.nuc_heatmap(seq,amap.T,save_fig=save_fname)
###Mutmap methods###
# generate every possible snp and measure change in logit
def mutmap(self,onehot_seq,label):
"""
:onehot_seq: nx4 onehot representation of a nucleotide sequence
        :label: 1 x num_classes numpy indicator array
        Create a matrix representing the effects of every
        possible mutation on classification score as described in Alipanahi et al 2015
"""
#Mutate the pulled batch sequence.
#OnehotSeqMutator will produce every SNP for the input sequence
oh_iter = OnehotSeqMutator(onehot_seq.T) #4xn inputs
eval_batch_size = 75 #Number of generated examples to process in parallel
# with each step
single_pulls = oh_iter.n%eval_batch_size
        num_whole_batches = int(oh_iter.n//eval_batch_size)
num_pulls = num_whole_batches+single_pulls
all_logits = np.zeros((oh_iter.n,self.num_classes))
for i in range(num_pulls):
if i<num_whole_batches:
iter_batch_size = eval_batch_size
else:
iter_batch_size=1
labels_batch = np.asarray(iter_batch_size*[label])
dna_seq_batch = oh_iter.pull_batch(iter_batch_size)
feed_dict = {
self.dna_seq_placeholder: dna_seq_batch,
self.labels_placeholder: labels_batch,
self.keep_prob_placeholder: 1.0
}
cur_logits = self.sess.run(self.logits,feed_dict=feed_dict)
#TODO: Map these values back to the original nuc array
if iter_batch_size > 1:
start_ind = iter_batch_size*i
elif iter_batch_size == 1:
start_ind = num_whole_batches*eval_batch_size+(i-num_whole_batches)
else:
print "Never reach this condition"
start_ind = iter_batch_size*i
all_logits[start_ind:start_ind+iter_batch_size,:] = cur_logits
#print "OHseqshape",onehot_seq.shape
seq_len = onehot_seq.shape[0]
mutmap_ds=np.zeros((seq_len,4))
k=0
label_index = label.tolist().index(1)
#Onehot seq mutator created SNPs in order
#Fill output matrix with logits except where nucleotides unchanged
#Remember onehot_seq is nx4 while nuc_heatmap takes inputs that are 4xn
for i in range(seq_len):
for j in range(4):
if onehot_seq[i,j] == 1:
mutmap_ds[i,j] = 0 #Set original letter to 0
else:
mutmap_ds[i,j] = all_logits[k,label_index]
k+=1
return mutmap_ds.T
def mutmap_from_batcher(self,batcher,index):
"""
        Create a matrix representing the effects of every
possible mutation on classification score as described in Alipanahi et al 2015.
Retrieve this data from a databatcher
"""
label, onehot_seq = batcher.pull_batch_by_index(index,batch_size=1)
        return self.mutmap(onehot_seq[0],label[0])
def plot_mutmap(self,onehot_seq,label,save_fig):
"""
:param onehot_seq: nx4 matrix
:param label:
:returns:
:rtype:
"""
seq = dbt.onehot_to_nuc(onehot_seq.T)
mut_onehot = self.mutmap(onehot_seq,label)
#print "mut_onehot",mut_onehot.shape
#print mut_onehot
return nucheatmap.nuc_heatmap(seq,mut_onehot,save_fig=save_fig)
def plot_mutmap_from_batcher(self,batcher,index):
batch_size = 1
labels_batch, dna_seq_batch = batcher.pull_batch_by_index(index,batch_size)
#print "Index {} has label {}".format(index,labels_batch[0])
numeric_label = labels_batch[0].tolist().index(1)
save_fig = self.save_dir+os.sep+'mut_map_ind{}_lab{}.png'.format(index,numeric_label)
return self.plot_mutmap(dna_seq_batch[0],labels_batch[0],save_fig)
#################
def print_global_variables(self):
"""Print all variable names from the current graph"""
print "Printing global_variables"
gvars = list(tf.global_variables())
for var in gvars:
print "Variable name",var.name
print self.sess.run(var)
def get_optimal_metrics(self,metrics_vector, metric_key="auroc"):
"""
Get the metrics from the epoch where a given metric was at it maximum
"""
best_val = 0
for metric in metrics_vector:
best_val = max(metric[metric_key],best_val)
for metric in metrics_vector:
#metric here is an OrderedDict of metrics
if metric[metric_key]==best_val:
return metric
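#Illustrative sketch (added for clarity; not part of the original class):
#alipanahi_mutmap() above scores every possible substitution with
#(ps_hat - ps)*max(0, ps_hat, ps), where ps is the model probability of the
#true class for the original sequence and ps_hat is the probability after the
#substitution. The numbers below are made up purely to show the arithmetic.
def _alipanahi_score_sketch(ps, ps_hat):
    """Return the Alipanahi-style mutation score for a single substitution."""
    return (ps_hat - ps) * max(0, ps_hat, ps)
#Example: if the original sequence scores 0.9 and a SNP drops it to 0.6,
#the mutation map entry is (0.6 - 0.9)*0.9 = -0.27.
#print _alipanahi_score_sketch(0.9, 0.6)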
|
gpl-3.0
|
giorgiop/scikit-learn
|
sklearn/metrics/cluster/tests/test_bicluster.py
|
394
|
1770
|
"""Testing for bicluster metrics module"""
import numpy as np
from sklearn.utils.testing import assert_equal, assert_almost_equal
from sklearn.metrics.cluster.bicluster import _jaccard
from sklearn.metrics import consensus_score
def test_jaccard():
a1 = np.array([True, True, False, False])
a2 = np.array([True, True, True, True])
a3 = np.array([False, True, True, False])
a4 = np.array([False, False, True, True])
assert_equal(_jaccard(a1, a1, a1, a1), 1)
assert_equal(_jaccard(a1, a1, a2, a2), 0.25)
assert_equal(_jaccard(a1, a1, a3, a3), 1.0 / 7)
assert_equal(_jaccard(a1, a1, a4, a4), 0)
def test_consensus_score():
a = [[True, True, False, False],
[False, False, True, True]]
b = a[::-1]
assert_equal(consensus_score((a, a), (a, a)), 1)
assert_equal(consensus_score((a, a), (b, b)), 1)
assert_equal(consensus_score((a, b), (a, b)), 1)
assert_equal(consensus_score((a, b), (b, a)), 1)
assert_equal(consensus_score((a, a), (b, a)), 0)
assert_equal(consensus_score((a, a), (a, b)), 0)
assert_equal(consensus_score((b, b), (a, b)), 0)
assert_equal(consensus_score((b, b), (b, a)), 0)
def test_consensus_score_issue2445():
''' Different number of biclusters in A and B'''
a_rows = np.array([[True, True, False, False],
[False, False, True, True],
[False, False, False, True]])
a_cols = np.array([[True, True, False, False],
[False, False, True, True],
[False, False, False, True]])
idx = [0, 2]
s = consensus_score((a_rows, a_cols), (a_rows[idx], a_cols[idx]))
# B contains 2 of the 3 biclusters in A, so score should be 2/3
assert_almost_equal(s, 2.0/3.0)
|
bsd-3-clause
|
ehocchen/trading-with-python
|
sandbox/spreadCalculations.py
|
78
|
1496
|
'''
Created on 28 okt 2011
@author: jev
'''
from tradingWithPython import estimateBeta, Spread, returns, Portfolio, readBiggerScreener
from tradingWithPython.lib import yahooFinance
from pandas import DataFrame, Series
import numpy as np
import matplotlib.pyplot as plt
import os
symbols = ['SPY','IWM']
y = yahooFinance.HistData('temp.csv')
y.startDate = (2007,1,1)
df = y.loadSymbols(symbols,forceDownload=False)
#df = y.downloadData(symbols)
res = readBiggerScreener('CointPairs.csv')
#---check with spread scanner
#sp = DataFrame(index=symbols)
#
#sp['last'] = df.ix[-1,:]
#sp['targetCapital'] = Series({'SPY':100,'IWM':-100})
#sp['targetShares'] = sp['targetCapital']/sp['last']
#print sp
#The dollar-neutral ratio is about 1 * SPY - 1.7 * IWM. You will get the spread = zero (or probably very near zero)
#s = Spread(symbols, histClose = df)
#print s
#s.value.plot()
#print 'beta (returns)', estimateBeta(df[symbols[0]],df[symbols[1]],algo='returns')
#print 'beta (log)', estimateBeta(df[symbols[0]],df[symbols[1]],algo='log')
#print 'beta (standard)', estimateBeta(df[symbols[0]],df[symbols[1]],algo='standard')
#p = Portfolio(df)
#p.setShares([1, -1.7])
#p.value.plot()
quote = yahooFinance.getQuote(symbols)
print quote
s = Spread(symbols,histClose=df, estimateBeta = False)
s.setLast(quote['last'])
s.setShares(Series({'SPY':1,'IWM':-1.7}))
print s
#s.value.plot()
#s.plot()
fig = plt.figure(2)
s.plot()
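#Illustrative sketch (added; plain numpy, not the tradingWithPython API): a
#minimal version of the kind of returns-based beta estimate that the
#estimateBeta(..., algo='returns') calls above refer to,
#beta = cov(r_y, r_x) / var(r_x). priceY and priceX are hypothetical price arrays.
def returns_beta_sketch(priceY, priceX):
    priceY, priceX = np.asarray(priceY, dtype=float), np.asarray(priceX, dtype=float)
    rY = np.diff(priceY) / priceY[:-1] #simple returns of the dependent leg
    rX = np.diff(priceX) / priceX[:-1] #simple returns of the hedge leg
    c = np.cov(rY, rX) #2x2 sample covariance matrix
    return c[0, 1] / c[1, 1] #beta = cov(rY, rX) / var(rX)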
|
bsd-3-clause
|
IndraVikas/scikit-learn
|
sklearn/utils/tests/test_murmurhash.py
|
261
|
2836
|
# Author: Olivier Grisel <[email protected]>
#
# License: BSD 3 clause
import numpy as np
from sklearn.externals.six import b, u
from sklearn.utils.murmurhash import murmurhash3_32
from numpy.testing import assert_array_almost_equal
from numpy.testing import assert_array_equal
from nose.tools import assert_equal, assert_true
def test_mmhash3_int():
assert_equal(murmurhash3_32(3), 847579505)
assert_equal(murmurhash3_32(3, seed=0), 847579505)
assert_equal(murmurhash3_32(3, seed=42), -1823081949)
assert_equal(murmurhash3_32(3, positive=False), 847579505)
assert_equal(murmurhash3_32(3, seed=0, positive=False), 847579505)
assert_equal(murmurhash3_32(3, seed=42, positive=False), -1823081949)
assert_equal(murmurhash3_32(3, positive=True), 847579505)
assert_equal(murmurhash3_32(3, seed=0, positive=True), 847579505)
assert_equal(murmurhash3_32(3, seed=42, positive=True), 2471885347)
def test_mmhash3_int_array():
rng = np.random.RandomState(42)
keys = rng.randint(-5342534, 345345, size=3 * 2 * 1).astype(np.int32)
keys = keys.reshape((3, 2, 1))
for seed in [0, 42]:
expected = np.array([murmurhash3_32(int(k), seed)
for k in keys.flat])
expected = expected.reshape(keys.shape)
assert_array_equal(murmurhash3_32(keys, seed), expected)
for seed in [0, 42]:
expected = np.array([murmurhash3_32(k, seed, positive=True)
for k in keys.flat])
expected = expected.reshape(keys.shape)
assert_array_equal(murmurhash3_32(keys, seed, positive=True),
expected)
def test_mmhash3_bytes():
assert_equal(murmurhash3_32(b('foo'), 0), -156908512)
assert_equal(murmurhash3_32(b('foo'), 42), -1322301282)
assert_equal(murmurhash3_32(b('foo'), 0, positive=True), 4138058784)
assert_equal(murmurhash3_32(b('foo'), 42, positive=True), 2972666014)
def test_mmhash3_unicode():
assert_equal(murmurhash3_32(u('foo'), 0), -156908512)
assert_equal(murmurhash3_32(u('foo'), 42), -1322301282)
assert_equal(murmurhash3_32(u('foo'), 0, positive=True), 4138058784)
assert_equal(murmurhash3_32(u('foo'), 42, positive=True), 2972666014)
def test_no_collision_on_byte_range():
previous_hashes = set()
for i in range(100):
h = murmurhash3_32(' ' * i, 0)
assert_true(h not in previous_hashes,
"Found collision on growing empty string")
def test_uniform_distribution():
n_bins, n_samples = 10, 100000
bins = np.zeros(n_bins, dtype=np.float)
for i in range(n_samples):
bins[murmurhash3_32(i, positive=True) % n_bins] += 1
means = bins / n_samples
expected = np.ones(n_bins) / n_bins
assert_array_almost_equal(means / expected, np.ones(n_bins), 2)
|
bsd-3-clause
|
gotomypc/scikit-learn
|
examples/mixture/plot_gmm_sin.py
|
248
|
2747
|
"""
=================================
Gaussian Mixture Model Sine Curve
=================================
This example highlights the advantages of the Dirichlet Process:
complexity control and dealing with sparse data. The dataset is formed
by 100 points loosely spaced following a noisy sine curve. The fit by
the GMM class, using the expectation-maximization algorithm to fit a
mixture of 10 Gaussian components, finds too-small components and very
little structure. The fits by the Dirichlet process, however, show
that the model can either learn a global structure for the data (small
alpha) or easily interpolate to finding relevant local structure
(large alpha), never falling into the problems shown by the GMM class.
"""
import itertools
import numpy as np
from scipy import linalg
import matplotlib.pyplot as plt
import matplotlib as mpl
from sklearn import mixture
from sklearn.externals.six.moves import xrange
# Number of samples per component
n_samples = 100
# Generate random sample following a sine curve
np.random.seed(0)
X = np.zeros((n_samples, 2))
step = 4 * np.pi / n_samples
for i in xrange(X.shape[0]):
x = i * step - 6
X[i, 0] = x + np.random.normal(0, 0.1)
X[i, 1] = 3 * (np.sin(x) + np.random.normal(0, .2))
color_iter = itertools.cycle(['r', 'g', 'b', 'c', 'm'])
for i, (clf, title) in enumerate([
(mixture.GMM(n_components=10, covariance_type='full', n_iter=100),
"Expectation-maximization"),
(mixture.DPGMM(n_components=10, covariance_type='full', alpha=0.01,
n_iter=100),
"Dirichlet Process,alpha=0.01"),
(mixture.DPGMM(n_components=10, covariance_type='diag', alpha=100.,
n_iter=100),
"Dirichlet Process,alpha=100.")]):
clf.fit(X)
splot = plt.subplot(3, 1, 1 + i)
Y_ = clf.predict(X)
for i, (mean, covar, color) in enumerate(zip(
clf.means_, clf._get_covars(), color_iter)):
v, w = linalg.eigh(covar)
u = w[0] / linalg.norm(w[0])
# as the DP will not use every component it has access to
# unless it needs it, we shouldn't plot the redundant
# components.
if not np.any(Y_ == i):
continue
plt.scatter(X[Y_ == i, 0], X[Y_ == i, 1], .8, color=color)
# Plot an ellipse to show the Gaussian component
angle = np.arctan(u[1] / u[0])
angle = 180 * angle / np.pi # convert to degrees
ell = mpl.patches.Ellipse(mean, v[0], v[1], 180 + angle, color=color)
ell.set_clip_box(splot.bbox)
ell.set_alpha(0.5)
splot.add_artist(ell)
plt.xlim(-6, 4 * np.pi - 6)
plt.ylim(-5, 5)
plt.title(title)
plt.xticks(())
plt.yticks(())
plt.show()
|
bsd-3-clause
|
IraKorshunova/kaggle-seizure-detection
|
merger/avg_patients.py
|
1
|
1049
|
from pandas import read_csv, merge
import csv
import sys
from pandas.core.frame import DataFrame
def average_csv_data(patients, filename, target, *data_path):
data_path = data_path[0]
df_list = []
for p in data_path:
df = DataFrame(columns=['clip',target])
for patient in patients:
d = read_csv(p + '/' + patient + target + '.csv')
df = df.append(d)
df_list.append(df)
avg_df = DataFrame(columns=['clip', target])
avg_df['clip'] = df_list[0]['clip']
avg_df[target] = 0
for df in df_list:
avg_df[target] += df[target]
avg_df[target] /= 1.0 * len(df_list)
with open(filename+'.csv', 'wb') as f:
avg_df.to_csv(f, header=True, index=False)
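# Illustrative sketch (added): the core of average_csv_data is a column-wise
# mean over per-model prediction frames that share the same 'clip' ordering.
# The two frames below are made-up stand-ins for the per-model CSV files.
def _average_predictions_sketch():
    a = DataFrame({'clip': ['c1', 'c2'], 'early': [0.2, 0.8]})
    b = DataFrame({'clip': ['c1', 'c2'], 'early': [0.4, 0.6]})
    avg = DataFrame({'clip': a['clip']})
    avg['early'] = (a['early'] + b['early']) / 2.0 # -> [0.3, 0.7]
    return avg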
if __name__ == '__main__':
path = sys.argv[1:]
print path
patients = ['Dog_1', 'Dog_2', 'Dog_3', 'Dog_4', 'Patient_1', 'Patient_2', 'Patient_3', 'Patient_4', 'Patient_5',
'Patient_6', 'Patient_7', 'Patient_8']
average_csv_data(patients, 'submission_early', 'early', path)
|
mit
|
mayblue9/scikit-learn
|
sklearn/metrics/cluster/bicluster.py
|
359
|
2797
|
from __future__ import division
import numpy as np
from sklearn.utils.linear_assignment_ import linear_assignment
from sklearn.utils.validation import check_consistent_length, check_array
__all__ = ["consensus_score"]
def _check_rows_and_columns(a, b):
"""Unpacks the row and column arrays and checks their shape."""
check_consistent_length(*a)
check_consistent_length(*b)
checks = lambda x: check_array(x, ensure_2d=False)
a_rows, a_cols = map(checks, a)
b_rows, b_cols = map(checks, b)
return a_rows, a_cols, b_rows, b_cols
def _jaccard(a_rows, a_cols, b_rows, b_cols):
"""Jaccard coefficient on the elements of the two biclusters."""
intersection = ((a_rows * b_rows).sum() *
(a_cols * b_cols).sum())
a_size = a_rows.sum() * a_cols.sum()
b_size = b_rows.sum() * b_cols.sum()
return intersection / (a_size + b_size - intersection)
def _pairwise_similarity(a, b, similarity):
"""Computes pairwise similarity matrix.
result[i, j] is the Jaccard coefficient of a's bicluster i and b's
bicluster j.
"""
a_rows, a_cols, b_rows, b_cols = _check_rows_and_columns(a, b)
n_a = a_rows.shape[0]
n_b = b_rows.shape[0]
result = np.array(list(list(similarity(a_rows[i], a_cols[i],
b_rows[j], b_cols[j])
for j in range(n_b))
for i in range(n_a)))
return result
def consensus_score(a, b, similarity="jaccard"):
"""The similarity of two sets of biclusters.
Similarity between individual biclusters is computed. Then the
best matching between sets is found using the Hungarian algorithm.
The final score is the sum of similarities divided by the size of
the larger set.
Read more in the :ref:`User Guide <biclustering>`.
Parameters
----------
a : (rows, columns)
Tuple of row and column indicators for a set of biclusters.
b : (rows, columns)
Another set of biclusters like ``a``.
similarity : string or function, optional, default: "jaccard"
May be the string "jaccard" to use the Jaccard coefficient, or
any function that takes four arguments, each of which is a 1d
indicator vector: (a_rows, a_columns, b_rows, b_columns).
References
----------
* Hochreiter, Bodenhofer, et. al., 2010. `FABIA: factor analysis
for bicluster acquisition
<https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2881408/>`__.
"""
if similarity == "jaccard":
similarity = _jaccard
matrix = _pairwise_similarity(a, b, similarity)
indices = linear_assignment(1. - matrix)
n_a = len(a[0])
n_b = len(b[0])
return matrix[indices[:, 0], indices[:, 1]].sum() / max(n_a, n_b)
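# Illustrative sketch (added; mirrors the test suite above, not part of the
# public scikit-learn source): two identical sets of biclusters listed in a
# different order still reach a consensus score of 1.0, because the Hungarian
# matching of the pairwise Jaccard matrix pairs them back up.
def _consensus_score_sketch():
    rows = [[True, True, False, False], [False, False, True, True]]
    cols = [[True, True, False, False], [False, False, True, True]]
    a = (rows, cols)
    b = (rows[::-1], cols[::-1])
    return consensus_score(a, b) # 1.0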
|
bsd-3-clause
|
sangwook236/sangwook-library
|
python/src/swl/language_processing/util.py
|
2
|
45164
|
import math, random, functools
import numpy as np
from PIL import Image, ImageDraw, ImageFont
import cv2
#--------------------------------------------------------------------
# NOTE [info] >> In order to deal with "Can't pickle local object" error.
def decorate_token(x, prefix_ids, suffix_ids):
return prefix_ids + x + suffix_ids
class TokenConverterBase(object):
def __init__(self, tokens, unknown='<UNK>', sos=None, eos=None, pad=None, prefixes=None, suffixes=None, additional_tokens=None):
"""
Inputs:
tokens (list of tokens): Tokens to be regarded as individual units.
unknown (token): Unknown token.
sos (token or None): A special token to use as <SOS> token. If None, <SOS> token is not used.
All token sequences may start with the Start-Of-Sequence (SOS) token.
eos (token or None): A special token to use as <EOS> token. If None, <EOS> token is not used.
All token sequences may end with the End-Of-Sequence (EOS) token.
			pad (token, int, or None): A special token or integer token ID for padding, which may not be an actual token. If None, the pad token is not used.
prefixes (list of tokens): Special tokens to be used as prefix.
suffixes (list of tokens): Special tokens to be used as suffix.
"""
assert unknown is not None
self.unknown, self.sos, self.eos = unknown, sos, eos
if prefixes is None: prefixes = list()
if suffixes is None: suffixes = list()
if self.sos: prefixes = [self.sos] + prefixes
if self.eos: suffixes += [self.eos]
self._num_affixes = len(prefixes + suffixes)
if additional_tokens:
extended_tokens = tokens + additional_tokens + prefixes + suffixes + [self.unknown]
else:
extended_tokens = tokens + prefixes + suffixes + [self.unknown]
#self._tokens = tokens
self._tokens = extended_tokens
self.unknown_id = extended_tokens.index(self.unknown)
prefix_ids, suffix_ids = [extended_tokens.index(tok) for tok in prefixes], [extended_tokens.index(tok) for tok in suffixes]
default_pad_id = -1 #len(extended_tokens)
if pad is None:
self._pad_id = default_pad_id
self.pad = None
elif isinstance(pad, int):
self._pad_id = pad
try:
self.pad = extended_tokens[pad]
except IndexError:
self.pad = None
else:
try:
self._pad_id = extended_tokens.index(pad)
self.pad = pad
except ValueError:
self._pad_id = default_pad_id
self.pad = None
self.auxiliary_token_ids = [self._pad_id] + prefix_ids + suffix_ids
self.decoration_functor = functools.partial(decorate_token, prefix_ids=prefix_ids, suffix_ids=suffix_ids)
if self.eos:
eos_id = extended_tokens.index(self.eos)
#self.auxiliary_token_ids.remove(self.eos) # TODO [decide] >>
self.decode_functor = functools.partial(self._decode_with_eos, eos_id=eos_id)
else:
self.decode_functor = self._decode
@property
def num_tokens(self):
return len(self._tokens)
@property
def tokens(self):
return self._tokens
@property
def UNKNOWN(self):
return self.unknown
@property
def SOS(self):
return self.sos
@property
def EOS(self):
return self.eos
@property
def PAD(self):
return self.pad
@property
def pad_id(self):
return self._pad_id
@property
def num_affixes(self):
return self._num_affixes
# Token sequence -> token ID sequence.
def encode(self, seq, is_bare_output=False, *args, **kwargs):
"""
Inputs:
seq (list of tokens): A sequence of tokens to encode.
is_bare_output (bool): Specifies whether an encoded token ID sequence without prefixes and suffixes is returned or not.
"""
def tok2id(tok):
try:
return self._tokens.index(tok)
except ValueError:
#print('[SWL] Error: Failed to encode a token, {} in {}.'.format(tok, seq))
return self.unknown_id
id_seq = [tok2id(tok) for tok in seq]
return id_seq if is_bare_output else self.decoration_functor(id_seq)
# Token ID sequence -> token sequence.
def decode(self, id_seq, is_string_output=True, *args, **kwargs):
"""
Inputs:
id_seq (list of token IDs): A sequence of integer token IDs to decode.
is_string_output (bool): Specifies whether the decoded output is a string or not.
"""
return self.decode_functor(id_seq, is_string_output, *args, **kwargs)
# Token ID sequence -> token sequence.
def _decode(self, id_seq, is_string_output=True, *args, **kwargs):
def id2tok(tok):
try:
return self._tokens[tok]
except IndexError:
#print('[SWL] Error: Failed to decode a token ID, {} in {}.'.format(tok, id_seq))
return self.unknown # TODO [check] >> Is it correct?
seq = [id2tok(tok) for tok in id_seq if tok not in self.auxiliary_token_ids]
return ''.join(seq) if is_string_output else seq
# Token ID sequence -> token sequence.
def _decode_with_eos(self, id_seq, is_string_output=True, eos_id=None, *args, **kwargs):
def id2tok(tok):
try:
return self._tokens[tok]
except IndexError:
#print('[SWL] Error: Failed to decode a token ID, {} in {}.'.format(tok, id_seq))
return self.unknown # TODO [check] >> Is it correct?
"""
try:
id_seq = id_seq[:id_seq.index(eos_id)] # NOTE [info] >> It is applied to list only.
except ValueError:
pass
return self._decode(id_seq, is_string_output, *args, **kwargs)
"""
tokens = list()
for tok in id_seq:
if tok == eos_id: break
elif tok in self.auxiliary_token_ids: continue
else: tokens.append(id2tok(tok))
return ''.join(tokens) if is_string_output else tokens
"""
def ff(tok):
if tok == eos_id: raise StopIteration
elif tok in self.auxiliary_token_ids: pass # Error: return None.
else: return id2tok(tok)
try:
tokens = map(ff, id_seq)
#tokens = map(ff, filter(lambda tok: tok in self.auxiliary_token_ids, id_seq))
except StopIteration:
pass
"""
class TokenConverter(TokenConverterBase):
def __init__(self, tokens, unknown='<UNK>', sos=None, eos=None, pad=None, prefixes=None, suffixes=None):
super().__init__(tokens, unknown, sos, eos, pad, prefixes, suffixes)
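# Illustrative sketch (added): a minimal encode/decode round trip through
# TokenConverter. With the tokens ['a', 'b', 'c'] plus <SOS>/<EOS>, encode()
# wraps the token IDs with the affix IDs and decode() strips them again,
# stopping at <EOS>.
def _token_converter_sketch():
    converter = TokenConverter(['a', 'b', 'c'], sos='<SOS>', eos='<EOS>')
    id_seq = converter.encode(['a', 'b']) # [<SOS> id, 0, 1, <EOS> id].
    return converter.decode(id_seq) # 'ab'.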
class JamoTokenConverter(TokenConverterBase):
#def __init__(self, tokens, hangeul2jamo_functor, jamo2hangeul_functor, unknown='<UNK>', sos=None, eos=None, soj=None, eoj='<EOJ>', pad=None, prefixes=None, suffixes=None):
def __init__(self, tokens, hangeul2jamo_functor, jamo2hangeul_functor, unknown='<UNK>', sos=None, eos=None, eoj='<EOJ>', pad=None, prefixes=None, suffixes=None):
"""
Inputs:
tokens (list of tokens): Tokens to be regarded as individual units.
hangeul2jamo_functor (functor): A functor to convert a Hangeul letter to a sequence of Jamos.
jamo2hangeul_functor (functor): A functor to convert a sequence of Jamos to a Hangeul letter.
unknown (token): Unknown token.
sos (token or None): A special token to use as <SOS> token. If None, <SOS> token is not used.
All token sequences may start with the Start-Of-Sequence (SOS) token.
eos (token or None): A special token to use as <EOS> token. If None, <EOS> token is not used.
All token sequences may end with the End-Of-Sequence (EOS) token.
soj (token or None): A special token to use as <SOJ> token. If None, <SOJ> token is not used.
All Hangeul jamo sequences may start with the Start-Of-Jamo-Sequence (SOJ) token.
eoj (token or None): A special token to use as <EOJ> token. If None, <EOJ> token is not used.
All Hangeul jamo sequences may end with the End-Of-Jamo-Sequence (EOJ) token.
			pad (token, int, or None): A special token or integer token ID for padding, which may not be an actual token. If None, the pad token is not used.
prefixes (list of tokens): Special tokens to be used as prefix.
suffixes (list of tokens): Special tokens to be used as suffix.
"""
#assert soj is not None and eoj is not None
assert eoj is not None
#super().__init__(tokens, unknown, sos, eos, pad, prefixes, suffixes, additional_tokens=[soj, eoj])
super().__init__(tokens, unknown, sos, eos, pad, prefixes, suffixes, additional_tokens=[eoj])
#self.soj, self.eoj = soj, eoj
self.soj, self.eoj = None, eoj
# TODO [check] >> This implementation using itertools.chain() may be slow.
import itertools
#self.hangeul2jamo_functor = hangeul2jamo_functor
self.hangeul2jamo_functor = lambda hgstr: list(itertools.chain(*[[tt] if len(tt) > 1 else hangeul2jamo_functor(tt) for tt in hgstr]))
self.jamo2hangeul_functor = jamo2hangeul_functor
#self.jamo2hangeul_functor = lambda jmstr: list(itertools.chain(*[[tt] if len(tt) > 1 else jamo2hangeul_functor(tt) for tt in jmstr]))
@property
def SOJ(self):
return self.soj
@property
def EOJ(self):
return self.eoj
# Token sequence -> token ID sequence.
def encode(self, seq, is_bare_output=False, *args, **kwargs):
"""
Inputs:
seq (list of tokens): A sequence of tokens to encode.
is_bare_output (bool): Specifies whether an encoded token ID sequence without prefixes and suffixes is returned or not.
"""
try:
return super().encode(self.hangeul2jamo_functor(seq), is_bare_output, *args, **kwargs)
except Exception as ex:
print('[SWL] Error: Failed to encode a token sequence: {}.'.format(seq))
raise
# Token ID sequence -> token sequence.
def decode(self, id_seq, is_string_output=True, *args, **kwargs):
"""
Inputs:
id_seq (list of token IDs): A sequence of integer token IDs to decode.
is_string_output (bool): Specifies whether the decoded output is a string or not.
"""
try:
return self.jamo2hangeul_functor(super().decode(id_seq, is_string_output, *args, **kwargs))
except Exception as ex:
print('[SWL] Error: Failed to decode a token ID sequence: {}.'.format(id_seq))
raise
#--------------------------------------------------------------------
def compute_simple_text_matching_accuracy(text_pairs):
total_text_count = len(text_pairs)
correct_text_count = len(list(filter(lambda x: x[0] == x[1], text_pairs)))
correct_word_count, total_word_count, correct_char_count, total_char_count = 0, 0, 0, 0
for inf_text, gt_text in text_pairs:
inf_words, gt_words = inf_text.split(' '), gt_text.split(' ')
total_word_count += max(len(inf_words), len(gt_words))
correct_word_count += len(list(filter(lambda x: x[0] == x[1], zip(inf_words, gt_words))))
total_char_count += max(len(inf_text), len(gt_text))
correct_char_count += len(list(filter(lambda x: x[0] == x[1], zip(inf_text, gt_text))))
return correct_text_count, total_text_count, correct_word_count, total_word_count, correct_char_count, total_char_count
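# Illustrative sketch (added): the counters returned above on a tiny example.
# ('abc', 'abc') matches exactly, while ('ab', 'ac') matches 1 of 2 characters,
# so the result reads (correct_texts, total_texts, correct_words, total_words,
# correct_chars, total_chars) = (1, 2, 1, 2, 4, 5).
def _text_matching_accuracy_sketch():
    pairs = [('abc', 'abc'), ('ab', 'ac')]
    return compute_simple_text_matching_accuracy(pairs) # (1, 2, 1, 2, 4, 5).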
def compute_sequence_matching_ratio(seq_pairs, isjunk=None):
import difflib
return functools.reduce(lambda total_ratio, pair: total_ratio + difflib.SequenceMatcher(isjunk, pair[0], pair[1]).ratio(), seq_pairs, 0) / len(seq_pairs)
"""
total_ratio = 0
for inf, gt in seq_pairs:
matcher = difflib.SequenceMatcher(isjunk, inf, gt)
# sum(matched sequence lengths) / len(G/T).
total_ratio += functools.reduce(lambda matched_len, mth: matched_len + mth.size, matcher.get_matching_blocks(), 0) / len(gt) if len(gt) > 0 else 0
return total_ratio / len(seq_pairs)
"""
def compute_string_distance(text_pairs):
import jellyfish
#string_distance_functor = jellyfish.hamming_distance
string_distance_functor = jellyfish.levenshtein_distance
#string_distance_functor = jellyfish.damerau_levenshtein_distance
#string_distance_functor = jellyfish.jaro_distance
#string_distance_functor = functools.partial(jellyfish.jaro_winkler, long_tolerance=False)
#string_distance_functor = jellyfish.match_rating_comparison
total_text_count = len(text_pairs)
text_distance = functools.reduce(lambda ss, x: ss + string_distance_functor(x[0], x[1]), text_pairs, 0)
word_distance, total_word_count, char_distance, total_char_count = 0, 0, 0, 0
for inf_text, gt_text in text_pairs:
inf_words, gt_words = inf_text.split(' '), gt_text.split(' ')
total_word_count += max(len(inf_words), len(gt_words))
word_distance += functools.reduce(lambda ss, x: ss + string_distance_functor(x[0], x[1]), zip(inf_words, gt_words), 0)
total_char_count += max(len(inf_text), len(gt_text))
char_distance += functools.reduce(lambda ss, x: ss + string_distance_functor(x[0], x[1]), zip(inf_text, gt_text), 0)
return text_distance, word_distance, char_distance, total_text_count, total_word_count, total_char_count
def compute_sequence_precision_and_recall(seq_pairs, classes=None, isjunk=None):
import difflib
if classes is None:
classes = list(zip(*seq_pairs))
classes = sorted(functools.reduce(lambda x, txt: x.union(txt), classes[0] + classes[1], set()))
"""
# Too slow.
def compute_metric(seq_pairs, cls):
TP_FP, TP_FN, TP = 0, 0, 0
for inf, gt in seq_pairs:
TP_FP += inf.count(cls) # Retrieved examples. TP + FP.
TP_FN += gt.count(cls) # Relevant examples. TP + FN.
#TP += len(list(filter(lambda ig: ig[0] == ig[1] == cls, zip(inf, gt)))) # Too simple.
#TP += sum([inf[mth.a:mth.a+mth.size].count(cls) for mth in difflib.SequenceMatcher(isjunk, inf, gt).get_matching_blocks() if mth.size > 0])
TP = functools.reduce(lambda tot, mth: tot + inf[mth.a:mth.a+mth.size].count(cls) if mth.size > 0 else tot, difflib.SequenceMatcher(isjunk, inf, gt).get_matching_blocks(), TP)
return TP_FP, TP_FN, TP
# A list of (TP + FP, TP + FN, TP)'s.
#return list(map(lambda cls: compute_metric(seq_pairs, cls), classes)), classes
# A dictionary of {class: (TP + FP, TP + FN, TP)} pairs.
return {cls: metric for cls, metric in zip(classes, map(lambda cls: compute_metric(seq_pairs, cls), classes))}
"""
metrics = {cls: [0, 0, 0] for cls in classes} # A dictionary of {class: (TP + FP, TP + FN, TP)} pairs.
for inf, gt in seq_pairs:
#for cls in set(inf): metrics[cls][0] += inf.count(cls) # Retrieved examples. TP + FP.
#for cls in set(gt): metrics[cls][1] += gt.count(cls) # Relevant examples. TP + FN.
for cls in inf: metrics[cls][0] += 1 # Retrieved examples. TP + FP.
for cls in gt: metrics[cls][1] += 1 # Relevant examples. TP + FN.
matches = difflib.SequenceMatcher(isjunk, inf, gt).get_matching_blocks()
for mth in matches:
if mth.size > 0:
#for cls in set(inf[mth.a:mth.a+mth.size]): metrics[cls][2] += inf[mth.a:mth.a+mth.size].count(cls)
for cls in inf[mth.a:mth.a+mth.size]: metrics[cls][2] += 1
return metrics
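# Illustrative sketch (added): converting the per-class [TP + FP, TP + FN, TP]
# counts returned above into precision and recall, guarding against classes
# that never occur in the inferences or the ground truth.
def _precision_recall_sketch(metrics):
    results = dict()
    for cls, (tp_fp, tp_fn, tp) in metrics.items():
        precision = tp / tp_fp if tp_fp > 0 else 0.0
        recall = tp / tp_fn if tp_fn > 0 else 0.0
        results[cls] = (precision, recall)
    return results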
#--------------------------------------------------------------------
def compute_text_size(text, font_type, font_index, font_size):
font = ImageFont.truetype(font=font_type, size=font_size, index=font_index)
text_size = font.getsize(text) # (width, height).
font_offset = font.getoffset(text) # (x, y).
return text_size[0] + font_offset[0], text_size[1] + font_offset[1]
def generate_text_image(text, font_type, font_index, font_size, font_color, bg_color, image_size=None, text_offset=None, crop_text_area=True, draw_text_border=False, char_space_ratio=None, mode='RGB', mask=False, mask_mode='1'):
if char_space_ratio is None or 1 == char_space_ratio:
if mask:
return generate_simple_text_image_and_mask(text, font_type, font_index, font_size, font_color, bg_color, image_size, text_offset, crop_text_area, draw_text_border, mode, mask_mode)
else:
return generate_simple_text_image(text, font_type, font_index, font_size, font_color, bg_color, image_size, text_offset, crop_text_area, draw_text_border, mode)
else:
if mask:
return generate_per_character_text_image_and_mask(text, font_type, font_index, font_size, font_color, bg_color, image_size, text_offset, crop_text_area, draw_text_border, char_space_ratio, mode, mask_mode)
else:
return generate_per_character_text_image(text, font_type, font_index, font_size, font_color, bg_color, image_size, text_offset, crop_text_area, draw_text_border, char_space_ratio, mode)
def generate_simple_text_image(text, font_type, font_index, font_size, font_color, bg_color, image_size=None, text_offset=None, crop_text_area=True, draw_text_border=False, mode='RGB'):
if image_size is None:
image_size = (math.ceil(len(text) * font_size * 1.1), math.ceil((text.count('\n') + 1) * font_size * 1.1))
if text_offset is None:
text_offset = (0, 0)
# TODO [improve] >> Other color modes have to be supported.
if 'L' == mode or '1' == mode:
image_depth = 1
elif 'RGBA' == mode:
image_depth = 4
else:
image_depth = 3
if font_color is None:
#font_color = (random.randrange(256),) * image_depth # Uses a random grayscale font color.
font_color = tuple(random.randrange(256) for _ in range(image_depth)) # Uses a random RGB font color.
if bg_color is None:
#bg_color = (random.randrange(256),) * image_depth # Uses a random grayscale background color.
bg_color = tuple(random.randrange(256) for _ in range(image_depth)) # Uses a random RGB background color.
font = ImageFont.truetype(font=font_type, size=font_size, index=font_index)
img = Image.new(mode=mode, size=image_size, color=bg_color)
#img = Image.new(mode='RGB', size=image_size, color=bg_color)
#img = Image.new(mode='RGBA', size=image_size, color=bg_color)
#img = Image.new(mode='L', size=image_size, color=bg_color)
#img = Image.new(mode='1', size=image_size, color=bg_color)
draw = ImageDraw.Draw(img)
# Draws text.
draw.text(xy=text_offset, text=text, font=font, fill=font_color)
if draw_text_border or crop_text_area:
#text_size = font.getsize(text) # (width, height). This is erroneous for multiline text.
text_size = draw.textsize(text, font=font) # (width, height).
font_offset = font.getoffset(text) # (x, y).
text_rect = (text_offset[0], text_offset[1], text_offset[0] + text_size[0] + font_offset[0], text_offset[1] + text_size[1] + font_offset[1])
# Draws a rectangle surrounding text.
if draw_text_border:
draw.rectangle(text_rect, outline='red', width=5)
# Crops text area.
if crop_text_area:
img = img.crop(text_rect)
return img
def generate_simple_text_image_and_mask(text, font_type, font_index, font_size, font_color, bg_color, image_size=None, text_offset=None, crop_text_area=True, draw_text_border=False, mode='RGB', mask_mode='1'):
if image_size is None:
image_size = (math.ceil(len(text) * font_size * 1.1), math.ceil((text.count('\n') + 1) * font_size * 1.1))
if text_offset is None:
text_offset = (0, 0)
# TODO [improve] >> Other color modes have to be supported.
if 'L' == mode or '1' == mode:
image_depth = 1
elif 'RGBA' == mode:
image_depth = 4
else:
image_depth = 3
if font_color is None:
#font_color = (random.randrange(256),) * image_depth # Uses a random grayscale font color.
font_color = tuple(random.randrange(256) for _ in range(image_depth)) # Uses a random RGB font color.
if bg_color is None:
#bg_color = (random.randrange(256),) * image_depth # Uses a random grayscale background color.
bg_color = tuple(random.randrange(256) for _ in range(image_depth)) # Uses a random RGB background color.
font = ImageFont.truetype(font=font_type, size=font_size, index=font_index)
img = Image.new(mode=mode, size=image_size, color=bg_color)
#img = Image.new(mode='RGB', size=image_size, color=bg_color)
#img = Image.new(mode='RGBA', size=image_size, color=bg_color)
#img = Image.new(mode='L', size=image_size, color=bg_color)
#img = Image.new(mode='1', size=image_size, color=bg_color)
draw_img = ImageDraw.Draw(img)
msk = Image.new(mode=mask_mode, size=image_size, color=0)
#msk = Image.new(mode='1', size=image_size, color=0) # {0, 1}, bool.
#msk = Image.new(mode='L', size=image_size, color=0) # [0, 255], uint8.
draw_msk = ImageDraw.Draw(msk)
# Draws text.
draw_img.text(xy=text_offset, text=text, font=font, fill=font_color)
draw_msk.text(xy=text_offset, text=text, font=font, fill=255)
if draw_text_border or crop_text_area:
#text_size = font.getsize(text) # (width, height). This is erroneous for multiline text.
text_size = draw_img.textsize(text, font=font) # (width, height).
font_offset = font.getoffset(text) # (x, y).
text_rect = (text_offset[0], text_offset[1], text_offset[0] + text_size[0] + font_offset[0], text_offset[1] + text_size[1] + font_offset[1])
# Draws a rectangle surrounding text.
if draw_text_border:
draw_img.rectangle(text_rect, outline='red', width=5)
# Crops text area.
if crop_text_area:
img = img.crop(text_rect)
msk = msk.crop(text_rect)
return img, msk
def generate_per_character_text_image(text, font_type, font_index, font_size, font_color, bg_color, image_size=None, text_offset=None, crop_text_area=True, draw_text_border=False, char_space_ratio=None, mode='RGB'):
num_chars, num_newlines = len(text), text.count('\n')
if image_size is None:
image_size = (math.ceil(num_chars * font_size * char_space_ratio * 1.1), math.ceil((num_newlines + 1) * font_size * 1.1))
if text_offset is None:
text_offset = (0, 0)
# TODO [improve] >> Other color modes have to be supported.
if 'L' == mode or '1' == mode:
image_depth = 1
elif 'RGBA' == mode:
image_depth = 4
else:
image_depth = 3
if bg_color is None:
#bg_color = (random.randrange(256),) * image_depth # Uses a random grayscale background color.
bg_color = tuple(random.randrange(256) for _ in range(image_depth)) # Uses a random background color.
font = ImageFont.truetype(font=font_type, size=font_size, index=font_index)
img = Image.new(mode=mode, size=image_size, color=bg_color)
#img = Image.new(mode='RGB', size=image_size, color=bg_color)
#img = Image.new(mode='RGBA', size=image_size, color=bg_color)
#img = Image.new(mode='L', size=image_size, color=bg_color)
#img = Image.new(mode='1', size=image_size, color=bg_color)
draw = ImageDraw.Draw(img)
# Draws text.
char_offset = list(text_offset)
char_space = math.ceil(font_size * char_space_ratio)
if font_color is None:
for ch in text:
if '\n' == ch:
char_offset[0] = text_offset[0]
char_offset[1] += font_size
continue
draw.text(xy=char_offset, text=ch, font=font, fill=tuple(random.randrange(256) for _ in range(image_depth))) # Random font color.
char_offset[0] += char_space
#elif len(font_colors) == num_chars:
# for idx, (ch, fcolor) in enumerate(zip(text, font_colors)):
# char_offset[0] = text_offset[0] + char_space * idx
# draw.text(xy=char_offset, text=ch, font=font, fill=fcolor)
else:
for ch in text:
if '\n' == ch:
char_offset[0] = text_offset[0]
char_offset[1] += font_size
continue
draw.text(xy=char_offset, text=ch, font=font, fill=font_color)
char_offset[0] += char_space
if draw_text_border or crop_text_area:
#text_size = list(font.getsize(text)) # (width, height). This is erroneous for multiline text.
text_size = list(draw.textsize(text, font=font)) # (width, height).
if num_chars > 1:
max_chars_in_line = functools.reduce(lambda ll, line: max(ll, len(line)), text.splitlines(), 0)
#text_size[0] = char_space * (max_chars_in_line - 1) + font_size
text_size[0] = char_space * (max_chars_in_line - 1) + font.getsize(text[-1])[0]
text_size[1] = (num_newlines + 1) * font_size
font_offset = font.getoffset(text) # (x, y).
text_rect = (text_offset[0], text_offset[1], text_offset[0] + text_size[0] + font_offset[0], text_offset[1] + text_size[1] + font_offset[1])
# Draws a rectangle surrounding text.
if draw_text_border:
draw.rectangle(text_rect, outline='red', width=5)
# Crops text area.
if crop_text_area:
img = img.crop(text_rect)
return img
def generate_per_character_text_image_and_mask(text, font_type, font_index, font_size, font_color, bg_color, image_size=None, text_offset=None, crop_text_area=True, draw_text_border=False, char_space_ratio=None, mode='RGB', mask_mode='1'):
num_chars, num_newlines = len(text), text.count('\n')
if image_size is None:
image_size = (math.ceil(num_chars * font_size * char_space_ratio * 1.1), math.ceil((num_newlines + 1) * font_size * 1.1))
if text_offset is None:
text_offset = (0, 0)
# TODO [improve] >> Other color modes have to be supported.
if 'L' == mode or '1' == mode:
image_depth = 1
elif 'RGBA' == mode:
image_depth = 4
else:
image_depth = 3
if bg_color is None:
#bg_color = (random.randrange(256),) * image_depth # Uses a random grayscale background color.
bg_color = tuple(random.randrange(256) for _ in range(image_depth)) # Uses a random background color.
font = ImageFont.truetype(font=font_type, size=font_size, index=font_index)
img = Image.new(mode=mode, size=image_size, color=bg_color)
#img = Image.new(mode='RGB', size=image_size, color=bg_color)
#img = Image.new(mode='RGBA', size=image_size, color=bg_color)
#img = Image.new(mode='L', size=image_size, color=bg_color)
#img = Image.new(mode='1', size=image_size, color=bg_color)
draw_img = ImageDraw.Draw(img)
msk = Image.new(mode=mask_mode, size=image_size, color=0)
#msk = Image.new(mode='1', size=image_size, color=0) # {0, 1}, bool.
#msk = Image.new(mode='L', size=image_size, color=0) # [0, 255], uint8.
draw_msk = ImageDraw.Draw(msk)
# Draws text.
char_offset = list(text_offset)
char_space = math.ceil(font_size * char_space_ratio)
if font_color is None:
for ch in text:
if '\n' == ch:
char_offset[0] = text_offset[0]
char_offset[1] += font_size
continue
draw_img.text(xy=char_offset, text=ch, font=font, fill=tuple(random.randrange(256) for _ in range(image_depth))) # Random font color.
draw_msk.text(xy=char_offset, text=ch, font=font, fill=255)
char_offset[0] += char_space
#elif len(font_colors) == num_chars:
# for idx, (ch, fcolor) in enumerate(zip(text, font_colors)):
# char_offset[0] = text_offset[0] + char_space * idx
# draw_img.text(xy=char_offset, text=ch, font=font, fill=fcolor)
# draw_msk.text(xy=char_offset, text=ch, font=font, fill=255)
else:
for ch in text:
if '\n' == ch:
char_offset[0] = text_offset[0]
char_offset[1] += font_size
continue
draw_img.text(xy=char_offset, text=ch, font=font, fill=font_color)
draw_msk.text(xy=char_offset, text=ch, font=font, fill=255)
char_offset[0] += char_space
if draw_text_border or crop_text_area:
#text_size = list(font.getsize(text)) # (width, height). This is erroneous for multiline text.
text_size = list(draw_img.textsize(text, font=font)) # (width, height).
if num_chars > 1:
max_chars_in_line = functools.reduce(lambda ll, line: max(ll, len(line)), text.splitlines(), 0)
#text_size[0] = char_space * (max_chars_in_line - 1) + font_size
text_size[0] = char_space * (max_chars_in_line - 1) + font.getsize(text[-1])[0]
text_size[1] = (num_newlines + 1) * font_size
font_offset = font.getoffset(text) # (x, y).
text_rect = (text_offset[0], text_offset[1], text_offset[0] + text_size[0] + font_offset[0], text_offset[1] + text_size[1] + font_offset[1])
# Draws a rectangle surrounding text.
if draw_text_border:
draw_img.rectangle(text_rect, outline='red', width=5)
# Crops text area.
if crop_text_area:
img = img.crop(text_rect)
msk = msk.crop(text_rect)
return img, msk
def generate_text_mask_and_distribution(text, font, rotation_angle=None):
import scipy.stats
text_size = font.getsize(text) # (width, height). This is erroneous for multiline text.
#text_size = (math.ceil(len(text) * font_size * 1.1), math.ceil((text.count('\n') + 1) * font_size * 1.1))
# Draw a distribution of character centers.
text_pil = Image.new('L', text_size, 0)
text_draw = ImageDraw.Draw(text_pil)
text_draw.text(xy=(0, 0), text=text, font=font, fill=255)
x, y = np.mgrid[0:text_pil.size[0], 0:text_pil.size[1]]
#x, y = np.mgrid[0:text_pil.size[0]:0.5, 0:text_pil.size[1]:0.5]
pos = np.dstack((x, y))
text_pdf = np.zeros(x.shape, dtype=np.float32)
offset = [0, 0]
for ch in text:
#char_size = font.getsize(ch) # (width, height). This is erroneous for multiline text.
char_size = text_draw.textsize(ch, font=font) # (width, height).
font_offset = font.getoffset(ch) # (x, y).
char_rect = (offset[0], offset[1], offset[0] + char_size[0] + font_offset[0], offset[1] + char_size[1] + font_offset[1])
if not ch.isspace():
# TODO [decide] >> Which one is better?
pts = cv2.findNonZero(np.array(text_pil)[char_rect[1]:char_rect[3],char_rect[0]:char_rect[2]])
if pts is not None:
pts += offset
center, axis, angle = cv2.minAreaRect(pts)
angle = math.radians(angle)
"""
try:
pts = np.squeeze(pts, axis=1)
center = np.mean(pts, axis=0)
size = np.max(pts, axis=0) - np.min(pts, axis=0)
pts = pts - center # Centering.
u, s, vh = np.linalg.svd(pts, full_matrices=True)
center = center + offset
#axis = s * max(size) / max(s)
axis = s * math.sqrt((size[0] * size[0] + size[1] * size[1]) / (s[0] * s[0] + s[1] * s[1]))
angle = math.atan2(vh[0,1], vh[0,0])
except np.linalg.LinAlgError:
print('np.linalg.LinAlgError raised.')
raise
"""
cos_theta, sin_theta = math.cos(angle), math.sin(angle)
R = np.array([[cos_theta, -sin_theta], [sin_theta, cos_theta]])
# TODO [decide] >> Which one is better?
#cov = np.diag(np.array(axis)) # 1 * sigma.
cov = np.diag(np.array(axis) * 2) # 2 * sigma.
cov = np.matmul(R, np.matmul(cov, R.T))
try:
char_pdf = scipy.stats.multivariate_normal(center, cov, allow_singular=False).pdf(pos)
#char_pdf = scipy.stats.multivariate_normal(center, cov, allow_singular=True).pdf(pos)
# TODO [decide] >>
char_pdf /= np.max(char_pdf)
text_pdf = np.where(text_pdf >= char_pdf, text_pdf, char_pdf)
#text_pdf += char_pdf
except np.linalg.LinAlgError:
print('[SWL] Warning: Singular covariance, {} of {}.'.format(ch, text))
else:
print('[SWL] Warning: No non-zero point in {} of {}.'.format(ch, text))
offset[0] += char_size[0] + font_offset[0]
#text_pdf /= np.sum(text_pdf) # sum(text_pdf) = 1.
#text_pdf /= np.max(text_pdf)
text_pdf_pil = Image.fromarray(text_pdf.T)
text_mask_pil = Image.new('L', text_size, 0)
text_mask_draw = ImageDraw.Draw(text_mask_pil)
text_mask_draw.text(xy=(0, 0), text=text, font=font, fill=255)
if rotation_angle is not None:
# Rotates the image around the top-left corner point.
text_mask_pil = text_mask_pil.rotate(rotation_angle, expand=1)
text_pdf_pil = text_pdf_pil.rotate(rotation_angle, expand=1)
return np.asarray(text_mask_pil), np.asarray(text_pdf_pil)
#--------------------------------------------------------------------
def draw_text_on_image(img, text, font_type, font_index, font_size, font_color, text_offset=(0, 0), rotation_angle=None):
font = ImageFont.truetype(font=font_type, size=font_size, index=font_index)
text_size = font.getsize(text) # (width, height).
#text_size = draw.textsize(text, font=font) # (width, height).
font_offset = font.getoffset(text) # (x, y).
text_rect = (text_offset[0], text_offset[1], text_offset[0] + text_size[0] + font_offset[0], text_offset[1] + text_size[1] + font_offset[1])
bg_img = Image.fromarray(img)
# Draws text.
if rotation_angle is None:
bg_draw = ImageDraw.Draw(bg_img)
bg_draw.text(xy=text_offset, text=text, font=font, fill=font_color)
text_mask = Image.new('L', bg_img.size, (0,))
mask_draw = ImageDraw.Draw(text_mask)
mask_draw.text(xy=text_offset, text=text, font=font, fill=(255,))
x1, y1, x2, y2 = text_rect
text_bbox = [[x1, y1], [x2, y1], [x2, y2], [x1, y2]]
else:
#text_img = Image.new('RGBA', text_size, (0, 0, 0, 0))
text_img = Image.new('RGBA', text_size, (255, 255, 255, 0))
sx0, sy0 = text_img.size
text_draw = ImageDraw.Draw(text_img)
text_draw.text(xy=(0, 0), text=text, font=font, fill=font_color)
text_img = text_img.rotate(rotation_angle, expand=1) # Rotates the image around the top-left corner point.
sx, sy = text_img.size
bg_img.paste(text_img, (text_offset[0], text_offset[1], text_offset[0] + sx, text_offset[1] + sy), text_img)
text_mask = Image.new('L', bg_img.size, (0,))
text_mask.paste(text_img, (text_offset[0], text_offset[1], text_offset[0] + sx, text_offset[1] + sy), text_img)
dx, dy = (sx0 - sx) / 2, (sy0 - sy) / 2
x1, y1, x2, y2 = text_rect
rect = (((x1 + x2) / 2, (y1 + y2) / 2), (x2 - x1, y2 - y1), -rotation_angle)
text_bbox = cv2.boxPoints(rect)
text_bbox = list(map(lambda xy: [xy[0] - dx, xy[1] - dy], text_bbox))
img = np.asarray(bg_img, dtype=img.dtype)
text_mask = np.asarray(text_mask, dtype=np.uint8)
return img, text_mask, text_bbox
def transform_text(text, tx, ty, rotation_angle, font, text_offset=None):
cos_angle, sin_angle = math.cos(math.radians(rotation_angle)), math.sin(math.radians(rotation_angle))
def transform(x, z):
return int(round(x * cos_angle - z * sin_angle)) + tx, int(round(x * sin_angle + z * cos_angle)) - ty
if text_offset is None:
text_offset = (0, 0) # The coordinates (x, y) before transformation.
text_size = font.getsize(text) # (width, height).
#text_size = draw.textsize(text, font=font) # (width, height).
font_offset = font.getoffset(text) # (x, y).
# z = -y.
# xy: left-handed, xz: right-handed.
x1, z1 = transform(text_offset[0], -text_offset[1])
x2, z2 = transform(text_offset[0] + text_size[0], -text_offset[1])
x3, z3 = transform(text_offset[0] + text_size[0], -(text_offset[1] + text_size[1]))
x4, z4 = transform(text_offset[0], -(text_offset[1] + text_size[1]))
xmin, zmax = min([x1, x2, x3, x4]), max([z1, z2, z3, z4])
##x0, y0 = xmin, -zmax
#text_bbox = np.array([[x1, -z1], [x2, -z2], [x3, -z3], [x4, -z4]])
dx, dy = xmin - tx, -zmax - ty
#x0, y0 = xmin - dx, -zmax - dy
text_bbox = np.array([[x1 - dx, -z1 - dy], [x2 - dx, -z2 - dy], [x3 - dx, -z3 - dy], [x4 - dx, -z4 - dy]])
return text_bbox
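# Illustrative sketch (added): the corner transform used inside transform_text()
# and the related functions above, pulled out as a standalone helper. The image
# y axis grows downward, so the code works in z = -y, rotates the point in the
# xz plane and applies the same (tx, ty) offsets before mapping back to (x, y).
def _rotate_text_corner_sketch(x, y, tx, ty, rotation_angle):
    cos_a, sin_a = math.cos(math.radians(rotation_angle)), math.sin(math.radians(rotation_angle))
    z = -y
    xr = int(round(x * cos_a - z * sin_a)) + tx
    zr = int(round(x * sin_a + z * cos_a)) - ty
    return xr, -zr # Back to image coordinates (x, y).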
def transform_text_on_image(text, tx, ty, rotation_angle, img, font, font_color, bg_color, text_offset=None):
cos_angle, sin_angle = math.cos(math.radians(rotation_angle)), math.sin(math.radians(rotation_angle))
def transform(x, z):
return int(round(x * cos_angle - z * sin_angle)) + tx, int(round(x * sin_angle + z * cos_angle)) - ty
if text_offset is None:
text_offset = (0, 0) # The coordinates (x, y) before transformation.
text_size = font.getsize(text) # (width, height).
#text_size = draw.textsize(text, font=font) # (width, height).
font_offset = font.getoffset(text) # (x, y).
# z = -y.
# xy: left-handed, xz: right-handed.
x1, z1 = transform(text_offset[0], -text_offset[1])
x2, z2 = transform(text_offset[0] + text_size[0], -text_offset[1])
x3, z3 = transform(text_offset[0] + text_size[0], -(text_offset[1] + text_size[1]))
x4, z4 = transform(text_offset[0], -(text_offset[1] + text_size[1]))
xmin, zmax = min([x1, x2, x3, x4]), max([z1, z2, z3, z4])
#x0, y0 = xmin, -zmax
#text_bbox = np.array([[x1, -z1], [x2, -z2], [x3, -z3], [x4, -z4]])
dx, dy = xmin - tx, -zmax - ty
x0, y0 = xmin - dx, -zmax - dy
text_bbox = np.array([[x1 - dx, -z1 - dy], [x2 - dx, -z2 - dy], [x3 - dx, -z3 - dy], [x4 - dx, -z4 - dy]])
#text_img = Image.new('RGBA', text_size, (0, 0, 0, 0))
text_img = Image.new('RGBA', text_size, (255, 255, 255, 0))
text_draw = ImageDraw.Draw(text_img)
text_draw.text(xy=(0, 0), text=text, font=font, fill=font_color)
text_img = text_img.rotate(rotation_angle, expand=1) # Rotates the image around the top-left corner point.
text_rect = (x0, y0, x0 + text_img.size[0], y0 + text_img.size[1])
bg_img = Image.fromarray(img)
bg_img.paste(text_img, text_rect, text_img)
text_mask = Image.new('L', bg_img.size, (0,))
text_mask.paste(text_img, text_rect, text_img)
img = np.asarray(bg_img, dtype=img.dtype)
text_mask = np.asarray(text_mask, dtype=np.uint8)
return text_bbox, img, text_mask
def transform_texts(texts, tx, ty, rotation_angle, font, text_offsets=None):
cos_angle, sin_angle = math.cos(math.radians(rotation_angle)), math.sin(math.radians(rotation_angle))
def transform(x, z):
return int(round(x * cos_angle - z * sin_angle)) + tx, int(round(x * sin_angle + z * cos_angle)) - ty
if text_offsets is None:
text_offsets, text_sizes = list(), list()
max_text_height = 0
for idx, text in enumerate(texts):
if 0 == idx:
text_offset = (0, 0) # The coordinates (x, y) before transformation.
else:
prev_texts = ' '.join(texts[:idx]) + ' '
text_size = font.getsize(prev_texts) # (width, height).
text_offset = (text_size[0], 0) # (x, y).
text_offsets.append(text_offset)
text_size = font.getsize(text) # (width, height).
#text_size = draw.textsize(text, font=font) # (width, height).
font_offset = font.getoffset(text) # (x, y).
sx, sy = text_size[0] + font_offset[0], text_size[1] + font_offset[1]
text_sizes.append((sx, sy))
if sy > max_text_height:
max_text_height = sy
tmp_text_offsets = list()
for offset, sz in zip(text_offsets, text_sizes):
dy = int(round((max_text_height - sz[1]) / 2))
tmp_text_offsets.append((offset[0], offset[1] + dy))
text_offsets = tmp_text_offsets
else:
if len(texts) != len(text_offsets):
print('[SWL] Error: Unmatched lengths of texts and text offsets {} != {}.'.format(len(texts), len(text_offsets)))
return None
text_sizes = list()
for text in texts:
text_size = font.getsize(text) # (width, height).
#text_size = draw.textsize(text, font=font) # (width, height).
font_offset = font.getoffset(text) # (x, y).
text_sizes.append((text_size[0] + font_offset[0], text_size[1] + font_offset[1]))
text_bboxes = list()
"""
for text, text_offset, text_size in zip(texts, text_offsets, text_sizes):
# z = -y.
# xy: left-handed, xz: right-handed.
x1, z1 = transform(text_offset[0], -text_offset[1])
x2, z2 = transform(text_offset[0] + text_size[0], -text_offset[1])
x3, z3 = transform(text_offset[0] + text_size[0], -(text_offset[1] + text_size[1]))
x4, z4 = transform(text_offset[0], -(text_offset[1] + text_size[1]))
xmin, zmax = min([x1, x2, x3, x4]), max([z1, z2, z3, z4])
#x0, y0 = xmin, -zmax
text_bboxes.append([[x1, -z1], [x2, -z2], [x3, -z3], [x4, -z4]])
return np.array(text_bboxes)
"""
xy0_list = list()
for text_offset, text_size in zip(text_offsets, text_sizes):
# z = -y.
# xy: left-handed, xz: right-handed.
x1, z1 = transform(text_offset[0], -text_offset[1])
x2, z2 = transform(text_offset[0] + text_size[0], -text_offset[1])
x3, z3 = transform(text_offset[0] + text_size[0], -(text_offset[1] + text_size[1]))
x4, z4 = transform(text_offset[0], -(text_offset[1] + text_size[1]))
xy0_list.append((min([x1, x2, x3, x4]), -max([z1, z2, z3, z4])))
text_bboxes.append([[x1, -z1], [x2, -z2], [x3, -z3], [x4, -z4]])
text_bboxes = np.array(text_bboxes)
dxy = functools.reduce(lambda xym, xy0: (min(xym[0], xy0[0] - tx), min(xym[1], xy0[1] - ty)), xy0_list, (0, 0))
text_bboxes[:,:] -= dxy
return text_bboxes
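#--------------------------------------------------------------------
# Illustrative usage sketch (not part of the original code): computing the
# rotated, origin-aligned bounding boxes of a list of words with
# transform_texts().  The font path is a placeholder assumption; substitute
# any TrueType font available on your system.
def _example_transform_texts_usage():
    from PIL import ImageFont
    font = ImageFont.truetype('/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf', 32)  # hypothetical path
    texts = ['Hello', 'World']
    # Bounding boxes after a 30 degree rotation, shifted so the top-left of
    # the whole string lies at (tx, ty) = (100, 50).
    text_bboxes = transform_texts(texts, 100, 50, 30, font)
    for text, bbox in zip(texts, text_bboxes):
        print('{}: {}'.format(text, bbox.tolist()))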
def transform_texts_on_image(texts, tx, ty, rotation_angle, img, font, font_color, bg_color, text_offsets=None):
cos_angle, sin_angle = math.cos(math.radians(rotation_angle)), math.sin(math.radians(rotation_angle))
def transform(x, z):
return int(round(x * cos_angle - z * sin_angle)) + tx, int(round(x * sin_angle + z * cos_angle)) - ty
if text_offsets is None:
text_offsets, text_sizes = list(), list()
max_text_height = 0
for idx, text in enumerate(texts):
if 0 == idx:
text_offset = (0, 0) # The coordinates (x, y) before transformation.
else:
prev_texts = ' '.join(texts[:idx]) + ' '
text_size = font.getsize(prev_texts) # (width, height).
text_offset = (text_size[0], 0) # (x, y).
text_offsets.append(text_offset)
text_size = font.getsize(text) # (width, height).
#text_size = draw.textsize(text, font=font) # (width, height).
font_offset = font.getoffset(text) # (x, y).
sx, sy = text_size[0] + font_offset[0], text_size[1] + font_offset[1]
text_sizes.append((sx, sy))
if sy > max_text_height:
max_text_height = sy
tmp_text_offsets = list()
for offset, sz in zip(text_offsets, text_sizes):
dy = int(round((max_text_height - sz[1]) / 2))
tmp_text_offsets.append((offset[0], offset[1] + dy))
text_offsets = tmp_text_offsets
else:
if len(texts) != len(text_offsets):
print('[SWL] Error: Unmatched lengths of texts and text offsets {} != {}.'.format(len(texts), len(text_offsets)))
return None, None, None
text_sizes = list()
for text in texts:
text_size = font.getsize(text) # (width, height).
#text_size = draw.textsize(text, font=font) # (width, height).
font_offset = font.getoffset(text) # (x, y).
text_sizes.append((text_size[0] + font_offset[0], text_size[1] + font_offset[1]))
bg_img = Image.fromarray(img)
text_mask = Image.new('L', bg_img.size, (0,))
text_bboxes = list()
"""
for text, text_offset, text_size in zip(texts, text_offsets, text_sizes):
# z = -y.
# xy: left-handed, xz: right-handed.
x1, z1 = transform(text_offset[0], -text_offset[1])
x2, z2 = transform(text_offset[0] + text_size[0], -text_offset[1])
x3, z3 = transform(text_offset[0] + text_size[0], -(text_offset[1] + text_size[1]))
x4, z4 = transform(text_offset[0], -(text_offset[1] + text_size[1]))
xmin, zmax = min([x1, x2, x3, x4]), max([z1, z2, z3, z4])
x0, y0 = xmin, -zmax
text_bboxes.append([[x1, -z1], [x2, -z2], [x3, -z3], [x4, -z4]])
#text_img = Image.new('RGBA', text_size, (0, 0, 0, 0))
text_img = Image.new('RGBA', text_size, (255, 255, 255, 0))
text_draw = ImageDraw.Draw(text_img)
text_draw.text(xy=(0, 0), text=text, font=font, fill=font_color)
text_img = text_img.rotate(rotation_angle, expand=1) # Rotates the image around the top-left corner point.
text_rect = (x0, y0, x0 + text_img.size[0], y0 + text_img.size[1])
bg_img.paste(text_img, text_rect, text_img)
text_mask.paste(text_img, text_rect, text_img)
"""
xy0_list = list()
for text_offset, text_size in zip(text_offsets, text_sizes):
# z = -y.
# xy: left-handed, xz: right-handed.
x1, z1 = transform(text_offset[0], -text_offset[1])
x2, z2 = transform(text_offset[0] + text_size[0], -text_offset[1])
x3, z3 = transform(text_offset[0] + text_size[0], -(text_offset[1] + text_size[1]))
x4, z4 = transform(text_offset[0], -(text_offset[1] + text_size[1]))
xy0_list.append((min([x1, x2, x3, x4]), -max([z1, z2, z3, z4])))
text_bboxes.append([[x1, -z1], [x2, -z2], [x3, -z3], [x4, -z4]])
text_bboxes = np.array(text_bboxes)
dxy = functools.reduce(lambda xym, xy0: (min(xym[0], xy0[0] - tx), min(xym[1], xy0[1] - ty)), xy0_list, (0, 0))
text_bboxes[:,:] -= dxy
for text, text_size, xy0 in zip(texts, text_sizes, xy0_list):
x0, y0 = xy0[0] - dxy[0], xy0[1] - dxy[1]
#text_img = Image.new('RGBA', text_size, (0, 0, 0, 0))
text_img = Image.new('RGBA', text_size, (255, 255, 255, 0))
text_draw = ImageDraw.Draw(text_img)
text_draw.text(xy=(0, 0), text=text, font=font, fill=font_color)
text_img = text_img.rotate(rotation_angle, expand=1) # Rotates the image around the top-left corner point.
text_rect = (x0, y0, x0 + text_img.size[0], y0 + text_img.size[1])
bg_img.paste(text_img, text_rect, text_img)
text_mask.paste(text_img, text_rect, text_img)
img = np.asarray(bg_img, dtype=img.dtype)
text_mask = np.asarray(text_mask, dtype=np.uint8)
return text_bboxes, img, text_mask
#--------------------------------------------------------------------
def draw_character_histogram(texts, charset=None):
if charset is None:
import string
if True:
charset = \
string.ascii_uppercase + \
string.ascii_lowercase + \
string.digits + \
string.punctuation + \
' '
else:
hangul_letter_filepath = '../../data/language_processing/hangul_ksx1001.txt'
#hangul_letter_filepath = '../../data/language_processing/hangul_ksx1001_1.txt'
#hangul_letter_filepath = '../../data/language_processing/hangul_unicode.txt'
with open(hangul_letter_filepath, 'r', encoding='UTF-8') as fd:
#hangeul_charset = fd.read().strip('\n') # A strings.
hangeul_charset = fd.read().replace(' ', '').replace('\n', '') # A string.
#hangeul_charset = fd.readlines() # A list of string.
#hangeul_charset = fd.read().splitlines() # A list of strings.
#hangeul_jamo_charset = 'ㄱㄴㄷㄹㅁㅂㅅㅇㅈㅊㅋㅌㅍㅎㅏㅐㅑㅒㅓㅔㅕㅖㅗㅛㅜㅠㅡㅣ'
#hangeul_jamo_charset = 'ㄱㄲㄳㄴㄵㄶㄷㄸㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅃㅄㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎㅏㅐㅑㅒㅓㅔㅕㅖㅗㅛㅜㅠㅡㅣ'
hangeul_jamo_charset = 'ㄱㄲㄳㄴㄵㄶㄷㄸㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅃㅄㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ'
charset = \
hangeul_charset + \
hangeul_jamo_charset + \
string.ascii_uppercase + \
string.ascii_lowercase + \
string.digits + \
string.punctuation + \
' '
charset = sorted(charset)
#charset = ''.join(sorted(charset))
#--------------------
char_dict = dict()
for ch in charset:
char_dict[ch] = 0
for txt in texts:
if not txt:
continue
for ch in txt:
try:
char_dict[ch] += 1
except KeyError:
print('[SWL] Warning: Invalid character, {} in {}.'.format(ch, txt))
#--------------------
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10, 6))
x_label = np.arange(len(char_dict.keys()))
plt.bar(x_label, list(char_dict.values()), align='center', alpha=0.5)
plt.xticks(x_label, list(char_dict.keys()))
plt.show()
fig.savefig('./character_frequency.png')
plt.close(fig)
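#--------------------------------------------------------------------
# Minimal usage sketch (illustrative only, not part of the original script):
# character-frequency histogram over a tiny in-memory corpus using the default
# Latin charset.  Writes ./character_frequency.png as the function does.
def _example_draw_character_histogram():
    sample_texts = ['Hello, world!', 'ABC abc 123', 'Text generation...']
    draw_character_histogram(sample_texts, charset=None)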
|
gpl-2.0
|
rohanp/scikit-learn
|
benchmarks/bench_plot_randomized_svd.py
|
12
|
17567
|
"""
Benchmarks on the power iterations phase in randomized SVD.
We test on various synthetic and real datasets the effect of increasing
the number of power iterations in terms of quality of approximation
and running time. A number greater than 0 should help with noisy matrices,
which are characterized by a slow spectral decay.
We test several policies for normalizing the power iterations. Normalization
is crucial to avoid numerical issues.
The quality of the approximation is measured by the spectral norm discrepancy
between the original input matrix and the reconstructed one (by multiplying
the randomized_svd's outputs). The spectral norm is equal to the
largest singular value of a matrix; (3) justifies this choice. However, one can
notice in these experiments that Frobenius and spectral norms behave
very similarly in a qualitative sense. Therefore, we suggest running these
benchmarks with `enable_spectral_norm = False`, as Frobenius' is MUCH faster to
compute.
The benchmarks follow.
(a) plot: time vs norm, varying number of power iterations
data: many datasets
goal: compare normalization policies and study how the number of power
iterations affect time and norm
(b) plot: n_iter vs norm, varying rank of data and number of components for
randomized_SVD
data: low-rank matrices on which we control the rank
goal: study whether the rank of the matrix and the number of components
extracted by randomized SVD affect "the optimal" number of power iterations
(c) plot: time vs norm, varying datasets
data: many datasets
goal: compare default configurations
We compare the following algorithms:
- randomized_svd(..., power_iteration_normalizer='none')
- randomized_svd(..., power_iteration_normalizer='LU')
- randomized_svd(..., power_iteration_normalizer='QR')
- randomized_svd(..., power_iteration_normalizer='auto')
- fbpca.pca() from https://github.com/facebook/fbpca (if installed)
Conclusion
----------
- n_iter=2 appears to be a good default value
- power_iteration_normalizer='none' is OK if n_iter is small, otherwise LU
gives similar errors to QR but is cheaper. That's what 'auto' implements.
References
----------
(1) Finding structure with randomness: Stochastic algorithms for constructing
approximate matrix decompositions
Halko, et al., 2009 http://arxiv.org/abs/arXiv:0909.4061
(2) A randomized algorithm for the decomposition of matrices
Per-Gunnar Martinsson, Vladimir Rokhlin and Mark Tygert
(3) An implementation of a randomized algorithm for principal component
analysis
A. Szlam et al. 2014
"""
# Author: Giorgio Patrini
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import gc
import pickle
from time import time
from collections import defaultdict
import os.path
from sklearn.utils import gen_batches
from sklearn.utils.validation import check_random_state
from sklearn.utils.extmath import randomized_svd
from sklearn.datasets.samples_generator import (make_low_rank_matrix,
make_sparse_uncorrelated)
from sklearn.datasets import (fetch_lfw_people,
fetch_mldata,
fetch_20newsgroups_vectorized,
fetch_olivetti_faces,
fetch_rcv1)
try:
import fbpca
fbpca_available = True
except ImportError:
fbpca_available = False
# If this is enabled, tests are much slower and will crash with the large data
enable_spectral_norm = False
# TODO: compute approximate spectral norms with the power method as in
# Estimating the largest eigenvalues by the power and Lanczos methods with
# a random start, Jacek Kuczynski and Henryk Wozniakowski, SIAM Journal on
# Matrix Analysis and Applications, 13 (4): 1094-1122, 1992.
# This approximation is a very fast estimate of the spectral norm, but depends
# on starting random vectors.
# Determine when to switch to batch computation for matrix norms,
# in case the reconstructed (dense) matrix is too large
MAX_MEMORY = np.int(2e9)
# The following datasets can be downloaded manually from:
# CIFAR 10: http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
# SVHN: http://ufldl.stanford.edu/housenumbers/train_32x32.mat
CIFAR_FOLDER = "./cifar-10-batches-py/"
SVHN_FOLDER = "./SVHN/"
datasets = ['low rank matrix', 'lfw_people', 'olivetti_faces', '20newsgroups',
'MNIST original', 'CIFAR', 'a1a', 'SVHN', 'uncorrelated matrix']
big_sparse_datasets = ['big sparse matrix', 'rcv1']
def unpickle(file):
fo = open(file, 'rb')
dict = pickle.load(fo, encoding='latin1')
fo.close()
return dict['data']
def handle_missing_dataset(file_folder):
if not os.path.isdir(file_folder):
print("%s file folder not found. Test skipped." % file_folder)
return 0
def get_data(dataset_name):
print("Getting dataset: %s" % dataset_name)
if dataset_name == 'lfw_people':
X = fetch_lfw_people().data
elif dataset_name == '20newsgroups':
X = fetch_20newsgroups_vectorized().data[:, :100000]
elif dataset_name == 'olivetti_faces':
X = fetch_olivetti_faces().data
elif dataset_name == 'rcv1':
X = fetch_rcv1().data
elif dataset_name == 'CIFAR':
if handle_missing_dataset(CIFAR_FOLDER) == 0:
return
X1 = [unpickle("%sdata_batch_%d" % (CIFAR_FOLDER, i + 1))
for i in range(5)]
X = np.vstack(X1)
del X1
elif dataset_name == 'SVHN':
if handle_missing_dataset(SVHN_FOLDER) == 0:
return
X1 = sp.io.loadmat("%strain_32x32.mat" % SVHN_FOLDER)['X']
X2 = [X1[:, :, :, i].reshape(32 * 32 * 3) for i in range(X1.shape[3])]
X = np.vstack(X2)
del X1
del X2
elif dataset_name == 'low rank matrix':
X = make_low_rank_matrix(n_samples=500, n_features=np.int(1e4),
effective_rank=100, tail_strength=.5,
random_state=random_state)
elif dataset_name == 'uncorrelated matrix':
X, _ = make_sparse_uncorrelated(n_samples=500, n_features=10000,
random_state=random_state)
elif dataset_name == 'big sparse matrix':
sparsity = np.int(1e6)
size = np.int(1e6)
small_size = np.int(1e4)
data = np.random.normal(0, 1, np.int(sparsity/10))
data = np.repeat(data, 10)
row = np.random.uniform(0, small_size, sparsity)
col = np.random.uniform(0, small_size, sparsity)
X = sp.sparse.csr_matrix((data, (row, col)), shape=(size, small_size))
del data
del row
del col
else:
X = fetch_mldata(dataset_name).data
return X
def plot_time_vs_s(time, norm, point_labels, title):
plt.figure()
colors = ['g', 'b', 'y']
for i, l in enumerate(sorted(norm.keys())):
if l is not "fbpca":
plt.plot(time[l], norm[l], label=l, marker='o', c=colors.pop())
else:
plt.plot(time[l], norm[l], label=l, marker='^', c='red')
for label, x, y in zip(point_labels, list(time[l]), list(norm[l])):
plt.annotate(label, xy=(x, y), xytext=(0, -20),
textcoords='offset points', ha='right', va='bottom')
plt.legend(loc="upper right")
plt.suptitle(title)
plt.ylabel("norm discrepancy")
plt.xlabel("running time [s]")
def scatter_time_vs_s(time, norm, point_labels, title):
plt.figure()
size = 100
for i, l in enumerate(sorted(norm.keys())):
if l is not "fbpca":
plt.scatter(time[l], norm[l], label=l, marker='o', c='b', s=size)
for label, x, y in zip(point_labels, list(time[l]), list(norm[l])):
plt.annotate(label, xy=(x, y), xytext=(0, -80),
textcoords='offset points', ha='right',
arrowprops=dict(arrowstyle="->",
connectionstyle="arc3"),
va='bottom', size=11, rotation=90)
else:
plt.scatter(time[l], norm[l], label=l, marker='^', c='red', s=size)
for label, x, y in zip(point_labels, list(time[l]), list(norm[l])):
plt.annotate(label, xy=(x, y), xytext=(0, 30),
textcoords='offset points', ha='right',
arrowprops=dict(arrowstyle="->",
connectionstyle="arc3"),
va='bottom', size=11, rotation=90)
plt.legend(loc="best")
plt.suptitle(title)
plt.ylabel("norm discrepancy")
plt.xlabel("running time [s]")
def plot_power_iter_vs_s(power_iter, s, title):
plt.figure()
for l in sorted(s.keys()):
plt.plot(power_iter, s[l], label=l, marker='o')
plt.legend(loc="lower right", prop={'size': 10})
plt.suptitle(title)
plt.ylabel("norm discrepancy")
plt.xlabel("n_iter")
def svd_timing(X, n_comps, n_iter, n_oversamples,
power_iteration_normalizer='auto', method=None):
"""
Measure time for decomposition
"""
print("... running SVD ...")
if method != 'fbpca':
gc.collect()
t0 = time()
U, mu, V = randomized_svd(X, n_comps, n_oversamples, n_iter,
power_iteration_normalizer,
random_state=random_state, transpose=False)
call_time = time() - t0
else:
gc.collect()
t0 = time()
# There is a different convention for l here
U, mu, V = fbpca.pca(X, n_comps, raw=True, n_iter=n_iter,
l=n_oversamples+n_comps)
call_time = time() - t0
return U, mu, V, call_time
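# Illustrative sketch (not part of the benchmark): a direct look at how
# `n_iter` and `power_iteration_normalizer` affect the reconstruction error of
# randomized_svd on a small noisy low-rank matrix.  The sizes and settings
# below are arbitrary choices for demonstration only.
def _example_power_iteration_effect():
    X = make_low_rank_matrix(n_samples=200, n_features=300,
                             effective_rank=10, tail_strength=.5,
                             random_state=0)
    for n_iter in (0, 2, 5):
        U, s, V = randomized_svd(X, 10, n_iter=n_iter,
                                 power_iteration_normalizer='LU',
                                 random_state=0)
        err = np.linalg.norm(X - U.dot(np.diag(s)).dot(V), ord='fro')
        print("n_iter=%d  Frobenius reconstruction error: %.4f" % (n_iter, err))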
def norm_diff(A, norm=2, msg=True):
"""
Compute the norm diff with the original matrix, when randomized
SVD is called with *params.
norm: 2 => spectral; 'fro' => Frobenius
"""
if msg:
print("... computing %s norm ..." % norm)
if norm == 2:
# s = sp.linalg.norm(A, ord=2) # slow
value = sp.sparse.linalg.svds(A, k=1, return_singular_vectors=False)
else:
if sp.sparse.issparse(A):
value = sp.sparse.linalg.norm(A, ord=norm)
else:
value = sp.linalg.norm(A, ord=norm)
return value
def scalable_frobenius_norm_discrepancy(X, U, s, V):
# if the input is not too big, just call scipy
if X.shape[0] * X.shape[1] < MAX_MEMORY:
A = X - U.dot(np.diag(s).dot(V))
return norm_diff(A, norm='fro')
print("... computing fro norm by batches...")
batch_size = 1000
Vhat = np.diag(s).dot(V)
cum_norm = .0
for batch in gen_batches(X.shape[0], batch_size):
M = X[batch, :] - U[batch, :].dot(Vhat)
cum_norm += norm_diff(M, norm='fro', msg=False) ** 2
return np.sqrt(cum_norm)
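# Sanity sketch (illustrative, not part of the benchmark): the batched path
# above relies on ||A||_F**2 being the sum of the squared Frobenius norms of
# its row blocks, hence the squaring before the final square root.
def _example_batched_frobenius_identity():
    rng = np.random.RandomState(0)
    A = rng.randn(10, 4)
    full = np.linalg.norm(A, ord='fro')
    batched = np.sqrt(sum(np.linalg.norm(A[i:i + 2], ord='fro') ** 2
                          for i in range(0, 10, 2)))
    assert np.allclose(full, batched)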
def bench_a(X, dataset_name, power_iter, n_oversamples, n_comps):
all_time = defaultdict(list)
if enable_spectral_norm:
all_spectral = defaultdict(list)
X_spectral_norm = norm_diff(X, norm=2, msg=False)
all_frobenius = defaultdict(list)
X_fro_norm = norm_diff(X, norm='fro', msg=False)
for pi in power_iter:
for pm in ['none', 'LU', 'QR']:
print("n_iter = %d on sklearn - %s" % (pi, pm))
U, s, V, time = svd_timing(X, n_comps, n_iter=pi,
power_iteration_normalizer=pm,
n_oversamples=n_oversamples)
label = "sklearn - %s" % pm
all_time[label].append(time)
if enable_spectral_norm:
A = U.dot(np.diag(s).dot(V))
all_spectral[label].append(norm_diff(X - A, norm=2) /
X_spectral_norm)
f = scalable_frobenius_norm_discrepancy(X, U, s, V)
all_frobenius[label].append(f / X_fro_norm)
if fbpca_available:
print("n_iter = %d on fbca" % (pi))
U, s, V, time = svd_timing(X, n_comps, n_iter=pi,
power_iteration_normalizer=pm,
n_oversamples=n_oversamples,
method='fbpca')
label = "fbpca"
all_time[label].append(time)
if enable_spectral_norm:
A = U.dot(np.diag(s).dot(V))
all_spectral[label].append(norm_diff(X - A, norm=2) /
X_spectral_norm)
f = scalable_frobenius_norm_discrepancy(X, U, s, V)
all_frobenius[label].append(f / X_fro_norm)
if enable_spectral_norm:
title = "%s: spectral norm diff vs running time" % (dataset_name)
plot_time_vs_s(all_time, all_spectral, power_iter, title)
title = "%s: Frobenius norm diff vs running time" % (dataset_name)
plot_time_vs_s(all_time, all_frobenius, power_iter, title)
def bench_b(power_list):
n_samples, n_features = 1000, 10000
data_params = {'n_samples': n_samples, 'n_features': n_features,
'tail_strength': .7, 'random_state': random_state}
dataset_name = "low rank matrix %d x %d" % (n_samples, n_features)
ranks = [10, 50, 100]
if enable_spectral_norm:
all_spectral = defaultdict(list)
all_frobenius = defaultdict(list)
for rank in ranks:
X = make_low_rank_matrix(effective_rank=rank, **data_params)
if enable_spectral_norm:
X_spectral_norm = norm_diff(X, norm=2, msg=False)
X_fro_norm = norm_diff(X, norm='fro', msg=False)
for n_comp in [np.int(rank/2), rank, rank*2]:
label = "rank=%d, n_comp=%d" % (rank, n_comp)
print(label)
for pi in power_list:
U, s, V, _ = svd_timing(X, n_comp, n_iter=pi, n_oversamples=2,
power_iteration_normalizer='LU')
if enable_spectral_norm:
A = U.dot(np.diag(s).dot(V))
all_spectral[label].append(norm_diff(X - A, norm=2) /
X_spectral_norm)
f = scalable_frobenius_norm_discrepancy(X, U, s, V)
all_frobenius[label].append(f / X_fro_norm)
if enable_spectral_norm:
title = "%s: spectral norm diff vs n power iteration" % (dataset_name)
plot_power_iter_vs_s(power_list, all_spectral, title)
title = "%s: frobenius norm diff vs n power iteration" % (dataset_name)
plot_power_iter_vs_s(power_list, all_frobenius, title)
def bench_c(datasets, n_comps):
all_time = defaultdict(list)
if enable_spectral_norm:
all_spectral = defaultdict(list)
all_frobenius = defaultdict(list)
for dataset_name in datasets:
X = get_data(dataset_name)
if X is None:
continue
if enable_spectral_norm:
X_spectral_norm = norm_diff(X, norm=2, msg=False)
X_fro_norm = norm_diff(X, norm='fro', msg=False)
n_comps = np.minimum(n_comps, np.min(X.shape))
label = "sklearn"
print("%s %d x %d - %s" %
(dataset_name, X.shape[0], X.shape[1], label))
U, s, V, time = svd_timing(X, n_comps, n_iter=2, n_oversamples=10,
method=label)
all_time[label].append(time)
if enable_spectral_norm:
A = U.dot(np.diag(s).dot(V))
all_spectral[label].append(norm_diff(X - A, norm=2) /
X_spectral_norm)
f = scalable_frobenius_norm_discrepancy(X, U, s, V)
all_frobenius[label].append(f / X_fro_norm)
if fbpca_available:
label = "fbpca"
print("%s %d x %d - %s" %
(dataset_name, X.shape[0], X.shape[1], label))
U, s, V, time = svd_timing(X, n_comps, n_iter=2, n_oversamples=2,
method=label)
all_time[label].append(time)
if enable_spectral_norm:
A = U.dot(np.diag(s).dot(V))
all_spectral[label].append(norm_diff(X - A, norm=2) /
X_spectral_norm)
f = scalable_frobenius_norm_discrepancy(X, U, s, V)
all_frobenius[label].append(f / X_fro_norm)
if len(all_time) == 0:
raise ValueError("No tests ran. Aborting.")
if enable_spectral_norm:
title = "normalized spectral norm diff vs running time"
scatter_time_vs_s(all_time, all_spectral, datasets, title)
title = "normalized Frobenius norm diff vs running time"
scatter_time_vs_s(all_time, all_frobenius, datasets, title)
if __name__ == '__main__':
random_state = check_random_state(1234)
power_iter = np.linspace(0, 6, 7, dtype=int)
n_comps = 50
for dataset_name in datasets:
X = get_data(dataset_name)
if X is None:
continue
print(" >>>>>> Benching sklearn and fbpca on %s %d x %d" %
(dataset_name, X.shape[0], X.shape[1]))
bench_a(X, dataset_name, power_iter, n_oversamples=2,
n_comps=np.minimum(n_comps, np.min(X.shape)))
print(" >>>>>> Benching on simulated low rank matrix with variable rank")
bench_b(power_iter)
print(" >>>>>> Benching sklearn and fbpca default configurations")
bench_c(datasets + big_sparse_datasets, n_comps)
plt.show()
|
bsd-3-clause
|
chrisburr/scikit-learn
|
sklearn/metrics/regression.py
|
9
|
17386
|
"""Metrics to assess performance on regression task
Functions named as ``*_score`` return a scalar value to maximize: the higher
the better
Functions named as ``*_error`` or ``*_loss`` return a scalar value to minimize:
the lower the better
"""
# Authors: Alexandre Gramfort <[email protected]>
# Mathieu Blondel <[email protected]>
# Olivier Grisel <[email protected]>
# Arnaud Joly <[email protected]>
# Jochen Wersdorfer <[email protected]>
# Lars Buitinck <[email protected]>
# Joel Nothman <[email protected]>
# Noel Dawe <[email protected]>
# Manoj Kumar <[email protected]>
# Michael Eickenberg <[email protected]>
# Konstantin Shmelkov <[email protected]>
# License: BSD 3 clause
from __future__ import division
import numpy as np
from ..utils.validation import check_array, check_consistent_length
from ..utils.validation import column_or_1d
from ..externals.six import string_types
import warnings
__ALL__ = [
"mean_absolute_error",
"mean_squared_error",
"median_absolute_error",
"r2_score",
"explained_variance_score"
]
def _check_reg_targets(y_true, y_pred, multioutput):
"""Check that y_true and y_pred belong to the same regression task
Parameters
----------
y_true : array-like,
y_pred : array-like,
multioutput : array-like or string in ['raw_values', 'uniform_average',
'variance_weighted'] or None
None is accepted due to backward compatibility of r2_score().
Returns
-------
type_true : one of {'continuous', 'continuous-multioutput'}
The type of the true target data, as output by
'utils.multiclass.type_of_target'
y_true : array-like of shape = (n_samples, n_outputs)
Ground truth (correct) target values.
y_pred : array-like of shape = (n_samples, n_outputs)
Estimated target values.
multioutput : array-like of shape = (n_outputs) or string in ['raw_values',
'uniform_average', 'variance_weighted'] or None
Custom output weights if ``multioutput`` is array-like or
just the corresponding argument if ``multioutput`` is a
correct keyword.
"""
check_consistent_length(y_true, y_pred)
y_true = check_array(y_true, ensure_2d=False)
y_pred = check_array(y_pred, ensure_2d=False)
if y_true.ndim == 1:
y_true = y_true.reshape((-1, 1))
if y_pred.ndim == 1:
y_pred = y_pred.reshape((-1, 1))
if y_true.shape[1] != y_pred.shape[1]:
raise ValueError("y_true and y_pred have different number of output "
"({0}!={1})".format(y_true.shape[1], y_pred.shape[1]))
n_outputs = y_true.shape[1]
multioutput_options = (None, 'raw_values', 'uniform_average',
'variance_weighted')
if multioutput not in multioutput_options:
multioutput = check_array(multioutput, ensure_2d=False)
if n_outputs == 1:
raise ValueError("Custom weights are useful only in "
"multi-output cases.")
elif n_outputs != len(multioutput):
raise ValueError(("There must be equally many custom weights "
"(%d) as outputs (%d).") %
(len(multioutput), n_outputs))
y_type = 'continuous' if n_outputs == 1 else 'continuous-multioutput'
return y_type, y_true, y_pred, multioutput
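# Illustrative sketch (not part of the public API): _check_reg_targets
# reshapes 1d inputs to shape (n_samples, 1) and validates the ``multioutput``
# argument.  The values below are arbitrary.
def _example_check_reg_targets():
    y_type, y_true, y_pred, multioutput = _check_reg_targets(
        [3, -0.5, 2, 7], [2.5, 0.0, 2, 8], 'uniform_average')
    # y_type == 'continuous'; y_true.shape == y_pred.shape == (4, 1)
    return y_type, y_true.shape, y_pred.shape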
def mean_absolute_error(y_true, y_pred,
sample_weight=None,
multioutput='uniform_average'):
"""Mean absolute error regression loss
Read more in the :ref:`User Guide <mean_absolute_error>`.
Parameters
----------
y_true : array-like of shape = (n_samples) or (n_samples, n_outputs)
Ground truth (correct) target values.
y_pred : array-like of shape = (n_samples) or (n_samples, n_outputs)
Estimated target values.
sample_weight : array-like of shape = (n_samples), optional
Sample weights.
multioutput : string in ['raw_values', 'uniform_average']
or array-like of shape (n_outputs)
Defines aggregating of multiple output values.
Array-like value defines weights used to average errors.
'raw_values' :
Returns a full set of errors in case of multioutput input.
'uniform_average' :
Errors of all outputs are averaged with uniform weight.
Returns
-------
loss : float or ndarray of floats
If multioutput is 'raw_values', then mean absolute error is returned
for each output separately.
If multioutput is 'uniform_average' or an ndarray of weights, then the
weighted average of all output errors is returned.
MAE output is non-negative floating point. The best value is 0.0.
Examples
--------
>>> from sklearn.metrics import mean_absolute_error
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> mean_absolute_error(y_true, y_pred)
0.5
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> mean_absolute_error(y_true, y_pred)
0.75
>>> mean_absolute_error(y_true, y_pred, multioutput='raw_values')
array([ 0.5, 1. ])
>>> mean_absolute_error(y_true, y_pred, multioutput=[0.3, 0.7])
... # doctest: +ELLIPSIS
0.849...
"""
y_type, y_true, y_pred, multioutput = _check_reg_targets(
y_true, y_pred, multioutput)
output_errors = np.average(np.abs(y_pred - y_true),
weights=sample_weight, axis=0)
if isinstance(multioutput, string_types):
if multioutput == 'raw_values':
return output_errors
elif multioutput == 'uniform_average':
# pass None as weights to np.average: uniform mean
multioutput = None
return np.average(output_errors, weights=multioutput)
def mean_squared_error(y_true, y_pred,
sample_weight=None,
multioutput='uniform_average'):
"""Mean squared error regression loss
Read more in the :ref:`User Guide <mean_squared_error>`.
Parameters
----------
y_true : array-like of shape = (n_samples) or (n_samples, n_outputs)
Ground truth (correct) target values.
y_pred : array-like of shape = (n_samples) or (n_samples, n_outputs)
Estimated target values.
sample_weight : array-like of shape = (n_samples), optional
Sample weights.
multioutput : string in ['raw_values', 'uniform_average']
or array-like of shape (n_outputs)
Defines aggregating of multiple output values.
Array-like value defines weights used to average errors.
'raw_values' :
Returns a full set of errors in case of multioutput input.
'uniform_average' :
Errors of all outputs are averaged with uniform weight.
Returns
-------
loss : float or ndarray of floats
A non-negative floating point value (the best value is 0.0), or an
array of floating point values, one for each individual target.
Examples
--------
>>> from sklearn.metrics import mean_squared_error
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> mean_squared_error(y_true, y_pred)
0.375
>>> y_true = [[0.5, 1],[-1, 1],[7, -6]]
>>> y_pred = [[0, 2],[-1, 2],[8, -5]]
>>> mean_squared_error(y_true, y_pred) # doctest: +ELLIPSIS
0.708...
>>> mean_squared_error(y_true, y_pred, multioutput='raw_values')
... # doctest: +ELLIPSIS
array([ 0.416..., 1. ])
>>> mean_squared_error(y_true, y_pred, multioutput=[0.3, 0.7])
... # doctest: +ELLIPSIS
0.824...
"""
y_type, y_true, y_pred, multioutput = _check_reg_targets(
y_true, y_pred, multioutput)
output_errors = np.average((y_true - y_pred) ** 2, axis=0,
weights=sample_weight)
if isinstance(multioutput, string_types):
if multioutput == 'raw_values':
return output_errors
elif multioutput == 'uniform_average':
# pass None as weights to np.average: uniform mean
multioutput = None
return np.average(output_errors, weights=multioutput)
def median_absolute_error(y_true, y_pred):
"""Median absolute error regression loss
Read more in the :ref:`User Guide <median_absolute_error>`.
Parameters
----------
y_true : array-like of shape = (n_samples)
Ground truth (correct) target values.
y_pred : array-like of shape = (n_samples)
Estimated target values.
Returns
-------
loss : float
A positive floating point value (the best value is 0.0).
Examples
--------
>>> from sklearn.metrics import median_absolute_error
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> median_absolute_error(y_true, y_pred)
0.5
"""
y_type, y_true, y_pred, _ = _check_reg_targets(y_true, y_pred,
'uniform_average')
if y_type == 'continuous-multioutput':
raise ValueError("Multioutput not supported in median_absolute_error")
return np.median(np.abs(y_pred - y_true))
def explained_variance_score(y_true, y_pred,
sample_weight=None,
multioutput='uniform_average'):
"""Explained variance regression score function
Best possible score is 1.0, lower values are worse.
Read more in the :ref:`User Guide <explained_variance_score>`.
Parameters
----------
y_true : array-like of shape = (n_samples) or (n_samples, n_outputs)
Ground truth (correct) target values.
y_pred : array-like of shape = (n_samples) or (n_samples, n_outputs)
Estimated target values.
sample_weight : array-like of shape = (n_samples), optional
Sample weights.
multioutput : string in ['raw_values', 'uniform_average', \
'variance_weighted'] or array-like of shape (n_outputs)
Defines aggregating of multiple output scores.
Array-like value defines weights used to average scores.
'raw_values' :
Returns a full set of scores in case of multioutput input.
'uniform_average' :
Scores of all outputs are averaged with uniform weight.
'variance_weighted' :
Scores of all outputs are averaged, weighted by the variances
of each individual output.
Returns
-------
score : float or ndarray of floats
The explained variance or ndarray if 'multioutput' is 'raw_values'.
Notes
-----
This is not a symmetric function.
Examples
--------
>>> from sklearn.metrics import explained_variance_score
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> explained_variance_score(y_true, y_pred) # doctest: +ELLIPSIS
0.957...
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> explained_variance_score(y_true, y_pred, multioutput='uniform_average')
... # doctest: +ELLIPSIS
0.983...
"""
y_type, y_true, y_pred, multioutput = _check_reg_targets(
y_true, y_pred, multioutput)
y_diff_avg = np.average(y_true - y_pred, weights=sample_weight, axis=0)
numerator = np.average((y_true - y_pred - y_diff_avg) ** 2,
weights=sample_weight, axis=0)
y_true_avg = np.average(y_true, weights=sample_weight, axis=0)
denominator = np.average((y_true - y_true_avg) ** 2,
weights=sample_weight, axis=0)
nonzero_numerator = numerator != 0
nonzero_denominator = denominator != 0
valid_score = nonzero_numerator & nonzero_denominator
output_scores = np.ones(y_true.shape[1])
output_scores[valid_score] = 1 - (numerator[valid_score] /
denominator[valid_score])
output_scores[nonzero_numerator & ~nonzero_denominator] = 0.
if isinstance(multioutput, string_types):
if multioutput == 'raw_values':
# return scores individually
return output_scores
elif multioutput == 'uniform_average':
# passing None as weights to np.average() results in a uniform mean
avg_weights = None
elif multioutput == 'variance_weighted':
avg_weights = denominator
else:
avg_weights = multioutput
return np.average(output_scores, weights=avg_weights)
def r2_score(y_true, y_pred,
sample_weight=None,
multioutput=None):
"""R^2 (coefficient of determination) regression score function.
Best possible score is 1.0 and it can be negative (because the
model can be arbitrarily worse). A constant model that always
predicts the expected value of y, disregarding the input features,
would get a R^2 score of 0.0.
Read more in the :ref:`User Guide <r2_score>`.
Parameters
----------
y_true : array-like of shape = (n_samples) or (n_samples, n_outputs)
Ground truth (correct) target values.
y_pred : array-like of shape = (n_samples) or (n_samples, n_outputs)
Estimated target values.
sample_weight : array-like of shape = (n_samples), optional
Sample weights.
multioutput : string in ['raw_values', 'uniform_average', \
'variance_weighted'] or None or array-like of shape (n_outputs)
Defines aggregating of multiple output scores.
Array-like value defines weights used to average scores.
Default value corresponds to 'variance_weighted'; this behaviour is
deprecated since version 0.17 and will be changed to 'uniform_average'
starting from 0.19.
'raw_values' :
Returns a full set of scores in case of multioutput input.
'uniform_average' :
Scores of all outputs are averaged with uniform weight.
'variance_weighted' :
Scores of all outputs are averaged, weighted by the variances
of each individual output.
Returns
-------
z : float or ndarray of floats
The R^2 score or ndarray of scores if 'multioutput' is
'raw_values'.
Notes
-----
This is not a symmetric function.
Unlike most other scores, R^2 score may be negative (it need not actually
be the square of a quantity R).
References
----------
.. [1] `Wikipedia entry on the Coefficient of determination
<http://en.wikipedia.org/wiki/Coefficient_of_determination>`_
Examples
--------
>>> from sklearn.metrics import r2_score
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> r2_score(y_true, y_pred) # doctest: +ELLIPSIS
0.948...
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> r2_score(y_true, y_pred, multioutput='variance_weighted') # doctest: +ELLIPSIS
0.938...
"""
y_type, y_true, y_pred, multioutput = _check_reg_targets(
y_true, y_pred, multioutput)
if sample_weight is not None:
sample_weight = column_or_1d(sample_weight)
weight = sample_weight[:, np.newaxis]
else:
weight = 1.
numerator = (weight * (y_true - y_pred) ** 2).sum(axis=0,
dtype=np.float64)
denominator = (weight * (y_true - np.average(
y_true, axis=0, weights=sample_weight)) ** 2).sum(axis=0,
dtype=np.float64)
nonzero_denominator = denominator != 0
nonzero_numerator = numerator != 0
valid_score = nonzero_denominator & nonzero_numerator
output_scores = np.ones([y_true.shape[1]])
output_scores[valid_score] = 1 - (numerator[valid_score] /
denominator[valid_score])
# arbitrarily set to zero to avoid -inf scores, having a constant
# y_true is not interesting for scoring a regression anyway
output_scores[nonzero_numerator & ~nonzero_denominator] = 0.
if multioutput is None and y_true.shape[1] != 1:
warnings.warn("Default 'multioutput' behavior now corresponds to "
"'variance_weighted' value which is deprecated since "
"0.17, it will be changed to 'uniform_average' "
"starting from 0.19.",
DeprecationWarning)
multioutput = 'variance_weighted'
if isinstance(multioutput, string_types):
if multioutput == 'raw_values':
# return scores individually
return output_scores
elif multioutput == 'uniform_average':
# passing None as weights results in a uniform mean
avg_weights = None
elif multioutput == 'variance_weighted':
avg_weights = denominator
# avoid fail on constant y or one-element arrays
if not np.any(nonzero_denominator):
if not np.any(nonzero_numerator):
return 1.0
else:
return 0.0
else:
avg_weights = multioutput
return np.average(output_scores, weights=avg_weights)
|
bsd-3-clause
|
Eric89GXL/scikit-learn
|
sklearn/cluster/bicluster/spectral.py
|
4
|
19538
|
"""Implements spectral biclustering algorithms.
Authors : Kemal Eren
License: BSD 3 clause
"""
from abc import ABCMeta, abstractmethod
import numpy as np
from scipy.sparse import dia_matrix
from scipy.sparse import issparse
from sklearn.base import BaseEstimator, BiclusterMixin
from sklearn.externals import six
from sklearn.utils.arpack import svds
from sklearn.utils.arpack import eigsh
from sklearn.cluster import KMeans
from sklearn.cluster import MiniBatchKMeans
from sklearn.utils.extmath import randomized_svd
from sklearn.utils.extmath import safe_sparse_dot
from sklearn.utils.extmath import make_nonnegative
from sklearn.utils.extmath import norm
from sklearn.utils.validation import assert_all_finite
from sklearn.utils.validation import check_arrays
from .utils import check_array_ndim
def _scale_normalize(X):
"""Normalize ``X`` by scaling rows and columns independently.
Returns the normalized matrix and the row and column scaling
factors.
"""
X = make_nonnegative(X)
row_diag = np.asarray(1.0 / np.sqrt(X.sum(axis=1))).squeeze()
col_diag = np.asarray(1.0 / np.sqrt(X.sum(axis=0))).squeeze()
row_diag = np.where(np.isnan(row_diag), 0, row_diag)
col_diag = np.where(np.isnan(col_diag), 0, col_diag)
if issparse(X):
n_rows, n_cols = X.shape
r = dia_matrix((row_diag, [0]), shape=(n_rows, n_rows))
c = dia_matrix((col_diag, [0]), shape=(n_cols, n_cols))
an = r * X * c
else:
an = row_diag[:, np.newaxis] * X * col_diag
return an, row_diag, col_diag
def _bistochastic_normalize(X, max_iter=1000, tol=1e-5):
"""Normalize rows and columns of ``X`` simultaneously so that all
rows sum to one constant and all columns sum to a different
constant.
"""
# According to the paper, this can also be done more efficiently with
# deviation reduction and balancing algorithms.
X = make_nonnegative(X)
X_scaled = X
dist = None
for _ in range(max_iter):
X_new, _, _ = _scale_normalize(X_scaled)
if issparse(X):
dist = norm(X_scaled.data - X.data)
else:
dist = norm(X_scaled - X_new)
X_scaled = X_new
if dist is not None and dist < tol:
break
return X_scaled
def _log_normalize(X):
"""Normalize ``X`` according to Kluger's log-interactions scheme."""
X = make_nonnegative(X, min_value=1)
if issparse(X):
raise ValueError("Cannot compute log of a sparse matrix,"
" because log(x) diverges to -infinity as x"
" goes to 0.")
L = np.log(X)
row_avg = L.mean(axis=1)[:, np.newaxis]
col_avg = L.mean(axis=0)
avg = L.mean()
return L - row_avg - col_avg + avg
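# Illustrative sketch (not part of the module): the three normalization
# helpers applied to a tiny dense matrix.  The input values are arbitrary.
def _example_normalization_helpers():
    X = np.array([[1., 2.], [3., 4.]])
    scaled, row_diag, col_diag = _scale_normalize(X)   # independent row/column scaling
    bistochastic = _bistochastic_normalize(X)          # iterated scaling until row/column sums stabilize
    log_interactions = _log_normalize(X)               # Kluger's log-interactions (dense input only)
    return scaled, bistochastic, log_interactions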
class BaseSpectral(six.with_metaclass(ABCMeta, BaseEstimator,
BiclusterMixin)):
"""Base class for spectral biclustering."""
@abstractmethod
def __init__(self, n_clusters=3, svd_method="randomized",
n_svd_vecs=None, mini_batch=False, init="k-means++",
n_init=10, n_jobs=1, random_state=None):
self.n_clusters = n_clusters
self.svd_method = svd_method
self.n_svd_vecs = n_svd_vecs
self.mini_batch = mini_batch
self.init = init
self.n_init = n_init
self.n_jobs = n_jobs
self.random_state = random_state
def _check_parameters(self):
legal_svd_methods = ('randomized', 'arpack')
if self.svd_method not in legal_svd_methods:
raise ValueError("Unknown SVD method: '{}'. svd_method must be"
" one of {}.".format(self.svd_method,
legal_svd_methods))
def fit(self, X):
"""Creates a biclustering for X.
Parameters
----------
X : array-like, shape (n_samples, n_features)
"""
X, = check_arrays(X, sparse_format='csr', dtype=np.float64)
check_array_ndim(X)
self._check_parameters()
self._fit(X)
def _svd(self, array, n_components, n_discard):
"""Returns first `n_components` left and right singular
vectors u and v, discarding the first `n_discard`.
"""
if self.svd_method == 'randomized':
kwargs = {}
if self.n_svd_vecs is not None:
kwargs['n_oversamples'] = self.n_svd_vecs
u, _, vt = randomized_svd(array, n_components,
random_state=self.random_state,
**kwargs)
elif self.svd_method == 'arpack':
u, _, vt = svds(array, k=n_components, ncv=self.n_svd_vecs)
if np.any(np.isnan(vt)):
# some eigenvalues of A * A.T are negative, causing
# sqrt() to be np.nan. This causes some vectors in vt
# to be np.nan.
_, v = eigsh(safe_sparse_dot(array.T, array),
ncv=self.n_svd_vecs)
vt = v.T
if np.any(np.isnan(u)):
_, u = eigsh(safe_sparse_dot(array, array.T),
ncv=self.n_svd_vecs)
assert_all_finite(u)
assert_all_finite(vt)
u = u[:, n_discard:]
vt = vt[n_discard:]
return u, vt.T
def _k_means(self, data, n_clusters):
if self.mini_batch:
model = MiniBatchKMeans(n_clusters,
init=self.init,
n_init=self.n_init,
random_state=self.random_state)
else:
model = KMeans(n_clusters, init=self.init,
n_init=self.n_init, n_jobs=self.n_jobs,
random_state=self.random_state)
model.fit(data)
centroid = model.cluster_centers_
labels = model.labels_
return centroid, labels
class SpectralCoclustering(BaseSpectral):
"""Spectral Co-Clustering algorithm (Dhillon, 2001).
Clusters rows and columns of an array `X` to solve the relaxed
normalized cut of the bipartite graph created from `X` as follows:
the edge between row vertex `i` and column vertex `j` has weight
`X[i, j]`.
The resulting bicluster structure is block-diagonal, since each
row and each column belongs to exactly one bicluster.
Supports sparse matrices, as long as they are nonnegative.
Parameters
----------
n_clusters : integer, optional, default: 3
The number of biclusters to find.
svd_method : string, optional, default: 'randomized'
Selects the algorithm for finding singular vectors. May be
'randomized' or 'arpack'. If 'randomized', use
:func:`sklearn.utils.extmath.randomized_svd`, which may be faster
for large matrices. If 'arpack', use
:func:`sklearn.utils.arpack.svds`, which is more accurate, but
possibly slower in some cases.
n_svd_vecs : int, optional, default: None
Number of vectors to use in calculating the SVD. Corresponds
to `ncv` when `svd_method=arpack` and `n_oversamples` when
`svd_method` is 'randomized'.
mini_batch : bool, optional, default: False
Whether to use mini-batch k-means, which is faster but may get
different results.
init : {'k-means++', 'random' or an ndarray}
Method for initialization of k-means algorithm; defaults to
'k-means++'.
n_init : int, optional, default: 10
Number of random initializations that are tried with the
k-means algorithm.
If mini-batch k-means is used, the best initialization is
chosen and the algorithm runs once. Otherwise, the algorithm
is run for each initialization and the best solution chosen.
n_jobs : int, optional, default: 1
The number of jobs to use for the computation. This works by breaking
down the pairwise matrix into n_jobs even slices and computing them in
parallel.
If -1 all CPUs are used. If 1 is given, no parallel computing code is
used at all, which is useful for debugging. For n_jobs below -1,
(n_cpus + 1 + n_jobs) are used. Thus for n_jobs = -2, all CPUs but one
are used.
random_state : int seed, RandomState instance, or None (default)
A pseudo random number generator used by the K-Means
initialization.
Attributes
----------
`rows_` : array-like, shape (n_row_clusters, n_rows)
Results of the clustering. `rows[i, r]` is True if cluster `i`
contains row `r`. Available only after calling ``fit``.
`columns_` : array-like, shape (n_column_clusters, n_columns)
Results of the clustering, like `rows`.
`row_labels_` : array-like, shape (n_rows,)
The bicluster label of each row.
`column_labels_` : array-like, shape (n_cols,)
The bicluster label of each column.
References
----------
* Dhillon, Inderjit S, 2001. `Co-clustering documents and words using
bipartite spectral graph partitioning
<http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.140.3011>`__.
"""
def __init__(self, n_clusters=3, svd_method='randomized',
n_svd_vecs=None, mini_batch=False, init='k-means++',
n_init=10, n_jobs=1, random_state=None):
super(SpectralCoclustering, self).__init__(n_clusters,
svd_method,
n_svd_vecs,
mini_batch,
init,
n_init,
n_jobs,
random_state)
def _fit(self, X):
normalized_data, row_diag, col_diag = _scale_normalize(X)
n_sv = 1 + int(np.ceil(np.log2(self.n_clusters)))
u, v = self._svd(normalized_data, n_sv, n_discard=1)
z = np.vstack((row_diag[:, np.newaxis] * u,
col_diag[:, np.newaxis] * v))
_, labels = self._k_means(z, self.n_clusters)
n_rows = X.shape[0]
self.row_labels_ = labels[:n_rows]
self.column_labels_ = labels[n_rows:]
self.rows_ = np.vstack(self.row_labels_ == c
for c in range(self.n_clusters))
self.columns_ = np.vstack(self.column_labels_ == c
for c in range(self.n_clusters))
class SpectralBiclustering(BaseSpectral):
"""Spectral biclustering (Kluger, 2003).
Partitions rows and columns under the assumption that the data has
an underlying checkerboard structure. For instance, if there are
two row partitions and three column partitions, each row will
belong to three biclusters, and each column will belong to two
biclusters. The outer product of the corresponding row and column
label vectors gives this checkerboard structure.
Parameters
----------
n_clusters : integer or tuple (n_row_clusters, n_column_clusters)
The number of row and column clusters in the checkerboard
structure.
method : string, optional, default: 'bistochastic'
Method of normalizing and converting singular vectors into
biclusters. May be one of 'scale', 'bistochastic', or 'log'.
The authors recommend using 'log'. If the data is sparse,
however, log normalization will not work, which is why the
default is 'bistochastic'. CAUTION: if `method='log'`, the
data must not be sparse.
n_components : integer, optional, default: 6
Number of singular vectors to check.
n_best : integer, optional, default: 3
Number of best singular vectors to which to project the data
for clustering.
svd_method : string, optional, default: 'randomized'
Selects the algorithm for finding singular vectors. May be
'randomized' or 'arpack'. If 'randomized', uses
`sklearn.utils.extmath.randomized_svd`, which may be faster
for large matrices. If 'arpack', uses
`sklearn.utils.arpack.svds`, which is more accurate, but
possibly slower in some cases.
n_svd_vecs : int, optional, default: None
Number of vectors to use in calculating the SVD. Corresponds
to `ncv` when `svd_method=arpack` and `n_oversamples` when
`svd_method` is 'randomized'.
mini_batch : bool, optional, default: False
Whether to use mini-batch k-means, which is faster but may get
different results.
init : {'k-means++', 'random' or an ndarray}
Method for initialization of k-means algorithm; defaults to
'k-means++'.
n_init : int, optional, default: 10
Number of random initializations that are tried with the
k-means algorithm.
If mini-batch k-means is used, the best initialization is
chosen and the algorithm runs once. Otherwise, the algorithm
is run for each initialization and the best solution chosen.
n_jobs : int, optional, default: 1
The number of jobs to use for the computation. This works by breaking
down the pairwise matrix into n_jobs even slices and computing them in
parallel.
If -1 all CPUs are used. If 1 is given, no parallel computing code is
used at all, which is useful for debugging. For n_jobs below -1,
(n_cpus + 1 + n_jobs) are used. Thus for n_jobs = -2, all CPUs but one
are used.
random_state : int seed, RandomState instance, or None (default)
A pseudo random number generator used by the K-Means
initialization.
Attributes
----------
`rows_` : array-like, shape (n_row_clusters, n_rows)
Results of the clustering. `rows[i, r]` is True if cluster `i`
contains row `r`. Available only after calling ``fit``.
`columns_` : array-like, shape (n_column_clusters, n_columns)
Results of the clustering, like `rows`.
`row_labels_` : array-like, shape (n_rows,)
Row partition labels.
`column_labels_` : array-like, shape (n_cols,)
Column partition labels.
References
----------
* Kluger, Yuval, et. al., 2003. `Spectral biclustering of microarray
data: coclustering genes and conditions
<http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.135.1608>`__.
"""
def __init__(self, n_clusters=3, method='bistochastic',
n_components=6, n_best=3, svd_method='randomized',
n_svd_vecs=None, mini_batch=False, init='k-means++',
n_init=10, n_jobs=1, random_state=None):
super(SpectralBiclustering, self).__init__(n_clusters,
svd_method,
n_svd_vecs,
mini_batch,
init,
n_init,
n_jobs,
random_state)
self.method = method
self.n_components = n_components
self.n_best = n_best
def _check_parameters(self):
super(SpectralBiclustering, self)._check_parameters()
legal_methods = ('bistochastic', 'scale', 'log')
if self.method not in legal_methods:
raise ValueError("Unknown method: '{}'. method must be"
" one of {}.".format(self.method, legal_methods))
try:
int(self.n_clusters)
except TypeError:
try:
r, c = self.n_clusters
int(r)
int(c)
except (ValueError, TypeError):
raise ValueError("Incorrect parameter n_clusters has value:"
" {}. It should either be a single integer"
" or an iterable with two integers:"
" (n_row_clusters, n_column_clusters)")
if self.n_components < 1:
raise ValueError("Parameter n_components must be greater than 0,"
" but its value is {}".format(self.n_components))
if self.n_best < 1:
raise ValueError("Parameter n_best must be greater than 0,"
" but its value is {}".format(self.n_best))
if self.n_best > self.n_components:
raise ValueError("n_best cannot be larger than"
" n_components, but {} > {}"
"".format(self.n_best, self.n_components))
def _fit(self, X):
n_sv = self.n_components
if self.method == 'bistochastic':
normalized_data = _bistochastic_normalize(X)
n_sv += 1
elif self.method == 'scale':
normalized_data, _, _ = _scale_normalize(X)
n_sv += 1
elif self.method == 'log':
normalized_data = _log_normalize(X)
n_discard = 0 if self.method == 'log' else 1
u, v = self._svd(normalized_data, n_sv, n_discard)
ut = u.T
vt = v.T
try:
n_row_clusters, n_col_clusters = self.n_clusters
except TypeError:
n_row_clusters = n_col_clusters = self.n_clusters
best_ut = self._fit_best_piecewise(ut, self.n_best,
n_row_clusters)
best_vt = self._fit_best_piecewise(vt, self.n_best,
n_col_clusters)
self.row_labels_ = self._project_and_cluster(X, best_vt.T,
n_row_clusters)
self.column_labels_ = self._project_and_cluster(X.T, best_ut.T,
n_col_clusters)
self.rows_ = np.vstack(self.row_labels_ == label
for label in range(n_row_clusters)
for _ in range(n_col_clusters))
self.columns_ = np.vstack(self.column_labels_ == label
for _ in range(n_row_clusters)
for label in range(n_col_clusters))
def _fit_best_piecewise(self, vectors, n_best, n_clusters):
"""Find the ``n_best`` vectors that are best approximated by piecewise
constant vectors.
The piecewise vectors are found by k-means; the best is chosen
according to Euclidean distance.
"""
def make_piecewise(v):
centroid, labels = self._k_means(v.reshape(-1, 1), n_clusters)
return centroid[labels].ravel()
piecewise_vectors = np.apply_along_axis(make_piecewise,
axis=1, arr=vectors)
dists = np.apply_along_axis(norm, axis=1,
arr=(vectors - piecewise_vectors))
result = vectors[np.argsort(dists)[:n_best]]
return result
def _project_and_cluster(self, data, vectors, n_clusters):
"""Project ``data`` to ``vectors`` and cluster the result."""
projected = safe_sparse_dot(data, vectors)
_, labels = self._k_means(projected, n_clusters)
return labels
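# Minimal usage sketch (illustrative, not part of the module): co-clustering a
# random block-structured matrix.  The matrix construction and parameters are
# assumptions chosen only for demonstration.
def _example_spectral_coclustering():
    rng = np.random.RandomState(0)
    # 60 x 60 nonnegative matrix with a 3 x 3 block structure plus noise.
    X = np.kron(rng.randint(1, 10, (3, 3)), np.ones((20, 20))) + rng.rand(60, 60)
    model = SpectralCoclustering(n_clusters=3, random_state=0)
    model.fit(X)
    return model.row_labels_, model.column_labels_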
|
bsd-3-clause
|
JoeBartelmo/goddard
|
gui/BeamGapWidget.py
|
2
|
4545
|
# Copyright (c) 2016, Jeffrey Maggio and Joseph Bartelmo
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and
# associated documentation files (the "Software"), to deal in the Software without restriction,
# including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so,
# subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all copies or substantial
# portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT
# LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
import matplotlib
matplotlib.use('TkAgg')
import numpy
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2TkAgg
from matplotlib.figure import Figure
import Tkinter as tk
from multiprocessing import Process
from Queue import Queue, Empty
import sys
from Threads import TelemetryThread
class BeamGapWidget(tk.Toplevel):
'''
Widget designed to display incoming beamgap data
parent is the root container
queue is a custom queue where each object contains only necessary
information for this widget: Valmar data from telemetry json
'''
def __init__(self, parent, queue, **kwargs):
tk.Toplevel.__init__(self, parent, **kwargs)
self.queue = queue
self.parent = parent
self.protocol("WM_DELETE_WINDOW", self.onDelete)
self._init_ui()
self.updateloop()
def onDelete(self):
'''
Destroy callback
'''
self.destroy()
def _init_ui(self):
# Image display label
self.fig = Figure()
self.gap = self.fig.add_subplot(111)
self.canvas = FigureCanvasTkAgg(self.fig, master=self)
self.canvas.draw()
self.canvas.get_tk_widget().grid(row=0, column = 0, rowspan = 2, sticky = 'nsew')
self.label = tk.Label(self, text='Std. Dev.: None\t\tBeam Number: 0',justify='left')
self.label.grid(row=2, column=0, sticky='sew')
self.grid_columnconfigure(0, weight=1)
self.grid_rowconfigure(0, weight=1)
def beamGapDisplay(self, data):
line = numpy.zeros(len(data)) + (data/2)
lineLength = numpy.arange(0,len(data))
std = numpy.std(data)
stdL = line - std
return line, lineLength, std, stdL
def updateloop(self):
try:
data = self.queue.get(False)
ibeamNum = data['IbeamCounter']
ibeamDist = numpy.asarray(data['BeamGap'])
print len(ibeamDist)
#pipe that data to the beamGapDisplay function below
line, lineLength, std, stdL = self.beamGapDisplay(ibeamDist)
self.gap.cla()
self.gap.plot(line, lineLength, 'r', \
-line, lineLength, 'r', \
stdL, lineLength, 'g--', \
-stdL, lineLength, 'g--')
#TODO: set to default beam gap width within some margin
self.gap.set_xlabel('Distance (pix)')
self.gap.set_ylabel('Beam Data Length (pix)')
#self.gap.set_xlim()
self.gap.set_ylim(0,len(ibeamDist))
self.label['text'] = 'Std. Dev:' + str(std) + '\t\tBeam Num:' + str(ibeamNum)
self.canvas.draw()
except Empty:
pass
self.after(100, self.updateloop)
if __name__ == '__main__':
from Queue import Queue
from MarsTestHarness import MarsTestHarness
from threading import Thread
telem_q = Queue()
beam_q = Queue()
mth = MarsTestHarness(telem_q, beam_q)
Thread(target=mth.generateQueueData).start()
Thread(target=mth.generateQueueData).run()
#data = numpy.arange(0,1,0.05)
root = tk.Tk()
#t = BeamGapWidget(root, Queue())
t = BeamGapWidget(root, beam_q)
#t = BeamGapWidget(root, data)
t.grid()
root.update()
print t.winfo_height(), t.winfo_width()
root.mainloop()
sys.exit()
|
mit
|
joernhees/scikit-learn
|
sklearn/linear_model/ransac.py
|
12
|
19391
|
# coding: utf-8
# Author: Johannes Schönberger
#
# License: BSD 3 clause
import numpy as np
import warnings
from ..base import BaseEstimator, MetaEstimatorMixin, RegressorMixin, clone
from ..utils import check_random_state, check_array, check_consistent_length
from ..utils.random import sample_without_replacement
from ..utils.validation import check_is_fitted
from .base import LinearRegression
from ..utils.validation import has_fit_parameter
_EPSILON = np.spacing(1)
def _dynamic_max_trials(n_inliers, n_samples, min_samples, probability):
"""Determine number trials such that at least one outlier-free subset is
sampled for the given inlier/outlier ratio.
Parameters
----------
n_inliers : int
Number of inliers in the data.
n_samples : int
Total number of samples in the data.
min_samples : int
Minimum number of samples chosen randomly from original data.
probability : float
Probability (confidence) that one outlier-free sample is generated.
Returns
-------
trials : int
Number of trials.
"""
inlier_ratio = n_inliers / float(n_samples)
nom = max(_EPSILON, 1 - probability)
denom = max(_EPSILON, 1 - inlier_ratio ** min_samples)
if nom == 1:
return 0
if denom == 1:
return float('inf')
return abs(float(np.ceil(np.log(nom) / np.log(denom))))
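# Worked example (illustrative, not part of the scikit-learn source): with 80
# inliers out of 100 samples, min_samples=2 and probability=0.99 the inlier
# ratio is 0.8, so the bound is
#     N >= log(1 - 0.99) / log(1 - 0.8 ** 2) = log(0.01) / log(0.36) ~ 4.51
# and _dynamic_max_trials(80, 100, 2, 0.99) returns ceil(4.51) == 5.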
class RANSACRegressor(BaseEstimator, MetaEstimatorMixin, RegressorMixin):
"""RANSAC (RANdom SAmple Consensus) algorithm.
RANSAC is an iterative algorithm for the robust estimation of parameters
from a subset of inliers from the complete data set. More information can
be found in the general documentation of linear models.
A detailed description of the algorithm can be found in the documentation
of the ``linear_model`` sub-package.
Read more in the :ref:`User Guide <ransac_regression>`.
Parameters
----------
base_estimator : object, optional
Base estimator object which implements the following methods:
* `fit(X, y)`: Fit model to given training data and target values.
* `score(X, y)`: Returns the mean accuracy on the given test data,
which is used for the stop criterion defined by `stop_score`.
Additionally, the score is used to decide which of two equally
large consensus sets is chosen as the better one.
If `base_estimator` is None, then
``base_estimator=sklearn.linear_model.LinearRegression()`` is used for
target values of dtype float.
Note that the current implementation only supports regression
estimators.
min_samples : int (>= 1) or float ([0, 1]), optional
Minimum number of samples chosen randomly from original data. Treated
as an absolute number of samples for `min_samples >= 1`, treated as a
relative number `ceil(min_samples * X.shape[0])` for
`min_samples < 1`. This is typically chosen as the minimal number of
samples necessary to estimate the given `base_estimator`. By default a
``sklearn.linear_model.LinearRegression()`` estimator is assumed and
`min_samples` is chosen as ``X.shape[1] + 1``.
residual_threshold : float, optional
Maximum residual for a data sample to be classified as an inlier.
By default the threshold is chosen as the MAD (median absolute
deviation) of the target values `y`.
is_data_valid : callable, optional
This function is called with the randomly selected data before the
model is fitted to it: `is_data_valid(X, y)`. If its return value is
False the current randomly chosen sub-sample is skipped.
is_model_valid : callable, optional
This function is called with the estimated model and the randomly
selected data: `is_model_valid(model, X, y)`. If its return value is
False the current randomly chosen sub-sample is skipped.
Rejecting samples with this function is computationally costlier than
with `is_data_valid`. `is_model_valid` should therefore only be used if
the estimated model is needed for making the rejection decision.
max_trials : int, optional
Maximum number of iterations for random sample selection.
max_skips : int, optional
Maximum number of iterations that can be skipped due to finding zero
inliers or invalid data defined by ``is_data_valid`` or invalid models
defined by ``is_model_valid``.
.. versionadded:: 0.19
stop_n_inliers : int, optional
Stop iteration if at least this number of inliers are found.
stop_score : float, optional
Stop iteration if score is greater than or equal to this threshold.
stop_probability : float in range [0, 1], optional
RANSAC iteration stops if at least one outlier-free set of the training
data is sampled in RANSAC. This requires generating at least N
samples (iterations)::
N >= log(1 - probability) / log(1 - e**m)
where the probability (confidence) is typically set to high value such
as 0.99 (the default) and e is the current fraction of inliers w.r.t.
the total number of samples.
residual_metric : callable, optional
Metric to reduce the dimensionality of the residuals to 1 for
multi-dimensional target values ``y.shape[1] > 1``. By default the sum
of absolute differences is used::
lambda dy: np.sum(np.abs(dy), axis=1)
.. deprecated:: 0.18
``residual_metric`` is deprecated from 0.18 and will be removed in
0.20. Use ``loss`` instead.
loss : string, callable, optional, default "absolute_loss"
String inputs, "absolute_loss" and "squared_loss" are supported which
find the absolute loss and squared loss per sample
respectively.
If ``loss`` is a callable, then it should be a function that takes
two arrays as inputs, the true and predicted value and returns a 1-D
array with the ``i``th value of the array corresponding to the loss
on `X[i]`.
If the loss on a sample is greater than the ``residual_threshold``, then
this sample is classified as an outlier.
random_state : int, RandomState instance or None, optional, default None
The generator used to initialize the centers. If int, random_state is
the seed used by the random number generator; If RandomState instance,
random_state is the random number generator; If None, the random number
generator is the RandomState instance used by `np.random`.
Attributes
----------
estimator_ : object
Best fitted model (copy of the `base_estimator` object).
n_trials_ : int
Number of random selection trials until one of the stop criteria is
met. It is always ``<= max_trials``.
inlier_mask_ : bool array of shape [n_samples]
Boolean mask of inliers classified as ``True``.
n_skips_no_inliers_ : int
Number of iterations skipped due to finding zero inliers.
.. versionadded:: 0.19
n_skips_invalid_data_ : int
Number of iterations skipped due to invalid data defined by
``is_data_valid``.
.. versionadded:: 0.19
n_skips_invalid_model_ : int
Number of iterations skipped due to an invalid model defined by
``is_model_valid``.
.. versionadded:: 0.19
References
----------
.. [1] https://en.wikipedia.org/wiki/RANSAC
.. [2] http://www.cs.columbia.edu/~belhumeur/courses/compPhoto/ransac.pdf
.. [3] http://www.bmva.org/bmvc/2009/Papers/Paper355/Paper355.pdf
"""
def __init__(self, base_estimator=None, min_samples=None,
residual_threshold=None, is_data_valid=None,
is_model_valid=None, max_trials=100, max_skips=np.inf,
stop_n_inliers=np.inf, stop_score=np.inf,
stop_probability=0.99, residual_metric=None,
loss='absolute_loss', random_state=None):
self.base_estimator = base_estimator
self.min_samples = min_samples
self.residual_threshold = residual_threshold
self.is_data_valid = is_data_valid
self.is_model_valid = is_model_valid
self.max_trials = max_trials
self.max_skips = max_skips
self.stop_n_inliers = stop_n_inliers
self.stop_score = stop_score
self.stop_probability = stop_probability
self.residual_metric = residual_metric
self.random_state = random_state
self.loss = loss
def fit(self, X, y, sample_weight=None):
"""Fit estimator using RANSAC algorithm.
Parameters
----------
X : array-like or sparse matrix, shape [n_samples, n_features]
Training data.
y : array-like, shape = [n_samples] or [n_samples, n_targets]
Target values.
sample_weight : array-like, shape = [n_samples]
Individual weights for each sample
raises error if sample_weight is passed and base_estimator
fit method does not support it.
Raises
------
ValueError
If no valid consensus set could be found. This occurs if
`is_data_valid` and `is_model_valid` return False for all
`max_trials` randomly chosen sub-samples.
"""
X = check_array(X, accept_sparse='csr')
y = check_array(y, ensure_2d=False)
check_consistent_length(X, y)
if self.base_estimator is not None:
base_estimator = clone(self.base_estimator)
else:
base_estimator = LinearRegression()
if self.min_samples is None:
# assume linear model by default
min_samples = X.shape[1] + 1
elif 0 < self.min_samples < 1:
min_samples = np.ceil(self.min_samples * X.shape[0])
elif self.min_samples >= 1:
if self.min_samples % 1 != 0:
raise ValueError("Absolute number of samples must be an "
"integer value.")
min_samples = self.min_samples
else:
raise ValueError("Value for `min_samples` must be scalar and "
"positive.")
if min_samples > X.shape[0]:
raise ValueError("`min_samples` may not be larger than number "
"of samples ``X.shape[0]``.")
if self.stop_probability < 0 or self.stop_probability > 1:
raise ValueError("`stop_probability` must be in range [0, 1].")
if self.residual_threshold is None:
# MAD (median absolute deviation)
residual_threshold = np.median(np.abs(y - np.median(y)))
else:
residual_threshold = self.residual_threshold
if self.residual_metric is not None:
warnings.warn(
"'residual_metric' was deprecated in version 0.18 and "
"will be removed in version 0.20. Use 'loss' instead.",
DeprecationWarning)
if self.loss == "absolute_loss":
if y.ndim == 1:
loss_function = lambda y_true, y_pred: np.abs(y_true - y_pred)
else:
loss_function = lambda \
y_true, y_pred: np.sum(np.abs(y_true - y_pred), axis=1)
elif self.loss == "squared_loss":
if y.ndim == 1:
loss_function = lambda y_true, y_pred: (y_true - y_pred) ** 2
else:
loss_function = lambda \
y_true, y_pred: np.sum((y_true - y_pred) ** 2, axis=1)
elif callable(self.loss):
loss_function = self.loss
else:
raise ValueError(
"loss should be 'absolute_loss', 'squared_loss' or a callable."
"Got %s. " % self.loss)
random_state = check_random_state(self.random_state)
try: # Not all estimator accept a random_state
base_estimator.set_params(random_state=random_state)
except ValueError:
pass
estimator_fit_has_sample_weight = has_fit_parameter(base_estimator,
"sample_weight")
estimator_name = type(base_estimator).__name__
if (sample_weight is not None and not
estimator_fit_has_sample_weight):
raise ValueError("%s does not support sample_weight. Samples"
" weights are only used for the calibration"
" itself." % estimator_name)
if sample_weight is not None:
sample_weight = np.asarray(sample_weight)
n_inliers_best = 1
score_best = -np.inf
inlier_mask_best = None
X_inlier_best = None
y_inlier_best = None
self.n_skips_no_inliers_ = 0
self.n_skips_invalid_data_ = 0
self.n_skips_invalid_model_ = 0
# number of data samples
n_samples = X.shape[0]
sample_idxs = np.arange(n_samples)
n_samples, _ = X.shape
self.n_trials_ = 0
max_trials = self.max_trials
while self.n_trials_ < max_trials:
self.n_trials_ += 1
if (self.n_skips_no_inliers_ + self.n_skips_invalid_data_ +
self.n_skips_invalid_model_) > self.max_skips:
break
# choose random sample set
subset_idxs = sample_without_replacement(n_samples, min_samples,
random_state=random_state)
X_subset = X[subset_idxs]
y_subset = y[subset_idxs]
# check if random sample set is valid
if (self.is_data_valid is not None
and not self.is_data_valid(X_subset, y_subset)):
self.n_skips_invalid_data_ += 1
continue
# fit model for current random sample set
if sample_weight is None:
base_estimator.fit(X_subset, y_subset)
else:
base_estimator.fit(X_subset, y_subset,
sample_weight=sample_weight[subset_idxs])
# check if estimated model is valid
if (self.is_model_valid is not None and not
self.is_model_valid(base_estimator, X_subset, y_subset)):
self.n_skips_invalid_model_ += 1
continue
# residuals of all data for current random sample model
y_pred = base_estimator.predict(X)
# XXX: Deprecation: Remove this if block in 0.20
if self.residual_metric is not None:
diff = y_pred - y
if diff.ndim == 1:
diff = diff.reshape(-1, 1)
residuals_subset = self.residual_metric(diff)
else:
residuals_subset = loss_function(y, y_pred)
# classify data into inliers and outliers
inlier_mask_subset = residuals_subset < residual_threshold
n_inliers_subset = np.sum(inlier_mask_subset)
# less inliers -> skip current random sample
if n_inliers_subset < n_inliers_best:
self.n_skips_no_inliers_ += 1
continue
# extract inlier data set
inlier_idxs_subset = sample_idxs[inlier_mask_subset]
X_inlier_subset = X[inlier_idxs_subset]
y_inlier_subset = y[inlier_idxs_subset]
# score of inlier data set
score_subset = base_estimator.score(X_inlier_subset,
y_inlier_subset)
# same number of inliers but worse score -> skip current random
# sample
if (n_inliers_subset == n_inliers_best
and score_subset < score_best):
continue
# save current random sample as best sample
n_inliers_best = n_inliers_subset
score_best = score_subset
inlier_mask_best = inlier_mask_subset
X_inlier_best = X_inlier_subset
y_inlier_best = y_inlier_subset
max_trials = min(
max_trials,
_dynamic_max_trials(n_inliers_best, n_samples,
min_samples, self.stop_probability))
# break if sufficient number of inliers or score is reached
if n_inliers_best >= self.stop_n_inliers or \
score_best >= self.stop_score:
break
# if none of the iterations met the required criteria
if inlier_mask_best is None:
if ((self.n_skips_no_inliers_ + self.n_skips_invalid_data_ +
self.n_skips_invalid_model_) > self.max_skips):
raise ValueError(
"RANSAC skipped more iterations than `max_skips` without"
" finding a valid consensus set. Iterations were skipped"
" because each randomly chosen sub-sample failed the"
" passing criteria. See estimator attributes for"
" diagnostics (n_skips*).")
else:
raise ValueError(
"RANSAC could not find a valid consensus set. All"
" `max_trials` iterations were skipped because each"
" randomly chosen sub-sample failed the passing criteria."
" See estimator attributes for diagnostics (n_skips*).")
else:
if (self.n_skips_no_inliers_ + self.n_skips_invalid_data_ +
self.n_skips_invalid_model_) > self.max_skips:
warnings.warn("RANSAC found a valid consensus set but exited"
" early due to skipping more iterations than"
" `max_skips`. See estimator attributes for"
" diagnostics (n_skips*).",
UserWarning)
# estimate final model using all inliers
base_estimator.fit(X_inlier_best, y_inlier_best)
self.estimator_ = base_estimator
self.inlier_mask_ = inlier_mask_best
return self
def predict(self, X):
"""Predict using the estimated model.
This is a wrapper for `estimator_.predict(X)`.
Parameters
----------
X : numpy array of shape [n_samples, n_features]
Returns
-------
y : array, shape = [n_samples] or [n_samples, n_targets]
Returns predicted values.
"""
check_is_fitted(self, 'estimator_')
return self.estimator_.predict(X)
def score(self, X, y):
"""Returns the score of the prediction.
This is a wrapper for `estimator_.score(X, y)`.
Parameters
----------
X : numpy array or sparse matrix of shape [n_samples, n_features]
Training data.
y : array, shape = [n_samples] or [n_samples, n_targets]
Target values.
Returns
-------
z : float
Score of the prediction.
"""
check_is_fitted(self, 'estimator_')
return self.estimator_.score(X, y)
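# Minimal usage sketch (illustrative only, not part of the scikit-learn source;
# run it from a separate script, since this module uses relative imports):
#
#     import numpy as np
#     from sklearn.linear_model import RANSACRegressor
#     X = np.arange(20, dtype=float).reshape(-1, 1)
#     y = 3.0 * X.ravel() + 1.0
#     y[::7] += 50.0                          # a few gross outliers
#     ransac = RANSACRegressor(random_state=0).fit(X, y)
#     print(ransac.estimator_.coef_)          # slope recovered close to 3.0
#     print(ransac.inlier_mask_.sum())        # outliers excluded from the mask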
|
bsd-3-clause
|
aberaud/opendht
|
python/tools/dht/tests.py
|
3
|
34950
|
# -*- coding: utf-8 -*-
# Copyright (C) 2015 Savoir-Faire Linux Inc.
# Author(s): Adrien Béraud <[email protected]>
# Simon Désaulniers <[email protected]>
import sys
import os
import threading
import random
import string
import time
import subprocess
import re
import collections
import traceback
from matplotlib.ticker import FuncFormatter
import math
import numpy as np
import matplotlib.pyplot as plt
import networkx as nx
from networkx.drawing.nx_agraph import graphviz_layout
from opendht import *
from dht.network import DhtNetwork, DhtNetworkSubProcess
############
# Common #
############
# matplotlib display format for bits (b, Kb, Mb)
bit_format = None
Kbit_format = FuncFormatter(lambda x, pos: '%1.1f' % (x*1024**-1) + 'Kb')
Mbit_format = FuncFormatter(lambda x, pos: '%1.1f' % (x*1024**-2) + 'Mb')
def random_str_val(size=1024):
"""Creates a random string value of specified size.
@param size: Size, in bytes, of the value.
@type size: int
@return: Random string value
@rtype : str
"""
return ''.join(random.choice(string.hexdigits) for _ in range(size))
def random_hash():
"""Creates random InfoHash.
"""
return InfoHash(random_str_val(size=40).encode())
def timer(f, *args):
"""
Times the execution of the function f.
@param f : Function to time
@type f : function
@param args : Arguments of the function f
@type args : list
@rtype : float
@return : Time taken by the function f, in seconds
"""
start = time.time()
f(*args)
return time.time() - start
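# Usage sketch (illustrative): timer() simply returns the elapsed wall-clock
# time, e.g. timer(time.sleep, 0.5) returns a float close to 0.5.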
def reset_before_test(featureTestMethod):
"""
This is a decorator for all test methods needing reset().
@param featureTestMethod: The method to be decorated. All decorated methods
must have 'self' object as first arg.
@type featureTestMethod: function
"""
def call(*args, **kwargs):
self = args[0]
if isinstance(self, FeatureTest):
self._reset()
return featureTestMethod(*args, **kwargs)
return call
def display_plot(yvals, xvals=None, yformatter=None, display_time=3, **kwargs):
"""
Displays a plot of data in interactive mode. This method is made to be
called successively for plot refreshing.
@param yvals: Ordinate values (float).
@type yvals: list
@param xvals: Abscissa values (float).
@type xvals: list
@param yformatter: The matplotlib FuncFormatter to use for y values.
@type yformatter: matplotlib.ticker.FuncFormatter
@param display_time: The time matplotlib can take to refresh the plot.
@type display_time: int
"""
plt.ion()
plt.clf()
plt.show()
if yformatter:
plt.axes().yaxis.set_major_formatter(yformatter)
if xvals:
plt.plot(xvals, yvals, **kwargs)
else:
plt.plot(yvals, **kwargs)
plt.pause(display_time)
def display_traffic_plot(ifname):
"""Displays the traffic plot for a given interface name.
@param ifname: Interface name.
@type ifname: string
"""
ydata = []
xdata = []
# warning: infinite loop
interval = 2
for rate in iftop_traffic_data(ifname, interval=interval):
ydata.append(rate)
xdata.append((xdata[-1] if len(xdata) > 0 else 0) + interval)
display_plot(ydata, xvals=xdata, yformatter=Kbit_format, color='blue')
def iftop_traffic_data(ifname, interval=2, rate_type='send_receive'):
"""
Generator (yields data) function collecting traffic data from iftop
subprocess.
@param ifname: Interface to listen to.
@type ifname: string
@param interval: Interval of time between two data collections. Possible
values are 2, 10 or 40.
@type interval: int
@param rate_type: (default: send_receive) Whether to pick "send", "receive"
or "send and receive" rates. Possible values : "send",
"receive" and "send_receive".
@type rate_type: string
"""
# iftop stdout string format
SEND_RATE_STR = "Total send rate"
RECEIVE_RATE_STR = "Total receive rate"
SEND_RECEIVE_RATE_STR = "Total send and receive rate"
RATE_STR = {
"send" : SEND_RATE_STR,
"receive" : RECEIVE_RATE_STR,
"send_receive" : SEND_RECEIVE_RATE_STR
}
TWO_SECONDS_RATE_COL = 0
TEN_SECONDS_RATE_COL = 1
FOURTY_SECONDS_RATE_COL = 2
COLS = {
2 : TWO_SECONDS_RATE_COL,
10 : TEN_SECONDS_RATE_COL,
40 : FOURTY_SECONDS_RATE_COL
}
FLOAT_REGEX = "[0-9]+[.]*[0-9]*"
BIT_REGEX = "[KM]*b"
iftop = subprocess.Popen(["iftop", "-i", ifname, "-t"], stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
while True:
line = iftop.stdout.readline().decode()
if RATE_STR[rate_type] in line:
rate, unit = re.findall("("+FLOAT_REGEX+")("+BIT_REGEX+")", line)[COLS[interval]]
rate = float(rate)
if unit == "Kb":
rate *= 1024
elif unit == "Mb":
rate *= 1024**2
yield rate
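# Consumption sketch (illustrative; requires the iftop binary and usually root
# privileges; 'eth0' is a placeholder interface name):
#     for rate in iftop_traffic_data('eth0', interval=2):
#         print(rate)   # total rate in bits/s, Kb/Mb already rescaled above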
###########
# Tests #
###########
class FeatureTest(object):
"""
This is a base test.
"""
done = 0
lock = None
def __init__(self, test, workbench):
"""
@param test: The test string indicating the test to run. This string is
determined in the child classes.
@type test: string
@param workbench: A WorkBench object to use inside this test.
@type workbench: WorkBench
"""
self._test = test
self._workbench = workbench
self._bootstrap = self._workbench.get_bootstrap()
def _reset(self):
"""
Resets some static variables.
This method is most likely going to be called before each test.
"""
FeatureTest.done = 0
FeatureTest.lock = threading.Condition()
def run(self):
raise NotImplementedError('This method must be implemented.')
##################################
# PHT #
##################################
class PhtTest(FeatureTest):
"""TODO
"""
indexEntries = None
prefix = None
key = None
def __init__(self, test, workbench, opts):
"""
@param test: is one of the following:
- 'insert': indexes a considerable amount of data in
the PHT structure.
TODO
@type test: string
@param opts: Dictionary containing options for the test. Allowed
options are:
- 'num_keys': this specifies the number of keys to insert
in the PHT during the test.
@type opts: dict
"""
super(PhtTest, self).__init__(test, workbench)
self._num_keys = opts['num_keys'] if 'num_keys' in opts else 32
self._timer = True if 'timer' in opts else False
def _reset(self):
super(PhtTest, self)._reset()
PhtTest.indexEntries = []
@staticmethod
def lookupCb(vals, prefix):
PhtTest.indexEntries = list(vals)
PhtTest.prefix = prefix.decode()
DhtNetwork.log('Index name: <todo>')
DhtNetwork.log('Leaf prefix:', prefix)
for v in vals:
DhtNetwork.log('[ENTRY]:', v)
@staticmethod
def lookupDoneCb(ok):
DhtNetwork.log('[LOOKUP]:', PhtTest.key, "--", "success!" if ok else "Fail...")
with FeatureTest.lock:
FeatureTest.lock.notify()
@staticmethod
def insertDoneCb(ok):
DhtNetwork.log('[INSERT]:', PhtTest.key, "--", "success!" if ok else "Fail...")
with FeatureTest.lock:
FeatureTest.lock.notify()
@staticmethod
def drawTrie(trie_dict):
"""
Draws the trie structure of the PHT from a dictionary.
@param trie_dict: Dictionary of index entries (prefix -> entry).
@type trie_dict: dict
"""
prefixes = list(trie_dict.keys())
if len(prefixes) == 0:
return
edges = list([])
for prefix in prefixes:
for i in range(-1, len(prefix)-1):
u = prefix[:i+1]
x = ("." if i == -1 else u, u+"0")
y = ("." if i == -1 else u, u+"1")
if x not in edges:
edges.append(x)
if y not in edges:
edges.append(y)
# TODO: use a binary tree position layout...
# UPDATE : In a better way [change lib]
G = nx.Graph(sorted(edges, key=lambda x: len(x[0])))
plt.title("PHT: Tree")
pos=graphviz_layout(G,prog='dot')
nx.draw(G, pos, with_labels=True, node_color='white')
plt.show()
def run(self):
try:
if self._test == 'insert':
self._insertTest()
except Exception as e:
print(e)
finally:
self._bootstrap.resize(1)
###########
# Tests #
###########
@reset_before_test
def _insertTest(self):
"""TODO: Docstring for _massIndexTest.
"""
bootstrap = self._bootstrap
bootstrap.resize(2)
dht = bootstrap.get(1)
NUM_DIG = max(math.log(self._num_keys, 2)/4, 5) # at least 5 digit keys.
keyspec = collections.OrderedDict([('foo', NUM_DIG)])
pht = Pht(b'foo_index', keyspec, dht)
DhtNetwork.log('PHT has',
pht.MAX_NODE_ENTRY_COUNT,
'node'+ ('s' if pht.MAX_NODE_ENTRY_COUNT > 1 else ''),
'per leaf bucket.')
keys = [{
[_ for _ in keyspec.keys()][0] :
''.join(random.SystemRandom().choice(string.hexdigits)
for _ in range(NUM_DIG)).encode()
} for n in range(self._num_keys)]
all_entries = {}
# Index all entries.
for key in keys:
PhtTest.key = key
with FeatureTest.lock:
time_taken = timer(pht.insert, key, IndexValue(random_hash()), PhtTest.insertDoneCb)
if self._timer:
DhtNetwork.log('This insert step took : ', time_taken, 'second')
FeatureTest.lock.wait()
time.sleep(1)
# Recover entries now that the trie is complete.
for key in keys:
PhtTest.key = key
with FeatureTest.lock:
time_taken = timer(pht.lookup, key, PhtTest.lookupCb, PhtTest.lookupDoneCb)
if self._timer:
DhtNetwork.log('This lookup step took : ', time_taken, 'second')
FeatureTest.lock.wait()
all_entries[PhtTest.prefix] = [e.__str__()
for e in PhtTest.indexEntries]
for p in all_entries.keys():
DhtNetwork.log('All entries under prefix', p, ':')
DhtNetwork.log(all_entries[p])
PhtTest.drawTrie(all_entries)
##################################
# DHT #
##################################
class DhtFeatureTest(FeatureTest):
"""
This is a base dht test.
"""
#static variables used by class callbacks
successfullTransfer = lambda lv,fv: len(lv) == len(fv)
foreignNodes = None
foreignValues = None
def __init__(self, test, workbench):
super(DhtFeatureTest, self).__init__(test, workbench)
def _reset(self):
super(DhtFeatureTest, self)._reset()
DhtFeatureTest.foreignNodes = []
DhtFeatureTest.foreignValues = []
@staticmethod
def getcb(value):
vstr = value.__str__()[:100]
DhtNetwork.Log.log('[GET]: %s' % vstr + ("..." if len(vstr) > 100 else ""))
DhtFeatureTest.foreignValues.append(value)
return True
@staticmethod
def putDoneCb(ok, nodes):
with FeatureTest.lock:
if not ok:
DhtNetwork.Log.log("[PUT]: failed!")
FeatureTest.done -= 1
FeatureTest.lock.notify()
@staticmethod
def getDoneCb(ok, nodes):
with FeatureTest.lock:
if not ok:
DhtNetwork.Log.log("[GET]: failed!")
else:
for node in nodes:
if not node.getNode().isExpired():
DhtFeatureTest.foreignNodes.append(node.getId().toString())
FeatureTest.done -= 1
FeatureTest.lock.notify()
def _dhtPut(self, producer, _hash, *values):
with FeatureTest.lock:
for val in values:
vstr = val.__str__()[:100]
DhtNetwork.Log.log('[PUT]:', _hash.toString(), '->', vstr + ("..." if len(vstr) > 100 else ""))
FeatureTest.done += 1
producer.put(_hash, val, DhtFeatureTest.putDoneCb)
while FeatureTest.done > 0:
FeatureTest.lock.wait()
def _dhtGet(self, consumer, _hash):
DhtFeatureTest.foreignValues = []
DhtFeatureTest.foreignNodes = []
with FeatureTest.lock:
FeatureTest.done += 1
DhtNetwork.Log.log('[GET]:', _hash.toString())
consumer.get(_hash, DhtFeatureTest.getcb, DhtFeatureTest.getDoneCb)
while FeatureTest.done > 0:
FeatureTest.lock.wait()
def _gottaGetThemAllPokeNodes(self, consumer, hashes, nodes=None):
for h in hashes:
self._dhtGet(consumer, h)
if nodes is not None:
for n in DhtFeatureTest.foreignNodes:
nodes.add(n)
class PersistenceTest(DhtFeatureTest):
"""
This tests persistence of data on the network.
"""
def __init__(self, test, workbench, opts):
"""
@param test: is one of the following:
- 'mult_time': test persistence of data based on internal
OpenDHT storage maintenance timings.
- 'delete': test persistence of data upon deletion of
nodes.
- 'replace': replacing cluster successively.
@type test: string
OPTIONS
- dump_str_log: Enables storage log at test ending.
- keep_alive: Keeps the test running indefinitely. This may be useful
to manually analyse the network traffic during a longer
period.
- num_producers: Number of producers of data during a DHT test.
- num_values: Number of values to initialize the DHT with.
"""
# opts
super(PersistenceTest, self).__init__(test, workbench)
self._traffic_plot = True if 'traffic_plot' in opts else False
self._dump_storage = True if 'dump_str_log' in opts else False
self._op_plot = True if 'op_plot' in opts else False
self._keep_alive = True if 'keep_alive' in opts else False
self._num_producers = opts['num_producers'] if 'num_producers' in opts else None
self._num_values = opts['num_values'] if 'num_values' in opts else None
def _trigger_dp(self, trigger_nodes, _hash, count=1):
"""
Triggers data persistence over time. To do this, `count` nodes
are created with an id around the hash of a value.
@param trigger_nodes: List of created nodes. The nodes created in this
function are appended to this list.
@type trigger_nodes: list
@param _hash: Is the id of the value around which creating nodes.
@type _hash: InfoHash
@param count: The number of nodes to create with id around the id of
value.
@type count: int
"""
_hash_str = _hash.toString().decode()
_hash_int = int(_hash_str, 16)
for i in range(int(-count/2), int(count/2)+1):
_hash_str = '{:40x}'.format(_hash_int + i)
config = DhtConfig()
config.setNodeId(InfoHash(_hash_str.encode()))
n = DhtRunner()
n.run(config=config)
n.bootstrap(self._bootstrap.ip4,
str(self._bootstrap.port))
DhtNetwork.log('Node','['+_hash_str+']',
'started around', _hash.toString().decode()
if n.isRunning() else
'failed to start...'
)
trigger_nodes.append(n)
def _result(self, local_values, new_nodes):
bootstrap = self._bootstrap
if not DhtFeatureTest.successfullTransfer(local_values, DhtFeatureTest.foreignValues):
DhtNetwork.Log.log('[GET]: Only %s on %s values persisted.' %
(len(DhtFeatureTest.foreignValues), len(local_values)))
else:
DhtNetwork.Log.log('[GET]: All values successfully persisted.')
if DhtFeatureTest.foreignValues:
if new_nodes:
DhtNetwork.Log.log('Values are newly found on:')
for node in new_nodes:
DhtNetwork.Log.log(node)
if self._dump_storage:
DhtNetwork.Log.log('Dumping all storage log from '\
'hosting nodes.')
for proc in self._workbench.procs:
proc.sendClusterRequest(DhtNetworkSubProcess.DUMP_STORAGE_REQ, DhtFeatureTest.foreignNodes)
else:
DhtNetwork.Log.log("Values didn't reach new hosting nodes after shutdown.")
def run(self):
try:
if self._test == 'normal':
self._totallyNormalTest()
elif self._test == 'delete':
self._deleteTest()
elif self._test == 'replace':
self._replaceClusterTest()
elif self._test == 'mult_time':
self._multTimeTest()
else:
raise NameError("This test is not defined '" + self._test + "'")
except Exception as e:
traceback.print_tb(e.__traceback__)
print(type(e).__name__+':', e, file=sys.stderr)
finally:
if self._traffic_plot or self._op_plot:
plot_fname = "traffic-plot"
print('plot saved to', plot_fname)
plt.savefig(plot_fname)
self._bootstrap.resize(1)
###########
# Tests #
###########
@reset_before_test
def _totallyNormalTest(self):
"""
Reproduces a network in a realistic state.
"""
trigger_nodes = []
wb = self._workbench
bootstrap = self._bootstrap
# Value representing an ICE packet. Each ICE packet is around 1KB.
VALUE_SIZE = 1024
num_values_per_hash = self._num_values/wb.node_num if self._num_values else 5
# nodes and values counters
total_nr_values = 0
nr_nodes = wb.node_num
op_cv = threading.Condition()
# values string in string format. Used for sending cluster request.
hashes = [random_hash() for _ in range(wb.node_num)]
def normalBehavior(do, t):
nonlocal total_nr_values, op_cv
while True:
with op_cv:
do()
time.sleep(random.uniform(0.0, float(t)))
def putRequest():
nonlocal hashes, VALUE_SIZE, total_nr_values
lock = threading.Condition()
def dcb(success):
nonlocal total_nr_values, lock
if success:
total_nr_values += 1
DhtNetwork.Log.log("INFO: "+ str(total_nr_values)+" values put on the dht since begining")
with lock:
lock.notify()
with lock:
DhtNetwork.Log.warn("Random value put on the DHT...")
random.choice(wb.procs).sendClusterPutRequest(random.choice(hashes).toString(),
random_str_val(size=VALUE_SIZE).encode(),
done_cb=dcb)
lock.wait()
puts = threading.Thread(target=normalBehavior, args=(putRequest, 30.0/wb.node_num))
puts.daemon = True
puts.start()
def newNodeRequest():
nonlocal nr_nodes
lock = threading.Condition()
def dcb(success):
nonlocal nr_nodes, lock
nr_nodes += 1
DhtNetwork.Log.log("INFO: now "+str(nr_nodes)+" nodes on the dht")
with lock:
lock.notify()
with lock:
DhtNetwork.Log.warn("Node joining...")
random.choice(wb.procs).sendClusterRequest(DhtNetworkSubProcess.NEW_NODE_REQ, done_cb=dcb)
lock.wait()
connections = threading.Thread(target=normalBehavior, args=(newNodeRequest, 1*50.0/wb.node_num))
connections.daemon = True
connections.start()
def shutdownNodeRequest():
nonlocal nr_nodes
lock = threading.Condition()
def dcb(success):
nonlocal nr_nodes, lock
if success:
nr_nodes -= 1
DhtNetwork.Log.log("INFO: now "+str(nr_nodes)+" nodes on the dht")
else:
DhtNetwork.Log.err("Oops.. No node to shutodwn.")
with lock:
lock.notify()
with lock:
DhtNetwork.Log.warn("Node shutting down...")
random.choice(wb.procs).sendClusterRequest(DhtNetworkSubProcess.SHUTDOWN_NODE_REQ, done_cb=dcb)
lock.wait()
shutdowns = threading.Thread(target=normalBehavior, args=(shutdownNodeRequest, 1*60.0/wb.node_num))
shutdowns.daemon = True
shutdowns.start()
if self._traffic_plot:
display_traffic_plot('br'+wb.ifname)
else:
# blocks in matplotlib thread
while True:
plt.pause(3600)
@reset_before_test
def _deleteTest(self):
"""
It uses the Dht shutdown call from the API to gracefully finish the nodes one
after the other.
"""
bootstrap = self._bootstrap
ops_count = []
bootstrap.resize(3)
consumer = bootstrap.get(1)
producer = bootstrap.get(2)
myhash = random_hash()
local_values = [Value(b'foo'), Value(b'bar'), Value(b'foobar')]
self._dhtPut(producer, myhash, *local_values)
#checking if values were transfered
self._dhtGet(consumer, myhash)
if not DhtFeatureTest.successfullTransfer(local_values, DhtFeatureTest.foreignValues):
if DhtFeatureTest.foreignValues:
DhtNetwork.Log.log('[GET]: Only ', len(DhtFeatureTest.foreignValues) ,' on ',
len(local_values), ' values successfully put.')
else:
DhtNetwork.Log.log('[GET]: 0 values successfully put')
if DhtFeatureTest.foreignValues and DhtFeatureTest.foreignNodes:
DhtNetwork.Log.log('Values are found on :')
for node in DhtFeatureTest.foreignNodes:
DhtNetwork.Log.log(node)
for _ in range(max(1, int(self._workbench.node_num/32))):
DhtNetwork.Log.log('Removing all nodes hosting target values...')
cluster_ops_count = 0
for proc in self._workbench.procs:
DhtNetwork.Log.log('[REMOVE]: sending shutdown request to', proc)
lock = threading.Condition()
def dcb(success):
nonlocal lock
if not success:
DhtNetwork.Log.err("Failed to shutdown.")
with lock:
lock.notify()
with lock:
proc.sendClusterRequest(
DhtNetworkSubProcess.SHUTDOWN_NODE_REQ,
DhtFeatureTest.foreignNodes,
done_cb=dcb
)
lock.wait()
DhtNetwork.Log.log('sending message stats request')
def msg_dcb(stats):
nonlocal cluster_ops_count, lock
if stats:
cluster_ops_count += sum(stats[1:])
with lock:
lock.notify()
with lock:
proc.sendGetMessageStats(done_cb=msg_dcb)
lock.wait()
DhtNetwork.Log.log("5 seconds wait...")
time.sleep(5)
ops_count.append(cluster_ops_count/self._workbench.node_num)
# checking if values were transfered to new nodes
foreignNodes_before_delete = DhtFeatureTest.foreignNodes
DhtNetwork.Log.log('[GET]: trying to fetch persistent values')
self._dhtGet(consumer, myhash)
new_nodes = set(DhtFeatureTest.foreignNodes) - set(foreignNodes_before_delete)
self._result(local_values, new_nodes)
if self._op_plot:
display_plot(ops_count, color='blue')
else:
DhtNetwork.Log.log("[GET]: either couldn't fetch values or nodes hosting values...")
if traffic_plot_thread:
print("Traffic plot running for ever. Ctrl-c for stopping it.")
traffic_plot_thread.join()
@reset_before_test
def _replaceClusterTest(self):
"""
It replaces all clusters one after the other.
"""
clusters = 8
bootstrap = self._bootstrap
bootstrap.resize(3)
consumer = bootstrap.get(1)
producer = bootstrap.get(2)
myhash = random_hash()
local_values = [Value(b'foo'), Value(b'bar'), Value(b'foobar')]
self._dhtPut(producer, myhash, *local_values)
self._dhtGet(consumer, myhash)
initial_nodes = DhtFeatureTest.foreignNodes
DhtNetwork.Log.log('Replacing', clusters, 'random clusters successively...')
for n in range(clusters):
i = random.randint(0, len(self._workbench.procs)-1)
proc = self._workbench.procs[i]
DhtNetwork.Log.log('Replacing', proc)
proc.sendClusterRequest(DhtNetworkSubProcess.SHUTDOWN_CLUSTER_REQ)
self._workbench.stop_cluster(i)
self._workbench.start_cluster(i)
DhtNetwork.Log.log('[GET]: trying to fetch persistent values')
self._dhtGet(consumer, myhash)
new_nodes = set(DhtFeatureTest.foreignNodes) - set(initial_nodes)
self._result(local_values, new_nodes)
@reset_before_test
def _multTimeTest(self):
"""
Multiple put() calls are made from multiple nodes to multiple hashes
after which a set of 8 nodes is created around each hash in order to
enable storage maintenance on each node. Therefore, this test will wait 10
minutes for the nodes to trigger storage maintenance.
"""
trigger_nodes = []
bootstrap = self._bootstrap
N_PRODUCERS = self._num_producers if self._num_producers else 16
DP_TIMEOUT = 1
hashes = []
# Generating considerable amount of values of size 1KB.
VALUE_SIZE = 1024
NUM_VALUES = self._num_values if self._num_values else 50
values = [Value(random_str_val(size=VALUE_SIZE).encode()) for _ in range(NUM_VALUES)]
bootstrap.resize(N_PRODUCERS+2)
consumer = bootstrap.get(N_PRODUCERS+1)
producers = (bootstrap.get(n) for n in range(1,N_PRODUCERS+1))
for p in producers:
hashes.append(random_hash())
self._dhtPut(p, hashes[-1], *values)
once = True
while self._keep_alive or once:
nodes = set([])
self._gottaGetThemAllPokeNodes(consumer, hashes, nodes=nodes)
DhtNetwork.Log.log("Values are found on:")
for n in nodes:
DhtNetwork.Log.log(n)
DhtNetwork.Log.log("Creating 8 nodes around all of these hashes...")
for _hash in hashes:
self._trigger_dp(trigger_nodes, _hash, count=8)
DhtNetwork.Log.log('Waiting', DP_TIMEOUT+1, 'minutes for normal storage maintenance.')
time.sleep((DP_TIMEOUT+1)*60)
DhtNetwork.Log.log('Deleting old nodes from previous search.')
for proc in self._workbench.procs:
DhtNetwork.Log.log('[REMOVE]: sending delete request to', proc)
proc.sendClusterRequest(
DhtNetworkSubProcess.REMOVE_NODE_REQ,
nodes)
# new consumer (fresh cache)
bootstrap.resize(N_PRODUCERS+1)
bootstrap.resize(N_PRODUCERS+2)
consumer = bootstrap.get(N_PRODUCERS+1)
nodes_after_time = set([])
self._gottaGetThemAllPokeNodes(consumer, hashes, nodes=nodes_after_time)
self._result(values, nodes_after_time - nodes)
once = False
class PerformanceTest(DhtFeatureTest):
"""
Tests for general performance of dht operations.
"""
def __init__(self, test, workbench, opts):
"""
@param test: is one of the following:
- 'gets': multiple get operations and statistical results.
- 'delete': perform multiple put() operations followed
by targeted deletion of nodes hosting the values. Doing
so until half of the nodes on the network remain.
@type test: string
"""
super(PerformanceTest, self).__init__(test, workbench)
def run(self):
try:
if self._test == 'gets':
self._getsTimesTest()
elif self._test == 'delete':
self._delete()
else:
raise NameError("This test is not defined '" + self._test + "'")
except Exception as e:
traceback.print_tb(e.__traceback__)
print(type(e).__name__+':', e, file=sys.stderr)
finally:
self._bootstrap.resize(1)
###########
# Tests #
###########
@reset_before_test
def _getsTimesTest(self):
"""
Tests for performance of the DHT doing multiple get() operation.
"""
bootstrap = self._bootstrap
plt.ion()
fig, axes = plt.subplots(2, 1)
fig.tight_layout()
lax = axes[0]
hax = axes[1]
lines = None#ax.plot([])
#plt.ylabel('time (s)')
hax.set_ylim(0, 2)
# let the network stabilise
plt.pause(20)
#start = time.time()
times = []
lock = threading.Condition()
done = 0
def getcb(v):
nonlocal bootstrap
DhtNetwork.Log.log("found", v)
return True
def donecb(ok, nodes, start):
nonlocal bootstrap, lock, done, times
t = time.time()-start
with lock:
if not ok:
DhtNetwork.Log.log("failed !")
times.append(t)
done -= 1
lock.notify()
def update_plot():
nonlocal lines
while lines:
l = lines.pop()
l.remove()
del l
if len(times) > 1:
n, bins, lines = hax.hist(times, 100, normed=1, histtype='stepfilled', color='g')
hax.set_ylim(min(n), max(n))
lines.extend(lax.plot(times, color='blue'))
plt.draw()
def run_get():
nonlocal done
done += 1
start = time.time()
bootstrap.front().get(InfoHash.getRandom(), getcb, lambda ok, nodes: donecb(ok, nodes, start))
plt.pause(5)
plt.show()
update_plot()
times = []
for n in range(10):
self._workbench.replace_cluster()
plt.pause(2)
DhtNetwork.Log.log("Getting 50 random hashes succesively.")
for i in range(50):
with lock:
for _ in range(1):
run_get()
while done > 0:
lock.wait()
update_plot()
plt.pause(.1)
update_plot()
print("Took", np.sum(times), "mean", np.mean(times), "std", np.std(times), "min", np.min(times), "max", np.max(times))
print('GET calls timings benchmark test : DONE. ' \
'Close Matplotlib window for terminating the program.')
plt.ioff()
plt.show()
@reset_before_test
def _delete(self):
"""
Tests for performance of get() and put() operations on the network while
deleting around the target hash.
"""
bootstrap = self._bootstrap
bootstrap.resize(3)
consumer = bootstrap.get(1)
producer = bootstrap.get(2)
myhash = random_hash()
local_values = [Value(b'foo'), Value(b'bar'), Value(b'foobar')]
for _ in range(max(1, int(self._workbench.node_num/32))):
self._dhtGet(consumer, myhash)
DhtNetwork.Log.log("Waiting 15 seconds...")
time.sleep(15)
self._dhtPut(producer, myhash, *local_values)
#checking if values were transfered
self._dhtGet(consumer, myhash)
DhtNetwork.Log.log('Values are found on :')
for node in DhtFeatureTest.foreignNodes:
DhtNetwork.Log.log(node)
if not DhtFeatureTest.successfullTransfer(local_values, DhtFeatureTest.foreignValues):
if DhtFeatureTest.foreignValues:
DhtNetwork.Log.log('[GET]: Only ', len(DhtFeatureTest.foreignValues) ,' on ',
len(local_values), ' values successfully put.')
else:
DhtNetwork.Log.log('[GET]: 0 values successfully put')
DhtNetwork.Log.log('Removing all nodes hosting target values...')
for proc in self._workbench.procs:
DhtNetwork.Log.log('[REMOVE]: sending shutdown request to', proc)
proc.sendClusterRequest(
DhtNetworkSubProcess.SHUTDOWN_NODE_REQ,
DhtFeatureTest.foreignNodes
)
|
gpl-3.0
|
alceubissoto/gp-tcc
|
lessop/cgptk_mrlf.py
|
1
|
14446
|
import matplotlib
matplotlib.use("TkAgg")
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
from matplotlib.figure import Figure
import tkinter as tk
import random
import numpy as np
from itertools import product
import time
import logging
IND_MAX_SIZE = 1000
POP_SIZE = 5
MUTATION_CHANCE = 0.05
MAX_GENERATIONS = 100000
l_op = ['np.logical_and','np.logical_or','not np.logical_and','not np.logical_or']
N_INPUTS = 5
LARGE_FONT = ("Verdana", 12)
SMALL_FONT = ("Verdana", 8)
combList = []
answer = []
xListBest = []
yListBestFitness = []
xListAverage = []
yListAverageFitness = []
f = Figure(figsize=(6.7, 5), dpi=100)
a = f.add_subplot(111)
a.grid(True)
moment = time.strftime("%Y-%b-%d__%H_%M_%S",time.localtime())
logging.basicConfig(filename='statistics_' + moment + '.log', level=logging.INFO)
def animate():
global xListBest, xListAverage, yListBestFitness, yListAverageFitness, a
a.clear()
a.plot(xListBest, yListBestFitness, 'b-', label="Best")
a.plot(xListAverage, yListAverageFitness, 'r-', label="Average")
a.grid(True)
a.get_xaxis().tick_bottom()
a.get_yaxis().tick_left()
a.set_xlabel('Number of Generations')
a.set_ylabel('Cost')
a.set_title('Population Evolution')
f.canvas.draw()
class Gui(tk.Tk):
def __init__(self, *args, **kwargs):
tk.Tk.__init__(self, *args, **kwargs)
label = tk.Label(self, text="Population Size", font=LARGE_FONT)
label.grid(sticky=tk.W)
label = tk.Label(self, text="Default: " + str(POP_SIZE), font=SMALL_FONT)
label.grid(sticky=tk.W)
self.entry0 = tk.Entry(self, bd=5)
self.entry0.grid(row=0, column=0)
label = tk.Label(self, text="Individual Max Size", font=LARGE_FONT)
label.grid(sticky=tk.W, row=0, column=1)
label = tk.Label(self, text="Default: " + str(IND_MAX_SIZE), font=SMALL_FONT)
label.grid(sticky=tk.W, row=1, column=1)
self.entry1 = tk.Entry(self, bd=5)
self.entry1.grid(row=0, column=1)
label = tk.Label(self, text="Number of Inputs", font=LARGE_FONT)
label.grid(sticky=tk.W)
label = tk.Label(self, text="Default: " + str(N_INPUTS), font=SMALL_FONT)
label.grid(sticky=tk.W)
self.entry2 = tk.Entry(self, bd=5)
self.entry2.grid(row=2, column=0)
label = tk.Label(self, text="Number of Generations", font=LARGE_FONT)
label.grid(sticky=tk.W)
label = tk.Label(self, text="Default: " + str(MAX_GENERATIONS), font=SMALL_FONT)
label.grid(sticky=tk.W)
self.entry4 = tk.Entry(self, bd=5)
self.entry4.grid(row=4, column=0)
label = tk.Label(self, text="Mutation Probability", font=LARGE_FONT)
label.grid(sticky=tk.W, row=4, column=1)
label = tk.Label(self, text="Default: " + str(MUTATION_CHANCE), font=SMALL_FONT)
label.grid(sticky=tk.W, row=5, column=1)
self.entry5 = tk.Entry(self, bd=5)
self.entry5.grid(row=4, column=1)
button = tk.Button(self, text="Start", command=self.startGeneticProgramming, font=LARGE_FONT)
button.grid(row=12)
canvas = FigureCanvasTkAgg(f, self)
canvas.show()
canvas.get_tk_widget().grid(row=13, column=0)
self.text = tk.Text(self, wrap="word")
self.text.grid(row=13, column=1)
def write(self, txt):
self.text.insert(tk.END, str(txt))
def startGeneticProgramming(self):
# for index in range(0, 20):
print (self.entry1.get(), self.entry2.get())
global IND_MAX_SIZE, POP_SIZE, N_INPUTS, MAX_GENERATIONS, MUTATION_CHANCE, l_op, l_in, combList, answer
if self.entry0.get() != "":
POP_SIZE = int(self.entry0.get())
if self.entry1.get() != "":
IND_MAX_SIZE = int(self.entry1.get())
if self.entry2.get() != "":
N_INPUTS = int(self.entry2.get())
if self.entry4.get() != "":
MAX_GENERATIONS = int(self.entry4.get())
if self.entry5.get() != "":
MUTATION_CHANCE = float(self.entry5.get())
print('ninputs', N_INPUTS)
l_in = createTerminalList(N_INPUTS)
combList = createCombList(N_INPUTS)
answer = createParityAnswer()
print(answer)
l_op = ['np.logical_and','np.logical_or','not np.logical_and','not np.logical_or','np.logical_xor', 'not np.logical_xor']
#logging.info("\n\nTree Max Size: " + str(TREE_MAX_SIZE) + ", " +
# "Populations Size: " + str(POPULATION_SIZE) + ", " +
# "Number of Inputs: " + str(N_INPUTS) + ", " +
# "Tournament Size: " + str(TOURNAMENT_SIZE) + ", " +
# "Number of Generations: " + str(NUMBER_OF_GENERATIONS) + ", " +
# "Mutation Probability: " + str(MUTATION_PROBABILITY) + ", " +
# "Start Time: " + str(datetime.datetime.now()))
best_individual = reproduction()
self.write("\nBEST INDIVIDUAL: \n" + str(best_individual['genotype']) + "\nFITNESS: " + str(best_individual['fitness']))
def createCombList(number_inputs):
return ["".join(seq) for seq in product("01", repeat=number_inputs)]
def createTerminalList(number_inputs):
lT = []
for i in range(number_inputs):
lT.append('in' + str(i))
return lT
def createParityAnswer():
PARITY_SIZE_M = 2 ** N_INPUTS
inputs = [None] * PARITY_SIZE_M
outputs = [None] * PARITY_SIZE_M
for i in range(PARITY_SIZE_M):
inputs[i] = [None] * N_INPUTS
value = i
dividor = PARITY_SIZE_M
parity = 1
for j in range(N_INPUTS):
dividor /= 2
if value >= dividor:
inputs[i][j] = 1
parity = int(not parity)
value -= dividor
else:
inputs[i][j] = 0
outputs[i] = parity
return outputs
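# Worked sketch (assuming N_INPUTS = 2): createCombList(2) gives
# ['00', '01', '10', '11'] and createParityAnswer() returns [1, 0, 0, 1],
# i.e. the target output is 1 exactly when an even number of input bits are set.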
def evaluateIndividual(used_nodes, output):
result = list()
i = 0
input_values = []
# For each combination of 1s and 0s, evaluate:
for combination in combList:
for j in range(len(combination)):
# Prepare the array A to pass the correct value of combination.
# Example: (0, 0, 0)
# (0, 0, 1) ...
input_values.append(int(combination[j]))
# evaluateRec(A) is responsible for do the actual evaluation, with the A values passed that were crafted.
result.append(decode(used_nodes, input_values, output))
input_values[:] = []
i += 1
return result
def generateInd():
new_ind = {}
new_ind['genotype'] = []
possible_values = list(range(0, len(l_in)))
operators = list(range(0, len(l_op)))
last = len(l_in)-1;
for i in range(0, random.randrange(1,IND_MAX_SIZE)):
new_ind['genotype'].append(random.choice(possible_values))
new_ind['genotype'].append(random.choice(possible_values))
new_ind['genotype'].append(random.choice(operators))
last+=1
possible_values.append(last)
new_ind['output'] = random.choice(possible_values[N_INPUTS:])
return new_ind
def selectUsedNodes(ind):
out = ind['output']
gen = ind['genotype']
to_eval=[]
to_eval.append(out)
evaluated = list(range(0, len(l_in)))
used_nodes = {}
inicial_position = len(l_in)
while(len(to_eval)>0):
#print("to_eval", to_eval)
if(to_eval[0] in evaluated):
to_eval.pop(0)
else:
# node, (how to obtain its value (gen, gen, gen))
#used_nodesput.append(to_eval[0])
tmp = []
for i in range(0,3):
value = gen[(to_eval[0]-inicial_position)*3 + i]
tmp.append(value)
if((value not in evaluated) and (i!=2)):
to_eval.append(value)
used_nodes[to_eval[0]] = tmp
evaluated.append(to_eval[0])
return used_nodes
def decode(used_nodes, input_values, output):
tmp = []
iterations = int(len(used_nodes))
l_known = {}
#inputs
for i in range(0, len(input_values)):
l_known[i]=bool(input_values[i])
#evaluations
for key in sorted(used_nodes.keys()):
l_known[key] = eval(str(l_op[used_nodes[key][2]])+'('+str(l_known[used_nodes[key][0]])\
+', '+str(l_known[used_nodes[key][1]])+')')
#print(l_known)
return l_known[output]
def calcFitness(used_nodes, output_node):
fitness = 0.0
evaluation = evaluateIndividual(used_nodes, output_node)
#print(evaluation)
for i in range(0, len(answer)):
if answer[i] == evaluation[i]:
fitness += 1.0
return fitness
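# Note (sketch): fitness counts how many of the 2**N_INPUTS truth-table rows the
# decoded circuit reproduces, so a perfect individual scores len(answer)
# (32 rows for the default N_INPUTS = 5); reproduction() stops at that score.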
def createPopulation():
population = []
for i in range(0, POP_SIZE):
temp_ind = generateInd()
used_nodes = selectUsedNodes(temp_ind)
temp_ind['fitness'] = calcFitness(used_nodes, temp_ind['output'])
population.append(temp_ind)
return population
def sortPopulation(population):
newlist = sorted(population, key=lambda k: k['fitness'], reverse=True)
return newlist
def mutate(individual):
possible_values = list(range(0, int(len(individual['genotype'])/3)))
operators = list(range(0, len(l_op)))
ind = individual
active_nodes = selectUsedNodes(individual)
new_ind={}
mutated_genotype = []
index = 0
# Mutate Genotype
#print('IND', ind)
node_to_change = random.choice(list(active_nodes.keys()))
gene_to_change = random.randint(0, 2)
#print('ntc: ', node_to_change)
#print('gtc: ', gene_to_change)
which_gene = -1
for i in range(0, len(ind['genotype'])):
#print(ind['genotype'][i])
#print('ifclause: ', int(i/3)+N_INPUTS)
#print('which_gene', which_gene)
if (int(i/3)+N_INPUTS == node_to_change):
which_gene+=1
if(which_gene == gene_to_change):
if(gene_to_change==2):
#print('ntc: ', node_to_change)
#print('gtc: ', gene_to_change)
value_op = random.choice(operators)
#print('valueop', value_op)
mutated_genotype.append(value_op)
else:
#print('ntc: ', node_to_change)
#print('gtc: ', gene_to_change)
value = random.choice(possible_values[0:int(i/3)+N_INPUTS])
#print('value', value)
mutated_genotype.append(value)
which_gene=1000
else:
mutated_genotype.append(ind['genotype'][i])
else:
if(random.random() < MUTATION_CHANCE):
if((i+1)%3 == 0):
mutated_genotype.append(random.choice(operators))
else:
mutated_genotype.append(random.choice(possible_values[0:int((i+1)/3)+N_INPUTS]))
else:
mutated_genotype.append(ind['genotype'][i])
new_ind['genotype'] = mutated_genotype
# Mutate Output
if(random.random() < MUTATION_CHANCE):
new_ind['output'] = random.choice(possible_values)
else:
new_ind['output'] = individual['output']
#print('NEW IND', new_ind)
# Calculate new Fitness
used_nodes = selectUsedNodes(new_ind)
#print('output_m', individual['output'])
#print('output_new', new_ind['output'])
#print('un', used_nodes)
#print('mutated', new_ind['genotype'])
#print('before', individual['genotype'])
new_ind['fitness'] = calcFitness(used_nodes, new_ind['output'])
return new_ind
def reproduction():
global IND_MAX_SIZE, POP_SIZE, N_INPUTS, MAX_GENERATIONS, MUTATION_CHANCE, l_op, l_in, combList, answer
print(answer, combList, N_INPUTS)
pop = createPopulation()
sorted_pop = sortPopulation(pop)
for i in range(0, MAX_GENERATIONS):
new_pop=[]
new_pop.append(sorted_pop[0])
for j in range(1, POP_SIZE):
new_pop.append(mutate(sorted_pop[0]))
sorted_pop = sortPopulation(new_pop)
print('gen: ', i, ', fit: ', sorted_pop[0]['fitness'])
#print(sorted_pop)
# Animation control
if i % 10 == 0:
average = 0.0
for index in range(0, len(sorted_pop)):
average += sorted_pop[index]['fitness']
average = average / POP_SIZE
xListBest.append(i)
xListAverage.append(i)
yListBestFitness.append(sorted_pop[0]['fitness'])
yListAverageFitness.append(average)
animate()
if (sorted_pop[0]['fitness']==len(answer)):
xListBest.append(i)
yListBestFitness.append(sorted_pop[0]['fitness'])
average = 0.0
for index in range(0, len(sorted_pop)):
average += sorted_pop[index]['fitness']
average = average / POP_SIZE
xListAverage.append(i)
yListAverageFitness.append(average)
animate()
print('generations to success: ', i)
logging.info("Best Fitness: " + str(sorted_pop[0]['fitness']) + ", " +
"Generations: " + str(i) + "\n")
break
return sorted_pop[0]
##l_in = createTerminalList(N_INPUTS)
##combList = createCombList(N_INPUTS)
##answer = createParityAnswer()
#print('answer', answer)
#new_ind = generateInd()
#used_nodes = selectUsedNodes(new_ind) #selectUsedNodes
#print(new_ind)
#node_to_change = random.choice(list(used_nodes.keys()))
#print('used_nodes', used_nodes)
#print('keys', used_nodes.keys())
#print(node_to_change)
#print('sorted', sorted(used_nodes.keys()))
#print('output', new_ind['output'])
#decode(used_nodes, new_ind['output'])
#print('evaluation', evaluateIndividual(combList, used_nodes, new_ind['output']))
#print('fitness', calcFitness(used_nodes, new_ind['output']))
#population = createPopulation()
#sortedpop = sortPopulation(population)
#print('new_ind1', new_ind['genotype'])
#mutated = mutate(new_ind)
#print('new_ind2', new_ind['genotype'])
#print('mutated1', mutated['genotype'])
#print('mutated', mutated)
##best_individual = reproduction()
##print(best_individual)
app = Gui()
app.mainloop()
#l_op = ['and','or','xor']
|
mit
|
qiu997018209/MachineLearning
|
机器学习常用算法公式推导及分析与代码实现/贝叶斯/Kaggle影评观众情绪分类/kaggle_movie_reviewer.py
|
1
|
4564
|
#coding:utf-8
'''
Created on Apr 10, 2017
@author: ubuntu
'''
import re #regular expressions
from bs4 import BeautifulSoup #HTML tag handling
import pandas as pd
def review_to_wordlist(review):
'''
Convert an IMDB review into a sequence of words
'''
#strip the HTML tags and keep the text content
review_text = BeautifulSoup(review).getText()
#keep only letters with a regular expression: inside [...] the ^ negates the character class
review_text = re.sub("[^a-zA-Z]"," ",review_text)
#lowercase all the words and split them into a word list
words = review_text.lower().split()
return words
#header=0 means the first line of the file holds the column names, delimiter="\t" means fields are tab-separated, quoting=3 tells pandas to ignore double quotes
train=pd.read_csv("labeledTrainData.tsv",header=0,delimiter="\t",quoting=3)
test=pd.read_csv("testData.tsv",header=0,delimiter="\t",quoting=3)
# extract the sentiment labels, positive or negative
y_train = train['sentiment']
# convert both the training and the test data into word lists
train_data = []
for i in xrange(0,len(train["review"])):
#join the words of each review back together with spaces
train_data.append(" ".join(review_to_wordlist(train["review"][i])))
test_data = []
for i in xrange(0,len(test["review"])):
train_data.append(" ".join(review_to_wordlist(test["review"][i])))
'''
Feature engineering
A quick note on TF-IDF: TF-IDF tends to filter out common words and keep the important ones; a term's weight grows with the number of times it appears in a document, but is offset by how often it appears across the whole corpus
TF: term frequency
IDF: inverse document frequency
Term frequency (TF) is the number of times a word occurs divided by the total number of words in the document.
If a document contains 100 words in total and the word "cow" appears 3 times, the term frequency of "cow" in that document is 3/100 = 0.03.
One way to compute document frequency (DF) is to count how many documents contain the word "cow" and divide by the total number of documents in the collection.
So if "cow" appears in 1,000 documents and there are 10,000,000 documents in total, the inverse document frequency is log(10,000,000 / 1,000) = 4.
The final TF-IDF score is 0.03 * 4 = 0.12
'''
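# Quick sanity check of the worked example above (illustrative only):
#     import math
#     tf = 3 / 100.0                          # term frequency of "cow"
#     idf = math.log10(10000000 / 1000.0)     # inverse document frequency = 4
#     print(tf * idf)                         # 0.12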
from sklearn.feature_extraction.text import TfidfVectorizer as TFIV
# initialize the TFIV object, drop stop words, add a bigram language model
#a bigram (2-gram) model lets the weight of a word depend on the word right before it
#parameter reference: http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html
tfv = TFIV(min_df=3, max_features=None, strip_accents='unicode', analyzer='word',token_pattern=r'\w{1,}', ngram_range=(1, 2), use_idf=1,smooth_idf=1,sublinear_tf=1, stop_words = 'english')
# combine the training and test sets so the TF-IDF vectorizer is fitted on all the text
X_all = train_data + test_data
len_train = len(train_data)
# this step is a bit slow, go grab a cup of tea while it runs...
tfv.fit(X_all)
#
X_all = tfv.transform(X_all)
# split back into the training and test portions
X = X_all[:len_train]
X_test = X_all[len_train:]
# multinomial naive Bayes
from sklearn.naive_bayes import MultinomialNB as MNB
model_NB = MNB()
model_NB.fit(X, y_train) #feed the feature data straight in
MNB(alpha=1.0, class_prior=None, fit_prior=True)
from sklearn.cross_validation import cross_val_score
import numpy as np
print "多项式贝叶斯分类器20折交叉验证得分: ", np.mean(cross_val_score(model_NB, X, y_train, cv=20, scoring='roc_auc'))
# 多项式贝叶斯分类器20折交叉验证得分: 0.950837239
'''
# let's also try logistic regression
from sklearn.linear_model import LogisticRegression as LR
from sklearn.grid_search import GridSearchCV
# set up the grid search parameters
grid_values = {'C':[30]}
# use roc_auc as the scoring metric
model_LR = GridSearchCV(LR(penalty = 'L2', dual = True, random_state = 0), grid_values, scoring = 'roc_auc', cv = 20)
# feed in the data
model_LR.fit(X,y_train)
# 20-fold cross-validation, now the long wait begins...
GridSearchCV(cv=20, estimator=LR(C=1.0, class_weight=None, dual=True,
fit_intercept=True, intercept_scaling=1, penalty='L2', random_state=0, tol=0.0001),
fit_params={}, iid=True, loss_func=None, n_jobs=1,
param_grid={'C': [30]}, pre_dispatch='2*n_jobs', refit=True,
score_func=None, scoring='roc_auc', verbose=0)
#print the results
print model_LR.grid_scores_
'''
'''
On problems like this, naive Bayes gets results close to logistic regression but trains far faster; it is genuinely simple and efficient.
'''
|
apache-2.0
|
acbart/megabus-wanderlust
|
analyze.py
|
1
|
3744
|
import sys, os
import networkx as nx
import matplotlib.pyplot as plt
from heapq import heappop, heappush
from random import randint
import time, datetime
from requests import ConnectionError
from megabus import Trek, Stop, Trip
from megabus import cache_search, report, day_range, report_append, long_trip
from util import load_json, safe_str
DATA = load_json("data/location_data.json")
america = nx.Graph()
for edge in DATA['weighted_destinations']:
america.add_edge(edge["origin"], edge["destination"],
weight=edge["duration"])
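# Sketch (illustrative): with the weighted graph built above,
#     nx.dijkstra_path(america, "Christiansburg, VA", "Newark, DE")
# returns the city sequence that minimizes total ride duration; find_route()
# below wraps this call and hands the route to long_trip() for fare lookups.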
def find_route(source="Christiansburg, VA", target="Newark, DE",
start= "12/20/2014", end = "12/27/2014"):
route = nx.dijkstra_path(america,source,target)
route = [str(r) for r in route]
long_trip("{}-{}".format(source,target), route, (start, end))
return route
find_route()
sys.exit()
def wander(start_date, end_date, start_city, no_loop_back=True, money_limit = None):
filename = "{} _ {}.txt".format(start_city.replace(",",""), start_date.replace("/","-"))
if type(start_date) == str:
start_date, end_date = map(lambda x : datetime.datetime.strptime(x, "%m/%d/%Y"), (start_date, end_date))
incomplete_routes = [(0, Trek(Stop(start_date, start_city)))]
#finished_routes = []
id = 0
file = open(filename, 'w')
while incomplete_routes:
try:
cost, trek = heappop(incomplete_routes)
print "Trying", trek.route(), cost
next_stops = G[trek.arrival.city]
next_trips = []
for stop, distance in next_stops.iteritems():
for date in day_range(trek.arrival.date, end_date, days=1):
next_trips.extend(cache_search(trek.arrival.city,
stop,
date))
if trek.arrival.city == start_city and trek:
report_append(file, id, trek)
id+= 1
for trip in next_trips:
if trip.arrival.date <= end_date and trek.arrival.date < trip.departure.date:
if trek.arrival.city != trip.arrival.city:
new_trek = trek.add_trip(trip)
if money_limit is None or new_trek.cost <= money_limit:
if not no_loop_back or (trek.departure.city not in [stop.city for stop in new_trek.route()[1:-1]]):
if trip.arrival.city == start_city:
#print "Found", new_trek.route()
report_append(file, id, new_trek)
id+= 1
heappush(incomplete_routes, (new_trek.value, new_trek))
except ConnectionError, c:
print c
heappush(incomplete_routes, (new_trek.value, trek))
time.sleep(1)
#finished_routes.sort(key=lambda trek: trek.cost / float(len(trek)))
file.close()
#report("test", finished_routes)
#return finished_routes
# Given a starting point
# Find all paths possible from that point
# Wish to minimize cost
# Wish to Maximize distance
# Limit travel to certain time range
# Must end where you started
print wander("10/4/2013", "10/6/2013", "Christiansburg, VA", money_limit=30)
#print nx.bfs_successors(G, "Newark, DE")
#from megabus import long_trip
#long_trip("test_route.txt", ROUTE, ("8/5/2013", "8/7/2013"))
#pos=nx.spring_layout(G)
#nx.draw_networkx_labels(G,pos,font_size=12,font_family='sans-serif')
#nx.draw_networkx_nodes(G,pos,node_size=700)
#nx.draw_networkx_edges(G,pos,width=6)
#plt.axis('off')
#plt.savefig("us-map.png")
#plt.show()
|
lgpl-2.1
|
chenyyx/scikit-learn-doc-zh
|
examples/en/model_selection/plot_confusion_matrix.py
|
63
|
3231
|
"""
================
Confusion matrix
================
Example of confusion matrix usage to evaluate the quality
of the output of a classifier on the iris data set. The
diagonal elements represent the number of points for which
the predicted label is equal to the true label, while
off-diagonal elements are those that are mislabeled by the
classifier. The higher the diagonal values of the confusion
matrix the better, indicating many correct predictions.
The figures show the confusion matrix with and without
normalization by class support size (number of elements
in each class). This kind of normalization can be
useful in the case of class imbalance, giving a more
visual interpretation of which class is being misclassified.
Here the results are not as good as they could be as our
choice for the regularization parameter C was not the best.
In real life applications this parameter is usually chosen
using :ref:`grid_search`.
"""
print(__doc__)
import itertools
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
# import some data to play with
iris = datasets.load_iris()
X = iris.data
y = iris.target
class_names = iris.target_names
# Split the data into a training set and a test set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# Run classifier, using a model that is too regularized (C too low) to see
# the impact on the results
classifier = svm.SVC(kernel='linear', C=0.01)
y_pred = classifier.fit(X_train, y_train).predict(X_test)
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# Compute confusion matrix
cnf_matrix = confusion_matrix(y_test, y_pred)
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=class_names,
title='Confusion matrix, without normalization')
# Plot normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=class_names, normalize=True,
title='Normalized confusion matrix')
plt.show()
|
gpl-3.0
|
marcocaccin/scikit-learn
|
sklearn/datasets/tests/test_svmlight_format.py
|
228
|
11221
|
from bz2 import BZ2File
import gzip
from io import BytesIO
import numpy as np
import os
import shutil
from tempfile import NamedTemporaryFile
from sklearn.externals.six import b
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import raises
from sklearn.utils.testing import assert_in
import sklearn
from sklearn.datasets import (load_svmlight_file, load_svmlight_files,
dump_svmlight_file)
currdir = os.path.dirname(os.path.abspath(__file__))
datafile = os.path.join(currdir, "data", "svmlight_classification.txt")
multifile = os.path.join(currdir, "data", "svmlight_multilabel.txt")
invalidfile = os.path.join(currdir, "data", "svmlight_invalid.txt")
invalidfile2 = os.path.join(currdir, "data", "svmlight_invalid_order.txt")
def test_load_svmlight_file():
X, y = load_svmlight_file(datafile)
# test X's shape
assert_equal(X.indptr.shape[0], 7)
assert_equal(X.shape[0], 6)
assert_equal(X.shape[1], 21)
assert_equal(y.shape[0], 6)
# test X's non-zero values
for i, j, val in ((0, 2, 2.5), (0, 10, -5.2), (0, 15, 1.5),
(1, 5, 1.0), (1, 12, -3),
(2, 20, 27)):
assert_equal(X[i, j], val)
# tests X's zero values
assert_equal(X[0, 3], 0)
assert_equal(X[0, 5], 0)
assert_equal(X[1, 8], 0)
assert_equal(X[1, 16], 0)
assert_equal(X[2, 18], 0)
# test can change X's values
X[0, 2] *= 2
assert_equal(X[0, 2], 5)
# test y
assert_array_equal(y, [1, 2, 3, 4, 1, 2])
def test_load_svmlight_file_fd():
# test loading from file descriptor
X1, y1 = load_svmlight_file(datafile)
fd = os.open(datafile, os.O_RDONLY)
try:
X2, y2 = load_svmlight_file(fd)
assert_array_equal(X1.data, X2.data)
assert_array_equal(y1, y2)
finally:
os.close(fd)
def test_load_svmlight_file_multilabel():
X, y = load_svmlight_file(multifile, multilabel=True)
assert_equal(y, [(0, 1), (2,), (), (1, 2)])
def test_load_svmlight_files():
X_train, y_train, X_test, y_test = load_svmlight_files([datafile] * 2,
dtype=np.float32)
assert_array_equal(X_train.toarray(), X_test.toarray())
assert_array_equal(y_train, y_test)
assert_equal(X_train.dtype, np.float32)
assert_equal(X_test.dtype, np.float32)
X1, y1, X2, y2, X3, y3 = load_svmlight_files([datafile] * 3,
dtype=np.float64)
assert_equal(X1.dtype, X2.dtype)
assert_equal(X2.dtype, X3.dtype)
assert_equal(X3.dtype, np.float64)
def test_load_svmlight_file_n_features():
X, y = load_svmlight_file(datafile, n_features=22)
    # test X's shape
assert_equal(X.indptr.shape[0], 7)
assert_equal(X.shape[0], 6)
assert_equal(X.shape[1], 22)
# test X's non-zero values
for i, j, val in ((0, 2, 2.5), (0, 10, -5.2),
(1, 5, 1.0), (1, 12, -3)):
assert_equal(X[i, j], val)
# 21 features in file
assert_raises(ValueError, load_svmlight_file, datafile, n_features=20)
def test_load_compressed():
X, y = load_svmlight_file(datafile)
with NamedTemporaryFile(prefix="sklearn-test", suffix=".gz") as tmp:
tmp.close() # necessary under windows
with open(datafile, "rb") as f:
shutil.copyfileobj(f, gzip.open(tmp.name, "wb"))
Xgz, ygz = load_svmlight_file(tmp.name)
# because we "close" it manually and write to it,
# we need to remove it manually.
os.remove(tmp.name)
assert_array_equal(X.toarray(), Xgz.toarray())
assert_array_equal(y, ygz)
with NamedTemporaryFile(prefix="sklearn-test", suffix=".bz2") as tmp:
tmp.close() # necessary under windows
with open(datafile, "rb") as f:
shutil.copyfileobj(f, BZ2File(tmp.name, "wb"))
Xbz, ybz = load_svmlight_file(tmp.name)
# because we "close" it manually and write to it,
# we need to remove it manually.
os.remove(tmp.name)
assert_array_equal(X.toarray(), Xbz.toarray())
assert_array_equal(y, ybz)
@raises(ValueError)
def test_load_invalid_file():
load_svmlight_file(invalidfile)
@raises(ValueError)
def test_load_invalid_order_file():
load_svmlight_file(invalidfile2)
@raises(ValueError)
def test_load_zero_based():
f = BytesIO(b("-1 4:1.\n1 0:1\n"))
load_svmlight_file(f, zero_based=False)
def test_load_zero_based_auto():
data1 = b("-1 1:1 2:2 3:3\n")
data2 = b("-1 0:0 1:1\n")
f1 = BytesIO(data1)
X, y = load_svmlight_file(f1, zero_based="auto")
assert_equal(X.shape, (1, 3))
f1 = BytesIO(data1)
f2 = BytesIO(data2)
X1, y1, X2, y2 = load_svmlight_files([f1, f2], zero_based="auto")
assert_equal(X1.shape, (1, 4))
assert_equal(X2.shape, (1, 4))
def test_load_with_qid():
# load svmfile with qid attribute
data = b("""
3 qid:1 1:0.53 2:0.12
2 qid:1 1:0.13 2:0.1
7 qid:2 1:0.87 2:0.12""")
X, y = load_svmlight_file(BytesIO(data), query_id=False)
assert_array_equal(y, [3, 2, 7])
assert_array_equal(X.toarray(), [[.53, .12], [.13, .1], [.87, .12]])
res1 = load_svmlight_files([BytesIO(data)], query_id=True)
res2 = load_svmlight_file(BytesIO(data), query_id=True)
for X, y, qid in (res1, res2):
assert_array_equal(y, [3, 2, 7])
assert_array_equal(qid, [1, 1, 2])
assert_array_equal(X.toarray(), [[.53, .12], [.13, .1], [.87, .12]])
@raises(ValueError)
def test_load_invalid_file2():
load_svmlight_files([datafile, invalidfile, datafile])
@raises(TypeError)
def test_not_a_filename():
# in python 3 integers are valid file opening arguments (taken as unix
# file descriptors)
load_svmlight_file(.42)
@raises(IOError)
def test_invalid_filename():
load_svmlight_file("trou pic nic douille")
def test_dump():
Xs, y = load_svmlight_file(datafile)
Xd = Xs.toarray()
# slicing a csr_matrix can unsort its .indices, so test that we sort
# those correctly
Xsliced = Xs[np.arange(Xs.shape[0])]
for X in (Xs, Xd, Xsliced):
for zero_based in (True, False):
for dtype in [np.float32, np.float64, np.int32]:
f = BytesIO()
# we need to pass a comment to get the version info in;
# LibSVM doesn't grok comments so they're not put in by
# default anymore.
dump_svmlight_file(X.astype(dtype), y, f, comment="test",
zero_based=zero_based)
f.seek(0)
comment = f.readline()
try:
comment = str(comment, "utf-8")
except TypeError: # fails in Python 2.x
pass
assert_in("scikit-learn %s" % sklearn.__version__, comment)
comment = f.readline()
try:
comment = str(comment, "utf-8")
except TypeError: # fails in Python 2.x
pass
assert_in(["one", "zero"][zero_based] + "-based", comment)
X2, y2 = load_svmlight_file(f, dtype=dtype,
zero_based=zero_based)
assert_equal(X2.dtype, dtype)
assert_array_equal(X2.sorted_indices().indices, X2.indices)
if dtype == np.float32:
assert_array_almost_equal(
# allow a rounding error at the last decimal place
Xd.astype(dtype), X2.toarray(), 4)
else:
assert_array_almost_equal(
# allow a rounding error at the last decimal place
Xd.astype(dtype), X2.toarray(), 15)
assert_array_equal(y, y2)
def test_dump_multilabel():
X = [[1, 0, 3, 0, 5],
[0, 0, 0, 0, 0],
[0, 5, 0, 1, 0]]
y = [[0, 1, 0], [1, 0, 1], [1, 1, 0]]
f = BytesIO()
dump_svmlight_file(X, y, f, multilabel=True)
f.seek(0)
# make sure it dumps multilabel correctly
assert_equal(f.readline(), b("1 0:1 2:3 4:5\n"))
assert_equal(f.readline(), b("0,2 \n"))
assert_equal(f.readline(), b("0,1 1:5 3:1\n"))
def test_dump_concise():
one = 1
two = 2.1
three = 3.01
exact = 1.000000000000001
# loses the last decimal place
almost = 1.0000000000000001
X = [[one, two, three, exact, almost],
[1e9, 2e18, 3e27, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0]]
y = [one, two, three, exact, almost]
f = BytesIO()
dump_svmlight_file(X, y, f)
f.seek(0)
# make sure it's using the most concise format possible
assert_equal(f.readline(),
b("1 0:1 1:2.1 2:3.01 3:1.000000000000001 4:1\n"))
assert_equal(f.readline(), b("2.1 0:1000000000 1:2e+18 2:3e+27\n"))
assert_equal(f.readline(), b("3.01 \n"))
assert_equal(f.readline(), b("1.000000000000001 \n"))
assert_equal(f.readline(), b("1 \n"))
f.seek(0)
# make sure it's correct too :)
X2, y2 = load_svmlight_file(f)
assert_array_almost_equal(X, X2.toarray())
assert_array_equal(y, y2)
def test_dump_comment():
X, y = load_svmlight_file(datafile)
X = X.toarray()
f = BytesIO()
ascii_comment = "This is a comment\nspanning multiple lines."
dump_svmlight_file(X, y, f, comment=ascii_comment, zero_based=False)
f.seek(0)
X2, y2 = load_svmlight_file(f, zero_based=False)
assert_array_almost_equal(X, X2.toarray())
assert_array_equal(y, y2)
# XXX we have to update this to support Python 3.x
utf8_comment = b("It is true that\n\xc2\xbd\xc2\xb2 = \xc2\xbc")
f = BytesIO()
assert_raises(UnicodeDecodeError,
dump_svmlight_file, X, y, f, comment=utf8_comment)
unicode_comment = utf8_comment.decode("utf-8")
f = BytesIO()
dump_svmlight_file(X, y, f, comment=unicode_comment, zero_based=False)
f.seek(0)
X2, y2 = load_svmlight_file(f, zero_based=False)
assert_array_almost_equal(X, X2.toarray())
assert_array_equal(y, y2)
f = BytesIO()
assert_raises(ValueError,
dump_svmlight_file, X, y, f, comment="I've got a \0.")
def test_dump_invalid():
X, y = load_svmlight_file(datafile)
f = BytesIO()
y2d = [y]
assert_raises(ValueError, dump_svmlight_file, X, y2d, f)
f = BytesIO()
assert_raises(ValueError, dump_svmlight_file, X, y[:-1], f)
def test_dump_query_id():
# test dumping a file with query_id
X, y = load_svmlight_file(datafile)
X = X.toarray()
query_id = np.arange(X.shape[0]) // 2
f = BytesIO()
dump_svmlight_file(X, y, f, query_id=query_id, zero_based=True)
f.seek(0)
X1, y1, query_id1 = load_svmlight_file(f, query_id=True, zero_based=True)
assert_array_almost_equal(X, X1.toarray())
assert_array_almost_equal(y, y1)
assert_array_almost_equal(query_id, query_id1)
|
bsd-3-clause
|
kambysese/mne-python
|
tutorials/sample-datasets/plot_brainstorm_auditory.py
|
3
|
16058
|
# -*- coding: utf-8 -*-
"""
.. _tut-brainstorm-auditory:
====================================
Brainstorm auditory tutorial dataset
====================================
Here we compute the evoked from raw for the auditory Brainstorm
tutorial dataset. For comparison, see :footcite:`TadelEtAl2011` and the
associated `brainstorm site
<https://neuroimage.usc.edu/brainstorm/Tutorials/Auditory>`_.
Experiment:
- One subject, 2 acquisition runs of 6 minutes each.
- Each run contains 200 regular beeps and 40 easy deviant beeps.
- Random ISI: between 0.7 s and 1.7 s, uniformly distributed.
- Button pressed when detecting a deviant with the right index finger.
The specifications of this dataset were discussed initially on the
`FieldTrip bug tracker
<http://bugzilla.fieldtriptoolbox.org/show_bug.cgi?id=2300>`__.
References
----------
.. footbibliography::
"""
# Authors: Mainak Jas <[email protected]>
# Eric Larson <[email protected]>
# Jaakko Leppakangas <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
import pandas as pd
import numpy as np
import mne
from mne import combine_evoked
from mne.minimum_norm import apply_inverse
from mne.datasets.brainstorm import bst_auditory
from mne.io import read_raw_ctf
print(__doc__)
###############################################################################
# To reduce memory consumption and running time, some of the steps are
# precomputed. To run everything from scratch change this to False. With
# ``use_precomputed = False`` running time of this script can be several
# minutes even on a fast computer.
use_precomputed = True
###############################################################################
# The data was collected with a CTF 275 system at 2400 Hz and low-pass
# filtered at 600 Hz. Here the data and empty room data files are read to
# construct instances of :class:`mne.io.Raw`.
data_path = bst_auditory.data_path()
subject = 'bst_auditory'
subjects_dir = op.join(data_path, 'subjects')
raw_fname1 = op.join(data_path, 'MEG', 'bst_auditory',
'S01_AEF_20131218_01.ds')
raw_fname2 = op.join(data_path, 'MEG', 'bst_auditory',
'S01_AEF_20131218_02.ds')
erm_fname = op.join(data_path, 'MEG', 'bst_auditory',
'S01_Noise_20131218_01.ds')
###############################################################################
# In the memory saving mode we use ``preload=False`` and use the memory
# efficient IO which loads the data on demand. However, filtering and some
# other functions require the data to be preloaded in the memory.
raw = read_raw_ctf(raw_fname1)
n_times_run1 = raw.n_times
mne.io.concatenate_raws([raw, read_raw_ctf(raw_fname2)])
raw_erm = read_raw_ctf(erm_fname)
###############################################################################
# Data channel array consisted of 274 MEG axial gradiometers, 26 MEG reference
# sensors and 2 EEG electrodes (Cz and Pz).
# In addition:
#
# - 1 stim channel for marking presentation times for the stimuli
# - 1 audio channel for the sent signal
# - 1 response channel for recording the button presses
# - 1 ECG bipolar
# - 2 EOG bipolar (vertical and horizontal)
# - 12 head tracking channels
# - 20 unused channels
#
# The head tracking channels and the unused channels are marked as misc
# channels. Here we define the EOG and ECG channels.
raw.set_channel_types({'HEOG': 'eog', 'VEOG': 'eog', 'ECG': 'ecg'})
if not use_precomputed:
# Leave out the two EEG channels for easier computation of forward.
raw.pick(['meg', 'stim', 'misc', 'eog', 'ecg']).load_data()
###############################################################################
# For noise reduction, a set of bad segments have been identified and stored
# in csv files. The bad segments are later used to reject epochs that overlap
# with them.
# The file for the second run also contains some saccades. The saccades are
# removed by using SSP. We use pandas to read the data from the csv files. You
# can also view the files with your favorite text editor.
annotations_df = pd.DataFrame()
offset = n_times_run1
for idx in [1, 2]:
csv_fname = op.join(data_path, 'MEG', 'bst_auditory',
'events_bad_0%s.csv' % idx)
df = pd.read_csv(csv_fname, header=None,
names=['onset', 'duration', 'id', 'label'])
print('Events from run {0}:'.format(idx))
print(df)
df['onset'] += offset * (idx - 1)
annotations_df = pd.concat([annotations_df, df], axis=0)
saccades_events = df[df['label'] == 'saccade'].values[:, :3].astype(int)
# Conversion from samples to times:
onsets = annotations_df['onset'].values / raw.info['sfreq']
durations = annotations_df['duration'].values / raw.info['sfreq']
descriptions = annotations_df['label'].values
annotations = mne.Annotations(onsets, durations, descriptions)
raw.set_annotations(annotations)
del onsets, durations, descriptions
###############################################################################
# Here we compute the saccade and EOG projectors for magnetometers and add
# them to the raw data. The projectors are added to both runs.
saccade_epochs = mne.Epochs(raw, saccades_events, 1, 0., 0.5, preload=True,
baseline=(None, None),
reject_by_annotation=False)
projs_saccade = mne.compute_proj_epochs(saccade_epochs, n_mag=1, n_eeg=0,
desc_prefix='saccade')
if use_precomputed:
proj_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-eog-proj.fif')
projs_eog = mne.read_proj(proj_fname)[0]
else:
projs_eog, _ = mne.preprocessing.compute_proj_eog(raw.load_data(),
n_mag=1, n_eeg=0)
raw.add_proj(projs_saccade)
raw.add_proj(projs_eog)
del saccade_epochs, saccades_events, projs_eog, projs_saccade # To save memory
###############################################################################
# Visually inspect the effects of projections. Click on 'proj' button at the
# bottom right corner to toggle the projectors on/off. EOG events can be
# plotted by adding the event list as a keyword argument. As the bad segments
# and saccades were added as annotations to the raw data, they are plotted as
# well.
raw.plot(block=True)
###############################################################################
# A typical preprocessing step is the removal of the power line artifact (50 Hz or
# 60 Hz). Here we notch filter the data at 60, 120 and 180 Hz to remove the
# original 60 Hz artifact and the harmonics. The power spectra are plotted
# before and after the filtering to show the effect. The drop after 600 Hz
# appears because the data was filtered during the acquisition. In memory
# saving mode we do the filtering at evoked stage, which is not something you
# usually would do.
if not use_precomputed:
raw.plot_psd(tmax=np.inf, picks='meg')
notches = np.arange(60, 181, 60)
raw.notch_filter(notches, phase='zero-double', fir_design='firwin2')
raw.plot_psd(tmax=np.inf, picks='meg')
###############################################################################
# We also low-pass filter the data at 100 Hz to remove the high-frequency components.
if not use_precomputed:
raw.filter(None, 100., h_trans_bandwidth=0.5, filter_length='10s',
phase='zero-double', fir_design='firwin2')
###############################################################################
# Epoching and averaging.
# First some parameters are defined and events extracted from the stimulus
# channel (UPPT001). The rejection thresholds are defined as peak-to-peak
# values and are in T / m for gradiometers, T for magnetometers and
# V for EOG and EEG channels.
tmin, tmax = -0.1, 0.5
event_id = dict(standard=1, deviant=2)
reject = dict(mag=4e-12, eog=250e-6)
# find events
events = mne.find_events(raw, stim_channel='UPPT001')
###############################################################################
# The event timing is adjusted by comparing the trigger times on detected
# sound onsets on channel UADC001-4408.
sound_data = raw[raw.ch_names.index('UADC001-4408')][0][0]
onsets = np.where(np.abs(sound_data) > 2. * np.std(sound_data))[0]
min_diff = int(0.5 * raw.info['sfreq'])
diffs = np.concatenate([[min_diff + 1], np.diff(onsets)])
onsets = onsets[diffs > min_diff]
assert len(onsets) == len(events)
diffs = 1000. * (events[:, 0] - onsets) / raw.info['sfreq']
print('Trigger delay removed (μ ± σ): %0.1f ± %0.1f ms'
% (np.mean(diffs), np.std(diffs)))
events[:, 0] = onsets
del sound_data, diffs
###############################################################################
# We mark a set of bad channels that seem noisier than others. This can also
# be done interactively with ``raw.plot`` by clicking the channel name
# (or the line). The marked channels are added as bad when the browser window
# is closed.
raw.info['bads'] = ['MLO52-4408', 'MRT51-4408', 'MLO42-4408', 'MLO43-4408']
###############################################################################
# The epochs (trials) are created for MEG channels. First we find the picks
# for MEG and EOG channels. Then the epochs are constructed using these picks.
# The epochs overlapping with annotated bad segments are also rejected by
# default. To turn off rejection by bad segments (as was done earlier with
# saccades) you can use keyword ``reject_by_annotation=False``.
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=['meg', 'eog'],
baseline=(None, 0), reject=reject, preload=False,
proj=True)
###############################################################################
# We only use the first 40 good epochs from each run. Since we first drop the bad
# epochs, the indices of the epochs are no longer the same as in the original
# epochs collection. Investigation of the event timings reveals that the first
# epoch from the second run corresponds to index 182.
epochs.drop_bad()
epochs_standard = mne.concatenate_epochs([epochs['standard'][range(40)],
epochs['standard'][182:222]])
epochs_standard.load_data() # Resampling to save memory.
epochs_standard.resample(600, npad='auto')
epochs_deviant = epochs['deviant'].load_data()
epochs_deviant.resample(600, npad='auto')
del epochs
###############################################################################
# The averages for each condition are computed.
evoked_std = epochs_standard.average()
evoked_dev = epochs_deviant.average()
del epochs_standard, epochs_deviant
###############################################################################
# A typical preprocessing step is the removal of the power line artifact (50 Hz or
# 60 Hz). Here we low-pass filter the data at 40 Hz, which will remove all
# line artifacts (and high frequency information). Normally this would be done
# to raw data (with :func:`mne.io.Raw.filter`), but to reduce memory
# consumption of this tutorial, we do it at evoked stage. (At the raw stage,
# you could alternatively notch filter with :func:`mne.io.Raw.notch_filter`.)
for evoked in (evoked_std, evoked_dev):
evoked.filter(l_freq=None, h_freq=40., fir_design='firwin')
###############################################################################
# Here we plot the ERF of standard and deviant conditions. In both conditions
# we can see the P50 and N100 responses. The mismatch negativity is visible
# only in the deviant condition around 100-200 ms. P200 is also visible around
# 170 ms in both conditions but much stronger in the standard condition. P300
# is visible in the deviant condition only (decision making in preparation of the
# button press). You can view the topographies over a certain time span by
# clicking and holding the left mouse button to paint an area.
evoked_std.plot(window_title='Standard', gfp=True, time_unit='s')
evoked_dev.plot(window_title='Deviant', gfp=True, time_unit='s')
###############################################################################
# Show activations as topography figures.
times = np.arange(0.05, 0.301, 0.025)
evoked_std.plot_topomap(times=times, title='Standard', time_unit='s')
evoked_dev.plot_topomap(times=times, title='Deviant', time_unit='s')
###############################################################################
# We can see the MMN effect more clearly by looking at the difference between
# the two conditions. P50 and N100 are no longer visible, but MMN/P200 and
# P300 are emphasised.
evoked_difference = combine_evoked([evoked_dev, evoked_std], weights=[1, -1])
evoked_difference.plot(window_title='Difference', gfp=True, time_unit='s')
###############################################################################
# Source estimation.
# We compute the noise covariance matrix from the empty room measurement
# and use it for the other runs.
reject = dict(mag=4e-12)
cov = mne.compute_raw_covariance(raw_erm, reject=reject)
cov.plot(raw_erm.info)
del raw_erm
###############################################################################
# The transformation is read from a file:
trans_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-trans.fif')
trans = mne.read_trans(trans_fname)
###############################################################################
# To save time and memory, the forward solution is read from a file. Set
# ``use_precomputed=False`` in the beginning of this script to build the
# forward solution from scratch. The head surfaces for constructing a BEM
# solution are read from a file. Since the data only contains MEG channels, we
# only need the inner skull surface for making the forward solution. For more
# information: :ref:`CHDBBCEJ`, :func:`mne.setup_source_space`,
# :ref:`bem-model`, :func:`mne.bem.make_watershed_bem`.
if use_precomputed:
fwd_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-meg-oct-6-fwd.fif')
fwd = mne.read_forward_solution(fwd_fname)
else:
src = mne.setup_source_space(subject, spacing='ico4',
subjects_dir=subjects_dir, overwrite=True)
model = mne.make_bem_model(subject=subject, ico=4, conductivity=[0.3],
subjects_dir=subjects_dir)
bem = mne.make_bem_solution(model)
fwd = mne.make_forward_solution(evoked_std.info, trans=trans, src=src,
bem=bem)
inv = mne.minimum_norm.make_inverse_operator(evoked_std.info, fwd, cov)
snr = 3.0
lambda2 = 1.0 / snr ** 2
del fwd
###############################################################################
# The sources are computed using dSPM method and plotted on an inflated brain
# surface. For interactive controls over the image, use keyword
# ``time_viewer=True``.
# Standard condition.
stc_standard = mne.minimum_norm.apply_inverse(evoked_std, inv, lambda2, 'dSPM')
brain = stc_standard.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.1, time_unit='s')
del stc_standard, brain
###############################################################################
# Deviant condition.
stc_deviant = mne.minimum_norm.apply_inverse(evoked_dev, inv, lambda2, 'dSPM')
brain = stc_deviant.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.1, time_unit='s')
del stc_deviant, brain
###############################################################################
# Difference.
stc_difference = apply_inverse(evoked_difference, inv, lambda2, 'dSPM')
brain = stc_difference.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.15, time_unit='s')
|
bsd-3-clause
|
adbuerger/tsysblend
|
examples/k_bau_temperatures.py
|
1
|
3923
|
import sys
import os
sys.path.append(os.path.join(os.environ["HOME"], "tsysblend"))
import bpy
from tsysblend.state import LayerState, CubeState
from mathutils import Vector
from tsysblend.flow import MassFlow, HeatFlow
import tsysblend.config as tc
import pandas as pd
import numpy as np
import pdb
tc.clearScene()
tc.setColorBounds(20,35)
tc.setBackgroundColor((1,1,1))
from_time = "2015-08-24 00:00:00" # hier Betrachungszeitraum festlegen
to_time = "2015-08-24 23:59:59"
df = pd.read_table(os.path.join(os.environ["HOME"], "tsysblend/examples/temp_kbau_remastered.csv"), delimiter = "\t")
df["Time"]= pd.to_datetime(df["Time"])
df =df.set_index("Time")
datafr = np.round(df[from_time:to_time],1)
dtime = datafr.reset_index()
tc.createLegend((0,15,12),"Legende","°C",elements=7, text_size=0.8)
tc.animateSingleElement((0,-5,16),dtime, "Time", text_size=0.8)
tc.animateSingleElement((0,-5,14),datafr,"KIT Temperature", unit="°C", text_size=0.8)
a = 2
b = 3
c = 0.3
z = 5
# Floor 0 (ground floor)
k0aso = CubeState(datafr, "k0aso", location=(-a,-b,0), scale=(a,b,c), unit="°C")
k0asw = CubeState(datafr, "k0asw", location=(-a,b,0), scale=(a,b,c), unit="°C")
k0ano = CubeState(datafr, "k0ano", location=(a,-b,0), scale=(a,b,c), unit="°C")
k0anw = CubeState(datafr, "k0anw", location=(a,b,0), scale=(a,b,c), unit="°C")
k0to = CubeState(datafr, "k0to", location=(0,-2.5*b,0), scale=(2*a,0.5*b,c), unit="°C")
k0tw = CubeState(datafr, "k0tw", location=(0,2.5*b,0), scale=(2*a,0.5*b,c), unit="°C")
k002m = CubeState(datafr, "k002m", location=(-3*a,-b,0), scale=(a,b,c), unit="°C")
k003 = CubeState(datafr, "k003", location=(-3*a,b,0), scale=(a,b,c), unit="°C")
k009 = CubeState(datafr, "k009", location=(3*a,-b,0), scale=(a,b,c), unit="°C")
k008m = CubeState(datafr, "k008m", location=(3*a,b,0), scale=(a,b,c), unit="°C")
# Floor 1
k1aso = CubeState(datafr, "k1aso", location=(-a,-b,z), scale=(a,b,c), unit="°C")
k1asw = CubeState(datafr, "k1asw", location=(-a,b,z), scale=(a,b,c), unit="°C")
k1ano = CubeState(datafr, "k1ano", location=(a,-b,z), scale=(a,b,c), unit="°C")
k1anw = CubeState(datafr, "k1anw", location=(a,b,z), scale=(a,b,c), unit="°C")
k1to = CubeState(datafr, "k1to" , location=(0,-2.5*b,z), scale=(2*a,0.5*b,c), unit="°C")
k1tw = CubeState(datafr, "k1tw", location=(0,2.5*b,z), scale=(2*a,0.5*b,c), unit="°C" )
k102a = CubeState(datafr, "k102a", location=(-3*a,-b,z), scale=(a,b,c), unit="°C")
k103 = CubeState(datafr, "k103", location=(-2.5*a,1.5*b,z), scale=(0.5*a,0.5*b,c), unit="°C")
k103a = CubeState(datafr, "k103a", location=(-3*a,0.5*b,z), scale=(a,0.5*b,c), unit="°C")
k103b = CubeState(datafr, "k103b", location=(-3.5*a,1.5*b,z), scale=(0.5*a,0.5*b,c), unit="°C")
k109 = CubeState(datafr, "k109", location=(3*a,-b,z), scale=(a,b,c), unit="°C")
k108m = CubeState(datafr, "k108m", location=(3*a,b,z), scale=(a,b,c), unit="°C")
# Floor 2
#k2aso = CubeState((-a,-b,1.8*z), text_location=text_location, "k2aso", (a,b,c), "°C", datafr)
k2asw = CubeState(datafr, "k2asw", location=(-a,b,1.8*z), scale=(a,b,c), unit="°C")
k2ano = CubeState(datafr, "k2ano", location=(a,-b,1.8*z), scale=(a,b,c), unit="°C")
k2anw = CubeState(datafr, "k2anw", location=(a,b,1.8*z), scale=(a,b,c), unit="°C")
k2to = CubeState(datafr, "k2to", location=(0,-2.5*b,1.8*z), scale=(2*a,0.5*b,c), unit="°C")
k2tw = CubeState(datafr, "k2tw", location=(0,2.5*b,1.8*z), scale=(2*a,0.5*b,c), unit="°C")
k202m = CubeState(datafr, "k202m", location=(-3*a,-b,1.8*z), scale=(a,b,c), unit="°C")
k203 = CubeState(datafr, "k203", location=(-3*a,b,1.8*z), scale=(a,b,c), unit="°C")
k208 = CubeState(datafr, "k208", location=(3*a,-b,1.8*z), scale=(a,b,c), unit="°C")
k207m = CubeState(datafr, "k207m", location=(3*a,b,1.8*z), scale=(a,b,c), unit="°C")
tc.animateColor()
tc.setEndFrame(len(datafr.index))
tc.addCamera((40.0, 2, 15), (1.4, 0, 1.578))
tc.addLamp((25.0, 30.0, 6.5), (0,0.698, 0.349))
|
lgpl-3.0
|
blink1073/scikit-image
|
doc/examples/edges/plot_convex_hull.py
|
9
|
1487
|
"""
===========
Convex Hull
===========
The convex hull of a binary image is the set of pixels included in the
smallest convex polygon that surrounds all white pixels in the input.
In this example, we show how the input pixels (white) get filled in by the
convex hull (white and grey).
A good overview of the algorithm is given on `Steve Eddin's blog
<http://blogs.mathworks.com/steve/2011/10/04/binary-image-convex-hull-algorithm-notes/>`__.
"""
import numpy as np
import matplotlib.pyplot as plt
from skimage.morphology import convex_hull_image
image = np.array(
[[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 1, 0, 1, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 1, 0, 0],
[0, 1, 0, 0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=float)
original_image = np.copy(image)
chull = convex_hull_image(image)
image[chull] += 1
# image is now:
# [[ 0. 0. 0. 0. 0. 0. 0. 0. 0.]
# [ 0. 0. 0. 0. 2. 0. 0. 0. 0.]
# [ 0. 0. 0. 2. 1. 2. 0. 0. 0.]
# [ 0. 0. 2. 1. 1. 1. 2. 0. 0.]
# [ 0. 2. 1. 1. 1. 1. 1. 2. 0.]
# [ 0. 0. 0. 0. 0. 0. 0. 0. 0.]]
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 6))
ax1.set_title('Original picture')
ax1.imshow(original_image, cmap=plt.cm.gray, interpolation='nearest')
ax1.set_xticks([]), ax1.set_yticks([])
ax2.set_title('Transformed picture')
ax2.imshow(image, cmap=plt.cm.gray, interpolation='nearest')
ax2.set_xticks([]), ax2.set_yticks([])
plt.show()
|
bsd-3-clause
|
mikebenfield/scikit-learn
|
sklearn/tests/test_discriminant_analysis.py
|
37
|
11979
|
import numpy as np
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_almost_equal
from sklearn.utils.testing import assert_true
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import assert_raise_message
from sklearn.utils.testing import assert_warns
from sklearn.utils.testing import assert_greater
from sklearn.utils.testing import ignore_warnings
from sklearn.datasets import make_blobs
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.discriminant_analysis import _cov
# Data is just 6 separable points in the plane
X = np.array([[-2, -1], [-1, -1], [-1, -2], [1, 1], [1, 2], [2, 1]], dtype='f')
y = np.array([1, 1, 1, 2, 2, 2])
y3 = np.array([1, 1, 2, 2, 3, 3])
# Degenerate data with only one feature (still should be separable)
X1 = np.array([[-2, ], [-1, ], [-1, ], [1, ], [1, ], [2, ]], dtype='f')
# Data is just 9 separable points in the plane
X6 = np.array([[0, 0], [-2, -2], [-2, -1], [-1, -1], [-1, -2],
[1, 3], [1, 2], [2, 1], [2, 2]])
y6 = np.array([1, 1, 1, 1, 1, 2, 2, 2, 2])
y7 = np.array([1, 2, 3, 2, 3, 1, 2, 3, 1])
# Degenerate data with 1 feature (still should be separable)
X7 = np.array([[-3, ], [-2, ], [-1, ], [-1, ], [0, ], [1, ], [1, ],
[2, ], [3, ]])
# Data that has zero variance in one dimension and needs regularization
X2 = np.array([[-3, 0], [-2, 0], [-1, 0], [-1, 0], [0, 0], [1, 0], [1, 0],
[2, 0], [3, 0]])
# One element class
y4 = np.array([1, 1, 1, 1, 1, 1, 1, 1, 2])
# Data with less samples in a class than n_features
X5 = np.c_[np.arange(8), np.zeros((8, 3))]
y5 = np.array([0, 0, 0, 0, 0, 1, 1, 1])
solver_shrinkage = [('svd', None), ('lsqr', None), ('eigen', None),
('lsqr', 'auto'), ('lsqr', 0), ('lsqr', 0.43),
('eigen', 'auto'), ('eigen', 0), ('eigen', 0.43)]
def test_lda_predict():
# Test LDA classification.
# This checks that LDA implements fit and predict and returns correct
# values for simple toy data.
for test_case in solver_shrinkage:
solver, shrinkage = test_case
clf = LinearDiscriminantAnalysis(solver=solver, shrinkage=shrinkage)
y_pred = clf.fit(X, y).predict(X)
assert_array_equal(y_pred, y, 'solver %s' % solver)
# Assert that it works with 1D data
y_pred1 = clf.fit(X1, y).predict(X1)
assert_array_equal(y_pred1, y, 'solver %s' % solver)
# Test probability estimates
y_proba_pred1 = clf.predict_proba(X1)
assert_array_equal((y_proba_pred1[:, 1] > 0.5) + 1, y,
'solver %s' % solver)
y_log_proba_pred1 = clf.predict_log_proba(X1)
assert_array_almost_equal(np.exp(y_log_proba_pred1), y_proba_pred1,
8, 'solver %s' % solver)
# Primarily test for commit 2f34950 -- "reuse" of priors
y_pred3 = clf.fit(X, y3).predict(X)
# LDA shouldn't be able to separate those
assert_true(np.any(y_pred3 != y3), 'solver %s' % solver)
# Test invalid shrinkages
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage=-0.2231)
assert_raises(ValueError, clf.fit, X, y)
clf = LinearDiscriminantAnalysis(solver="eigen", shrinkage="dummy")
assert_raises(ValueError, clf.fit, X, y)
clf = LinearDiscriminantAnalysis(solver="svd", shrinkage="auto")
assert_raises(NotImplementedError, clf.fit, X, y)
# Test unknown solver
clf = LinearDiscriminantAnalysis(solver="dummy")
assert_raises(ValueError, clf.fit, X, y)
def test_lda_priors():
# Test priors (negative priors)
priors = np.array([0.5, -0.5])
clf = LinearDiscriminantAnalysis(priors=priors)
msg = "priors must be non-negative"
assert_raise_message(ValueError, msg, clf.fit, X, y)
# Test that priors passed as a list are correctly handled (run to see if
# failure)
clf = LinearDiscriminantAnalysis(priors=[0.5, 0.5])
clf.fit(X, y)
# Test that priors always sum to 1
priors = np.array([0.5, 0.6])
prior_norm = np.array([0.45, 0.55])
clf = LinearDiscriminantAnalysis(priors=priors)
assert_warns(UserWarning, clf.fit, X, y)
assert_array_almost_equal(clf.priors_, prior_norm, 2)
def test_lda_coefs():
# Test if the coefficients of the solvers are approximately the same.
n_features = 2
n_classes = 2
n_samples = 1000
X, y = make_blobs(n_samples=n_samples, n_features=n_features,
centers=n_classes, random_state=11)
clf_lda_svd = LinearDiscriminantAnalysis(solver="svd")
clf_lda_lsqr = LinearDiscriminantAnalysis(solver="lsqr")
clf_lda_eigen = LinearDiscriminantAnalysis(solver="eigen")
clf_lda_svd.fit(X, y)
clf_lda_lsqr.fit(X, y)
clf_lda_eigen.fit(X, y)
assert_array_almost_equal(clf_lda_svd.coef_, clf_lda_lsqr.coef_, 1)
assert_array_almost_equal(clf_lda_svd.coef_, clf_lda_eigen.coef_, 1)
assert_array_almost_equal(clf_lda_eigen.coef_, clf_lda_lsqr.coef_, 1)
def test_lda_transform():
# Test LDA transform.
clf = LinearDiscriminantAnalysis(solver="svd", n_components=1)
X_transformed = clf.fit(X, y).transform(X)
assert_equal(X_transformed.shape[1], 1)
clf = LinearDiscriminantAnalysis(solver="eigen", n_components=1)
X_transformed = clf.fit(X, y).transform(X)
assert_equal(X_transformed.shape[1], 1)
clf = LinearDiscriminantAnalysis(solver="lsqr", n_components=1)
clf.fit(X, y)
msg = "transform not implemented for 'lsqr'"
assert_raise_message(NotImplementedError, msg, clf.transform, X)
def test_lda_explained_variance_ratio():
    # Test if the sum of the normalized eigenvalues equals 1.
# Also tests whether the explained_variance_ratio_ formed by the
# eigen solver is the same as the explained_variance_ratio_ formed
# by the svd solver
state = np.random.RandomState(0)
X = state.normal(loc=0, scale=100, size=(40, 20))
y = state.randint(0, 3, size=(40,))
clf_lda_eigen = LinearDiscriminantAnalysis(solver="eigen")
clf_lda_eigen.fit(X, y)
assert_almost_equal(clf_lda_eigen.explained_variance_ratio_.sum(), 1.0, 3)
assert_equal(clf_lda_eigen.explained_variance_ratio_.shape, (2,),
"Unexpected length for explained_variance_ratio_")
clf_lda_svd = LinearDiscriminantAnalysis(solver="svd")
clf_lda_svd.fit(X, y)
assert_almost_equal(clf_lda_svd.explained_variance_ratio_.sum(), 1.0, 3)
assert_equal(clf_lda_svd.explained_variance_ratio_.shape, (2,),
"Unexpected length for explained_variance_ratio_")
assert_array_almost_equal(clf_lda_svd.explained_variance_ratio_,
clf_lda_eigen.explained_variance_ratio_)
def test_lda_orthogonality():
# arrange four classes with their means in a kite-shaped pattern
# the longer distance should be transformed to the first component, and
# the shorter distance to the second component.
means = np.array([[0, 0, -1], [0, 2, 0], [0, -2, 0], [0, 0, 5]])
# We construct perfectly symmetric distributions, so the LDA can estimate
# precise means.
scatter = np.array([[0.1, 0, 0], [-0.1, 0, 0], [0, 0.1, 0], [0, -0.1, 0],
[0, 0, 0.1], [0, 0, -0.1]])
X = (means[:, np.newaxis, :] + scatter[np.newaxis, :, :]).reshape((-1, 3))
y = np.repeat(np.arange(means.shape[0]), scatter.shape[0])
# Fit LDA and transform the means
clf = LinearDiscriminantAnalysis(solver="svd").fit(X, y)
means_transformed = clf.transform(means)
d1 = means_transformed[3] - means_transformed[0]
d2 = means_transformed[2] - means_transformed[1]
d1 /= np.sqrt(np.sum(d1 ** 2))
d2 /= np.sqrt(np.sum(d2 ** 2))
# the transformed within-class covariance should be the identity matrix
assert_almost_equal(np.cov(clf.transform(scatter).T), np.eye(2))
# the means of classes 0 and 3 should lie on the first component
assert_almost_equal(np.abs(np.dot(d1[:2], [1, 0])), 1.0)
# the means of classes 1 and 2 should lie on the second component
assert_almost_equal(np.abs(np.dot(d2[:2], [0, 1])), 1.0)
def test_lda_scaling():
# Test if classification works correctly with differently scaled features.
n = 100
rng = np.random.RandomState(1234)
# use uniform distribution of features to make sure there is absolutely no
# overlap between classes.
x1 = rng.uniform(-1, 1, (n, 3)) + [-10, 0, 0]
x2 = rng.uniform(-1, 1, (n, 3)) + [10, 0, 0]
x = np.vstack((x1, x2)) * [1, 100, 10000]
y = [-1] * n + [1] * n
for solver in ('svd', 'lsqr', 'eigen'):
clf = LinearDiscriminantAnalysis(solver=solver)
# should be able to separate the data perfectly
assert_equal(clf.fit(x, y).score(x, y), 1.0,
'using covariance: %s' % solver)
def test_qda():
# QDA classification.
# This checks that QDA implements fit and predict and returns
# correct values for a simple toy dataset.
clf = QuadraticDiscriminantAnalysis()
y_pred = clf.fit(X6, y6).predict(X6)
assert_array_equal(y_pred, y6)
# Assure that it works with 1D data
y_pred1 = clf.fit(X7, y6).predict(X7)
assert_array_equal(y_pred1, y6)
# Test probas estimates
y_proba_pred1 = clf.predict_proba(X7)
assert_array_equal((y_proba_pred1[:, 1] > 0.5) + 1, y6)
y_log_proba_pred1 = clf.predict_log_proba(X7)
assert_array_almost_equal(np.exp(y_log_proba_pred1), y_proba_pred1, 8)
y_pred3 = clf.fit(X6, y7).predict(X6)
# QDA shouldn't be able to separate those
assert_true(np.any(y_pred3 != y7))
# Classes should have at least 2 elements
assert_raises(ValueError, clf.fit, X6, y4)
def test_qda_priors():
clf = QuadraticDiscriminantAnalysis()
y_pred = clf.fit(X6, y6).predict(X6)
n_pos = np.sum(y_pred == 2)
neg = 1e-10
clf = QuadraticDiscriminantAnalysis(priors=np.array([neg, 1 - neg]))
y_pred = clf.fit(X6, y6).predict(X6)
n_pos2 = np.sum(y_pred == 2)
assert_greater(n_pos2, n_pos)
def test_qda_store_covariances():
# The default is to not set the covariances_ attribute
clf = QuadraticDiscriminantAnalysis().fit(X6, y6)
assert_true(not hasattr(clf, 'covariances_'))
# Test the actual attribute:
clf = QuadraticDiscriminantAnalysis(store_covariances=True).fit(X6, y6)
assert_true(hasattr(clf, 'covariances_'))
assert_array_almost_equal(
clf.covariances_[0],
np.array([[0.7, 0.45], [0.45, 0.7]])
)
assert_array_almost_equal(
clf.covariances_[1],
np.array([[0.33333333, -0.33333333], [-0.33333333, 0.66666667]])
)
def test_qda_regularization():
# the default is reg_param=0. and will cause issues
# when there is a constant variable
clf = QuadraticDiscriminantAnalysis()
with ignore_warnings():
y_pred = clf.fit(X2, y6).predict(X2)
assert_true(np.any(y_pred != y6))
# adding a little regularization fixes the problem
clf = QuadraticDiscriminantAnalysis(reg_param=0.01)
with ignore_warnings():
clf.fit(X2, y6)
y_pred = clf.predict(X2)
assert_array_equal(y_pred, y6)
# Case n_samples_in_a_class < n_features
clf = QuadraticDiscriminantAnalysis(reg_param=0.1)
with ignore_warnings():
clf.fit(X5, y5)
y_pred5 = clf.predict(X5)
assert_array_equal(y_pred5, y5)
def test_covariance():
x, y = make_blobs(n_samples=100, n_features=5,
centers=1, random_state=42)
# make features correlated
x = np.dot(x, np.arange(x.shape[1] ** 2).reshape(x.shape[1], x.shape[1]))
c_e = _cov(x, 'empirical')
assert_almost_equal(c_e, c_e.T)
c_s = _cov(x, 'auto')
assert_almost_equal(c_s, c_s.T)
|
bsd-3-clause
|
mavenlin/tensorflow
|
tensorflow/examples/learn/text_classification_character_cnn.py
|
29
|
5666
|
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Example of using convolutional networks over characters for DBpedia dataset.
This model is similar to one described in this paper:
"Character-level Convolutional Networks for Text Classification"
http://arxiv.org/abs/1509.01626
and is somewhat of an alternative to the Lua code from here:
https://github.com/zhangxiangxiao/Crepe
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import argparse
import sys
import numpy as np
import pandas
from sklearn import metrics
import tensorflow as tf
FLAGS = None
MAX_DOCUMENT_LENGTH = 100
N_FILTERS = 10
FILTER_SHAPE1 = [20, 256]
FILTER_SHAPE2 = [20, N_FILTERS]
POOLING_WINDOW = 4
POOLING_STRIDE = 2
MAX_LABEL = 15
CHARS_FEATURE = 'chars' # Name of the input character feature.
def char_cnn_model(features, labels, mode):
"""Character level convolutional neural network model to predict classes."""
features_onehot = tf.one_hot(features[CHARS_FEATURE], 256)
input_layer = tf.reshape(
features_onehot, [-1, MAX_DOCUMENT_LENGTH, 256, 1])
with tf.variable_scope('CNN_Layer1'):
# Apply Convolution filtering on input sequence.
conv1 = tf.layers.conv2d(
input_layer,
filters=N_FILTERS,
kernel_size=FILTER_SHAPE1,
padding='VALID',
# Add a ReLU for non linearity.
activation=tf.nn.relu)
# Max pooling across output of Convolution+Relu.
pool1 = tf.layers.max_pooling2d(
conv1,
pool_size=POOLING_WINDOW,
strides=POOLING_STRIDE,
padding='SAME')
# Transpose matrix so that n_filters from convolution becomes width.
pool1 = tf.transpose(pool1, [0, 1, 3, 2])
with tf.variable_scope('CNN_Layer2'):
# Second level of convolution filtering.
conv2 = tf.layers.conv2d(
pool1,
filters=N_FILTERS,
kernel_size=FILTER_SHAPE2,
padding='VALID')
# Max across each filter to get useful features for classification.
pool2 = tf.squeeze(tf.reduce_max(conv2, 1), squeeze_dims=[1])
# Apply regular WX + B and classification.
logits = tf.layers.dense(pool2, MAX_LABEL, activation=None)
predicted_classes = tf.argmax(logits, 1)
if mode == tf.estimator.ModeKeys.PREDICT:
return tf.estimator.EstimatorSpec(
mode=mode,
predictions={
'class': predicted_classes,
'prob': tf.nn.softmax(logits)
})
onehot_labels = tf.one_hot(labels, MAX_LABEL, 1, 0)
loss = tf.losses.softmax_cross_entropy(
onehot_labels=onehot_labels, logits=logits)
if mode == tf.estimator.ModeKeys.TRAIN:
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
eval_metric_ops = {
'accuracy': tf.metrics.accuracy(
labels=labels, predictions=predicted_classes)
}
return tf.estimator.EstimatorSpec(
mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)
def main(unused_argv):
# Prepare training and testing data
dbpedia = tf.contrib.learn.datasets.load_dataset(
'dbpedia', test_with_fake_data=FLAGS.test_with_fake_data, size='large')
x_train = pandas.DataFrame(dbpedia.train.data)[1]
y_train = pandas.Series(dbpedia.train.target)
x_test = pandas.DataFrame(dbpedia.test.data)[1]
y_test = pandas.Series(dbpedia.test.target)
# Process vocabulary
char_processor = tf.contrib.learn.preprocessing.ByteProcessor(
MAX_DOCUMENT_LENGTH)
x_train = np.array(list(char_processor.fit_transform(x_train)))
x_test = np.array(list(char_processor.transform(x_test)))
x_train = x_train.reshape([-1, MAX_DOCUMENT_LENGTH, 1, 1])
x_test = x_test.reshape([-1, MAX_DOCUMENT_LENGTH, 1, 1])
# Build model
classifier = tf.estimator.Estimator(model_fn=char_cnn_model)
# Train.
train_input_fn = tf.estimator.inputs.numpy_input_fn(
x={CHARS_FEATURE: x_train},
y=y_train,
batch_size=len(x_train),
num_epochs=None,
shuffle=True)
classifier.train(input_fn=train_input_fn, steps=100)
# Predict.
test_input_fn = tf.estimator.inputs.numpy_input_fn(
x={CHARS_FEATURE: x_test},
y=y_test,
num_epochs=1,
shuffle=False)
predictions = classifier.predict(input_fn=test_input_fn)
y_predicted = np.array(list(p['class'] for p in predictions))
y_predicted = y_predicted.reshape(np.array(y_test).shape)
# Score with sklearn.
score = metrics.accuracy_score(y_test, y_predicted)
print('Accuracy (sklearn): {0:f}'.format(score))
# Score with tensorflow.
scores = classifier.evaluate(input_fn=test_input_fn)
print('Accuracy (tensorflow): {0:f}'.format(scores['accuracy']))
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
'--test_with_fake_data',
default=False,
help='Test the example code with fake data.',
action='store_true')
FLAGS, unparsed = parser.parse_known_args()
tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
|
apache-2.0
|
cangermueller/deepcpg
|
scripts/dcpg_train.py
|
1
|
28511
|
#!/usr/bin/env python
"""Train a DeepCpG model to predict DNA methylation.
Trains a DeepCpG model on DNA (DNA model), neighboring methylation states
(CpG model), or both (Joint model) to predict CpG methylation of multiple cells.
Allows fine-tuning individual models or training them from scratch.
Examples
--------
Train a DNA model on chromosome 1, 3, and 5, and use chromosome 13, 14, and
15 for validation:
.. code:: bash
dcpg_train.py
./data/c{1,3,5}_*.h5
--val_files ./data/c{13,14,15}_*.h5
--dna_model CnnL2h128
--out_dir ./models/dna
Train a CpG model:
.. code:: bash
dcpg_train.py
./data/c{1,3,5}_*.h5
--val_files ./data/c{13,14,15}_*.h5
--cpg_model RnnL1
--out_dir ./models/cpg
Train a Joint model using a pre-trained DNA and CpG model:
.. code:: bash
dcpg_train.py
./data/c{1,3,5}_*.h5
--val_files ./data/c{13,14,15}_*.h5
--dna_model ./models/dna
--cpg_model ./models/cpg
--joint_model JointL2h512
--train_models joint
--out_dir ./models/joint
See Also
--------
* ``dcpg_eval.py``: For evaluating a trained model and imputing methylation
profiles.
"""
from __future__ import print_function
from __future__ import division
from collections import OrderedDict
import os
import random
import re
import sys
import argparse
import h5py as h5
import logging
import numpy as np
import pandas as pd
import six
from six.moves import range
from keras import callbacks as kcbk
from keras.models import Model
from keras.optimizers import Adam
from deepcpg import callbacks as cbk
from deepcpg import data as dat
from deepcpg import metrics as met
from deepcpg import models as mod
from deepcpg.models.utils import is_input_layer, is_output_layer
from deepcpg.data import hdf, OUTPUT_SEP
from deepcpg.utils import format_table, make_dir, EPS
LOG_PRECISION = 4
CLA_METRICS = [met.acc]
REG_METRICS = [met.mse, met.mae]
def remove_outputs(model):
while is_output_layer(model.layers[-1], model):
model.layers.pop()
model.outputs = [model.layers[-1].output]
model.layers[-1].outbound_nodes = []
model.output_names = None
def rename_layers(model, scope=None):
if not scope:
scope = model.scope
for layer in model.layers:
if is_input_layer(layer) or layer.name.startswith(scope):
continue
layer.name = '%s/%s' % (scope, layer.name)
def get_output_stats(output):
stats = OrderedDict()
output = np.ma.masked_values(output, dat.CPG_NAN)
stats['nb_tot'] = len(output)
stats['nb_obs'] = np.sum(output != dat.CPG_NAN)
stats['frac_obs'] = stats['nb_obs'] / stats['nb_tot']
stats['mean'] = float(np.mean(output))
stats['var'] = float(np.var(output))
return stats
def get_output_weights(output_names, weight_patterns):
regex_weights = dict()
for weight_pattern in weight_patterns:
tmp = [tmp.strip() for tmp in weight_pattern.split('=')]
if len(tmp) != 2:
raise ValueError('Invalid weight pattern "%s"!' % (weight_pattern))
regex_weights[tmp[0]] = float(tmp[1])
output_weights = dict()
for output_name in output_names:
for regex, weight in six.iteritems(regex_weights):
if re.match(regex, output_name):
output_weights[output_name] = weight
if output_name not in output_weights:
output_weights[output_name] = 1.0
return output_weights
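# Hypothetical usage sketch (not part of the original script): with
# output_names=['cpg/cell1', 'bulk/mean'] and weight_patterns=['cpg/.*=2'],
# get_output_weights returns {'cpg/cell1': 2.0, 'bulk/mean': 1.0}; any output not
# matched by a pattern keeps the default weight of 1.0.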
def get_class_weights(labels, nb_class=None):
freq = np.bincount(labels) / len(labels)
if nb_class is None:
nb_class = len(freq)
if len(freq) < nb_class:
tmp = np.zeros(nb_class, dtype=freq.dtype)
tmp[:len(freq)] = freq
freq = tmp
weights = 1 / (freq + EPS)
weights /= weights.sum()
return weights
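# Hypothetical usage sketch (not part of the original script): for labels
# np.array([0, 0, 0, 1]) the class frequencies are [0.75, 0.25], the inverse
# frequencies are roughly [1.33, 4.0], and after normalisation get_class_weights
# returns approximately [0.25, 0.75] -- the rarer class receives the larger weight.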
def get_output_class_weights(output_name, output):
output = output[output != dat.CPG_NAN]
_output_name = output_name.split(OUTPUT_SEP)
if _output_name[0] == 'cpg':
weights = get_class_weights(output, 2)
elif _output_name[-1] == 'cat_var':
weights = get_class_weights(output, 3)
elif _output_name[-1] in ['cat2_var', 'diff', 'mode']:
weights = get_class_weights(output, 2)
else:
return None
weights = OrderedDict(zip(range(len(weights)), weights))
return weights
def perf_logs_str(logs):
t = logs.to_csv(None, sep='\t', float_format='%.4f', index=False)
return t
def get_metrics(output_name):
_output_name = output_name.split(OUTPUT_SEP)
if _output_name[0] == 'cpg':
metrics = CLA_METRICS
elif _output_name[0] == 'bulk':
metrics = REG_METRICS + CLA_METRICS
elif _output_name[-1] in ['diff', 'mode', 'cat2_var']:
metrics = CLA_METRICS
elif _output_name[-1] == 'mean':
metrics = REG_METRICS + CLA_METRICS
elif _output_name[-1] == 'var':
metrics = REG_METRICS
elif _output_name[-1] == 'cat_var':
metrics = [met.cat_acc]
else:
raise ValueError('Invalid output name "%s"!' % output_name)
return metrics
class App(object):
def run(self, args):
name = os.path.basename(args[0])
parser = self.create_parser(name)
opts = parser.parse_args(args[1:])
return self.main(name, opts)
def create_parser(self, name):
p = argparse.ArgumentParser(
prog=name,
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
description='Trains model on DNA (DNA model), neighboring '
'methylation states (CpG model), or both (Joint model) to predict '
'CpG methylation of multiple cells.')
# IO
g = p.add_argument_group('input-output arguments')
g.add_argument(
'train_files',
nargs='+',
help='Training data files')
g.add_argument(
'--val_files',
nargs='+',
help='Validation data files')
g.add_argument(
'-o', '--out_dir',
default='./train',
help='Output directory')
g = p.add_argument_group('arguments to define the model architecture')
models = sorted(list(mod.dna.list_models().keys()))
g.add_argument(
'--dna_model',
help='Name of DNA model or files of existing model.'
' Available models: %s' % ', '.join(models),
nargs='+')
g.add_argument(
'--dna_wlen',
help='DNA window length',
type=int)
models = sorted(list(mod.cpg.list_models().keys()))
g.add_argument(
'--cpg_model',
help='Name of CpG model or files of existing model.'
' Available models: %s' % ', '.join(models),
nargs='+')
g.add_argument(
'--cpg_wlen',
help='CpG window length',
type=int)
models = sorted(list(mod.joint.list_models().keys()))
g.add_argument(
'--joint_model',
help='Name of Joint model.'
' Available models: %s' % ', '.join(models),
default='JointL2h512')
g.add_argument(
'--model_files',
help='Files of existing model',
nargs='+')
g = p.add_argument_group('arguments to define which model components '
'are trained')
g.add_argument(
'--fine_tune',
help='Only train output layers',
action='store_true')
g.add_argument(
'--train_models',
help='Only train the specified models',
choices=['dna', 'cpg', 'joint'],
nargs='+')
g.add_argument(
'--trainable',
help='Regex of layers that should be trained',
nargs='+')
g.add_argument(
'--not_trainable',
help='Regex of layers that should not be trained',
nargs='+')
g.add_argument(
'--freeze_filter',
help='Exclude filter weights of first convolutional layer from '
'training',
action='store_true')
g.add_argument(
'--filter_weights',
help='HDF5 file with weights to be used for initializing filters',
nargs='+')
g = p.add_argument_group('training arguments')
g.add_argument(
'--learning_rate',
help='Learning rate',
type=float,
default=0.0001)
g.add_argument(
'--learning_rate_decay',
help='Exponential learning rate decay factor',
type=float,
default=0.975)
g.add_argument(
'--nb_epoch',
help='Maximum # training epochs',
type=int,
default=30)
g.add_argument(
'--nb_train_sample',
help='Maximum # training samples',
type=int)
g.add_argument(
'--nb_val_sample',
help='Maximum # validation samples',
type=int)
g.add_argument(
'--batch_size',
help='Batch size',
type=int,
default=128)
g.add_argument(
'--early_stopping',
help='Early stopping patience',
type=int,
default=5)
g.add_argument(
'--dropout',
help='Dropout rate',
type=float,
default=0.0)
g.add_argument(
'--l1_decay',
help='L1 weight decay',
type=float,
default=0.0001)
g.add_argument(
'--l2_decay',
help='L2 weight decay',
type=float,
default=0.0001)
g.add_argument(
'--no_tensorboard',
help='Do not store Tensorboard summaries',
action='store_true')
g = p.add_argument_group('arguments to select outputs and weights')
g.add_argument(
'--output_names',
help='Regex to select outputs',
nargs='+',
default=['cpg/.*'])
g.add_argument(
'--nb_output',
type=int,
help='Maximum number of outputs')
g.add_argument(
'--no_class_weights',
help='Do not weight classes',
action='store_true')
g.add_argument(
'--output_weights',
help='Output weights defined as a list of `output`=`weight` '
'patterns, where `output` is a regex of output names, and '
'`weight` the weight that is assigned to them',
nargs='+')
g.add_argument(
'--replicate_names',
help='Regex to select replicates',
nargs='+')
g.add_argument(
'--nb_replicate',
type=int,
help='Maximum number of replicates')
g = p.add_argument_group('advanced arguments')
g.add_argument(
'--max_time',
help='Maximum training time in hours',
type=float)
g.add_argument(
'--stop_file',
help='File that terminates training if it exists')
g.add_argument(
'--seed',
help='Seed of random number generator',
type=int,
default=0)
g.add_argument(
'--no_log_outputs',
help='Do not log performance metrics of individual outputs',
action='store_true')
g.add_argument(
'--verbose',
help='More detailed log messages',
action='store_true')
g.add_argument(
'--log_file',
help='Write log messages to file')
g.add_argument(
'--data_q_size',
help='Size of data generator queue',
type=int,
default=10)
g.add_argument(
'--data_nb_worker',
help='Number of worker for data generator queue',
type=int,
default=1)
return p
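    # Example command line accepted by the parser above (hypothetical paths
    # and model name; valid DNA model names are whatever mod.dna.list_models()
    # reports):
    #   train.h5 --val_files val.h5 --dna_model CnnL2h128 --out_dir ./train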
def get_callbacks(self):
opts = self.opts
callbacks = []
if opts.val_files:
callbacks.append(kcbk.EarlyStopping(
'val_loss' if opts.val_files else 'loss',
patience=opts.early_stopping,
verbose=1
))
callbacks.append(kcbk.ModelCheckpoint(
os.path.join(opts.out_dir, 'model_weights_train.h5'),
save_best_only=False))
monitor = 'val_loss' if opts.val_files else 'loss'
callbacks.append(kcbk.ModelCheckpoint(
os.path.join(opts.out_dir, 'model_weights_val.h5'),
monitor=monitor,
save_best_only=True, verbose=1
))
max_time = int(opts.max_time * 3600) if opts.max_time else None
callbacks.append(cbk.TrainingStopper(
max_time=max_time,
stop_file=opts.stop_file,
verbose=1
))
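        # The scheduler below decays the learning rate exponentially per
        # epoch: lr(epoch) = learning_rate * learning_rate_decay ** epoch.
        # With the defaults (1e-4 and 0.975) this gives about 7.8e-5 at
        # epoch 10 and about 4.8e-5 at epoch 29.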
def learning_rate_schedule(epoch):
lr = opts.learning_rate * opts.learning_rate_decay**epoch
print('Learning rate: %.3g' % lr)
return lr
callbacks.append(kcbk.LearningRateScheduler(learning_rate_schedule))
def save_lc(epoch, epoch_logs, val_epoch_logs):
logs = {'lc_train.tsv': epoch_logs,
'lc_val.tsv': val_epoch_logs}
for name, logs in six.iteritems(logs):
if not logs:
continue
logs = pd.DataFrame(logs)
with open(os.path.join(opts.out_dir, name), 'w') as f:
f.write(perf_logs_str(logs))
metrics = OrderedDict()
for metric_funs in six.itervalues(self.metrics):
for metric_fun in metric_funs:
metrics[metric_fun.__name__] = True
metrics = ['loss'] + list(metrics.keys())
self.perf_logger = cbk.PerformanceLogger(
callbacks=[save_lc],
metrics=metrics,
precision=LOG_PRECISION,
verbose=not opts.no_log_outputs
)
callbacks.append(self.perf_logger)
if not opts.no_tensorboard:
callbacks.append(kcbk.TensorBoard(
log_dir=opts.out_dir,
histogram_freq=0,
write_graph=True,
write_images=True
))
return callbacks
def print_output_stats(self, output_stats):
table = OrderedDict()
for name, stats in six.iteritems(output_stats):
table.setdefault('name', []).append(name)
for key in stats:
table.setdefault(key, []).append(stats[key])
print('Output statistics:')
print(format_table(table))
print()
def print_class_weights(self, class_weights):
table = OrderedDict()
for name, class_weight in six.iteritems(class_weights):
if not class_weight:
continue
column = []
for cla, weight in six.iteritems(class_weight):
column.append('%s=%.2f' % (cla, weight))
table[name] = column
if table:
print('Class weights:')
print(format_table(table))
print()
def build_dna_model(self):
opts = self.opts
log = self.log
if os.path.exists(opts.dna_model[0]):
log.info('Loading existing DNA model ...')
dna_model = mod.load_model(opts.dna_model, log=log.info)
remove_outputs(dna_model)
rename_layers(dna_model, 'dna')
else:
log.info('Building DNA model ...')
dna_model_builder = mod.dna.get(opts.dna_model[0])(
l1_decay=opts.l1_decay,
l2_decay=opts.l2_decay,
dropout=opts.dropout)
dna_wlen = dat.get_dna_wlen(opts.train_files[0], opts.dna_wlen)
dna_inputs = dna_model_builder.inputs(dna_wlen)
dna_model = dna_model_builder(dna_inputs)
return dna_model
def build_cpg_model(self):
opts = self.opts
log = self.log
replicate_names = dat.get_replicate_names(
opts.train_files[0],
regex=opts.replicate_names,
nb_key=opts.nb_replicate)
if not replicate_names:
raise ValueError('No replicates found!')
print('Replicate names:')
print(', '.join(replicate_names))
print()
cpg_wlen = dat.get_cpg_wlen(opts.train_files[0], opts.cpg_wlen)
if os.path.exists(opts.cpg_model[0]):
log.info('Loading existing CpG model ...')
src_cpg_model = mod.load_model(opts.cpg_model, log=log.info)
remove_outputs(src_cpg_model)
rename_layers(src_cpg_model, 'cpg')
nb_replicate = src_cpg_model.input_shape[0][1]
if nb_replicate != len(replicate_names):
                tmp = ('CpG model was trained with %d replicates, but %d '
                       'replicates were provided. Copying weights to a new '
                       'model ...')
                log.info(tmp % (nb_replicate, len(replicate_names)))
cpg_model_builder = mod.cpg.get(src_cpg_model.name)(
l1_decay=opts.l1_decay,
l2_decay=opts.l2_decay,
dropout=opts.dropout)
cpg_inputs = cpg_model_builder.inputs(cpg_wlen, replicate_names)
cpg_model = cpg_model_builder(cpg_inputs)
mod.copy_weights(src_cpg_model, cpg_model)
else:
cpg_model = src_cpg_model
else:
log.info('Building CpG model ...')
cpg_model_builder = mod.cpg.get(opts.cpg_model[0])(
l1_decay=opts.l1_decay,
l2_decay=opts.l2_decay,
dropout=opts.dropout)
cpg_inputs = cpg_model_builder.inputs(cpg_wlen, replicate_names)
cpg_model = cpg_model_builder(cpg_inputs)
return cpg_model
def build_model(self):
opts = self.opts
log = self.log
output_names = dat.get_output_names(opts.train_files[0],
regex=opts.output_names,
nb_key=opts.nb_output)
if not output_names:
raise ValueError('No outputs found!')
dna_model = None
if opts.dna_model:
dna_model = self.build_dna_model()
cpg_model = None
if opts.cpg_model:
cpg_model = self.build_cpg_model()
if dna_model is not None and cpg_model is not None:
log.info('Joining models ...')
joint_model_builder = mod.joint.get(opts.joint_model)(
l1_decay=opts.l1_decay,
l2_decay=opts.l2_decay,
dropout=opts.dropout)
stem = joint_model_builder([dna_model, cpg_model])
stem.name = '_'.join([stem.name, dna_model.name, cpg_model.name])
elif dna_model is not None:
stem = dna_model
elif cpg_model is not None:
stem = cpg_model
else:
log.info('Loading existing model ...')
stem = mod.load_model(opts.model_files, log=log.info)
if sorted(output_names) == sorted(stem.output_names):
return stem
log.info('Removing existing output layers ...')
remove_outputs(stem)
outputs = mod.add_output_layers(stem.outputs[0], output_names)
model = Model(inputs=stem.inputs, outputs=outputs, name=stem.name)
return model
def set_trainability(self, model):
opts = self.opts
trainable = []
not_trainable = []
if opts.fine_tune:
not_trainable.append('.*')
elif opts.train_models:
not_trainable.append('.*')
for name in opts.train_models:
trainable.append('%s/' % name)
if opts.freeze_filter:
not_trainable.append(mod.get_first_conv_layer(model.layers).name)
if not trainable and opts.trainable:
trainable = opts.trainable
if not not_trainable and opts.not_trainable:
not_trainable = opts.not_trainable
if not trainable and not not_trainable:
return
table = OrderedDict()
table['layer'] = []
table['trainable'] = []
for layer in model.layers:
if is_input_layer(layer) or is_output_layer(layer, model):
continue
if not hasattr(layer, 'trainable'):
continue
for regex in not_trainable:
if re.match(regex, layer.name):
layer.trainable = False
for regex in trainable:
if re.match(regex, layer.name):
layer.trainable = True
table['layer'].append(layer.name)
table['trainable'].append(layer.trainable)
print('Layer trainability:')
print(format_table(table))
print()
def init_filter_weights(self, filename, conv_layer):
h5_file = h5.File(filename[0], 'r')
group = h5_file
if len(filename) > 1:
group = h5_file[filename[1]]
weights = group['weights'].value
bias = None
if 'bias' in group:
bias = group['bias'].value
h5_file.close()
assert weights.ndim == 4
if weights.shape[1] != 1:
weights = weights[:, :, :, 0]
weights = np.swapaxes(weights, 0, 2)
weights = np.expand_dims(weights, 1)
# filter_size x 1 x 4 x nb_filter
cur_weights, cur_bias = conv_layer.get_weights()
# Adapt number of filters
tmp = min(weights.shape[-1], cur_weights.shape[-1])
weights = weights[:, :, :, :tmp]
# Adapt filter size
if len(weights) > len(cur_weights):
# Truncate weights
idx = (len(weights) - len(cur_weights)) // 2
weights = weights[idx:(idx + len(cur_weights))]
elif len(weights) < len(cur_weights):
# Pad weights
shape = [len(cur_weights)] + list(weights.shape[1:])
pad_weights = np.random.uniform(0, 1, shape) * 1e-2
idx = (len(cur_weights) - len(weights)) // 2
pad_weights[idx:(idx + len(weights))] = weights
weights = pad_weights
assert np.all(weights.shape[:-1] == cur_weights.shape[:-1])
cur_weights[:, :, :, :weights.shape[-1]] = weights
if bias is not None:
bias = bias[:len(cur_bias)]
cur_bias[:len(bias)] = bias
conv_layer.set_weights((cur_weights, cur_bias))
print('%d filters initialized' % weights.shape[-1])
def main(self, name, opts):
logging.basicConfig(filename=opts.log_file,
format='%(levelname)s (%(asctime)s): %(message)s')
log = logging.getLogger(name)
if opts.verbose:
log.setLevel(logging.DEBUG)
else:
log.setLevel(logging.INFO)
if opts.seed is not None:
np.random.seed(opts.seed)
random.seed(opts.seed)
self.log = log
self.opts = opts
make_dir(opts.out_dir)
log.info('Building model ...')
model = self.build_model()
model.summary()
self.set_trainability(model)
if opts.filter_weights:
conv_layer = mod.get_first_conv_layer(model.layers)
log.info('Initializing filters of %s ...' % conv_layer.name)
self.init_filter_weights(opts.filter_weights, conv_layer)
mod.save_model(model, os.path.join(opts.out_dir, 'model.json'))
log.info('Computing output statistics ...')
output_names = model.output_names
output_stats = OrderedDict()
if opts.no_class_weights:
class_weights = None
else:
class_weights = OrderedDict()
for name in output_names:
output = hdf.read(opts.train_files, 'outputs/%s' % name,
nb_sample=opts.nb_train_sample)
output = list(output.values())[0]
output_stats[name] = get_output_stats(output)
if class_weights is not None:
class_weights[name] = get_output_class_weights(name, output)
self.print_output_stats(output_stats)
if class_weights:
self.print_class_weights(class_weights)
output_weights = None
if opts.output_weights:
log.info('Initializing output weights ...')
output_weights = get_output_weights(output_names,
opts.output_weights)
print('Output weights:')
for output_name in output_names:
if output_name in output_weights:
print('%s: %.2f' % (output_name,
output_weights[output_name]))
print()
self.metrics = dict()
for output_name in output_names:
self.metrics[output_name] = get_metrics(output_name)
optimizer = Adam(lr=opts.learning_rate)
model.compile(optimizer=optimizer,
loss=mod.get_objectives(output_names),
loss_weights=output_weights,
metrics=self.metrics)
log.info('Loading data ...')
replicate_names = dat.get_replicate_names(
opts.train_files[0],
regex=opts.replicate_names,
nb_key=opts.nb_replicate)
data_reader = mod.data_reader_from_model(
model, replicate_names=replicate_names)
nb_train_sample = dat.get_nb_sample(opts.train_files,
opts.nb_train_sample)
train_data = data_reader(opts.train_files,
class_weights=class_weights,
batch_size=opts.batch_size,
nb_sample=nb_train_sample,
shuffle=True,
loop=True)
if opts.val_files:
nb_val_sample = dat.get_nb_sample(opts.val_files,
opts.nb_val_sample)
val_data = data_reader(opts.val_files,
batch_size=opts.batch_size,
nb_sample=nb_val_sample,
shuffle=False,
loop=True)
else:
val_data = None
nb_val_sample = None
log.info('Initializing callbacks ...')
callbacks = self.get_callbacks()
log.info('Training model ...')
print()
print('Training samples: %d' % nb_train_sample)
if nb_val_sample:
print('Validation samples: %d' % nb_val_sample)
model.fit_generator(
train_data,
steps_per_epoch=nb_train_sample // opts.batch_size,
epochs=opts.nb_epoch,
callbacks=callbacks,
validation_data=val_data,
            validation_steps=(nb_val_sample // opts.batch_size
                              if nb_val_sample else None),
max_queue_size=opts.data_q_size,
workers=opts.data_nb_worker,
verbose=0)
print('\nTraining set performance:')
print(format_table(self.perf_logger.epoch_logs,
precision=LOG_PRECISION))
if self.perf_logger.val_epoch_logs:
print('\nValidation set performance:')
print(format_table(self.perf_logger.val_epoch_logs,
precision=LOG_PRECISION))
# Restore model with highest validation performance
filename = os.path.join(opts.out_dir, 'model_weights_val.h5')
if os.path.isfile(filename):
model.load_weights(filename)
# Delete metrics since they cause problems when loading the model
# from HDF5 file. Metrics can be loaded from json + weights file.
model.metrics = None
model.metrics_names = None
model.metrics_tensors = None
model.save(os.path.join(opts.out_dir, 'model.h5'))
log.info('Done!')
return 0
if __name__ == '__main__':
app = App()
app.run(sys.argv)
|
mit
|
Arn-O/kadenze-deep-creative-apps
|
session-4/libs/inception.py
|
13
|
4890
|
"""
Creative Applications of Deep Learning w/ Tensorflow.
Kadenze, Inc.
Copyright Parag K. Mital, June 2016.
"""
import os
import numpy as np
from tensorflow.python.platform import gfile
import tensorflow as tf
import matplotlib.pyplot as plt
from skimage.transform import resize as imresize
from .utils import download_and_extract_tar, download_and_extract_zip
def inception_download(data_dir='inception', version='v5'):
"""Download a pretrained inception network.
Parameters
----------
data_dir : str, optional
Location of the pretrained inception network download.
version : str, optional
Version of the model: ['v3'] or 'v5'.
"""
if version == 'v3':
download_and_extract_tar(
'https://s3.amazonaws.com/cadl/models/inception-2015-12-05.tgz',
data_dir)
return (os.path.join(data_dir, 'classify_image_graph_def.pb'),
os.path.join(data_dir, 'imagenet_synset_to_human_label_map.txt'))
else:
download_and_extract_zip(
'https://s3.amazonaws.com/cadl/models/inception5h.zip', data_dir)
return (os.path.join(data_dir, 'tensorflow_inception_graph.pb'),
os.path.join(data_dir, 'imagenet_comp_graph_label_strings.txt'))
def get_inception_model(data_dir='inception', version='v5'):
"""Get a pretrained inception network.
Parameters
----------
data_dir : str, optional
Location of the pretrained inception network download.
version : str, optional
Version of the model: ['v3'] or 'v5'.
Returns
-------
net : dict
{'graph_def': graph_def, 'labels': synsets}
where the graph_def is a tf.GraphDef and the synsets
map an integer label from 0-1000 to a list of names
"""
# Download the trained net
model, labels = inception_download(data_dir, version)
# Parse the ids and synsets
txt = open(labels).readlines()
synsets = [(key, val.strip()) for key, val in enumerate(txt)]
# Load the saved graph
with gfile.GFile(model, 'rb') as f:
graph_def = tf.GraphDef()
try:
graph_def.ParseFromString(f.read())
except:
print('try adding PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python' +
'to environment. e.g.:\n' +
'PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python ipython\n' +
'See here for info: ' +
'https://github.com/tensorflow/tensorflow/issues/582')
return {
'graph_def': graph_def,
'labels': synsets,
'preprocess': preprocess,
'deprocess': deprocess
}
def preprocess(img, crop=True, resize=True, dsize=(299, 299)):
if img.dtype != np.uint8:
img *= 255.0
if crop:
crop = np.min(img.shape[:2])
r = (img.shape[0] - crop) // 2
c = (img.shape[1] - crop) // 2
cropped = img[r: r + crop, c: c + crop]
else:
cropped = img
if resize:
rsz = imresize(cropped, dsize, preserve_range=True)
else:
rsz = cropped
if rsz.ndim == 2:
rsz = rsz[..., np.newaxis]
rsz = rsz.astype(np.float32)
# subtract imagenet mean
return (rsz - 117)
def deprocess(img):
return np.clip(img + 117, 0, 255).astype(np.uint8)
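def example_preprocess_roundtrip():
    """Minimal usage sketch (not part of the original library): preprocess
    crops the centre square, resizes to 299x299 and subtracts the ImageNet
    mean of 117; deprocess undoes the mean shift for display."""
    img = np.random.randint(0, 255, size=(360, 480, 3)).astype(np.uint8)
    net_input = preprocess(img)          # float32 array of shape (299, 299, 3)
    displayable = deprocess(net_input)   # back to uint8, clipped to [0, 255]
    return net_input.shape, displayable.dtype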
def test_inception():
"""Loads the inception network and applies it to a test image.
"""
with tf.Session() as sess:
net = get_inception_model()
tf.import_graph_def(net['graph_def'], name='inception')
g = tf.get_default_graph()
names = [op.name for op in g.get_operations()]
x = g.get_tensor_by_name(names[0] + ':0')
softmax = g.get_tensor_by_name(names[-3] + ':0')
from skimage import data
img = preprocess(data.coffee())[np.newaxis]
res = np.squeeze(softmax.eval(feed_dict={x: img}))
print([(res[idx], net['labels'][idx])
for idx in res.argsort()[-5:][::-1]])
"""Let's visualize the network's gradient activation
when backpropagated to the original input image. This
is effectively telling us which pixels contribute to the
predicted class or given neuron"""
pools = [name for name in names if 'pool' in name.split('/')[-1]]
fig, axs = plt.subplots(1, len(pools))
for pool_i, poolname in enumerate(pools):
pool = g.get_tensor_by_name(poolname + ':0')
pool.get_shape()
neuron = tf.reduce_max(pool, 1)
saliency = tf.gradients(neuron, x)
neuron_idx = tf.arg_max(pool, 1)
this_res = sess.run([saliency[0], neuron_idx],
feed_dict={x: img})
grad = this_res[0][0] / np.max(np.abs(this_res[0]))
axs[pool_i].imshow((grad * 128 + 128).astype(np.uint8))
axs[pool_i].set_title(poolname)
|
apache-2.0
|
zorojean/scikit-learn
|
sklearn/cross_decomposition/tests/test_pls.py
|
215
|
11427
|
import numpy as np
from sklearn.utils.testing import (assert_array_almost_equal,
assert_array_equal, assert_true, assert_raise_message)
from sklearn.datasets import load_linnerud
from sklearn.cross_decomposition import pls_
from nose.tools import assert_equal
def test_pls():
d = load_linnerud()
X = d.data
Y = d.target
# 1) Canonical (symmetric) PLS (PLS 2 blocks canonical mode A)
# ===========================================================
# Compare 2 algo.: nipals vs. svd
# ------------------------------
pls_bynipals = pls_.PLSCanonical(n_components=X.shape[1])
pls_bynipals.fit(X, Y)
pls_bysvd = pls_.PLSCanonical(algorithm="svd", n_components=X.shape[1])
pls_bysvd.fit(X, Y)
# check equalities of loading (up to the sign of the second column)
assert_array_almost_equal(
pls_bynipals.x_loadings_,
np.multiply(pls_bysvd.x_loadings_, np.array([1, -1, 1])), decimal=5,
err_msg="nipals and svd implementation lead to different x loadings")
assert_array_almost_equal(
pls_bynipals.y_loadings_,
np.multiply(pls_bysvd.y_loadings_, np.array([1, -1, 1])), decimal=5,
err_msg="nipals and svd implementation lead to different y loadings")
# Check PLS properties (with n_components=X.shape[1])
# ---------------------------------------------------
plsca = pls_.PLSCanonical(n_components=X.shape[1])
plsca.fit(X, Y)
T = plsca.x_scores_
P = plsca.x_loadings_
Wx = plsca.x_weights_
U = plsca.y_scores_
Q = plsca.y_loadings_
Wy = plsca.y_weights_
def check_ortho(M, err_msg):
K = np.dot(M.T, M)
assert_array_almost_equal(K, np.diag(np.diag(K)), err_msg=err_msg)
# Orthogonality of weights
# ~~~~~~~~~~~~~~~~~~~~~~~~
check_ortho(Wx, "x weights are not orthogonal")
check_ortho(Wy, "y weights are not orthogonal")
# Orthogonality of latent scores
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
check_ortho(T, "x scores are not orthogonal")
check_ortho(U, "y scores are not orthogonal")
# Check X = TP' and Y = UQ' (with (p == q) components)
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# center scale X, Y
Xc, Yc, x_mean, y_mean, x_std, y_std =\
pls_._center_scale_xy(X.copy(), Y.copy(), scale=True)
assert_array_almost_equal(Xc, np.dot(T, P.T), err_msg="X != TP'")
assert_array_almost_equal(Yc, np.dot(U, Q.T), err_msg="Y != UQ'")
# Check that rotations on training data lead to scores
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Xr = plsca.transform(X)
assert_array_almost_equal(Xr, plsca.x_scores_,
err_msg="rotation on X failed")
Xr, Yr = plsca.transform(X, Y)
assert_array_almost_equal(Xr, plsca.x_scores_,
err_msg="rotation on X failed")
assert_array_almost_equal(Yr, plsca.y_scores_,
err_msg="rotation on Y failed")
# "Non regression test" on canonical PLS
# --------------------------------------
# The results were checked against the R-package plspm
pls_ca = pls_.PLSCanonical(n_components=X.shape[1])
pls_ca.fit(X, Y)
x_weights = np.array(
[[-0.61330704, 0.25616119, -0.74715187],
[-0.74697144, 0.11930791, 0.65406368],
[-0.25668686, -0.95924297, -0.11817271]])
assert_array_almost_equal(pls_ca.x_weights_, x_weights)
x_rotations = np.array(
[[-0.61330704, 0.41591889, -0.62297525],
[-0.74697144, 0.31388326, 0.77368233],
[-0.25668686, -0.89237972, -0.24121788]])
assert_array_almost_equal(pls_ca.x_rotations_, x_rotations)
y_weights = np.array(
[[+0.58989127, 0.7890047, 0.1717553],
[+0.77134053, -0.61351791, 0.16920272],
[-0.23887670, -0.03267062, 0.97050016]])
assert_array_almost_equal(pls_ca.y_weights_, y_weights)
y_rotations = np.array(
[[+0.58989127, 0.7168115, 0.30665872],
[+0.77134053, -0.70791757, 0.19786539],
[-0.23887670, -0.00343595, 0.94162826]])
assert_array_almost_equal(pls_ca.y_rotations_, y_rotations)
# 2) Regression PLS (PLS2): "Non regression test"
# ===============================================
# The results were checked against the R-packages plspm, misOmics and pls
pls_2 = pls_.PLSRegression(n_components=X.shape[1])
pls_2.fit(X, Y)
x_weights = np.array(
[[-0.61330704, -0.00443647, 0.78983213],
[-0.74697144, -0.32172099, -0.58183269],
[-0.25668686, 0.94682413, -0.19399983]])
assert_array_almost_equal(pls_2.x_weights_, x_weights)
x_loadings = np.array(
[[-0.61470416, -0.24574278, 0.78983213],
[-0.65625755, -0.14396183, -0.58183269],
[-0.51733059, 1.00609417, -0.19399983]])
assert_array_almost_equal(pls_2.x_loadings_, x_loadings)
y_weights = np.array(
[[+0.32456184, 0.29892183, 0.20316322],
[+0.42439636, 0.61970543, 0.19320542],
[-0.13143144, -0.26348971, -0.17092916]])
assert_array_almost_equal(pls_2.y_weights_, y_weights)
y_loadings = np.array(
[[+0.32456184, 0.29892183, 0.20316322],
[+0.42439636, 0.61970543, 0.19320542],
[-0.13143144, -0.26348971, -0.17092916]])
assert_array_almost_equal(pls_2.y_loadings_, y_loadings)
# 3) Another non-regression test of Canonical PLS on random dataset
# =================================================================
# The results were checked against the R-package plspm
n = 500
p_noise = 10
q_noise = 5
# 2 latents vars:
np.random.seed(11)
l1 = np.random.normal(size=n)
l2 = np.random.normal(size=n)
latents = np.array([l1, l1, l2, l2]).T
X = latents + np.random.normal(size=4 * n).reshape((n, 4))
Y = latents + np.random.normal(size=4 * n).reshape((n, 4))
X = np.concatenate(
(X, np.random.normal(size=p_noise * n).reshape(n, p_noise)), axis=1)
Y = np.concatenate(
(Y, np.random.normal(size=q_noise * n).reshape(n, q_noise)), axis=1)
np.random.seed(None)
pls_ca = pls_.PLSCanonical(n_components=3)
pls_ca.fit(X, Y)
x_weights = np.array(
[[0.65803719, 0.19197924, 0.21769083],
[0.7009113, 0.13303969, -0.15376699],
[0.13528197, -0.68636408, 0.13856546],
[0.16854574, -0.66788088, -0.12485304],
[-0.03232333, -0.04189855, 0.40690153],
[0.1148816, -0.09643158, 0.1613305],
[0.04792138, -0.02384992, 0.17175319],
[-0.06781, -0.01666137, -0.18556747],
[-0.00266945, -0.00160224, 0.11893098],
[-0.00849528, -0.07706095, 0.1570547],
[-0.00949471, -0.02964127, 0.34657036],
[-0.03572177, 0.0945091, 0.3414855],
[0.05584937, -0.02028961, -0.57682568],
[0.05744254, -0.01482333, -0.17431274]])
assert_array_almost_equal(pls_ca.x_weights_, x_weights)
x_loadings = np.array(
[[0.65649254, 0.1847647, 0.15270699],
[0.67554234, 0.15237508, -0.09182247],
[0.19219925, -0.67750975, 0.08673128],
[0.2133631, -0.67034809, -0.08835483],
[-0.03178912, -0.06668336, 0.43395268],
[0.15684588, -0.13350241, 0.20578984],
[0.03337736, -0.03807306, 0.09871553],
[-0.06199844, 0.01559854, -0.1881785],
[0.00406146, -0.00587025, 0.16413253],
[-0.00374239, -0.05848466, 0.19140336],
[0.00139214, -0.01033161, 0.32239136],
[-0.05292828, 0.0953533, 0.31916881],
[0.04031924, -0.01961045, -0.65174036],
[0.06172484, -0.06597366, -0.1244497]])
assert_array_almost_equal(pls_ca.x_loadings_, x_loadings)
y_weights = np.array(
[[0.66101097, 0.18672553, 0.22826092],
[0.69347861, 0.18463471, -0.23995597],
[0.14462724, -0.66504085, 0.17082434],
[0.22247955, -0.6932605, -0.09832993],
[0.07035859, 0.00714283, 0.67810124],
[0.07765351, -0.0105204, -0.44108074],
[-0.00917056, 0.04322147, 0.10062478],
[-0.01909512, 0.06182718, 0.28830475],
[0.01756709, 0.04797666, 0.32225745]])
assert_array_almost_equal(pls_ca.y_weights_, y_weights)
y_loadings = np.array(
[[0.68568625, 0.1674376, 0.0969508],
[0.68782064, 0.20375837, -0.1164448],
[0.11712173, -0.68046903, 0.12001505],
[0.17860457, -0.6798319, -0.05089681],
[0.06265739, -0.0277703, 0.74729584],
[0.0914178, 0.00403751, -0.5135078],
[-0.02196918, -0.01377169, 0.09564505],
[-0.03288952, 0.09039729, 0.31858973],
[0.04287624, 0.05254676, 0.27836841]])
assert_array_almost_equal(pls_ca.y_loadings_, y_loadings)
# Orthogonality of weights
# ~~~~~~~~~~~~~~~~~~~~~~~~
check_ortho(pls_ca.x_weights_, "x weights are not orthogonal")
check_ortho(pls_ca.y_weights_, "y weights are not orthogonal")
# Orthogonality of latent scores
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
check_ortho(pls_ca.x_scores_, "x scores are not orthogonal")
check_ortho(pls_ca.y_scores_, "y scores are not orthogonal")
def test_PLSSVD():
    # Check that PLSSVD does not return all possible components, but just
    # the specified number
d = load_linnerud()
X = d.data
Y = d.target
n_components = 2
for clf in [pls_.PLSSVD, pls_.PLSRegression, pls_.PLSCanonical]:
pls = clf(n_components=n_components)
pls.fit(X, Y)
assert_equal(n_components, pls.y_scores_.shape[1])
def test_univariate_pls_regression():
# Ensure 1d Y is correctly interpreted
d = load_linnerud()
X = d.data
Y = d.target
clf = pls_.PLSRegression()
# Compare 1d to column vector
model1 = clf.fit(X, Y[:, 0]).coef_
model2 = clf.fit(X, Y[:, :1]).coef_
assert_array_almost_equal(model1, model2)
def test_predict_transform_copy():
# check that the "copy" keyword works
d = load_linnerud()
X = d.data
Y = d.target
clf = pls_.PLSCanonical()
X_copy = X.copy()
Y_copy = Y.copy()
clf.fit(X, Y)
# check that results are identical with copy
assert_array_almost_equal(clf.predict(X), clf.predict(X.copy(), copy=False))
assert_array_almost_equal(clf.transform(X), clf.transform(X.copy(), copy=False))
# check also if passing Y
assert_array_almost_equal(clf.transform(X, Y),
clf.transform(X.copy(), Y.copy(), copy=False))
# check that copy doesn't destroy
# we do want to check exact equality here
assert_array_equal(X_copy, X)
assert_array_equal(Y_copy, Y)
# also check that mean wasn't zero before (to make sure we didn't touch it)
assert_true(np.all(X.mean(axis=0) != 0))
def test_scale():
d = load_linnerud()
X = d.data
Y = d.target
# causes X[:, -1].std() to be zero
X[:, -1] = 1.0
for clf in [pls_.PLSCanonical(), pls_.PLSRegression(),
pls_.PLSSVD()]:
clf.set_params(scale=True)
clf.fit(X, Y)
def test_pls_errors():
d = load_linnerud()
X = d.data
Y = d.target
for clf in [pls_.PLSCanonical(), pls_.PLSRegression(),
pls_.PLSSVD()]:
clf.n_components = 4
assert_raise_message(ValueError, "Invalid number of components", clf.fit, X, Y)
|
bsd-3-clause
|
slizb/swag-bag
|
python/swagbag/precompute_colors.py
|
1
|
2390
|
import json
import pandas as pd
import numpy as np
# todo: scrape ncaa teams from here http://dynasties.operationsports.com/team-colors.php?sport=ncaa
# todo: hook in colormath
def hex_to_rgb(value):
lv = len(value)
rgb_list = [int(value[i:i + lv // 3], 16) for i in range(0, lv, lv // 3)]
rgb_str = ' '.join(str(x) for x in rgb_list)
return rgb_str
def add_hash_to_hex_codes(hex_list):
hashed = []
for i, unhashed in enumerate(hex_list):
hashed.append('#' + unhashed)
return hashed
def rgb_to_hex(rgb_string):
red, green, blue = unpack_rgb(rgb_string)
return '%02x%02x%02x' % (red, green, blue)
def unpack_rgb(rgb_string):
rgb_list = rgb_string.split()
red = int(rgb_list[0])
green = int(rgb_list[1])
blue = int(rgb_list[2])
return (red, green, blue)
def rgb_to_cmyk(rgb_string):
# formula taken from http://www.rapidtables.com/convert/color/rgb-to-cmyk.htm
red, green, blue = unpack_rgb(rgb_string)
r = red / 255
g = green / 255
b = blue / 255
if r == 0 and g == 0 and b == 0:
c = m = y = k = 0
else:
k = 1 - max(r, g, b)
c = (1 - r - k) / (1 - k) * 100
m = (1 - g - k) / (1 - k) * 100
y = (1 - b - k) / (1 - k) * 100
cmyk_string = ' '.join([str(c), str(m), str(y), str(k)])
return cmyk_string
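def example_color_conversions():
    # Illustrative round trip (not used by the pipeline): pure red.
    rgb = hex_to_rgb('ff0000')    # -> '255 0 0'
    hex_code = rgb_to_hex(rgb)    # -> 'ff0000'
    cmyk = rgb_to_cmyk(rgb)       # -> C=0, M=100, Y=100, K=0
    return rgb, hex_code, cmyk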
def fill_color_styles():
with open('../data/teams.json') as data_file:
data = json.load(data_file)
df = pd.DataFrame(data)
df['hex'] = np.nan
df['rgb'] = np.nan
df['cmyk'] = np.nan
for i, row in enumerate(df.colors):
# hex
try:
df.hex[i] = row['hex']
except KeyError:
hex_conversion = [rgb_to_hex(r) for r in row['rgb']]
df.hex[i] = hex_conversion
# rgb
try:
df.rgb[i] = row['rgb']
except KeyError:
rgb_conversion = [hex_to_rgb(h) for h in row['hex']]
df.rgb[i] = rgb_conversion
# cmyk
try:
df.cmyk[i] = row['cmyk']
except KeyError:
cmyk_conversion = [rgb_to_cmyk(r) for r in df.rgb[i]]
df.cmyk[i] = cmyk_conversion
df = df.drop('colors', axis=1)
df['hex'] = df['hex'].apply(lambda x: add_hash_to_hex_codes(x))
df.to_pickle('../data/team_color_frame.pkl')
if __name__ == "__main__":
fill_color_styles()
|
mit
|
nhuntwalker/astroML
|
book_figures/chapter6/fig_great_wall_KDE.py
|
3
|
5422
|
"""
Great Wall KDE
--------------
Figure 6.3
Kernel density estimation for galaxies within the SDSS "Great Wall." The
top-left panel shows points that are galaxies, projected by their spatial
locations (right ascension and distance determined from redshift measurement)
onto the equatorial plane (declination ~ 0 degrees). The remaining panels show
estimates of the density of these points using kernel density estimation with
a Gaussian kernel (upper right), a top-hat kernel (lower left), and an
exponential kernel (lower right). Compare also to figure 6.4.
"""
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.colors import LogNorm
from scipy.spatial import cKDTree
from scipy.stats import gaussian_kde
from astroML.datasets import fetch_great_wall
# Scikit-learn 0.14 added sklearn.neighbors.KernelDensity, which is a very
# fast kernel density estimator based on a KD Tree. We'll use this if
# available (and raise a warning if it isn't).
try:
from sklearn.neighbors import KernelDensity
use_sklearn_KDE = True
except:
import warnings
warnings.warn("KDE will be removed in astroML version 0.3. Please "
"upgrade to scikit-learn 0.14+ and use "
"sklearn.neighbors.KernelDensity.", DeprecationWarning)
from astroML.density_estimation import KDE
use_sklearn_KDE = False
#----------------------------------------------------------------------
# This function adjusts matplotlib settings for a uniform feel in the textbook.
# Note that with usetex=True, fonts are rendered with LaTeX. This may
# result in an error if LaTeX is not installed on your system. In that case,
# you can set usetex to False.
from astroML.plotting import setup_text_plots
setup_text_plots(fontsize=8, usetex=True)
#------------------------------------------------------------
# Fetch the great wall data
X = fetch_great_wall()
#------------------------------------------------------------
# Create the grid on which to evaluate the results
Nx = 50
Ny = 125
xmin, xmax = (-375, -175)
ymin, ymax = (-300, 200)
#------------------------------------------------------------
# Evaluate for several models
Xgrid = np.vstack(map(np.ravel, np.meshgrid(np.linspace(xmin, xmax, Nx),
np.linspace(ymin, ymax, Ny)))).T
kernels = ['gaussian', 'tophat', 'exponential']
dens = []
if use_sklearn_KDE:
kde1 = KernelDensity(5, kernel='gaussian')
log_dens1 = kde1.fit(X).score_samples(Xgrid)
dens1 = X.shape[0] * np.exp(log_dens1).reshape((Ny, Nx))
kde2 = KernelDensity(5, kernel='tophat')
log_dens2 = kde2.fit(X).score_samples(Xgrid)
dens2 = X.shape[0] * np.exp(log_dens2).reshape((Ny, Nx))
kde3 = KernelDensity(5, kernel='exponential')
log_dens3 = kde3.fit(X).score_samples(Xgrid)
dens3 = X.shape[0] * np.exp(log_dens3).reshape((Ny, Nx))
else:
kde1 = KDE(metric='gaussian', h=5)
dens1 = kde1.fit(X).eval(Xgrid).reshape((Ny, Nx))
kde2 = KDE(metric='tophat', h=5)
dens2 = kde2.fit(X).eval(Xgrid).reshape((Ny, Nx))
kde3 = KDE(metric='exponential', h=5)
dens3 = kde3.fit(X).eval(Xgrid).reshape((Ny, Nx))
#------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(5, 2.2))
fig.subplots_adjust(left=0.12, right=0.95, bottom=0.2, top=0.9,
hspace=0.01, wspace=0.01)
# First plot: scatter the points
ax1 = plt.subplot(221, aspect='equal')
ax1.scatter(X[:, 1], X[:, 0], s=1, lw=0, c='k')
ax1.text(0.95, 0.9, "input", ha='right', va='top',
transform=ax1.transAxes,
bbox=dict(boxstyle='round', ec='k', fc='w'))
# Second plot: gaussian kernel
ax2 = plt.subplot(222, aspect='equal')
ax2.imshow(dens1.T, origin='lower', norm=LogNorm(),
extent=(ymin, ymax, xmin, xmax), cmap=plt.cm.binary)
ax2.text(0.95, 0.9, "Gaussian $(h=5)$", ha='right', va='top',
transform=ax2.transAxes,
bbox=dict(boxstyle='round', ec='k', fc='w'))
# Third plot: top-hat kernel
ax3 = plt.subplot(223, aspect='equal')
ax3.imshow(dens2.T, origin='lower', norm=LogNorm(),
extent=(ymin, ymax, xmin, xmax), cmap=plt.cm.binary)
ax3.text(0.95, 0.9, "top-hat $(h=5)$", ha='right', va='top',
transform=ax3.transAxes,
bbox=dict(boxstyle='round', ec='k', fc='w'))
ax3.images[0].set_clim(0.01, 0.8)
# Fourth plot: exponential kernel
ax4 = plt.subplot(224, aspect='equal')
ax4.imshow(dens3.T, origin='lower', norm=LogNorm(),
extent=(ymin, ymax, xmin, xmax), cmap=plt.cm.binary)
ax4.text(0.95, 0.9, "exponential $(h=5)$", ha='right', va='top',
transform=ax4.transAxes,
bbox=dict(boxstyle='round', ec='k', fc='w'))
for ax in [ax1, ax2, ax3, ax4]:
ax.set_xlim(ymin, ymax - 0.01)
ax.set_ylim(xmin, xmax)
for ax in [ax1, ax2]:
ax.xaxis.set_major_formatter(plt.NullFormatter())
for ax in [ax3, ax4]:
ax.set_xlabel('$y$ (Mpc)')
for ax in [ax2, ax4]:
ax.yaxis.set_major_formatter(plt.NullFormatter())
for ax in [ax1, ax3]:
ax.set_ylabel('$x$ (Mpc)')
plt.show()
|
bsd-2-clause
|
mfjb/scikit-learn
|
sklearn/learning_curve.py
|
27
|
13650
|
"""Utilities to evaluate models with respect to a variable
"""
# Author: Alexander Fabisch <[email protected]>
#
# License: BSD 3 clause
import warnings
import numpy as np
from .base import is_classifier, clone
from .cross_validation import check_cv
from .externals.joblib import Parallel, delayed
from .cross_validation import _safe_split, _score, _fit_and_score
from .metrics.scorer import check_scoring
from .utils import indexable
from .utils.fixes import astype
__all__ = ['learning_curve', 'validation_curve']
def learning_curve(estimator, X, y, train_sizes=np.linspace(0.1, 1.0, 5),
cv=None, scoring=None, exploit_incremental_learning=False,
n_jobs=1, pre_dispatch="all", verbose=0):
"""Learning curve.
Determines cross-validated training and test scores for different training
set sizes.
A cross-validation generator splits the whole dataset k times in training
and test data. Subsets of the training set with varying sizes will be used
to train the estimator and a score for each training subset size and the
test set will be computed. Afterwards, the scores will be averaged over
all k runs for each training subset size.
Read more in the :ref:`User Guide <learning_curves>`.
Parameters
----------
estimator : object type that implements the "fit" and "predict" methods
An object of that type which is cloned for each validation.
X : array-like, shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape (n_samples) or (n_samples, n_features), optional
Target relative to X for classification or regression;
None for unsupervised learning.
train_sizes : array-like, shape (n_ticks,), dtype float or int
Relative or absolute numbers of training examples that will be used to
generate the learning curve. If the dtype is float, it is regarded as a
fraction of the maximum size of the training set (that is determined
by the selected validation method), i.e. it has to be within (0, 1].
Otherwise it is interpreted as absolute sizes of the training sets.
        Note that for classification the number of samples usually has to
        be big enough to contain at least one sample from each class.
(default: np.linspace(0.1, 1.0, 5))
cv : integer or cross-validation generator, optional, default=3
A cross-validation generator to use. If int, determines the number
of folds in StratifiedKFold if estimator is a classifier and the
target y is binary or multiclass, or the number of folds in KFold
otherwise.
Specific cross-validation objects can be passed, see
sklearn.cross_validation module for the list of possible objects.
scoring : string, callable or None, optional, default: None
A string (see model evaluation documentation) or
a scorer callable object / function with signature
``scorer(estimator, X, y)``.
exploit_incremental_learning : boolean, optional, default: False
If the estimator supports incremental learning, this will be
used to speed up fitting for different training set sizes.
n_jobs : integer, optional
Number of jobs to run in parallel (default 1).
pre_dispatch : integer or string, optional
Number of predispatched jobs for parallel execution (default is
all). The option can reduce the allocated memory. The string can
be an expression like '2*n_jobs'.
verbose : integer, optional
Controls the verbosity: the higher, the more messages.
Returns
-------
train_sizes_abs : array, shape = (n_unique_ticks,), dtype int
Numbers of training examples that has been used to generate the
learning curve. Note that the number of ticks might be less
than n_ticks because duplicate entries will be removed.
train_scores : array, shape (n_ticks, n_cv_folds)
Scores on training sets.
test_scores : array, shape (n_ticks, n_cv_folds)
Scores on test set.
Notes
-----
See :ref:`examples/model_selection/plot_learning_curve.py
<example_model_selection_plot_learning_curve.py>`
"""
if exploit_incremental_learning and not hasattr(estimator, "partial_fit"):
raise ValueError("An estimator must support the partial_fit interface "
"to exploit incremental learning")
X, y = indexable(X, y)
# Make a list since we will be iterating multiple times over the folds
cv = list(check_cv(cv, X, y, classifier=is_classifier(estimator)))
scorer = check_scoring(estimator, scoring=scoring)
# HACK as long as boolean indices are allowed in cv generators
if cv[0][0].dtype == bool:
new_cv = []
for i in range(len(cv)):
new_cv.append((np.nonzero(cv[i][0])[0], np.nonzero(cv[i][1])[0]))
cv = new_cv
n_max_training_samples = len(cv[0][0])
# Because the lengths of folds can be significantly different, it is
# not guaranteed that we use all of the available training data when we
# use the first 'n_max_training_samples' samples.
train_sizes_abs = _translate_train_sizes(train_sizes,
n_max_training_samples)
n_unique_ticks = train_sizes_abs.shape[0]
if verbose > 0:
print("[learning_curve] Training set sizes: " + str(train_sizes_abs))
parallel = Parallel(n_jobs=n_jobs, pre_dispatch=pre_dispatch,
verbose=verbose)
if exploit_incremental_learning:
classes = np.unique(y) if is_classifier(estimator) else None
out = parallel(delayed(_incremental_fit_estimator)(
clone(estimator), X, y, classes, train, test, train_sizes_abs,
scorer, verbose) for train, test in cv)
else:
out = parallel(delayed(_fit_and_score)(
clone(estimator), X, y, scorer, train[:n_train_samples], test,
verbose, parameters=None, fit_params=None, return_train_score=True)
for train, test in cv for n_train_samples in train_sizes_abs)
out = np.array(out)[:, :2]
n_cv_folds = out.shape[0] // n_unique_ticks
out = out.reshape(n_cv_folds, n_unique_ticks, 2)
out = np.asarray(out).transpose((2, 1, 0))
return train_sizes_abs, out[0], out[1]
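def _example_learning_curve():
    """Minimal usage sketch (illustrative, not part of scikit-learn): the
    per-fold scores returned by learning_curve are usually averaged across
    folds before being plotted against the absolute training-set sizes."""
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    iris = load_iris()
    sizes, train_scores, test_scores = learning_curve(
        LogisticRegression(), iris.data, iris.target,
        train_sizes=np.linspace(0.1, 1.0, 5), cv=5)
    return sizes, train_scores.mean(axis=1), test_scores.mean(axis=1)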
def _translate_train_sizes(train_sizes, n_max_training_samples):
"""Determine absolute sizes of training subsets and validate 'train_sizes'.
Examples:
_translate_train_sizes([0.5, 1.0], 10) -> [5, 10]
_translate_train_sizes([5, 10], 10) -> [5, 10]
Parameters
----------
train_sizes : array-like, shape (n_ticks,), dtype float or int
Numbers of training examples that will be used to generate the
learning curve. If the dtype is float, it is regarded as a
fraction of 'n_max_training_samples', i.e. it has to be within (0, 1].
n_max_training_samples : int
Maximum number of training samples (upper bound of 'train_sizes').
Returns
-------
train_sizes_abs : array, shape (n_unique_ticks,), dtype int
Numbers of training examples that will be used to generate the
learning curve. Note that the number of ticks might be less
than n_ticks because duplicate entries will be removed.
"""
train_sizes_abs = np.asarray(train_sizes)
n_ticks = train_sizes_abs.shape[0]
n_min_required_samples = np.min(train_sizes_abs)
n_max_required_samples = np.max(train_sizes_abs)
if np.issubdtype(train_sizes_abs.dtype, np.float):
if n_min_required_samples <= 0.0 or n_max_required_samples > 1.0:
raise ValueError("train_sizes has been interpreted as fractions "
"of the maximum number of training samples and "
"must be within (0, 1], but is within [%f, %f]."
% (n_min_required_samples,
n_max_required_samples))
train_sizes_abs = astype(train_sizes_abs * n_max_training_samples,
dtype=np.int, copy=False)
train_sizes_abs = np.clip(train_sizes_abs, 1,
n_max_training_samples)
else:
if (n_min_required_samples <= 0 or
n_max_required_samples > n_max_training_samples):
raise ValueError("train_sizes has been interpreted as absolute "
"numbers of training samples and must be within "
"(0, %d], but is within [%d, %d]."
% (n_max_training_samples,
n_min_required_samples,
n_max_required_samples))
train_sizes_abs = np.unique(train_sizes_abs)
if n_ticks > train_sizes_abs.shape[0]:
warnings.warn("Removed duplicate entries from 'train_sizes'. Number "
"of ticks will be less than than the size of "
"'train_sizes' %d instead of %d)."
% (train_sizes_abs.shape[0], n_ticks), RuntimeWarning)
return train_sizes_abs
def _incremental_fit_estimator(estimator, X, y, classes, train, test,
train_sizes, scorer, verbose):
"""Train estimator on training subsets incrementally and compute scores."""
train_scores, test_scores = [], []
partitions = zip(train_sizes, np.split(train, train_sizes)[:-1])
for n_train_samples, partial_train in partitions:
train_subset = train[:n_train_samples]
X_train, y_train = _safe_split(estimator, X, y, train_subset)
X_partial_train, y_partial_train = _safe_split(estimator, X, y,
partial_train)
X_test, y_test = _safe_split(estimator, X, y, test, train_subset)
if y_partial_train is None:
estimator.partial_fit(X_partial_train, classes=classes)
else:
estimator.partial_fit(X_partial_train, y_partial_train,
classes=classes)
train_scores.append(_score(estimator, X_train, y_train, scorer))
test_scores.append(_score(estimator, X_test, y_test, scorer))
return np.array((train_scores, test_scores)).T
def validation_curve(estimator, X, y, param_name, param_range, cv=None,
scoring=None, n_jobs=1, pre_dispatch="all", verbose=0):
"""Validation curve.
Determine training and test scores for varying parameter values.
Compute scores for an estimator with different values of a specified
parameter. This is similar to grid search with one parameter. However, this
will also compute training scores and is merely a utility for plotting the
results.
Read more in the :ref:`User Guide <validation_curve>`.
Parameters
----------
estimator : object type that implements the "fit" and "predict" methods
An object of that type which is cloned for each validation.
X : array-like, shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape (n_samples) or (n_samples, n_features), optional
Target relative to X for classification or regression;
None for unsupervised learning.
param_name : string
Name of the parameter that will be varied.
param_range : array-like, shape (n_values,)
The values of the parameter that will be evaluated.
cv : integer, cross-validation generator, optional
If an integer is passed, it is the number of folds (defaults to 3).
Specific cross-validation objects can be passed, see
sklearn.cross_validation module for the list of possible objects
scoring : string, callable or None, optional, default: None
A string (see model evaluation documentation) or
a scorer callable object / function with signature
``scorer(estimator, X, y)``.
n_jobs : integer, optional
Number of jobs to run in parallel (default 1).
pre_dispatch : integer or string, optional
Number of predispatched jobs for parallel execution (default is
all). The option can reduce the allocated memory. The string can
be an expression like '2*n_jobs'.
verbose : integer, optional
Controls the verbosity: the higher, the more messages.
Returns
-------
train_scores : array, shape (n_ticks, n_cv_folds)
Scores on training sets.
test_scores : array, shape (n_ticks, n_cv_folds)
Scores on test set.
Notes
-----
See
:ref:`examples/model_selection/plot_validation_curve.py
<example_model_selection_plot_validation_curve.py>`
"""
X, y = indexable(X, y)
cv = check_cv(cv, X, y, classifier=is_classifier(estimator))
scorer = check_scoring(estimator, scoring=scoring)
parallel = Parallel(n_jobs=n_jobs, pre_dispatch=pre_dispatch,
verbose=verbose)
out = parallel(delayed(_fit_and_score)(
estimator, X, y, scorer, train, test, verbose,
parameters={param_name: v}, fit_params=None, return_train_score=True)
for train, test in cv for v in param_range)
out = np.asarray(out)[:, :2]
n_params = len(param_range)
n_cv_folds = out.shape[0] // n_params
out = out.reshape(n_cv_folds, n_params, 2).transpose((2, 1, 0))
return out[0], out[1]
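def _example_validation_curve():
    """Minimal usage sketch (illustrative, not part of scikit-learn): sweep a
    single hyper-parameter and compare the mean train/test scores per value."""
    from sklearn.datasets import load_iris
    from sklearn.svm import SVC
    iris = load_iris()
    param_range = np.logspace(-6, -1, 5)
    train_scores, test_scores = validation_curve(
        SVC(), iris.data, iris.target, param_name="gamma",
        param_range=param_range, cv=5)
    return param_range, train_scores.mean(axis=1), test_scores.mean(axis=1)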
|
bsd-3-clause
|
fraricci/pymatgen
|
pymatgen/apps/battery/plotter.py
|
5
|
3385
|
# coding: utf-8
# Copyright (c) Pymatgen Development Team.
# Distributed under the terms of the MIT License.
"""
This module provides plotting capabilities for battery related applications.
"""
__author__ = "Shyue Ping Ong"
__copyright__ = "Copyright 2012, The Materials Project"
__version__ = "0.1"
__maintainer__ = "Shyue Ping Ong"
__email__ = "[email protected]"
__date__ = "Jul 12, 2012"
from collections import OrderedDict
from pymatgen.util.plotting import pretty_plot
class VoltageProfilePlotter:
"""
A plotter to make voltage profile plots for batteries.
Args:
xaxis: The quantity to use as the xaxis. Can be either capacity (the
default), or the frac_x.
"""
def __init__(self, xaxis="capacity"):
self._electrodes = OrderedDict()
self.xaxis = xaxis
def add_electrode(self, electrode, label=None):
"""
Add an electrode to the plot.
Args:
electrode: An electrode. All electrodes satisfying the
AbstractElectrode interface should work.
label: A label for the electrode. If None, defaults to a counting
system, i.e. 'Electrode 1', 'Electrode 2', ...
"""
if not label:
label = "Electrode {}".format(len(self._electrodes) + 1)
self._electrodes[label] = electrode
def get_plot_data(self, electrode):
x = []
y = []
cap = 0
most_discharged = electrode[-1].frac_discharge
norm = most_discharged / (1 - most_discharged)
for vpair in electrode:
if self.xaxis == "capacity":
x.append(cap)
cap += vpair.mAh / electrode.normalization_mass
x.append(cap)
else:
x.append(vpair.frac_charge / (1 - vpair.frac_charge) / norm)
x.append(vpair.frac_discharge / (1 - vpair.frac_discharge)
/ norm)
y.extend([vpair.voltage] * 2)
x.append(x[-1])
y.append(0)
return x, y
def get_plot(self, width=8, height=8):
"""
Returns a plot object.
Args:
width: Width of the plot. Defaults to 8 in.
            height: Height of the plot. Defaults to 8 in.
Returns:
A matplotlib plot object.
"""
plt = pretty_plot(width, height)
for label, electrode in self._electrodes.items():
(x, y) = self.get_plot_data(electrode)
plt.plot(x, y, '-', linewidth=2, label=label)
plt.legend()
if self.xaxis == "capacity":
plt.xlabel('Capacity (mAh/g)')
else:
plt.xlabel('Fraction')
plt.ylabel('Voltage (V)')
plt.tight_layout()
return plt
def show(self, width=8, height=6):
"""
Show the voltage profile plot.
Args:
width: Width of the plot. Defaults to 8 in.
height: Height of the plot. Defaults to 6 in.
"""
self.get_plot(width, height).show()
def save(self, filename, image_format="eps", width=8, height=6):
"""
Save the plot to an image file.
Args:
filename: Filename to save to.
image_format: Format to save to. Defaults to eps.
"""
self.get_plot(width, height).savefig(filename, format=image_format)
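def _example_voltage_profile(electrode):
    """Minimal usage sketch (illustrative): ``electrode`` is assumed to be any
    object satisfying the AbstractElectrode interface, e.g. an insertion
    electrode built from computed entries."""
    plotter = VoltageProfilePlotter(xaxis="capacity")
    plotter.add_electrode(electrode, label="Electrode 1")
    plotter.save("voltage_profile.png", image_format="png")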
|
mit
|
laurent-george/bokeh
|
examples/compat/mpl/polycollection.py
|
34
|
1276
|
from matplotlib.collections import PolyCollection
import matplotlib.pyplot as plt
import numpy as np
from bokeh import mpl
from bokeh.plotting import output_file, show
# Generate data. In this case, we'll make a bunch of center-points and generate
# vertices by subtracting random offsets from those center-points
numpoly, numverts = 100, 4
centers = 100 * (np.random.random((numpoly, 2)) - 0.5)
offsets = 10 * (np.random.random((numverts, numpoly, 2)) - 0.5)
verts = centers + offsets
verts = np.swapaxes(verts, 0, 1)
# In your case, "verts" might be something like:
# verts = zip(zip(lon1, lat1), zip(lon2, lat2), ...)
# If "data" in your case is a numpy array, there are cleaner ways to reorder
# things to suit.
facecolors = ['red', 'green', 'blue', 'cyan', 'yellow', 'magenta', 'black']
edgecolors = ['cyan', 'yellow', 'magenta', 'black', 'red', 'green', 'blue']
widths = [5, 10, 20, 10, 5]
# Make the collection and add it to the plot.
col = PolyCollection(verts, facecolor=facecolors, edgecolor=edgecolors,
linewidth=widths, linestyle='--', alpha=0.5)
ax = plt.axes()
ax.add_collection(col)
plt.xlim([-60, 60])
plt.ylim([-60, 60])
plt.title("MPL-PolyCollection support in Bokeh")
output_file("polycollection.html")
show(mpl.to_bokeh())
|
bsd-3-clause
|
glennq/scikit-learn
|
sklearn/semi_supervised/label_propagation.py
|
39
|
16726
|
# coding=utf8
"""
Label propagation in the context of this module refers to a set of
semi-supervised classification algorithms. At a high level, these algorithms
work by forming a fully-connected graph between all points given and solving
for the steady-state distribution of labels at each point.
These algorithms perform very well in practice. The cost of running can be very
expensive, at approximately O(N^3) where N is the number of (labeled and
unlabeled) points. The theory (why they perform so well) is motivated by
intuitions from random walk algorithms and geometric relationships in the data.
For more information see the references below.
Model Features
--------------
Label clamping:
The algorithm tries to learn distributions of labels over the dataset. In the
"Hard Clamp" mode, the true ground labels are never allowed to change. They
are clamped into position. In the "Soft Clamp" mode, they are allowed some
wiggle room, but some alpha of their original value will always be retained.
Hard clamp is the same as soft clamping with alpha set to 1.
Kernel:
A function which projects a vector into some higher dimensional space. This
implementation supports RBF and KNN kernels. Using the RBF kernel generates
a dense matrix of size O(N^2). KNN kernel will generate a sparse matrix of
size O(k*N) which will run much faster. See the documentation for SVMs for
more info on kernels.
Examples
--------
>>> from sklearn import datasets
>>> from sklearn.semi_supervised import LabelPropagation
>>> label_prop_model = LabelPropagation()
>>> iris = datasets.load_iris()
>>> random_unlabeled_points = np.where(np.random.randint(0, 2,
... size=len(iris.target)))
>>> labels = np.copy(iris.target)
>>> labels[random_unlabeled_points] = -1
>>> label_prop_model.fit(iris.data, labels)
... # doctest: +NORMALIZE_WHITESPACE +ELLIPSIS
LabelPropagation(...)
Notes
-----
References:
[1] Yoshua Bengio, Olivier Delalleau, Nicolas Le Roux. In Semi-Supervised
Learning (2006), pp. 193-216
[2] Olivier Delalleau, Yoshua Bengio, Nicolas Le Roux. Efficient
Non-Parametric Function Induction in Semi-Supervised Learning. AISTAT 2005
"""
# Authors: Clay Woolam <[email protected]>
# License: BSD
from abc import ABCMeta, abstractmethod
import numpy as np
from scipy import sparse
from ..base import BaseEstimator, ClassifierMixin
from ..externals import six
from ..metrics.pairwise import rbf_kernel
from ..neighbors.unsupervised import NearestNeighbors
from ..utils.extmath import safe_sparse_dot
from ..utils.graph import graph_laplacian
from ..utils.multiclass import check_classification_targets
from ..utils.validation import check_X_y, check_is_fitted, check_array
# Helper functions
def _not_converged(y_truth, y_prediction, tol=1e-3):
"""basic convergence check"""
return np.abs(y_truth - y_prediction).sum() > tol
class BaseLabelPropagation(six.with_metaclass(ABCMeta, BaseEstimator,
ClassifierMixin)):
"""Base class for label propagation module.
Parameters
----------
kernel : {'knn', 'rbf', callable}
String identifier for kernel function to use or the kernel function
itself. Only 'rbf' and 'knn' strings are valid inputs. The function
passed should take two inputs, each of shape [n_samples, n_features],
and return a [n_samples, n_samples] shaped weight matrix
gamma : float
Parameter for rbf kernel
alpha : float
Clamping factor
max_iter : float
Change maximum number of iterations allowed
tol : float
Convergence tolerance: threshold to consider the system at steady
state
n_neighbors : integer > 0
Parameter for knn kernel
n_jobs : int, optional (default = 1)
The number of parallel jobs to run.
If ``-1``, then the number of jobs is set to the number of CPU cores.
"""
def __init__(self, kernel='rbf', gamma=20, n_neighbors=7,
alpha=1, max_iter=30, tol=1e-3, n_jobs=1):
self.max_iter = max_iter
self.tol = tol
# kernel parameters
self.kernel = kernel
self.gamma = gamma
self.n_neighbors = n_neighbors
# clamping factor
self.alpha = alpha
self.n_jobs = n_jobs
def _get_kernel(self, X, y=None):
if self.kernel == "rbf":
if y is None:
return rbf_kernel(X, X, gamma=self.gamma)
else:
return rbf_kernel(X, y, gamma=self.gamma)
elif self.kernel == "knn":
if self.nn_fit is None:
self.nn_fit = NearestNeighbors(self.n_neighbors,
n_jobs=self.n_jobs).fit(X)
if y is None:
return self.nn_fit.kneighbors_graph(self.nn_fit._fit_X,
self.n_neighbors,
mode='connectivity')
else:
return self.nn_fit.kneighbors(y, return_distance=False)
elif callable(self.kernel):
if y is None:
return self.kernel(X, X)
else:
return self.kernel(X, y)
else:
raise ValueError("%s is not a valid kernel. Only rbf and knn"
" or an explicit function "
" are supported at this time." % self.kernel)
@abstractmethod
def _build_graph(self):
raise NotImplementedError("Graph construction must be implemented"
" to fit a label propagation model.")
def predict(self, X):
"""Performs inductive inference across the model.
Parameters
----------
X : array_like, shape = [n_samples, n_features]
Returns
-------
y : array_like, shape = [n_samples]
Predictions for input data
"""
probas = self.predict_proba(X)
return self.classes_[np.argmax(probas, axis=1)].ravel()
def predict_proba(self, X):
"""Predict probability for each possible outcome.
Compute the probability estimates for each single sample in X
and each possible outcome seen during training (categorical
distribution).
Parameters
----------
X : array_like, shape = [n_samples, n_features]
Returns
-------
probabilities : array, shape = [n_samples, n_classes]
Normalized probability distributions across
class labels
"""
check_is_fitted(self, 'X_')
X_2d = check_array(X, accept_sparse=['csc', 'csr', 'coo', 'dok',
'bsr', 'lil', 'dia'])
weight_matrices = self._get_kernel(self.X_, X_2d)
if self.kernel == 'knn':
probabilities = []
for weight_matrix in weight_matrices:
ine = np.sum(self.label_distributions_[weight_matrix], axis=0)
probabilities.append(ine)
probabilities = np.array(probabilities)
else:
weight_matrices = weight_matrices.T
probabilities = np.dot(weight_matrices, self.label_distributions_)
normalizer = np.atleast_2d(np.sum(probabilities, axis=1)).T
probabilities /= normalizer
return probabilities
def fit(self, X, y):
"""Fit a semi-supervised label propagation model based
All the input data is provided matrix X (labeled and unlabeled)
and corresponding label matrix y with a dedicated marker value for
unlabeled samples.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
A {n_samples by n_samples} size matrix will be created from this
y : array_like, shape = [n_samples]
n_labeled_samples (unlabeled points are marked as -1)
All unlabeled samples will be transductively assigned labels
Returns
-------
self : returns an instance of self.
"""
X, y = check_X_y(X, y)
self.X_ = X
check_classification_targets(y)
# actual graph construction (implementations should override this)
graph_matrix = self._build_graph()
# label construction
# construct a categorical distribution for classification only
classes = np.unique(y)
classes = (classes[classes != -1])
self.classes_ = classes
n_samples, n_classes = len(y), len(classes)
y = np.asarray(y)
unlabeled = y == -1
clamp_weights = np.ones((n_samples, 1))
clamp_weights[unlabeled, 0] = self.alpha
# initialize distributions
self.label_distributions_ = np.zeros((n_samples, n_classes))
for label in classes:
self.label_distributions_[y == label, classes == label] = 1
y_static = np.copy(self.label_distributions_)
if self.alpha > 0.:
y_static *= 1 - self.alpha
y_static[unlabeled] = 0
l_previous = np.zeros((self.X_.shape[0], n_classes))
remaining_iter = self.max_iter
if sparse.isspmatrix(graph_matrix):
graph_matrix = graph_matrix.tocsr()
while (_not_converged(self.label_distributions_, l_previous, self.tol)
and remaining_iter > 1):
l_previous = self.label_distributions_
self.label_distributions_ = safe_sparse_dot(
graph_matrix, self.label_distributions_)
# clamp
self.label_distributions_ = np.multiply(
clamp_weights, self.label_distributions_) + y_static
remaining_iter -= 1
normalizer = np.sum(self.label_distributions_, axis=1)[:, np.newaxis]
self.label_distributions_ /= normalizer
# set the transduction item
transduction = self.classes_[np.argmax(self.label_distributions_,
axis=1)]
self.transduction_ = transduction.ravel()
self.n_iter_ = self.max_iter - remaining_iter
return self
class LabelPropagation(BaseLabelPropagation):
"""Label Propagation classifier
Read more in the :ref:`User Guide <label_propagation>`.
Parameters
----------
kernel : {'knn', 'rbf', callable}
String identifier for kernel function to use or the kernel function
itself. Only 'rbf' and 'knn' strings are valid inputs. The function
passed should take two inputs, each of shape [n_samples, n_features],
and return a [n_samples, n_samples] shaped weight matrix.
gamma : float
Parameter for rbf kernel
n_neighbors : integer > 0
Parameter for knn kernel
alpha : float
Clamping factor
max_iter : int
Maximum number of iterations allowed
tol : float
Convergence tolerance: threshold to consider the system at steady
state
Attributes
----------
X_ : array, shape = [n_samples, n_features]
Input array.
classes_ : array, shape = [n_classes]
The distinct labels used in classifying instances.
label_distributions_ : array, shape = [n_samples, n_classes]
Categorical distribution for each item.
transduction_ : array, shape = [n_samples]
Label assigned to each item via the transduction.
n_iter_ : int
Number of iterations run.
Examples
--------
>>> import numpy as np
>>> from sklearn import datasets
>>> from sklearn.semi_supervised import LabelPropagation
>>> label_prop_model = LabelPropagation()
>>> iris = datasets.load_iris()
>>> random_unlabeled_points = np.where(np.random.randint(0, 2,
... size=len(iris.target)))
>>> labels = np.copy(iris.target)
>>> labels[random_unlabeled_points] = -1
>>> label_prop_model.fit(iris.data, labels)
... # doctest: +NORMALIZE_WHITESPACE +ELLIPSIS
LabelPropagation(...)
References
----------
Xiaojin Zhu and Zoubin Ghahramani. Learning from labeled and unlabeled data
with label propagation. Technical Report CMU-CALD-02-107, Carnegie Mellon
University, 2002 http://pages.cs.wisc.edu/~jerryzhu/pub/CMU-CALD-02-107.pdf
See Also
--------
LabelSpreading : Alternate label propagation strategy more robust to noise
"""
def _build_graph(self):
"""Matrix representing a fully connected graph between each sample
This basic implementation creates a non-stochastic affinity matrix, so
class distributions will exceed 1 (normalization may be desired).
"""
if self.kernel == 'knn':
self.nn_fit = None
affinity_matrix = self._get_kernel(self.X_)
normalizer = affinity_matrix.sum(axis=0)
if sparse.isspmatrix(affinity_matrix):
affinity_matrix.data /= np.diag(np.array(normalizer))
else:
affinity_matrix /= normalizer[:, np.newaxis]
return affinity_matrix
class LabelSpreading(BaseLabelPropagation):
"""LabelSpreading model for semi-supervised learning
This model is similar to the basic Label Propagation algorithm,
but uses affinity matrix based on the normalized graph Laplacian
and soft clamping across the labels.
Read more in the :ref:`User Guide <label_propagation>`.
Parameters
----------
kernel : {'knn', 'rbf', callable}
String identifier for kernel function to use or the kernel function
itself. Only 'rbf' and 'knn' strings are valid inputs. The function
passed should take two inputs, each of shape [n_samples, n_features],
and return a [n_samples, n_samples] shaped weight matrix
gamma : float
parameter for rbf kernel
n_neighbors : integer > 0
parameter for knn kernel
alpha : float
clamping factor
max_iter : int
maximum number of iterations allowed
tol : float
Convergence tolerance: threshold to consider the system at steady
state
n_jobs : int, optional (default = 1)
The number of parallel jobs to run.
If ``-1``, then the number of jobs is set to the number of CPU cores.
Attributes
----------
X_ : array, shape = [n_samples, n_features]
Input array.
classes_ : array, shape = [n_classes]
The distinct labels used in classifying instances.
label_distributions_ : array, shape = [n_samples, n_classes]
Categorical distribution for each item.
transduction_ : array, shape = [n_samples]
Label assigned to each item via the transduction.
n_iter_ : int
Number of iterations run.
Examples
--------
>>> import numpy as np
>>> from sklearn import datasets
>>> from sklearn.semi_supervised import LabelSpreading
>>> label_prop_model = LabelSpreading()
>>> iris = datasets.load_iris()
>>> random_unlabeled_points = np.where(np.random.randint(0, 2,
... size=len(iris.target)))
>>> labels = np.copy(iris.target)
>>> labels[random_unlabeled_points] = -1
>>> label_prop_model.fit(iris.data, labels)
... # doctest: +NORMALIZE_WHITESPACE +ELLIPSIS
LabelSpreading(...)
References
----------
Dengyong Zhou, Olivier Bousquet, Thomas Navin Lal, Jason Weston,
Bernhard Schoelkopf. Learning with local and global consistency (2004)
http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.115.3219
See Also
--------
LabelPropagation : Unregularized graph based semi-supervised learning
"""
def __init__(self, kernel='rbf', gamma=20, n_neighbors=7, alpha=0.2,
max_iter=30, tol=1e-3, n_jobs=1):
# this one has different base parameters
super(LabelSpreading, self).__init__(kernel=kernel, gamma=gamma,
n_neighbors=n_neighbors,
alpha=alpha, max_iter=max_iter,
tol=tol,
n_jobs=n_jobs)
def _build_graph(self):
"""Graph matrix for Label Spreading computes the graph laplacian"""
# compute affinity matrix (or gram matrix)
if self.kernel == 'knn':
self.nn_fit = None
n_samples = self.X_.shape[0]
affinity_matrix = self._get_kernel(self.X_)
laplacian = graph_laplacian(affinity_matrix, normed=True)
laplacian = -laplacian
if sparse.isspmatrix(laplacian):
diag_mask = (laplacian.row == laplacian.col)
laplacian.data[diag_mask] = 0.0
else:
laplacian.flat[::n_samples + 1] = 0.0 # set diag to 0.0
return laplacian
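# ---------------------------------------------------------------------------
# Usage sketch (editor's addition, not part of scikit-learn): illustrates the
# kernel trade-off described in the module docstring. 'rbf' materialises a
# dense N x N affinity matrix, while 'knn' keeps an O(k * N) sparse graph and
# therefore scales to larger datasets. The helper name below is ours.
def _kernel_choice_sketch(X, y):
    """Fit the same data with both kernels; X, y follow the fit() contract
    (unlabeled samples marked with -1)."""
    dense_model = LabelPropagation(kernel='rbf', gamma=20).fit(X, y)
    sparse_model = LabelPropagation(kernel='knn', n_neighbors=7).fit(X, y)
    return dense_model, sparse_model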
|
bsd-3-clause
|
liangz0707/scikit-learn
|
examples/ensemble/plot_partial_dependence.py
|
249
|
4456
|
"""
========================
Partial Dependence Plots
========================
Partial dependence plots show the dependence between the target function [1]_
and a set of 'target' features, marginalizing over the
values of all other features (the complement features). Due to the limits
of human perception the size of the target feature set must be small (usually,
one or two) thus the target features are usually chosen among the most
important features
(see :attr:`~sklearn.ensemble.GradientBoostingRegressor.feature_importances_`).
This example shows how to obtain partial dependence plots from a
:class:`~sklearn.ensemble.GradientBoostingRegressor` trained on the California
housing dataset. The example is taken from [HTF2009]_.
The plot shows four one-way and one two-way partial dependence plots.
The target variables for the one-way PDP are:
median income (`MedInc`), avg. occupants per household (`AveOccup`),
median house age (`HouseAge`), and avg. rooms per household (`AveRooms`).
We can clearly see that the median house price shows a linear relationship
with the median income (top left) and that the house price drops when the
avg. occupants per household increases (top middle).
The top right plot shows that the house age in a district does not have
a strong influence on the (median) house price; nor does the average number
of rooms per household.
The tick marks on the x-axis represent the deciles of the feature values
in the training data.
Partial dependence plots with two target features enable us to visualize
interactions among them. The two-way partial dependence plot shows the
dependence of median house price on joint values of house age and avg.
occupants per household. We can clearly see an interaction between the
two features:
For an avg. occupancy greater than two, the house price is nearly independent
of the house age, whereas for values less than two there is a strong dependence
on age.
.. [HTF2009] T. Hastie, R. Tibshirani and J. Friedman,
"Elements of Statistical Learning Ed. 2", Springer, 2009.
.. [1] For classification you can think of it as the regression score before
the link function.
"""
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from sklearn.cross_validation import train_test_split
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.ensemble.partial_dependence import plot_partial_dependence
from sklearn.ensemble.partial_dependence import partial_dependence
from sklearn.datasets.california_housing import fetch_california_housing
# fetch California housing dataset
cal_housing = fetch_california_housing()
# split 80/20 train-test
X_train, X_test, y_train, y_test = train_test_split(cal_housing.data,
cal_housing.target,
test_size=0.2,
random_state=1)
names = cal_housing.feature_names
print('_' * 80)
print("Training GBRT...")
clf = GradientBoostingRegressor(n_estimators=100, max_depth=4,
learning_rate=0.1, loss='huber',
random_state=1)
clf.fit(X_train, y_train)
print("done.")
print('_' * 80)
print('Convenience plot with ``partial_dependence_plots``')
print()
features = [0, 5, 1, 2, (5, 1)]
fig, axs = plot_partial_dependence(clf, X_train, features, feature_names=names,
n_jobs=3, grid_resolution=50)
fig.suptitle('Partial dependence of house value on nonlocation features\n'
'for the California housing dataset')
plt.subplots_adjust(top=0.9) # tight_layout causes overlap with suptitle
print('_' * 80)
print('Custom 3d plot via ``partial_dependence``')
print()
fig = plt.figure()
target_feature = (1, 5)
pdp, (x_axis, y_axis) = partial_dependence(clf, target_feature,
X=X_train, grid_resolution=50)
XX, YY = np.meshgrid(x_axis, y_axis)
Z = pdp.T.reshape(XX.shape).T
ax = Axes3D(fig)
surf = ax.plot_surface(XX, YY, Z, rstride=1, cstride=1, cmap=plt.cm.BuPu)
ax.set_xlabel(names[target_feature[0]])
ax.set_ylabel(names[target_feature[1]])
ax.set_zlabel('Partial dependence')
# pretty init view
ax.view_init(elev=22, azim=122)
plt.colorbar(surf)
plt.suptitle('Partial dependence of house value on median age and '
'average occupancy')
plt.subplots_adjust(top=0.9)
plt.show()
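# ---------------------------------------------------------------------------
# Editor's sketch (not in the original example): what partial dependence means
# conceptually for a single feature -- clamp that feature to each grid value
# and average the model's predictions over the training data. The helper name
# and grid choice below are ours.
def manual_partial_dependence(model, X, feature_idx, grid):
    averaged = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature_idx] = value  # clamp the target feature
        averaged.append(model.predict(X_mod).mean())
    return np.array(averaged)
# e.g. manual_partial_dependence(clf, X_train, 0,
#          np.linspace(X_train[:, 0].min(), X_train[:, 0].max(), 20))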
|
bsd-3-clause
|
loopdigga98/quora-kaggle
|
models/keras_predict.py
|
1
|
6001
|
# coding: utf-8
# In[1]:
# fit word2vec on full/test questions
# fit tokenizer on full/test questions
# In[2]:
import pandas as pd
import numpy as np
import seaborn as sns
import nltk
import sklearn as sk
from sklearn.feature_extraction.text import TfidfVectorizer
from keras.models import Sequential
from keras.layers import Dense, Input, Flatten
from keras.layers import Conv1D, MaxPooling1D, Embedding
from keras.models import Model
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import load_model
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss, accuracy_score
from nltk.corpus import stopwords
import gensim, logging
import json
import os.path
from scipy.sparse import csr_matrix  # needed by load_sparse_csr below
MAX_NUM_WORDS = 125
# In[3]:
def submit(y_pred, test, filename):
sub = pd.DataFrame()
sub['test_id'] = test['test_id']
sub['is_duplicate'] = y_pred
sub.to_csv(filename, index=False)
def save_sparse_csr(filename,array):
np.savez(filename,data = array.data ,indices=array.indices,
indptr =array.indptr, shape=array.shape )
def load_sparse_csr(filename):
loader = np.load(filename)
return csr_matrix(( loader['data'], loader['indices'], loader['indptr']),
shape = loader['shape'])
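# Editor's sketch (not in the original notebook): a round-trip check for the
# two helpers above; the temporary filename is a placeholder.
def _sparse_roundtrip_demo(matrix, path='tmp_sparse.npz'):
    save_sparse_csr(path, matrix)
    return load_sparse_csr(path)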
def correct_dataset(dataset):
dataset.loc[(dataset['question1'] == dataset['question2']), 'is_duplicate'] = 1
return dataset
def process_dataset(dataset, apply_correction=False):  # flag renamed: it used to shadow correct_dataset()
dataset['question1'].fillna(' ', inplace=True)
dataset['question2'].fillna(' ', inplace=True)
#delete punctuation
dataset['question1'] = dataset['question1'].str.replace('[^\w\s]','')
dataset['question2'] = dataset['question2'].str.replace('[^\w\s]','')
#lower questions
dataset['question1'] = dataset['question1'].str.lower()
dataset['question2'] = dataset['question2'].str.lower()
#union questions
dataset['union'] = pd.Series(dataset['question1']).str.cat(dataset['question2'], sep=' ')
if apply_correction:
return correct_dataset(dataset)
else:
return dataset
def split_and_rem_stop_words(line):
cachedStopWords = stopwords.words("english")
return [word for word in line.split() if word not in cachedStopWords]
def create_word_to_vec(sentences, embedding_path, verbose=0, save=1, **params_for_w2v):
if verbose:
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
model = gensim.models.Word2Vec(sentences, **params_for_w2v)
if save:
model.save(embedding_path)
return model
def create_embeddings(sentences, embeddings_path='embeddings/embedding.npz',
verbose=0, **params):
"""
Generate word2vec embeddings from a batch of text and save the weight matrix
:param embeddings_path: where to save the embedding weights (as a numpy file)
"""
if verbose:
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
model = gensim.models.Word2Vec(sentences, **params)
weights = model.wv.syn0
np.save(open(embeddings_path, 'wb'), weights)
def load_vocab(vocab_path):
"""
Load word -> index and index -> word mappings
:param vocab_path: where the word-index map is saved
:return: word2idx, idx2word
"""
with open(vocab_path, 'r') as f:
data = json.loads(f.read())
word2idx = data
idx2word = dict([(v, k) for k, v in data.items()])
return word2idx, idx2word
def get_word2vec_embedding_layer(embeddings_path):
"""
Generate an embedding layer word2vec embeddings
:param embeddings_path: where the embeddings are saved (as a numpy file)
:return: the generated embedding layer
"""
weights = np.load(open(embeddings_path, 'rb'))
layer = Embedding(input_dim=weights.shape[0], output_dim=weights.shape[1], weights=[weights],
trainable=False)
return layer
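# Editor's sketch (not in the original notebook): how the two helpers above can
# be chained. The word2vec parameters (size, min_count) are illustrative only.
def _word2vec_embedding_demo(sentences, path='embeddings/embedding.npz'):
    create_embeddings(sentences, embeddings_path=path, size=100, min_count=1)
    return get_word2vec_embedding_layer(path)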
# In[4]:
#Load train
print 'Loading datasets'
if os.path.isfile('dataframes/train.h5'):
train = pd.read_pickle('dataframes/train.h5')
else:
train = pd.read_csv('../datasets/train.csv')
train = process_dataset(train)
train['union_splitted'] = train['union'].apply(lambda sentence: split_and_rem_stop_words(sentence))
train.to_pickle('dataframes/train.h5')
# In[5]:
# Load test
if all([os.path.isfile('dataframes/test_0.h5'), os.path.isfile('dataframes/test_1.h5'),
os.path.isfile('dataframes/test_2.h5'), os.path.isfile('dataframes/test_3.h5')]):
test = pd.read_csv('../datasets/test.csv')
test = process_dataset(test)
# test_0 = pd.read_pickle('dataframes/test_0.h5')
# test_1 = pd.read_pickle('dataframes/test_1.h5')
# test_2 = pd.read_pickle('dataframes/test_2.h5')
# test_3 = pd.read_pickle('dataframes/test_3.h5')
# test_0.columns = ['union_splitted']
# test_1.columns = ['union_splitted']
# test_2.columns = ['union_splitted']
# test_3.columns = ['union_splitted']
# test_full_splitted = test_0.append(
# test_1.append(
# test_2.append(
# test_3)))
# test['union_splitted'] = test_full_splitted['union_splitted'].values
else:
print 'Not enough files for test'
# In[ ]:
print 'Tokenizing'
#Tokenize test
tokenizer = Tokenizer(nb_words=MAX_NUM_WORDS, split=' ')
tokenizer.fit_on_texts(train['union'])
sequences = tokenizer.texts_to_sequences(test['union'])
word_index = tokenizer.word_index
print('Found %s unique tokens.' % len(word_index))
X_test = pad_sequences(sequences, maxlen=MAX_NUM_WORDS)
print('Shape of data tensor:', X_test.shape)
# In[9]:
#Load model
model = load_model('keras_models/new_model_6_epochs.h5')
# In[ ]:
#predict
y_preds = model.predict(X_test, batch_size=128, verbose=1)
submit(y_preds, test, '../submissions/keras_6_epochs_1_dropout.csv')
|
apache-2.0
|
craigulmer/airline-plotters
|
gap_plot.py
|
1
|
5452
|
# This plotter is a tool for discovering locations where there isn't
# much coverage. It inspects each flight and looks for segments that
# had a longer travel time than we expected. If the duration is above
# a threshold, we plot the segment.
#
# e.g. if you grabbed data every 6 minutes, you'd hope that each segment
# in a track would be about 6 minutes (plus some extra for bookkeeping).
# If a segment was 12 or 18 minutes, you'd suspect that
# 2 or 3 samples were missing.
import sys
from math import *
from collections import defaultdict
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
from numpy import *
import gzip
import bz2
# Globals
crop_region='world' #'usa_main' #sfbay, world, ukraine
#color='b' #Line color. Can also do '-ob' for line w/ points
alpha='0.09' #How faint to make the line.
resolution='l' #Set level-of-detail on maps: c,l,h
linewidth=1.0 #How thick to make the line
sampleperiod=400 # every 6 minutes with some gap
num_periods=4
altitude_thresh=1000.0 #ignore everything below this height
# Parse our wkt (well-known text) format. Technically, I think we're not
# wkt compliant because our individual data values are
# (lon, lat, alt, seconds_since_epoc). Other parsers seem to choke on
# more than 2 or 3 values, which seems simple-minded to me.
def parseWkt(s):
s2 = s.replace("LINESTRING (","").replace(")","");
vals = s2.split(",")
array2d = [[float(digit) for digit in line.split()] for line in vals]
return array2d;
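# Editor's sketch (not in the original script): a minimal call to parseWkt;
# each point is (lon, lat, alt, seconds_since_epoch).
def _parse_wkt_example():
    return parseWkt("LINESTRING (-122.4 37.7 10000 0, -122.3 37.8 10000 360)")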
# Computing distance based on coordinates isn't easy. Remember, lon
# gets smaller as you get near the poles. Haversine is the standard
# op people do to convert to angles and map to distance. This version is
# all based at sea level, and doesn't take into account altitude.
def haversine(lon1, lat1, lon2, lat2):
degree_to_rad = float(pi/180.0)
d_lat = (lat2 - lat1) * degree_to_rad
d_lon = (lon2 - lon1) * degree_to_rad
a=pow(sin(d_lat/2),2) + cos(lat1 * degree_to_rad) * cos(lat2 * degree_to_rad) * pow(sin(d_lon/2),2)
c=2*atan2(sqrt(a),sqrt(1-a))
mi = 3956 * c
return mi
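# Editor's sketch (not in the original script): a sanity check for the
# haversine above; San Francisco to Los Angeles should come out at roughly
# 350 statute miles.
def _haversine_sanity_check():
    return haversine(-122.4194, 37.7749, -118.2437, 34.0522)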
if len(sys.argv) > 1:
filename = sys.argv[1]
else:
filename = raw_input('Enter data filename: ')
# Set up the plt window
fig = plt.figure()
ax=fig.add_axes([0.1,0.1,0.8,0.8])
# Some regions of interest lon1,lat1, lon2,lat2
lims = defaultdict(list)
lims['sfbay'] = [-126.0, 36.0, -119.0, 40.0]
lims['ukraine'] = [ 19.0, 42.0, 43.0, 54.0]
lims['mediterranean'] = [ -70.0, 5.0, 75.0, 55.0]
lims['world'] = [-180.0,-60.0, 180.0, 80.0]
lims['nscamerica'] = [-180.0,-60.0, -30.0, 80.0]
lims['usa_con'] = [-175.0, 24.0, -40.0, 72.0]
lims['usa_main'] = [-130.0, 23.0, -62.0, 50.0]
lm = lims[crop_region]
m = Basemap(projection='merc', lat_0 = 0, lon_0 = 0, lat_ts=20.,
resolution = resolution, area_thresh = 0.1,
llcrnrlon=lm[0], llcrnrlat=lm[1],
urcrnrlon=lm[2], urcrnrlat=lm[3])
# Map options
#m.bluemarble() #Use images for backgound. Hard to see routes
m.drawlsmask() #Nice grey outlines
#m.drawcoastlines(linewidth=0.2) #Unfortunately, has lakes too
m.drawcountries()
#plt.show() #Just to test out
lengths=[]
start_loc=[]
val=0;
toss_count=0
if filename.endswith(".gz"):
file = gzip.open(filename)
elif filename.endswith(".bz2"):
file = bz2.BZ2File(filename)
else:
file = open(filename)
for line in file:
#print line
(np,tag,swkt)=line.split('\t');
(plane,airline,src,dst)=tag.split('|')
#if(np<4):
# continue
data = parseWkt(swkt)
d=0.0
dist=[]
alt=[]
speed=[]
for i in range(len(data)):
if (i==0):
continue
lon1=data[i][0]
lon2=data[i-1][0]
lat1=data[i][1]
lat2=data[i-1][1]
mi = haversine(lon1, lat1, lon2, lat2)
t=data[i][3]-data[i-1][3]
#Look for segments that were longer than a certain
#duration, and were actually in the air
#if((t>2*sampleperiod) and (t<4*sampleperiod)
if 0:
# Try plotting good and bad using different colors
color='b'
if((t>=num_periods*sampleperiod)
and (data[i-1][2]>altitude_thresh) and (data[i][2]>altitude_thresh)):
color='r'
tmpx=(data[i-1][0], data[i][0])
if( abs(tmpx[1]-tmpx[0]) < 180.0):
tmpy=(data[i-1][1], data[i][1])
mx,my=m(tmpx,tmpy)
m.plot(mx, my, color, alpha=alpha, lw=linewidth)
else:
if((t>=num_periods*sampleperiod)
and (data[i-1][2]>altitude_thresh) and (data[i][2]>altitude_thresh)):
color='r'
tmpx=(data[i-1][0], data[i][0])
if( abs(tmpx[1]-tmpx[0]) < 180.0):
tmpy=(data[i-1][1], data[i][1])
mx,my=m(tmpx,tmpy)
m.plot(mx, my, color, alpha=alpha, lw=linewidth)
if(t==0):
print "Bad time at ",i
t=1
mph = (3600.0*mi)/t
d=d+mi
dmi=d #d*0.000189394;
dist.append(dmi)
alt.append(data[i][2]/5280.0)
speed.append(mph)
lengths.append(dmi)
speed_max=max(speed)
val=val+1
#if(val>100):
# break
plt.show()
|
mit
|
zhenv5/scikit-learn
|
examples/cluster/plot_dict_face_patches.py
|
337
|
2747
|
"""
Online learning of a dictionary of parts of faces
==================================================
This example uses a large dataset of faces to learn a set of 20 x 20
images patches that constitute faces.
From the programming standpoint, it is interesting because it shows how
to use the online API of scikit-learn to process a very large
dataset in chunks. The way we proceed is that we load an image at a time
and extract randomly 50 patches from this image. Once we have accumulated
500 of these patches (using 10 images), we run the `partial_fit` method
of the online KMeans object, MiniBatchKMeans.
The verbose setting on the MiniBatchKMeans enables us to see that some
clusters are reassigned during the successive calls to
partial-fit. This is because the number of patches that they represent
has become too low, and it is better to choose a random new
cluster.
"""
print(__doc__)
import time
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets
from sklearn.cluster import MiniBatchKMeans
from sklearn.feature_extraction.image import extract_patches_2d
faces = datasets.fetch_olivetti_faces()
###############################################################################
# Learn the dictionary of images
print('Learning the dictionary... ')
rng = np.random.RandomState(0)
kmeans = MiniBatchKMeans(n_clusters=81, random_state=rng, verbose=True)
patch_size = (20, 20)
buffer = []
index = 1
t0 = time.time()
# The online learning part: cycle over the whole dataset 6 times
index = 0
for _ in range(6):
for img in faces.images:
data = extract_patches_2d(img, patch_size, max_patches=50,
random_state=rng)
data = np.reshape(data, (len(data), -1))
buffer.append(data)
index += 1
if index % 10 == 0:
data = np.concatenate(buffer, axis=0)
data -= np.mean(data, axis=0)
data /= np.std(data, axis=0)
kmeans.partial_fit(data)
buffer = []
if index % 100 == 0:
print('Partial fit of %4i out of %i'
% (index, 6 * len(faces.images)))
dt = time.time() - t0
print('done in %.2fs.' % dt)
###############################################################################
# Plot the results
plt.figure(figsize=(4.2, 4))
for i, patch in enumerate(kmeans.cluster_centers_):
plt.subplot(9, 9, i + 1)
plt.imshow(patch.reshape(patch_size), cmap=plt.cm.gray,
interpolation='nearest')
plt.xticks(())
plt.yticks(())
plt.suptitle('Patches of faces\nTrain time %.1fs on %d patches' %
(dt, 8 * len(faces.images)), fontsize=16)
plt.subplots_adjust(0.08, 0.02, 0.92, 0.85, 0.08, 0.23)
plt.show()
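# Editor's sketch (not in the original example): the chunked online-learning
# pattern used above, reduced to its essentials. `estimator` is any object
# exposing partial_fit and `chunks` any iterable of 2-D arrays.
def fit_in_chunks(estimator, chunks):
    for chunk in chunks:
        estimator.partial_fit(chunk)
    return estimator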
|
bsd-3-clause
|
ARM-software/lisa
|
external/workload-automation/wa/workloads/deepbench/__init__.py
|
5
|
5650
|
# Copyright 2018 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# pylint: disable=E1101,W0201
import os
import re
import pandas as pd
from wa import Workload, Parameter, Alias, Executable
from wa.utils.types import numeric
class Deepbench(Workload):
name = 'deepbench'
description = """
Benchmarks operations that are important to deep learning, including GEMM
and convolution.
The benchmark and its documentation are available here:
https://github.com/baidu-research/DeepBench
.. note:: parameters of matrices used in each sub-test are added as
classifiers to the metrics. See the benchmark documentation
for the explanation of the various parameters
.. note:: at the moment only the "Arm Benchmarks" subset of DeepBench
is supported.
"""
parameters = [
Parameter('test', default='gemm',
allowed_values=['gemm', 'conv', 'sparse'],
description='''
Specifies which of the available benchmarks will be run.
gemm
Performs GEneral Matrix Multiplication of dense matrices
of varying sizes.
conv
Performs convolutions on inputs in NCHW format.
sparse
Performs GEneral Matrix Multiplication of sparse matrices
of varying sizes, and compares them to corresponding dense
operations.
'''),
]
aliases = [
Alias('deep-gemm', test='gemm'),
Alias('deep-conv', test='conv'),
Alias('deep-sparse', test='sparse'),
]
test_metrics = {
'gemm': ['time (msec)', 'GOPS'],
'conv': ['fwd_time (usec)'],
'sparse': ['sparse time (usec)', 'dense time (usec)', 'speedup'],
}
lower_is_better = {
'time (msec)': True,
'GOPS': False,
'fwd_time (usec)': True,
'sparse time (usec)': True,
'dense time (usec)': True,
'speedup': False,
}
installed = {}
def initialize(self, context):
self.exe_name = '{}_bench'.format(self.test)
if self.exe_name not in self.installed:
resource = Executable(self, self.target.abi, self.exe_name)
host_exe = context.get_resource(resource)
self.target.killall(self.exe_name)
self.installed[self.exe_name] = self.target.install(host_exe)
self.target_exe = self.installed[self.exe_name]
def setup(self, context):
self.target.killall(self.exe_name)
def run(self, context):
self.output = None
try:
timeout = 10800
self.output = self.target.execute(self.target_exe, timeout=timeout)
except KeyboardInterrupt:
self.target.killall(self.exe_name)
raise
def extract_results(self, context):
if self.output:
outfile = os.path.join(context.output_directory, '{}.output'.format(self.test))
with open(outfile, 'w') as wfh:
wfh.write(self.output)
context.add_artifact('deepbench-output', outfile, 'raw', "deepbench's stdout")
def update_output(self, context):
raw_file = context.get_artifact_path('deepbench-output')
if not raw_file:
return
table = read_result_table(raw_file)
for _, row in table.iterrows():
items = dict(row)
metrics = []
for metric_name in self.test_metrics[self.test]:
metrics.append((metric_name, items.pop(metric_name)))
for name, value in metrics:
context.add_metric(name, value,
lower_is_better=self.lower_is_better[name],
classifiers=items)
def finalize(self, context):
if self.cleanup_assets:
if self.exe_name in self.installed:
self.target.uninstall(self.exe_name)
del self.installed[self.exe_name]
def numeric_best_effort(value):
try:
return numeric(value)
except ValueError:
return value
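# Editor's sketch (not in the original plugin): numeric_best_effort keeps raw
# strings that cannot be parsed as numbers, e.g. it maps '12.5' to 12.5 but
# leaves 'n/a' untouched.
def _numeric_best_effort_examples():
    return numeric_best_effort('12.5'), numeric_best_effort('n/a')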
def read_result_table(filepath):
columns = []
entries = []
with open(filepath) as fh:
try:
# fast-forward to the header
line = next(fh)
while not line.startswith('----'):
line = next(fh)
header_line = next(fh)
header_sep = re.compile(r'(?<=[) ]) ')
# Since headers can contain spaces, use two spaces as column separator
parts = [p.strip() for p in header_sep.split(header_line)]
columns = [p for p in parts if p]
line = next(fh)
while line.strip():
if line.startswith('----'):
line = next(fh)
row = [numeric_best_effort(i) for i in line.strip().split()]
entries.append(row)
line = next(fh)
except StopIteration:
pass
return pd.DataFrame(entries, columns=columns)
|
apache-2.0
|
wackymaster/QTClock
|
Libraries/matplotlib/testing/jpl_units/__init__.py
|
8
|
3266
|
#=======================================================================
"""
This is a sample set of units for use with testing unit conversion
of matplotlib routines. These are used because they use very strict
enforcement of unitized data which will test the entire spectrum of how
unitized data might be used (it is not always meaningful to convert to
a float without specific units given).
UnitDbl is essentially a unitized floating point number. It has a
minimal set of supported units (enough for testing purposes). All
of the mathematical operation are provided to fully test any behaviour
that might occur with unitized data. Remeber that unitized data has
rules as to how it can be applied to one another (a value of distance
cannot be added to a value of time). Thus we need to guard against any
accidental "default" conversion that will strip away the meaning of the
data and render it neutered.
Epoch is different than a UnitDbl of time. Time is something that can be
measured where an Epoch is a specific moment in time. Epochs are typically
referenced as an offset from some predetermined epoch.
A difference of two epochs is a Duration. The distinction between a
Duration and a UnitDbl of time is made because an Epoch can have different
frames (or units). In the case of our test Epoch class the two allowed
frames are 'UTC' and 'ET' (Note that these are rough estimates provided for
testing purposes and should not be used in production code where accuracy
of time frames is desired). As such a Duration also has a frame of
reference and therefore needs to be called out as different than a simple
measurement of time since a delta-t in one frame may not be the same in another.
"""
#=======================================================================
from __future__ import (absolute_import, division, print_function,
unicode_literals)
from matplotlib.externals import six
from .Duration import Duration
from .Epoch import Epoch
from .UnitDbl import UnitDbl
from .StrConverter import StrConverter
from .EpochConverter import EpochConverter
from .UnitDblConverter import UnitDblConverter
from .UnitDblFormatter import UnitDblFormatter
#=======================================================================
__version__ = "1.0"
__all__ = [
'register',
'Duration',
'Epoch',
'UnitDbl',
'UnitDblFormatter',
]
#=======================================================================
def register():
"""Register the unit conversion classes with matplotlib."""
import matplotlib.units as mplU
mplU.registry[ str ] = StrConverter()
mplU.registry[ Epoch ] = EpochConverter()
mplU.registry[ UnitDbl ] = UnitDblConverter()
#=======================================================================
# Some default unit instances
# Distances
m = UnitDbl( 1.0, "m" )
km = UnitDbl( 1.0, "km" )
mile = UnitDbl( 1.0, "mile" )
# Angles
deg = UnitDbl( 1.0, "deg" )
rad = UnitDbl( 1.0, "rad" )
# Time
sec = UnitDbl( 1.0, "sec" )
min = UnitDbl( 1.0, "min" )
hr = UnitDbl( 1.0, "hour" )
day = UnitDbl( 24.0, "hour" )
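# Editor's sketch (not part of matplotlib): values carry their unit, so a
# three-kilometre distance is written analogously to the one-unit instances
# defined above; see UnitDbl for the full operator set.
def _unit_usage_sketch():
    return UnitDbl(3.0, "km")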
|
mit
|
bbcdli/xuexi
|
vid_ana_k/eva_vid_newer_wait_cap_done_for_full_vid.py
|
1
|
12535
|
#eva_vid.py
# keras c3d eva_model
# !/usr/bin/env python
import matplotlib
matplotlib.use('Agg')
from keras.models import model_from_json
import os
import cv2
import numpy as np
import matplotlib.pyplot as plt
import c3d_keras_model as c3d_model
import sys
import keras.backend as K
import time
# import gizeh
##############################
# hy for saving video clips
from moviepy.editor import *
from moviepy.video.VideoClip import VideoClip
from moviepy.Clip import Clip
##############################
LOG_ON = False
PROJ_DIR = os.path.dirname(os.path.abspath(__file__))
NUM_CLASSES = 2
CLIP_LENGTH = 16
TEST_VIDEO_LOAD_PATH = os.path.join(PROJ_DIR,'..','aggr_vids/')
EVA_SAVE_PATH_NO_AGGR = os.path.join(PROJ_DIR,'save_no_aggr/')
if not os.path.exists(EVA_SAVE_PATH_NO_AGGR):
os.makedirs(EVA_SAVE_PATH_NO_AGGR)
EVA_SAVE_PATH = os.path.join(PROJ_DIR,'save_aggr/')
if not os.path.exists(EVA_SAVE_PATH):
os.makedirs(EVA_SAVE_PATH)
v_dirs = sorted([s for s in os.listdir(TEST_VIDEO_LOAD_PATH) if '.mp4' in s
and ('02_Tweety im Zug105_115_F' in s)])
#01_fight_in_train_T
#04_Tweety im Zug_29_49_F
TEST_VIDEOS = []
for dir in v_dirs:
v = TEST_VIDEO_LOAD_PATH + dir
TEST_VIDEOS.append(v)
dim_ordering = K.image_dim_ordering()
print "[Info] image_dim_order (from default ~/.keras/keras.json)={}".format(
dim_ordering)
backend = dim_ordering
log_path = os.path.join(PROJ_DIR,'logs_hy/')
if not os.path.exists(log_path):
os.makedirs(log_path)
str_log = ''
class Logger(object):
def __init__(self, log_path, str_log):
self.terminal = sys.stdout
from datetime import datetime
self.str_log = str_log
self.log_path = log_path
self.log = open(datetime.now().strftime(log_path + '%Y_%m_%d_%H_%M' + str_log + '.log'), "a")
def write(self, message):
self.terminal.write(message)
self.log.write(message)
def flush(self):
# this flush method is needed for python 3 compatibility.
# this handles the flush command by doing nothing.
# you might want to specify some extra behavior here.
pass
if LOG_ON:
sys.stdout = Logger(log_path, str_log)
def diagnose(data, verbose=True, label='input', plots=False, backend='tf'):
if data.ndim > 2:
if backend == 'th':
data = np.transpose(data, (1, 2, 3, 0))
# else:
# data = np.transpose(data, (0, 2, 1, 3))
min_num_spatial_axes = 10
max_outputs_to_show = 3
ndim = data.ndim
print "[Info] {}.ndim={}".format(label, ndim)
print "[Info] {}.shape={}".format(label, data.shape)
for d in range(ndim):
num_this_dim = data.shape[d]
if num_this_dim >= min_num_spatial_axes: # check for spatial axes
# just first, center, last indices
range_this_dim = [0, num_this_dim / 2, num_this_dim - 1]
else:
# sweep all indices for non-spatial axes
range_this_dim = range(num_this_dim)
for i in range_this_dim:
new_dim = tuple([d] + range(d) + range(d + 1, ndim))
sliced = np.transpose(data, new_dim)[i, ...]
print("[Info] {}, dim:{} {}-th slice: "
"(min, max, mean, std)=({}, {}, {}, {})".format(
label,
d, i,
np.min(sliced),
np.max(sliced),
np.mean(sliced),
np.std(sliced)))
if plots:
# assume (l, h, w, c)-shaped input
if data.ndim != 4:
print("[Error] data (shape={}) is not 4-dim. Check data".format(
data.shape))
return
l, h, w, c = data.shape
if l >= min_num_spatial_axes or \
h < min_num_spatial_axes or \
w < min_num_spatial_axes:
print("[Error] data (shape={}) does not look like in (l,h,w,c) "
"format. Do reshape/transpose.".format(data.shape))
return
nrows = int(np.ceil(np.sqrt(data.shape[0])))
# BGR
if c == 3:
for i in range(l):
mng = plt.get_current_fig_manager()
mng.resize(*mng.window.maxsize())
plt.subplot(nrows, nrows, i + 1) # doh, one-based!
im = np.squeeze(data[i, ...]).astype(np.float32)
im = im[:, :, ::-1] # BGR to RGB
# force it to range [0,1]
im_min, im_max = im.min(), im.max()
if im_max > im_min:
im_std = (im - im_min) / (im_max - im_min)
else:
print "[Warning] image is constant!"
im_std = np.zeros_like(im)
plt.imshow(im_std)
plt.axis('off')
plt.title("{}: t={}".format(label, i))
plt.show()
# plt.waitforbuttonpress()
else:
for j in range(min(c, max_outputs_to_show)):
for i in range(l):
mng = plt.get_current_fig_manager()
mng.resize(*mng.window.maxsize())
plt.subplot(nrows, nrows, i + 1) # doh, one-based!
im = np.squeeze(data[i, ...]).astype(np.float32)
im = im[:, :, j]
# force it to range [0,1]
im_min, im_max = im.min(), im.max()
if im_max > im_min:
im_std = (im - im_min) / (im_max - im_min)
else:
print "[Warning] image is constant!"
im_std = np.zeros_like(im)
plt.imshow(im_std)
plt.axis('off')
plt.title("{}: o={}, t={}".format(label, j, i))
plt.show()
# plt.waitforbuttonpress()
elif data.ndim == 1:
print("[Info] {} (min, max, mean, std)=({}, {}, {}, {})".format(
label,
np.min(data),
np.max(data),
np.mean(data),
np.std(data)))
print("[Info] data[:10]={}".format(data[:10]))
return
def main(model_name):
show_images = False
diagnose_plots = False
count_correct = 0
model_dir = os.path.join(PROJ_DIR,'log_models')
global backend
# override backend if provided as an input arg
if len(sys.argv) > 1:
if 'tf' in sys.argv[1].lower():
backend = 'tf'
else:
backend = 'th'
print "[Info] Using backend={}".format(backend)
model_weight_filename = os.path.join(model_dir, model_name)
model_json_filename = os.path.join(model_dir, 'sports1M_model_custom_2l.json')
print("[Info] Reading model architecture...")
model = model_from_json(open(model_json_filename, 'r').read())
# visualize model
model_img_filename = os.path.join(model_dir, 'c3d_model_custom.png')
if not os.path.exists(model_img_filename):
from keras.utils import plot_model
plot_model(model, to_file=model_img_filename)
model.load_weights(model_weight_filename)
#print("[Info] Loading model weights -- DONE!")
model.compile(loss='mean_squared_error', optimizer='sgd')
#print("[Info] Loading labels...")
with open('labels_aggr.txt', 'r') as f:
labels_txt = [line.strip() for line in f.readlines()]
print('Total labels: {}'.format(len(labels_txt)))
for TEST_VIDEO in TEST_VIDEOS:
print 'Test video path name:', TEST_VIDEO
cap = cv2.VideoCapture(TEST_VIDEO)
fps = cap.get(cv2.cv.CV_CAP_PROP_FPS)
print 'frame per second:', fps
vid, vid_view, frame_i = [], [], 0
while True:
ret, img = cap.read()
if ret:
frame_i += 1
if frame_i % 1 == 0:
vid_view.append(img)
vid.append(cv2.resize(img, (171, 128)))
else:
break
total_video_frames = len(vid)
print 'vid len', total_video_frames
vid = np.array(vid, dtype=np.float32)
# plt.imshow(vid[2000]/256)
# plt.show()
# sample 16-frame clip
STOP = False
total_test_clips = int(total_video_frames / CLIP_LENGTH)
while not STOP:
for num in xrange(total_test_clips):
# start_frame = 2000
start_frame = num * CLIP_LENGTH # 0x16,1x16
offset_time = start_frame / fps
print '\nstart frame:{}, offset:{}'.format(start_frame,offset_time)
X = vid[start_frame:(start_frame + CLIP_LENGTH), :, :, :]
def eva_one_clip(X, start_frame, model, EVA_SAVE_PATH_NO_AGGR, EVA_SAVE_PATH,count_correct):
# subtract mean
do_sub_mean = False
if do_sub_mean:
mean_cube = np.load('models/train01_16_128_171_mean.npy')
mean_cube = np.transpose(mean_cube, (1, 2, 3, 0))
# diagnose(mean_cube, verbose=True, label='Mean cube', plots=show_images)
X -= mean_cube
# center crop
X = X[:, 8:120, 30:142, :] # (l, h, w, c)
# diagnose(X, verbose=True, label='Center-cropped X', plots=show_images)
if backend == 'th':
X = np.transpose(X, (3, 0, 1, 2)) # input_shape = (3,16,112,112)
else:
pass # input_shape = (16,112,112,3)
# get activations for intermediate layers if needed
inspect = False
if inspect:
inspect_layers = [
# 'fc6',
# 'fc7',
]
for layer in inspect_layers:
int_model = c3d_model.get_int_model(model=model, layer=layer, backend=backend)
int_output = int_model.predict_on_batch(np.array([X]))
int_output = int_output[0, ...]
print "[Debug] at layer={}: output.shape={}".format(layer, int_output.shape)
diagnose(int_output,
verbose=True,
label='{} activation'.format(layer),
plots=diagnose_plots,
backend=backend)
# inference
start_p = time.time()
output = model.predict_on_batch(np.array([X]))
end_p = time.time()
print 'time elapsed for model.predict:{:.5} s'.format(end_p - start_p)
max_output = max(output[0])
# if max_output < 0.5:
# EVA_SAVE_PATH = EVA_SAVE_PATH_NO_AGGR
# EVA_SAVE_PATH = EVA_SAVE_PATH_NO_AGGR #setting save type
v_str = os.path.splitext(os.path.basename(TEST_VIDEO))[0]
clip_name = EVA_SAVE_PATH + v_str + '_' + str(start_frame) + '_' + "%.3f" % max_output + '.mp4'
filename_f = EVA_SAVE_PATH + v_str + '_' + str(start_frame) + '_' + "%.3f" % max_output + '.jpg'
# pred_label = output[0].argmax()
indx_of_interest = start_frame
print 'index of interest:', indx_of_interest
def save_current_subclips_to_frames(v_str):
for frame, i in zip(vid_view[start_frame:start_frame + 16], xrange(CLIP_LENGTH)):
filename_f_i = EVA_SAVE_PATH + 'eva_' + v_str + '_' + str(start_frame) + '_' + "%.3f" % max_output + str(
i) + '.png'
cv2.imwrite(filename_f_i, frame)
# if max_output > 0.4:
# save_current_subclips_to_frames()
def save_start_frame_of_interest(vid_view, indx_of_interest, filename_f):
frame_save = vid_view[indx_of_interest]
# cv2.imshow(filename_f,frame_save)
cv2.imwrite(filename_f, frame_save)
def save_subclip(TEST_VIDEO, indx_of_interest, fps):
clip = VideoFileClip(TEST_VIDEO)
v_time = indx_of_interest / fps
print 'Position of maximum probability:{}'.format(indx_of_interest)
print 'aggr high time point: {}'.format(v_time)
subclip = clip.subclip(v_time - 8, v_time + 2) # 74.6-8, 76 set an interval around frame of interest
subclip.write_videofile(clip_name)
cv2.waitKey(130)
if max_output > 0.3 or max_output < 0.1:
# if max_output > 0.2 or max_output < 0.08: #for no aggr
# save_subclip(TEST_VIDEO,indx_of_interest,fps)
# save_start_frame_of_interest(vid_view,indx_of_interest,filename_f)
#save_current_subclips_to_frames(v_str)
pass
# show results
save_probability_to_png = False
if save_probability_to_png:
print('Saving class probabilities in probabilities.png')
plt.plot(output[0])
plt.title('Probability')
plt.savefig('probabilities_'+v_str+'.png')
print('Maximum probability: {:.5f}'.format(max(output[0])))
print('Predicted label: {}'.format(labels_txt[output[0].argmax()]))
# sort top five predictions from softmax output
top_inds = output[0].argsort()[::-1][:5] # reverse sort and take five largest items
top_inds_list = top_inds.tolist()
print '\nTop probabilities and labels:',top_inds #array([0, 1])
print 'out[0][0]:{:0.4f}'.format(output[0][0])
print 'out[0][1]:{:0.4f}'.format(output[0][1])
'''
for i,index in zip(top_inds,xrange(NUM_CLASSES)): #[1,0] [0,1]
#print('{1}: {0:.5f} other'.format(int(output[0][i]), labels_txt[i]))
if index == 0:
if i == gt_label:
count_correct += 1
print labels_txt[i],': {:0.4f}'.format(output[0][i]),' top1 correct ',count_correct
else:
print labels_txt[i],': {:0.4f}'.format(output[0][i]),' top1'
else:
#print('{1}: {0:.5f} other'.format(int(output[0][i]), labels_txt[i]))
print labels_txt[i], ': {:0.4f}'.format(output[0][i]), ' other'
'''
return count_correct
count_correct = eva_one_clip(X, start_frame, model, EVA_SAVE_PATH_NO_AGGR, EVA_SAVE_PATH,count_correct)
print 'Precision:{:.3f}'.format(count_correct/float(total_test_clips))
if (total_video_frames - start_frame) < CLIP_LENGTH * 2:
print 'set STOP-True'
STOP = True
break
print 'TEST end.'
if __name__ == '__main__':
#model_name = 'k_01-0.46.hdf5'#offi
model_name = 'No1_k_01-0.62_0.27.hdf5'
main(model_name)
|
apache-2.0
|
developerator/Maturaarbeit
|
GenerativeAdversarialNets/CelebA32 with DCGAN/CelebA32_dcgan.py
|
1
|
6898
|
'''
By Tim Ehrensberger
The base of the functions for the network's training is taken from https://github.com/Zackory/Keras-MNIST-GAN/blob/master/mnist_gan.py by Zackory Erickson
The network structure is inspired by https://github.com/aleju/face-generator by Alexander Jung
'''
import os
import numpy as np
from tqdm import tqdm
import matplotlib.pyplot as plt
from keras.layers import Input, BatchNormalization, Activation, MaxPooling2D
from keras.models import Model, Sequential
from keras.layers.core import Reshape, Dense, Dropout, Flatten
from keras.layers.advanced_activations import LeakyReLU
from keras.layers.convolutional import Convolution2D, UpSampling2D
from keras.datasets import cifar10
from keras.optimizers import Adam
from keras.regularizers import l1_l2
#------
# DATA
#------
from keras import backend as K
K.set_image_dim_ordering('th')
import h5py
# Get hdf5 file
hdf5_file = os.path.join("PATH TO DATASET", "CelebA_32_data.h5")
with h5py.File(hdf5_file, "r") as hf:
X_train = hf["data"] [()] #[()] makes it read the whole thing
X_train = X_train.astype(np.float32) / 255
#----------------
# HYPERPARAMETERS
#----------------
randomDim = 100
adam = Adam(lr=0.0002, beta_1=0.5)
reg = lambda: l1_l2(l1=1e-7, l2=1e-7)
#dropout = 0.4
#-----------
# Generator
#-----------
h = 5
generator = Sequential()
#In: 100
generator.add(Dense(256 * 4 * 4, input_dim=100, kernel_regularizer=reg()))
generator.add(BatchNormalization())
generator.add(Reshape((256, 4, 4)))
#generator.add(Dropout(dropout))
#Out: 256 x 4 x 4
#In: 256 x 4 x 4
generator.add(UpSampling2D(size=(2, 2)))
generator.add(Convolution2D(128, (h, h), padding='same', kernel_regularizer=reg())) #1
generator.add(BatchNormalization(axis=1))
generator.add(LeakyReLU(0.2))
#generator.add(Dropout(dropout))
#Out: 128 x 8 x 8
#In: 128 x 8 x 8
generator.add(UpSampling2D(size=(2, 2)))
generator.add(Convolution2D(128, (h, h), padding='same', kernel_regularizer=reg())) #2
generator.add(BatchNormalization(axis=1))
generator.add(LeakyReLU(0.2))
#generator.add(Dropout(dropout))
#Out: 128 x 16 x 16
#In: 128 x 16 x 16
generator.add(UpSampling2D(size=(2, 2)))
generator.add(Convolution2D(64, (h, h), padding='same', kernel_regularizer=reg())) #3
generator.add(BatchNormalization(axis=1))
generator.add(LeakyReLU(0.2))
#generator.add(Dropout(dropout))
#Out: 64 x 32 x 32
#In: 64 x 32 x 32
generator.add(Convolution2D(3, (h, h), padding='same', kernel_regularizer=reg())) #4
generator.add(Activation('sigmoid'))
#Out: 3 x 32 x 32
generator.compile(loss='binary_crossentropy', optimizer=adam)
#--------------
# Discriminator
#--------------
discriminator = Sequential()
#In: 3 x 32 x 32
discriminator.add(Convolution2D(64, (h, h), padding='same', input_shape=(3, 32, 32), kernel_regularizer=reg()))
discriminator.add(MaxPooling2D(pool_size=(2, 2)))
discriminator.add(LeakyReLU(0.2))
#Out: 64 x 16 x 16
#In: 64 x 16 x 16
discriminator.add(Convolution2D(128, (h, h), padding='same', kernel_regularizer=reg()))
discriminator.add(MaxPooling2D(pool_size=(2, 2)))
discriminator.add(LeakyReLU(0.2))
#Out: 128 x 8 x 8
#In: 128 x 8 x 8
discriminator.add(Convolution2D(256, (h, h), padding='same', kernel_regularizer=reg()))
discriminator.add(MaxPooling2D(pool_size=(2, 2)))#Average?
discriminator.add(LeakyReLU(0.2))
#Out: 256 x 4 x 4
#In: 256 x 4 x 4
discriminator.add(Flatten())
discriminator.add(Dense(512))
discriminator.add(LeakyReLU(0.2))
#discriminator.add(Dropout(dropout))
discriminator.add(Dense(1))
discriminator.add(Activation('sigmoid'))
#Out: 1 (Probability)
discriminator.compile(loss='binary_crossentropy', optimizer=adam)
#-----
# GAN
#-----
discriminator.trainable = False
ganInput = Input(shape=(randomDim,))
x = generator(ganInput)
ganOutput = discriminator(x)
gan = Model(inputs=ganInput, outputs=ganOutput)
gan.compile(loss='binary_crossentropy', optimizer=adam)
#-----------
# FUNCTIONS
#-----------
dLosses = []
gLosses = []
def plotLoss(epoch):
assertExists('images')
plt.figure(figsize=(10, 8))
plt.plot(dLosses, label='Discriminative loss')
plt.plot(gLosses, label='Generative loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.savefig('images/dcgan_loss_epoch_%d.png' % epoch)
# Create a wall of generated images
def plotGeneratedImages(epoch, examples=100, dim=(10, 10), figsize=(10, 10)):
# draw the noise inside the function so `examples` is defined when it is used
noise = np.random.normal(0, 1, size=[examples, randomDim])
generatedImages = generator.predict(noise)
generatedImages = generatedImages.transpose(0, 2, 3, 1) #transpose is crucial
assertExists('images')
plt.figure(figsize=figsize)
for i in range(generatedImages.shape[0]):
plt.subplot(dim[0], dim[1], i+1)
plt.imshow(generatedImages[i, :, :, :], interpolation='nearest')
plt.axis('off')
plt.tight_layout()
plt.savefig('images/dcgan_generated_image_epoch_%d.png' % epoch)
# Save the generator and discriminator networks (and weights) for later use
def savemodels(epoch):
assertExists('models')
generator.save('models/dcgan_generator_epoch_%d.h5' % epoch)
discriminator.save('models/dcgan_discriminator_epoch_%d.h5' % epoch)
def train(epochs=1, batchSize=128):
batchCount = X_train.shape[0] // batchSize
print('Epochs:', epochs)
print('Batch size:', batchSize)
print('Batches per epoch:', batchCount)
for e in range(1, epochs+1):
print('-'*15, 'Epoch %d' % e, '-'*15)
for _ in tqdm(range(batchCount)):
# Get a random set of input noise and images
noise = np.random.normal(0, 1, size=[batchSize, randomDim])
imageBatch = X_train[np.random.randint(0, X_train.shape[0], size=batchSize)]
# Generate fake images
generatedImages = generator.predict(noise)
X = np.concatenate([imageBatch, generatedImages])
# Labels for generated and real data
yDis = np.zeros(2*batchSize)
# One-sided label smoothing = not exactly 1
yDis[:batchSize] = 0.9
# Train discriminator
discriminator.trainable = True
dloss = discriminator.train_on_batch(X, yDis) # here only D is trained
# Train generator
noise = np.random.normal(0, 1, size=[batchSize, randomDim])
yGen = np.ones(batchSize)
discriminator.trainable = False
gloss = gan.train_on_batch(noise, yGen) # here only G is trained because D is not trainable
# Store loss of most recent batch from this epoch
dLosses.append(dloss)
gLosses.append(gloss)
#plot after every epoch
plotGeneratedImages(e)
savemodels(e)
# Plot losses from every epoch
plotLoss(e)
def assertExists(path):
if not os.path.exists(path):
os.makedirs(path)
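# Editor's sketch (not in the original script): reload a generator saved by
# savemodels() and draw fresh samples from it; the epoch number refers to
# whatever checkpoint exists on disk.
def load_generator_and_sample(epoch, n_samples=16):
    from keras.models import load_model
    g = load_model('models/dcgan_generator_epoch_%d.h5' % epoch)
    return g.predict(np.random.normal(0, 1, size=[n_samples, randomDim]))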
if __name__ == '__main__':
train(100, 128)
|
mit
|
rs2/pandas
|
pandas/tests/groupby/transform/test_transform.py
|
1
|
39695
|
""" test with the .transform """
from io import StringIO
import numpy as np
import pytest
from pandas._libs.groupby import group_cumprod_float64, group_cumsum
from pandas.core.dtypes.common import ensure_platform_int, is_timedelta64_dtype
import pandas as pd
from pandas import (
Categorical,
DataFrame,
MultiIndex,
Series,
Timestamp,
concat,
date_range,
)
import pandas._testing as tm
from pandas.core.groupby.groupby import DataError
def assert_fp_equal(a, b):
assert (np.abs(a - b) < 1e-12).all()
def test_transform():
data = Series(np.arange(9) // 3, index=np.arange(9))
index = np.arange(9)
np.random.shuffle(index)
data = data.reindex(index)
grouped = data.groupby(lambda x: x // 3)
transformed = grouped.transform(lambda x: x * x.sum())
assert transformed[7] == 12
# GH 8046
# make sure that we preserve the input order
df = DataFrame(
np.arange(6, dtype="int64").reshape(3, 2), columns=["a", "b"], index=[0, 2, 1]
)
key = [0, 0, 1]
expected = (
df.sort_index()
.groupby(key)
.transform(lambda x: x - x.mean())
.groupby(key)
.mean()
)
result = df.groupby(key).transform(lambda x: x - x.mean()).groupby(key).mean()
tm.assert_frame_equal(result, expected)
def demean(arr):
return arr - arr.mean()
people = DataFrame(
np.random.randn(5, 5),
columns=["a", "b", "c", "d", "e"],
index=["Joe", "Steve", "Wes", "Jim", "Travis"],
)
key = ["one", "two", "one", "two", "one"]
result = people.groupby(key).transform(demean).groupby(key).mean()
expected = people.groupby(key).apply(demean).groupby(key).mean()
tm.assert_frame_equal(result, expected)
# GH 8430
df = tm.makeTimeDataFrame()
g = df.groupby(pd.Grouper(freq="M"))
g.transform(lambda x: x - 1)
# GH 9700
df = DataFrame({"a": range(5, 10), "b": range(5)})
result = df.groupby("a").transform(max)
expected = DataFrame({"b": range(5)})
tm.assert_frame_equal(result, expected)
def test_transform_fast():
df = DataFrame({"id": np.arange(100000) / 3, "val": np.random.randn(100000)})
grp = df.groupby("id")["val"]
values = np.repeat(grp.mean().values, ensure_platform_int(grp.count().values))
expected = pd.Series(values, index=df.index, name="val")
result = grp.transform(np.mean)
tm.assert_series_equal(result, expected)
result = grp.transform("mean")
tm.assert_series_equal(result, expected)
# GH 12737
df = pd.DataFrame(
{
"grouping": [0, 1, 1, 3],
"f": [1.1, 2.1, 3.1, 4.5],
"d": pd.date_range("2014-1-1", "2014-1-4"),
"i": [1, 2, 3, 4],
},
columns=["grouping", "f", "i", "d"],
)
result = df.groupby("grouping").transform("first")
dates = [
pd.Timestamp("2014-1-1"),
pd.Timestamp("2014-1-2"),
pd.Timestamp("2014-1-2"),
pd.Timestamp("2014-1-4"),
]
expected = pd.DataFrame(
{"f": [1.1, 2.1, 2.1, 4.5], "d": dates, "i": [1, 2, 2, 4]},
columns=["f", "i", "d"],
)
tm.assert_frame_equal(result, expected)
# selection
result = df.groupby("grouping")[["f", "i"]].transform("first")
expected = expected[["f", "i"]]
tm.assert_frame_equal(result, expected)
# dup columns
df = pd.DataFrame([[1, 2, 3], [4, 5, 6]], columns=["g", "a", "a"])
result = df.groupby("g").transform("first")
expected = df.drop("g", axis=1)
tm.assert_frame_equal(result, expected)
def test_transform_broadcast(tsframe, ts):
grouped = ts.groupby(lambda x: x.month)
result = grouped.transform(np.mean)
tm.assert_index_equal(result.index, ts.index)
for _, gp in grouped:
assert_fp_equal(result.reindex(gp.index), gp.mean())
grouped = tsframe.groupby(lambda x: x.month)
result = grouped.transform(np.mean)
tm.assert_index_equal(result.index, tsframe.index)
for _, gp in grouped:
agged = gp.mean()
res = result.reindex(gp.index)
for col in tsframe:
assert_fp_equal(res[col], agged[col])
# group columns
grouped = tsframe.groupby({"A": 0, "B": 0, "C": 1, "D": 1}, axis=1)
result = grouped.transform(np.mean)
tm.assert_index_equal(result.index, tsframe.index)
tm.assert_index_equal(result.columns, tsframe.columns)
for _, gp in grouped:
agged = gp.mean(1)
res = result.reindex(columns=gp.columns)
for idx in gp.index:
assert_fp_equal(res.xs(idx), agged[idx])
def test_transform_axis(tsframe):
# make sure that we are setting the axes
# correctly when on axis=0 or 1
# in the presence of a non-monotonic indexer
# GH12713
base = tsframe.iloc[0:5]
r = len(base.index)
c = len(base.columns)
tso = DataFrame(
np.random.randn(r, c), index=base.index, columns=base.columns, dtype="float64"
)
# monotonic
ts = tso
grouped = ts.groupby(lambda x: x.weekday())
result = ts - grouped.transform("mean")
expected = grouped.apply(lambda x: x - x.mean())
tm.assert_frame_equal(result, expected)
ts = ts.T
grouped = ts.groupby(lambda x: x.weekday(), axis=1)
result = ts - grouped.transform("mean")
expected = grouped.apply(lambda x: (x.T - x.mean(1)).T)
tm.assert_frame_equal(result, expected)
# non-monotonic
ts = tso.iloc[[1, 0] + list(range(2, len(base)))]
grouped = ts.groupby(lambda x: x.weekday())
result = ts - grouped.transform("mean")
expected = grouped.apply(lambda x: x - x.mean())
tm.assert_frame_equal(result, expected)
ts = ts.T
grouped = ts.groupby(lambda x: x.weekday(), axis=1)
result = ts - grouped.transform("mean")
expected = grouped.apply(lambda x: (x.T - x.mean(1)).T)
tm.assert_frame_equal(result, expected)
def test_transform_dtype():
# GH 9807
# Check transform dtype output is preserved
df = DataFrame([[1, 3], [2, 3]])
result = df.groupby(1).transform("mean")
expected = DataFrame([[1.5], [1.5]])
tm.assert_frame_equal(result, expected)
def test_transform_bug():
# GH 5712
# transforming on a datetime column
df = DataFrame(dict(A=Timestamp("20130101"), B=np.arange(5)))
result = df.groupby("A")["B"].transform(lambda x: x.rank(ascending=False))
expected = Series(np.arange(5, 0, step=-1), name="B")
tm.assert_series_equal(result, expected)
def test_transform_numeric_to_boolean():
# GH 16875
# inconsistency in transforming boolean values
expected = pd.Series([True, True], name="A")
df = pd.DataFrame({"A": [1.1, 2.2], "B": [1, 2]})
result = df.groupby("B").A.transform(lambda x: True)
tm.assert_series_equal(result, expected)
df = pd.DataFrame({"A": [1, 2], "B": [1, 2]})
result = df.groupby("B").A.transform(lambda x: True)
tm.assert_series_equal(result, expected)
def test_transform_datetime_to_timedelta():
# GH 15429
# transforming a datetime to timedelta
df = DataFrame(dict(A=Timestamp("20130101"), B=np.arange(5)))
expected = pd.Series([Timestamp("20130101") - Timestamp("20130101")] * 5, name="A")
# this does date math without changing result type in transform
base_time = df["A"][0]
result = (
df.groupby("A")["A"].transform(lambda x: x.max() - x.min() + base_time)
- base_time
)
tm.assert_series_equal(result, expected)
# this does date math and causes the transform to return timedelta
result = df.groupby("A")["A"].transform(lambda x: x.max() - x.min())
tm.assert_series_equal(result, expected)
def test_transform_datetime_to_numeric():
# GH 10972
# convert dt to float
df = DataFrame({"a": 1, "b": date_range("2015-01-01", periods=2, freq="D")})
result = df.groupby("a").b.transform(
lambda x: x.dt.dayofweek - x.dt.dayofweek.mean()
)
expected = Series([-0.5, 0.5], name="b")
tm.assert_series_equal(result, expected)
# convert dt to int
df = DataFrame({"a": 1, "b": date_range("2015-01-01", periods=2, freq="D")})
result = df.groupby("a").b.transform(
lambda x: x.dt.dayofweek - x.dt.dayofweek.min()
)
expected = Series([0, 1], name="b")
tm.assert_series_equal(result, expected)
def test_transform_casting():
# 13046
data = """
idx A ID3 DATETIME
0 B-028 b76cd912ff "2014-10-08 13:43:27"
1 B-054 4a57ed0b02 "2014-10-08 14:26:19"
2 B-076 1a682034f8 "2014-10-08 14:29:01"
3 B-023 b76cd912ff "2014-10-08 18:39:34"
4 B-023 f88g8d7sds "2014-10-08 18:40:18"
5 B-033 b76cd912ff "2014-10-08 18:44:30"
6 B-032 b76cd912ff "2014-10-08 18:46:00"
7 B-037 b76cd912ff "2014-10-08 18:52:15"
8 B-046 db959faf02 "2014-10-08 18:59:59"
9 B-053 b76cd912ff "2014-10-08 19:17:48"
10 B-065 b76cd912ff "2014-10-08 19:21:38"
"""
df = pd.read_csv(
StringIO(data), sep=r"\s+", index_col=[0], parse_dates=["DATETIME"]
)
result = df.groupby("ID3")["DATETIME"].transform(lambda x: x.diff())
assert is_timedelta64_dtype(result.dtype)
result = df[["ID3", "DATETIME"]].groupby("ID3").transform(lambda x: x.diff())
assert is_timedelta64_dtype(result.DATETIME.dtype)
def test_transform_multiple(ts):
grouped = ts.groupby([lambda x: x.year, lambda x: x.month])
grouped.transform(lambda x: x * 2)
grouped.transform(np.mean)
def test_dispatch_transform(tsframe):
df = tsframe[::5].reindex(tsframe.index)
grouped = df.groupby(lambda x: x.month)
filled = grouped.fillna(method="pad")
fillit = lambda x: x.fillna(method="pad")
expected = df.groupby(lambda x: x.month).transform(fillit)
tm.assert_frame_equal(filled, expected)
def test_transform_transformation_func(transformation_func):
# GH 30918
df = DataFrame(
{
"A": ["foo", "foo", "foo", "foo", "bar", "bar", "baz"],
"B": [1, 2, np.nan, 3, 3, np.nan, 4],
},
index=pd.date_range("2020-01-01", "2020-01-07"),
)
if transformation_func == "cumcount":
test_op = lambda x: x.transform("cumcount")
mock_op = lambda x: Series(range(len(x)), x.index)
elif transformation_func == "fillna":
test_op = lambda x: x.transform("fillna", value=0)
mock_op = lambda x: x.fillna(value=0)
elif transformation_func == "tshift":
msg = (
"Current behavior of groupby.tshift is inconsistent with other "
"transformations. See GH34452 for more details"
)
pytest.xfail(msg)
else:
test_op = lambda x: x.transform(transformation_func)
mock_op = lambda x: getattr(x, transformation_func)()
result = test_op(df.groupby("A"))
groups = [df[["B"]].iloc[:4], df[["B"]].iloc[4:6], df[["B"]].iloc[6:]]
expected = concat([mock_op(g) for g in groups])
if transformation_func == "cumcount":
tm.assert_series_equal(result, expected)
else:
tm.assert_frame_equal(result, expected)
def test_transform_select_columns(df):
f = lambda x: x.mean()
result = df.groupby("A")[["C", "D"]].transform(f)
selection = df[["C", "D"]]
expected = selection.groupby(df["A"]).transform(f)
tm.assert_frame_equal(result, expected)
def test_transform_exclude_nuisance(df):
# this also tests orderings in transform between
# series/frame to make sure it's consistent
expected = {}
grouped = df.groupby("A")
expected["C"] = grouped["C"].transform(np.mean)
expected["D"] = grouped["D"].transform(np.mean)
expected = DataFrame(expected)
result = df.groupby("A").transform(np.mean)
tm.assert_frame_equal(result, expected)
def test_transform_function_aliases(df):
result = df.groupby("A").transform("mean")
expected = df.groupby("A").transform(np.mean)
tm.assert_frame_equal(result, expected)
result = df.groupby("A")["C"].transform("mean")
expected = df.groupby("A")["C"].transform(np.mean)
tm.assert_series_equal(result, expected)
def test_series_fast_transform_date():
# GH 13191
df = pd.DataFrame(
{"grouping": [np.nan, 1, 1, 3], "d": pd.date_range("2014-1-1", "2014-1-4")}
)
result = df.groupby("grouping")["d"].transform("first")
dates = [
pd.NaT,
pd.Timestamp("2014-1-2"),
pd.Timestamp("2014-1-2"),
pd.Timestamp("2014-1-4"),
]
expected = pd.Series(dates, name="d")
tm.assert_series_equal(result, expected)
def test_transform_length():
# GH 9697
df = pd.DataFrame({"col1": [1, 1, 2, 2], "col2": [1, 2, 3, np.nan]})
expected = pd.Series([3.0] * 4)
def nsum(x):
return np.nansum(x)
results = [
df.groupby("col1").transform(sum)["col2"],
df.groupby("col1")["col2"].transform(sum),
df.groupby("col1").transform(nsum)["col2"],
df.groupby("col1")["col2"].transform(nsum),
]
for result in results:
tm.assert_series_equal(result, expected, check_names=False)
def test_transform_coercion():
# 14457
# when we are transforming be sure to not coerce
# via assignment
df = pd.DataFrame(dict(A=["a", "a"], B=[0, 1]))
g = df.groupby("A")
expected = g.transform(np.mean)
result = g.transform(lambda x: np.mean(x))
tm.assert_frame_equal(result, expected)
def test_groupby_transform_with_int():
# GH 3740, make sure that we might upcast on item-by-item transform
# floats
df = DataFrame(
dict(
A=[1, 1, 1, 2, 2, 2],
B=Series(1, dtype="float64"),
C=Series([1, 2, 3, 1, 2, 3], dtype="float64"),
D="foo",
)
)
with np.errstate(all="ignore"):
result = df.groupby("A").transform(lambda x: (x - x.mean()) / x.std())
expected = DataFrame(
dict(B=np.nan, C=Series([-1, 0, 1, -1, 0, 1], dtype="float64"))
)
tm.assert_frame_equal(result, expected)
# int case
df = DataFrame(dict(A=[1, 1, 1, 2, 2, 2], B=1, C=[1, 2, 3, 1, 2, 3], D="foo"))
with np.errstate(all="ignore"):
result = df.groupby("A").transform(lambda x: (x - x.mean()) / x.std())
expected = DataFrame(dict(B=np.nan, C=[-1, 0, 1, -1, 0, 1]))
tm.assert_frame_equal(result, expected)
# int that needs float conversion
s = Series([2, 3, 4, 10, 5, -1])
df = DataFrame(dict(A=[1, 1, 1, 2, 2, 2], B=1, C=s, D="foo"))
with np.errstate(all="ignore"):
result = df.groupby("A").transform(lambda x: (x - x.mean()) / x.std())
s1 = s.iloc[0:3]
s1 = (s1 - s1.mean()) / s1.std()
s2 = s.iloc[3:6]
s2 = (s2 - s2.mean()) / s2.std()
expected = DataFrame(dict(B=np.nan, C=concat([s1, s2])))
tm.assert_frame_equal(result, expected)
# int downcasting
result = df.groupby("A").transform(lambda x: x * 2 / 2)
expected = DataFrame(dict(B=1, C=[2, 3, 4, 10, 5, -1]))
tm.assert_frame_equal(result, expected)
def test_groupby_transform_with_nan_group():
# GH 9941
df = pd.DataFrame({"a": range(10), "b": [1, 1, 2, 3, np.nan, 4, 4, 5, 5, 5]})
result = df.groupby(df.b)["a"].transform(max)
expected = pd.Series(
[1.0, 1.0, 2.0, 3.0, np.nan, 6.0, 6.0, 9.0, 9.0, 9.0], name="a"
)
tm.assert_series_equal(result, expected)
def test_transform_mixed_type():
index = MultiIndex.from_arrays([[0, 0, 0, 1, 1, 1], [1, 2, 3, 1, 2, 3]])
df = DataFrame(
{
"d": [1.0, 1.0, 1.0, 2.0, 2.0, 2.0],
"c": np.tile(["a", "b", "c"], 2),
"v": np.arange(1.0, 7.0),
},
index=index,
)
def f(group):
group["g"] = group["d"] * 2
return group[:1]
grouped = df.groupby("c")
result = grouped.apply(f)
assert result["d"].dtype == np.float64
# this is by definition a mutating operation!
with pd.option_context("mode.chained_assignment", None):
for key, group in grouped:
res = f(group)
tm.assert_frame_equal(res, result.loc[key])
def _check_cython_group_transform_cumulative(pd_op, np_op, dtype):
"""
Check a group transform that executes a cumulative function.
Parameters
----------
pd_op : callable
The pandas cumulative function.
np_op : callable
The analogous one in NumPy.
dtype : type
The specified dtype of the data.
"""
is_datetimelike = False
data = np.array([[1], [2], [3], [4]], dtype=dtype)
ans = np.zeros_like(data)
labels = np.array([0, 0, 0, 0], dtype=np.int64)
ngroups = 1
pd_op(ans, data, labels, ngroups, is_datetimelike)
tm.assert_numpy_array_equal(np_op(data), ans[:, 0], check_dtype=False)
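# Note on the helper above: the grouped cython kernels write their result into
# the pre-allocated ``ans`` array in place; ``labels`` maps each row to a group
# id and ``ngroups`` gives the number of groups, so the plain NumPy cumulative
# function applied to ``data`` should match column 0 of ``ans``.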
def test_cython_group_transform_cumsum(any_real_dtype):
# see gh-4095
dtype = np.dtype(any_real_dtype).type
pd_op, np_op = group_cumsum, np.cumsum
_check_cython_group_transform_cumulative(pd_op, np_op, dtype)
def test_cython_group_transform_cumprod():
# see gh-4095
dtype = np.float64
pd_op, np_op = group_cumprod_float64, np.cumproduct
_check_cython_group_transform_cumulative(pd_op, np_op, dtype)
def test_cython_group_transform_algos():
# see gh-4095
is_datetimelike = False
# with nans
labels = np.array([0, 0, 0, 0, 0], dtype=np.int64)
ngroups = 1
data = np.array([[1], [2], [3], [np.nan], [4]], dtype="float64")
actual = np.zeros_like(data)
actual.fill(np.nan)
group_cumprod_float64(actual, data, labels, ngroups, is_datetimelike)
expected = np.array([1, 2, 6, np.nan, 24], dtype="float64")
tm.assert_numpy_array_equal(actual[:, 0], expected)
actual = np.zeros_like(data)
actual.fill(np.nan)
group_cumsum(actual, data, labels, ngroups, is_datetimelike)
expected = np.array([1, 3, 6, np.nan, 10], dtype="float64")
tm.assert_numpy_array_equal(actual[:, 0], expected)
# timedelta
is_datetimelike = True
data = np.array([np.timedelta64(1, "ns")] * 5, dtype="m8[ns]")[:, None]
actual = np.zeros_like(data, dtype="int64")
group_cumsum(actual, data.view("int64"), labels, ngroups, is_datetimelike)
expected = np.array(
[
np.timedelta64(1, "ns"),
np.timedelta64(2, "ns"),
np.timedelta64(3, "ns"),
np.timedelta64(4, "ns"),
np.timedelta64(5, "ns"),
]
)
tm.assert_numpy_array_equal(actual[:, 0].view("m8[ns]"), expected)
@pytest.mark.parametrize(
"op, args, targop",
[
("cumprod", (), lambda x: x.cumprod()),
("cumsum", (), lambda x: x.cumsum()),
("shift", (-1,), lambda x: x.shift(-1)),
("shift", (1,), lambda x: x.shift()),
],
)
def test_cython_transform_series(op, args, targop):
# GH 4095
s = Series(np.random.randn(1000))
s_missing = s.copy()
s_missing.iloc[2:10] = np.nan
labels = np.random.randint(0, 50, size=1000).astype(float)
# series
for data in [s, s_missing]:
# print(data.head())
expected = data.groupby(labels).transform(targop)
tm.assert_series_equal(expected, data.groupby(labels).transform(op, *args))
tm.assert_series_equal(expected, getattr(data.groupby(labels), op)(*args))
@pytest.mark.parametrize("op", ["cumprod", "cumsum"])
@pytest.mark.parametrize("skipna", [False, True])
@pytest.mark.parametrize(
"input, exp",
[
# When everything is NaN
({"key": ["b"] * 10, "value": np.nan}, pd.Series([np.nan] * 10, name="value")),
# When there is a single NaN
(
{"key": ["b"] * 10 + ["a"] * 2, "value": [3] * 3 + [np.nan] + [3] * 8},
{
("cumprod", False): [3.0, 9.0, 27.0] + [np.nan] * 7 + [3.0, 9.0],
("cumprod", True): [
3.0,
9.0,
27.0,
np.nan,
81.0,
243.0,
729.0,
2187.0,
6561.0,
19683.0,
3.0,
9.0,
],
("cumsum", False): [3.0, 6.0, 9.0] + [np.nan] * 7 + [3.0, 6.0],
("cumsum", True): [
3.0,
6.0,
9.0,
np.nan,
12.0,
15.0,
18.0,
21.0,
24.0,
27.0,
3.0,
6.0,
],
},
),
],
)
def test_groupby_cum_skipna(op, skipna, input, exp):
df = pd.DataFrame(input)
result = df.groupby("key")["value"].transform(op, skipna=skipna)
if isinstance(exp, dict):
expected = exp[(op, skipna)]
else:
expected = exp
expected = pd.Series(expected, name="value")
tm.assert_series_equal(expected, result)
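# Illustrative sketch (not part of the original test module): a compact view of
# the skipna behavior parametrized above -- with skipna=False a NaN poisons the
# rest of its group, with skipna=True it is skipped and accumulation continues.
def _demo_cumsum_skipna():
    import numpy as np
    import pandas as pd

    s = pd.Series([1.0, np.nan, 2.0])
    g = s.groupby([0, 0, 0])
    assert g.transform("cumsum", skipna=True).iloc[2] == 3.0
    assert np.isnan(g.transform("cumsum", skipna=False).iloc[2])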
@pytest.mark.arm_slow
@pytest.mark.parametrize(
"op, args, targop",
[
("cumprod", (), lambda x: x.cumprod()),
("cumsum", (), lambda x: x.cumsum()),
("shift", (-1,), lambda x: x.shift(-1)),
("shift", (1,), lambda x: x.shift()),
],
)
def test_cython_transform_frame(op, args, targop):
s = Series(np.random.randn(1000))
s_missing = s.copy()
s_missing.iloc[2:10] = np.nan
labels = np.random.randint(0, 50, size=1000).astype(float)
strings = list("qwertyuiopasdfghjklz")
strings_missing = strings[:]
strings_missing[5] = np.nan
df = DataFrame(
{
"float": s,
"float_missing": s_missing,
"int": [1, 1, 1, 1, 2] * 200,
"datetime": pd.date_range("1990-1-1", periods=1000),
"timedelta": pd.timedelta_range(1, freq="s", periods=1000),
"string": strings * 50,
"string_missing": strings_missing * 50,
},
columns=[
"float",
"float_missing",
"int",
"datetime",
"timedelta",
"string",
"string_missing",
],
)
df["cat"] = df["string"].astype("category")
df2 = df.copy()
df2.index = pd.MultiIndex.from_product([range(100), range(10)])
# DataFrame - Single and MultiIndex,
# group by values, index level, columns
for df in [df, df2]:
for gb_target in [
dict(by=labels),
dict(level=0),
dict(by="string"),
]: # dict(by='string_missing')]:
# dict(by=['int','string'])]:
gb = df.groupby(**gb_target)
# allowlisted methods set the selection before applying
# bit of a hack to make sure the cythonized shift
# is equivalent to pre 0.17.1 behavior
if op == "shift":
gb._set_group_selection()
if op != "shift" and "int" not in gb_target:
# numeric apply fastpath promotes dtype so have
# to apply separately and concat
i = gb[["int"]].apply(targop)
f = gb[["float", "float_missing"]].apply(targop)
expected = pd.concat([f, i], axis=1)
else:
expected = gb.apply(targop)
expected = expected.sort_index(axis=1)
tm.assert_frame_equal(expected, gb.transform(op, *args).sort_index(axis=1))
tm.assert_frame_equal(expected, getattr(gb, op)(*args).sort_index(axis=1))
# individual columns
for c in df:
if c not in ["float", "int", "float_missing"] and op != "shift":
msg = "No numeric types to aggregate"
with pytest.raises(DataError, match=msg):
gb[c].transform(op)
with pytest.raises(DataError, match=msg):
getattr(gb[c], op)()
else:
expected = gb[c].apply(targop)
expected.name = c
tm.assert_series_equal(expected, gb[c].transform(op, *args))
tm.assert_series_equal(expected, getattr(gb[c], op)(*args))
def test_transform_with_non_scalar_group():
# GH 10165
cols = pd.MultiIndex.from_tuples(
[
("syn", "A"),
("mis", "A"),
("non", "A"),
("syn", "C"),
("mis", "C"),
("non", "C"),
("syn", "T"),
("mis", "T"),
("non", "T"),
("syn", "G"),
("mis", "G"),
("non", "G"),
]
)
df = pd.DataFrame(
np.random.randint(1, 10, (4, 12)), columns=cols, index=["A", "C", "G", "T"]
)
msg = "transform must return a scalar value for each group.*"
with pytest.raises(ValueError, match=msg):
df.groupby(axis=1, level=1).transform(lambda z: z.div(z.sum(axis=1), axis=0))
@pytest.mark.parametrize(
"cols,exp,comp_func",
[
("a", pd.Series([1, 1, 1], name="a"), tm.assert_series_equal),
(
["a", "c"],
pd.DataFrame({"a": [1, 1, 1], "c": [1, 1, 1]}),
tm.assert_frame_equal,
),
],
)
@pytest.mark.parametrize("agg_func", ["count", "rank", "size"])
def test_transform_numeric_ret(cols, exp, comp_func, agg_func, request):
if agg_func == "size" and isinstance(cols, list):
# https://github.com/pytest-dev/pytest/issues/6300
# workaround to xfail fixture/param permutations
reason = "'size' transformation not supported with NDFrameGroupBy"
request.node.add_marker(pytest.mark.xfail(reason=reason))
# GH 19200
df = pd.DataFrame(
{"a": pd.date_range("2018-01-01", periods=3), "b": range(3), "c": range(7, 10)}
)
result = df.groupby("b")[cols].transform(agg_func)
if agg_func == "rank":
exp = exp.astype("float")
comp_func(result, exp)
@pytest.mark.parametrize("mix_groupings", [True, False])
@pytest.mark.parametrize("as_series", [True, False])
@pytest.mark.parametrize("val1,val2", [("foo", "bar"), (1, 2), (1.0, 2.0)])
@pytest.mark.parametrize(
"fill_method,limit,exp_vals",
[
(
"ffill",
None,
[np.nan, np.nan, "val1", "val1", "val1", "val2", "val2", "val2"],
),
("ffill", 1, [np.nan, np.nan, "val1", "val1", np.nan, "val2", "val2", np.nan]),
(
"bfill",
None,
["val1", "val1", "val1", "val2", "val2", "val2", np.nan, np.nan],
),
("bfill", 1, [np.nan, "val1", "val1", np.nan, "val2", "val2", np.nan, np.nan]),
],
)
def test_group_fill_methods(
mix_groupings, as_series, val1, val2, fill_method, limit, exp_vals
):
vals = [np.nan, np.nan, val1, np.nan, np.nan, val2, np.nan, np.nan]
_exp_vals = list(exp_vals)
# Overwrite placeholder values
for index, exp_val in enumerate(_exp_vals):
if exp_val == "val1":
_exp_vals[index] = val1
elif exp_val == "val2":
_exp_vals[index] = val2
# Need to modify values and expectations depending on the
# Series / DataFrame that we ultimately want to generate
if mix_groupings: # ['a', 'b', 'a', 'b', ...]
keys = ["a", "b"] * len(vals)
def interweave(list_obj):
temp = list()
for x in list_obj:
temp.extend([x, x])
return temp
_exp_vals = interweave(_exp_vals)
vals = interweave(vals)
else: # ['a', 'a', 'a', ... 'b', 'b', 'b']
keys = ["a"] * len(vals) + ["b"] * len(vals)
_exp_vals = _exp_vals * 2
vals = vals * 2
df = DataFrame({"key": keys, "val": vals})
if as_series:
result = getattr(df.groupby("key")["val"], fill_method)(limit=limit)
exp = Series(_exp_vals, name="val")
tm.assert_series_equal(result, exp)
else:
result = getattr(df.groupby("key"), fill_method)(limit=limit)
exp = DataFrame({"val": _exp_vals})
tm.assert_frame_equal(result, exp)
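# Illustrative sketch (not part of the original test module): the limit keyword
# exercised above bounds how far a grouped ffill propagates a value forward.
def _demo_grouped_ffill_limit():
    import numpy as np
    import pandas as pd

    demo = pd.DataFrame({"key": ["a"] * 4, "val": [1.0, np.nan, np.nan, np.nan]})
    out = demo.groupby("key")["val"].ffill(limit=1)
    assert out.tolist()[:2] == [1.0, 1.0]      # filled one step forward
    assert np.isnan(out.iloc[2]) and np.isnan(out.iloc[3])  # beyond the limit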
@pytest.mark.parametrize("fill_method", ["ffill", "bfill"])
def test_pad_stable_sorting(fill_method):
# GH 21207
x = [0] * 20
y = [np.nan] * 10 + [1] * 10
if fill_method == "bfill":
y = y[::-1]
df = pd.DataFrame({"x": x, "y": y})
expected = df.drop("x", 1)
result = getattr(df.groupby("x"), fill_method)()
tm.assert_frame_equal(result, expected)
@pytest.mark.parametrize("test_series", [True, False])
@pytest.mark.parametrize(
"freq",
[
None,
pytest.param(
"D",
marks=pytest.mark.xfail(
reason="GH#23918 before method uses freq in vectorized approach"
),
),
],
)
@pytest.mark.parametrize("periods", [1, -1])
@pytest.mark.parametrize("fill_method", ["ffill", "bfill", None])
@pytest.mark.parametrize("limit", [None, 1])
def test_pct_change(test_series, freq, periods, fill_method, limit):
# GH 21200, 21621, 30463
vals = [3, np.nan, np.nan, np.nan, 1, 2, 4, 10, np.nan, 4]
keys = ["a", "b"]
key_v = np.repeat(keys, len(vals))
df = DataFrame({"key": key_v, "vals": vals * 2})
df_g = df
if fill_method is not None:
df_g = getattr(df.groupby("key"), fill_method)(limit=limit)
grp = df_g.groupby(df.key)
expected = grp["vals"].obj / grp["vals"].shift(periods) - 1
if test_series:
result = df.groupby("key")["vals"].pct_change(
periods=periods, fill_method=fill_method, limit=limit, freq=freq
)
tm.assert_series_equal(result, expected)
else:
result = df.groupby("key").pct_change(
periods=periods, fill_method=fill_method, limit=limit, freq=freq
)
tm.assert_frame_equal(result, expected.to_frame("vals"))
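# Illustrative sketch (not part of the original test module): the expected
# frame built above encodes the definition of a grouped pct_change, namely
# value / value.shift(periods) - 1 computed within each group.
def _demo_grouped_pct_change():
    import pandas as pd

    demo = pd.DataFrame(
        {"key": ["a", "a", "a", "b", "b"], "vals": [1.0, 2.0, 4.0, 10.0, 5.0]}
    )
    grp = demo.groupby("key")["vals"]
    via_method = grp.pct_change()
    via_shift = demo["vals"] / grp.shift(1) - 1
    pd.testing.assert_series_equal(via_method, via_shift, check_names=False)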
@pytest.mark.parametrize(
"func, expected_status",
[
("ffill", ["shrt", "shrt", "lng", np.nan, "shrt", "ntrl", "ntrl"]),
("bfill", ["shrt", "lng", "lng", "shrt", "shrt", "ntrl", np.nan]),
],
)
def test_ffill_bfill_non_unique_multilevel(func, expected_status):
# GH 19437
date = pd.to_datetime(
[
"2018-01-01",
"2018-01-01",
"2018-01-01",
"2018-01-01",
"2018-01-02",
"2018-01-01",
"2018-01-02",
]
)
symbol = ["MSFT", "MSFT", "MSFT", "AAPL", "AAPL", "TSLA", "TSLA"]
status = ["shrt", np.nan, "lng", np.nan, "shrt", "ntrl", np.nan]
df = DataFrame({"date": date, "symbol": symbol, "status": status})
df = df.set_index(["date", "symbol"])
result = getattr(df.groupby("symbol")["status"], func)()
index = MultiIndex.from_tuples(
tuples=list(zip(*[date, symbol])), names=["date", "symbol"]
)
expected = Series(expected_status, index=index, name="status")
tm.assert_series_equal(result, expected)
@pytest.mark.parametrize("func", [np.any, np.all])
def test_any_all_np_func(func):
# GH 20653
df = pd.DataFrame(
[["foo", True], [np.nan, True], ["foo", True]], columns=["key", "val"]
)
exp = pd.Series([True, np.nan, True], name="val")
res = df.groupby("key")["val"].transform(func)
tm.assert_series_equal(res, exp)
def test_groupby_transform_rename():
# https://github.com/pandas-dev/pandas/issues/23461
def demean_rename(x):
result = x - x.mean()
if isinstance(x, pd.Series):
return result
result = result.rename(columns={c: "{c}_demeaned" for c in result.columns})
return result
df = pd.DataFrame({"group": list("ababa"), "value": [1, 1, 1, 2, 2]})
expected = pd.DataFrame({"value": [-1.0 / 3, -0.5, -1.0 / 3, 0.5, 2.0 / 3]})
result = df.groupby("group").transform(demean_rename)
tm.assert_frame_equal(result, expected)
result_single = df.groupby("group").value.transform(demean_rename)
tm.assert_series_equal(result_single, expected["value"])
@pytest.mark.parametrize("func", [min, max, np.min, np.max, "first", "last"])
def test_groupby_transform_timezone_column(func):
# GH 24198
ts = pd.to_datetime("now", utc=True).tz_convert("Asia/Singapore")
result = pd.DataFrame({"end_time": [ts], "id": [1]})
result["max_end_time"] = result.groupby("id").end_time.transform(func)
expected = pd.DataFrame([[ts, 1, ts]], columns=["end_time", "id", "max_end_time"])
tm.assert_frame_equal(result, expected)
@pytest.mark.parametrize(
"func, values",
[
("idxmin", ["1/1/2011"] * 2 + ["1/3/2011"] * 7 + ["1/10/2011"]),
("idxmax", ["1/2/2011"] * 2 + ["1/9/2011"] * 7 + ["1/10/2011"]),
],
)
def test_groupby_transform_with_datetimes(func, values):
# GH 15306
dates = pd.date_range("1/1/2011", periods=10, freq="D")
stocks = pd.DataFrame({"price": np.arange(10.0)}, index=dates)
stocks["week_id"] = dates.isocalendar().week
result = stocks.groupby(stocks["week_id"])["price"].transform(func)
expected = pd.Series(data=pd.to_datetime(values), index=dates, name="price")
tm.assert_series_equal(result, expected)
@pytest.mark.parametrize("func", ["cumsum", "cumprod", "cummin", "cummax"])
def test_transform_absent_categories(func):
# GH 16771
# cython transforms with more groups than rows
x_vals = [1]
x_cats = range(2)
y = [1]
df = DataFrame(dict(x=Categorical(x_vals, x_cats), y=y))
result = getattr(df.y.groupby(df.x), func)()
expected = df.y
tm.assert_series_equal(result, expected)
@pytest.mark.parametrize("func", ["ffill", "bfill", "shift"])
@pytest.mark.parametrize("key, val", [("level", 0), ("by", Series([0]))])
def test_ffill_not_in_axis(func, key, val):
# GH 21521
df = pd.DataFrame([[np.nan]])
result = getattr(df.groupby(**{key: val}), func)()
expected = df
tm.assert_frame_equal(result, expected)
def test_transform_invalid_name_raises():
# GH#27486
df = DataFrame(dict(a=[0, 1, 1, 2]))
g = df.groupby(["a", "b", "b", "c"])
with pytest.raises(ValueError, match="not a valid function name"):
g.transform("some_arbitrary_name")
# method exists on the object, but is not a valid transformation/agg
assert hasattr(g, "aggregate") # make sure the method exists
with pytest.raises(ValueError, match="not a valid function name"):
g.transform("aggregate")
# Test SeriesGroupBy
g = df["a"].groupby(["a", "b", "b", "c"])
with pytest.raises(ValueError, match="not a valid function name"):
g.transform("some_arbitrary_name")
@pytest.mark.parametrize(
"obj",
[
DataFrame(
dict(a=[0, 0, 0, 1, 1, 1], b=range(6)), index=["A", "B", "C", "D", "E", "F"]
),
Series([0, 0, 0, 1, 1, 1], index=["A", "B", "C", "D", "E", "F"]),
],
)
def test_transform_agg_by_name(reduction_func, obj):
func = reduction_func
g = obj.groupby(np.repeat([0, 1], 3))
if func == "ngroup": # GH#27468
pytest.xfail("TODO: g.transform('ngroup') doesn't work")
if func == "size": # GH#27469
pytest.xfail("TODO: g.transform('size') doesn't work")
if func == "corrwith" and isinstance(obj, Series): # GH#32293
pytest.xfail("TODO: implement SeriesGroupBy.corrwith")
args = {"nth": [0], "quantile": [0.5], "corrwith": [obj]}.get(func, [])
result = g.transform(func, *args)
# this is the *definition* of a transformation
tm.assert_index_equal(result.index, obj.index)
if hasattr(obj, "columns"):
tm.assert_index_equal(result.columns, obj.columns)
# verify that values were broadcasted across each group
assert len(set(DataFrame(result).iloc[-3:, -1])) == 1
def test_transform_lambda_with_datetimetz():
# GH 27496
df = DataFrame(
{
"time": [
Timestamp("2010-07-15 03:14:45"),
Timestamp("2010-11-19 18:47:06"),
],
"timezone": ["Etc/GMT+4", "US/Eastern"],
}
)
result = df.groupby(["timezone"])["time"].transform(
lambda x: x.dt.tz_localize(x.name)
)
expected = Series(
[
Timestamp("2010-07-15 03:14:45", tz="Etc/GMT+4"),
Timestamp("2010-11-19 18:47:06", tz="US/Eastern"),
],
name="time",
)
tm.assert_series_equal(result, expected)
def test_transform_fastpath_raises():
# GH#29631 case where fastpath defined in groupby.generic _choose_path
# raises, but slow_path does not
df = pd.DataFrame({"A": [1, 1, 2, 2], "B": [1, -1, 1, 2]})
gb = df.groupby("A")
def func(grp):
# we want a function such that func(frame) fails but func.apply(frame)
# works
if grp.ndim == 2:
# Ensure that fast_path fails
raise NotImplementedError("Don't cross the streams")
return grp * 2
# Check that the fastpath raises, see _transform_general
obj = gb._obj_with_exclusions
gen = gb.grouper.get_iterator(obj, axis=gb.axis)
fast_path, slow_path = gb._define_paths(func)
_, group = next(gen)
with pytest.raises(NotImplementedError, match="Don't cross the streams"):
fast_path(group)
result = gb.transform(func)
expected = pd.DataFrame([2, -2, 2, 4], columns=["B"])
tm.assert_frame_equal(result, expected)
def test_transform_lambda_indexing():
# GH 7883
df = pd.DataFrame(
{
"A": ["foo", "bar", "foo", "bar", "foo", "flux", "foo", "flux"],
"B": ["one", "one", "two", "three", "two", "six", "five", "three"],
"C": range(8),
"D": range(8),
"E": range(8),
}
)
df = df.set_index(["A", "B"])
df = df.sort_index()
result = df.groupby(level="A").transform(lambda x: x.iloc[-1])
expected = DataFrame(
{
"C": [3, 3, 7, 7, 4, 4, 4, 4],
"D": [3, 3, 7, 7, 4, 4, 4, 4],
"E": [3, 3, 7, 7, 4, 4, 4, 4],
},
index=MultiIndex.from_tuples(
[
("bar", "one"),
("bar", "three"),
("flux", "six"),
("flux", "three"),
("foo", "five"),
("foo", "one"),
("foo", "two"),
("foo", "two"),
],
names=["A", "B"],
),
)
tm.assert_frame_equal(result, expected)
def test_categorical_and_not_categorical_key(observed):
# Checks that groupby-transform, when grouping by both a categorical
# and a non-categorical key, doesn't try to expand the output to include
# non-observed categories but instead matches the input shape.
# GH 32494
df_with_categorical = pd.DataFrame(
{
"A": pd.Categorical(["a", "b", "a"], categories=["a", "b", "c"]),
"B": [1, 2, 3],
"C": ["a", "b", "a"],
}
)
df_without_categorical = pd.DataFrame(
{"A": ["a", "b", "a"], "B": [1, 2, 3], "C": ["a", "b", "a"]}
)
# DataFrame case
result = df_with_categorical.groupby(["A", "C"], observed=observed).transform("sum")
expected = df_without_categorical.groupby(["A", "C"]).transform("sum")
tm.assert_frame_equal(result, expected)
expected_explicit = pd.DataFrame({"B": [4, 2, 4]})
tm.assert_frame_equal(result, expected_explicit)
# Series case
result = df_with_categorical.groupby(["A", "C"], observed=observed)["B"].transform(
"sum"
)
expected = df_without_categorical.groupby(["A", "C"])["B"].transform("sum")
tm.assert_series_equal(result, expected)
expected_explicit = pd.Series([4, 2, 4], name="B")
tm.assert_series_equal(result, expected_explicit)
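# Illustrative sketch (not part of the original test module): whatever the
# grouping keys are, transform keeps the output aligned 1:1 with the input
# rows; unobserved categories never add rows.
def _demo_transform_matches_input_shape():
    import pandas as pd

    demo = pd.DataFrame(
        {
            "cat": pd.Categorical(["a", "a", "b"], categories=["a", "b", "c"]),
            "x": [1, 2, 3],
        }
    )
    out = demo.groupby("cat")["x"].transform("sum")
    assert len(out) == len(demo)
    assert out.tolist() == [3, 3, 3]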
| bsd-3-clause |
| xuewei4d/scikit-learn | sklearn/inspection/_plot/tests/test_plot_partial_dependence.py | 10 | 23178 |
import numpy as np
from scipy.stats.mstats import mquantiles
import pytest
from numpy.testing import assert_allclose
from sklearn.datasets import load_diabetes
from sklearn.datasets import load_iris
from sklearn.datasets import make_classification, make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LinearRegression
from sklearn.utils._testing import _convert_container
from sklearn.inspection import plot_partial_dependence
# TODO: Remove when https://github.com/numpy/numpy/issues/14397 is resolved
pytestmark = pytest.mark.filterwarnings(
"ignore:In future, it will be an error for 'np.bool_':DeprecationWarning:"
"matplotlib.*")
@pytest.fixture(scope="module")
def diabetes():
return load_diabetes()
@pytest.fixture(scope="module")
def clf_diabetes(diabetes):
clf = GradientBoostingRegressor(n_estimators=10, random_state=1)
clf.fit(diabetes.data, diabetes.target)
return clf
@pytest.mark.filterwarnings("ignore:A Bunch will be returned")
@pytest.mark.parametrize("grid_resolution", [10, 20])
def test_plot_partial_dependence(grid_resolution, pyplot, clf_diabetes,
diabetes):
# Test partial dependence plot function.
# Use columns 0 & 2 as 1 is not quantitative (sex)
feature_names = diabetes.feature_names
disp = plot_partial_dependence(clf_diabetes, diabetes.data,
[0, 2, (0, 2)],
grid_resolution=grid_resolution,
feature_names=feature_names,
contour_kw={"cmap": "jet"})
fig = pyplot.gcf()
axs = fig.get_axes()
assert disp.figure_ is fig
assert len(axs) == 4
assert disp.bounding_ax_ is not None
assert disp.axes_.shape == (1, 3)
assert disp.lines_.shape == (1, 3)
assert disp.contours_.shape == (1, 3)
assert disp.deciles_vlines_.shape == (1, 3)
assert disp.deciles_hlines_.shape == (1, 3)
assert disp.lines_[0, 2] is None
assert disp.contours_[0, 0] is None
assert disp.contours_[0, 1] is None
# deciles lines: always show on xaxis, only show on yaxis if 2-way PDP
for i in range(3):
assert disp.deciles_vlines_[0, i] is not None
assert disp.deciles_hlines_[0, 0] is None
assert disp.deciles_hlines_[0, 1] is None
assert disp.deciles_hlines_[0, 2] is not None
assert disp.features == [(0, ), (2, ), (0, 2)]
assert np.all(disp.feature_names == feature_names)
assert len(disp.deciles) == 2
for i in [0, 2]:
assert_allclose(disp.deciles[i],
mquantiles(diabetes.data[:, i],
prob=np.arange(0.1, 1.0, 0.1)))
single_feature_positions = [(0, (0, 0)), (2, (0, 1))]
expected_ylabels = ["Partial dependence", ""]
for i, (feat_col, pos) in enumerate(single_feature_positions):
ax = disp.axes_[pos]
assert ax.get_ylabel() == expected_ylabels[i]
assert ax.get_xlabel() == diabetes.feature_names[feat_col]
assert_allclose(ax.get_ylim(), disp.pdp_lim[1])
line = disp.lines_[pos]
avg_preds = disp.pd_results[i]
assert avg_preds.average.shape == (1, grid_resolution)
target_idx = disp.target_idx
line_data = line.get_data()
assert_allclose(line_data[0], avg_preds["values"][0])
assert_allclose(line_data[1], avg_preds.average[target_idx].ravel())
# two feature position
ax = disp.axes_[0, 2]
contour = disp.contours_[0, 2]
expected_levels = np.linspace(*disp.pdp_lim[2], num=8)
assert_allclose(contour.levels, expected_levels)
assert contour.get_cmap().name == "jet"
assert ax.get_xlabel() == diabetes.feature_names[0]
assert ax.get_ylabel() == diabetes.feature_names[2]
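# Illustrative sketch (not collected by pytest): the call pattern the test
# above exercises -- fit an estimator, request 1-way and 2-way partial
# dependence, and inspect the returned display's axes_/lines_/contours_ grids.
# The estimator size and grid_resolution are arbitrary choices for the sketch.
def _demo_plot_partial_dependence_usage():
    X, y = make_regression(n_samples=100, n_features=4, random_state=0)
    est = GradientBoostingRegressor(n_estimators=5, random_state=0).fit(X, y)
    disp = plot_partial_dependence(est, X, [0, (0, 1)], grid_resolution=10)
    assert disp.axes_.shape == (1, 2)
    assert disp.lines_[0, 1] is None      # the 2-way panel is a contour plot
    assert disp.contours_[0, 0] is None   # the 1-way panel is a line plot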
@pytest.mark.filterwarnings("ignore:A Bunch will be returned")
@pytest.mark.parametrize("kind, subsample, shape", [
('average', None, (1, 3)),
('individual', None, (1, 3, 442)),
('both', None, (1, 3, 443)),
('individual', 50, (1, 3, 50)),
('both', 50, (1, 3, 51)),
('individual', 0.5, (1, 3, 221)),
('both', 0.5, (1, 3, 222))
])
def test_plot_partial_dependence_kind(pyplot, kind, subsample, shape,
clf_diabetes, diabetes):
disp = plot_partial_dependence(clf_diabetes, diabetes.data, [0, 1, 2],
kind=kind, subsample=subsample)
assert disp.axes_.shape == (1, 3)
assert disp.lines_.shape == shape
assert disp.contours_.shape == (1, 3)
assert disp.contours_[0, 0] is None
assert disp.contours_[0, 1] is None
assert disp.contours_[0, 2] is None
@pytest.mark.filterwarnings("ignore:A Bunch will be returned")
@pytest.mark.parametrize(
"input_type, feature_names_type",
[('dataframe', None),
('dataframe', 'list'), ('list', 'list'), ('array', 'list'),
('dataframe', 'array'), ('list', 'array'), ('array', 'array'),
('dataframe', 'series'), ('list', 'series'), ('array', 'series'),
('dataframe', 'index'), ('list', 'index'), ('array', 'index')]
)
def test_plot_partial_dependence_str_features(pyplot, clf_diabetes, diabetes,
input_type, feature_names_type):
if input_type == 'dataframe':
pd = pytest.importorskip("pandas")
X = pd.DataFrame(diabetes.data, columns=diabetes.feature_names)
elif input_type == 'list':
X = diabetes.data.tolist()
else:
X = diabetes.data
if feature_names_type is None:
feature_names = None
else:
feature_names = _convert_container(diabetes.feature_names,
feature_names_type)
grid_resolution = 25
# check with str features and array feature names and single column
disp = plot_partial_dependence(clf_diabetes, X,
[('age', 'bmi'), 'bmi'],
grid_resolution=grid_resolution,
feature_names=feature_names,
n_cols=1, line_kw={"alpha": 0.8})
fig = pyplot.gcf()
axs = fig.get_axes()
assert len(axs) == 3
assert disp.figure_ is fig
assert disp.axes_.shape == (2, 1)
assert disp.lines_.shape == (2, 1)
assert disp.contours_.shape == (2, 1)
assert disp.deciles_vlines_.shape == (2, 1)
assert disp.deciles_hlines_.shape == (2, 1)
assert disp.lines_[0, 0] is None
assert disp.deciles_vlines_[0, 0] is not None
assert disp.deciles_hlines_[0, 0] is not None
assert disp.contours_[1, 0] is None
assert disp.deciles_hlines_[1, 0] is None
assert disp.deciles_vlines_[1, 0] is not None
# line
ax = disp.axes_[1, 0]
assert ax.get_xlabel() == "bmi"
assert ax.get_ylabel() == "Partial dependence"
line = disp.lines_[1, 0]
avg_preds = disp.pd_results[1]
target_idx = disp.target_idx
assert line.get_alpha() == 0.8
line_data = line.get_data()
assert_allclose(line_data[0], avg_preds["values"][0])
assert_allclose(line_data[1], avg_preds.average[target_idx].ravel())
# contour
ax = disp.axes_[0, 0]
contour = disp.contours_[0, 0]
expect_levels = np.linspace(*disp.pdp_lim[2], num=8)
assert_allclose(contour.levels, expect_levels)
assert ax.get_xlabel() == "age"
assert ax.get_ylabel() == "bmi"
@pytest.mark.filterwarnings("ignore:A Bunch will be returned")
def test_plot_partial_dependence_custom_axes(pyplot, clf_diabetes, diabetes):
grid_resolution = 25
fig, (ax1, ax2) = pyplot.subplots(1, 2)
disp = plot_partial_dependence(clf_diabetes, diabetes.data,
['age', ('age', 'bmi')],
grid_resolution=grid_resolution,
feature_names=diabetes.feature_names,
ax=[ax1, ax2])
assert fig is disp.figure_
assert disp.bounding_ax_ is None
assert disp.axes_.shape == (2, )
assert disp.axes_[0] is ax1
assert disp.axes_[1] is ax2
ax = disp.axes_[0]
assert ax.get_xlabel() == "age"
assert ax.get_ylabel() == "Partial dependence"
line = disp.lines_[0]
avg_preds = disp.pd_results[0]
target_idx = disp.target_idx
line_data = line.get_data()
assert_allclose(line_data[0], avg_preds["values"][0])
assert_allclose(line_data[1], avg_preds.average[target_idx].ravel())
# contour
ax = disp.axes_[1]
contour = disp.contours_[1]
expect_levels = np.linspace(*disp.pdp_lim[2], num=8)
assert_allclose(contour.levels, expect_levels)
assert ax.get_xlabel() == "age"
assert ax.get_ylabel() == "bmi"
@pytest.mark.filterwarnings("ignore:A Bunch will be returned")
@pytest.mark.parametrize("kind, lines", [
('average', 1), ('individual', 442), ('both', 443)
])
def test_plot_partial_dependence_passing_numpy_axes(pyplot, clf_diabetes,
diabetes, kind, lines):
grid_resolution = 25
feature_names = diabetes.feature_names
disp1 = plot_partial_dependence(clf_diabetes, diabetes.data,
['age', 'bmi'], kind=kind,
grid_resolution=grid_resolution,
feature_names=feature_names)
assert disp1.axes_.shape == (1, 2)
assert disp1.axes_[0, 0].get_ylabel() == "Partial dependence"
assert disp1.axes_[0, 1].get_ylabel() == ""
assert len(disp1.axes_[0, 0].get_lines()) == lines
assert len(disp1.axes_[0, 1].get_lines()) == lines
lr = LinearRegression()
lr.fit(diabetes.data, diabetes.target)
disp2 = plot_partial_dependence(lr, diabetes.data,
['age', 'bmi'], kind=kind,
grid_resolution=grid_resolution,
feature_names=feature_names,
ax=disp1.axes_)
assert np.all(disp1.axes_ == disp2.axes_)
assert len(disp2.axes_[0, 0].get_lines()) == 2 * lines
assert len(disp2.axes_[0, 1].get_lines()) == 2 * lines
@pytest.mark.filterwarnings("ignore:A Bunch will be returned")
@pytest.mark.parametrize("nrows, ncols", [(2, 2), (3, 1)])
def test_plot_partial_dependence_incorrect_num_axes(pyplot, clf_diabetes,
diabetes, nrows, ncols):
grid_resolution = 5
fig, axes = pyplot.subplots(nrows, ncols)
axes_formats = [list(axes.ravel()), tuple(axes.ravel()), axes]
msg = "Expected ax to have 2 axes, got {}".format(nrows * ncols)
disp = plot_partial_dependence(clf_diabetes, diabetes.data,
['age', 'bmi'],
grid_resolution=grid_resolution,
feature_names=diabetes.feature_names)
for ax_format in axes_formats:
with pytest.raises(ValueError, match=msg):
plot_partial_dependence(clf_diabetes, diabetes.data,
['age', 'bmi'],
grid_resolution=grid_resolution,
feature_names=diabetes.feature_names,
ax=ax_format)
# with axes object
with pytest.raises(ValueError, match=msg):
disp.plot(ax=ax_format)
@pytest.mark.filterwarnings("ignore:A Bunch will be returned")
def test_plot_partial_dependence_with_same_axes(pyplot, clf_diabetes,
diabetes):
# The first call to plot_partial_dependence will create two new axes to
# place in the space of the passed in axes, which results in a total of
# three axes in the figure.
# Currently the API does not allow for the second call to
# plot_partial_dependence to use the same axes again, because it will
# create two new axes in the space resulting in five axes. To get the
# expected behavior one needs to pass the generated axes into the second
# call:
# disp1 = plot_partial_dependence(...)
# disp2 = plot_partial_dependence(..., ax=disp1.axes_)
grid_resolution = 25
fig, ax = pyplot.subplots()
plot_partial_dependence(clf_diabetes, diabetes.data, ['age', 'bmi'],
grid_resolution=grid_resolution,
feature_names=diabetes.feature_names, ax=ax)
msg = ("The ax was already used in another plot function, please set "
"ax=display.axes_ instead")
with pytest.raises(ValueError, match=msg):
plot_partial_dependence(clf_diabetes, diabetes.data,
['age', 'bmi'],
grid_resolution=grid_resolution,
feature_names=diabetes.feature_names, ax=ax)
@pytest.mark.filterwarnings("ignore:A Bunch will be returned")
def test_plot_partial_dependence_feature_name_reuse(pyplot, clf_diabetes,
diabetes):
# second call to plot does not change the feature names from the first
# call
feature_names = diabetes.feature_names
disp = plot_partial_dependence(clf_diabetes, diabetes.data,
[0, 1],
grid_resolution=10,
feature_names=feature_names)
plot_partial_dependence(clf_diabetes, diabetes.data, [0, 1],
grid_resolution=10, ax=disp.axes_)
for i, ax in enumerate(disp.axes_.ravel()):
assert ax.get_xlabel() == feature_names[i]
@pytest.mark.filterwarnings("ignore:A Bunch will be returned")
def test_plot_partial_dependence_multiclass(pyplot):
grid_resolution = 25
clf_int = GradientBoostingClassifier(n_estimators=10, random_state=1)
iris = load_iris()
# Test partial dependence plot function on multi-class input.
clf_int.fit(iris.data, iris.target)
disp_target_0 = plot_partial_dependence(clf_int, iris.data, [0, 1],
target=0,
grid_resolution=grid_resolution)
assert disp_target_0.figure_ is pyplot.gcf()
assert disp_target_0.axes_.shape == (1, 2)
assert disp_target_0.lines_.shape == (1, 2)
assert disp_target_0.contours_.shape == (1, 2)
assert disp_target_0.deciles_vlines_.shape == (1, 2)
assert disp_target_0.deciles_hlines_.shape == (1, 2)
assert all(c is None for c in disp_target_0.contours_.flat)
assert disp_target_0.target_idx == 0
# now with symbol labels
target = iris.target_names[iris.target]
clf_symbol = GradientBoostingClassifier(n_estimators=10, random_state=1)
clf_symbol.fit(iris.data, target)
disp_symbol = plot_partial_dependence(clf_symbol, iris.data, [0, 1],
target='setosa',
grid_resolution=grid_resolution)
assert disp_symbol.figure_ is pyplot.gcf()
assert disp_symbol.axes_.shape == (1, 2)
assert disp_symbol.lines_.shape == (1, 2)
assert disp_symbol.contours_.shape == (1, 2)
assert disp_symbol.deciles_vlines_.shape == (1, 2)
assert disp_symbol.deciles_hlines_.shape == (1, 2)
assert all(c is None for c in disp_symbol.contours_.flat)
assert disp_symbol.target_idx == 0
for int_result, symbol_result in zip(disp_target_0.pd_results,
disp_symbol.pd_results):
assert_allclose(int_result.average, symbol_result.average)
assert_allclose(int_result["values"], symbol_result["values"])
# check that the pd plots are different for another target
disp_target_1 = plot_partial_dependence(clf_int, iris.data, [0, 1],
target=1,
grid_resolution=grid_resolution)
target_0_data_y = disp_target_0.lines_[0, 0].get_data()[1]
target_1_data_y = disp_target_1.lines_[0, 0].get_data()[1]
assert any(target_0_data_y != target_1_data_y)
multioutput_regression_data = make_regression(n_samples=50, n_targets=2,
random_state=0)
@pytest.mark.filterwarnings("ignore:A Bunch will be returned")
@pytest.mark.parametrize("target", [0, 1])
def test_plot_partial_dependence_multioutput(pyplot, target):
# Test partial dependence plot function on multi-output input.
X, y = multioutput_regression_data
clf = LinearRegression().fit(X, y)
grid_resolution = 25
disp = plot_partial_dependence(clf, X, [0, 1], target=target,
grid_resolution=grid_resolution)
fig = pyplot.gcf()
axs = fig.get_axes()
assert len(axs) == 3
assert disp.target_idx == target
assert disp.bounding_ax_ is not None
positions = [(0, 0), (0, 1)]
expected_label = ["Partial dependence", ""]
for i, pos in enumerate(positions):
ax = disp.axes_[pos]
assert ax.get_ylabel() == expected_label[i]
assert ax.get_xlabel() == "{}".format(i)
@pytest.mark.filterwarnings("ignore:A Bunch will be returned")
def test_plot_partial_dependence_dataframe(pyplot, clf_diabetes, diabetes):
pd = pytest.importorskip('pandas')
df = pd.DataFrame(diabetes.data, columns=diabetes.feature_names)
grid_resolution = 25
plot_partial_dependence(
clf_diabetes, df, ['bp', 's1'], grid_resolution=grid_resolution,
feature_names=df.columns.tolist()
)
dummy_classification_data = make_classification(random_state=0)
@pytest.mark.filterwarnings("ignore:A Bunch will be returned")
@pytest.mark.parametrize(
"data, params, err_msg",
[(multioutput_regression_data, {"target": None, 'features': [0]},
"target must be specified for multi-output"),
(multioutput_regression_data, {"target": -1, 'features': [0]},
r'target must be in \[0, n_tasks\]'),
(multioutput_regression_data, {"target": 100, 'features': [0]},
r'target must be in \[0, n_tasks\]'),
(dummy_classification_data,
{'features': ['foobar'], 'feature_names': None},
'Feature foobar not in feature_names'),
(dummy_classification_data,
{'features': ['foobar'], 'feature_names': ['abcd', 'def']},
'Feature foobar not in feature_names'),
(dummy_classification_data, {'features': [(1, 2, 3)]},
'Each entry in features must be either an int, '),
(dummy_classification_data, {'features': [1, {}]},
'Each entry in features must be either an int, '),
(dummy_classification_data, {'features': [tuple()]},
'Each entry in features must be either an int, '),
(dummy_classification_data,
{'features': [123], 'feature_names': ['blahblah']},
'All entries of features must be less than '),
(dummy_classification_data,
{'features': [0, 1, 2], 'feature_names': ['a', 'b', 'a']},
'feature_names should not contain duplicates'),
(dummy_classification_data, {'features': [(1, 2)], 'kind': 'individual'},
'It is not possible to display individual effects for more than one'),
(dummy_classification_data, {'features': [(1, 2)], 'kind': 'both'},
'It is not possible to display individual effects for more than one'),
(dummy_classification_data, {'features': [1], 'subsample': -1},
'When an integer, subsample=-1 should be positive.'),
(dummy_classification_data, {'features': [1], 'subsample': 1.2},
r'When a floating-point, subsample=1.2 should be in the \(0, 1\) range')]
)
def test_plot_partial_dependence_error(pyplot, data, params, err_msg):
X, y = data
estimator = LinearRegression().fit(X, y)
with pytest.raises(ValueError, match=err_msg):
plot_partial_dependence(estimator, X, **params)
@pytest.mark.filterwarnings("ignore:A Bunch will be returned")
@pytest.mark.parametrize("params, err_msg", [
({'target': 4, 'features': [0]},
'target not in est.classes_, got 4'),
({'target': None, 'features': [0]},
'target must be specified for multi-class'),
({'target': 1, 'features': [4.5]},
'Each entry in features must be either an int,'),
])
def test_plot_partial_dependence_multiclass_error(pyplot, params, err_msg):
iris = load_iris()
clf = GradientBoostingClassifier(n_estimators=10, random_state=1)
clf.fit(iris.data, iris.target)
with pytest.raises(ValueError, match=err_msg):
plot_partial_dependence(clf, iris.data, **params)
def test_plot_partial_dependence_does_not_override_ylabel(pyplot, clf_diabetes,
diabetes):
# Non-regression test to be sure to not override the ylabel if it has been
# set by the user.
# See https://github.com/scikit-learn/scikit-learn/issues/15772
_, axes = pyplot.subplots(1, 2)
axes[0].set_ylabel("Hello world")
plot_partial_dependence(clf_diabetes, diabetes.data,
[0, 1], ax=axes)
assert axes[0].get_ylabel() == "Hello world"
assert axes[1].get_ylabel() == "Partial dependence"
@pytest.mark.parametrize(
"kind, expected_shape",
[("average", (1, 2)), ("individual", (1, 2, 50)), ("both", (1, 2, 51))],
)
def test_plot_partial_dependence_subsampling(
pyplot, clf_diabetes, diabetes, kind, expected_shape
):
# check that the subsampling is properly working
# non-regression test for:
# https://github.com/scikit-learn/scikit-learn/pull/18359
matplotlib = pytest.importorskip("matplotlib")
grid_resolution = 25
feature_names = diabetes.feature_names
disp1 = plot_partial_dependence(
clf_diabetes,
diabetes.data,
["age", "bmi"],
kind=kind,
grid_resolution=grid_resolution,
feature_names=feature_names,
subsample=50,
random_state=0,
)
assert disp1.lines_.shape == expected_shape
assert all(
[
isinstance(line, matplotlib.lines.Line2D)
for line in disp1.lines_.ravel()
]
)
@pytest.mark.parametrize(
"kind, line_kw, label",
[
("individual", {}, None),
("individual", {"label": "xxx"}, None),
("average", {}, None),
("average", {"label": "xxx"}, "xxx"),
("both", {}, "average"),
("both", {"label": "xxx"}, "xxx"),
],
)
def test_partial_dependence_overwrite_labels(
pyplot,
clf_diabetes,
diabetes,
kind,
line_kw,
label,
):
"""Test that make sure that we can overwrite the label of the PDP plot"""
disp = plot_partial_dependence(
clf_diabetes,
diabetes.data,
[0, 2],
grid_resolution=25,
feature_names=diabetes.feature_names,
kind=kind,
line_kw=line_kw,
)
for ax in disp.axes_.ravel():
if label is None:
assert ax.get_legend() is None
else:
legend_text = ax.get_legend().get_texts()
assert len(legend_text) == 1
assert legend_text[0].get_text() == label
| bsd-3-clause |