markdown (stringlengths, 0-37k) | code (stringlengths, 1-33.3k) | path (stringlengths, 8-215) | repo_name (stringlengths, 6-77) | license (stringclasses, 15 values)
---|---|---|---|---|
We have sets of foregrounds and backgrounds along with the variables
$\alpha$: parameters in the concentration function (which is a function of $z_i,M_i$)
$\theta$: prior distribution of halo masses
$z_i$: foreground galaxy redshift
$x_i$: foreground galaxy angular coordinates
$z_j$: background galaxy redshift
$x_j$: background galaxy angular coordinates
$g_j$: reduced shear
$\sigma_{\epsilon_j}^{obs}$: noise from our ellipticity measurement process
$\sigma_{\epsilon}^{int}$: intrinsic variance in ellipticities
$\epsilon_j^{obs}$: observed ellipticity of background galaxy $j$
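One plausible way these pieces fit together (an assumed sketch for orientation only, not taken from the notebook) is a Gaussian likelihood for each observed background ellipticity, centred on the reduced shear predicted by the foreground halos:

$$\epsilon_j^{obs} \sim \mathcal{N}\!\left(g_j(\alpha, \theta, z_i, x_i, z_j, x_j),\; (\sigma_{\epsilon}^{int})^2 + (\sigma_{\epsilon_j}^{obs})^2\right)$$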
Stellar Mass Threshold | from pandas import read_table
from pangloss import GUO_FILE
m_h = 'M_Subhalo[M_sol/h]'
m_s = 'M_Stellar[M_sol/h]'
guo_data = read_table(GUO_FILE)
nonzero_guo_data = guo_data[guo_data[m_h] > 0]
import matplotlib.pyplot as plt
stellar_mass_threshold = 5.883920e+10
plt.scatter(nonzero_guo_data[m_h], nonzero_guo_data[m_s], alpha=0.05)
plt.axhline(y=stellar_mass_threshold, color='red')
plt.xlabel('Halo Mass')
plt.ylabel('Stellar Mass')
plt.title('SMHM Scatter')
plt.xscale('log')
plt.yscale('log')
from math import log
import numpy as np
start = log(nonzero_guo_data[m_s].min(), 10)
stop = log(nonzero_guo_data[m_s].max(), 10)
m_logspace = np.logspace(start, stop, num=20, base=10)[:-1]
m_corrs = []
thin_data = nonzero_guo_data[[m_s, m_h]]
for cutoff in m_logspace:
    tmp = thin_data[nonzero_guo_data[m_s] > cutoff]
    m_corrs.append(tmp.corr()[m_s][m_h])
plt.plot(m_logspace, m_corrs, label='correlation')
plt.axvline(x=stellar_mass_threshold, color='red', label='threshold')
plt.xscale('log')
plt.legend(loc=2)
plt.xlabel('Stellar Mass')
plt.ylabel('Stellar Mass - Halo Mass Correlation')
plt.title('SMHM Correlation')
plt.rcParams['figure.figsize'] = (10, 6)
# plt.plot(hist[1][:-1], hist[0], label='correlation')
plt.hist(nonzero_guo_data[m_s], bins=m_logspace, alpha=0.4, normed=False, label='dataset')
plt.axvline(x=stellar_mass_threshold, color='red', label='threshold')
plt.xscale('log')
plt.legend(loc=2)
plt.xlabel('Stellar Mass')
plt.ylabel('Number of Samples')
plt.title('Stellar Mass Distribution') | GroupMeeting_11_16.ipynb | davidthomas5412/PanglossNotebooks | mit |
Results | from pandas import read_csv
res = read_csv('data3.csv')
tru = read_csv('true3.csv')
start = min([res[res[c] > 0][c].min() for c in res.columns[1:-1]])
stop = res.max().max()
base = 10
start = log(start, base)
end = log(stop, base)
res_logspace = np.logspace(start, end, num=10, base=base)
plt.rcParams['figure.figsize'] = (20, 12)
for i,val in enumerate(tru.columns[1:]):
    plt.subplot(int('91' + str(i+1)))
    x = res[val][res[val] > 0]
    weights = np.exp(res['log-likelihood'][res[val] > 0])
    t = tru[val].loc[0]
    plt.hist(x, bins=res_logspace, alpha=0.4, normed=True, label='prior')
    plt.hist(x, bins=res_logspace, weights=weights, alpha=0.4, normed=True, label='posterior')
    plt.axvline(x=t, color='red', label='truth', linewidth=1)
    plt.xscale('log')
    plt.legend()
    plt.ylabel('PDF')
    plt.xlabel('Halo Mass (log-scale)')
    plt.title('Halo ID ' + val)
plt.show()
res.columns
res[['112009306000027', 'log-likelihood']].sort('log-likelihood') | GroupMeeting_11_16.ipynb | davidthomas5412/PanglossNotebooks | mit |
3. Enter DV360 Report To Storage Recipe Parameters
Specify either report name or report id to move a report.
The most recent valid file will be moved to the bucket.
Modify the values below for your use case, can be done multiple times, then click play. | FIELDS = {
'auth_read':'user', # Credentials used for reading data.
'dbm_report_id':'', # DV360 report ID given in UI, not needed if name used.
'auth_write':'service', # Credentials used for writing data.
'dbm_report_name':'', # Name of report, not needed if ID used.
'dbm_bucket':'', # Google cloud bucket.
'dbm_path':'', # Path and filename to write to.
}
print("Parameters Set To: %s" % FIELDS)
| colabs/dbm_to_storage.ipynb | google/starthinker | apache-2.0 |
4. Execute DV360 Report To Storage
This does NOT need to be modified unless you are changing the recipe, click play. | from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'dbm':{
'auth':{'field':{'name':'auth_read','kind':'authentication','order':1,'default':'user','description':'Credentials used for reading data.'}},
'report':{
'report_id':{'field':{'name':'dbm_report_id','kind':'integer','order':1,'default':'','description':'DV360 report ID given in UI, not needed if name used.'}},
'name':{'field':{'name':'dbm_report_name','kind':'string','order':2,'default':'','description':'Name of report, not needed if ID used.'}}
},
'out':{
'storage':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},
'bucket':{'field':{'name':'dbm_bucket','kind':'string','order':3,'default':'','description':'Google cloud bucket.'}},
'path':{'field':{'name':'dbm_path','kind':'string','order':4,'default':'','description':'Path and filename to write to.'}}
}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
| colabs/dbm_to_storage.ipynb | google/starthinker | apache-2.0 |
Characterizing sample data | variant_annotations = [{
'va':va,
'n_te': len(list(va.transcriptEffects)),
'n_ef': len(list(ef for te in va.transcriptEffects for ef in te.effects)),
'sos': ";".join(sorted(set("{ef.id}:{ef.term}".format(ef=ef)
for te in va.transcriptEffects
for ef in te.effects)))
}
for va in gc.searchVariantAnnotations(variant_annotation_set.id, **region_constraints)
]
variant_annotations_df = pd.DataFrame(variant_annotations) | nb/Exploring SO terms.ipynb | reece/ga4gh-examples | apache-2.0 |
The following is an inline graphic image. See instructions below it for reproducing it.
To regenerate this data:
Eval the next cell
Select Bar Chart from Table menu
Drag-drop "sos" to left column under Count pulldown
Drag-drop n_te, then n_ef to row to right of Count pulldown | pivot_ui(variant_annotations_df) | nb/Exploring SO terms.ipynb | reece/ga4gh-examples | apache-2.0 |
The searches
Using the data above, we can search for single and multiple terms and compare to expectations.
We'll be using this function:
```
Signature: gc.searchVariantAnnotations(variantAnnotationSetId, referenceName=None, referenceId=None,
                                        start=None, end=None, featureIds=[], effects=[])
Docstring:
Returns an iterator over the Annotations fulfilling the specified conditions from the specified
AnnotationSet.
```
The JSON string for an effect term must be specified on the command line :
`--effects '{"term": "exon_variant"}'`. | def _mk_effect_filter(so_ids=[]):
"""return list of so_id effect filters for the given list of so_ids
>>> print(_mk_effect_filter(so_ids="SO:1 SO:2 SO:3".split()))
['{"id":"SO:1"}', '{"id":"SO:2"}', '{"id":"SO:3"}']
"""
return [{"id": so_id} for so_id in so_ids]
def _fetch_variant_annotations(gc, so_ids=[], **args):
return gc.searchVariantAnnotations(variant_annotation_set.id,
effects=_mk_effect_filter(so_ids),
**args)
# expected:
#so_terms
#SO:0000605:intergenic_region 697
#SO:0000605:intergenic_region;SO:0001631:upstream_gene_variant 63
#SO:0000605:intergenic_region;SO:0001632:downstream_gene_variant 56
#SO:0001583:missense_variant 16
#SO:0001587:stop_gained 1
#SO:0001819:synonymous_variant 7
[(so_set,
len(list(_fetch_variant_annotations(gc, so_ids=so_set.split(), **region_constraints))))
for so_set in ["SO:0001819", "SO:0001632", "SO:0000605",
"SO:0000605 SO:0001632", "SO:0001632 SO:0000605",
"SO:9999999", "SO:0000605 SO:999999"]
] | nb/Exploring SO terms.ipynb | reece/ga4gh-examples | apache-2.0 |
However, this ceases to be true when two sinusoids of equal frequency and phase are multiplied together. In this case, instead of averaging out to zero, the product of the two waves has a nonzero mean value. | df['sin_mixed'] = np.multiply(df.sine, df.sine)
df['mean_mixed'] = np.mean(df.sin_mixed)
df[['sin_mixed','mean_mixed']][:1000].plot() | assets/lockin_amp_simulation.ipynb | Cushychicken/cushychicken.github.io | mit |
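For reference (a standard trigonometric fact added here for clarity), the product-to-sum identity makes this DC term explicit:

$$\sin(\omega t)\,\sin(\omega t + \phi) = \tfrac{1}{2}\left[\cos(\phi) - \cos(2\omega t + \phi)\right]$$

Averaging over many cycles kills the $2\omega$ term and leaves the constant $\tfrac{1}{2}\cos(\phi)$, which is largest when the two signals are exactly in phase.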
This DC voltage produced by the product of the two waves is very sensitive to changes in frequency. The plots below show that a 101Hz signal has a mean value of zero when multiplied by a 100Hz signal. | df['sin_mixed_101'] = np.multiply(df.sine, sine_wave(101))
df['mean_mixed_101'] = np.mean(df.sin_mixed_101)
df[['sin_mixed_101','mean_mixed_101']].plot() | assets/lockin_amp_simulation.ipynb | Cushychicken/cushychicken.github.io | mit |
This is really useful in situations where you have a signal of a known frequency. With the proper equipment, you can "lock in" to your known-frequency signal, and track changes to the amplitude and phase of that signal - even in the presence of overwhelming noise.
You can show this pretty easily by just scaling down one of the waves in our prior example, and burying it in noise. (This signal is about 20dB below the noise floor in this case.) | noise_fl = np.array([(2 * np.random.random() - 1) for a in range(10000)])
df['sine_noisy'] = np.add(noise_fl, 0.1*df['sine'])
df['sin_noisy_mixed'] = np.multiply(df.sine_noisy, df.sine)
df['mean_noisy_mixed'] = df['sin_noisy_mixed'].mean()
fig, axes = plt.subplots(nrows=1, ncols=2)
fig.set_size_inches(12,4)
df['sine_noisy'].plot(ax=axes[0])
df[['sin_noisy_mixed', 'mean_noisy_mixed']].plot(ax=axes[1]) | assets/lockin_amp_simulation.ipynb | Cushychicken/cushychicken.github.io | mit |
It doesn't look like much at the scale of the previous plots, but it's definitely the signal we're looking for. That's because the lock-in output scales with the amplitude of both the input signal and the reference waveform:
$$U_{out}=\frac{1}{2}V_{sig}V_{ref}\cos(\theta)$$
As a result, the lock-in amp has a small (but meaningful) amplitude: | df['mean_noisy_mixed'].plot() | assets/lockin_amp_simulation.ipynb | Cushychicken/cushychicken.github.io | mit |
Great! We can pull really weak signals out of seemingly endless noise. So, why haven't we used this technology to revolutionize all communications with infinite signal-to-noise ratio?
Like all real systems, there's a tradeoff, and for a lock-in amplifier, that tradeoff is time. Lock-in amps rely on a persistent periodic signal - without one, there isn't anything to lock on to! That's the catch of multiplying two signals of identical frequencies together: it takes time for that DC offset component to form.
A second tradeoff of the averaging method becomes obvious when you consider how to implement the averaging in a practical manner. Since we're talking about this in the context of electronics: one of the simplest ways to average, electronically, is to just filter by frequency, and it doesn't get much simpler than a single pole lowpass filter for a nice gentle average. The result looks pretty good when applied to the product of two sine waves: | def lowpass(x, alpha=0.001):
    data = [x[0]]
    for a in x[1:]:
        data.append(data[-1] + (alpha*(a-data[-1])))
    return np.array(data)
df['sin_mixed_lp'] = lowpass(df.sin_mixed)
df['sin_mixed_lp'].plot() | assets/lockin_amp_simulation.ipynb | Cushychicken/cushychicken.github.io | mit |
...but it starts to break down when you filter the noisy signals, which can contain large fluctuations that aren't necessarily real: | df['sin_noisy_mixed_lp'] = lowpass(df.sin_noisy_mixed)
df['sin_noisy_mixed_lp'].plot() | assets/lockin_amp_simulation.ipynb | Cushychicken/cushychicken.github.io | mit |
We can get rid of some of that statistical noise junk by rerunning the filter, of course, but that takes time, and also robs the lock-in of a bit of responsiveness. | df['sin_noisy_mixed_lp2'] = lowpass(df.sin_noisy_mixed_lp)
df['sin_noisy_mixed_lp2'].plot() | assets/lockin_amp_simulation.ipynb | Cushychicken/cushychicken.github.io | mit |
On top of all this, lock-in amps are highly sensitive to phase differences between reference and signal tones. Take a look at the plots below, where our noisy signal is mixed with waves 45 and 90 degrees offset from it. | df['sin_phase45_mixed'] = np.multiply(df.sine_noisy, sine_wave(100, phase=45))
df['sin_phase90_mixed'] = np.multiply(df.sine_noisy, sine_wave(100, phase=90))
df['sin_phase45_mixed_lp'] = lowpass(df['sin_phase45_mixed'])
df['sin_phase90_mixed_lp'] = lowpass(df['sin_phase90_mixed'])
fig, axes = plt.subplots(nrows=1, ncols=2)
fig.set_size_inches(12,4)
df[['sin_noisy_mixed_lp','sin_phase45_mixed_lp','sin_phase90_mixed_lp']].plot(ax=axes[0])
df[['sin_noisy_mixed_lp','sin_phase45_mixed_lp','sin_phase90_mixed_lp']][6000:].plot(ax=axes[1]) | assets/lockin_amp_simulation.ipynb | Cushychicken/cushychicken.github.io | mit |
These plots illustrate that there's a component of phase sensitivity. As the phase of signal moves farther and farther out of phase with the reference, the lock-in output starts to trend downwards, closer to zero. You can see, too, why lock-ins require time to settle out to a final value - the left plot shows how signals that are greatly out of phase with one another can produce an initial signal value where none should exist! The right plot, however, shows how the filtered, 90-degree offset signal (green trace) declines over time to the correct average value of approximately zero.
Quadrature Output
Like all lab equipment, lock-in amplifiers were originally analog devices. Analog lock-ins required a bit of tedious work to get optimum performance from amplifier - typically adjusting the phase of the reference so as to be in-phase with the target signal. This could prove time consuming given the time delay required for the output to stabilize! However, advances in digital technology have since yielded some nice improvements for lock-in amplifiers:
digitally generated, near-perfect reference signals,
simultaneous sine and cosine mixing,
DSP based output filter - easily and accurately change filter order and corner!
This easy access to both sine-mixed and cosine-mixed signals allow us to plot the output of a digital lock-in amplifier as a quadrature modulated signal, which shows changes in both the magnitude and phase of the lock-in vector: | def cosine_wave(freq, phase=0, Fs=10000):
    ph_rad = (phase/360.0)*(2.0*np.pi)
    return np.array([np.cos(((2 * np.pi * freq * a) / Fs) + ph_rad) for a in range(Fs)])
df['cos_noisy_mixed'] = np.multiply(df.sine_noisy, cosine_wave(100))
df['cos_noisy_mixed_lp'] = lowpass(df['cos_noisy_mixed'])
df['noisy_quad_mag'] = np.sqrt(np.add(np.square(df['cos_noisy_mixed_lp']),
np.square(df['sin_noisy_mixed_lp'])))
df['noisy_quad_pha'] = np.arctan2(df['cos_noisy_mixed_lp'], df['sin_noisy_mixed_lp'])
fig, axes = plt.subplots(nrows=1, ncols=2)
fig.set_size_inches(12,4)
axes[0].set_title('Magnitude')
axes[1].set_title('Phase (radians)')
df['noisy_quad_mag'][8000:].plot(ax=axes[0])
df['noisy_quad_pha'][8000:].plot(ax=axes[1]) | assets/lockin_amp_simulation.ipynb | Cushychicken/cushychicken.github.io | mit |
We can use Scikit-Learn's LinearRegression estimator to fit this data and construct the best-fit line: | from sklearn.linear_model import LinearRegression
model = LinearRegression(fit_intercept=True)
model.fit(x[:, np.newaxis], y)
xfit = np.linspace(0, 10, 1000)
yfit = model.predict(xfit[:, np.newaxis])
plt.scatter(x, y)
plt.plot(xfit, yfit); | present/mcc2/PythonDataScienceHandbook/05.06-Linear-Regression.ipynb | csaladenes/csaladenes.github.io | mit |
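The fitted slope and intercept can be read straight off the trained estimator; this small check is added here and simply uses LinearRegression's standard attributes:

```python
print("Model slope:    ", model.coef_[0])
print("Model intercept:", model.intercept_)
```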
We see that the results are very close to the inputs, as we might hope.
The LinearRegression estimator is much more capable than this, however—in addition to simple straight-line fits, it can also handle multidimensional linear models of the form
$$
y = a_0 + a_1 x_1 + a_2 x_2 + \cdots
$$
where there are multiple $x$ values.
Geometrically, this is akin to fitting a plane to points in three dimensions, or fitting a hyper-plane to points in higher dimensions.
The multidimensional nature of such regressions makes them more difficult to visualize, but we can see one of these fits in action by building some example data, using NumPy's matrix multiplication operator: | rng = np.random.RandomState(1)
X = 10 * rng.rand(100, 3)
y = 0.5 + np.dot(X, [1.5, -2., 1.])
model.fit(X, y)
print(model.intercept_)
print(model.coef_) | present/mcc2/PythonDataScienceHandbook/05.06-Linear-Regression.ipynb | csaladenes/csaladenes.github.io | mit |
Here the $y$ data is constructed from three random $x$ values, and the linear regression recovers the coefficients used to construct the data.
In this way, we can use the single LinearRegression estimator to fit lines, planes, or hyperplanes to our data.
It still appears that this approach would be limited to strictly linear relationships between variables, but it turns out we can relax this as well.
Basis Function Regression
One trick you can use to adapt linear regression to nonlinear relationships between variables is to transform the data according to basis functions.
We have seen one version of this before, in the PolynomialRegression pipeline used in Hyperparameters and Model Validation and Feature Engineering.
The idea is to take our multidimensional linear model:
$$
y = a_0 + a_1 x_1 + a_2 x_2 + a_3 x_3 + \cdots
$$
and build the $x_1, x_2, x_3,$ and so on, from our single-dimensional input $x$.
That is, we let $x_n = f_n(x)$, where $f_n()$ is some function that transforms our data.
For example, if $f_n(x) = x^n$, our model becomes a polynomial regression:
$$
y = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots
$$
Notice that this is still a linear model—the linearity refers to the fact that the coefficients $a_n$ never multiply or divide each other.
What we have effectively done is taken our one-dimensional $x$ values and projected them into a higher dimension, so that a linear fit can fit more complicated relationships between $x$ and $y$.
Polynomial basis functions
This polynomial projection is useful enough that it is built into Scikit-Learn, using the PolynomialFeatures transformer: | from sklearn.preprocessing import PolynomialFeatures
x = np.array([2, 3, 4])
poly = PolynomialFeatures(3, include_bias=False)
poly.fit_transform(x[:, None]) | present/mcc2/PythonDataScienceHandbook/05.06-Linear-Regression.ipynb | csaladenes/csaladenes.github.io | mit |
We see here that the transformer has converted our one-dimensional array into a three-dimensional array by taking the exponent of each value.
This new, higher-dimensional data representation can then be plugged into a linear regression.
As we saw in Feature Engineering, the cleanest way to accomplish this is to use a pipeline.
Let's make a 7th-degree polynomial model in this way: | from sklearn.pipeline import make_pipeline
poly_model = make_pipeline(PolynomialFeatures(7),
LinearRegression()) | present/mcc2/PythonDataScienceHandbook/05.06-Linear-Regression.ipynb | csaladenes/csaladenes.github.io | mit |
With this transform in place, we can use the linear model to fit much more complicated relationships between $x$ and $y$.
For example, here is a sine wave with noise: | rng = np.random.RandomState(1)
x = 10 * rng.rand(50)
y = np.sin(x) + 0.1 * rng.randn(50)
poly_model.fit(x[:, np.newaxis], y)
yfit = poly_model.predict(xfit[:, np.newaxis])
plt.scatter(x, y)
plt.plot(xfit, yfit); | present/mcc2/PythonDataScienceHandbook/05.06-Linear-Regression.ipynb | csaladenes/csaladenes.github.io | mit |
Our linear model, through the use of 7th-order polynomial basis functions, can provide an excellent fit to this non-linear data!
Gaussian basis functions
Of course, other basis functions are possible.
For example, one useful pattern is to fit a model that is not a sum of polynomial bases, but a sum of Gaussian bases.
The result might look something like the following figure:
figure source in Appendix
The shaded regions in the plot are the scaled basis functions, and when added together they reproduce the smooth curve through the data.
These Gaussian basis functions are not built into Scikit-Learn, but we can write a custom transformer that will create them, as shown here and illustrated in the following figure (Scikit-Learn transformers are implemented as Python classes; reading Scikit-Learn's source is a good way to see how they can be created): | from sklearn.base import BaseEstimator, TransformerMixin
class GaussianFeatures(BaseEstimator, TransformerMixin):
"""Uniformly spaced Gaussian features for one-dimensional input"""
def __init__(self, N, width_factor=2.0):
self.N = N
self.width_factor = width_factor
@staticmethod
def _gauss_basis(x, y, width, axis=None):
arg = (x - y) / width
return np.exp(-0.5 * np.sum(arg ** 2, axis))
def fit(self, X, y=None):
# create N centers spread along the data range
self.centers_ = np.linspace(X.min(), X.max(), self.N)
self.width_ = self.width_factor * (self.centers_[1] - self.centers_[0])
return self
def transform(self, X):
return self._gauss_basis(X[:, :, np.newaxis], self.centers_,
self.width_, axis=1)
gauss_model = make_pipeline(GaussianFeatures(20),
LinearRegression())
gauss_model.fit(x[:, np.newaxis], y)
yfit = gauss_model.predict(xfit[:, np.newaxis])
plt.scatter(x, y)
plt.plot(xfit, yfit)
plt.xlim(0, 10); | present/mcc2/PythonDataScienceHandbook/05.06-Linear-Regression.ipynb | csaladenes/csaladenes.github.io | mit |
We put this example here just to make clear that there is nothing magic about polynomial basis functions: if you have some sort of intuition into the generating process of your data that makes you think one basis or another might be appropriate, you can use them as well.
Regularization
The introduction of basis functions into our linear regression makes the model much more flexible, but it also can very quickly lead to over-fitting (refer back to Hyperparameters and Model Validation for a discussion of this).
For example, if we choose too many Gaussian basis functions, we end up with results that don't look so good: | model = make_pipeline(GaussianFeatures(30),
LinearRegression())
model.fit(x[:, np.newaxis], y)
plt.scatter(x, y)
plt.plot(xfit, model.predict(xfit[:, np.newaxis]))
plt.xlim(0, 10)
plt.ylim(-1.5, 1.5); | present/mcc2/PythonDataScienceHandbook/05.06-Linear-Regression.ipynb | csaladenes/csaladenes.github.io | mit |
With the data projected to the 30-dimensional basis, the model has far too much flexibility and goes to extreme values between locations where it is constrained by data.
We can see the reason for this if we plot the coefficients of the Gaussian bases with respect to their locations: | def basis_plot(model, title=None):
    fig, ax = plt.subplots(2, sharex=True)
    model.fit(x[:, np.newaxis], y)
    ax[0].scatter(x, y)
    ax[0].plot(xfit, model.predict(xfit[:, np.newaxis]))
    ax[0].set(xlabel='x', ylabel='y', ylim=(-1.5, 1.5))
    if title:
        ax[0].set_title(title)
    ax[1].plot(model.steps[0][1].centers_,
               model.steps[1][1].coef_)
    ax[1].set(xlabel='basis location',
              ylabel='coefficient',
              xlim=(0, 10))
model = make_pipeline(GaussianFeatures(30), LinearRegression())
basis_plot(model) | present/mcc2/PythonDataScienceHandbook/05.06-Linear-Regression.ipynb | csaladenes/csaladenes.github.io | mit |
The lower panel of this figure shows the amplitude of the basis function at each location.
This is typical over-fitting behavior when basis functions overlap: the coefficients of adjacent basis functions blow up and cancel each other out.
We know that such behavior is problematic, and it would be nice if we could limit such spikes explicitly in the model by penalizing large values of the model parameters.
Such a penalty is known as regularization, and comes in several forms.
Ridge regression ($L_2$ Regularization)
Perhaps the most common form of regularization is known as ridge regression or $L_2$ regularization, sometimes also called Tikhonov regularization.
This proceeds by penalizing the sum of squares (2-norms) of the model coefficients; in this case, the penalty on the model fit would be
$$
P = \alpha\sum_{n=1}^N \theta_n^2
$$
where $\alpha$ is a free parameter that controls the strength of the penalty.
This type of penalized model is built into Scikit-Learn with the Ridge estimator: | from sklearn.linear_model import Ridge
model = make_pipeline(GaussianFeatures(30), Ridge(alpha=0.1))
basis_plot(model, title='Ridge Regression') | present/mcc2/PythonDataScienceHandbook/05.06-Linear-Regression.ipynb | csaladenes/csaladenes.github.io | mit |
The $\alpha$ parameter is essentially a knob controlling the complexity of the resulting model.
In the limit $\alpha \to 0$, we recover the standard linear regression result; in the limit $\alpha \to \infty$, all model responses will be suppressed.
One advantage of ridge regression in particular is that it can be computed very efficiently—at hardly more computational cost than the original linear regression model.
Lasso regression ($L_1$ regularization)
Another very common type of regularization is known as lasso, and involves penalizing the sum of absolute values (1-norms) of regression coefficients:
$$
P = \alpha\sum_{n=1}^N |\theta_n|
$$
Though this is conceptually very similar to ridge regression, the results can differ surprisingly: for example, due to geometric reasons lasso regression tends to favor sparse models where possible: that is, it preferentially sets model coefficients to exactly zero.
We can see this behavior in duplicating the ridge regression figure, but using L1-normalized coefficients: | from sklearn.linear_model import Lasso
model = make_pipeline(GaussianFeatures(30), Lasso(alpha=0.001))
basis_plot(model, title='Lasso Regression') | present/mcc2/PythonDataScienceHandbook/05.06-Linear-Regression.ipynb | csaladenes/csaladenes.github.io | mit |
With the lasso regression penalty, the majority of the coefficients are exactly zero, with the functional behavior being modeled by a small subset of the available basis functions.
As with ridge regularization, the $\alpha$ parameter tunes the strength of the penalty, and should be determined via, for example, cross-validation (refer back to Hyperparameters and Model Validation for a discussion of this).
Example: Predicting Bicycle Traffic
As an example, let's take a look at whether we can predict the number of bicycle trips across Seattle's Fremont Bridge based on weather, season, and other factors.
We have seen this data already in Working With Time Series.
In this section, we will join the bike data with another dataset, and try to determine the extent to which weather and seasonal factors—temperature, precipitation, and daylight hours—affect the volume of bicycle traffic through this corridor.
Fortunately, the NOAA makes available their daily weather station data (I used station ID USW00024233) and we can easily use Pandas to join the two data sources.
We will perform a simple linear regression to relate weather and other information to bicycle counts, in order to estimate how a change in any one of these parameters affects the number of riders on a given day.
In particular, this is an example of how the tools of Scikit-Learn can be used in a statistical modeling framework, in which the parameters of the model are assumed to have interpretable meaning.
As discussed previously, this is not a standard approach within machine learning, but such interpretation is possible for some models.
Let's start by loading the two datasets, indexing by date: | !sudo apt-get update
!apt-get -y install curl
!curl -o FremontBridge.csv https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD
# !wget -o FremontBridge.csv "https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD"
import pandas as pd
counts = pd.read_csv('FremontBridge.csv', index_col='Date', parse_dates=True)
weather = pd.read_csv('data/BicycleWeather.csv', index_col='DATE', parse_dates=True) | present/mcc2/PythonDataScienceHandbook/05.06-Linear-Regression.ipynb | csaladenes/csaladenes.github.io | mit |
We also might suspect that the hours of daylight would affect how many people ride; let's use the standard astronomical calculation to add this information: | from datetime import datetime
def hours_of_daylight(date, axis=23.44, latitude=47.61):
"""Compute the hours of daylight for the given date"""
days = (date - datetime(2000, 12, 21)).days
m = (1. - np.tan(np.radians(latitude))
* np.tan(np.radians(axis) * np.cos(days * 2 * np.pi / 365.25)))
return 24. * np.degrees(np.arccos(1 - np.clip(m, 0, 2))) / 180.
daily['daylight_hrs'] = list(map(hours_of_daylight, daily.index))
daily[['daylight_hrs']].plot()
plt.ylim(8, 17) | present/mcc2/PythonDataScienceHandbook/05.06-Linear-Regression.ipynb | csaladenes/csaladenes.github.io | mit |
We can also add the average temperature and total precipitation to the data.
In addition to the inches of precipitation, let's add a flag that indicates whether a day is dry (has zero precipitation): | # temperatures are in 1/10 deg C; convert to C
weather['TMIN'] /= 10
weather['TMAX'] /= 10
weather['Temp (C)'] = 0.5 * (weather['TMIN'] + weather['TMAX'])
# precip is in 1/10 mm; convert to inches
weather['PRCP'] /= 254
weather['dry day'] = (weather['PRCP'] == 0).astype(int)
daily = daily.join(weather[['PRCP', 'Temp (C)', 'dry day']],rsuffix='0') | present/mcc2/PythonDataScienceHandbook/05.06-Linear-Regression.ipynb | csaladenes/csaladenes.github.io | mit |
It is evident that we have missed some key features, especially during the summer time.
Either our features are not complete (i.e., people decide whether to ride to work based on more than just these) or there are some nonlinear relationships that we have failed to take into account (e.g., perhaps people ride less at both high and low temperatures).
Nevertheless, our rough approximation is enough to give us some insights, and we can take a look at the coefficients of the linear model to estimate how much each feature contributes to the daily bicycle count: | params = pd.Series(model.coef_, index=X.columns)
params | present/mcc2/PythonDataScienceHandbook/05.06-Linear-Regression.ipynb | csaladenes/csaladenes.github.io | mit |
Partial Differential Equations
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r1/tutorials/non-ml/pdes.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r1/tutorials/non-ml/pdes.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Note: This is an archived TF1 notebook. These are configured
to run in TF2's
compatibility mode
but will run in TF1 as well. To use TF1 in Colab, use the
%tensorflow_version 1.x
magic.
TensorFlow isn't just for machine learning. Here you will use TensorFlow to simulate the behavior of a partial differential equation. You'll simulate the surface of a square pond as a few raindrops land on it.
Basic setup
A few imports you'll need. | #Import libraries for simulation
import tensorflow.compat.v1 as tf
import numpy as np
#Imports for visualization
import PIL.Image
from io import BytesIO
from IPython.display import clear_output, Image, display
| site/en/r1/tutorials/non-ml/pdes.ipynb | tensorflow/docs | apache-2.0 |
A function for displaying the state of the pond's surface as an image. | def DisplayArray(a, fmt='jpeg', rng=[0,1]):
"""Display an array as a picture."""
a = (a - rng[0])/float(rng[1] - rng[0])*255
a = np.uint8(np.clip(a, 0, 255))
f = BytesIO()
PIL.Image.fromarray(a).save(f, fmt)
clear_output(wait = True)
display(Image(data=f.getvalue())) | site/en/r1/tutorials/non-ml/pdes.ipynb | tensorflow/docs | apache-2.0 |
Here you start an interactive TensorFlow session for convenience in playing around. A regular session would work as well if you were doing this in an executable .py file. | sess = tf.InteractiveSession() | site/en/r1/tutorials/non-ml/pdes.ipynb | tensorflow/docs | apache-2.0 |
Computational convenience functions | def make_kernel(a):
"""Transform a 2D array into a convolution kernel"""
a = np.asarray(a)
a = a.reshape(list(a.shape) + [1,1])
return tf.constant(a, dtype=1)
def simple_conv(x, k):
"""A simplified 2D convolution operation"""
x = tf.expand_dims(tf.expand_dims(x, 0), -1)
y = tf.nn.depthwise_conv2d(x, k, [1, 1, 1, 1], padding='SAME')
return y[0, :, :, 0]
def laplace(x):
"""Compute the 2D laplacian of an array"""
laplace_k = make_kernel([[0.5, 1.0, 0.5],
[1.0, -6., 1.0],
[0.5, 1.0, 0.5]])
return simple_conv(x, laplace_k) | site/en/r1/tutorials/non-ml/pdes.ipynb | tensorflow/docs | apache-2.0 |
Define the PDE
Our pond is a perfect 500 x 500 square, as is the case for most ponds found in nature. | N = 500 | site/en/r1/tutorials/non-ml/pdes.ipynb | tensorflow/docs | apache-2.0 |
Here you create a pond and hit it with some rain drops. | # Initial Conditions -- some rain drops hit a pond
# Set everything to zero
u_init = np.zeros([N, N], dtype=np.float32)
ut_init = np.zeros([N, N], dtype=np.float32)
# Some rain drops hit a pond at random points
for n in range(40):
    a,b = np.random.randint(0, N, 2)
    u_init[a,b] = np.random.uniform()
DisplayArray(u_init, rng=[-0.1, 0.1]) | site/en/r1/tutorials/non-ml/pdes.ipynb | tensorflow/docs | apache-2.0 |
Now you specify the details of the differential equation. | # Parameters:
# eps -- time resolution
# damping -- wave damping
eps = tf.placeholder(tf.float32, shape=())
damping = tf.placeholder(tf.float32, shape=())
# Create variables for simulation state
U = tf.Variable(u_init)
Ut = tf.Variable(ut_init)
# Discretized PDE update rules
U_ = U + eps * Ut
Ut_ = Ut + eps * (laplace(U) - damping * Ut)
# Operation to update the state
step = tf.group(
U.assign(U_),
Ut.assign(Ut_)) | site/en/r1/tutorials/non-ml/pdes.ipynb | tensorflow/docs | apache-2.0 |
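For reference, these two update rules amount to an explicit Euler discretization of a damped wave equation (this reading is inferred from the code rather than stated in the notebook), with `U` holding $u$ and `Ut` holding $\partial u / \partial t$:

$$\frac{\partial^2 u}{\partial t^2} = \nabla^2 u - \text{damping}\,\frac{\partial u}{\partial t}$$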
Run the simulation
This is where it gets fun -- running time forward with a simple for loop. | # Initialize state to initial conditions
tf.global_variables_initializer().run()
# Run 1000 steps of PDE
for i in range(1000):
    # Step simulation
    step.run({eps: 0.03, damping: 0.04})
# Show final image
DisplayArray(U.eval(), rng=[-0.1, 0.1]) | site/en/r1/tutorials/non-ml/pdes.ipynb | tensorflow/docs | apache-2.0 |
Wait so.. the rows of a matrix $A$ are orthogonal iff $AA^T$ is diagonal? Hmm. Math.StackEx Link | np.isclose(np.eye(len(U)), U @ U.T)
np.isclose(np.eye(len(V)), V.T @ V) | FACLA/SVD-NMF-review.ipynb | WNoxchi/Kaukasos | mit |
Wait but that also gives True for $VV^T$. Hmmm.
2. Truncated SVD
Okay, so SVD is an exact decomposition of a matrix and allows us to pull out distinct topics from data (due to their orthonormality (orthogonality?)).
But doing so for a large data corpus is ... bad. Especially if most of the data's meaning / information relevant to us is captured by a small prominent subset. IE: prevalence of articles like a and the are likely poor indicators of any particular meaning in a piece of text since they're everywhere in English. Likewise for other types of data.
Hmm, so, if I understood correctly, the Σ/S/s/σ matrix is ordered by value max$\rightarrow$min.. but computing the SVD of a large dataset $A$ is exactly what we want to avoid using T-SVD. Okay so how?
$\rightarrow$Full SVD we're calculating the full dimension of topics -- but its handy to limit to the most important ones -- this is how SVD is used in compression.
Aha. This is where I was confused. Truncation is used with Randomization in R-SVD. The Truncated section was just introducing the concept. Got it.
So that's where, in R-SVD, we use a buffer in addition to the portion of the dataset we take for SVD.
And yay scikit-learn has R-SVD built in. | from sklearn import decomposition
# ofc this is just dummy data to test it works
datavectors = np.random.randint(-1000,1000,size=(10,50))
U,S,V = decomposition.randomized_svd(datavectors, n_components=5)
U.shape, S.shape, V.shape | FACLA/SVD-NMF-review.ipynb | WNoxchi/Kaukasos | mit |
The idea of T-SVD is that we want to compute an approximation to the range of $A$. The range of $A$ is the space covered by the column basis.
ie: Range(A) = {y: Ax = y}
that is: all $y$ you can achieve by multiplying $x$ with $A$.
Depending on your space, the bases are vectors that you can take linear combinations of to get any value in your space.
3. Details of Randomized SVD (Truncated)
Our goal is to have an algorithm to perform Truncated SVD using Randomized values from the dataset matrix. We want to use randomization to calculate the topics we're interested in, instead of calculating all of them.
Aha. So.. the way to do that, using randomization, is to have a special kind of randomization. Find a matrix $Q$ with some special properties that will allow us to pull a matrix that is a near match to our dataset matrix $A$ in the ways we want it to be. Ie: It'll have the same singular values, meaning the same importance-ordered topics.
Wow mathematics is really.. somethin.
That process:
Compute an approximation to the range of $A$. ie: we want $Q$ with $r$ orthonormal columns st:
$$A \approx QQ^TA$$
Construct $B = Q^TA$, which is small $(r \times n)$
Compute the SVD of $B$ by standard methods (fast since $B$ is smaller than $A$), $B = SΣV^T$
Since: $$A \approx QQ^TA = Q(SΣV^T)$$ if we set $U = QS$, then we have a low-rank approximation of $A \approx UΣV^T$.
-- okay so.. confusion here. What is $S$ and $Σ$? Because I see them elsewhere taken to mean the same thing on this subject, but all of a sudden they seem to be totally different things.
-- oh, so apparently $S$ here is actually something different. $Σ$ is what's been interchangeably referred to in Hellenic/Latin letters throughout the notebook.
NOTE that $A: m \times n$ while $Q: m \times r$, so $Q$ is generally a tall, skinny matrix and therefore much smaller & easier to compute with than $A$.
Also, because $S$ & $Q$ are both orthonormal, setting $U = QS$ makes $U$ orthonormal as well.
How do we find Q (in step 1)?
General Idea: we find this special $Q$, then we do SVD on this smaller matrix $Q^TA$, and we plug that back in to have our Truncated-SVD for $A$.
And HERE is where the Random part of Randomized SVD comes in! How do we find $Q$?:
We just take a bunch of random vectors $w_i$ and look at / evaluate the subspace formed by $Aw_i$. We form a matrix $W$ with the $w_i$'s as its columns. Then we take the QR Decomposition of $AW = QR$. Then the colunms of $Q$ form an orthonormal basis for $AW$, which is the range of $A$.
Basically a QR Decomposition exists for any matrix, and is an orthonormal matrix $\times$ an upper triangular matrix.
So basically: we take $AW$, $W$ is random, get the $QR$ -- and a property of the QR-Decomposition is that $Q$ forms an orthonormal basis for $AW$ -- and $AW$ gives the range of $A$.
Since $AW$ has far more rows than columns, it turns out in practice that these columns are approximately orthonormal. It's very unlikely you'll get linearly-dependent columns when you choose random values.
Aand apparently the QR-Decomp is v.foundational to Numerical Linear Algebra.
How do we choose r?
We chose $Q$ to have $r$ orthonormal columns, and $r$ gives us the dimension of $B$.
We choose $r$ to be the number of topics we want to retrieve $+$ some buffer.
See the lesson notebook and accompanying lecture time for an implementation of Randomized SVD. NOTE that Scikit-Learn's implementation is more powerful; the example there is for illustration purposes.
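To make steps 1-4 concrete, here is a minimal numpy sketch of the idea (my own illustration, not the lesson's implementation; Scikit-Learn's randomized_svd is the robust version to use in practice):

```python
import numpy as np

def randomized_svd_sketch(A, r, oversample=5):
    """Rough truncated SVD via a random range finder."""
    m, n = A.shape
    W = np.random.randn(n, r + oversample)    # random vectors w_i as columns
    Q, _ = np.linalg.qr(A @ W)                # orthonormal basis for range(AW), approx. range(A)
    B = Q.T @ A                               # small (r + oversample) x n matrix
    S, sigma, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ S                                 # lift the left singular vectors back up
    return U[:, :r], sigma[:r], Vt[:r]
```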
4. Non-negative Matrix Factorization
Wiki
NMF is a group of algorithms in multivariate analysis and linear algebra where a matrix $V$ is factorized into (usually) two matrices $W$ & $H$, with the property that all three matrices have no negative elements.
Lecture 2 40:32
The key thing in SVD is orthogonality -- basically everything is orthogonal to eachother -- the key idea in NMF is that nothing is negative. The lower-bound is zero-clamped.
NOTE your original dataset should be nonnegative if you use NMF, or else you won't be able to reconstruct it.
Idea
Rather than constraining our factors to be orthogonal, another idea would be to constrain them to be non-negative. NMF is a factorization of a non-negative dataset $V$: $$V=WH$$ into non-negative matrices $W$, $H$. Often positive factors will be more easily interpretable (and this is the reason behind NMF's popularity).
huh.. really now.?..
For example if your dataset is a matrix of faces $V$, where each columns holds a vectorized face, then $W$ would be a matrix of column facial features, and $H$ a matrix of column relative importance of features in each image.
Applications of NMF / Sklearn
NMF is a 'difficult' problem because it is non-convex and NP-hard
NMF looks smth like this in schematic form:
Documents Topics Topic Importance Indicators
W --------- --- -----------------
o | | | | | ||| | | | | | | | | |
r | | | | | ≈ ||| -----------------
d | | | | | |||
s --------- ---
V W H | # workflow w NMF is something like this
V = np.random.randint(0, 20, size=(10,10))
m,n = V.shape
d = 5 # num_topics
clsf = decomposition.NMF(n_components=d, random_state=1)
W1 = clsf.fit_transform(V)
H1 = clsf.components_ | FACLA/SVD-NMF-review.ipynb | WNoxchi/Kaukasos | mit |
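As a quick sanity check on this sketch (an addition, not part of the original notebook), the product of the two factors should approximately reconstruct the non-negative matrix V:

```python
# Compare the NMF reconstruction W1 @ H1 against the original matrix V
V_approx = W1 @ H1
print("Frobenius reconstruction error:", np.linalg.norm(V - V_approx))
```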
We can see the essential features of Python used:
Python does not declare the type of the variables;
There is nothing special about lists or arrays as variables when passed as arguments;
To define functions the keyword is def;
To define the start of a block (the body of a function, or a loop, or a conditional) a colon : is used;
To define the block itself, indentation is used. The block ends when the code indentation ends;
Comments are either enclosed in quotes " as for the docstring, or using #;
The return value(s) from a function use the keyword return;
Accessing arrays uses square brackets;
The function range produces a range of integers, usually used to loop over.
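A tiny, self-contained function (added purely as an illustration, not taken from the notebook) shows most of these points at once:

```python
def first_n_squares(n):
    """Return a list containing the first n square numbers."""
    squares = []                 # a plain list; no type declarations needed
    for i in range(n):           # range produces the integers 0 .. n-1
        squares.append(i * i)    # indentation defines the loop body
    return squares               # values are handed back with 'return'

print(first_n_squares(5))        # [0, 1, 4, 9, 16]
```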
Note: there are in-built Python functions to sort lists which should be used in general: | unsorted = [2, 4, 6, 0, 1, 3, 5]
print(sorted(unsorted)) | examples.ipynb | IanHawke/msc-or-week0 | mit |
This gets rid of the need for a temporary variable.
Exercise
Here is a pseudo-code for the counting sort algorithm:
Start with an unsorted list list of length n.
Find the minimum value min_value and maximum value max_value of the list.
Create a list counts that will count the number of entries in the list with value between min_value and max_value inclusive, and set its entries to zero
For each element i in list from the first to the last:
Add one to the counts list entry whose index matches the value of this element
For each element i in the counts list from the first to the last:
Set the next j entries of list equal to i
After this loop, the list list is now sorted.
Translate this into Python. Note that the in-built Python min and max functions can be used on lists. To create a list of the correct size you can use
```python
counts = list(range(min_value, max_value+1))
```
but this list will not contain zeros so must be reset. | def countingsort(unsorted):
"""
Sorts an array using counting sort algorithm
Paramters
---------
unsorted : list
The unsorted list
Returns
sorted : list
The sorted list (in place)
"""
# Allocate the counts array
min_value = min(unsorted)
max_value = max(unsorted)
# This creates a list of the right length, but the entries are not zero, so reset
counts = list(range(min_value, max_value+1))
for i in range(len(counts)):
counts[i] = 0
# Count the values
last = len(unsorted)
for i in range(last):
counts[unsorted[i]] += 1
# Write the items back into the list array
next_index = 0
for i in range(min_value, max_value+1):
for j in range(counts[i]):
unsorted[next_index] = i
next_index += 1
return unsorted
unsorted = [2, 4, 6, 0, 1, 3, 5]
print(countingsort(unsorted)) | examples.ipynb | IanHawke/msc-or-week0 | mit |
Simplex Method
For the linear programming problem
$$
\begin{aligned}
\max x_1 + x_2 &= z \
2 x_1 + x_2 & \le 4 \
x_1 + 2 x_2 & \le 3
\end{aligned}
$$
where $x_1, x_2 \ge 0$, one standard approach is the simplex method.
Introducing slack variables $s_1, s_2 \ge 0$ the standard tableau form becomes
$$
\begin{pmatrix}
1 & -1 & -1 & 0 & 0 \
0 & 2 & 1 & 1 & 0 \
0 & 1 & 2 & 0 & 1
\end{pmatrix}
\begin{pmatrix}
z & x_1 & x_2 & s_1 & s_2
\end{pmatrix}^T = \begin{pmatrix} 0 \ 4 \ 3 \end{pmatrix}.
$$
The simplex method performs row operations to remove all negative numbers from the top row, at each stage choosing the smallest (in magnitude) pivot.
Assume the tableau is given in this standard form. We can use numpy to implement the problem. | import numpy
tableau = numpy.array([ [1, -1, -1, 0, 0, 0],
[0, 2, 1, 1, 0, 4],
[0, 1, 2, 0, 1, 3] ], dtype=numpy.float64)
print(tableau) | examples.ipynb | IanHawke/msc-or-week0 | mit |
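To make the row-operation idea concrete, here is a rough sketch of how the tableau above could be driven to optimality (my own illustration using the common most-negative-column rule, which is not necessarily the exact pivot-selection rule described above):

```python
import numpy as np  # alias for the `numpy` already imported above

def pivot_until_optimal(tableau):
    """Illustrative simplex loop for a tableau laid out as [z | x1 x2 s1 s2 | b]."""
    while (tableau[0, 1:-1] < -1e-12).any():
        col = 1 + np.argmin(tableau[0, 1:-1])            # entering variable
        ratios = [tableau[i, -1] / tableau[i, col] if tableau[i, col] > 0 else np.inf
                  for i in range(1, tableau.shape[0])]   # minimum-ratio test
        row = 1 + int(np.argmin(ratios))                 # leaving variable
        tableau[row, :] /= tableau[row, col]             # normalise the pivot row
        for i in range(tableau.shape[0]):
            if i != row:                                 # eliminate the pivot column elsewhere
                tableau[i, :] -= tableau[i, col] * tableau[row, :]
    return tableau

print(pivot_until_optimal(tableau))  # reaches z = 7/3 at x1 = 5/3, x2 = 2/3
```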
Lets now view the first few lines of the data table. The rows of the data table are each of the Nobel prizes awarded and the columns are the information about who won the prize.
We have put the data into a pandas DataFrame we can now use all the functions associated with DataFrames. A useful function is .head(), this prints out the first few lines of the data table. | data.head(5) # Displaying some of the data so you can see what form it takes in the DataFrame | Example-Analysis.ipynb | lhcb/opendata-project | gpl-2.0 |
Plotting a histogram
Lets learn how to plot histograms. We will plot the number of prizes awarded per year. Nobel prizes can be awarded for up to three people per category. As each winner is recorded as an individual entry the histogram will tell us if there has been a trend of increasing or decreasing multiple prize winners in one year.
However before we plot the histogram we should find information out about the data so that we can check the range of the data we want to plot. | # print the earliest year in the data
print(data.Year.min())
# print the latest year in the data
print(data.Year.max()) | Example-Analysis.ipynb | lhcb/opendata-project | gpl-2.0 |
The data set also contains entries for economics. Economics was not one of the original Nobel prizes and has only been given out since 1969. If we want to do a proper comparison we will need to filter this data out. We can do this with a pandas query.
We can then check there are no economics prizes left by finding the length of the data after applying a query to only select economics prizes. This will be used in the main analysis to count the number of $B^+$ and $B^-$ mesons. | # filter out the Economics prizes from the data
data_without_economics = data.query("Category != 'economics'")
print('Number of economics prizes in "data_without_economics":')
print(len(data_without_economics.query("Category == 'economics'"))) | Example-Analysis.ipynb | lhcb/opendata-project | gpl-2.0 |
We can now plot the histogram over a sensible range using the hist function from matplotlib. You will use this throughout the main analysis. | # plot the histogram of number of winners against year
H_WinnersPerYear = data_without_economics.Year.hist(bins=11, range=[1900, 2010])
xlabel('Year')
ylabel('Number of Winners') | Example-Analysis.ipynb | lhcb/opendata-project | gpl-2.0 |
From the histogram we can see that there has been a recent trend of more multiple prize winners in the same year. However there is a drop in the range 1940 - 1950, this was due to prizes being awarded intermittently during World War II. To isolate this gap we can change the bin size (by changing the number of bins variable) to contain this range. Try changing the slider below (you will have to click in code box and press shift+enter to activate it) and see how the number of bins affects the look of the histogram. | def plot_hist(bins):
    changingBins = data_without_economics.Year.hist(bins=bins, range=[1900,2010])
    xlabel('Year')
    ylabel('Number of People Given Prizes')
    # bin width in years: the plotted range spans 110 years (1900 to 2010)
    BinSize = round(110/bins, 2)
    print(BinSize)
interact(plot_hist, bins=[2, 50, 1]) | Example-Analysis.ipynb | lhcb/opendata-project | gpl-2.0 |
As you can see by varying the slider - changing the bin size really does change how the data looks! There is discussion on what is the appropiate bin size to use in the main notebook.
Preselections
We now want to select our data. This is the same process as with filtering out economics prizes before but we'll go into more detail. This time lets filter out everything except Physics. We could do so by building a new dataset from the old one with loops and if statements, but the inbuilt pandas function .query() provides a quicker way. By passing a conditional statement, formatted into a string, we can create a new dataframe which is filled with only data that made the conditional statement true. A few examples are given below but only filtering out all but physics is used. | modernPhysics = "(Category == 'physics' && Year > 2005)" # Integer values don't go inside quotes
physicsOnly = "(Category == 'physics')"
# apply the physicsOnly query
physicsOnlyDataFrame = data.query(physicsOnly) | Example-Analysis.ipynb | lhcb/opendata-project | gpl-2.0 |
Lets check the new DataFrames to see if this has worked! | physicsOnlyDataFrame.head(5) | Example-Analysis.ipynb | lhcb/opendata-project | gpl-2.0 |
Brilliant! You will find this technique useful to select kaons in the main analysis. Lets now plot the number of winners per year just for physics. | H_PhysicsWinnersPerYear = physicsOnlyDataFrame.Year.hist(bins=15, range=[1920,2010])
xlabel('Year') #Plot an x label
ylabel('Number of Winners in Physics') #Plot a y label | Example-Analysis.ipynb | lhcb/opendata-project | gpl-2.0 |
We have now successfully plotted the histogram of just the physics prizes after applying our pre-selection.
Calculations, Scatter Plots and 2D Histogram
Adding New Data to a Data Frame
You will find this section useful for when it comes to creating a Dalitz plot in the particle physics analysis.
We want to see what ages people have been awarded Nobel prizes and measure the spread in the ages.
Then we'll consider if over time people have been getting awarded Nobel prizes earlier or later in their life.
First we'll need to calculate the age or the winners at the time the prize was awarded based on the Year and Birthdate columns. We create an AgeAwarded variable and add this to the data. | # Create new variable in the dataframe
physicsOnlyDataFrame['AgeAwarded'] = physicsOnlyDataFrame.Year - physicsOnlyDataFrame.BirthYear
physicsOnlyDataFrame.head(5) | Example-Analysis.ipynb | lhcb/opendata-project | gpl-2.0 |
Lets make a plot of the age of the winners at the time they were awarded the prize | # plot a histogram of the laureates ages
H_AgeAwarded = physicsOnlyDataFrame.AgeAwarded.hist(bins=15) | Example-Analysis.ipynb | lhcb/opendata-project | gpl-2.0 |
Making Calculations
Lets calculate a measure of the spread in ages of the laureates. We will calculate the standard deviation of the distribution. | # count number of entries
NumEntries = len(physicsOnlyDataFrame)
# calculate square of ages
physicsOnlyDataFrame['AgeAwardedSquared'] = physicsOnlyDataFrame.AgeAwarded**2
# calculate sum of square of ages, and sum of ages
AgeSqSum = physicsOnlyDataFrame['AgeAwardedSquared'].sum()
AgeSum = physicsOnlyDataFrame['AgeAwarded'].sum()
# calculate std and print it
std = sqrt((AgeSqSum-(AgeSum**2/NumEntries)) / NumEntries)
print(std) | Example-Analysis.ipynb | lhcb/opendata-project | gpl-2.0 |
There is actually a function that would calculate the rms for you, but we wanted to teach you how to manipulate data to make calculations! | # calculate standard deviation (rms) of distribution
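# Note: pandas' .std() uses the sample standard deviation (ddof=1) by default,
# so its value can differ slightly from the population formula computed above.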
print(physicsOnlyDataFrame['AgeAwarded'].std()) | Example-Analysis.ipynb | lhcb/opendata-project | gpl-2.0 |
Scatter Plot
Now lets plot a scatter plot of Age vs Date awarded | scatter(physicsOnlyDataFrame['Year'], physicsOnlyDataFrame['AgeAwarded'])
plt.xlim(1900, 2010) # change the x axis range
plt.ylim(20, 100) # change the y axis range
xlabel('Year Awarded')
ylabel('Age Awarded') | Example-Analysis.ipynb | lhcb/opendata-project | gpl-2.0 |
2D Histogram
We can also plot a 2D histogram and bin the results. The number of entries in the data set is relatively low so we will need to use reasonably large bins to have acceptable statistics in each bin. We have given you the ability to change the number of bins so you can see how the plot changes. Note that the number of total bins is the value of the slider squared. This is because the value of bins given in the hist2d function is the number of bins on one axis. | hist2d(physicsOnlyDataFrame.Year, physicsOnlyDataFrame.AgeAwarded, bins=10)
colorbar() # Add a colour legend
xlabel('Year Awarded')
ylabel('Age Awarded') | Example-Analysis.ipynb | lhcb/opendata-project | gpl-2.0 |
Alternatively you can use interact to add a slider to vary the number of bins | def plot_histogram(bins):
    hist2d(physicsOnlyDataFrame['Year'].values,physicsOnlyDataFrame['AgeAwarded'].values, bins=bins)
    colorbar() #Set a colour legend
    xlabel('Year Awarded')
    ylabel('Age Awarded')
interact(plot_histogram, bins=[1, 20, 1]) # Creates the slider | Example-Analysis.ipynb | lhcb/opendata-project | gpl-2.0 |
Playing with the slider will show you the effect of changing the bin size in a 2D histogram. The darker bins in the top right corner show that there does appear to be a trend of Nobel prizes being won at an older age in more recent years.
Manipulating 2D histograms
This section is advanced and only required for the final section of the main analysis.
As the main analysis requires the calculation of an asymmetry, we now provide a contrived example of how to do this using the Nobel prize dataset; we recommend only reading this section after reaching the "Searching for local matter anti-matter differences" section of the main analysis.
First calculate the number of entries in each bin of the 2D histogram and store these values in physics_counts as a 2D array.
xedges and yedges are 1D arrays containing the values of the bin edges along each axis. | physics_counts, xedges, yedges, Image = hist2d(
physicsOnlyDataFrame.Year, physicsOnlyDataFrame.AgeAwarded,
bins=10, range=[(1900, 2010), (20, 100)]
)
colorbar() # Add a colour legend
xlabel('Year Awarded')
ylabel('Age Awarded') | Example-Analysis.ipynb | lhcb/opendata-project | gpl-2.0 |
Repeat the procedure used for physics to get the 2D histgram of age against year awarded for chemistry nobel prizes. | # Make the "chemistryOnlyDataFrame" dataset
chemistryOnlyDataFrame = data.query("(Category == 'chemistry')")
chemistryOnlyDataFrame['AgeAwarded'] = chemistryOnlyDataFrame.Year - chemistryOnlyDataFrame.BirthYear
# Plot the histogram
chemistry_counts, xedges, yedges, Image = hist2d(
chemistryOnlyDataFrame.Year, chemistryOnlyDataFrame.AgeAwarded,
bins=10, range=[(1900, 2010), (20, 100)]
)
colorbar() # Add a colour legend
xlabel('Year Awarded')
ylabel('Age Awarded') | Example-Analysis.ipynb | lhcb/opendata-project | gpl-2.0 |
Subtract the chemistry_counts from the physics_counts and normalise by their sum. This is known as an asymmetry. | counts = (physics_counts - chemistry_counts) / (physics_counts + chemistry_counts) | Example-Analysis.ipynb | lhcb/opendata-project | gpl-2.0 |
Where there are no nobel prize winners for either subject counts will contain an error value (nan) as the number was divided by zero. Here we replace these error values with 0. | counts[np.isnan(counts)] = 0 | Example-Analysis.ipynb | lhcb/opendata-project | gpl-2.0 |
Finally plot the asymmetry using the pcolor function. As positive and negative values each have a different meaning we use the seismic colormap, see here for a full list of all available colormaps. | pcolor(xedges, yedges, counts, cmap='seismic')
colorbar() | Example-Analysis.ipynb | lhcb/opendata-project | gpl-2.0 |
Now, this program can be executed as follows: | DataStructureVisualization(BinarySearchTree).run()
import io
import base64
from IPython.display import HTML
video = io.open('../res/bst.mp4', 'r+b').read()
encoded = base64.b64encode(video)
HTML(data='''<video alt="test" width="500" height="350" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded.decode('ascii'))) | doc/OpenAnalysis/05 - Data Structures.ipynb | OpenWeavers/openanalysis | gpl-3.0 |
Reading Data | #Names of all of the columns
names = [
'sep_length'
, 'sep_width'
, 'petal_length'
, 'petal_width'
, 'species'
]
#Import dataset
data = pd.read_csv('iris.data', sep = ',', header = None, names = names)
data.head(10)
data.shape | Clustering/Iris/Iris.ipynb | neeasthana/ML-SQL | gpl-3.0 |
Separate Data | #Select Predictor columns
X = data.ix[:,:-1]
#Scale X so that all columns have the same mean and variance
X_scaled = preprocessing.scale(X)
#Select target column
y = data['species']
y.value_counts() | Clustering/Iris/Iris.ipynb | neeasthana/ML-SQL | gpl-3.0 |
Scatter Plot Matrix | # Visualize dataset with scatterplot matrix
%matplotlib inline
g = sns.PairGrid(data, hue="species")
g.map_diag(plt.hist)
g.map_offdiag(plt.scatter) | Clustering/Iris/Iris.ipynb | neeasthana/ML-SQL | gpl-3.0 |
K Means (3 clusters) | #train a k-nearest neighbor algorithm
fit = KMeans(n_clusters=3).fit(X_scaled)
fit.labels_
#remake labels so that they properly match up with the classes
labels = fit.labels_[:]
for index,val in enumerate(labels):
if val == 1:
labels[index] = 1
elif val == 2:
labels[index] = 3
else:
labels[index] = 2
labels
conf_mat = np.zeros((3,3))
true = np.array([0]*50 + [1]*50 + [2]*50)
for i,val in enumerate(true):
conf_mat[val,labels[i]-1] += 1
#true vs. predicted
print(pd.DataFrame(conf_mat)) | Clustering/Iris/Iris.ipynb | neeasthana/ML-SQL | gpl-3.0 |
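To summarise the confusion matrix in a single number (a small sketch using the conf_mat array built above), the clustering accuracy is the fraction of samples that fall on the diagonal:
# Fraction of correctly assigned samples
accuracy = np.trace(conf_mat) / conf_mat.sum()
print(accuracy)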
Having the data in a proper dataframe, we are now in a position to create the features and response values. | # Calculate log-returns and label responses:
# 'direction' equals 1 if stock closed above
# previous day and 0 if it fell.
today = np.log(shsPr / shsPr.shift(1))
direction = np.where(today >= 0, 1, 0)
# Convert 'direction' to dataframe
direction = pd.DataFrame(direction, index=today.index, columns=today.columns)
# Lag1, 2: t-1 and t-2 returns; excl. smi (in last column)
Lag1 = np.log(shsPr.iloc[:, :-1].shift(1) / shsPr.iloc[:, :-1].shift(2))
Lag2 = np.log(shsPr.iloc[:, :-1].shift(2) / shsPr.iloc[:, :-1].shift(3))
# Previous day return for SMI index
smi = np.log(shsPr.iloc[:, -1].shift(1) / shsPr.iloc[:, -1].shift(2)) | 0207_kNN.ipynb | bMzi/ML_in_Finance | mit |
KNN Algorithm Applied
Now comes the difficult part. What we want to achieve is to run the KNN algorithm for every stock and for different hyperparameter $k$ and see how it performs. For this we do the following steps:
Create a feature matrix X containing Lag1, Lag2 and SMI data for share i
Create a response vector y with binary direction values
Split data to training (before 2016-06-30) and test set (after 2016-06-30)
Run KNN for different values of $k$ (loop)
Write test score for given $k$ to matrix scr
Once we have run through all $k$'s we proceed with step 1. with share i+1
This means we need two loops. The first corresponds to the share (e.g. ABB, Adecco, etc.), the second runs the KNN algorithm for different values of $k$.
The reason for this approach is that we are interested in finding any pattern/structure that would provide a successful trading strategy. There is obviously no free lunch. Predicting share price direction is by no means an easy task and we must be well aware that we are in for a difficult job here. If it were simple, none of us would be sitting here but would be running our own funds. But nonetheless, let us see how KNN performs and how homogeneous (or heterogeneous) the results are.
As usual our first step is to prepare the ground by loading the necessary package and defining some auxiliary variables. The KNN function we will be using is available through the sklearn (short for Scikit-learn) package. We only load the neighbors sublibrary, which contains the needed KNN function called KNeighborsClassifier(). KNN is applied with the default distance metric: Euclidean distance (Minkowski's distance with $m=2$). If we prefer another distance metric we have to specify it (see documentation). | # Import relevant functions
from sklearn import neighbors
# k = {1, 3, ..., 200}
k = np.arange(1, 200, 2)
# Array to store results in. Dimension is [k x m]
# with m=20 for the 20 companies (excl. SMI)
scr = np.empty(shape=(len(k), len(shsPr.columns)-1))
for i in range(len(shsPr.columns)-1):
# 1) Create matrix with feature values of stock i
X = pd.concat([Lag1.iloc[:, i], Lag2.iloc[:, i], smi], axis=1)
X = X[3:] # Drop first three rows with NaN (due to lag)
# 2) Remove first three rows of response dataframe
# to have equal no. of rows for features and response
y = direction.iloc[:, i]
y = y[3:]
# 3) Split data into training set...
X_train = X[:'2016-06-30']
y_train = y[:'2016-06-30']
# ...and test set.
X_test = X['2016-07-01':]
y_test = y['2016-07-01':]
# Convert responses to 1xN array (with .ravel() function)
y_train = y_train.values.ravel()
y_test = y_test.values.ravel()
for j in range(len(k)):
# 4) Run KNN
# Instantiate KNN class
knn = neighbors.KNeighborsClassifier(n_neighbors=k[j])
# Fit KNN classifier using training set
knn = knn.fit(X_train, y_train)
# 5) Extract test score for k[j]
scr[j, i] = knn.score(X_test, y_test)
# Convert data to pandas dataframe
tickers = shsPr.columns
scr = pd.DataFrame(scr, index=k, columns=tickers[:-1])
scr.head() | 0207_kNN.ipynb | bMzi/ML_in_Finance | mit |
Results & Analysis
Now let's see the results in an overview. | scr.describe()
scr.max().nlargest(5) | 0207_kNN.ipynb | bMzi/ML_in_Finance | mit |
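It is also informative to see at which value of $k$ each stock attains its maximum test score (a minimal sketch based on the scr dataframe defined above):
# Value of k at which each stock reaches its best test score
scr.idxmax().head()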
Following finance theory, returns should be distributed symmetrically. Thus the simplest guess would be to expect a share price to increase on 50% of the days and to decrease on the remaining 50%. Similar to guessing a coin flip, if we would guess an 'up' movement for every day, we obviously would - in the long run - be correct 50% of the times. This would make for a score of 50%.
Looking in that light at the above summary, we see some very interesting results. For 10 out of 20 stocks KNN produces test scores of > 50% even at the 25% quantile. Let's plot the ones with the highest test-scores (ABBN, ZURN, NOVN, SIK) to see at what value of $k$ the best test-score is achieved. | nms = ['ABBN', 'ZURN', 'NOVN', 'SIK']
plt.figure(figsize=(12, 8))
for col in nms:
scr[col].plot(legend=True)
plt.axhline(0.50, c='k', ls='--'); | 0207_kNN.ipynb | bMzi/ML_in_Finance | mit |
For Zurich the peak is early (max. score around $k=60$) while the others peak later, i.e. with higher values of $k$. Furthermore, it seems interesting that for $k > 40$ test scores remained (barely) above 50%. If this is indeed a pattern we would have found a trading strategy, wouldn't we?
To further assess our results we look into KNN's prediction of ABB's stock movements. For this we rerun our KNN classifier algorithm for ABBN as before. | # 1) Create matrix with feature values of stock i
X = pd.concat([Lag1['ABBN'], Lag2['ABBN'], smi], axis=1)
X = X[3:] # Drop first three rows with NaN (due to lag)
# 2) Remove first three rows of response dataframe
# to have equal no. of rows for features and response
y = direction['ABBN']
y = y[3:]
# 3) Split data into training set...
X_train = X[:'2016-06-30']
y_train = y[:'2016-06-30']
# ...and test set.
X_test = X['2016-07-01':]
y_test = y['2016-07-01':]
# Convert responses to 1xN array (with .ravel() function)
y_train = y_train.values.ravel()
y_test = y_test.values.ravel() | 0207_kNN.ipynb | bMzi/ML_in_Finance | mit |
For ABBN the maximum score is reached where $k=145$. You can check this with the scr['ABBN'].idxmax() command, which provides the index of the maximum value of the selected column. In our case, the index is equivalent to the value of $k$. Thus we run KNN with $k=145$. | scr['ABBN'].idxmax()
# 4) Run KNN
# Instantiate KNN class for ABBN with k=145
knn = neighbors.KNeighborsClassifier(n_neighbors=145)
# Fit KNN classifier using training set
knn = knn.fit(X_train, y_train)
# 5) Extract test score for ABB
scr_ABBN = knn.score(X_test, y_test)
scr_ABBN | 0207_kNN.ipynb | bMzi/ML_in_Finance | mit |
The score of 59.68% is the very same as what we have seen in the above summary statistic. Nothing new so far. (Recall that the score is the proportion of correctly predicted outcomes.)
However, the alert reader should by now raise some questions regarding our assumption that 50% of the returns should have been positive. In the long run, this might be true. But our training sample contained only 1'017 records and of these 534 were positive. | # Percentage of 'up' days in training set
y_train.sum() / y_train.size | 0207_kNN.ipynb | bMzi/ML_in_Finance | mit |
Therefore, if we would guess 'up' for every day of our test set and given the distribution of classes in the test set is exactly as in our training set, then we would predict the correct movement in 52.51% of the cases. So in that light, the predictive power of our KNN algorithm has to be put in perspective to the 52.51%.
In summary, our KNN algorithm has a score of 59.68%. Our best guess (based on the training set) would yield a score of 52.51%. This still displays that overall our KNN algorithm outperforms our best guess. Nonetheless, the margin is smaller than initially thought.
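To make this benchmark concrete (a small sketch assuming the ABBN y_test array defined above), the share of 'up' days in the test set can be computed directly; always guessing 'up' would score exactly this fraction:
# Percentage of 'up' days in the test set
y_test.sum() / y_test.size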
Confusion Matrix
There are more tools to assess the accuracy of an algorithm. We postpone the discussion of these tools to a later chapter and at this stage restrict ourselves to the discussion of a tool called "confusion matrix".
A confusion matrix is a convenient way of displaying how our classifier performs. In binary classification (with e.g. response $y \in {0, 1}$) there are four prediction categories possible (Ting (2011)):
True positive: True response value is 1, predicted value is 1 ("hit")
True negative: True response value is 0, predicted value is 0 ("correct rejection")
False positive: True response value is 0, predicted value is 1 ("False alarm", Type 1 error)
False negative: True response value is 1, predicted value is 0 ("Miss", Type 2 error)
This information helps us to understand how our (KNN) algorithm performed. There are two different ways of arranging a confusion matrix. James et al. (2013) follow the convention that column labels indicate the true class label and rows the predicted response class. Others have it transposed such that column labels indicate predicted classes and row labels show true values. We will use the latter approach as it is more common.
<img src="Graphics/0207_ConfusionMatrixExplained.png" alt="ConfusionMatrixExplained" style="width: 800px;"/>
To run this in Python, we first predict the response value for each data entry in our test matrix X_test. Then we arrange the data in a suitable manner. | # Predict 'up' (=1) or 'down' (=0) for test set
pred = knn.predict(X_test)
# Store data in DataFrame
cfm = pd.DataFrame({'True direction': y_test,
'Predicted direction': pred})
cfm.replace(to_replace={0:'Down', 1:'Up'}, inplace=True)
# Arrange data to confusion matrix
print(cfm.groupby(['Predicted direction','True direction']) \
.size().unstack('Predicted direction')) | 0207_kNN.ipynb | bMzi/ML_in_Finance | mit |
As mentioned before, rows represent the true outcome and columns show what class KNN predicted. In 31 cases, the test set's true response was 'down' (in our case represented by 0) and KNN correctly predicted 'down'. 120 times KNN was correct in predicting an 'up' (=1) movement. 19 returns in the test set were positive but KNN predicted a negative return. And in 83 out of 253 cases KNN predicted an 'up' movement whereas in reality the stock price decreased. The KNN score of 59.68% for ABB is the sum of true positive and negative (31 + 120) in relation to the total number of predictions (253 = 31 + 19 + 83 + 120). The error rate is 1 - score or (19 + 83)/253.
Class-specific performance is also helpful to better understand results. The related terms are sensitivity and specifity. In the above case, sensitivity is the percentage of true 'up' movements that are identified. A good 86.3% (= 120 / (19 + 120)). The specifity is the percentage of 'down' movements that are correctly identified, here a poor 27.2% (= 31 / (31 + 83)). More on this in the next chapter.
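Using the counts reported above, the two class-specific rates can be computed directly (a small sketch based on the numbers quoted from the confusion matrix):
# Class-specific performance from the confusion matrix counts
sensitivity = 120. / (120 + 19)  # share of true 'up' days correctly identified (~86.3%)
specificity = 31. / (31 + 83)    # share of true 'down' days correctly identified (~27.2%)
sensitivity, specificity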
Because confusion matrices are important to analyze results, Scikit-learn has its own command to generate it. It is part of the metrics sublibrary. The difficulty is that in contrast to above (manually generated) table, the function's output provides no labels. Therefore one must be sure to know which value are where. Here's the code to generate the confusion matrix. | from sklearn.metrics import confusion_matrix
# Confusion matrix
confusion_matrix(y_test, pred) | 0207_kNN.ipynb | bMzi/ML_in_Finance | mit |
In general it is tremendously helpful to visualize results. At the time of writing in February 2022, Anaconda shipped with Sklearn version 0.24.2. This Sklearn version uses a confusion matrix-plotting function that is deprecated in version 1.0. Therefore, be aware that the plotting function below only works for versions < 1.0. See here for details. | from sklearn.metrics import plot_confusion_matrix
# To plot the normalized figures, add normalize = 'true', 'pred', or 'all'
plot_confusion_matrix(estimator = knn,
X = X_test, y_true = y_test,
display_labels = ['Down', 'Up'],
values_format = '.0f');
# To plot the normalized figures, add normalize = 'true', 'pred', or 'all'
# This time with different color scheme
plot_confusion_matrix(estimator = knn,
X = X_test, y_true = y_test,
cmap = plt.cm.Blues,
normalize = 'all',
display_labels = ['Down', 'Up'],
values_format = '.2f'); | 0207_kNN.ipynb | bMzi/ML_in_Finance | mit |
Further Resources
In writing this notebook, many resources were consulted. For internet resources the links are provided within the text flow above and will therefore not be listed again. Beyond these links, the following resources were consulted and are recommended as further reading on the discussed topics:
Batista, Gustavo, and Diego Furtado Silva, 2009, How k-nearest neighbor parameters affect its performance, in Argentine Symposium on Artificial Intelligence, 1–12, sn.
Fortmann-Roe, Scott, 2012, Understanding the Bias-Variance Tradeoff from website, http://scott.fortmann-roe.com/docs/BiasVariance.html, 08/15/17.
Guggenbuehler, Jan P., 2015, Predicting net new money using machine learning algorithms and newspaper articles, Technical report, University of Zurich, Zurich.
James, Gareth, Daniela Witten, Trevor Hastie, and Robert Tibshirani, 2013, An Introduction to Statistical Learning: With Applications in R (Springer Science & Business Media, New York, NY).
Müller, Andreas C., and Sarah Guido, 2017, Introduction to Machine Learning with Python (O’Reilly Media, Sebastopol, CA).
Russell, Stuart, and Peter Norvig, 2009, Artificial Intelligence: A Modern Approach (Prentice Hall Press, Upper Saddle River, NJ).
Ting, Kai Ming, 2011, Confusion matrix, in Claude Sammut, and Geoffrey I. Webb, eds., Encyclopedia of Machine Learning (Springer Science & Business Media, New York, NY). | import sklearn
sklearn.show_versions() | 0207_kNN.ipynb | bMzi/ML_in_Finance | mit |
Part 2 | import matplotlib.pyplot as plt
%matplotlib inline | BikeShareStep2.ipynb | HeyIamJames/bikeshare | mit |
Plot the daily temperature over the course of the year. (This should probably be a line chart.) Create a bar chart that shows the average temperature and humidity by month. | plt.plot(weather['months'], weather['temp'])
plt.xlabel("This is just an x-axis")
plt.ylabel("This is just a y-axis")
plt.show()
x = weather.groupby('months').agg({"humidity":np.mean})
plt.bar([n for n in range(1, 13)], x['humidity'])
plt.title("weather and humidity by months")
plt.show() | BikeShareStep2.ipynb | HeyIamJames/bikeshare | mit |
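The cell above only covers the humidity half of the task; the average temperature by month can be plotted with the same groupby pattern (a sketch assuming the same weather dataframe and column names):
t = weather.groupby('months').agg({"temp": np.mean})
plt.bar([n for n in range(1, 13)], t['temp'])
plt.title("average temperature by month")
plt.show()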
Use a scatterplot to show how the daily rental volume varies with temperature. Use a different series (with different colors) for each season. | xs = range(10)
plt.scatter(xs, 5 * np.random.rand(10) + xs, color='r', marker='*', label='series1')
plt.scatter(xs, 5 * np.random.rand(10) + xs, color='g', marker='o', label='series2')
plt.title("A scatterplot with two series")
plt.legend(loc=9)
plt.show()
w = weather[['season_desc', 'temp', 'total_riders']]
fall = w.loc[w['season_desc'] == 'Fall']
winter = w.loc[w['season_desc'] == 'Winter']
spring = w.loc[w['season_desc'] == 'Spring']
summer = w.loc[w['season_desc'] == 'Summer']
plt.scatter(fall['temp'], fall['total_riders'], color='orange', marker='^', label='fall', s=100, alpha=.41)
plt.scatter(winter['temp'], winter['total_riders'], color='blue', marker='*', label='winter', s=100, alpha=.41)
plt.scatter(spring['temp'], spring['total_riders'], color='purple', marker='d', label='spring', s=100, alpha=.41)
plt.scatter(summer['temp'], summer['total_riders'], color='red', marker='o', label='summer', s=100, alpha=.41)
plt.legend(loc='lower right')
plt.xlabel('temperature')
plt.ylabel('rental volume')
plt.show() | BikeShareStep2.ipynb | HeyIamJames/bikeshare | mit |
Create another scatterplot to show how daily rental volume varies with windspeed. As above, use a different series for each season. | w = weather[['season_desc', 'windspeed', 'total_riders']]
fall = w.loc[w['season_desc'] == 'Fall']
winter = w.loc[w['season_desc'] == 'Winter']
spring = w.loc[w['season_desc'] == 'Spring']
summer = w.loc[w['season_desc'] == 'Summer']
plt.scatter(fall['windspeed'], fall['total_riders'], color='orange', marker='^', label='fall', s=100, alpha=.41)
plt.scatter(winter['windspeed'], winter['total_riders'], color='blue', marker='*', label='winter', s=100, alpha=.41)
plt.scatter(spring['windspeed'], spring['total_riders'], color='purple', marker='d', label='spring', s=100, alpha=.41)
plt.scatter(summer['windspeed'], summer['total_riders'], color='red', marker='o', label='summer', s=100, alpha=.41)
plt.legend(loc='lower right')
plt.xlabel('windspeed x1000 mph')
plt.ylabel('rental volume') | BikeShareStep2.ipynb | HeyIamJames/bikeshare | mit |
How do the rental volumes vary with geography? Compute the average daily rentals for each station and use this as the radius for a scatterplot of each station's latitude and longitude. | usage = pd.read_table('usage_2012.tsv')
stations = pd.read_table('stations.tsv')
stations.head()
u = pd.concat([usage['station_start']], axis=1, keys=['station'])
counts = u['station'].value_counts()
c = DataFrame(counts.index, columns=['station'])
c['counts'] = counts.values
s = stations[['station','lat','long']]
m = pd.merge(s, c, on='station')
plt.scatter(m['long'], m['lat'], c='b', label='Location', s=(m['counts'] * .05), alpha=.2)
plt.legend(loc='lower right')
plt.xlabel('longitude')
plt.ylabel('latitude')
plt.show() | BikeShareStep2.ipynb | HeyIamJames/bikeshare | mit |
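Note that counts holds the total number of rentals per start station for the whole year, while the task asks for average daily rentals; dividing by the number of days gives that directly (a small sketch, assuming the 2012 file covers the full leap year):
# Average daily rentals per station (2012 has 366 days)
m['avg_daily'] = m['counts'] / 366.0
plt.scatter(m['long'], m['lat'], c='b', label='Location', s=(m['avg_daily'] * 20), alpha=.2)
plt.legend(loc='lower right')
plt.show()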
The test accuracies above show that different ways of splitting the data into training and test sets lead to different accuracies. The basic idea of cross-validation is to perform a series of splits of the dataset, generating a set of different training/test pairs, then to train the model and compute the test accuracy for each of them, and finally to average the results. This effectively reduces the variability of the test accuracy.
2. K-fold cross-validation
Split the dataset into K equal parts
Use 1 part as the test data and the rest as the training data
Compute the test accuracy
Repeat steps 2 and 3 with a different test fold
Average the test accuracies as an estimate of the prediction accuracy on unseen data | # the code below demonstrates how K-fold cross-validation splits the data
# simulate splitting a dataset of 25 observations into 5 folds
from sklearn.cross_validation import KFold
kf = KFold(25, n_folds=5, shuffle=False)
# print the contents of each training and testing set
print '{} {:^61} {}'.format('Iteration', 'Training set observations', 'Testing set observations')
for iteration, data in enumerate(kf, start=1):
print '{:^9} {} {:^25}'.format(iteration, data[0], data[1]) | Scikit-learn/.ipynb_checkpoints/(4)cross_validation-checkpoint.ipynb | jasonding1354/pyDataScienceToolkits_Base | mit |
3. Recommendations for using cross-validation
K=10 is a generally recommended choice
For classification problems, stratified sampling should be used to generate the folds, so that the proportion of positive and negative examples is the same in the training and test sets
4. Examples of cross-validation
4.1 Parameter tuning
Cross-validation can help us tune hyperparameters and end up with the best set of model parameters. In the example below we again use the iris data and a KNN model; by tuning the parameter we obtain the value that gives the best test accuracy and generalization ability. | from sklearn.cross_validation import cross_val_score
knn = KNeighborsClassifier(n_neighbors=5)
# cross_val_score chains the whole cross-validation process together, so there is no need to split the data manually
# the cv parameter specifies how many folds the original data is split into
scores = cross_val_score(knn, X, y, cv=10, scoring='accuracy')
print scores
# use average accuracy as an estimate of out-of-sample accuracy
# average the test accuracy over the ten iterations
print scores.mean()
# search for an optimal value of K for KNN model
k_range = range(1,31)
k_scores = []
for k in k_range:
knn = KNeighborsClassifier(n_neighbors=k)
scores = cross_val_score(knn, X, y, cv=10, scoring='accuracy')
k_scores.append(scores.mean())
print k_scores
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(k_range, k_scores)
plt.xlabel("Value of K for KNN")
plt.ylabel("Cross validated accuracy") | Scikit-learn/.ipynb_checkpoints/(4)cross_validation-checkpoint.ipynb | jasonding1354/pyDataScienceToolkits_Base | mit |
The example above illustrates the bias-variance trade-off: for small K the bias is low and the variance is high, while for large K the bias is high and the variance is low. The best model parameter lies somewhere in between, where bias and variance are balanced and the model generalizes best to out-of-sample data.
4.2 Model selection
Cross-validation can also help us with model selection. The following example uses the iris data to compare and choose between a KNN model and a logistic regression model. | # 10-fold cross-validation with the best KNN model
knn = KNeighborsClassifier(n_neighbors=20)
print cross_val_score(knn, X, y, cv=10, scoring='accuracy').mean()
# 10-fold cross-validation with logistic regression
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
print cross_val_score(logreg, X, y, cv=10, scoring='accuracy').mean() | Scikit-learn/.ipynb_checkpoints/(4)cross_validation-checkpoint.ipynb | jasonding1354/pyDataScienceToolkits_Base | mit |
4.3 Feature selection
Below we use the Advertising data and cross-validation to perform feature selection, comparing how different feature combinations affect the model's predictive performance. | import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression
# read in the advertising dataset
data = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0)
# create a Python list of three feature names
feature_cols = ['TV', 'Radio', 'Newspaper']
# use the list to select a subset of the DataFrame (X)
X = data[feature_cols]
# select the Sales column as the response (y)
y = data.Sales
# 10-fold cv with all features
lm = LinearRegression()
scores = cross_val_score(lm, X, y, cv=10, scoring='mean_squared_error')
print scores | Scikit-learn/.ipynb_checkpoints/(4)cross_validation-checkpoint.ipynb | jasonding1354/pyDataScienceToolkits_Base | mit |
Note that all of the scores above are negative. Why would the mean squared error be negative? Because mean_squared_error here is a loss function, whose optimization goal is minimization, whereas classification accuracy is a reward function to be maximized; scikit-learn therefore reports the negated MSE. | # fix the sign of MSE scores
mse_scores = -scores
print mse_scores
# convert from MSE to RMSE
rmse_scores = np.sqrt(mse_scores)
print rmse_scores
# calculate the average RMSE
print rmse_scores.mean()
# 10-fold cross-validation with two features (excluding Newspaper)
feature_cols = ['TV', 'Radio']
X = data[feature_cols]
print np.sqrt(-cross_val_score(lm, X, y, cv=10, scoring='mean_squared_error')).mean() | Scikit-learn/.ipynb_checkpoints/(4)cross_validation-checkpoint.ipynb | jasonding1354/pyDataScienceToolkits_Base | mit |
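The same pattern works for any other feature combination; for example, cross-validating with the TV feature alone (a minimal sketch following the cells above):
# 10-fold cross-validation with a single feature (TV only)
X = data[['TV']]
print np.sqrt(-cross_val_score(lm, X, y, cv=10, scoring='mean_squared_error')).mean()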
To see an example of its use, do the following:
Define the x and y regions.
Define a polynomial to test the region definition.
Use the region_estabilidad function and plot the result. | #Define the region
x = np.linspace(-3.0, 1.5)
y = np.linspace(-3.0, 3.0)
X, Y = np.meshgrid(x, y)
#Define the polynomial
def p(w):
return [1, -1 - w - w ** 2 / 2 - w ** 3 / 6 - w ** 4 / 24]
Z = region_estabilidad(p, X, Y)
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize']=(20,7)
plt.contour(X, Y, Z, np.linspace(0.0, 1.0, 9))
plt.show() | Code/Capítulo_1-Carga_Pandas.ipynb | dlegor/Tutorial-Pandas-Python | cc0-1.0 |
The previous example shows the use of Numpy as a library or module for matrix manipulation, and how the quality of its numerical processing makes it worth considering for numerical-methods problems.
The maze problem with Numpy
The problem has the same name as an example of the use of the Backtracking algorithm for solving problems by "brute force"; for that technique you can consult JC Gómez's blog.
The example I share here has more to do with the use of Markov chains; it is only meant to show how they work, how they are used to solve the problem under certain initial assumptions, and how to do it with numpy.
Suppose a device is placed that can move around and pass from one area to another as in the following grid: the idea is that from square 1 it can move to square 2 or 4, if it starts in square 2 it can move to 1 or 3, but if it starts in 5 it can only move to 6, and so on.
The point is that if it starts in square 1 and moves to 2, that only depends on having been in square 1; if it then moves to square 3, that only depends on state 2, not on having been in state 1. The idea of Markov processes is that to know whether it will move to square 3 having started in square 1, only the immediately preceding step needs to be known.
In the language of probability this is expressed as:
\begin{align}
P(X_{n}|X_{n-1},X_{n-2},...,X_{1}) = P(X_{n}|X_{n-1})\\[5pt]
\end{align}
The assumptions are that there are 6 possible initial states or starting places, square 1 through square 6. A matrix is built that gathers, in an ordered way, the information about the possible moves between adjacent squares. The relation between states is then:
\begin{align}
p_{ij}= P(X_{n}=j|X_{n-1}=i)\\[5pt]
\end{align}
which refers to the probability of being in square j given that the previous state was square i. For square 2 the probabilities would be:
\begin{align}
p_{21}= P(X_{n}=1|X_{n-1}=2)\\[5pt]
p_{23}= P(X_{n}=3|X_{n-1}=2)\\[5pt]
0= P(X_{n}=4|X_{n-1}=2)\\[5pt]
0= P(X_{n}=5|X_{n-1}=2)\\[5pt]
0= P(X_{n}=6|X_{n-1}=2)\\[5pt]
\end{align}
Seen as a matrix it would look like this:
\begin{array}{ccc}
p_{11} & p_{12} & p_{13} & p_{14} & p_{15} & p_{16} \
p_{21} & p_{22} & p_{23} & p_{24} & p_{25} & p_{26} \
p_{31} & p_{32} & p_{33} & p_{34} & p_{35} & p_{36}\
p_{41} & p_{42} & p_{43} & p_{44} & p_{45} & p_{46}\
p_{51} & p_{52} & p_{53} & p_{54} & p_{55} & p_{56}\
p_{61} & p_{62} & p_{63} & p_{64} & p_{65} & p_{66}\end{array}
The matrix above is called the transition matrix; for this example it has the following form:
\begin{array}{ccc}
\frac{1}{3} & \frac{1}{3} & 0 & \frac{1}{3} & 0 & 0 \
\frac{1}{3} & \frac{1}{3} & \frac{1}{3} & 0 & 0 & 0 \
0 & \frac{1}{3} & \frac{1}{3} & 0 & 0 & \frac{1}{3}\
\frac{1}{2} & 0 & 0 & \frac{1}{2} & 0 & 0\
0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2}\
0 & 0 & \frac{1}{3} & 0 & \frac{1}{3} & \frac{1}{3}\end{array}
If the probability of starting in any state is 1/6, then the probabilities after two moves or changes are given by the transition matrix multiplied by itself twice. It would look like this:
Vector of initial states:
\begin{align}
v=(\frac{1}{6},\frac{1}{6},\frac{1}{6},\frac{1}{6},\frac{1}{6},\frac{1}{6})\
\end{align}
After the first move it would be:
\begin{align}
v*M
\end{align}
After the second move it would be:
\begin{align}
v*M^2
\end{align}
This gives the probability of being in any square after the second move; doing this in Numpy, which is designed for matrix processing, is straightforward. It is enough to define the matrix and take products of matrices and vectors. | #Definition of the matrix
import numpy as np
M=np.matrix([[1.0/3.0,1.0/3.0,0,1.0/3.0,0,0],[1.0/3.0,1.0/3.0,1.0/3.0,0,0,0],[0,1.0/3.0,1.0/3.0,0,0,1.0/3.0],
[1.0/2.0,0,0,1.0/2.0,0,0],[0,0,0,0,1.0/2.0,1.0/2.0],[0,0,1.0/3.0,0,1.0/3.0,1.0/3.0]])
M
#Definition of the vector of initial states
v=np.array([1.0/3.0,1.0/3.0,1.0/3.0,1.0/3.0,1.0/3.0,1.0/3.0])
#First state or move
v*M
#Second move
v*M.dot(M)
#Third move
v.dot(M).dot(M).dot(M).dot(M).dot(M).dot(M).dot(M) | Code/Capítulo_1-Carga_Pandas.ipynb | dlegor/Tutorial-Pandas-Python | cc0-1.0 |
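Instead of chaining .dot(M) calls by hand, the long-run behaviour can be checked directly with a high matrix power (a small sketch using the same M and v defined above):
# Probabilities after 50 moves: the values barely change any more
v.dot(np.linalg.matrix_power(M, 50))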
If we keep increasing the number of states or moves, the result "stabilizes"; that is, the probability of being in box 1 after the third move is 37.53%, and it tends to settle at 37.5% as the number of moves increases.
A rough-and-ready PageRank example, computed with Numpy
The algorithm for ranking web pages was developed and implemented by the founders of Google. I will not go into the details, but in general the idea is the following:
Represent the Web as a directed graph.
Use the matrix associated with the graph to analyze the behaviour of the web under certain assumptions.
Add to the base model whatever is needed so that it adapts better to the nature of the web.
The first goal is to represent each page as a vertex of a graph, with an edge representing the relation from one page to another; that is, if page A contains a reference to page B, an "arrow" is added. A simple example is represented by the following graph:
The image above represents a graph simulating the relations between 3 pages; the arrows indicate the direction from one page to another. To model the relation between the pages an adjacency matrix is used; this matrix gathers the information about the relations between nodes. The adjacency matrix of this graph would be:
\begin{array}{ccc}
.33 & .5 & 1 \
.33 & 0 & 0 \
.33 & .5 & 0 \end{array}
This matrix is a Markov matrix by columns: each column sums to 1. The goal is to have a vector with the ranking of the pages by priority when starting a web search; after applying the matrix we can read off the order of priority.
So, assuming that each of the 3 pages has the same probability of being the starting page, the initial vector is:
\begin{align}
v=(.33,.33,.33)\
\end{align}
After applying the matrix, the equation that would give us the best ranking of the pages is:
\begin{align}
v=M*v
\end{align}
The problem therefore turns into an eigenvector and eigenvalue problem, so the task is to compute them.
\begin{align}
Mv=\lambda v
\end{align} | import numpy as np
M=np.matrix([[.33,.5,1],[.33,0,0],[.33,.5,0]])
lambda1,v=np.linalg.eig(M)
lambda1,v | Code/Capítulo_1-Carga_Pandas.ipynb | dlegor/Tutorial-Pandas-Python | cc0-1.0 |
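The ranking itself is the dominant eigenvector, i.e. the one belonging to the eigenvalue closest to 1, normalised so that its entries sum to one (a small sketch using the lambda1 and v returned above):
# Take the eigenvector of the largest eigenvalue and normalise it
idx = np.argmax(lambda1.real)
rank = np.ravel(v[:, idx].real)
print(rank / rank.sum())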
The previous example has 3 eigenvectors and 3 eigenvalues, but it does not reflect the general problem of the relations between web pages, since it can happen that a page has no references to any page other than itself, or that other pages reference it while it references none, not even itself. A more general case is represented by the following graph:
In this graph there are two nodes, D and E, which show behaviour that did not appear in the previous graph. Nodes E and D have no outgoing links to other nodes or pages.
The matrix associated with this graph turns out to be the following:
\begin{array}{ccccc}
.33 & .25 & 0.5 & 0 & 0 \
.33 & 0 & 0 & 0 & 0 \
.33 & .25 & 0 & 0 & 0\
0 & .25 & 0.5 & 1 & 0\
0 & .25 & 0 & 0 & 1 \end{array}
Notice that this matrix no longer behaves like the previous one in every column; the columns corresponding to the dangling nodes D and E only point to themselves, so a random surfer would get stuck there. The PageRank algorithm therefore proposes modifying the adjacency matrix by adding a matrix that compensates for this.
The equation is modified as follows:
\begin{align}
A=\beta M + (1-\beta)\frac{1}{n}ee^T
\end{align}
What this equation achieves is that columns like the one corresponding to node D take values that sum to 1, and cases like node E are "perturbed" so that the algorithm does not get stuck at that kind of node. Keeping the same hypothesis that initially every page (node) is equally likely, we now consider a vector of size 5 instead of 3, which looks like this:
\begin{align}
v=(0.2,0.2,0.2,0.2,0.2)\
\end{align}
The beta coefficient takes values between 0.8 and 0.9; 0.85 is the usual choice, or at least the value Google was assumed to use. In short, it is a control parameter. The letter e represents a vector of the form e=(1,1,1,1,1); the product with its transpose gives a square matrix which, multiplied by 1/n, is a Markov matrix.
The matrix A is now suitable for computing the eigenvector and eigenvalue; without going into detail, it satisfies the conditions of the Perron-Frobenius theorem, which in short means that we look for the "dominant eigenvector".
Computing the eigenvectors and eigenvalues of a matrix like the one associated with the example graph is trivial, but for millions of nodes it would be computationally very expensive, so an approximation process is used instead; it converges "quickly" and was part of the secret that made searches and page ranking much faster.
The estimation code in numpy would be the following: | from __future__ import division
import numpy as np
#Define the values of the control constants and the required matrices
beta=0.85
#Adjacency matrix
M=np.matrix([[0.33,0.25,0.5,0,0],[.33,0,0,0,0],[0.33,0.25,0,0,0],[0,.25,.5,1,0],[0,.25,0,0,1]])
#Number of nodes
n=M.shape[1]
#Matrix of the PageRank model
A=beta*M+((1-beta)*(1/n)*np.ones((5,5)))
#Define the initial ranking vector
v=np.ones(5)/5
#Estimation process by iteration
iterN=1
while True:
v1=v
v=v.dot(M)
print "Interación %d\n" %iterN
print v
if not np.any((0.00001<np.abs(v-v1))):
break
else:
iterN=iterN+1
print "M*v\n"
v.dot(M) | Code/Capítulo_1-Carga_Pandas.ipynb | dlegor/Tutorial-Pandas-Python | cc0-1.0 |
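Since the vector returned by the loop is not normalised, dividing by its sum expresses the final ranking as probabilities (a minimal sketch using the v computed above):
# Normalise the final ranking so that the entries sum to one
v / v.sum()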
The result is that after 17 iterations the algorithm indicates that the PageRank is not completely dominated by nodes E and D; even though they are the "pages" with the highest value, the other 3 turn out to be quite close in importance. You can see how the values adjust as the stages of the computation progress.
Data.Frame and Series
The two main elements of Pandas are the DataFrame and the Series. The name DataFrame is the same as the one used in R project and in essence it serves the same purpose: loading and processing data.
The following examples are brief; to learn in detail about the properties, operations and characteristics of these two objects you can consult the book Python for Data Analysis or the official site of the Pandas module.
First the module and the objects are loaded, and we show how to use them in a simple way. | #Load the module
import pandas as pd
from pandas import Series, DataFrame
#Build a Series: first the list of data is given, then the list of indices
datos_series=Series([1,2,3,4,5,6],index=['a','b','c','d','e','f'])
#Show how Pandas loads the data into the defined structure
datos_series
#Show the values stored in the data structure
datos_series.values
#Show the values registered as the index
datos_series.index
#Select the value associated with the index 'b'
datos_series['b']
#Check whether there are null or NaN values
datos_series.isnull()
#Compute the cumulative sum, i.e. 1+2, 1+2+3, 1+2+3+4, 1+2+3+4+5, 1+2+3+4+5+6
datos_series.cumsum()
#Define a DataFrame: first a dictionary is defined and then the DataFrame is generated
datos={'Estado':['Guanajuato','Querétaro','Jalisco','Durango','Colima'],'Población':[5486000,1828000,7351000,1633000,723455],'superficie':[30607,11699,78588,123317,5627]}
Datos_Estados=DataFrame(datos)
Datos_Estados
#Generate the DataFrame again, this time assigning an index in order to manipulate the data
Datos_Estados=DataFrame(datos,index=[1,2,3,4,5])
Datos_Estados
#Select a column
Datos_Estados.Estado
#Another way of choosing the column is the following.
Datos_Estados['Estado']
#Choose a row, using the index that was defined for the data
Datos_Estados.ix[2]
#Select more than one row
Datos_Estados.ix[[3,4]]
#General statistical description: the mean, the standard deviation, the maximum, the minimum, etc.
Datos_Estados.describe()
#Modify the DataFrame by adding a new column
from numpy import nan as NA
Datos_Estados['Índice']=[1.0,4.0,NA,4.0,NA]
Datos_Estados
#Check whether there are NaN or null values
Datos_Estados.isnull()
#Pandas has tools for handling Missing Values: they can be explored with isnull() or
#removed with dropna. In this case they are filled with fillna
Datos_Estados.fillna(0) | Code/Capítulo_1-Carga_Pandas.ipynb | dlegor/Tutorial-Pandas-Python | cc0-1.0 |
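Besides fillna, rows containing missing values can also simply be dropped (a small sketch using the same dataframe):
#Drop every row that contains at least one NaN value
Datos_Estados.dropna()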
The examples above show that it is very easy to manipulate data with Pandas, whether with Series or with DataFrame. For more detail on the functions it is advisable to consult the references mentioned earlier.
Loading data from different files and simple statistics | #Send matplotlib output to the ipython console
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize']=(30,8)
#Load the data from a directory, taking the records in row 0 as headers
datos=pd.read_csv('~/Documentos/Tutorial-Python/Datos/Mujeres_ingeniería_y_tecnología.csv', encoding='latin1')
#Show the first 10 records
datos.head(10)
#Look at the shape or dimensions of the data: there are 160 rows and 5 columns.
datos.shape
#Give a description of the type of variables loaded in the DataFrame
datos.dtypes
#The column information can be viewed in a more complete way
datos.info()
#Produce an overall statistical summary of the variables or columns
datos.describe() | Code/Capítulo_1-Carga_Pandas.ipynb | dlegor/Tutorial-Pandas-Python | cc0-1.0 |
Looking at the data we have, it is natural to ask something about it. The simplest question is: which state has the largest number of women enrolling in engineering? We can also add to that question: in which year or cycle did that happen?
A simple way to address these questions is to build a table that lets us visualize the relation between the variables mentioned. | #Build a pivot table to order the data and see how the total number of women
#enrolled in engineering behaved
datos.pivot('ENTIDAD','CICLO','MUJERES_INSC_ING')
#Check which are the 3 states with the largest number of enrolments in the 2012/2013 cycle
datos.pivot('ENTIDAD','CICLO','MUJERES_INSC_ING').sort_values(by='2012/2013')[-3:]
#Plot the results of the previous table, ordered by the 2010/2011 cycle
datos.pivot('ENTIDAD','CICLO','MUJERES_INSC_ING').sort_values(by='2010/2011').plot(kind='bar') | Code/Capítulo_1-Carga_Pandas.ipynb | dlegor/Tutorial-Pandas-Python | cc0-1.0 |
Observation: it becomes evident that the federal entities or states where most women enrol in engineering are the Distrito Federal (Mexico City), Estado de México, Veracruz, Puebla, Guanajuato, Jalisco and Nuevo León. It can also be seen that in every state the number of women who enrolled was higher in the 2010/2011 period and dropped significantly in the following years.
This answers the question: which state has the largest number of women enrolling in engineering? | #Plot the boxplot for each period
datos.pivot('ENTIDAD','CICLO','MUJERES_INSC_ING').plot(kind='box', title='Boxplot') | Code/Capítulo_1-Carga_Pandas.ipynb | dlegor/Tutorial-Pandas-Python | cc0-1.0 |
Note: this brief analysis makes use of pivot tables in pandas, which makes it easy to analyze how the categorical variables in the data behave. In this example it shows that the 2010/2011 period had a higher mean number of women enrolled in engineering, but also that the relation between states was more dispersed. It can also be seen that the 2011/2012, 2012/2013 and 2013/2014 periods behave "similarly".
Other graphical tools
As Pandas evolved and the module became more widely used, its limitation, in my view, was the level of its base graphics. To use DataFrame and Series objects in matplotlib you need to define the arrays or process them so that better plots can be built. Matplotlib is a very powerful module, but exploratory graphical analysis with it is much more cumbersome. If you have used R project for data exploration, it is very easy to build the basic plots, and with libraries such as ggplot2 or lattice you can do simple and powerful graphical analysis.
Faced with this problem, a library was designed to complement graphical analysis, something like a "high-level" layer when compared with matplotlib. The module is called seaborn.
For the following examples I use the data analyzed previously. | #Build the table
#Tabla1=datos.pivot('ENTIDAD','CICLO','MUJERES_INSC_ING')
datos.head()
#Load the seaborn library
import seaborn as sns
#sns.set(style="ticks")
#sns.boxplot(x="CICLO",y="MUJERES_INSC_ING",data=datos,palette="PRGn",hue="CICLO") | Code/Capítulo_1-Carga_Pandas.ipynb | dlegor/Tutorial-Pandas-Python | cc0-1.0 |
How to load a JSON and analyze it.
The following gives an example of how to load data from a web service that returns a JSON file. | sns.factorplot(x="ENTIDAD",y="MUJERES_INSC_ING",hue="CICLO",data=datos,palette="muted", size=15,kind="bar")
#Another plot, showing the cross between the women who enrol in engineering and the total enrolment
with sns.axes_style('white'):
sns.jointplot('MUJERES_INSC_ING','MAT_TOTAL_SUP',data=datos,kind='hex') | Code/Capítulo_1-Carga_Pandas.ipynb | dlegor/Tutorial-Pandas-Python | cc0-1.0 |
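A minimal sketch of the JSON-loading step described above; the URL is purely hypothetical and stands in for whatever web service returns the JSON file:
import requests
import pandas as pd

# Hypothetical endpoint returning a JSON payload
response = requests.get('https://example.com/api/datos.json')
df_json = pd.DataFrame(response.json())
df_json.head()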