| markdown (string, length 0–1.02M) | code (string, length 0–832k) | output (string, length 0–1.02M) | license (string, length 3–36) | path (string, length 6–265) | repo_name (string, length 6–127) |
---|---|---|---|---|---|
Read in files | import os  # imports assumed from an earlier cell of this notebook
import pandas as pd
from ast import literal_eval

dir_in_res = '../out/20.0216 feat/reg_rf_boruta'
dir_in_anlyz = os.path.join(dir_in_res, 'anlyz_filtered')
df_featSummary = pd.read_csv(os.path.join(dir_in_anlyz, 'feat_summary.csv')) #feature summary
df_featSummary['feat_sources'] = df_featSummary['feat_sources'].apply(literal_eval)
df_featSummary['feat_genes'] = df_featSummary['feat_genes'].apply(literal_eval)
feat_summary_annot_gene = pd.read_csv(os.path.join(dir_in_anlyz, 'onsamegene', 'feat_summary_annot.csv'), header=0, index_col=0)
gs_name = 'paralog'
feat_summary_annot_paralog = pd.read_csv(os.path.join(dir_in_anlyz, f'insame{gs_name}', 'feat_summary_annot.csv'), header=0, index_col=0)
gs_name = 'Panther'
feat_summary_annot_panther = pd.read_csv(os.path.join(dir_in_anlyz, f'insamegeneset{gs_name}', 'feat_summary_annot.csv'), header=0, index_col=0)
| _____no_output_____ | MIT | notebooks/10b-anlyz_run02-synthetic_lethal_classes-feat1.ipynb | pritchardlabatpsu/cga |
Breakdown - basic - top most important feature | df_counts = df_featSummary.groupby('feat_source1')['feat_source1'].count()
df_counts = df_counts.to_dict()
df_sl = pd.DataFrame([{'new_syn_lethal':df_counts['CERES'],
'classic_syn_lethal': sum([df_counts[k] for k in ['CN','Mut','RNA-seq']]) }])
df_sl = df_sl.T.squeeze()
df_sl | _____no_output_____ | MIT | notebooks/10b-anlyz_run02-synthetic_lethal_classes-feat1.ipynb | pritchardlabatpsu/cga |
Breakdown of lethality, top most important feature | df_src1 = df_featSummary[['target','feat_source1']].set_index('target')
df = pd.DataFrame({'isNotCERES': df_src1.feat_source1.isin(['RNA-seq', 'CN', 'Mut']),
'sameGene': feat_summary_annot_gene.inSame_1,
'sameParalog': feat_summary_annot_paralog.inSame_1,
'sameGS': feat_summary_annot_panther.inSame_1,
'isCERES': df_src1.feat_source1 == 'CERES'
})
lethal_dict = {'sameGene': 'Same gene',
'sameParalog': 'Paralog',
'sameGS': 'Gene set',
'isCERES': 'Functional',
'isNotCERES': 'Classic synthetic'}
df_counts = pd.DataFrame({'sum':df.sum(axis=0)})
df_counts['lethality'] = [lethal_dict[n] for n in df_counts.index]
df_counts
plt.figure()
ax = sns.barplot(df_counts['lethality'], df_counts['sum'], color='steelblue')
ax.set(xlabel='Lethality types', ylabel='Number of genes')
plt.tight_layout() | _____no_output_____ | MIT | notebooks/10b-anlyz_run02-synthetic_lethal_classes-feat1.ipynb | pritchardlabatpsu/cga |
Breakdown of lethality, top 10 most important features | df_src = df_featSummary.set_index('target').feat_sources
df = pd.DataFrame({'hasNoCERES': df_src.apply(lambda x: any([n in x for n in ['CN','Mut','RNA-seq','Lineage']])),
'sameGene': feat_summary_annot_gene.inSame_top10,
'sameParalog': feat_summary_annot_paralog.inSame_top10,
'sameGS': feat_summary_annot_panther.inSame_top10,
'hasCERES': df_src.apply(lambda x: 'CERES' in x)
})
lethal_dict = {'sameGene': 'Same gene',
'sameParalog': 'Paralog',
'sameGS': 'Gene set',
'hasCERES': 'Functional',
'hasNoCERES': 'Classic synthetic'}
df_counts = pd.DataFrame({'sum':df.sum(axis=0)})
df_counts['lethality'] = [lethal_dict[n] for n in df_counts.index]
df_counts
plt.figure()
ax = sns.barplot(df_counts['lethality'], df_counts['sum'], color='steelblue')
ax.set(xlabel='Lethality types', ylabel='Number of genes', ylim=[0,500])
plt.tight_layout() | _____no_output_____ | MIT | notebooks/10b-anlyz_run02-synthetic_lethal_classes-feat1.ipynb | pritchardlabatpsu/cga |
Module 2: Inversion In the previous module we started with a continuous distribution of a physical property and discretized it into many cells, then we performed a forward simulation that created data from known model parameters. Inversion, of course, is exactly the opposite process. Imagine each model parameter that we had represents a layer in a 1D layered earth. At the surface of the earth we measure the data, and when we invert we do so for the model parameters. Our goal is to take the observed data and recover models that emulate the real Earth as closely as possible. You may have noticed that the act of discretizing our problem created more cells than data values. In our last example we produced 20 data points from 1000 model parameters, which leaves us with only a few data points and many model parameters. While this was not much of a problem in the forward simulation, when we want to do the inverse process, that is, obtain the model parameters from the data, it is clear that we have many more unknowns than knowns. In short, we have an underdetermined problem, and therefore infinitely many possible solutions. In mathematical terms, geophysical surveys represent what are called "ill-posed" problems. An "ill-posed" problem is any problem that does not satisfy the requirements for the definition of a "well-posed" problem. A *well-posed* problem is a problem in mathematics that must satisfy all three of the following criteria: A solution exists. The solution is unique. The solution's behavior changes continuously with continuously changing initial conditions. Any mathematical formulation that does not satisfy all three of the above is, by definition, an ill-posed problem. Since we are dealing with an underdetermined system, I hope that it is clear that we are dealing with an ill-posed problem (i.e., we have no unique solution), and we are going to have to come up with a method (or methods) that can help us choose from the available solutions. We need to devise an algorithm that can choose the "best" model from the infinitely many that are available to us. In short, we are going to have to find an optimum model. More specifically, in the context of most geophysics problems, we are going to use gradient-based optimization. This process involves building an *objective function*, which is a function that casts our inverse problem as an optimization problem. We will build an objective function consisting of two parts: $$\phi = \phi_d + \beta \phi_m$$ where the terms on the right hand side are (1) a data misfit (denoted as $\phi_d$) and (2) a model regularization (denoted as $\phi_m$). These two parts will be elaborated in detail below. Once we have formulated the objective function, we will take derivatives and obtain a recovered model. This module will flesh out the details of the model objective function, and then take first and second derivatives to derive an expression that gives us a solution for our model parameters. The Data Misfit, $\phi_d$ A *misfit* describes how closely synthetic data match measurements that are made in the field. Traditionally this term refers to the difference between the measured data and the predicted data. If these two quantities are sufficiently close, then we consider the model to be a viable candidate for the solution to our problem. Because the data are inaccurate, a model that reproduces those data exactly is not feasible. 
A realistic goal, rather, is to find a model whose predicted data are consistent with the errors in the observations, and this requires incorporating knowledge about the noise and uncertainties. The concept of fitting the data requires that some estimate of the "noise" be available. Unfortunately, "noise" within the context of inversion is everything that cannot be accounted for by a compatible relationship between the model and the data. More specifically, noise refers to (1) noise from data acquisition in the field, (2) uncertainty in source and receiver locations, (3) numerical error, and (4) physical assumptions about our model that do not capture all of the physics. A standard approach is to assume that each datum, $d_i$, contains errors that can be described as Gaussian with a standard deviation $\epsilon_i$. It is important to give a significant amount of thought towards assigning standard deviations in the data, but a reasonable starting point is to assign each $\epsilon_i$ as $\epsilon_i = \text{floor} + \%|d_i|$. Incorporating both the differences between predicted and measured data and a measure of the uncertainties in the data yields our misfit function, $\phi_d$: $$\phi_d (m) = \frac{1}{2} \sum_{i=1}^N \left( \frac{F[m]_i - d_i^{obs} }{\epsilon_i}\right)^2 = \frac{1}{2} \|W_d(F[m] - d^{obs}) \|_2^2$$ Note that the right hand side of the equation is written as a matrix-vector product, with each $\epsilon_i$ in the denominator placed as elements on a diagonal matrix $W_d$, as follows: \begin{equation}\begin{split}W_d = \begin{bmatrix} \frac{1}{\epsilon_1} & 0 & 0 & \cdots & 0\\ 0 & \frac{1}{\epsilon_2} & 0 & \cdots & 0\\ 0 & 0 & \frac{1}{\epsilon_3} & \cdots & \vdots\\ 0 & 0 & 0 & \ddots & \frac{1}{\epsilon_N}\\ \end{bmatrix}\end{split}\end{equation} If we return to the linear problem from the previous section, where our forward operator was simply a matrix of kernel functions, we can substitute $F[m]$ with $G$ and obtain $$\phi_d (m) = \frac{1}{2} \sum_{i=1}^N \left( \frac{(Gm)_i - d_i^{obs} }{\epsilon_i}\right)^2 = \frac{1}{2} \|W_d(Gm - d^{obs}) \|_2^2$$ Now that we have defined a measure of misfit, the next task is to determine a tolerance value, such that if the misfit is about equal to that value, then we have an acceptable fit. Suppose that the standard deviations are known and that the errors are Gaussian; then $\phi_d$ becomes a $\chi_N^2$ variable with $N$ degrees of freedom. This is a well-known quantity with an expected value $E[\chi_N^2]=N$ and a standard deviation of $\sqrt{2N}$. Basically, what this means is that computing $\phi_d$ should give us a value that is close to the number of data, $N$. The Model Regularization, $\phi_m$ There are many options for choosing a model regularization, but the goal in determining a model regularization is the same: given that we have no unique solution, we must make assumptions in order to recast the problem in such a way that a solution exists. A general function used in 1D is as follows: $$\phi_m = \alpha_s \int (m)^2 dx + \alpha_x \int \left( \frac{dm}{dx} \right)^2 dx$$ Each term in the above expression is a norm that measures characteristics about our model. The first term is a representation of the square of the Euclidean length for a continuous function, and therefore measures the length of the model, while the second term uses derivative information to measure the model's smoothness. Usually the model regularization is defined with respect to a reference model. 
In the above, the reference model would simply be zero, but choosing a non-zero reference model $m_{ref}$ yields the following: $$\phi_m = \alpha_s \int (m-m_{ref})^2 dx + \alpha_x \int \left( \frac{d}{dx} (m-m_{ref}) \right)^2 dx$$ As before, we will discretize this expression. It is easiest to break up each term and treat them separately, at first. We will denote each term of $\phi_m$ as $\phi_s$ and $\phi_x$, respectively. Consider the first term. Translating the integral to a sum yields: $$\phi_s = \alpha_s \int (m)^2 dx \rightarrow \alpha_s \sum_{i=1}^N \int_{x_{i-1}}^{x_i} (m_i)^2 dx = \alpha_s \sum_{i=1}^N m_i^2 (x_i - x_{i-1})$$ Each spatial "cell" is $x_i - x_{i-1}$, which is the distance between nodes, as you may recall from the previous module. To simplify notation, we will use $\Delta x_{n_i}$ to denote the *ith* distance between nodes: We can then write $\phi_s$ as: $$\phi_s = \alpha_s \sum_{i=1}^N m_i^2 \Delta x_{n_i} = \alpha_s m^T W_s^T W_s m = \alpha_s \|W_s m\|_2^2$$ with: \begin{equation}\begin{split}W_s = \begin{bmatrix} {\sqrt{\Delta x_{n_1}}} & 0 & 0 & \cdots & 0\\ 0 & {\sqrt{\Delta x_{n_2}}} & 0 & \cdots & 0\\ 0 & 0 & {\sqrt{\Delta x_{n_3}}} & \cdots & \vdots\\ 0 & 0 & 0 & \ddots & {\sqrt{\Delta x_{n_N}}}\\ \end{bmatrix}\end{split}\end{equation} For the second term, we will do a similar process. First, we will delineate $\Delta x_{c_i}$ as the distance between cell centers: A discrete approximation to the integral can be made by evaluating the derivative of the model based on how much it changes between the cell centers, that is, we will take the average gradient between the *ith* and *i+1th* cells: $$\phi_x = \alpha_x \int \left( \frac{dm}{dx} \right)^2 dx \rightarrow \alpha_x \sum_{i=1}^{N-1} \left( \frac{m_{i+1}-m_i}{\Delta x_{c_i}}\right)^2 \Delta x_{c_i} = \alpha_x m^T W_x^T W_x m = \alpha_x \|W_x m\|_2^2$$ The matrix $W_x$ is a finite difference matrix constructed thus: \begin{equation}\begin{split}D_x = \begin{bmatrix} -\frac{1}{{\Delta x_{c_1}}} & \frac{1}{{\Delta x_{c_1}}} & 0 & \cdots & 0\\ 0 & -\frac{1}{{\Delta x_{c_2}}} & \frac{1}{{\Delta x_{c_2}}} & \cdots & 0\\ 0 & 0 & \ddots & \ddots & \vdots\\ 0 & 0 & 0 & -\frac{1}{{\Delta x_{c_{M-1}}}} & \frac{1}{{\Delta x_{c_{M-1}}}}\\ \end{bmatrix}\end{split}\end{equation} and then we need to account for the integration, so we multiply by a diagonal matrix $\rm diag(\sqrt{v})$: \begin{equation}W_x = D_x \, \rm diag(\sqrt{v})\end{equation} So to summarize, we have $\phi_m = \phi_s + \phi_x$ with \begin{equation}\begin{split} \phi_m & = \phi_s + \phi_x \\[0.4em] & = \alpha_s \|W_s (m - m_{ref})\|_2^2 + \alpha_x \|W_x (m - m_{ref})\|_2^2 \\[0.4em] \end{split}\end{equation} Next, we will write this more compactly by stacking $W_s$ and $W_x$ into a matrix $W_m$ as follows: \begin{equation}\begin{split}W_m = \begin{bmatrix} \sqrt{\alpha_s} W_s\\ \sqrt{\alpha_x} W_x\end{bmatrix}\end{split}\end{equation} Model Objective Function If we go back and recall what was discussed in the introduction, the model objective function casts the inverse problem as an optimization problem, and as mentioned, we will be using gradient-based optimization, so we will need to take derivatives. The complete model objective function that we are dealing with will contain both the data misfit and the model regularization. 
This means that we can write $\phi$ as the sum of the two and then differentiate: $$\phi = \phi_d + \beta \phi_m$$ For the linear problem we are considering, $$\phi_d = \frac{1}{2}\| W_d (Gm-d^{obs})\|_2^2 = \frac{1}{2}(Gm-d^{obs})^T W_d^T W_d (Gm-d^{obs})$$ and $$\phi_m = \frac{1}{2} \|W_m (m-m_{ref}) \|^2_2 = \frac{1}{2}(m-m_{ref})^T W_m^T W_m (m-m_{ref})$$ To simplify the terms and see the math a little more clearly, let's note that $W_d(Gm-d^{obs})$ and $W_m(m-m_{ref})$ are simply vectors. And since we are taking the square of the 2-norm, all that we are really doing is taking the dot product of each vector with itself. So let $z=W_d(Gm-d^{obs})$, and let $y=W_m(m-m_{ref})$, where both the $z$ and $y$ vectors are functions of $m$. So then: $$\phi_d = \frac{1}{2}\|z\|_2^2 = \frac{1}{2}z^T z $$ $$\phi_m = \frac{1}{2}\|y\|_2^2 =\frac{1}{2}y^T y $$ To minimize this, we want to look at $\nabla \phi$. Using our compact expressions: $$\phi = \phi_d + \beta \phi_m = \frac{1}{2}z^Tz + \beta \frac{1}{2}y^Ty $$ Taking the derivative with respect to $m$ yields: \begin{equation}\begin{split}\frac{d \phi}{dm}& = \frac{1}{2} \left(z^T \frac{dz}{dm} + z^T \frac{dz}{dm} + \beta y^T \frac{dy}{dm} + \beta y^T \frac{dy}{dm}\right)\\[0.6em]& = z^T \frac{dz}{dm} + \beta y^T \frac{dy}{dm}\end{split}\end{equation} Note that $$\frac{dz}{dm} = \frac{d}{dm}(W_d(Gm-d^{obs})) = W_d G $$ and $$ \frac{dy}{dm} = \frac{d}{dm}(W_m (m-m_{ref})) = W_m $$ Next, let's substitute both derivatives and our expressions for $z$ and $y$, apply the transposes, and rearrange: \begin{equation}\begin{split}\frac{d \phi}{dm} & = z^T \frac{dz}{dm} + \beta y^T \frac{dy}{dm} \\[0.6em] & = (W_d(Gm-d^{obs}))^T W_d G + \beta (W_m (m-m_{ref}))^T W_m\\[0.6em] & = (Gm-d^{obs})^T W_d^T W_d G + \beta (m-m_{ref})^T W_m^T W_m \\[0.6em] & = ((Gm)^T - d^T) W_d^T W_d G + \beta (m^T-m_{ref}^T)W_m^T W_m \\[0.6em] & = (m^T G^T - d^T) W_d^T W_d G + \beta m^T W_m^T W_m - \beta m_{ref}^T W_m^T W_m \\[0.6em] & = m^T G^T W_d^T W_d G - d^T W_d^T W_d G + \beta m^T W_m^T W_m - \beta m_{ref}^T W_m^T W_m\\[0.6em] & = m^T G^T W_d^T W_d G + \beta m^T W_m^T W_m - d^T W_d^T W_d G - \beta m_{ref}^T W_m^T W_m \end{split}\end{equation} Now we have an expression for the derivative of our equation that we can work with. Setting the gradient to zero and gathering like terms gives: \begin{equation} \begin{split} m^T G^T W_d^T W_d G + \beta m^T W_m^T W_m = d^T W_d^T W_d G + \beta m_{ref}^T W_m^T W_m\\[0.6em] (G^T W_d^T W_d G + \beta W_m^T W_m)m = G^T W_d^T W_d d + \beta W_m^T W_m m_{ref}\\[0.6em]\end{split}\end{equation} From here we can do two things. First, we can solve for $m$, our recovered model: \begin{equation}\begin{split} m = (G^T W_d^T W_d G + \beta W_m^T W_m)^{-1} (G^T W_d^T W_d d + \beta W_m^T W_m m_{ref})\\[0.6em]\end{split}\end{equation} Second, we can get the second derivative simply from the bracketed terms on the left hand side of the equation above: \begin{equation} \frac{d^2 \phi}{dm^2} = G^T W_d^T W_d G + \beta W_m^T W_m\end{equation} In the model problem that we are solving, second derivative information is not required to obtain a solution; however, in non-linear problems, or in situations where higher-order information is required, it is useful to have this available. Solving for $m$ in Python Before we solve for $m$, we will recreate what we had in the first module. First, import the appropriate packages: | # Import Packages
import numpy as np
import matplotlib.pyplot as plt | _____no_output_____ | MIT | Module 2, Inversion-Doug.ipynb | lheagy/inversion-tutorial |
Here is the model that we had previously: | # Begin by creating a fictitious set of model data
n_cells = 1000 # Set number of model parameters
n_nodes = n_cells + 1
xn = np.linspace(0, 1, n_nodes) # Define 1D domain on nodes
xc = 0.5*(xn[1:] + xn[:-1]) # Define 1D domain on cell centers
# Define Gaussian function:
def gauss(x, amplitude, mean, std):
"""Define a gaussian function"""
return amplitude * np.exp(-((x-mean)/std)**2 / 2)
# Choose parameters for Gaussian, evaluate, and store in an array, f.
std = 6e-2
mean = 0.7
amplitude_gaussian = 1
gaussian = gauss(xc, amplitude_gaussian, mean, std)
fig, ax = plt.subplots(1, 1)
ax.plot(xc, gaussian)
ax.set_title("Gaussian")
# Define a boxcar function:
x_boxcar = np.r_[0.2, 0.35]
amplitude_boxcar = 1
boxcar = np.zeros(n_cells) # initialize an array of all zeros
boxcar_inds = (xc >= x_boxcar.min()) & (xc <= x_boxcar.max()) # find the indices of the boxcar
boxcar[boxcar_inds] = amplitude_boxcar
# construct the model
mtrue = gaussian + boxcar
# Plot
fig, ax = plt.subplots(1, 1)
ax.plot(xc, mtrue)
ax.set_xlabel('x')
ax.set_ylabel('m(x)')
ax.set_title('Model, $m(x)$') | _____no_output_____ | MIT | Module 2, Inversion-Doug.ipynb | lheagy/inversion-tutorial |
Again, we define our kernel functions and averaging and volume matrices as before: | # Make the set of kernel functions
def kernel_functions(x, j, p, q):
return np.exp(-p*j*x) * np.cos(2*np.pi*q*j*x)
p = 0.01 # Set values for p, q
q = 0.15
n_data = 20 # specify number of output data
j_min = 0
j_max = n_data
j_values = np.linspace(j_min, j_max, n_data)
Gn = np.zeros((n_nodes, n_data))
for i, j in enumerate(j_values):
Gn[:, i] = kernel_functions(xn, j, p, q)
# Plot
fig, ax = plt.subplots(1, 1)
ax.plot(xn, Gn)
ax.set_xlabel('x')
ax.set_ylabel('g(x)')
ax.set_title('Kernel functions, $g(x)$')
# Make Averaging Matrix
Av = np.zeros((n_cells, n_nodes)) # Create a matrix of zeros of the correct dimensions
# and fill in with elements using the loop below (note the 1/2 is included in here).
for i in range(n_cells):
Av[i, i] = 0.5
Av[i, i+1] = 0.5
print(Av.shape)
# make the Volume, "delta x" array
delta_x = np.diff(xn) # set x-spacings
V = np.diag(delta_x) # create diagonal matrix
print(V.shape) | (1000, 1000)
| MIT | Module 2, Inversion-Doug.ipynb | lheagy/inversion-tutorial |
Last, we produce our data: | G = Gn.T @ Av.T @ V
d = G @ mtrue
# Plot
fig, ax = plt.subplots(1, 1)
ax.plot(d, '-o')
ax.set_title('Synthetic Data $d$') | _____no_output_____ | MIT | Module 2, Inversion-Doug.ipynb | lheagy/inversion-tutorial |
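As a quick added check of the discretization sizes discussed earlier (assuming the variables defined in the cells above), the forward operator is very wide: many more model cells than data. | # Added check: shapes of the forward operator and the data
print(G.shape)  # (20, 1000): n_data rows, one column per model cell
print(d.shape)  # (20,): one datum per kernel function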
Introducing noise to the data This is where we stood at the end of the last module. Next, to simulate taking data in the field, we are going to add noise to the data before we perform our inversion. We will do this by assigning a noise floor and a percent scaling factor for each datum, and we will assume that the noise is Gaussian. We then add the noise to the original data to make a simulated vector of observed data. The superposition of the noise and the original data is plotted below. | # Add noise to our synthetic data
add_noise = False # set to true if you want to add noise to the data
if add_noise is True:
relative_noise = 0.04
noise_floor = 1e-2
noise = (
relative_noise * np.random.randn(n_data) * np.abs(d) + # percent of data
noise_floor * np.random.randn(n_data)
)
dobs = d + noise
else:
dobs = d
fig, ax = plt.subplots(1, 1)
ax.plot(d, '-o', label="d clean")
ax.plot(dobs, '-s', label="dobs")
ax.set_title("synthetic data")
ax.legend() | _____no_output_____ | MIT | Module 2, Inversion-Doug.ipynb | lheagy/inversion-tutorial |
Setting up the inverse problem Now we will assemble the pieces for constructing an objective function to be minimized in the inversion. Throughout we use L2 norms, so the first thing we will do is define a simple function for computing a weighted L2 norm. | def weighted_l2_norm(W, v):
"""
A function that returns a weighted L2 norm. The parameter W is a weighting matrix
and v is a vector.
"""
Wv = W @ v
return Wv.T @ Wv | _____no_output_____ | MIT | Module 2, Inversion-Doug.ipynb | lheagy/inversion-tutorial |
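As a small sanity check of this helper (added for illustration; `v_test` is just an arbitrary vector), an identity weighting should reproduce numpy's ordinary squared 2-norm. | # Added check: identity weighting reduces to the ordinary squared 2-norm
v_test = np.array([1.0, 2.0, 3.0])
assert np.isclose(weighted_l2_norm(np.eye(3), v_test), np.linalg.norm(v_test)**2)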
Calculating $\phi_d$ We are now in a position to build up the data misfit term, $\phi_d$. We already have a function to compute the weighted 2-norm, so next we make the matrix $W_d$, a diagonal matrix that contains the inverses of the uncertainties in our data. Again, we define a floor and a percent error for each datum. With $W_d$ in hand, $\phi_d$ can be evaluated with the weighted-norm helper; a short check after this cell does exactly that. | # Calculate the data misfit, phi_d
noise_floor = 1e-3
relative_error = 0.05
standard_deviation = noise_floor + relative_error * np.abs(dobs)
# construct Wd
Wd = np.diag(1/standard_deviation)
fig, ax = plt.subplots(1, 1)
img = ax.imshow(Wd, "Greys")
plt.colorbar(img, ax=ax)
ax.set_title("Wd") | _____no_output_____ | MIT | Module 2, Inversion-Doug.ipynb | lheagy/inversion-tutorial |
Calculating $\phi_m$ As discussed above, we first need to make our $W_m$ matrix, which is a partitioned matrix built from two other matrices, $W_s$ and $W_x$, each scaled by a separate parameter, $\alpha_s$ and $\alpha_x$. We will discuss the manner in which $\alpha_s$ and $\alpha_x$ are selected in more detail during the next module; for the moment, we set them as defined below. Once this matrix is built up, calculating $\phi_m$ is a simple matter, given that we already have a function to compute the weighted 2-norm. For the sake of illustration, a short check after $W_m$ is stacked below computes $\phi_m$ from the residual of a reference model and our true model $m$. However, of interest to us will be the residual between the model that we recover, $m_{rec}$, and our reference model. | # Start with Ws
sqrt_vol = np.sqrt(delta_x) # in 1D - the "Volume" = length of each cell (delta_x)
Ws = np.diag(sqrt_vol)
# and now Wx
Dx = np.zeros((n_cells-1, n_cells)) # differencing matrix
for i, dx in enumerate(delta_x[:-1]):
Dx[i, i] = -1/dx
Dx[i, i+1] = 1/dx
Wx = Dx @ np.diag(sqrt_vol)
print(Ws.shape, Wx.shape)
# plot both
fig, ax = plt.subplots(1, 2, figsize=(10, 4))
plot_up_to = 10 # plot 10 entries
# plot Ws
img = ax[0].imshow(Ws[:plot_up_to, :plot_up_to], "Greys")
plt.colorbar(img, ax=ax[0])
ax[0].set_title("Ws")
# plot Wx
img = ax[1].imshow(Wx[:plot_up_to, :plot_up_to+1], "bwr")
plt.colorbar(img, ax=ax[1])
ax[1].set_title("Wx")
plt.tight_layout() | _____no_output_____ | MIT | Module 2, Inversion-Doug.ipynb | lheagy/inversion-tutorial |
Stack Ws, Wx to make a single regularization matrix Wm | alpha_s = 1e-6
alpha_x = 1
Wm = np.vstack([
np.sqrt(alpha_s)*Ws,
np.sqrt(alpha_x)*Wx
])
print(Wm.shape) | (1999, 1000)
| MIT | Module 2, Inversion-Doug.ipynb | lheagy/inversion-tutorial |
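A short added illustration of the regularization value mentioned above, assuming the variables from the cells above; `mref_check` is just a constant reference model used for this check (the reference model actually used in the inversion is set in the next cell). | # Added check: evaluate phi_m for the true model relative to a constant reference model
mref_check = 0.5 * np.ones(n_cells)
phi_m_true = 0.5 * weighted_l2_norm(Wm, mtrue - mref_check)
print(f"phi_m(mtrue) relative to mref = 0.5: {phi_m_true:.4f}")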
Inverting for our recovered model At last we can invert to find our recovered model and see how it compares with the true model. First we assign a value for $\beta$. As with the $\alpha$ parameters from before, we simply pick a value here; the choice of beta is a topic that we explore more fully in the next module. Once our $\beta$ value is assigned, we solve the normal equations for the recovered model, plot it against our true model, and compare the predicted data with the observations; a short check of $\phi_d$ and $\phi_m$ follows the cell. | beta = 1e-1 # Set beta value
mref = 0.5 * np.ones(n_cells) # choose a reference model
WdG = Wd @ G
mrec = (
np.linalg.inv(WdG.T @ WdG + beta * Wm.T @ Wm) @
(WdG.T @ Wd @ dobs + beta * Wm.T @ Wm @ mref)
)
fig, ax = plt.subplots(1, 1)
ax.plot(xc, mtrue, label="true")
ax.plot(xc, mrec, label="recovered")
ax.legend()
dpred = G @ mrec
fig, ax = plt.subplots(1, 1)
ax.plot(dobs, '-o', label="observed")
ax.plot(dpred, '-s', label="predicted")
ax.legend() | _____no_output_____ | MIT | Module 2, Inversion-Doug.ipynb | lheagy/inversion-tutorial |
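As noted earlier, a sensible target is $\phi_d \approx N$. A short added check of the achieved misfit and regularization, assuming the variables defined above (remember that the default data here are noise-free, so a much smaller $\phi_d$ is attainable). | # Added check: data misfit and model norm of the recovered model
phi_d_rec = 0.5 * weighted_l2_norm(Wd, G @ mrec - dobs)
phi_m_rec = 0.5 * weighted_l2_norm(Wm, mrec - mref)
print(f"phi_d(mrec) = {phi_d_rec:.3f}  (target is roughly N = {n_data} for noisy data)")
print(f"phi_m(mrec) = {phi_m_rec:.4f}")
# decreasing beta fits the data more closely (smaller phi_d); increasing beta favours a smaller/smoother model (smaller phi_m)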
Speed Test | times = []
valid_mask_t = torch.from_numpy(np.ones([1,80,80,1]).astype(np.float32)).to(DEVICE)
for d_i in range(10):
_target = torch.from_numpy(d_trains[d_i].astype(np.float32)).to(DEVICE)
calibration_map = make_circle_masks(_target.size(0), map_size[0], map_size[1],
rmin=0.5, rmax=0.5)[..., None]
calibration_map = torch.from_numpy(calibration_map.astype(np.float32)).to(DEVICE)
x0 = np.repeat(seed[None, ...], _target.size(0), 0)*0
x0 = torch.from_numpy(x0.astype(np.float32)).to(DEVICE)
start_time = time.time()
x, history = test(x0, _target, valid_mask_t, calibration_map, N_STEPS)
times.append((time.time()-start_time)/_target.size(0))
print(times[-1])
print("---------")
print(np.mean(times)) | 0.03891327977180481
0.04005876183509827
0.04132132604718208
0.04402513429522514
0.04346586391329765
0.04147135838866234
0.03921307995915413
0.038483794778585434
0.04098214581608772
0.044003926217556
---------
0.04119386710226536
| MIT | 02_Traffic_info_test_2_hidden_12_pool_multi_location.ipynb | chenmingxiang110/NCA_Prediction |
Fit interpretable models to the training set and test on validation sets. | #%matplotlib inline
#%load_ext autoreload
#%autoreload 2
import os
import pickle as pkl
from os.path import join as oj
import numpy as np
import matplotlib.pyplot as plt
from sklearn import metrics
from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import AdaBoostClassifier
import imodels
from rulevetting.api import validation
from rulevetting.projects.csi_pecarn.dataset import Dataset
MODELS_DIR = './models'
os.makedirs(MODELS_DIR, exist_ok=True)
outcome_def = 'outcome' # output
def var_selection(df, method=['rfe', 10]): ## input: a dataframe with the outcome as the last column; method: ['rfe', number of
## features to keep] or ['lasso', C (inverse penalty strength)]. Output: a boolean index over the columns selecting the chosen features plus the outcome column
algo=method[0]
param=method[1]
X=df.drop(columns=['outcome'])
y=df.outcome
if algo=='rfe':
mymodel=LogisticRegression()
myrfe = RFE(mymodel,n_features_to_select=param)
myfit = myrfe.fit(X, y)
index=np.append(myfit.support_,True)
elif algo=='lasso':
mylasso = LogisticRegression(penalty='l1', solver='liblinear',C=param) ## for example C=0.1
myfit=mylasso.fit(X, y)
index=np.append(myfit.coef_[0]!=0,True)
return index
df_train, df_tune, _ = Dataset().get_data(load_csvs=True)
def predict_and_save(model, model_name='decision_tree'):
'''Plots cv and returns cv, saves all stats
'''
results = {'model': model}
for x, y, suffix in zip([X_train, X_tune],
[y_train, y_tune],
['_train', '_tune']):
stats, threshes = validation.all_stats_curve(y, model.predict_proba(x)[:, 1],
plot=suffix == '_tune')
for stat in stats.keys():
results[stat + suffix] = stats[stat]
results['threshes' + suffix] = threshes
pkl.dump(results, open(oj(MODELS_DIR, model_name + '.pkl'), 'wb'))
return stats, threshes
def model_valid(max_num=20,model_name='decision_tree'):
'''use validation set to select # of features'''
record=np.zeros(max_num)
sensitivity=np.zeros(max_num)
for num in range(1,max_num+1):
index=var_selection(df_train,method=['rfe',num])
loc_train=df_train.loc[:,index]
loc_tune=df_tune.loc[:,index]
loc_=_.loc[:,index]
X_train = loc_train.drop(columns=outcome_def)
y_train = loc_train[outcome_def].values
X_tune = loc_tune.drop(columns=outcome_def)
y_tune = loc_tune[outcome_def].values
if model_name=='decision_tree':
model = DecisionTreeClassifier(max_depth=4, class_weight={0: 1, 1: 1e3})
model.fit(X_train, y_train)
elif model_name=='logistic':
model= LogisticRegression()
model.fit(X_train, y_train)
elif model_name=='adaboost':
model= AdaBoostClassifier(n_estimators=50, learning_rate=1)
model.fit(X_train, y_train)
stats, threshes = validation.all_stats_curve(y_tune, model.predict_proba(X_tune)[:, 1],
plot=False)
sens=stats['sens']
spec=stats['spec']
if sens[0]<0.98:
record[num-1]=0.
sensitivity[num-1]=sens[0]
continue
j=0
while sens[j]>0.98:
#print([j, sens[j]], spec[j])
#print(sens[j])
cur_pec=spec[j]
j+=1
record[num-1]=cur_pec
sensitivity[num-1]=sens[j]
print(record)
print(sensitivity)
return np.argmax(record)+1 ## output the optimal number of features via validation
# print(model_valid(20,model_name='adaboost')) ## output zero when sens<.98, otherwise output spec (adaboost,decision_tree,logistic)
# print(model_valid(20,model_name='decision_tree')) ## output zero when sens<.98, otherwise output spec (adaboost,decision_tree,logistic)
# print(model_valid(30,model_name='logistic')) ## output zero when sens<.98, otherwise output spec (adaboost,decision_tree,logistic)
index=var_selection(df_train,method=['rfe',9])
print(df_train.columns[index])
df_train=df_train.loc[:,index]
df_tune=df_tune.loc[:,index]
_=_.loc[:,index]
X_train = df_train.drop(columns=outcome_def)
y_train = df_train[outcome_def].values
X_tune = df_tune.drop(columns=outcome_def)
y_tune = df_tune[outcome_def].values
processed_feats = df_train.keys().values.tolist()
feature_names=processed_feats | Index(['ArrPtIntub', 'DxCspineInjury', 'FocalNeuroFindings', 'HighriskDiving',
'IntervForCervicalStab', 'PtExtremityWeakness', 'PtSensoryLoss',
'PtTenderExt', 'SubInj_TorsoTrunk', 'outcome'],
dtype='object')
| MIT | rulevetting/projects/csi_pecarn/notebooks/fit_models_ll.ipynb | aashen12/rule-vetting |
fit simple models **decision tree** | # fit decision tree
dt = DecisionTreeClassifier(max_depth=4, class_weight={0: 1, 1: 1e3})
dt.fit(X_train, y_train)
stats, threshes = predict_and_save(dt, model_name='decision_tree')
print(stats,threshes)
plt.show()
plt.savefig("tree-roc.png", dpi='figure', format=None, metadata=None,
bbox_inches=None, pad_inches=0,
facecolor='auto', edgecolor='auto',
backend=None)
fig = plt.figure(figsize=(50, 40))
plot_tree(dt, feature_names=feature_names, filled=True)
plt.show()
# fit logistic regression
dt= LogisticRegression()
dt.fit(X_train, y_train)
stats_lr, threshes_lr = predict_and_save(dt, model_name='logistic')
print(stats_lr, "\n")
print(threshes_lr)
plt.show()
fig = plt.figure(figsize=(50, 40))
plt.show()
# fit adaboost
dt= AdaBoostClassifier(n_estimators=100, learning_rate=1)
dt.fit(X_train, y_train)
stats_ab, threshes_ab = predict_and_save(dt, model_name='adaboost')
print(stats_ab, "\n")
print(threshes_ab)
plt.show()
fig = plt.figure(figsize=(50, 40))
plt.show()
(np.asarray(stats_lr["sens"]) - np.asarray(stats_ab["sens"])) * 1000 | _____no_output_____ | MIT | rulevetting/projects/csi_pecarn/notebooks/fit_models_ll.ipynb | aashen12/rule-vetting |
**bayesian rule list (this one is slow)** | np.random.seed(13)
# train classifier (allow more iterations for better accuracy; use BigDataRuleListClassifier for large datasets)
print('training bayesian_rule_list...')
brl = imodels.BayesianRuleListClassifier(listlengthprior=2, max_iter=10000, class1label="IwI", verbose=False)
brl.fit(X_train, y_train, feature_names=feature_names)
stats, threshes = predict_and_save(brl, model_name='bayesian_rule_list')
print(brl)
print(brl) | Trained RuleListClassifier for detecting IwI
=============================================
IF IntervForCervicalStab > 0.5 THEN probability of IwI: 59.3% (54.8%-63.8%)
ELSE IF FocalNeuroFindings > 0.5 THEN probability of IwI: 15.8% (9.7%-23.0%)
ELSE IF DxCspineInjury > 0.5 THEN probability of IwI: 10.0% (5.1%-16.2%)
ELSE probability of IwI: 1.3% (0.8%-2.0%)
============================================
| MIT | rulevetting/projects/csi_pecarn/notebooks/fit_models_ll.ipynb | aashen12/rule-vetting |
**rulefit** | # fit a rulefit model
np.random.seed(13)
rulefit = imodels.RuleFitRegressor(max_rules=4)
rulefit.fit(X_train, y_train, feature_names=feature_names)
# preds = rulefit.predict(X_test)
stats, threshes = predict_and_save(rulefit, model_name='rulefit')
'''
def print_best(sens, spec):
idxs = np.array(sens) > 0.9
print(np.array(sens)[idxs], np.array(spec)[idxs])
print_best(sens, spec)
'''
# pd.reset_option('display.max_colwidth')
rulefit.visualize() | _____no_output_____ | MIT | rulevetting/projects/csi_pecarn/notebooks/fit_models_ll.ipynb | aashen12/rule-vetting |
**greedy (CART) rule list** | class_weight = {0: 1, 1: 100}
d = imodels.GreedyRuleListClassifier(max_depth=9, class_weight=class_weight, criterion='neg_corr')
d.fit(X_train, y_train, feature_names=feature_names, verbose=False)
stats, threshes = predict_and_save(d, model_name='grl')
# d.print_list()
print(d) | /Users/seunghoonpaik/Desktop/SH/Berkeley/Coursework/215A/Lab/final-proj/andy-github/rule-env/lib/python3.8/site-packages/numpy/lib/function_base.py:2691: RuntimeWarning: invalid value encountered in true_divide
c /= stddev[:, None]
/Users/seunghoonpaik/Desktop/SH/Berkeley/Coursework/215A/Lab/final-proj/andy-github/rule-env/lib/python3.8/site-packages/numpy/lib/function_base.py:2692: RuntimeWarning: invalid value encountered in true_divide
c /= stddev[None, :]
100%|██████████| 7/7 [00:00<00:00, 2065.43it/s]
100%|██████████| 6/6 [00:00<00:00, 1816.90it/s] | MIT | rulevetting/projects/csi_pecarn/notebooks/fit_models_ll.ipynb | aashen12/rule-vetting |
**rf** look at all the results | def plot_metrics(suffix, title=None, fs=15):
for fname in sorted(os.listdir(MODELS_DIR)):
if 'pkl' in fname:
if not fname[:-4] == 'rf':
r = pkl.load(open(oj(MODELS_DIR, fname), 'rb'))
# print(r)
# print(r.keys())
threshes = np.array(r['threshes' + suffix])
sens = np.array(r['sens' + suffix])
spec = np.array(r['spec' + suffix])
plt.plot(100 * sens, 100 * spec, 'o-', label=fname[:-4], alpha=0.6, markersize=3)
plt.xlabel('Sensitivity (%)', fontsize=fs)
plt.ylabel('Specificity (%)', fontsize=fs)
s = suffix[1:]
if title is None:
plt.title(f'{s}\n{data_sizes[s][0]} IAI-I / {data_sizes[s][1]}')
else:
plt.title(title, fontsize=fs)
# print best results
if suffix == '_test2':
idxs = (sens > 0.95) & (spec > 0.43)
if np.sum(idxs) > 0:
idx_max = np.argmax(spec[idxs])
print(fname, f'{100 * sens[idxs][idx_max]:0.2f} {100 * spec[idxs][idx_max]:0.2f}')
if suffix == '_test2':
plt.plot(96.77, 43.98, 'o', color='black', label='Original CDR', ms=4)
else:
plt.plot(97.0, 42.5, 'o', color='black', label='Original CDR', ms=4)
plt.grid()
suffixes = ['_train', '_tune'] # _train, _test1, _test2, _cv
titles = ['Train (PECARN)', 'Tune (PECARN)']
R, C = 1, len(suffixes)
plt.figure(dpi=200, figsize=(C * 2.5, R * 3), facecolor='w')
fs = 10
for i, suffix in enumerate(suffixes):
ax = plt.subplot(R, C, i + 1)
plot_metrics(suffix, title=titles[i], fs=fs)
if i > 0:
plt.ylabel('')
plt.yticks([0, 25, 50, 75, 100], labels=[''] * 5)
# ax.yaxis.set_visible(False)
plt.xlim((50, 101))
plt.ylim((0, 101))
plt.tight_layout()
# plt.subplot(R, C, 1)
# plt.legend(fontsize=20)
plt.legend(bbox_to_anchor=(1.1, 1), fontsize=fs, frameon=False)
# plt.savefig('figs/metrics_3_splits')
plt.show() | _____no_output_____ | MIT | rulevetting/projects/csi_pecarn/notebooks/fit_models_ll.ipynb | aashen12/rule-vetting |
**This notebook is an exercise in the [Introduction to Machine Learning](https://www.kaggle.com/learn/intro-to-machine-learning) course. You can reference the tutorial at [this link](https://www.kaggle.com/dansbecker/underfitting-and-overfitting).** --- Recap You've built your first model, and now it's time to optimize the size of the tree to make better predictions. Run this cell to set up your coding environment where the previous step left off. | # Code you have previously used to load data
import pandas as pd
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
# Path of the file to read
iowa_file_path = '../input/home-data-for-ml-course/train.csv'
home_data = pd.read_csv(iowa_file_path)
# Create target object and call it y
y = home_data.SalePrice
# Create X
features = ['LotArea', 'YearBuilt', '1stFlrSF', '2ndFlrSF', 'FullBath', 'BedroomAbvGr', 'TotRmsAbvGrd']
X = home_data[features]
# Split into validation and training data
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)
# Specify Model
iowa_model = DecisionTreeRegressor(random_state=1)
# Fit Model
iowa_model.fit(train_X, train_y)
# Make validation predictions and calculate mean absolute error
val_predictions = iowa_model.predict(val_X)
val_mae = mean_absolute_error(val_predictions, val_y)
print("Validation MAE: {:,.0f}".format(val_mae))
# Set up code checking
from learntools.core import binder
binder.bind(globals())
from learntools.machine_learning.ex5 import *
print("\nSetup complete") | _____no_output_____ | MIT | exercise-underfitting-and-overfitting.ipynb | gabboraron/Intro_to_Machine_Learning-Kaggle |
Exercises You could write the function `get_mae` yourself. For now, we'll supply it. This is the same function you read about in the previous lesson. Just run the cell below. | def get_mae(max_leaf_nodes, train_X, val_X, train_y, val_y):
model = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes, random_state=0)
model.fit(train_X, train_y)
preds_val = model.predict(val_X)
mae = mean_absolute_error(val_y, preds_val)
return(mae) | _____no_output_____ | MIT | exercise-underfitting-and-overfitting.ipynb | gabboraron/Intro_to_Machine_Learning-Kaggle |
Step 1: Compare Different Tree Sizes Write a loop that tries the following values for *max_leaf_nodes* from a set of possible values. Call the *get_mae* function on each value of max_leaf_nodes. Store the output in some way that allows you to select the value of `max_leaf_nodes` that gives the most accurate model on your data. | candidate_max_leaf_nodes = [5, 25, 50, 100, 250, 500]
# Write loop to find the ideal tree size from candidate_max_leaf_nodes
results = []
for max_leaf_nodes in candidate_max_leaf_nodes:
my_mae = get_mae(max_leaf_nodes, train_X, val_X, train_y, val_y)
print("Max leaf nodes: %d \t\t Mean Absolute Error: %d" %(max_leaf_nodes, my_mae))
results.append(my_mae)
best_value = min(results)
# Store the best value of max_leaf_nodes (it will be either 5, 25, 50, 100, 250 or 500)
best_tree_size = candidate_max_leaf_nodes[results.index(best_value)]
# Check your answer
step_1.check()
# The lines below will show you a hint or the solution.
# step_1.hint()
# step_1.solution() | _____no_output_____ | MIT | exercise-underfitting-and-overfitting.ipynb | gabboraron/Intro_to_Machine_Learning-Kaggle |
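An equivalent, more compact way to do Step 1 (added for illustration, not required by the exercise; `scores` is just a temporary dictionary) is to map each candidate size to its validation MAE and take the minimum. | # Added alternative: dictionary comprehension over the candidate sizes
scores = {leaf_size: get_mae(leaf_size, train_X, val_X, train_y, val_y)
          for leaf_size in candidate_max_leaf_nodes}
best_tree_size = min(scores, key=scores.get)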
Step 2: Fit Model Using All Data You know the best tree size. If you were going to deploy this model in practice, you would make it even more accurate by using all of the data and keeping that tree size. That is, you don't need to hold out the validation data now that you've made all your modeling decisions. | # Fill in argument to make optimal size and uncomment
final_model = DecisionTreeRegressor(max_leaf_nodes=best_tree_size, random_state=1)
# fit the final model and uncomment the next two lines
final_model.fit(X, y)
# Check your answer
step_2.check()
step_2.hint()
step_2.solution() | _____no_output_____ | MIT | exercise-underfitting-and-overfitting.ipynb | gabboraron/Intro_to_Machine_Learning-Kaggle |
Differentiable SVGTensor optimization Load a target SVG and apply the standard pre-processing. | svg = SVG.load_svg("docs/imgs/dolphin.svg").normalize().zoom(0.9).canonicalize().simplify_heuristic() | simplify
| MIT | notebooks/svgtensor.ipynb | GeorgeProjects/deepsvg |
Convert the SVG to the differentiable SVGTensor data-structure. | svg_target = SVGTensor.from_data(svg.to_tensor())
p_target = svg_target.sample_points()
plot_points(p_target, show_color=True) | _____no_output_____ | MIT | notebooks/svgtensor.ipynb | GeorgeProjects/deepsvg |
Create an arbitrary SVG whose Bézier parameters will be optimized to match the target shape. | circle = SVG.unit_circle().normalize().zoom(0.9).split(8) # split: 1/2/4/8
svg_pred = SVGTensor.from_data(circle.to_tensor()) | _____no_output_____ | MIT | notebooks/svgtensor.ipynb | GeorgeProjects/deepsvg |
SVGTensor makes it possible to sample points in a differentiable way, so that the loss can be backpropagated down to the SVG Bézier parameters. | p_pred = svg_pred.sample_points()
plot_points(p_pred, show_color=True)
svg_pred.control1.requires_grad_(True)
svg_pred.control2.requires_grad_(True)
svg_pred.end_pos.requires_grad_(True);
optimizer = optim.Adam([svg_pred.control1, svg_pred.control2, svg_pred.end_pos], lr=0.1) | _____no_output_____ | MIT | notebooks/svgtensor.ipynb | GeorgeProjects/deepsvg |
Write a standard gradient descent algorithm and observe the step-by-step optimization! | img_list = []
for i in range(150):
optimizer.zero_grad()
p_pred = svg_pred.sample_points()
l = svg_emd_loss(p_pred, p_target)
l.backward()
optimizer.step()
if i % 4 == 0:
img = svg_pred.draw(with_points=True, do_display=False, return_png=True)
img_list.append(img)
to_gif(img_list)
svg = SVG.load_svg("docs/imgs/dolphin.svg")
print(svg)
svg_tensor = svg.to_tensor()
print(svg_tensor) | SVG[Bbox(0.0 0.0 294.8680114746094 294.8680114746094)](
SVGPathGroup(SVGPath(M[P(0.0, 0.0), P(284.3949890136719, 115.12999725341797)] C[P(284.3949890136719, 115.12999725341797), P(280.5419921875, 119.21499633789062), P(274.864990234375, 119.21499633789062), P(272.9989929199219, 119.21499633789062)] C[P(272.9989929199219, 119.21499633789062), P(269.53900146484375, 119.21499633789062), P(265.260986328125, 118.64199829101562), P(259.1309814453125, 117.35599517822266)] C[P(259.1309814453125, 117.35599517822266), P(254.31597900390625, 116.34599304199219), P(250.16998291015625, 115.33899688720703), P(246.51397705078125, 114.45199584960938)] C[P(246.51397705078125, 114.45199584960938), P(239.3219757080078, 112.70499420166016), P(234.1239776611328, 111.4419937133789), P(229.23097229003906, 111.4419937133789)] C[P(229.23097229003906, 111.4419937133789), P(226.4729766845703, 111.4419937133789), P(221.44097900390625, 112.63699340820312), P(216.11196899414062, 113.90299224853516)] C[P(216.11196899414062, 113.90299224853516), P(207.81497192382812, 115.87399291992188), P(197.48997497558594, 118.32598876953125), P(187.5029754638672, 118.32598876953125)] C[P(187.5029754638672, 118.32598876953125), P(177.41998291015625, 118.32598876953125), P(166.80697631835938, 117.5419921875), P(160.11997985839844, 116.9369888305664)] C[P(160.11997985839844, 116.9369888305664), P(157.89898681640625, 125.26298522949219), P(152.61497497558594, 138.99298095703125), P(140.6209716796875, 148.82098388671875)] C[P(140.6209716796875, 148.82098388671875), P(124.31397247314453, 162.1809844970703), P(112.22396850585938, 168.1409912109375), P(111.71697235107422, 168.3879852294922)] C[P(111.71697235107422, 168.3879852294922), P(108.6199722290039, 169.89797973632812), P(104.8959732055664, 169.1069793701172), P(102.68296813964844, 166.4659881591797)] C[P(102.68296813964844, 166.4659881591797), P(100.469970703125, 163.82699584960938), P(100.33796691894531, 160.02098083496094), P(102.36396789550781, 157.23599243164062)] C[P(102.36396789550781, 157.23599243164062), P(102.43096923828125, 157.1409912109375), P(110.58097076416016, 145.6199951171875), P(110.2679672241211, 132.6719970703125)] C[P(110.2679672241211, 132.6719970703125), P(110.14396667480469, 127.52099609375), P(109.95697021484375, 123.48500061035156), P(109.7659683227539, 120.41099548339844)] C[P(109.7659683227539, 120.41099548339844), P(99.60897064208984, 123.40499877929688), P(82.31497192382812, 130.09799194335938), P(73.39396667480469, 142.69699096679688)] C[P(73.39396667480469, 142.69699096679688), P(59.880001068115234, 161.77999877929688), P(54.33599853515625, 191.16000366210938), P(55.82099914550781, 200.79800415039062)] C[P(55.82099914550781, 200.79800415039062), P(57.367000579833984, 210.83999633789062), P(57.551998138427734, 210.83999633789062), P(62.31699752807617, 210.83999633789062)] C[P(62.31699752807617, 210.83999633789062), P(72.6989974975586, 210.83999633789062), P(81.4209976196289, 211.26499938964844), P(91.2449951171875, 216.6129913330078)] C[P(91.2449951171875, 216.6129913330078), P(99.93499755859375, 221.3419952392578), P(104.72099304199219, 226.5159912109375), P(105.23699188232422, 227.0909881591797)] C[P(105.23699188232422, 227.0909881591797), P(107.18598937988281, 229.2559814453125), P(107.70499420166016, 232.35398864746094), P(106.5679931640625, 235.0349884033203)] C[P(106.5679931640625, 235.0349884033203), P(105.43099212646484, 237.71798706054688), P(102.843994140625, 239.49899291992188), P(99.93299102783203, 239.6029815673828)] C[P(99.93299102783203, 239.6029815673828), P(97.12499237060547, 
239.70797729492188), P(88.62799072265625, 240.41598510742188), P(83.1669921875, 242.39797973632812)] C[P(83.1669921875, 242.39797973632812), P(80.2669906616211, 243.4509735107422), P(77.76898956298828, 244.6829833984375), P(75.35298919677734, 245.87498474121094)] C[P(75.35298919677734, 245.87498474121094), P(71.48799133300781, 247.781982421875), P(67.83699035644531, 249.58297729492188), P(63.57798767089844, 250.0859832763672)] C[P(63.57798767089844, 250.0859832763672), P(62.192989349365234, 250.24998474121094), P(60.77098846435547, 250.32998657226562), P(59.42498779296875, 250.3049774169922)] C[P(59.42498779296875, 250.3049774169922), P(57.85498809814453, 253.9519805908203), P(55.182987213134766, 258.2989807128906), P(50.688987731933594, 261.74798583984375)] C[P(50.688987731933594, 261.74798583984375), P(47.24898910522461, 264.38897705078125), P(43.74898910522461, 266.0179748535156), P(40.660987854003906, 267.4539794921875)] C[P(40.660987854003906, 267.4539794921875), P(36.868988037109375, 269.218994140625), P(33.87298583984375, 270.61297607421875), P(31.572986602783203, 273.4649658203125)] C[P(31.572986602783203, 273.4649658203125), P(25.544986724853516, 280.9399719238281), P(23.521987915039062, 289.0449523925781), P(23.502986907958984, 289.1259765625)] C[P(23.502986907958984, 289.1259765625), P(22.71498680114746, 292.36297607421875), P(19.8809871673584, 294.708984375), P(16.55298614501953, 294.8599853515625)] C[P(16.55298614501953, 294.8599853515625), P(16.437986373901367, 294.8659973144531), P(16.321985244750977, 294.86798095703125), P(16.206985473632812, 294.86798095703125)] C[P(16.206985473632812, 294.86798095703125), P(13.015985488891602, 294.86798095703125), P(10.152985572814941, 292.8559875488281), P(9.11198616027832, 289.8119812011719)] C[P(9.11198616027832, 289.8119812011719), P(8.642986297607422, 288.4449768066406), P(4.689986228942871, 275.9159851074219), P(9.9509859085083, 257.57696533203125)] C[P(9.9509859085083, 257.57696533203125), P(11.831985473632812, 251.02096557617188), P(14.745985984802246, 245.34596252441406), P(17.562986373901367, 239.85595703125)] C[P(17.562986373901367, 239.85595703125), P(22.14698600769043, 230.9269561767578), P(26.105987548828125, 223.21595764160156), P(24.44498634338379, 213.76695251464844)] C[P(24.44498634338379, 213.76695251464844), P(16.636985778808594, 169.35595703125), P(17.175987243652344, 133.24594116210938), P(26.04598617553711, 106.44395446777344)] C[P(26.04598617553711, 106.44395446777344), P(39.8380012512207, 64.76000213623047), P(73.22599792480469, 41.53499984741211), P(79.77899932861328, 37.30400085449219)] C[P(79.77899932861328, 37.30400085449219), P(83.02999877929688, 35.202999114990234), P(85.95600128173828, 33.35499954223633), P(88.4219970703125, 31.819000244140625)] C[P(88.4219970703125, 31.819000244140625), P(86.13499450683594, 29.996999740600586), P(83.22799682617188, 28.08300018310547), P(79.7719955444336, 26.61400032043457)] C[P(79.7719955444336, 26.61400032043457), P(71.90599822998047, 23.270000457763672), P(68.5999984741211, 22.356000900268555), P(67.68399810791016, 22.13800048828125)] C[P(67.68399810791016, 22.13800048828125), P(67.625, 22.14000129699707), P(67.56800079345703, 22.14000129699707), P(67.51100158691406, 22.14000129699707)] C[P(67.51100158691406, 22.14000129699707), P(64.24600219726562, 22.141000747680664), P(61.736000061035156, 19.985000610351562), P(60.781002044677734, 16.801000595092773)] C[P(60.781002044677734, 16.801000595092773), P(59.78900146484375, 13.495000839233398), P(61.63200378417969, 
9.946001052856445), P(64.6030044555664, 8.191000938415527)] C[P(64.6030044555664, 8.191000938415527), P(65.16799926757812, 7.855999946594238), P(78.68099975585938, 0.0), P(96.48999786376953, 0.0)] C[P(96.48999786376953, 0.0), P(100.16500091552734, 0.0), P(103.81599426269531, 0.3370000123977661), P(107.34099578857422, 1.0019999742507935)] C[P(107.34099578857422, 1.0019999742507935), P(118.3239974975586, 3.0749998092651367), P(126.08599853515625, 6.171999931335449), P(133.5919952392578, 9.165999412536621)] C[P(133.5919952392578, 9.165999412536621), P(140.92098999023438, 12.089999198913574), P(147.843994140625, 14.851999282836914), P(158.17999267578125, 17.02899932861328)] C[P(158.17999267578125, 17.02899932861328), P(163.20098876953125, 18.086999893188477), P(167.8249969482422, 18.902999877929688), P(172.29598999023438, 19.6929988861084)] C[P(172.29598999023438, 19.6929988861084), P(187.67898559570312, 22.408998489379883), P(200.9639892578125, 24.7549991607666), P(220.82699584960938, 34.89799880981445)] C[P(220.82699584960938, 34.89799880981445), P(246.75099182128906, 48.13800048828125), P(261.3280029296875, 62.624000549316406), P(261.8739929199219, 75.68499755859375)] C[P(261.8739929199219, 75.68499755859375), P(261.9889831542969, 78.43799591064453), P(261.9289855957031, 80.89799499511719), P(261.6929931640625, 83.05999755859375)] C[P(261.6929931640625, 83.05999755859375), P(264.1189880371094, 84.27299499511719), P(267.0669860839844, 85.80599975585938), P(270.04498291015625, 87.5)] C[P(270.04498291015625, 87.5), P(282.16796875, 94.3949966430664), P(287.2519836425781, 99.55899810791016), P(287.5909729003906, 105.3219985961914)] C[P(287.5909729003906, 105.3219985961914), P(287.82598876953125, 109.33200073242188), P(286.7510070800781, 112.63099670410156), P(284.3949890136719, 115.12999725341797)] Z[P(284.3949890136719, 115.12999725341797), P(284.3949890136719, 115.12999725341797)]))
)
tensor([[ 0.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000, 0.0000,
0.0000, -1.0000, -1.0000, -1.0000, -1.0000, 284.3950, 115.1300],
[ 2.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000, 284.3950,
115.1300, 280.5420, 119.2150, 274.8650, 119.2150, 272.9990, 119.2150],
[ 2.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000, 272.9990,
119.2150, 269.5390, 119.2150, 265.2610, 118.6420, 259.1310, 117.3560],
[ 2.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000, 259.1310,
117.3560, 254.3160, 116.3460, 250.1700, 115.3390, 246.5140, 114.4520],
[ 2.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000, 246.5140,
114.4520, 239.3220, 112.7050, 234.1240, 111.4420, 229.2310, 111.4420],
[ 2.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000, 229.2310,
111.4420, 226.4730, 111.4420, 221.4410, 112.6370, 216.1120, 113.9030],
[ 2.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000, 216.1120,
113.9030, 207.8150, 115.8740, 197.4900, 118.3260, 187.5030, 118.3260],
[ 2.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000, 187.5030,
118.3260, 177.4200, 118.3260, 166.8070, 117.5420, 160.1200, 116.9370],
[ 2.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000, 160.1200,
116.9370, 157.8990, 125.2630, 152.6150, 138.9930, 140.6210, 148.8210],
[ 2.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000, 140.6210,
148.8210, 124.3140, 162.1810, 112.2240, 168.1410, 111.7170, 168.3880],
[ 2.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000, 111.7170,
168.3880, 108.6200, 169.8980, 104.8960, 169.1070, 102.6830, 166.4660],
[ 2.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000, 102.6830,
166.4660, 100.4700, 163.8270, 100.3380, 160.0210, 102.3640, 157.2360],
[ 2.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000, 102.3640,
157.2360, 102.4310, 157.1410, 110.5810, 145.6200, 110.2680, 132.6720],
[ 2.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000, 110.2680,
132.6720, 110.1440, 127.5210, 109.9570, 123.4850, 109.7660, 120.4110],
[ 2.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000, 109.7660,
120.4110, 99.6090, 123.4050, 82.3150, 130.0980, 73.3940, 142.6970],
[ 2.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000, 73.3940,
142.6970, 59.8800, 161.7800, 54.3360, 191.1600, 55.8210, 200.7980],
[truncated cell output: the remaining rows of the SVG command tensor, each row holding a command index, five -1 placeholder slots, and eight curve coordinate values]
| MIT | notebooks/svgtensor.ipynb | GeorgeProjects/deepsvg |
Table of Contents: 1 Intro; 2 Load Data; 3 Cyclical Feeding; 3.1 TODOs; 4 Image Sharpening; 5 Source Data FaceSwap and Upscaling; 6 Celeba Test. Intro: Notebook exploring random experiments around the use of the trained Faceswap generators. | import numpy as np
import pandas as pd
import seaborn as sns
from PIL import Image
import matplotlib.pyplot as plt
from pathlib import Path
import sys
import pickle
import yaml
from numpy.random import shuffle
from ast import literal_eval
import tensorflow as tf
import cv2
from tqdm import tqdm
# Plotting
%matplotlib notebook
#%matplotlib inline
sns.set_context("paper")
sns.set_style("dark")
sys.path.append('../face_swap')
from utils import image_processing
from utils import super_resolution
from face_swap.deep_swap import swap_faces, Swapper
from face_swap import faceswap_utils as utils
from face_swap.plot_utils import stack_images
from face_swap import FaceGenerator, FaceDetector
from face_swap.train import get_original_data
from face_swap import gan, gan_utils
from face_swap import CONFIG_PATH
from face_swap.Face import Face
%load_ext autoreload
%autoreload 2
data_folder = Path.home() / "Documents/datasets/"
models_folder = Path.home() / "Documents/models/" | _____no_output_____ | Apache-2.0 | notebooks/Creative Experiments.ipynb | 5agado/face-swap |
Load Data | # Load two random celeba faces
from_face_img = cv2.cvtColor(cv2.imread(str(data_folder / "img_align_celeba" /
"000{}{}{}.jpg".format(*np.random.randint(0, 9, 3)))),
cv2.COLOR_BGR2RGB)
to_face_img = cv2.cvtColor(cv2.imread(str(data_folder / "img_align_celeba" /
"000{}{}{}.jpg".format(*np.random.randint(0, 9, 3)))),
cv2.COLOR_BGR2RGB)
plt.imshow(from_face_img)
plt.show()
plt.imshow(to_face_img)
plt.show() | _____no_output_____ | Apache-2.0 | notebooks/Creative Experiments.ipynb | 5agado/face-swap |
Cyclical Feeding: cyclically feed the generator its own output. Can start from an actual face or from random noise. TODOs: * Try applying text to the image before feeding it back to the generator (a hedged sketch of this idea follows the feedback loop below). | def crop(img, crop_factor=0.2):
h, w = img.shape[:2]
h_crop = int((h * crop_factor)//2)
w_crop = int((w * crop_factor)//2)
return img[h_crop:h-h_crop, w_crop:w-w_crop]
def zoom(img, zoom_factor=1.5):
h, w = img.shape[:2]
mat = cv2.getRotationMatrix2D((w//2, h//2), 0, zoom_factor)
#mat[:, 2] -= (w//2, h//2)
result = cv2.warpAffine(img, mat, (w, h), borderMode=cv2.BORDER_REPLICATE)
return result
# load config
with open(CONFIG_PATH, 'r') as ymlfile:
cfg = yaml.load(ymlfile)
model_cfg = cfg['masked_gan']['v1']
# load generator and related functions
gen_a, gen_b, _, _ = gan.get_gan(model_cfg, load_discriminators=False)
_, _, _, fun_generate_a, fun_mask_a, fun_abgr_a = gan_utils.cycle_variables_masked(gen_a)
_, _, _, fun_generate_b, fun_mask_b, fun_abgr_b = gan_utils.cycle_variables_masked(gen_b)
gen_fun_a = lambda x: fun_abgr_a([np.expand_dims(x, 0)])[0][0]
gen_fun_b = lambda x: fun_abgr_b([np.expand_dims(x, 0)])[0][0]
generator_a = FaceGenerator.FaceGenerator(
lambda face_img: FaceGenerator.gan_masked_generate_face(gen_fun_a, face_img),
input_size=(64, 64), tanh_fix=True)
generator_b = FaceGenerator.FaceGenerator(
lambda face_img: FaceGenerator.gan_masked_generate_face(gen_fun_b, face_img),
input_size=(64, 64), tanh_fix=True)
gen_input = Face(img, img)  # 'img' is assumed to hold a previously loaded face image (e.g. one of the celeba faces above)
use_a = True
generator = generator_a if use_a else generator_b
for i in range(500):
out = get_hr_version(sr_model, generator.generate(gen_input, (64, 64))[0])  # get_hr_version / sr_model: super-resolution helpers assumed to be defined elsewhere in the notebook
#out = generator.generate(gen_input, (128, 128))[0]
gen_input.face_img = FaceGenerator.random_transform(out, **cfg['random_transform'])
#gen_input.img = zoom(out)
res_path = str(data_folder / 'faceswap_experiments/cycle_feed/02/_{:04d}.png'.format(i))
#cv2.imwrite(res_path, zoom(out))
cv2.imwrite(res_path, out)
# swap generator randomly every epoch
#generator = generator_a if np.random.rand() > 0.5 else generator_b
# swap generator every N epoch
if i%50 == 0:
use_a = not use_a
generator = generator_a if use_a else generator_b | _____no_output_____ | Apache-2.0 | notebooks/Creative Experiments.ipynb | 5agado/face-swap |
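A possible take on the TODO above (not from the original notebook): overlay some text on each generated frame before feeding it back, so the generator has to keep absorbing the overlay. A minimal sketch assuming the loop variables (out, gen_input, i) and cfg from the cells above; the label text, position, font and scale are arbitrary choices. | overlay = out.copy()
cv2.putText(overlay, 'step %04d' % i, (4, overlay.shape[0] - 6), cv2.FONT_HERSHEY_SIMPLEX, 0.4, (255, 255, 255), 1)  # hypothetical label burned into the frame
gen_input.face_img = FaceGenerator.random_transform(overlay, **cfg['random_transform'])  # feed the annotated frame back, as in the loop above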
Image Sharpening | # adapted from https://github.com/AdityaPokharel/Sharpen-Image
regular_kernel = np.array([[-1,-1,-1], [-1,9,-1], [-1,-1,-1]])
edge_enhance_kernel = np.array([[-1,-1,-1,-1,-1],
[-1,2,2,2,-1],
[-1,2,8,2,-1],
[-1,2,2,2,-1],  # corrected from -2: the edge-enhance kernel is normally symmetric (entries sum to 8 before the /8 normalisation)
[-1,-1,-1,-1,-1]])/8.0
def sharpen(img, kernel=regular_kernel):
# apply kernel to input image
res = cv2.filter2D(img, -1, kernel)
return res
# see also cv2.detailEnhance(src, sigma_s=10, sigma_r=0.15)
plt.imshow(sharpen(to_face_img))
plt.show() | _____no_output_____ | Apache-2.0 | notebooks/Creative Experiments.ipynb | 5agado/face-swap |
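As an aside (not in the original notebook): unsharp masking is a common alternative to the fixed convolution kernels above. A minimal sketch using only standard OpenCV calls; the blur sigma and blend weights are arbitrary. | blurred = cv2.GaussianBlur(to_face_img, (0, 0), 3)
unsharp = cv2.addWeighted(to_face_img, 1.5, blurred, -0.5, 0)  # boost the original, subtract the blurred copy
plt.imshow(unsharp)
plt.show()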
Source Data FaceSwap and UpscalingTry to cherry pick some results of face-swapping on the training data, apply upscaling to a reasonable size (e.g. 128x128) and any possible post-processing that might help in improving image quality. | input_path = data_folder / "facesets" / "cage"
out_path = data_folder / "faceswap_experiments" / "source_faceswap" / "cage_trump"
out_size = (64, 64)
# collected all image paths
img_paths = image_processing.get_imgs_paths(input_path, as_str=False)
# iterate over all collected image paths
for i, img_path in enumerate(img_paths):
img = cv2.imread(str(img_path))
gen_input = Face(img, img)
gen_face = generator_b.generate(gen_input)[0]
gen_face = sharpen(gen_face)
gen_face = cv2.resize(gen_face, out_size)
cv2.imwrite(str(out_path / "out_{:04d}.jpg".format(i)),
gen_face) | _____no_output_____ | Apache-2.0 | notebooks/Creative Experiments.ipynb | 5agado/face-swap |
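The markdown above mentions upscaling to around 128x128, while the loop writes 64x64 crops. A hedged sketch of a plain bicubic upscale as an extra post-processing step (the super-resolution model used later in this notebook would be the higher-quality option); gen_face and out_path are the variables from the loop above, and the output filename is made up for illustration. | upscaled = cv2.resize(gen_face, (128, 128), interpolation=cv2.INTER_CUBIC)
cv2.imwrite(str(out_path / "out_last_upscaled.jpg"), upscaled)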
Celeba Test: test Celeba training and generation of artworks. | def plot_sample(images: list, predict_fun,
tanh_fix=False, save_to: str=None,
nb_test_imgs=14, nb_columns=3, white_border=3):
# need number of images divisible by number of columns
nb_rows = nb_test_imgs//nb_columns
assert nb_test_imgs % nb_columns == 0
images = images[0:nb_test_imgs]
figure = np.stack([
images,
predict_fun(images),
], axis=1)
# we split images on two columns
figure = figure.reshape((nb_columns, nb_rows) + figure.shape[1:])
figure = stack_images(figure)
img_width = images[0].shape[1]
img_height = images[0].shape[0]
for i in range(1, nb_columns):
x = img_width*2*i
figure[:, x-white_border:x+white_border, :] = 255.0
for i in range(1, nb_rows):
y = img_height*i
figure[y-white_border:y+white_border, :, :] = 255.0
if save_to:
cv2.imwrite(save_to, figure)
else:
figure = cv2.cvtColor(figure, cv2.COLOR_BGR2RGB)
#plt.imshow(figure)
#plt.show()
display(Image.fromarray(figure))
# crashes in notebooks
#cv2.imshow('', figure)
#cv2.waitKey(0)
# load config
with open(CONFIG_PATH, 'r') as ymlfile:
cfg = yaml.load(ymlfile)
model_cfg = cfg['masked_gan']['v1']
model_cfg['models_path'] = str(models_folder / "face_recognition/deep_faceswap/masked_gan/cage_celeba/v4")
#tf.reset_default_graph()
face_detector = FaceDetector.FaceDetector(cfg)
# load generator and related functions
netGA, netGB, _, _ = gan.get_gan(model_cfg, load_discriminators=False)
# define generation and plotting function
# depending if using masked gan model or not
if model_cfg['masked']:
distorted_A, fake_A, mask_A, path_A, fun_mask_A, fun_abgr_A = gan_utils.cycle_variables_masked(netGA)
distorted_B, fake_B, mask_B, path_B, fun_mask_B, fun_abgr_B = gan_utils.cycle_variables_masked(netGB)
#gen_plot_a = lambda x: np.array(path_A([x])[0])
#gen_plot_b = lambda x: np.array(path_B([x])[0])
gen_plot_a = lambda x: np.array(fun_abgr_A([x])[0][ :, :, :, 1:])
gen_plot_b = lambda x: np.array(fun_abgr_B([x])[0][ :, :, :, 1:])
gen_plot_mask_a = lambda x: np.array(fun_mask_A([x])[0])*2-1
gen_plot_mask_b = lambda x: np.array(fun_mask_B([x])[0])*2-1
else:
gen_plot_a = lambda x: netGA.predict(x)
gen_plot_b = lambda x: netGB.predict(x)
sr_model = super_resolution.get_SRResNet(cfg['super_resolution'])
resize_fun = lambda img, size: FaceGenerator.super_resolution_resizing(sr_model, img, size)
gen_fun_a = lambda x: fun_abgr_A([np.expand_dims(x, 0)])[0][0]
gen_fun_b = lambda x: fun_abgr_B([np.expand_dims(x, 0)])[0][0]
gen_input_size = literal_eval(model_cfg['img_shape'])[:2]
face_generator = FaceGenerator.FaceGenerator(
lambda face_img: FaceGenerator.gan_masked_generate_face(gen_fun_a, face_img),
input_size=gen_input_size, config=cfg['swap'], resize_fun=resize_fun)
swapper = Swapper(face_detector, face_generator, cfg['swap'], save_all=True)
def swap(img):
face = Face(img.copy(), Face.Rectangle(0, 64, 64, 0))
#return swap_faces(face, face_detector, cfg['swap'], face_generator)
return face.get_face_img()
#gen_plot_b = lambda x: [swap(img) for img in x]
gen_plot = lambda x: [swapper.swap(img) for img in x]
img_dir_a = data_folder / 'facesets/cage'
img_dir_b = data_folder / 'celeba_tmp'
#images_a, images_b = get_original_data(img_dir_a, img_dir_b, img_size=None, tanh_fix=False)
images = image_processing.load_data(image_processing.get_imgs_paths(img_dir_a), (128, 128))
dest_folder = str(data_folder / "faceswap_experiments/source_faceswap/cage_celeba_masked/test_1/_{}.png")
swapper.config['mask_method'] = "gen_mask"
face_generator.border_expand = (0.1, 0.1)
face_generator.blur_size = 13
face_generator.align = False
#shuffle(images)
for i in range(20):
print(i)
images_subset = images[i*15:(i+1)*15]
try:
plot_sample(images_subset, gen_plot, nb_test_imgs=15, nb_columns=3,
save_to=dest_folder.format(i), tanh_fix=False)
except FaceDetector.FaceSwapException:
pass | _____no_output_____ | Apache-2.0 | notebooks/Creative Experiments.ipynb | 5agado/face-swap |
In this notebook, we show the dynamical relaxation time. Init | from __future__ import division
%load_ext autoreload
%autoreload 2
import sys,os
sys.path.insert(1, os.path.join(sys.path[0], '..'))
from matplotlib import rcParams, rc
import spc
import model
import chi2
import margin
import tools as tl
import numpy as np
import matplotlib
%matplotlib notebook
import matplotlib.pyplot as plt
from scipy.integrate import quad
import h5py
import glob
import re
import scan
import pickle
import glob
from multiprocessing import Pool
from contextlib import closing
from matplotlib import cm
from tqdm import tqdm
plt.rcParams.update({'font.size': 12})
path = '../data/SPARC.txt'
data = spc.readSPARC(path)
path = '../data/SPARC_Lelli2016c.txt'
spc.readSPARC_ext(data, path)
data2 = {}
for gal in data:
data2[gal.name] = gal | _____no_output_____ | MIT | notebooks/3_demo_dynamical_relaxation_time.ipynb | ChenSun-Phys/ULDM_x_SPARC |
Functions moved to the corresponding .py file | # def model.tau(f, m, v=57., rho=0.003):
# """ relaxation time computation [Gyr]
# :param f: fraction
# :param m: scalar mass [eV]
# :param v: dispersion [km/s]
# :param rho: DM density [Msun/pc**3]
# """
# return 0.6 * 1./f**2 * (m/(1.e-22))**3 * (v/100)**6 * (rho/0.1)**(-2)
# model.tau(0.2, 1e-22, 100, 0.1)
# def reconstruct_density(gal, flg_give_R=False):
# """ reconstruct the local density based on the rotaion curve
# """
# V = gal.Vobs
# r = gal.R
# M_unit = 232501.397985234 # Msun computed with km/s, kpc
# M = V**2 * r * M_unit
# r_mid = (r[1:] + r[:-1]) /2.
# dr = r[1:] - r[:-1]
# rho = (M[1:] - M[:-1]) / 4./np.pi/r_mid**2 / dr /1e9 #[Msun/pc**3]
# if flg_give_R:
# return (r_mid, rho)
# else:
# return rho | _____no_output_____ | MIT | notebooks/3_demo_dynamical_relaxation_time.ipynb | ChenSun-Phys/ULDM_x_SPARC |
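A quick numeric check of the commented-out scaling relation above, tau ~ 0.6 Gyr * (1/f^2) * (m/1e-22 eV)^3 * (v/100 km/s)^6 * (rho/0.1 Msun/pc^3)^-2, written inline so it does not depend on model.tau; the sample values are the same fiducial ones used in the commented call. | f, m, v, rho = 0.2, 1e-22, 100., 0.1
tau_gyr = 0.6 / f**2 * (m / 1.e-22)**3 * (v / 100.)**6 * (rho / 0.1)**(-2)
print(tau_gyr)  # 15.0 Gyr: the f = 0.2 subcomponent relaxes 1/f^2 = 25x slower than a pure m = 1e-22 eV halo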
Check the data | #gal = data2['UGC01281']
gal = data2['UGC04325']
print(gal.Vobs[-1])
model.reconstruct_density_DM(gal)
plt.subplots()
plt.plot(gal.R, gal.Vobs, '.')
plt.xlabel('R [kpc]')
plt.ylabel(r'$v$ km/s')
fn, _, _ = model.reconstruct_density_DM(gal)
plt.subplots()
r_arr = np.geomspace(gal.R[0], gal.R[-1])  # geomspace, not logspace: np.logspace would treat the radii themselves as base-10 exponents
plt.plot(r_arr, fn(r_arr), '.')
plt.xscale('log')
plt.yscale('log')
plt.xlabel('R [kpc]')
plt.ylabel(r'$\rho$ [M$_\odot$/pc$^3$]')
plt.tight_layout()
vf_arr = []
rhof_arr = []
for gal in data:
v_f = gal.Vobs[-1]
vf_arr.append(v_f)
fn,_,_ = model.reconstruct_density_DM(gal)
rhof_arr.append(fn(gal.R[-1]))
plt.subplots()
plt.plot(vf_arr, 'k.')
plt.ylim(0, 400)
plt.xlabel('Galaxy ID')
plt.ylabel('V [km/s]')
plt.title('End velocity of the rotation curve')
plt.subplots()
plt.plot(rhof_arr, 'k.')
plt.yscale('log')
plt.xlabel('Galaxy ID')
plt.ylabel(r'$\rho$ [M$_\odot$/pc$^3$]')
plt.title('Density at the end of the rotation curve')
plt.subplots()
plt.title("Scattering of rotation velocity")
plt.xlabel('R [kpc]')
plt.ylabel('V [km/s]')
for name, gal in data2.items():
plt.plot(gal.R, gal.Vobs, lw='0.8') | _____no_output_____ | MIT | notebooks/3_demo_dynamical_relaxation_time.ipynb | ChenSun-Phys/ULDM_x_SPARC |
Relaxation time at last data point | f1 = 0.85
f2 = 0.15
m1_arr = np.logspace(-25, -19, 100)
m2_arr = np.logspace(-25, -19, 100)
m1_mesh, m2_mesh = np.meshgrid(m1_arr, m2_arr, indexing='ij')
m1_flat, m2_flat = m1_mesh.reshape(-1), m2_mesh.reshape(-1)
gal = data2['UGC04325']
tau1_flat = []
tau1_self_flat = []
for i in range(len(m1_flat)):
m1 = m1_flat[i]
m2 = m2_flat[i]
R = gal.R[-1]
sigma = model.sigma_disp_over_vcirc(gal, gal.R[-1]) * gal.Vobs[-1]
rho_fn, _, _ = model.reconstruct_density_DM(gal)
rho = rho_fn(gal.R[-1])
cut_log=True
tau1 = 1./(1./model.tau(f1, m1, sigma, rho, R, cut_log=cut_log) + 1./model.tau(f2, m2, sigma, rho, R, cut_log=cut_log))
tau1_self = model.tau(f1, m1, sigma, rho, R, cut_log=cut_log)
tau1_flat.append(tau1)
tau1_self_flat.append(tau1_self)
tau1_flat = np.asarray(tau1_flat)
tau1_self_flat = np.asarray(tau1_self_flat)
tau1_mesh = tau1_flat.reshape(m1_mesh.shape)
tau1_self_mesh = tau1_self_flat.reshape(m1_mesh.shape)
_, ax = plt.subplots()
plt.contourf(m1_mesh, m2_mesh, tau1_mesh, levels=[10, np.inf], colors='lightblue')
plt.contour(m1_mesh, m2_mesh, tau1_self_mesh, levels=[10], linestyles={'dashed'})
plt.fill_betweenx(np.logspace(-25, -19), 1e-25, 2.66e-21, color='salmon', alpha=0.5, zorder=0)
#label
plt.text(4e-23, 1e-20, r"Lyman-$\alpha$ constraints", color='red', fontsize=14, rotation=90)
plt.text(1e-21, 1e-20, r"$\tau$ > 10 Gyr", color='blue', fontsize=14)
plt.text(8e-23, 1e-24, r"Coulomb log breaks for $m_2$", color='blue', fontsize=14)
plt.text(1e-25, 1e-24, r"Coulomb, $m_1$ and $m_2$", color='blue', fontsize=14)
plt.text(3e-25, 1e-20, r"Coulomb, $m_1$", color='blue', fontsize=14)
plt.xscale('log')
plt.yscale('log')
plt.xlabel('$m_1$ [eV], 85% of total mass')
plt.ylabel('$m_2$ [eV], 15% of total mass')
plt.xlim(1e-25, 1e-19)
plt.ylim(1e-25, 1e-19)
plt.title(gal.name)  # use the selected galaxy's name; the hard-coded "UGC 1281" did not match gal = data2['UGC04325'] above
ax.set_aspect(aspect=0.618)
plt.tight_layout()
#plt.savefig('./sol_relaxation_contour.pdf')
# check relaxation time at the last data point
gal
#f1 = 0.85
f1 = 1.
m1_target_arr = []
vf_arr = []
rhof_arr = []
m1_arr = np.logspace(-25, -19, 100)
for gal in data:
fn, _, _ = model.reconstruct_density_DM(gal) # last data point is selected
rho_f = fn(gal.R[-1])
v_f = gal.Vobs[-1] # last data point
vf_arr.append(v_f)
rhof_arr.append(rho_f)
tau1_self_arr = []
for m1 in m1_arr:
R = gal.R[-1]
sigma = model.sigma_disp_over_vcirc(gal, gal.R[-1]) * gal.Vobs[-1]
cut_log=True
tau1_self = model.tau(f1, m1, sigma=sigma, rho=rho_f, R=gal.R[-1], cut_log=cut_log)
tau1_self_arr.append(tau1_self)
tau1_self_arr = np.asarray(tau1_self_arr)
#print(tau1_self_arr)
mask = np.where(tau1_self_arr < 1000, True, False)
#print(mask)
if sum(mask) > 0:
m1_target = np.exp(np.interp(np.log(10), np.log(tau1_self_arr[mask]), np.log(m1_arr[mask])))
m1_target_arr.append(m1_target) | _____no_output_____ | MIT | notebooks/3_demo_dynamical_relaxation_time.ipynb | ChenSun-Phys/ULDM_x_SPARC |
This is the result with Coulomb log > 1. | plt.subplots()
plt.plot(m1_target_arr, 'k.')
plt.yscale('log')
plt.ylim(1e-25, 1e-19)
plt.xlabel('Galaxy ID')
plt.ylabel('m [eV]')
plt.title('Dynamical relaxation time set to 10 Gyr')
_, ax = plt.subplots()
plt.fill_betweenx(np.logspace(-25, -19), 1e-25, 2.66e-21, color='salmon', alpha=0.5, zorder=0)
f1 = 0.85
f2 = 0.15
m1_arr = np.logspace(-25, -19, 50)
m2_arr = np.logspace(-25, -19, 50)
m1_mesh, m2_mesh = np.meshgrid(m1_arr, m2_arr, indexing='ij')
m1_flat, m2_flat = m1_mesh.reshape(-1), m2_mesh.reshape(-1)
for gal in data:
fn, _, _ = model.reconstruct_density_DM(gal) # last data point is selected
rho_f = fn(gal.R[-1])
v_f = gal.Vobs[-1] # last data point
tau1_flat = []
tau1_self_flat = []
for i in range(len(m1_flat)):
R = gal.R[-1]
sigma = model.sigma_disp_over_vcirc(gal, gal.R[-1]) * gal.Vobs[-1]
cut_log=True
m1 = m1_flat[i]
m2 = m2_flat[i]
tau1 = 1./(1./model.tau(f1,
m1,
sigma=sigma,
rho=rho_f,
R=R,
cut_log=cut_log) +
1./model.tau(f2,
m2,
sigma=sigma,
rho=rho_f,
R=R,
cut_log=cut_log))
tau1_self = model.tau(f1,
m1,
sigma=sigma,
rho=rho_f,
R=R,
cut_log=cut_log)
tau1_flat.append(tau1)
tau1_self_flat.append(tau1_self)
tau1_flat = np.asarray(tau1_flat)
tau1_self_flat = np.asarray(tau1_self_flat)
tau1_mesh = tau1_flat.reshape(m1_mesh.shape)
tau1_self_mesh = tau1_self_flat.reshape(m1_mesh.shape)
plt.contour(m1_mesh, m2_mesh, tau1_mesh, levels=[10], colors='lightblue')
#label
plt.text(1e-24, 1e-24, r"Lyman-$\alpha$ constraints", color='red', fontsize=14)
plt.text(1e-21, 1e-20, r"$\tau$ > 10 Gyr", color='blue', fontsize=14)
plt.xscale('log')
plt.yscale('log')
plt.xlabel('$m_1$ [eV], 85% of total mass')
plt.ylabel('$m_2$ [eV], 15% of total mass')
plt.xlim(8e-26, 1e-19)
plt.ylim(8e-26, 1e-19)
ax.set_aspect(aspect=0.618)
plt.tight_layout()
#plt.savefig('./sol_relaxation_contour.pdf') | _____no_output_____ | MIT | notebooks/3_demo_dynamical_relaxation_time.ipynb | ChenSun-Phys/ULDM_x_SPARC |
Change the fraction | #gal = data2['NGC0100']
gal = data2['UGC04325']
#gal = data2['UGC01281']
#gal = data2['NGC3769']
#gal = data2['NGC3877']
#gal = data2['NGC6503']
m2 = 1.e-23 # [eV]
#f2 = 0.15
m1_arr = np.logspace(-25.2, -18.8, 50)
f1_arr = np.linspace(0., 1., 50)
m1_mesh, f1_mesh = np.meshgrid(m1_arr, f1_arr, indexing='ij')
m1_flat, f1_flat = m1_mesh.reshape(-1), f1_mesh.reshape(-1)
tau1_flat = []
tau1_self_flat = []
r_over_rc = 10
cut_log = True
for i in range(len(m1_flat)):
m1 = m1_flat[i]
f1 = f1_flat[i]
f2 = 1.-f1
tau1 = 1./(1./model.relaxation_at_rc(m1, gal, f1, multiplier=r_over_rc, cut_log=cut_log)
+ 1./model.relaxation_at_rc(m2, gal, f2, multiplier=r_over_rc, cut_log=cut_log))
tau1_flat.append(tau1)
tau1_self = model.relaxation_at_rc(m1, gal, f1, multiplier=r_over_rc, cut_log=cut_log)
tau1_self_flat.append(tau1_self)
tau1_flat = np.asarray(tau1_flat)
tau1_self_flat = np.asarray(tau1_self_flat)
tau1_mesh = tau1_flat.reshape(m1_mesh.shape)
tau1_self_mesh = tau1_self_flat.reshape(m1_mesh.shape)
_, ax = plt.subplots()
#plt.contourf(m1_mesh, f1_mesh, tau1_mesh, levels=[10, np.inf], colors='lightblue')
plt.contourf(m1_mesh, f1_mesh, tau1_self_mesh, levels=[10, np.inf], colors='lightblue')
plt.fill_between([1,2], 101, 100, color='C0', label=r"$\tau$ > 10 Gyr", alpha=0.2)
#label
#plt.text(2e-23, 1e-22, r"Lyman-$\alpha$", color='red', fontsize=14)
#plt.text(3e-21, 0.5, r"$\tau$ > 10 Gyr", color='blue', fontsize=14)
plt.xscale('log')
#plt.yscale('log')
plt.xlabel('$m_1$ [eV]')
#plt.ylabel('$m_2$ [eV], 15% of total mass')
plt.ylabel(r'$f_1$')
plt.xlim(2e-23, 1e-19)
plt.ylim(0.02, 1.)
# overlay with Kobayashi
path = '../data/Kobayashi2017.csv'
data_lym_arr = np.loadtxt(path, delimiter=',')
x = data_lym_arr[:,0]
y = data_lym_arr[:,1]
x = np.insert(x, 0, 1e-25)
y = np.insert(y, 0, y[0])
plt.fill_between(x, y, 100, color='C1', label=r'Lyman-$\alpha$', alpha=0.2)
plt.legend(loc=4)
ax.set_aspect(aspect=0.618)
plt.title('%s' %gal.name)
plt.tight_layout()
plt.savefig('./plots/relaxation_time_f1_m1_%s.pdf' %gal.name)
#gal = data2['NGC0100']
gal = data2['UGC04325']
#gal = data2['UGC01281']
#gal = data2['NGC3769']
#gal = data2['NGC3877']
#gal = data2['NGC6503']
m2 = 1.e-23 # [eV]
#f2 = 0.15
m1_arr = np.logspace(-25.2, -18.8, 50)
f1_arr = np.linspace(0., 1., 50)
m1_mesh, f1_mesh = np.meshgrid(m1_arr, f1_arr, indexing='ij')
m1_flat, f1_flat = m1_mesh.reshape(-1), f1_mesh.reshape(-1)
tau1_flat = []
tau1_self_flat = []
r_over_rc = 10
cut_log = True
for i in range(len(m1_flat)):
m1 = m1_flat[i]
f1 = f1_flat[i]
f2 = 1.-f1
tau1 = 1./(1./model.relaxation_at_rc(m1, gal, f1, multiplier=r_over_rc, cut_log=cut_log)
+ 1./model.relaxation_at_rc(m2, gal, f2, multiplier=r_over_rc, cut_log=cut_log))
tau1_flat.append(tau1)
tau1_self = model.relaxation_at_rc(m1, gal, f1, multiplier=r_over_rc, cut_log=cut_log)
tau1_self_flat.append(tau1_self)
tau1_flat = np.asarray(tau1_flat)
tau1_self_flat = np.asarray(tau1_self_flat)
tau1_mesh = tau1_flat.reshape(m1_mesh.shape)
tau1_self_mesh = tau1_self_flat.reshape(m1_mesh.shape)
_, ax = plt.subplots()
plt.contourf(m1_mesh, f1_mesh, tau1_mesh, levels=[10, np.inf], colors='lightblue')
#plt.contourf(m1_mesh, f1_mesh, tau1_self_mesh, levels=[10, np.inf], colors='lightblue')
plt.fill_between([1,2], 101, 100, color='C0', label=r"$\tau$ > 10 Gyr", alpha=0.2)
#label
#plt.text(2e-23, 1e-22, r"Lyman-$\alpha$", color='red', fontsize=14)
#plt.text(3e-21, 0.5, r"$\tau$ > 10 Gyr", color='blue', fontsize=14)
plt.xscale('log')
#plt.yscale('log')
plt.xlabel('$m_1$ [eV]')
#plt.ylabel('$m_2$ [eV], 15% of total mass')
plt.ylabel(r'$f_1$')
plt.xlim(2e-23, 1e-19)
plt.ylim(0.02, 1.)
# overlay with Kobayashi
path = '../data/Kobayashi2017.csv'
data_lym_arr = np.loadtxt(path, delimiter=',')
x = data_lym_arr[:,0]
y = data_lym_arr[:,1]
x = np.insert(x, 0, 1e-25)
y = np.insert(y, 0, y[0])
plt.fill_between(x, y, 100, color='C1', label=r'Lyman-$\alpha$', alpha=0.2)
plt.legend(loc=4)
ax.set_aspect(aspect=0.618)
plt.title('%s' %gal.name)
plt.tight_layout()
plt.savefig('./plots/relaxation_time_f1_m1_two_species_%s.pdf' %gal.name) | _____no_output_____ | MIT | notebooks/3_demo_dynamical_relaxation_time.ipynb | ChenSun-Phys/ULDM_x_SPARC |
Velocity dispersion | gal = data2['NGC0100']
R = np.logspace(-1, 3)
#y = model.sigma_disp(gal, R, get_array=False)
# debug interp
#y_npinterp = model.sigma_disp_over_vcirc(gal, R)
# no interp
ratio_arr = model.sigma_disp_over_vcirc(gal, R)
plt.subplots()
#plt.plot(R, y)
#plt.plot(R, y_npinterp)
plt.plot(R, ratio_arr, '--')
plt.xscale('log')
#plt.yscale('log')
plt.xlabel('R [kpc]')
plt.ylabel(r'$\sigma/V_{\rm circ}$') | _____no_output_____ | MIT | notebooks/3_demo_dynamical_relaxation_time.ipynb | ChenSun-Phys/ULDM_x_SPARC |
The Coulomb Log | # plot out to check
gal = spc.findGalaxyByName('UGC04325', data)
interpol_method = 'linear' #nearest
f_arr = np.linspace(0.01, 1, 200)
#m = 2e-23
#m = 1.3e-23
m = 1e-23
#m = 3e-24
#m = 1e-21
r_supply_arr = np.array([model.supply_radius(f, m, gal) for f in f_arr])
r_relax_arr = np.array([model.relax_radius(f, m, gal, interpol_method=interpol_method) for f in f_arr])
r_relax_arr2 = np.array([model.relax_radius(f, m, gal, interpol_method=interpol_method, cut_log=False) for f in f_arr])
r_core_arr = np.array([1.9 * model.rc(m, model.M_SH(m, gal)) for f in f_arr])
plt.subplots()
plt.plot(f_arr, r_supply_arr, label=r'$r_{supply}$')
plt.plot(f_arr, r_relax_arr, label=r'$r_{relax}$')
plt.plot(f_arr, r_relax_arr2, '--', label=r'$r_{relax}$', color='C1')
plt.plot(f_arr, r_core_arr, label=r'$r_{core}$')
#plt.xscale('log')
plt.yscale('log')
plt.ylabel('r [kpc]')
plt.xlabel('f')
#plt.title('m=%.1e eV, %s' %(m, gal.name))
plt.title('m=%s eV, %s' %(tl.scientific(m), gal.name))
plt.legend(loc='best')
plt.tight_layout()
#plt.savefig('./plots/r_comparison_%s.pdf' %(gal.name)) | _____no_output_____ | MIT | notebooks/3_demo_dynamical_relaxation_time.ipynb | ChenSun-Phys/ULDM_x_SPARC |
Although Basenji is unaware of the locations of known genes in the genome, we can go in afterwards and ask what a model predicts for those locations to interpret it as a gene expression prediction.To do this, you'll need * Trained model * Gene Transfer Format (GTF) gene annotations * BigWig coverage tracks * Gene sequences saved in my HDF5 format. First, make sure you have an hg19 FASTA file visible. If you have it already, put a symbolic link into the data directory. Otherwise, I have a machine learning friendly simplified version you can download in the next cell. | import os, subprocess
if not os.path.isfile('data/hg19.ml.fa'):
subprocess.call('curl -o data/hg19.ml.fa https://storage.googleapis.com/basenji_tutorial_data/hg19.ml.fa', shell=True)
subprocess.call('curl -o data/hg19.ml.fa.fai https://storage.googleapis.com/basenji_tutorial_data/hg19.ml.fa.fai', shell=True) | _____no_output_____ | Apache-2.0 | tutorials/genes.ipynb | JasperSnoek/basenji |
Next, let's grab a few CAGE datasets from FANTOM5 related to heart biology. These data were processed by (1) aligning with Bowtie2 with very sensitive alignment parameters, and (2) distributing multi-mapping reads and estimating genomic coverage with bam_cov.py. | if not os.path.isfile('data/CNhs11760.bw'):
subprocess.call('curl -o data/CNhs11760.bw https://storage.googleapis.com/basenji_tutorial_data/CNhs11760.bw', shell=True)
subprocess.call('curl -o data/CNhs12843.bw https://storage.googleapis.com/basenji_tutorial_data/CNhs12843.bw', shell=True)
subprocess.call('curl -o data/CNhs12856.bw https://storage.googleapis.com/basenji_tutorial_data/CNhs12856.bw', shell=True)
Then we'll write out these BigWig files and labels to a samples table. | samples_out = open('data/heart_wigs.txt', 'w')
print('aorta\tdata/CNhs11760.bw', file=samples_out)
print('artery\tdata/CNhs12843.bw', file=samples_out)
print('pulmonic_valve\tdata/CNhs12856.bw', file=samples_out)
samples_out.close() | _____no_output_____ | Apache-2.0 | tutorials/genes.ipynb | JasperSnoek/basenji |
Predictions in the portion of the genome that we trained might inflate our accuracy, so we'll focus on chr9 genes, which have formed my typical test set. Then we use [basenji_hdf5_genes.py](https://github.com/calico/basenji/blob/master/bin/basenji_hdf5_genes.py) to create the file.The most relevant options are:| Option/Argument | Value | Note ||:---|:---|:---|| -g | data/human.hg19.genome | Genome assembly chromosome length to bound gene sequences. || -l | 262144 | Sequence length. || -c | 0.333 | Multiple genes per sequence are allowed, but the TSS must be in the middle 1/3 of the sequence. || -p | 3 | Use 3 threads via | -t | data/heart_wigs.txt | Save coverage values from this table of BigWig files. || -w | 128 | Bin the coverage values at 128 bp resolution. || fasta_file | data/hg19.ml.fa | Genome FASTA file for extracting sequences. || gtf_file | data/gencode_chr9.gtf | Gene annotations in gene transfer format. || hdf5_file | data/gencode_chr9_l262k_w128.h5 | Gene sequence output HDF5 file. | | ! basenji_hdf5_genes.py -g data/human.hg19.genome -l 262144 -c 0.333 -p 3 -t data/heart_wigs.txt -w 128 data/hg19.ml.fa data/gencode_chr9.gtf data/gencode_chr9_l262k_w128.h5 | _____no_output_____ | Apache-2.0 | tutorials/genes.ipynb | JasperSnoek/basenji |
Now, you can either train your own model in the [Train/test tutorial](https://github.com/calico/basenji/blob/master/tutorials/train_test.ipynb) or download one that I pre-trained. | if not os.path.isfile('models/gm12878_d10.tf.meta'):
subprocess.call('curl -o models/gm12878_d10.tf.index https://storage.googleapis.com/basenji_tutorial_data/model_gm12878_d10.tf.index', shell=True)
subprocess.call('curl -o models/gm12878_d10.tf.meta https://storage.googleapis.com/basenji_tutorial_data/model_gm12878_d10.tf.meta', shell=True)
subprocess.call('curl -o models/gm12878_d10.tf.data-00000-of-00001 https://storage.googleapis.com/basenji_tutorial_data/model_gm12878_d10.tf.data-00000-of-00001', shell=True) | _____no_output_____ | Apache-2.0 | tutorials/genes.ipynb | JasperSnoek/basenji |
Finally, you can offer data/gencode_chr9_l262k_w128.h5 and the model to [basenji_test_genes.py](https://github.com/calico/basenji/blob/master/bin/basenji_test_genes.py) to make gene expression predictions and benchmark them.The most relevant options are:| Option/Argument | Value | Note ||:---|:---|:---|| -o | data/gencode_chr9_test | Output directory. || --rc | | Average the forward and reverse complement to form prediction. || -s | | Make scatter plots, comparing predictions to experiment values. || --table | | Print gene expression table. || params_file | models/params_small.txt | Table of parameters to setup the model architecture and optimization. || model_file | models/gm12878_best.tf | Trained saved model prefix. || genes_hdf5_file | data/gencode_chr9_l262k_w128.h5 | HDF5 file containing the gene sequences, annotations, and experiment values. | | ! basenji_test_genes.py -o data/gencode_chr9_test --rc -s --table models/params_small.txt models/gm12878_best.tf data/gencode_chr9_l262k_w128.h5 | _____no_output_____ | Apache-2.0 | tutorials/genes.ipynb | JasperSnoek/basenji |
Day 1: Chronal Calibration "We've detected some temporal anomalies," one of Santa's Elves at the Temporal Anomaly Research and Detection Instrument Station tells you. She sounded pretty worried when she called you down here. "At 500-year intervals into the past, someone has been changing Santa's history!""The good news is that the changes won't propagate to our time stream for another 25 days, and we have a device" - she attaches something to your wrist - "that will let you fix the changes with no such propagation delay. It's configured to send you 500 years further into the past every few days; that was the best we could do on such short notice.""The bad news is that we are detecting roughly fifty anomalies throughout time; the device will indicate fixed anomalies with stars. The other bad news is that we only have one device and you're the best person for the job! Good lu--" She taps a button on the device and you suddenly feel like you're falling. To save Christmas, you need to get all fifty stars by December 25th.Collect stars by solving puzzles. Two puzzles will be made available on each day in the advent calendar; the second puzzle is unlocked when you complete the first. Each puzzle grants one star. Good luck!After feeling like you've been falling for a few minutes, you look at the device's tiny screen. "Error: Device must be calibrated before first use. Frequency drift detected. Cannot maintain destination lock." Below the message, the device shows a sequence of changes in frequency (your puzzle input). A value like +6 means the current frequency increases by 6; a value like -3 means the current frequency decreases by 3.For example, if the device displays frequency changes of +1, -2, +3, +1, then starting from a frequency of zero, the following changes would occur: Current frequency 0, change of +1; resulting frequency 1. Current frequency 1, change of -2; resulting frequency -1. Current frequency -1, change of +3; resulting frequency 2. Current frequency 2, change of +1; resulting frequency 3. In this example, the resulting frequency is 3.Here are other example situations: +1, +1, +1 results in 3 +1, +1, -2 results in 0 -1, -2, -3 results in -6Starting with a frequency of zero, what is the resulting frequency after all of the changes in frequency have been applied?Your puzzle answer was 402. | day1_input = read_input('day_01.txt')
day1_freq_changes = map(int, day1_input.split())
# Part 1 - total frequency change
sum(day1_freq_changes) | _____no_output_____ | Apache-2.0 | Advent Of Code 2018 mattmcd.ipynb | mattmcd/AdventOfCode2018 |
Day 1 Part Two You notice that the device repeats the same frequency change list over and over. To calibrate the device, you need to find the first frequency it reaches twice.For example, using the same list of changes above, the device would loop as follows: Current frequency 0, change of +1; resulting frequency 1. Current frequency 1, change of -2; resulting frequency -1. Current frequency -1, change of +3; resulting frequency 2. Current frequency 2, change of +1; resulting frequency 3. (At this point, the device continues from the start of the list.) Current frequency 3, change of +1; resulting frequency 4. Current frequency 4, change of -2; resulting frequency 2, which has already been seen.In this example, the first frequency reached twice is 2. Note that your device might need to repeat its list of frequency changes many times before a duplicate frequency is found, and that duplicates might be found while in the middle of processing the list.Here are other examples: +1, -1 first reaches 0 twice. +3, +3, +4, -2, -4 first reaches 10 twice. -6, +3, +8, +5, -6 first reaches 5 twice. +7, +7, -2, -7, -4 first reaches 14 twice.What is the first frequency your device reaches twice?Your puzzle answer was 481. | def first_freq_seen_twice(changes):
# Keep looping through the input, tracking how many times we've seen the current frequency
i = 0
loop_count = 0
N = len(changes)
seen_twice = None
current_freq = 0
freq_seen = defaultdict(int)
freq_seen[0] = 1
while seen_twice is None:
if i % N == 0:
loop_count += 1
this_change = changes[i % N]
current_freq += this_change
freq_seen[current_freq] += 1
if freq_seen[current_freq] > 1:
seen_twice = current_freq
i += 1
return seen_twice, loop_count
# first frequency seen twice and number of loop iterations required
first_freq_seen_twice(day1_freq_changes) | _____no_output_____ | Apache-2.0 | Advent Of Code 2018 mattmcd.ipynb | mattmcd/AdventOfCode2018 |
Feels like there should be a smarter way to do this e.g. use cumsum input list somehow.We see from the naive solution that it takes 142 loops to see a frequency again.The plot below shows that the frequency changes are usually small with a few large jumps, and from part 1 we know that each loop has a net offset of +402. So we're interested in number of loops required before second or third regions in plot below start to ovelap either with first region or each other.I don't have a solution for this yet, one to ponder. | _ = plt.plot(np.cumsum(day1_freq_changes))
_ = plt.show() | _____no_output_____ | Apache-2.0 | Advent Of Code 2018 mattmcd.ipynb | mattmcd/AdventOfCode2018 |
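One way to make the cumsum idea above concrete (a sketch only, not a verified replacement for the loop): with prefix sums s and a per-pass offset D = sum(changes), the value s[i] from the first pass is reached again at pass k, position j whenever s[i] - s[j] == k*D with k >= 1, so the first repeat is the candidate with the smallest (k, j). The quadratic pair scan below ignores the starting frequency 0 for brevity. | prefix = np.cumsum(day1_freq_changes)
D = prefix[-1]  # net offset per pass (+402 for this input)
best = None  # (pass k, position j, repeated value)
for i, si in enumerate(prefix):
    for j, sj in enumerate(prefix):
        if i == j:
            continue
        k, rem = divmod(si - sj, D)
        if rem == 0 and k >= 1:
            cand = (k, j, sj + k * D)
            if best is None or cand < best:
                best = cand
print(best)  # the last element should agree with the naive first_freq_seen_twice answer if the reasoning holds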
Day 2: Inventory Management System You stop falling through time, catch your breath, and check the screen on the device. "Destination reached. Current Year: 1518. Current Location: North Pole Utility Closet 83N10." You made it! Now, to find those anomalies.Outside the utility closet, you hear footsteps and a voice. "...I'm not sure either. But now that so many people have chimneys, maybe he could sneak in that way?" Another voice responds, "Actually, we've been working on a new kind of suit that would let him fit through tight spaces like that. But, I heard that a few days ago, they lost the prototype fabric, the design plans, everything! Nobody on the team can even seem to remember important details of the project!""Wouldn't they have had enough fabric to fill several boxes in the warehouse? They'd be stored together, so the box IDs should be similar. Too bad it would take forever to search the warehouse for two similar box IDs..." They walk too far away to hear any more.Late at night, you sneak to the warehouse - who knows what kinds of paradoxes you could cause if you were discovered - and use your fancy wrist device to quickly scan every box and produce a list of the likely candidates (your puzzle input).To make sure you didn't miss any, you scan the likely candidate boxes again, counting the number that have an ID containing exactly two of any letter and then separately counting those with exactly three of any letter. You can multiply those two counts together to get a rudimentary checksum and compare it to what your device predicts.For example, if you see the following box IDs: abcdef contains no letters that appear exactly two or three times. bababc contains two a and three b, so it counts for both. abbcde contains two b, but no letter appears exactly three times. abcccd contains three c, but no letter appears exactly two times. aabcdd contains two a and two d, but it only counts once. abcdee contains two e. ababab contains three a and three b, but it only counts once.Of these box IDs, four of them contain a letter which appears exactly twice, and three of them contain a letter which appears exactly three times. Multiplying these together produces a checksum of 4 * 3 = 12.What is the checksum for your list of box IDs?Your puzzle answer was 6225. | box_ids = read_input('day_02.txt').split('\n')
len(box_ids)
def count_letters(box_id):
res = defaultdict(int)
for letter in box_id:
res[letter] += 1
count_2 = 1 if 2 in res.values() else 0
count_3 = 1 if 3 in res.values() else 0
return count_2, count_3
# checksum =
np.prod(np.array([sum(lst) for lst in zip(*[count_letters(box_id) for box_id in box_ids])])) | _____no_output_____ | Apache-2.0 | Advent Of Code 2018 mattmcd.ipynb | mattmcd/AdventOfCode2018 |
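The same checksum can also be written with collections.Counter, which is just a restatement of the counting logic described in the puzzle text (number of IDs containing some letter exactly twice, times number of IDs containing some letter exactly three times): | from collections import Counter
letter_counts = [Counter(box_id).values() for box_id in box_ids]
sum(2 in c for c in letter_counts) * sum(3 in c for c in letter_counts)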
Part TwoConfident that your list of box IDs is complete, you're ready to find the boxes full of prototype fabric.The boxes will have IDs which differ by exactly one character at the same position in both strings. For example, given the following box IDs: abcde fghij klmno pqrst fguij axcye wvxyzThe IDs abcde and axcye are close, but they differ by two characters (the second and fourth). However, the IDs fghij and fguij differ by exactly one character, the third (h and u). Those must be the correct boxes.What letters are common between the two correct box IDs? (In the example above, this is found by removing the differing character from either ID, producing fgij.)Your puzzle answer was revtaubfniyhsgxdoajwkqilp. | box_ids_ints = np.array([[ord(c) for c in box_id] for box_id in box_ids])
def find_diff_1_boxes(box_ids):
X = np.array([[ord(c) for c in box_id] for box_id in box_ids])
res = []
n_boxes = len(box_ids)
for i in range(n_boxes):
for j in range(i, n_boxes):
char_diff = np.not_equal(X[i, :] - X[j, :], 0)
n_char_diff = char_diff.sum()
if n_char_diff == 1:
res.append((i, j, int(np.nonzero(char_diff)[0])))
return res
[(box_ids[i], box_ids[j], box_ids[i][:ind] + box_ids[i][ind+1:]) for (i, j, ind) in find_diff_1_boxes(box_ids)] | _____no_output_____ | Apache-2.0 | Advent Of Code 2018 mattmcd.ipynb | mattmcd/AdventOfCode2018 |
Day 3: No Matter How You Slice It The Elves managed to locate the chimney-squeeze prototype fabric for Santa's suit (thanks to someone who helpfully wrote its box IDs on the wall of the warehouse in the middle of the night). Unfortunately, anomalies are still affecting them - nobody can even agree on how to cut the fabric.The whole piece of fabric they're working on is a very large square - at least 1000 inches on each side.Each Elf has made a claim about which area of fabric would be ideal for Santa's suit. All claims have an ID and consist of a single rectangle with edges parallel to the edges of the fabric. Each claim's rectangle is defined as follows:* The number of inches between the left edge of the fabric and the left edge of the rectangle.* The number of inches between the top edge of the fabric and the top edge of the rectangle.* The width of the rectangle in inches.* The height of the rectangle in inches.A claim like `123 @ 3,2: 5x4` means that claim ID 123 specifies a rectangle 3 inches from the left edge, 2 inches from the top edge, 5 inches wide, and 4 inches tall. Visually, it claims the square inches of fabric represented by (and ignores the square inches of fabric represented by .) in the diagram below: ........... ........... ...... ...... ...... ...... ........... ........... ...........The problem is that many of the claims overlap, causing two or more claims to cover part of the same areas. For example, consider the following claims: 1 @ 1,3: 4x4 2 @ 3,1: 4x4 3 @ 5,5: 2x2Visually, these claim the following areas: ........ ...2222. ...2222. .11XX22. .11XX22. .111133. .111133. ........The four square inches marked with X are claimed by both 1 and 2. (Claim 3, while adjacent to the others, does not overlap either of them.)If the Elves all proceed with their own plans, none of them will have enough fabric. How many square inches of fabric are within two or more claims?Your puzzle answer was 116920. | def parse_day_03():
lines = [line.split() for line in read_input('day_03.txt').split('\n')]
def parse_rec(rec):
id = int(rec[0].lstrip('#'))
x0, y0 = map(int, rec[2].rstrip(':').split(','))
w, h = map(int, rec[3].split('x'))
return {'id': id, 'x0': x0, 'y0': y0, 'w': w, 'h': h}
recs = map(parse_rec, lines)
return recs
claims = parse_day_03()
def find_overlap(claims):
X = np.zeros((1000, 1000))
claim_ok = []
# Part 1: label all the squares claimed
for claim in claims:
X[claim['y0']:claim['y0']+claim['h'], claim['x0']:claim['x0']+claim['w']] += 1
n_overlap = (X > 1).sum()
# Part 2: check whether a claim is the only one for a given region
for claim in claims:
all_ok = (X[claim['y0']:claim['y0']+claim['h'], claim['x0']:claim['x0']+claim['w']] == 1).all()
if all_ok:
claim_ok.append(claim['id'])
return n_overlap, claim_ok
find_overlap(claims) | _____no_output_____ | Apache-2.0 | Advent Of Code 2018 mattmcd.ipynb | mattmcd/AdventOfCode2018 |
Part Two Amidst the chaos, you notice that exactly one claim doesn't overlap by even a single square inch of fabric with any other claim. If you can somehow draw attention to it, maybe the Elves will be able to make Santa's suit after all!For example, in the claims above, only claim 3 is intact after all claims are made.What is the ID of the only claim that doesn't overlap?Your puzzle answer was 382. Day 4: Repose Record You've sneaked into another supply closet - this time, it's across from the prototype suit manufacturing lab. You need to sneak inside and fix the issues with the suit, but there's a guard stationed outside the lab, so this is as close as you can safely get.As you search the closet for anything that might help, you discover that you're not the first person to want to sneak in. Covering the walls, someone has spent an hour starting every midnight for the past few months secretly observing this guard post! They've been writing down the ID of the one guard on duty that night - the Elves seem to have decided that one guard was enough for the overnight shift - as well as when they fall asleep or wake up while at their post (your puzzle input).For example, consider the following records, which have already been organized into chronological order: [1518-11-01 00:00] Guard 10 begins shift [1518-11-01 00:05] falls asleep [1518-11-01 00:25] wakes up [1518-11-01 00:30] falls asleep [1518-11-01 00:55] wakes up [1518-11-01 23:58] Guard 99 begins shift [1518-11-02 00:40] falls asleep [1518-11-02 00:50] wakes up [1518-11-03 00:05] Guard 10 begins shift [1518-11-03 00:24] falls asleep [1518-11-03 00:29] wakes up [1518-11-04 00:02] Guard 99 begins shift [1518-11-04 00:36] falls asleep [1518-11-04 00:46] wakes up [1518-11-05 00:03] Guard 99 begins shift [1518-11-05 00:45] falls asleep [1518-11-05 00:55] wakes upTimestamps are written using year-month-day hour:minute format. The guard falling asleep or waking up is always the one whose shift most recently started. Because all asleep/awake times are during the midnight hour (00:00 - 00:59), only the minute portion (00 - 59) is relevant for those events.Visually, these records show that the guards are asleep at these times: Date ID Minute 000000000011111111112222222222333333333344444444445555555555 012345678901234567890123456789012345678901234567890123456789 11-01 10 ............... 11-02 99 .................................................. 11-03 10 ....................................................... 11-04 99 .................................................. 11-05 99 ..................................................The columns are Date, which shows the month-day portion of the relevant day; ID, which shows the guard on duty that day; and Minute, which shows the minutes during which the guard was asleep within the midnight hour. (The Minute column's header shows the minute's ten's digit in the first row and the one's digit in the second row.) Awake is shown as ., and asleep is shown as .Note that guards count as asleep on the minute they fall asleep, and they count as awake on the minute they wake up. For example, because Guard 10 wakes up at 00:25 on 1518-11-01, minute 25 is marked as awake.If you can figure out the guard most likely to be asleep at a specific time, you might be able to trick that guard into working tonight so you can have the best chance of sneaking in. You have two strategies for choosing the best guard/minute combination.__Strategy 1__: Find the guard that has the most minutes asleep. 
What minute does that guard spend asleep the most?In the example above, Guard 10 spent the most minutes asleep, a total of 50 minutes (20+25+5), while Guard 99 only slept for a total of 30 minutes (10+10+10). Guard 10 was asleep most during minute 24 (on two days, whereas any other minute the guard was asleep was only seen on one day).While this example listed the entries in chronological order, your entries are in the order you found them. You'll need to organize them before they can be analyzed.What is the ID of the guard you chose multiplied by the minute you chose? (In the above example, the answer would be 10 * 24 = 240.)Your puzzle answer was 146622. | events = sorted(read_input('day_04.txt').split('\n'))
def parse_sleep_events(events):
res = []
rec = None
for event in events:
if 'Guard' in event:
# Start new record
if rec is not None:
res.append(rec)
rec = {
'guard_id': int(re.findall('#(\d+)', event)[0]),
'sleep': [],
'wake': []
}
if 'asleep' in event:
rec['sleep'].append(int(re.findall(' 00:(\d{2})', event)[0]))
if 'wakes' in event:
rec['wake'].append(int(re.findall(' 00:(\d{2})', event)[0]))
res.append(rec)  # flush the final guard's shift record, which the loop above never appends
guard_sleeps = defaultdict(list)
for rec in res:
shift = np.zeros(60, dtype=np.int32)
for sleep in rec['sleep']:
shift[sleep] = 1
for wake in rec['wake']:
shift[wake] = -1
shift = np.cumsum(shift)
guard_sleeps[rec['guard_id']].append(shift)
for guard_id in guard_sleeps.keys():
guard_sleeps[guard_id] = np.array(guard_sleeps[guard_id])
return guard_sleeps
guard_sleeps = parse_sleep_events(events)
def find_sleepiest(guard_sleeps):
total_sleep = {k: v.sum() for k, v in guard_sleeps.iteritems()}
sleepiest = None
max_sleep = 0
for guard, sleep in total_sleep.iteritems():
if sleep > max_sleep:
sleepiest = guard
max_sleep = sleep
most_often_asleep = np.argmax(np.sum(guard_sleeps[sleepiest], axis=0))
return sleepiest, max_sleep, most_often_asleep, sleepiest*most_often_asleep
find_sleepiest(guard_sleeps) | _____no_output_____ | Apache-2.0 | Advent Of Code 2018 mattmcd.ipynb | mattmcd/AdventOfCode2018 |
Part Two Strategy 2: Of all guards, which guard is most frequently asleep on the same minute?In the example above, Guard 99 spent minute 45 asleep more than any other guard or minute - three times in total. (In all other cases, any guard spent any minute asleep at most twice.)What is the ID of the guard you chose multiplied by the minute you chose? (In the above example, the answer would be 99 * 45 = 4455.)Your puzzle answer was 31848. | def find_most_often_asleep(guard_sleeps):
often_asleep = {k: np.sum(v, axis=0) for k, v in guard_sleeps.iteritems()}
most_sleeps = 0
sleep_time = None
sleep_guard = None
for guard, sleep in often_asleep.iteritems():
if np.max(sleep) > most_sleeps:
most_sleeps = np.max(sleep)
sleep_time = np.argmax(sleep)
sleep_guard = guard
return sleep_guard, most_sleeps, sleep_time, sleep_guard * sleep_time
find_most_often_asleep(guard_sleeps) | _____no_output_____ | Apache-2.0 | Advent Of Code 2018 mattmcd.ipynb | mattmcd/AdventOfCode2018 |
Day 5: Alchemical Reduction You've managed to sneak in to the prototype suit manufacturing lab. The Elves are making decent progress, but are still struggling with the suit's size reduction capabilities.While the very latest in 1518 alchemical technology might have solved their problem eventually, you can do better. You scan the chemical composition of the suit's material and discover that it is formed by extremely long polymers (one of which is available as your puzzle input).The polymer is formed by smaller units which, when triggered, react with each other such that two adjacent units of the same type and opposite polarity are destroyed. Units' types are represented by letters; units' polarity is represented by capitalization. For instance, r and R are units with the same type but opposite polarity, whereas r and s are entirely different types and do not react.For example:- In aA, a and A react, leaving nothing behind.- In abBA, bB destroys itself, leaving aA. As above, this then destroys itself, leaving nothing.- In abAB, no two adjacent units are of the same type, and so nothing happens.- In aabAAB, even though aa and AA are of the same type, their polarities match, and so nothing happens.Now, consider a larger example, dabAcCaCBAcCcaDA: dabAcCaCBAcCcaDA The first 'cC' is removed. dabAaCBAcCcaDA This creates 'Aa', which is removed. dabCBAcCcaDA Either 'cC' or 'Cc' are removed (the result is the same). dabCBAcaDA No further actions can be taken.After all possible reactions, the resulting polymer contains 10 units.How many units remain after fully reacting the polymer you scanned? Your puzzle answer was 11814. | polymer = read_input('day_05.txt')
def reduce_polymer(polymer, remove_unit=None):
lower_letters = [chr(x) for x in range(ord('a'), ord('z') + 1)]
upper_letters = [chr(x) for x in range(ord('A'), ord('Z') + 1)]
lower_upper = [low + upp for low, upp in zip(lower_letters, upper_letters)]
upper_lower = [upp + low for low, upp in zip(lower_letters, upper_letters)]
if remove_unit is not None:
polymer = polymer.replace(remove_unit.lower(), '').replace(remove_unit.upper(), '')
n_poly = len(polymer)
n_poly_new = n_poly
done = False
while not done:
for lu in lower_upper:
polymer = polymer.replace(lu, '')
for ul in upper_lower:
polymer = polymer.replace(ul, '')
n_poly_new = len(polymer)
done = n_poly_new == n_poly
n_poly = n_poly_new
return polymer
len(reduce_polymer(polymer)) | _____no_output_____ | Apache-2.0 | Advent Of Code 2018 mattmcd.ipynb | mattmcd/AdventOfCode2018 |
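A common alternative to the repeated str.replace passes above is a single left-to-right scan with a stack: push each unit and pop when it reacts with the unit on top. A sketch assuming the same polymer string (and that it contains only unit letters); it should give the same length as reduce_polymer in one pass. | def reduce_polymer_stack(polymer, remove_unit=None):
    stack = []
    for unit in polymer:
        if remove_unit is not None and unit.lower() == remove_unit.lower():
            continue  # part-two style removal of one unit type
        if stack and stack[-1] != unit and stack[-1].lower() == unit.lower():
            stack.pop()  # same type, opposite polarity: the pair annihilates
        else:
            stack.append(unit)
    return ''.join(stack)

len(reduce_polymer_stack(polymer))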
Part Two Time to improve the polymer.One of the unit types is causing problems; it's preventing the polymer from collapsing as much as it should. Your goal is to figure out which unit type is causing the most problems, remove all instances of it (regardless of polarity), fully react the remaining polymer, and measure its length.For example, again using the polymer dabAcCaCBAcCcaDA from above:- Removing all A/a units produces dbcCCBcCcD. Fully reacting this polymer produces dbCBcD, which has length 6.- Removing all B/b units produces daAcCaCAcCcaDA. Fully reacting this polymer produces daCAcaDA, which has length 8.- Removing all C/c units produces dabAaBAaDA. Fully reacting this polymer produces daDA, which has length 4.- Removing all D/d units produces abAcCaCBAcCcaA. Fully reacting this polymer produces abCBAc, which has length 6.In this example, removing all C/c units was best, producing the answer 4.What is the length of the shortest polymer you can produce by removing all units of exactly one type and fully reacting the result?Your puzzle answer was 4282. | min([len(reduce_polymer(polymer, chr(x))) for x in range(ord('a'), ord('z')+1)]) | _____no_output_____ | Apache-2.0 | Advent Of Code 2018 mattmcd.ipynb | mattmcd/AdventOfCode2018 |
Day 6: Chronal Coordinates The device on your wrist beeps several times, and once again you feel like you're falling."Situation critical," the device announces. "Destination indeterminate. Chronal interference detected. Please specify new target coordinates."The device then produces a list of coordinates (your puzzle input). Are they places it thinks are safe or dangerous? It recommends you check manual page 729. The Elves did not give you a manual.If they're dangerous, maybe you can minimize the danger by finding the coordinate that gives the largest distance from the other points.Using only the Manhattan distance, determine the area around each coordinate by counting the number of integer X,Y locations that are closest to that coordinate (and aren't tied in distance to any other coordinate).Your goal is to find the size of the largest area that isn't infinite. For example, consider the following list of coordinates: 1, 1 1, 6 8, 3 3, 4 5, 5 8, 9If we name these coordinates A through F, we can draw them on a grid, putting 0,0 at the top left: .......... .A........ .......... ........C. ...D...... .....E.... .B........ .......... .......... ........F.This view is partial - the actual grid extends infinitely in all directions. Using the Manhattan distance, each location's closest coordinate can be determined, shown here in lowercase: aaaaa.cccc aAaaa.cccc aaaddecccc aadddeccCc ..dDdeeccc bb.deEeecc bBb.eeee.. bbb.eeefff bbb.eeffff bbb.ffffFfLocations shown as . are equally far from two or more coordinates, and so they don't count as being closest to any.In this example, the areas of coordinates A, B, C, and F are infinite - while not shown here, their areas extend forever outside the visible grid. However, the areas of coordinates D and E are finite: D is closest to 9 locations, and E is closest to 17 (both including the coordinate's location itself). Therefore, in this example, the size of the largest area is 17.What is the size of the largest area that isn't infinite?Your puzzle answer was 4342. | coords = np.array([[int(x), int(y)] for x, y in [c.split(',') for c in read_input('day_06.txt').split('\n') ]])
coords[:10]
def label_closest(coords):
x_max, y_max = coords.max(axis=0) + 1
region = np.nan*np.zeros((x_max, y_max))
for x in range(x_max):
for y in range(y_max):
dist = np.sum(np.abs(coords - np.array([x, y])), axis=1)
if len(dist[dist == dist.min()]) == 1:
region[x, y] = np.argmin(dist)
return region
closest = label_closest(coords)
def find_largest_finite_area(closest):
# Ignore points that go to infinity ie ones on boundary
on_boundary = [int(x) for x in list(
set(closest[0, :].tolist()) | set(closest[-1, :].tolist())
| set(closest[:, 0].tolist()) | set(closest[:, -1].tolist())) if not np.isnan(x)]
all_indexes = [int(x) for x in np.unique(closest) if not np.isnan(x)]
finite_region_indexes = list(set(all_indexes) - set(on_boundary))
max_area = 0
max_area_index = None
for ind in finite_region_indexes:
area = np.sum(closest == ind)
if area > max_area:
max_area = area
max_area_index = ind
return max_area_index, max_area
find_largest_finite_area(closest) | _____no_output_____ | Apache-2.0 | Advent Of Code 2018 mattmcd.ipynb | mattmcd/AdventOfCode2018 |
Part Two On the other hand, if the coordinates are safe, maybe the best you can do is try to find a region near as many coordinates as possible.For example, suppose you want the sum of the Manhattan distance to all of the coordinates to be less than 32. For each location, add up the distances to all of the given coordinates; if the total of those distances is less than 32, that location is within the desired region. Using the same coordinates as above, the resulting region looks like this: .......... .A........ .......... .....C. ..D... ..E... .B..... .......... .......... ........F.In particular, consider the highlighted location 4,3 located at the top middle of the region. Its calculation is as follows, where abs() is the absolute value function:- Distance to coordinate A: `abs(4-1) + abs(3-1) = 5`- Distance to coordinate B: `abs(4-1) + abs(3-6) = 6`- Distance to coordinate C: `abs(4-8) + abs(3-3) = 4`- Distance to coordinate D: `abs(4-3) + abs(3-4) = 2`- Distance to coordinate E: `abs(4-5) + abs(3-5) = 3`- Distance to coordinate F: `abs(4-8) + abs(3-9) = 10`- Total distance: `5 + 6 + 4 + 2 + 3 + 10 = 30`Because the total distance to all coordinates (30) is less than 32, the location is __within__ the region.This region, which also includes coordinates D and E, has a total size of __16__.Your actual region will need to be much larger than this example, though, instead including all locations with a total distance of less than 10000.What is the size of the region containing all locations which have a total distance to all given coordinates of less than 10000?Your puzzle answer was 42966. | def label_total_dist(coords):
x_max, y_max = coords.max(axis=0) + 1
region = np.nan*np.zeros((x_max, y_max))
for x in range(x_max):
for y in range(y_max):
region[x, y] = np.sum(np.abs(coords - np.array([x, y])))
return region
total_dist = label_total_dist(coords)
np.sum(total_dist < 10000) | _____no_output_____ | Apache-2.0 | Advent Of Code 2018 mattmcd.ipynb | mattmcd/AdventOfCode2018 |
Day 7: The Sum of Its Parts You find yourself standing on a snow-covered coastline; apparently, you landed a little off course. The region is too hilly to see the North Pole from here, but you do spot some Elves that seem to be trying to unpack something that washed ashore. It's quite cold out, so you decide to risk creating a paradox by asking them for directions."Oh, are you the search party?" Somehow, you can understand whatever Elves from the year 1018 speak; you assume it's Ancient Nordic Elvish. Could the device on your wrist also be a translator? "Those clothes don't look very warm; take this." They hand you a heavy coat."We do need to find our way back to the North Pole, but we have higher priorities at the moment. You see, believe it or not, this box contains something that will solve all of Santa's transportation problems - at least, that's what it looks like from the pictures in the instructions." It doesn't seem like they can read whatever language it's in, but you can: "Sleigh kit. Some assembly required.""'Sleigh'? What a wonderful name! You must help us assemble this 'sleigh' at once!" They start excitedly pulling more parts out of the box.The instructions specify a series of steps and requirements about which steps must be finished before others can begin (your puzzle input). Each step is designated by a single letter. For example, suppose you have the following instructions: Step C must be finished before step A can begin. Step C must be finished before step F can begin. Step A must be finished before step B can begin. Step A must be finished before step D can begin. Step B must be finished before step E can begin. Step D must be finished before step E can begin. Step F must be finished before step E can begin.Visually, these requirements look like this: -->A--->B-- / \ \ C -->D----->E \ / ---->F-----Your first goal is to determine the order in which the steps should be completed. If more than one step is ready, choose the step which is first alphabetically. In this example, the steps would be completed as follows:- Only C is available, and so it is done first.- Next, both A and F are available. A is first alphabetically, so it is done next.- Then, even though F was available earlier, steps B and D are now also available, and B is the first alphabetically of the three.- After that, only D and F are available. E is not available because only some of its prerequisites are complete. Therefore, D is completed next.- F is the only choice, so it is done next.- Finally, E is completed.So, in this example, the correct order is CABDFE.In what order should the steps in your instructions be completed?Your puzzle answer was `SCLPAMQVUWNHODRTGYKBJEFXZI`. | step_dep = [[c[5], c[-12]] for c in read_input('day_07.txt').split('\n')]
def parse_dep(step_dep):
pre_cond, step = zip(*step_dep)
all_steps = list(set(pre_cond) | set(step))
deps = defaultdict(list)
for d in step_dep:
deps[d[1]].append(d[0])
for d in list(set(all_steps) - set(deps.keys())):
deps[d] = []
return deps
deps = parse_dep(step_dep)
def complete_steps(deps, n_helpers=0, step_base_time=60, display=False):
steps_taken = ''
steps_left = deps.keys()
workers = {n: [] for n in range(n_helpers+1)}
steps_in_progress = ''
time_taken = 0
while steps_left != []:
# List of all steps with no dependencies
next_steps = sorted([k for k, v in deps.iteritems() if v == [] and k not in steps_taken])
# Allocate all available workers to the next available steps
for step in next_steps:
for w, v in workers.iteritems():
if step in steps_in_progress:
break
if v == []:
workers[w] = (step, step_base_time + ord(step) - ord('A') + 1)
steps_in_progress += step
if display:
# Show what workers are doing and time remaining
print(workers)
# Increment time to next task completion
time_to_next_completed_step = min([v[1] for v in workers.values() if v != []])
time_taken += time_to_next_completed_step
# Update time remaining
for w, v in workers.iteritems():
if v != []:
workers[w] = (workers[w][0], workers[w][1] - time_to_next_completed_step)
# Record completed steps
for w, v in workers.iteritems():
if v != [] and v[1] == 0:
steps_taken += v[0]
steps_in_progress = ''.join(list(set(steps_in_progress) - set(v[0])))
workers[w] = []
# Update dependencies to remove completed steps
deps = {k: list(set(v) - set(steps_taken)) for k, v in deps.iteritems()}
# Update list of steps still to do
steps_left = list(set(steps_left) - set(steps_taken))
return steps_taken, time_taken
complete_steps(parse_dep(step_dep), n_helpers=4, display=True) | {0: ('S', 79), 1: [], 2: [], 3: [], 4: []}
{0: ('C', 63), 1: [], 2: [], 3: [], 4: []}
{0: ('L', 72), 1: ('P', 76), 2: [], 3: [], 4: []}
{0: ('V', 82), 1: ('P', 4), 2: ('W', 83), 3: [], 4: []}
{0: ('V', 78), 1: ('A', 61), 2: ('W', 79), 3: ('M', 73), 4: ('Q', 77)}
{0: ('V', 17), 1: ('Y', 85), 2: ('W', 18), 3: ('M', 12), 4: ('Q', 16)}
{0: ('V', 5), 1: ('Y', 73), 2: ('W', 6), 3: [], 4: ('Q', 4)}
{0: ('V', 1), 1: ('Y', 69), 2: ('W', 2), 3: [], 4: []}
{0: ('U', 81), 1: ('Y', 68), 2: ('W', 1), 3: [], 4: []}
{0: ('U', 80), 1: ('Y', 67), 2: ('N', 74), 3: [], 4: []}
{0: ('U', 13), 1: [], 2: ('N', 7), 3: [], 4: []}
{0: ('U', 6), 1: ('H', 68), 2: [], 3: [], 4: []}
{0: [], 1: ('H', 62), 2: [], 3: [], 4: []}
{0: ('O', 75), 1: [], 2: [], 3: [], 4: []}
{0: ('D', 64), 1: ('T', 80), 2: [], 3: [], 4: []}
{0: ('R', 78), 1: ('T', 16), 2: [], 3: [], 4: []}
{0: ('R', 62), 1: ('G', 67), 2: [], 3: [], 4: []}
{0: [], 1: ('G', 5), 2: [], 3: [], 4: []}
{0: ('K', 71), 1: [], 2: [], 3: [], 4: []}
{0: ('B', 62), 1: [], 2: [], 3: [], 4: []}
{0: ('J', 70), 1: [], 2: [], 3: [], 4: []}
{0: ('E', 65), 1: [], 2: [], 3: [], 4: []}
{0: ('F', 66), 1: [], 2: [], 3: [], 4: []}
{0: ('X', 84), 1: [], 2: [], 3: [], 4: []}
{0: ('Z', 86), 1: [], 2: [], 3: [], 4: []}
{0: ('I', 69), 1: [], 2: [], 3: [], 4: []}
| Apache-2.0 | Advent Of Code 2018 mattmcd.ipynb | mattmcd/AdventOfCode2018 |
Part Two As you're about to begin construction, four of the Elves offer to help. "The sun will set soon; it'll go faster if we work together." Now, you need to account for multiple people working on steps simultaneously. If multiple steps are available, workers should still begin them in alphabetical order.Each step takes 60 seconds plus an amount corresponding to its letter: A=1, B=2, C=3, and so on. So, step A takes 60+1=61 seconds, while step Z takes 60+26=86 seconds. No time is required between steps.To simplify things for the example, however, suppose you only have help from one Elf (a total of two workers) and that each step takes 60 fewer seconds (so that step A takes 1 second and step Z takes 26 seconds). Then, using the same instructions as above, this is how each second would be spent: Second Worker 1 Worker 2 Done 0 C . 1 C . 2 C . 3 A F C 4 B F CA 5 B F CA 6 D F CAB 7 D F CAB 8 D F CAB 9 D . CABF 10 E . CABFD 11 E . CABFD 12 E . CABFD 13 E . CABFD 14 E . CABFD 15 . . CABFDE Each row represents one second of time. The Second column identifies how many seconds have passed as of the beginning of that second. Each worker column shows the step that worker is currently doing (or . if they are idle). The Done column shows completed steps.Note that the order of the steps has changed; this is because steps now take time to finish and multiple workers can begin multiple steps simultaneously.In this example, it would take 15 seconds for two workers to complete these steps.With 5 workers and the 60+ second step durations described above, how long will it take to complete all of the steps?Your puzzle answer was `1234`. Day 8: Memory Maneuver The sleigh is much easier to pull than you'd expect for something its weight. Unfortunately, neither you nor the Elves know which way the North Pole is from here.You check your wrist device for anything that might help. It seems to have some kind of navigation system! Activating the navigation system produces more bad news: "Failed to start navigation system. Could not read software license file."The navigation system's license file consists of a list of numbers (your puzzle input). The numbers define a data structure which, when processed, produces some kind of tree that can be used to calculate the license number.The tree is made up of nodes; a single, outermost node forms the tree's root, and it contains all other nodes in the tree (or contains nodes that contain nodes, and so on).Specifically, a node consists of:- A header, which is always exactly two numbers:- - The quantity of child nodes.- - The quantity of metadata entries.- Zero or more child nodes (as specified in the header).- One or more metadata entries (as specified in the header).Each child node is itself a node that has its own header, child nodes, and metadata. For example: 2 3 0 3 10 11 12 1 1 0 1 99 2 1 1 2 A---------------------------------- B----------- C----------- D-----In this example, each node of the tree is also marked with an underline starting with a letter for easier identification. In it, there are four nodes:- A, which has 2 child nodes (B, C) and 3 metadata entries (1, 1, 2).- B, which has 0 child nodes and 3 metadata entries (10, 11, 12).- C, which has 1 child node (D) and 1 metadata entry (2).- D, which has 0 child nodes and 1 metadata entry (99).The first check done on the license file is to simply add up all of the metadata entries. In this example, that sum is 1+1+2+10+11+12+2+99=138.What is the sum of all metadata entries?Your puzzle answer was 36307. 
| license = map(int, read_input('day_08.txt').split())
license[:10]
def parse_license(license):
nodes = {}
node = 0
n_node = 0
meta_sum = 0
n_meta = 0
parent = None
mode = 'read_n_child'
for x in license:
if mode == 'read_n_child':
if node not in nodes.keys():
nodes[node] = {
'n_child': x, 'n_meta': 0, 'meta': [],
'parent': parent, 'children': [],
'value': 0 # Part 2
}
mode = 'read_n_meta'
else:
mode = 'read_meta'
continue
if mode == 'read_n_meta':
nodes[node]['n_meta'] = x
if nodes[node]['n_child'] == 0:
mode = 'read_meta'
n_meta = x
else:
# Create new node
parent = n_node
n_node += 1
node = n_node
mode = 'read_n_child'
continue
if mode == 'read_meta':
nodes[node]['meta'].append(x)
meta_sum += x
n_meta -= 1
if n_meta == 0:
# Part 2
if nodes[node]['n_child'] == 0:
nodes[node]['value'] = sum(nodes[node]['meta'])
else:
for m in nodes[node]['meta']:
# print(node, m, nodes[node]['children'])
if m <= len(nodes[node]['children']):
child_value = nodes[ nodes[node]['children'][m-1] ]['value']
# print(node, child_value)
nodes[node]['value'] += child_value
if nodes[node]['parent'] is not None:
nodes[nodes[node]['parent']]['children'].append(node)
if len(nodes[nodes[node]['parent']]['children']) == nodes[nodes[node]['parent']]['n_child']:
# Read parent metadata
n_meta = nodes[nodes[node]['parent']]['n_meta']
node = nodes[node]['parent']
mode = 'read_meta'
else:
# Create new node
parent = nodes[node]['parent']
n_node = n_node + 1
node = n_node
mode = 'read_n_child'
continue
return nodes, meta_sum
parse_license(map(int, '2 3 0 3 10 11 12 1 1 0 1 99 2 1 1 2'.split()))
# parse_license(map(int, '2 3 0 3 10 11 12 1 1 0 1 99 1 1 1 2'.split()))
license_nodes, license_meta_sum = parse_license(license)
license_meta_sum | _____no_output_____ | Apache-2.0 | Advent Of Code 2018 mattmcd.ipynb | mattmcd/AdventOfCode2018 |
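The state-machine parser above does the job, but because the license format is itself recursive, a short recursive parser is arguably clearer. Here is a sketch of that alternative (not the original solution) returning the metadata sum and the root value in one pass:

```python
def parse_node(data, pos=0):
    # parse one node starting at data[pos]; return (metadata_sum, node_value, next_pos)
    n_child, n_meta = data[pos], data[pos + 1]
    pos += 2
    meta_sum, child_values = 0, []
    for _ in range(n_child):
        child_sum, child_value, pos = parse_node(data, pos)
        meta_sum += child_sum
        child_values.append(child_value)
    meta = data[pos:pos + n_meta]
    meta_sum += sum(meta)
    if n_child == 0:
        value = sum(meta)  # leaf nodes: value is the metadata sum
    else:
        # metadata entries index the children (1-based); out-of-range entries are skipped
        value = sum(child_values[m - 1] for m in meta if 1 <= m <= n_child)
    return meta_sum, value, pos + n_meta

# the worked example from the puzzle text: metadata sum 138, root value 66
print(parse_node([int(x) for x in '2 3 0 3 10 11 12 1 1 0 1 99 2 1 1 2'.split()]))
```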
Part Two The second check is slightly more complicated: you need to find the value of the root node (A in the example above).The value of a node depends on whether it has child nodes.If a node has no child nodes, its value is the sum of its metadata entries. So, the value of node B is 10+11+12=33, and the value of node D is 99.However, if a node does have child nodes, the metadata entries become indexes which refer to those child nodes. A metadata entry of 1 refers to the first child node, 2 to the second, 3 to the third, and so on. The value of this node is the sum of the values of the child nodes referenced by the metadata entries. If a referenced child node does not exist, that reference is skipped. A child node can be referenced multiple time and counts each time it is referenced. A metadata entry of 0 does not refer to any child node.For example, again using the above nodes:- Node C has one metadata entry, 2. Because node C has only one child node, 2 references a child node which does not exist, and so the value of node C is 0.- Node A has three metadata entries: 1, 1, and 2. The 1 references node A's first child node, B, and the 2 references node A's second child node, C. Because node B has a value of 33 and node C has a value of 0, the value of node A is 33+33+0=66.So, in this example, the value of the root node is 66.What is the value of the root node?Your puzzle answer was 25154. | license_nodes[0]['value'] | _____no_output_____ | Apache-2.0 | Advent Of Code 2018 mattmcd.ipynb | mattmcd/AdventOfCode2018 |
Day 9: Marble Mania You talk to the Elves while you wait for your navigation system to initialize. To pass the time, they introduce you to their favorite marble game.The Elves play this game by taking turns arranging the marbles in a circle according to very particular rules. The marbles are numbered starting with 0 and increasing by 1 until every marble has a number.First, the marble numbered 0 is placed in the circle. At this point, while it contains only a single marble, it is still a circle: the marble is both clockwise from itself and counter-clockwise from itself. This marble is designated the current marble.Then, each Elf takes a turn placing the lowest-numbered remaining marble into the circle between the marbles that are 1 and 2 marbles clockwise of the current marble. (When the circle is large enough, this means that there is one marble between the marble that was just placed and the current marble.) The marble that was just placed then becomes the current marble.However, if the marble that is about to be placed has a number which is a multiple of 23, something entirely different happens. First, the current player keeps the marble they would have placed, adding it to their score. In addition, the marble 7 marbles counter-clockwise from the current marble is removed from the circle and also added to the current player's score. The marble located immediately clockwise of the marble that was removed becomes the new current marble.For example, suppose there are 9 players. After the marble with value 0 is placed in the middle, each player (shown in square brackets) takes a turn. The result of each of those turns would produce circles of marbles like this, where clockwise is to the right and the resulting current marble is in parentheses: [-] (0) [1] 0 (1) [2] 0 (2) 1 [3] 0 2 1 (3) [4] 0 (4) 2 1 3 [5] 0 4 2 (5) 1 3 [6] 0 4 2 5 1 (6) 3 [7] 0 4 2 5 1 6 3 (7) [8] 0 (8) 4 2 5 1 6 3 7 [9] 0 8 4 (9) 2 5 1 6 3 7 [1] 0 8 4 9 2(10) 5 1 6 3 7 [2] 0 8 4 9 2 10 5(11) 1 6 3 7 [3] 0 8 4 9 2 10 5 11 1(12) 6 3 7 [4] 0 8 4 9 2 10 5 11 1 12 6(13) 3 7 [5] 0 8 4 9 2 10 5 11 1 12 6 13 3(14) 7 [6] 0 8 4 9 2 10 5 11 1 12 6 13 3 14 7(15) [7] 0(16) 8 4 9 2 10 5 11 1 12 6 13 3 14 7 15 [8] 0 16 8(17) 4 9 2 10 5 11 1 12 6 13 3 14 7 15 [9] 0 16 8 17 4(18) 9 2 10 5 11 1 12 6 13 3 14 7 15 [1] 0 16 8 17 4 18 9(19) 2 10 5 11 1 12 6 13 3 14 7 15 [2] 0 16 8 17 4 18 9 19 2(20)10 5 11 1 12 6 13 3 14 7 15 [3] 0 16 8 17 4 18 9 19 2 20 10(21) 5 11 1 12 6 13 3 14 7 15 [4] 0 16 8 17 4 18 9 19 2 20 10 21 5(22)11 1 12 6 13 3 14 7 15 [5] 0 16 8 17 4 18(19) 2 20 10 21 5 22 11 1 12 6 13 3 14 7 15 [6] 0 16 8 17 4 18 19 2(24)20 10 21 5 22 11 1 12 6 13 3 14 7 15 [7] 0 16 8 17 4 18 19 2 24 20(25)10 21 5 22 11 1 12 6 13 3 14 7 15The goal is to be the player with the highest score after the last marble is used up. Assuming the example above ends after the marble numbered 25, the winning score is 23+9=32 (because player 5 kept marble 23 and removed marble 9, while no other player got any points in this very short example game).Here are a few more examples:- 10 players; last marble is worth 1618 points: high score is 8317- 13 players; last marble is worth 7999 points: high score is 146373- 17 players; last marble is worth 1104 points: high score is 2764- 21 players; last marble is worth 6111 points: high score is 54718- 30 players; last marble is worth 5807 points: high score is 37305What is the winning Elf's score?Your puzzle answer was 390093. 
| n_players, n_marbles = [int(el) for i, el in enumerate(read_input('day_09.txt').split()) if i in [0, 6]]
def play_marbles(n_players, n_marbles, display=False):
players = defaultdict(int)
current_player = 1
current_pos = 0
marbles = deque()
marbles.append(0)
for m in range(1, n_marbles + 1):
if display:
print(marbles)
if m % 23 != 0:
# next_pos = (current_pos + 1) % len(marbles)
# marbles = marbles[:next_pos + 1] + [m] + marbles[next_pos + 1:]
# current_pos = next_pos + 1
marbles.rotate(-1)
marbles.append(m)
else:
# players[current_player] += m
# pos_to_remove = (current_pos - 7) % len(marbles)
# players[current_player] += marbles[pos_to_remove]
# marbles = marbles[:pos_to_remove] + marbles[pos_to_remove + 1:]
# current_pos = pos_to_remove
marbles.rotate(7)
players[current_player] += m + marbles.pop()
marbles.rotate(-1)
current_player = (current_player + 1) % n_players
return max(players.values())
play_marbles(9, 25, display=True)
play_marbles(10, 1618)
play_marbles(n_players, n_marbles) | _____no_output_____ | Apache-2.0 | Advent Of Code 2018 mattmcd.ipynb | mattmcd/AdventOfCode2018 |
Part Two Amused by the speed of your answer, the Elves are curious: What would the new winning Elf's score be if the number of the last marble were 100 times larger? Your puzzle answer was 3150377341. Confession: My original solution using Python lists was obviously going to be far too slow for this (took about 20s for Part 1). After thinking for a bit I caved in and checked the Reddit solutions and found out about the deque solution, which is equivalent but uses a better data structure. So today I learned something :) | play_marbles(n_players, n_marbles * 100) | _____no_output_____ | Apache-2.0 | Advent Of Code 2018 mattmcd.ipynb | mattmcd/AdventOfCode2018 |
Day 10: The Stars Align It's no use; your navigation system simply isn't capable of providing walking directions in the arctic circle, and certainly not in 1018.The Elves suggest an alternative. In times like these, North Pole rescue operations will arrange points of light in the sky to guide missing Elves back to base. Unfortunately, the message is easy to miss: the points move slowly enough that it takes hours to align them, but have so much momentum that they only stay aligned for a second. If you blink at the wrong time, it might be hours before another message appears.You can see these points of light floating in the distance, and record their position in the sky and their velocity, the relative change in position per second (your puzzle input). The coordinates are all given from your perspective; given enough time, those positions and velocities will move the points into a cohesive message!Rather than wait, you decide to fast-forward the process and calculate what the points will eventually spell.For example, suppose you note the following points: position= velocity= position= velocity= position= velocity= position= velocity= position= velocity= position= velocity= position= velocity= position= velocity= position= velocity= position= velocity= position= velocity= position= velocity= position= velocity= position= velocity= position= velocity= position= velocity= position= velocity= position= velocity= position= velocity= position= velocity= position= velocity= position= velocity= position= velocity= position= velocity= position= velocity= position= velocity= position= velocity= position= velocity= position= velocity= position= velocity= position= velocity=Each line represents one point. Positions are given as pairs: X represents how far left (negative) or right (positive) the point appears, while Y represents how far up (negative) or down (positive) the point appears.At 0 seconds, each point has the position given. Each second, each point's velocity is added to its position. So, a point with velocity is moving to the right, but is moving upward twice as quickly. If this point's initial position were , after 3 seconds, its position would become .Over time, the points listed above would move like this:Initially: ..................... ..................... ................... ...................... .................. ..................... ..................... ................... ..................... ..................... .................. .................. ..................... .................... .................... ....................After 1 second: ...................... ...................... .................... .................... ................... ...................... ..................... ................... .................... ................. .................... ................... .................... ................... ...................... ......................After 2 seconds: ...................... ...................... ...................... ..................... ............... ...................... .................... .................... .................... .................. ................... ................. ..................... ...................... ...................... ......................After 3 seconds: ...................... ...................... ...................... ...................... ................. ................... ................... ................ ................... ................... ................... 
................. ...................... ...................... ...................... ......................After 4 seconds: ...................... ...................... ...................... ..................... .................. ................... ................ .................. .................... ..................... ................... ................... ..................... ..................... ...................... ...................... After 3 seconds, the message appeared briefly: HI. Of course, your message will be much longer and will take many more seconds to appear.What message will eventually appear in the sky?Your puzzle answer was LXJFKAXA. | stars = np.array(map(lambda s: map(int,
s.lstrip(
'position=<'
).replace(
'> velocity=<', ','
).rstrip(
'>'
).split(
','
)),
read_input('day_10.txt').split('\n')))
def align_stars(stars, t):
pos = stars[:, :2] + stars[:,2:]*t
pos = pos - np.min(pos, axis=0)
cols, rows = np.max(pos, axis=0)
res = None
if cols < 2000 and rows < 2000:
res = np.zeros((rows+1, cols+1))
for p in pos.tolist():
res[p[1], p[0]] = 1
return res
np.mean(stars[:, :2] / stars[:, 2:])
# Trial and error around min distance
_ = plt.spy(align_stars(stars, 10312)) | _____no_output_____ | Apache-2.0 | Advent Of Code 2018 mattmcd.ipynb | mattmcd/AdventOfCode2018 |
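Rather than finding the time by trial and error, one could search for the instant at which the points are most tightly clustered, on the assumption that the message appears when the bounding box is smallest. A sketch of that idea (not the original code), reusing the same `stars` array:

```python
def find_alignment_time(stars, t_max=20000):
    best_t, best_area = 0, np.inf
    for t in range(t_max):
        pos = stars[:, :2] + stars[:, 2:] * t
        width, height = pos.max(axis=0) - pos.min(axis=0)
        area = width * height
        if area < best_area:
            best_t, best_area = t, area
    return best_t

# the returned time can then be passed to align_stars() and plotted as above
```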
Part TwoGood thing you didn't have to wait, because that would have taken a long time - much longer than the 3 seconds in the example above.Impressed by your sub-hour communication capabilities, the Elves are curious: exactly how many seconds would they have needed to wait for that message to appear?Your puzzle answer was 10312. Day 11: Chronal Charge You watch the Elves and their sleigh fade into the distance as they head toward the North Pole.Actually, you're the one fading. The falling sensation returns.The low fuel warning light is illuminated on your wrist-mounted device. Tapping it once causes it to project a hologram of the situation: a 300x300 grid of fuel cells and their current power levels, some negative. You're not sure what negative power means in the context of time travel, but it can't be good.Each fuel cell has a coordinate ranging from 1 to 300 in both the X (horizontal) and Y (vertical) direction. In X,Y notation, the top-left cell is 1,1, and the top-right cell is 300,1.The interface lets you select any 3x3 square of fuel cells. To increase your chances of getting to your destination, you decide to choose the 3x3 square with the largest total power.The power level in a given fuel cell can be found through the following process:- Find the fuel cell's rack ID, which is its X coordinate plus 10.- Begin with a power level of the rack ID times the Y coordinate.- Increase the power level by the value of the grid serial number (your puzzle input).- Set the power level to itself multiplied by the rack ID.- Keep only the hundreds digit of the power level (so 12345 becomes 3; numbers with no hundreds digit become 0).- Subtract 5 from the power level.For example, to find the power level of the fuel cell at 3,5 in a grid with serial number 8:- The rack ID is 3 + 10 = 13.- The power level starts at 13 * 5 = 65.- Adding the serial number produces 65 + 8 = 73.- Multiplying by the rack ID produces 73 * 13 = 949.- The hundreds digit of 949 is 9.- Subtracting 5 produces 9 - 5 = 4.So, the power level of this fuel cell is 4.Here are some more example power levels:- Fuel cell at 122,79, grid serial number 57: power level -5.- Fuel cell at 217,196, grid serial number 39: power level 0.- Fuel cell at 101,153, grid serial number 71: power level 4.Your goal is to find the 3x3 square which has the largest total power. The square must be entirely within the 300x300 grid. Identify this square using the X,Y coordinate of its top-left fuel cell. For example:For grid serial number 18, the largest total 3x3 square has a top-left corner of 33,45 (with a total power of 29); these fuel cells appear in the middle of this 5x5 region: -2 -4 4 4 4 -4 4 4 4 -5 4 3 3 4 -4 1 1 2 4 -3 -1 0 2 -5 -2 For grid serial number 42, the largest 3x3 square's top-left is 21,61 (with a total power of 30); they are in the middle of this region: -3 4 2 2 2 -4 4 3 3 4 -5 3 3 4 -4 4 3 3 4 -3 3 3 3 -5 -1 What is the X,Y coordinate of the top-left fuel cell of the 3x3 square with the largest total power?Your puzzle input is 4151.Your puzzle answer was 20,46. | def grid_power(serial):
X, Y = np.meshgrid(np.arange(300, dtype=np.int)+1, np.arange(300, dtype=np.int)+1)
P = (np.floor((((((X + 10) * Y) + serial) * (X + 10)) % 1000) / 100) - 5).astype(np.int)
return P
def find_max_grid_power(power, size=3):
p_max = 0
x_m = 0
y_m = 0
y_max, x_max = power.shape
    for x in range(x_max - size + 1):
        for y in range(y_max - size + 1):
p = np.sum(power[y:y+size, x:x+size])
if p > p_max:
p_max = p
x_m = x
y_m = y
return p_max, x_m+1, y_m+1, size
find_max_grid_power(grid_power(4151)) | _____no_output_____ | Apache-2.0 | Advent Of Code 2018 mattmcd.ipynb | mattmcd/AdventOfCode2018 |
Part TwoYou discover a dial on the side of the device; it seems to let you select a square of any size, not just 3x3. Sizes from 1x1 to 300x300 are supported.Realizing this, you now must find the square of any size with the largest total power. Identify this square by including its size as a third parameter after the top-left coordinate: a 9x9 square with a top-left corner of 3,5 is identified as 3,5,9.For example:- For grid serial number 18, the largest total square (with a total power of 113) is 16x16 and has a top-left corner of 90,269, so its identifier is 90,269,16.- For grid serial number 42, the largest total square (with a total power of 119) is 12x12 and has a top-left corner of 232,251, so its identifier is 232,251,12.What is the X,Y,size identifier of the square with the largest total power?Your puzzle input is still 4151.Your puzzle answer was 231,65,14. | find_max_grid_power(grid_power(42), 12)
def find_max_grid_power_and_size(serial):
power = grid_power(serial)
p_max = 0
x_m = 0
y_m = 0
s_m = 0
for size in range(1, 301):
p, x, y, s = find_max_grid_power(power, size)
if p > p_max:
p_max = p
x_m = x
y_m = y
s_m = s
return p_max, x_m, y_m, s_m
find_max_grid_power_and_size(4151) | _____no_output_____ | Apache-2.0 | Advent Of Code 2018 mattmcd.ipynb | mattmcd/AdventOfCode2018 |
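The exhaustive search above recomputes every square sum from scratch, which gets slow across 300 sizes; a summed-area table makes each square sum an O(1) lookup. This is only a sketch of that optimisation (not the original code), reusing `grid_power` from above:

```python
def find_max_power_summed_area(serial):
    P = grid_power(serial)
    # summed-area table with a leading row/column of zeros: S[i, j] = sum of P[:i, :j]
    S = np.zeros((301, 301), dtype=np.int64)
    S[1:, 1:] = P.cumsum(axis=0).cumsum(axis=1)
    best_power, best_id = None, None
    for size in range(1, 301):
        # total power of every size x size square via inclusion-exclusion
        totals = (S[size:, size:] - S[:-size, size:]
                  - S[size:, :-size] + S[:-size, :-size])
        y, x = np.unravel_index(np.argmax(totals), totals.shape)
        if best_power is None or totals[y, x] > best_power:
            best_power, best_id = totals[y, x], (x + 1, y + 1, size)
    return best_power, best_id
```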
Day 12: Subterranean Sustainability The year 518 is significantly more underground than your history books implied. Either that, or you've arrived in a vast cavern network under the North Pole.After exploring a little, you discover a long tunnel that contains a row of small pots as far as you can see to your left and right. A few of them contain plants - someone is trying to grow things in these geothermally-heated caves.The pots are numbered, with 0 in front of you. To the left, the pots are numbered -1, -2, -3, and so on; to the right, 1, 2, 3.... Your puzzle input contains a list of pots from 0 to the right and whether they do () or do not (.) currently contain a plant, the initial state. (No other pots currently contain plants.) For example, an initial state of `......` indicates that pots 0, 3, and 4 currently contain plants.Your puzzle input also contains some notes you find on a nearby table: someone has been trying to figure out how these plants spread to nearby pots. Based on the notes, for each generation of plants, a given pot has or does not have a plant based on whether that pot (and the two pots on either side of it) had a plant in the last generation. These are written as `LLCRR =>` N, where L are pots to the left, C is the current pot being considered, R are the pots to the right, and N is whether the current pot will have a plant in the next generation. For example:A note like `.... => .` means that a pot that contains a plant but with no plants within two pots of it will not have a plant in it during the next generation.A note like `. => .` means that an empty pot with two plants on each side of it will remain empty in the next generation.A note like `.. => ` means that a pot has a plant in a given generation if, in the previous generation, there were plants in that pot, the one immediately to the left, and the one two pots to the right, but not in the ones immediately to the right and two to the left.It's not clear what these plants are for, but you're sure it's important, so you'd like to make sure the current configuration of plants is sustainable by determining what will happen after 20 generations.For example, given the following input: initial state: .............. ... => .... => .... => ... => .. => ... => . => .. => . => .. => . => .. => . => . => For brevity, in this example, only the combinations which do produce a plant are listed. (Your input includes all possible combinations.) Then, the next 20 generations will look like this: 1 2 3 0 0 0 0 0: ............................ 1: ................................ 2: ............................ 3: .............................. 4: ............................ 5: .............................. 6: ........................... 7: ............................ 8: ......................... 9: ........................... 10: ......................... 11: ............................. 12: ......................... 13: ............................ 14: ......................... 15: ............................ 16: ......................... 17: ........................... 18: ..................... 19: ................... 20: ....................The generation is shown along the left, where 0 is the initial state. The pot numbers are shown along the top, where 0 labels the center pot, negative-numbered pots extend to the left, and positive pots extend toward the right. Remember, the initial state begins at pot 0, which is not the leftmost pot used in this example.After one generation, only seven plants remain. 
The one in pot 0 matched the rule looking for `....`, the one in pot 4 matched the rule looking for `...`, pot 9 matched `...`, and so on.In this example, after 20 generations, the pots shown as contain plants, the furthest left of which is pot -2, and the furthest right of which is pot 34. Adding up all the numbers of plant-containing pots after the 20th generation produces 325.After 20 generations, what is the sum of the numbers of all pots which contain a plant?Your puzzle answer was 1430. | pots_input = read_input('day_12.txt')
def parse_pots_rule(pots_input):
lines = pots_input.split('\n')
initial_state = lines[0].lstrip('initial state: ')
update_rules = defaultdict(lambda: '.')
for rule in lines[2:]:
r, t = rule.split(' => ')
update_rules[r] = t
return initial_state, update_rules
test_pots_input = """initial state: #..#.#..##......###...###
...## => #
..#.. => #
.#... => #
.#.#. => #
.#.## => #
.##.. => #
.#### => #
#.#.# => #
#.### => #
##.#. => #
##.## => #
###.. => #
###.# => #
####. => #"""
pots_initial_test, pots_rule_test = parse_pots_rule(test_pots_input)
def update_pots(state, rule):
new_state = list('.' * len(state))
for pos in range(2, len(state)-2):
new_state[pos] = rule[state[pos-2:pos+3]]
return ''.join(new_state)
def iterate_pots_update(initial_state, rule, n_gen, display=False, display_count=False):
# Pad initial state
state = ('.' * n_gen) + initial_state + ('.' * n_gen)
pot_numbers = (
range(-n_gen, 0) + range(len(initial_state) + 1)
+ range(len(initial_state)+1, len(initial_state)+1+n_gen)
)
# print(pot_numbers)
for i in range(n_gen):
pot_sum = sum(map(lambda (p, s): p if s == '#' else 0, zip(pot_numbers, state)))
pot_count = sum(map(lambda s: 1 if s == '#' else 0, state))
if display:
print(state)
if display_count and (n_gen - i) < 10:
# Display last 10 generations
print(i, pot_count, pot_sum)
state = update_pots(state, rule)
pot_sum = sum(map(lambda (p, s): p if s == '#' else 0, zip(pot_numbers, state)))
return pot_sum
iterate_pots_update(pots_initial_test, pots_rule_test, 20, display=True)
pots_initial, pots_rule = parse_pots_rule(pots_input)
iterate_pots_update(pots_initial, pots_rule, 20) | _____no_output_____ | Apache-2.0 | Advent Of Code 2018 mattmcd.ipynb | mattmcd/AdventOfCode2018 |
Part TwoYou realize that 20 generations aren't enough. After all, these plants will need to last another 1500 years to even reach your timeline, not to mention your future.After fifty billion (50000000000) generations, what is the sum of the numbers of all pots which contain a plant?Your puzzle answer was 1150000000457. | # See if things settle down after a while
iterate_pots_update(pots_initial, pots_rule, 120, display_count=True) | (111, 23, 3010)
(112, 23, 3033)
(113, 23, 3056)
(114, 23, 3079)
(115, 23, 3102)
(116, 23, 3125)
(117, 23, 3148)
(118, 23, 3171)
(119, 23, 3194)
| Apache-2.0 | Advent Of Code 2018 mattmcd.ipynb | mattmcd/AdventOfCode2018 |
Looks like by generation 120 the number of plants stops changing (it stays at 23) and the pot-number sum keeps increasing by 23 each generation. | # Final pot-number sum after 50,000,000,000 generations
3217 + (23 * (50000000000 - 120)) | _____no_output_____ | Apache-2.0 | Advent Of Code 2018 mattmcd.ipynb | mattmcd/AdventOfCode2018 |
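As a sanity check on this extrapolation: the last generation printed above (119) has pot-number sum 3194, so generation 120 has 3194 + 23 = 3217; growing linearly from there gives 3217 + 23 * (50000000000 - 120) = 1150000000457, which matches the accepted answer quoted above.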
Day 13: Mine Cart Madness A crop of this size requires significant logistics to transport produce, soil, fertilizer, and so on. The Elves are very busy pushing things around in carts on some kind of rudimentary system of tracks they've come up with.Seeing as how cart-and-track systems don't appear in recorded history for another 1000 years, the Elves seem to be making this up as they go along. They haven't even figured out how to avoid collisions yet.You map out the tracks (your puzzle input) and see where you can help.Tracks consist of straight paths (| and -), curves (/ and \\), and intersections (+). Curves connect exactly two perpendicular pieces of track; for example, this is a closed loop: /----\ | | | | \----/Intersections occur when two perpendicular paths cross. At an intersection, a cart is capable of turning left, turning right, or continuing straight. Here are two loops connected by two intersections: /-----\ | | | /--+--\ | | | | \--+--/ | | | \-----/ Several carts are also on the tracks. Carts always face either up (^), down (v), left (). (On your initial map, the track under each cart is a straight path matching the direction the cart is facing.)Each time a cart has the option to turn (by arriving at any intersection), it turns left the first time, goes straight the second time, turns right the third time, and then repeats those directions starting again with left the fourth time, straight the fifth time, and so on. This process is independent of the particular intersection at which the cart has arrived - that is, the cart has no per-intersection memory.Carts all move at the same speed; they take turns moving a single step at a time. They do this based on their current location: carts on the top row move first (acting from left to right), then carts on the second row move (again from left to right), then carts on the third row, and so on. Once each cart has moved one step, the process repeats; each of these loops is called a tick.For example, suppose there are two carts on a straight track: | | | | | v | | | | | v v | | | | | v X | | ^ ^ | ^ ^ | | | | | | | |First, the top cart moves. It is facing down (v), so it moves down one square. Second, the bottom cart moves. It is facing up (^), so it moves up one square. Because all carts have moved, the first tick ends. Then, the process repeats, starting with the first cart. The first cart moves down, then the second cart moves up - right into the first cart, colliding with it! (The location of the crash is marked with an X.) 
This ends the second and last tick.Here is a longer example: /->-\ | | /----\ | /-+--+-\ | | | | | v | \-+-/ \-+--/ \------/ /-->\ | | /----\ | /-+--+-\ | | | | | | | \-+-/ \->--/ \------/ /---v | | /----\ | /-+--+-\ | | | | | | | \-+-/ \-+>-/ \------/ /---\ | v /----\ | /-+--+-\ | | | | | | | \-+-/ \-+->/ \------/ /---\ | | /----\ | /->--+-\ | | | | | | | \-+-/ \-+--^ \------/ /---\ | | /----\ | /-+>-+-\ | | | | | | ^ \-+-/ \-+--/ \------/ /---\ | | /----\ | /-+->+-\ ^ | | | | | | \-+-/ \-+--/ \------/ /---\ | | /----< | /-+-->-\ | | | | | | | \-+-/ \-+--/ \------/ /---\ | | /---<\ | /-+--+>\ | | | | | | | \-+-/ \-+--/ \------/ /---\ | | /--<-\ | /-+--+-v | | | | | | | \-+-/ \-+--/ \------/ /---\ | | /-<--\ | /-+--+-\ | | | | | v | \-+-/ \-+--/ \------/ /---\ | | /<---\ | /-+--+-\ | | | | | | | \-+-/ \-<--/ \------/ /---\ | | v----\ | /-+--+-\ | | | | | | | \-+-/ \<+--/ \------/ /---\ | | /----\ | /-+--v-\ | | | | | | | \-+-/ ^-+--/ \------/ /---\ | | /----\ | /-+--+-\ | | | | X | | \-+-/ \-+--/ \------/ After following their respective paths for a while, the carts eventually crash. To help prevent crashes, you'd like to know the location of the first crash. Locations are given in X,Y coordinates, where the furthest left column is X=0 and the furthest top row is Y=0: 111 0123456789012 0/---\ 1| | /----\ 2| /-+--+-\ | 3| | | X | | 4\-+-/ \-+--/ 5 \------/ In this example, the location of the first crash is 7,3. | tracks_input = read_input('day_13.txt')
tracks_test_input = r"""/->-\
| | /----\
| /-+--+-\ |
| | | | v |
\-+-/ \-+--/
\------/ """
def parse_tracks(tracks):
mine = np.array(map(list, tracks.split('\n')))
return mine
print(parse_tracks(tracks_test_input))
def update_tracks(mine):
new_mine = mine.copy()
rows, cols = mine.shape
cart_syms = '<>^v'
for i in range(rows):
for j in range(cols):
c = mine[i, j]
if c in cart_syms:
# Move cart
if c == '>':
new_mine[i, j] = '-'
c_next = new_mine[i, j+1]
new_mine[i, j+1] = (
'X' if c_next in cart_syms else (
''
)
) | _____no_output_____ | Apache-2.0 | Advent Of Code 2018 mattmcd.ipynb | mattmcd/AdventOfCode2018 |
Day 14: Chocolate Charts You finally have a chance to look at all of the produce moving around. Chocolate, cinnamon, mint, chili peppers, nutmeg, vanilla... the Elves must be growing these plants to make hot chocolate! As you realize this, you hear a conversation in the distance. When you go to investigate, you discover two Elves in what appears to be a makeshift underground kitchen/laboratory.The Elves are trying to come up with the ultimate hot chocolate recipe; they're even maintaining a scoreboard which tracks the quality score (0-9) of each recipe.Only two recipes are on the board: the first recipe got a score of 3, the second, 7. Each of the two Elves has a current recipe: the first Elf starts with the first recipe, and the second Elf starts with the second recipe.To create new recipes, the two Elves combine their current recipes. This creates new recipes from the digits of the sum of the current recipes' scores. With the current recipes' scores of 3 and 7, their sum is 10, and so two new recipes would be created: the first with score 1 and the second with score 0. If the current recipes' scores were 2 and 3, the sum, 5, would only create one recipe (with a score of 5) with its single digit.The new recipes are added to the end of the scoreboard in the order they are created. So, after the first round, the scoreboard is `3, 7, 1, 0`.After all new recipes are added to the scoreboard, each Elf picks a new current recipe. To do this, the Elf steps forward through the scoreboard a number of recipes equal to 1 plus the score of their current recipe. So, after the first round, the first Elf moves forward 1 + 3 = 4 times, while the second Elf moves forward 1 + 7 = 8 times. If they run out of recipes, they loop back around to the beginning. After the first round, both Elves happen to loop around until they land on the same recipe that they had in the beginning; in general, they will move to different recipes.Drawing the first Elf as parentheses and the second Elf as square brackets, they continue this process: (3)[7] (3)[7] 1 0 3 7 1 [0](1) 0 3 7 1 0 [1] 0 (1) (3) 7 1 0 1 0 [1] 2 3 7 1 0 (1) 0 1 2 [4] 3 7 1 [0] 1 0 (1) 2 4 5 3 7 1 0 [1] 0 1 2 (4) 5 1 3 (7) 1 0 1 0 [1] 2 4 5 1 5 3 7 1 0 1 0 1 2 [4](5) 1 5 8 3 (7) 1 0 1 0 1 2 4 5 1 5 8 [9] 3 7 1 0 1 0 1 [2] 4 (5) 1 5 8 9 1 6 3 7 1 0 1 0 1 2 4 5 [1] 5 8 9 1 (6) 7 3 7 1 0 (1) 0 1 2 4 5 1 5 [8] 9 1 6 7 7 3 7 [1] 0 1 0 (1) 2 4 5 1 5 8 9 1 6 7 7 9 3 7 1 0 [1] 0 1 2 (4) *5 1 5 8 9 1 6 7 7 9* 2 The Elves think their skill will improve after making a few recipes (your puzzle input). However, that could take ages; you can speed this up considerably by identifying the scores of the ten recipes after that. For example:- If the Elves think their skill will improve after making 9 recipes, the scores of the ten recipes after the first nine on the scoreboard would be 5158916779 (highlighted in the last line of the diagram).- After 5 recipes, the scores of the next ten would be 0124515891.- After 18 recipes, the scores of the next ten would be 9251071085.- After 2018 recipes, the scores of the next ten would be 5941429882.What are the scores of the ten recipes immediately after the number of recipes in your puzzle input?Your puzzle input is 260321.Your puzzle answer was 9276422810. | def make_recipes(n_recipes=0, find_recipe=None, initial='37'):
recipes = initial
pos_1 = 0
pos_2 = 1
new_recipes = []
if not find_recipe:
loop_cond = (lambda r: len(r) < n_recipes + 10)
else:
loop_cond = (lambda r: not(find_recipe in r[-len(find_recipe)-1:]))
while loop_cond(recipes):
# print(pos_1, pos_2)
new_recipe = (str(int(recipes[pos_1]) + int(recipes[pos_2])))
recipes += new_recipe
pos_1 = (pos_1 + (int(recipes[pos_1]) + 1)) % len(recipes)
pos_2 = (pos_2 + (int(recipes[pos_2]) + 1)) % len(recipes)
if find_recipe:
return len(re.sub(find_recipe + '.*', '', recipes))
else:
return recipes[(n_recipes):(n_recipes+10)]
make_recipes(9)
make_recipes(260321) | _____no_output_____ | Apache-2.0 | Advent Of Code 2018 mattmcd.ipynb | mattmcd/AdventOfCode2018 |
Part TwoAs it turns out, you got the Elves' plan backwards. They actually want to know how many recipes appear on the scoreboard to the left of the first recipes whose scores are the digits from your puzzle input.- 51589 first appears after 9 recipes.- 01245 first appears after 5 recipes.- 92510 first appears after 18 recipes.- 59414 first appears after 2018 recipes.How many recipes appear on the scoreboard to the left of the score sequence in your puzzle input?Your puzzle input is still 260321.Your puzzle answer was 20319117. | make_recipes(None, '59414')
make_recipes(None, '260321') | _____no_output_____ | Apache-2.0 | Advent Of Code 2018 mattmcd.ipynb | mattmcd/AdventOfCode2018 |
PART II: Sentiment Analysis Classifications - Review and Comparison. First, we need word vectors; for simplicity, we use a pre-trained model. Google trained a Word2Vec model on a massive Google News dataset containing over 100 billion words and released [3 million word vectors](https://code.google.com/archive/p/word2vec/Pre-trained_word_and_phrase_vectors) from this model, each with a dimension of 300. Ideally we would use these vectors, but because that word-vector matrix is quite large (3.6 GB), we use a much more manageable matrix trained with [GloVe](https://nlp.stanford.edu/projects/glove/), which generates word vectors with a similar model. This matrix contains 400,000 word vectors, each with a dimension of 50. You can also download the model [here](https://www.kaggle.com/anindya2906/glove6b?select=glove.6B.50d.txt). How word2vec works: take a 3-layer neural network (1 input layer + 1 hidden layer + 1 output layer), feed it a word, and train it to predict its neighbouring words; then remove the output layer and keep the input and hidden layers. Now, when a word from the vocabulary is fed in, the output of the hidden layer is the "word embedding" of that word. Two popular methods of learning word embeddings from text are Word2Vec and GloVe. To get started, let's import the necessary libraries: | import numpy as np
import pandas as pd
import pickle
import gensim, logging
import gensim.models.keyedvectors as word2vec
import matplotlib.pyplot as plt
%matplotlib inline | _____no_output_____ | MIT | Part II - Sentiment Analysis Classifications - Review and Comparison.ipynb | JackShen1/sentimento |
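To make the word2vec description above concrete, here is a tiny, purely illustrative sketch of training such a model with gensim's `Word2Vec` class on a made-up toy corpus (the sentences and hyper-parameters are invented for illustration; in gensim 4.x the `size` argument is called `vector_size`):

```python
from gensim.models import Word2Vec

# toy corpus: each "document" is a list of tokens
toy_corpus = [
    ['the', 'movie', 'was', 'great'],
    ['the', 'film', 'was', 'awful'],
    ['i', 'loved', 'the', 'movie'],
]

# sg=1 selects the skip-gram architecture: predict neighbouring words from the current word
toy_model = Word2Vec(toy_corpus, size=10, window=2, min_count=1, sg=1)

# the learned hidden-layer weights for a word are its "word embedding"
print(toy_model.wv['movie'])
```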
Let's also define a style so that all graphs, images, etc. are centered in the output cells: | from IPython.core.display import HTML
HTML("""
<style>
.output_png {
display: table-cell;
text-align: center;
vertical-align: middle;
}
</style>
""") | _____no_output_____ | MIT | Part II - Sentiment Analysis Classifications - Review and Comparison.ipynb | JackShen1/sentimento |
Next, we will load the sample data we processed in the previous part: | with open('documents.pql', 'rb') as f:
docs = pickle.load(f)
print("Number of documents:", len(docs)) | Number of documents: 38544
| MIT | Part II - Sentiment Analysis Classifications - Review and Comparison.ipynb | JackShen1/sentimento |
Now we will load our GloVe model in word2vec format. The GloVe dump from Stanford's site is in a slightly different format than word2vec, so you first have to convert the GloVe file to word2vec format using the following command in your console:`python -m gensim.scripts.glove2word2vec --input model/glove.6B.50d.txt --output model/glove.6B.50d.w2vformat.txt`After that you can delete the original GloVe file. The next operation may take some time: the model contains 400,000 words, so we get a 400,000 x 50 embedding matrix that holds all the word-vector values. | model = word2vec.KeyedVectors.load_word2vec_format('model/glove.6B.50d.w2vformat.txt', binary=False) | _____no_output_____ | MIT | Part II - Sentiment Analysis Classifications - Review and Comparison.ipynb | JackShen1/sentimento |
Now let's get a list of all the words from our dictionary: | words = list(model.vocab) | _____no_output_____ | MIT | Part II - Sentiment Analysis Classifications - Review and Comparison.ipynb | JackShen1/sentimento |
Just to make sure everything is loaded correctly, we can look at the dimensions of the dictionary list and the embedding matrix: | print(words[:50], "\n\nTotal words:", len(words), "\n\nWord-Vectors shape:", model.vectors.shape) | ['the', ',', '.', 'of', 'to', 'and', 'in', 'a', '"', "'s", 'for', '-', 'that', 'on', 'is', 'was', 'said', 'with', 'he', 'as', 'it', 'by', 'at', '(', ')', 'from', 'his', "''", '``', 'an', 'be', 'has', 'are', 'have', 'but', 'were', 'not', 'this', 'who', 'they', 'had', 'i', 'which', 'will', 'their', ':', 'or', 'its', 'one', 'after']
Total words: 400000
Word-Vectors shape: (400000, 50)
| MIT | Part II - Sentiment Analysis Classifications - Review and Comparison.ipynb | JackShen1/sentimento |
We can also find a word like "football" in our word list and then access the corresponding vector through the embedding matrix: | print(model['football']) | [-1.8209 0.70094 -1.1403 0.34363 -0.42266 -0.92479 -1.3942
0.28512 -0.78416 -0.52579 0.89627 0.35899 -0.80087 -0.34636
1.0854 -0.087046 0.63411 1.1429 -1.6264 0.41326 -1.1283
-0.16645 0.17424 0.99585 -0.81838 -1.7724 0.078281 0.13382
-0.59779 -0.45068 2.5474 1.0693 -0.27017 -0.75646 0.24757
1.0261 0.11329 0.17668 -0.23257 -1.1561 -0.10665 -0.25377
-0.65102 0.32393 -0.58262 0.88137 -0.13465 0.96903 -0.076259
-0.59909 ]
| MIT | Part II - Sentiment Analysis Classifications - Review and Comparison.ipynb | JackShen1/sentimento |
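Since the vectors are loaded into a gensim `KeyedVectors` object, we can also ask for a word's nearest neighbours by cosine similarity as a further sanity check (the exact neighbours returned depend on the GloVe vectors loaded above):

```python
# ten words closest to "football" in the 50-dimensional embedding space
print(model.most_similar('football', topn=10))
```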
Word Average Embedding Model. Well, let's start analyzing our vectors. Our first approach is the **word average embedding model**. The essence of this naive approach is to take the average of all the word vectors in a sentence to obtain a single 50-dimensional vector that represents the tone of the whole sentence; we can then feed that vector to a model and get a quick first result. Strictly speaking the try/except below shouldn't be necessary, but even though the sample was cleaned up in the previous part, a couple of words remain after preprocessing that are missing from the GloVe vocabulary and have to be handled. | def sent_embed(words, docs):
x_sent_embed, y_sent_embed = [], []
count_words, count_non_words = 0, 0
# recover the embedding of each sentence with the average of the vector that composes it
# sent - sentence, state - state of the sentence (pos/neg)
for sent, state in docs:
# average embedding of all words in a sentence
sent_embed = []
for word in sent:
try:
# if word is present in the dictionary - add its vector representation
count_words += 1
sent_embed.append(model[word])
except KeyError:
# if word is not in the dictionary - add a zero vector
count_non_words += 1
sent_embed.append([0] * 50)
# add a sentence vector to the list
x_sent_embed.append(np.mean(sent_embed, axis=0).tolist())
# add a label to y_sent_embed
if state == 'pos': y_sent_embed.append(1)
elif state == 'neg': y_sent_embed.append(0)
print(count_non_words, "out of", count_words, "words were not found in the vocabulary.")
return x_sent_embed, y_sent_embed
x, y = sent_embed(words, docs) | 30709 out of 1802696 words were not found in the vocabulary.
| MIT | Part II - Sentiment Analysis Classifications - Review and Comparison.ipynb | JackShen1/sentimento |
Cosine Similarity. To measure the similarity of two words, we need a way to measure the degree of similarity between their two embedding vectors. Given two vectors $u$ and $v$, cosine similarity is defined as follows:$$\text{cosine\_similarity}(u, v) = \frac{u \cdot v}{||u||_2 \, ||v||_2} = \cos(\theta)$$where: * $u \cdot v$ - the dot product (or inner product) of the two vectors; * $||u||_2$ - the norm (or length) of the vector $u$, defined as $||u||_2 = \sqrt{\sum_{i=1}^{n} u_i^2}$; * $\theta$ - the angle between $u$ and $v$. The similarity depends on this angle: if $u$ and $v$ are very similar, their cosine similarity will be close to 1; if they are dissimilar, it will take a smaller value. **`cosine_similarity()`** below is the helper used to estimate the similarity between word vectors. | def cosine_similarity(u, v):
"""
    Cosine similarity reflects the degree of similarity between u and v
Arguments:
u -- a word vector of shape (n,)
v -- a word vector of shape (n,)
Returns:
cosine_similarity -- the cosine similarity between u and v defined by the formula above.
"""
distance = 0.0
# compute the dot product between u and v
dot = np.dot(u,v)
# compute the L2 norm of u
norm_u = np.sqrt(sum(u**2))
# Compute the L2 norm of v
norm_v = np.sqrt(sum(v**2))
# Compute the cosine similarity defined by formula above
cosine_similarity = dot/(norm_u*norm_v)
return cosine_similarity | _____no_output_____ | MIT | Part II - Sentiment Analysis Classifications - Review and Comparison.ipynb | JackShen1/sentimento |
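A quick hand-checkable sanity test of the helper (the vectors here are made up, not taken from the corpus):

```python
u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])
# dot = 1, ||u|| = 1, ||v|| = sqrt(2), so cos(theta) = 1/sqrt(2) ≈ 0.707
print(cosine_similarity(u, v))
```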
Let's check the cosine similarity on 2 negative sentences: | print("Sentence #5: ", docs[5], "\n\nSentence #7: ", docs[7])
print("\nSentence Embedding #5: ", x[5], "\n\nSentence Embedding #7: ", x[7])
print("cosine_similarity = ", cosine_similarity(np.array(x[5]), np.array(x[7]))) | cosine_similarity = 0.8968743967161681
| MIT | Part II - Sentiment Analysis Classifications - Review and Comparison.ipynb | JackShen1/sentimento |
A value of 0.89 indicates that the sentences are close to each other, and indeed they are. Now let's check two positive sentences: | print("Sentence #1: ", docs[1], "\n\nSentence #4: ", docs[4])
print("\nSentence Embedding #1: ", x[1], "\n\nSentence Embedding #4: ", x[4])
print("cosine_similarity = ", cosine_similarity(np.array(x[1]), np.array(x[4]))) | cosine_similarity = 0.9481159093219256
| MIT | Part II - Sentiment Analysis Classifications - Review and Comparison.ipynb | JackShen1/sentimento |
These sentences are also close to each other. So now let's check sentences with different states: | print("Sentence #1: ", docs[0], "\n\nSentence #5: ", docs[6])
print("cosine_similarity = ", cosine_similarity(np.array(x[0]), np.array(x[6]))) | cosine_similarity = 0.7410293614966914
| MIT | Part II - Sentiment Analysis Classifications - Review and Comparison.ipynb | JackShen1/sentimento |
As we can see, our average embedding still has some trouble separating the two classes by cosine similarity. Split Corpus. Now, for further work, we will split our corpus into training, test, and development sets: | from sklearn.model_selection import train_test_split
# train test
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)
# train dev
x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size=0.2, random_state=42)
print('Length of x_train:', len(x_train), '| Length of y_train:', len(y_train))
print('Length of x_test: ', len(x_test), '| Length of y_test: ', len(y_test))
print('Length of x_val: ', len(x_val), '| Length of y_val: ', len(y_val))
print("Shape of x_train set:", np.array(x_train).shape) | Shape of x_train set: (24668, 50)
| MIT | Part II - Sentiment Analysis Classifications - Review and Comparison.ipynb | JackShen1/sentimento |
Visualization of Classification Report. We will need these helper methods when we start to visualize our results, so we will write them now. The following function takes the output of the `classification_report` function as an argument and plots the results (the function is based on [this](https://stackoverflow.com/a/31689645/14467732) solution). | def plot_classification_report(classification_report, title='Classification Report', cmap='RdBu'):
lines = classification_report.split('\n')
classes, plotMat, support, class_names = [], [], [], []
for line in lines[2 : (len(lines) - 5)]:
t = line.strip().split()
if len(t) < 2: continue
classes.append(t[0])
v = [float(x) for x in t[1: len(t) - 1]]
support.append(int(t[-1]))
class_names.append(t[0])
plotMat.append(v)
xlabel = 'Metrics'
ylabel = 'Classes'
xticklabels = ['Precision', 'Recall', 'F1-score']
yticklabels = ['{0} ({1})'.format(class_names[idx], sup) for idx, sup in enumerate(support)]
figure_width = 25
figure_height = len(class_names) + 7
correct_orientation = False
heatmap(np.array(plotMat), title, xlabel, ylabel, xticklabels, yticklabels, figure_width, figure_height, correct_orientation, cmap=cmap) | _____no_output_____ | MIT | Part II - Sentiment Analysis Classifications - Review and Comparison.ipynb | JackShen1/sentimento |
This function is designed to create a heatmap with text in each cell using the matplotlib library (code based on idea from [here](https://stackoverflow.com/a/16124677/14467732)): | def heatmap(AUC, title, xlabel, ylabel, xticklabels, yticklabels, figure_width=40, figure_height=20, correct_orientation=False, cmap='RdBu'):
fig, ax = plt.subplots()
c = ax.pcolor(AUC, edgecolors='k', linestyle='dashed', linewidths=0.2, cmap=cmap)
# put the major ticks at the middle of each cell
ax.set_yticks(np.arange(AUC.shape[0]) + 0.5, minor=False)
ax.set_xticks(np.arange(AUC.shape[1]) + 0.5, minor=False)
# set tick labels
ax.set_xticklabels(xticklabels, minor=False)
ax.set_yticklabels(yticklabels, minor=False)
# set title and x/y labels
plt.title(title)
plt.xlabel(xlabel)
plt.ylabel(ylabel)
# remove last blank column
plt.xlim( (0, AUC.shape[1]) )
# turn off all the ticks
ax = plt.gca()
for t in ax.xaxis.get_major_ticks():
t.tick1On = False
t.tick2On = False
for t in ax.yaxis.get_major_ticks():
t.tick1On = False
t.tick2On = False
# add color bar
plt.colorbar(c)
# add text in each cell
show_val(c)
# proper orientation (origin at the top left instead of bottom left)
if correct_orientation:
ax.invert_yaxis()
ax.xaxis.tick_top()
# resize
fig = plt.gcf()
fig.set_size_inches(cm_to_inch(figure_width, figure_height)) | _____no_output_____ | MIT | Part II - Sentiment Analysis Classifications - Review and Comparison.ipynb | JackShen1/sentimento |
This function just inserts the text into the cells of the heatmap (idea is taken from [here](https://stackoverflow.com/a/25074150/14467732)): | def show_val(pc, fmt="%.2f", **kw):
pc.update_scalarmappable()
ax = pc.axes
for p, color, value in zip(pc.get_paths(), pc.get_facecolors(), pc.get_array()):
x, y = p.vertices[:-2, :].mean(0)
if np.all(color[:3] > 0.5):
color = (0.0, 0.0, 0.0)
else:
color = (1.0, 1.0, 1.0)
ax.text(x, y, fmt % value, ha="center", va="center", color=color, **kw) | _____no_output_____ | MIT | Part II - Sentiment Analysis Classifications - Review and Comparison.ipynb | JackShen1/sentimento |
The last auxiliary function lets us specify the figure size in centimeters, since matplotlib only provides the `set_size_inches` method; we therefore convert centimeters to inches and pass the result to that method: | def cm_to_inch(*dim):
inch = 2.54
return tuple(i/inch for i in dim[0]) if type(dim[0]) == tuple else tuple(i/inch for i in dim) | _____no_output_____ | MIT | Part II - Sentiment Analysis Classifications - Review and Comparison.ipynb | JackShen1/sentimento |
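For example, a 25 x 14 cm figure converts to roughly 9.84 x 5.51 inches: | print(cm_to_inch(25, 14))  # expected output: roughly (9.84, 5.51)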
**Note:** To better understand the following classifiers, I advise you to read [this article](https://towardsdatascience.com/comparative-study-on-classic-machine-learning-algorithms-24f9ff6ab222) or other similar ones that you will find on the Internet. KNN Model. The K-nearest neighbors (KNN) algorithm is a type of supervised machine learning algorithm. KNN is extremely easy to implement in its most basic form, yet it can handle fairly complex classification tasks. It is a lazy learning algorithm, since it has no specialized training phase: it keeps all of the training data and uses it directly when classifying a new data point. KNN is also non-parametric, which means it makes no assumptions about the underlying data. The KNN algorithm simply calculates the distance from a new data point to all training data points; the distance can be of any type, e.g. Euclidean or Manhattan. It then selects the K nearest data points, where K can be any integer, and finally assigns the new point to the class to which the majority of those K neighbors belong (a minimal from-scratch sketch of this rule is given after the error-rate loop below). Now, let's build a KNN classifier model. First, we import the `KNeighborsClassifier` class and create a KNN classifier object, passing the number of neighbors to `KNeighborsClassifier()`. Then we fit the model on the training set using `fit()` and make predictions on the test set using `predict()`. One way to find a good value for the number of neighbors is to plot the neighbor value against the corresponding error rate on the test set. We will plot the mean error of the test-set predictions for neighbor values from 1 to 24. To do so, let's first calculate the mean error for each neighbor value in that range: | from sklearn.neighbors import KNeighborsClassifier
error = []
# calculating error for neighbor values between 1 and 25
for i in range(1, 25):
knn = KNeighborsClassifier(n_neighbors=i)
knn.fit(x_train, y_train)
pred_i = knn.predict(x_test)
error.append(np.mean(pred_i != y_test)) | _____no_output_____ | MIT | Part II - Sentiment Analysis Classifications - Review and Comparison.ipynb | JackShen1/sentimento |
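Before tuning k further, here is the minimal from-scratch sketch of the KNN prediction rule promised above (a hypothetical illustration with Euclidean distance and a majority vote; the notebook itself relies on scikit-learn's `KNeighborsClassifier`): | from collections import Counter
import numpy as np
def knn_predict_one(x_new, X_train, y_train, k=5):
    # distance from the new point to every training point (Euclidean)
    dists = np.linalg.norm(np.asarray(X_train) - np.asarray(x_new), axis=1)
    nearest = np.argsort(dists)[:k]                 # indices of the k closest training points
    votes = Counter(np.asarray(y_train)[nearest])   # count class labels among the neighbors
    return votes.most_common(1)[0][0]               # majority class wins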
The next step is to plot the error values against neighbor values: | plt.figure(figsize=(10, 5))
plt.plot(range(1, 25), error, color='black', linestyle='dashed', marker='o', markerfacecolor='green', markersize=10)
plt.title('Error Rate vs. Neighbor Value')
plt.xlabel('Neighbor Value')
plt.ylabel('Mean Error') | _____no_output_____ | MIT | Part II - Sentiment Analysis Classifications - Review and Comparison.ipynb | JackShen1/sentimento |
As we can see, k=5 looks like a reasonable choice, although the mean error is still somewhat higher than we would like. | # create KNN Classifier
knn = KNeighborsClassifier(n_neighbors=5, weights='distance')
# train the classifier using the training sets
knn.fit(x_train, y_train)
# predict the response for test dataset
y_pred = knn.predict(x_test)
print("Nearest Neighbors Result (k=5):\n" + '-' * 35)
print("Accuracy Score (k=5):", str(round(knn.score(x_test, y_test) * 100, 2)) + '%')
print("Accuracy (x_train, y_train):", str(round(knn.score(x_train, y_train), 4) * 100) + '%') | Nearest Neighbors Result (k=5):
-----------------------------------
Accuracy Score (k=5): 71.46%
Accuracy (x_train, y_train): 100.0%
| MIT | Part II - Sentiment Analysis Classifications - Review and Comparison.ipynb | JackShen1/sentimento |
The accuracy of the model is acceptable, so we can work with it. Now let's examine the KNN classification results in more detail with the help of the `classification_report` function from `sklearn.metrics`: | from sklearn.metrics import classification_report
print('\nClassification KNN:\n', classification_report(y_test, knn.predict(x_test))) |
Classification KNN:
precision recall f1-score support
0 0.67 0.65 0.66 3292
1 0.74 0.77 0.75 4417
accuracy 0.71 7709
macro avg 0.71 0.71 0.71 7709
weighted avg 0.71 0.71 0.71 7709
| MIT | Part II - Sentiment Analysis Classifications - Review and Comparison.ipynb | JackShen1/sentimento |
Now finally let's visualize our classification report: | plot_classification_report(classification_report(y_test, knn.predict(x_test)), title='KNN Classification Report') | _____no_output_____ | MIT | Part II - Sentiment Analysis Classifications - Review and Comparison.ipynb | JackShen1/sentimento |
Logistic Regression. Logistic regression is a machine learning classification algorithm used to predict the probability of a categorical dependent variable. In logistic regression, the dependent variable is binary, with data coded as 1 (yes, success, etc.) or 0 (no, failure, etc.). In other words, the logistic regression model predicts P(Y=1) as a function of X (a small numeric sketch of this sigmoid mapping is given at the end of this section). | from sklearn.linear_model import LogisticRegression
logit = LogisticRegression(solver='liblinear', multi_class='ovr', n_jobs=1)
logit.fit(x_train, y_train)
print("Accuracy Score:", str(round(logit.score(x_test, y_test) * 100, 2)) + '%')
print('\nClassification Logistic Regression:\n', classification_report(y_test, logit.predict(x_test))) |
Classification Logistic Regression:
precision recall f1-score support
0 0.70 0.64 0.67 3292
1 0.75 0.79 0.77 4417
accuracy 0.73 7709
macro avg 0.72 0.72 0.72 7709
weighted avg 0.73 0.73 0.73 7709
| MIT | Part II - Sentiment Analysis Classifications - Review and Comparison.ipynb | JackShen1/sentimento |
Now let's visualize our classification report: | plot_classification_report(classification_report(y_test, logit.predict(x_test)), title='Logistic Regression Classification Report') | _____no_output_____ | MIT | Part II - Sentiment Analysis Classifications - Review and Comparison.ipynb | JackShen1/sentimento |
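As a closing note on the logistic regression model above: the probability it reports is the logistic (sigmoid) function applied to a linear combination of the features. A minimal numeric sketch of that mapping, with made-up weights for illustration (scikit-learn's `predict_proba` performs the real computation internally): | import numpy as np
def sigmoid(z):
    # map any real-valued score to a probability in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))
# P(Y=1 | x) = sigmoid(w @ x + b); toy weights w, intercept b and feature vector
w, b = np.array([0.8, -0.5]), 0.1
print(sigmoid(w @ np.array([1.2, 0.3]) + b))  # roughly 0.71, the positive-class probability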